pith. machine review for the scientific record.

arxiv: 2604.10730 · v1 · submitted 2026-04-12 · 💻 cs.CY · cs.AI

Recognition: unknown

Perceived Importance of Cognitive Skills Among Computing Students in the Era of AI

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 15:31 UTC · model grok-4.3

classification 💻 cs.CY cs.AI
keywords cognitive skills · generative AI · computing education · student perceptions · AI integration · skill development · curricular design

The pith

Computing students expect all 11 cognitive skills to lose importance as AI integrates more deeply into their work.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper reports results from a survey of undergraduate computing students who rated the importance of 11 cognitive skills in three time periods: before widespread AI, in current mixed use, and in a future with heavier AI integration. Students consistently assigned lower future importance to every skill, including analysis, synthesis, and evaluation. This perception matters because it could encourage students to invest less effort in building those abilities, prompting educators to add explicit countermeasures against cognitive offloading from AI tools.

Core claim

A researcher-monitored survey of computing undergraduates found that students rate all 11 cognitive skills as diminishing in importance when moving from past to present to future conditions of increasing AI adoption. The authors conclude that this expected decline signals a need for educational interventions that deliberately reinforce cognitive skill development inside learning environments now shaped by AI assistance.

What carries the argument

The three-frame temporal rating task (past, present, future) applied to 11 cognitive skills in a quantitative survey of computing undergraduates.

If this is right

  • Curricular design must add explicit activities that reinforce cognitive skills even when students have ready access to AI tools.
  • Learning environments need safeguards against inadvertent drops in cognitive involvement caused by routine AI offloading.
  • Workforce preparation should account for students' lowered valuation of core thinking abilities when planning future course content.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • If the expectations prove accurate, programs may shift emphasis toward teaching effective AI collaboration rather than independent cognitive exercises.
  • The findings invite direct tests of whether real-world job outcomes later confirm or contradict the predicted decline in skill value.
  • This perception pattern could influence how students allocate study time and which electives they choose during their degrees.

Load-bearing premise

That students' self-reported expectations of future skill importance will match the actual value those skills retain in AI-integrated computing jobs, and that the surveyed group is representative of the broader population of computing undergraduates.

What would settle it

A longitudinal follow-up that measures actual on-the-job performance and skill usage for the same students after five years in AI-heavy roles, checking whether those who lowered their emphasis on cognitive skills show measurable deficits.

Figures

Figures reproduced from arXiv: 2604.10730 by Erta Cenko, Laura Melissa Cruz Castro, Neha Rani.

Figure 1
Figure 1: Study procedure. view at source ↗
Figure 2
Figure 2: Cognitive skills importance ratings with heatmap color coding. view at source ↗
read the original abstract

The availability and increasing integration of generative AI tools have transformed computing education. While AI in education presents opportunities, it also raises new concerns about how these powerful know-it-all AI tools, which are becoming widespread, impact cognitive skill development among students. Cognitive skills are essential for academic success and professional competence. It relates to the ability to understand, analyze, evaluate, synthesize information and more. The extensive use of these AI tools can aid in cognitive offloading, freeing up cognitive resources to be used in other tasks and activities. However, cognitive offloading may inadvertently lead to diminishing cognitive involvement in learning and related activities when using AI tools. Understanding cognitive skills' impact in the era of AI is essential to align curricular design with evolving workforce demands and changing work environment and processes. To address this concern and to develop an understanding of how the importance of cognitive skills changes with increasing integration of AI, we conducted a researcher-monitored and regulated quantitative survey of undergraduate computing students. We examined students' perceptions of cognitive skills across three temporal frames: prior to widespread AI adoption (past), current informal and formal use of AI in learning contexts (present), and future with even more AI integration in professional environments (future). In the study, students rated the importance of 11 cognitive skills. Our analysis reveals that students expect all 11 cognitive skills to be of diminishing importance in the future, when AI use and integration increases. Our findings highlight the need for educational interventions that explicitly reinforce cognitive skill development within learning environments that are now often relying on AI.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 3 minor

Summary. The paper reports results from a quantitative survey of undergraduate computing students assessing their perceptions of the importance of 11 cognitive skills across three temporal frames: pre-AI (past), current AI use in learning (present), and increased future AI integration. The central claim is that students rate all 11 skills as less important in the future frame than in the present, highlighting risks of cognitive offloading and the need for educational interventions to reinforce skill development.

Significance. If the survey methodology and analysis are robust, the descriptive finding on student expectations could inform computing education curricula by drawing attention to perceived devaluation of cognitive skills amid AI adoption. However, because the result rests entirely on self-reported perceptions rather than objective performance measures or longitudinal tracking, its broader significance for workforce demands or curricular change remains limited and primarily suggestive rather than prescriptive.

major comments (3)
  1. [Methods] Methods section: The description of the survey provides no sample size, response rate, participant demographics, recruitment method, or inclusion criteria. These details are load-bearing for the claim that 'students expect all 11 cognitive skills to be of diminishing importance,' as without them it is impossible to evaluate selection bias or the extent to which the sample represents computing undergraduates.
  2. [Results] Results section (and associated tables/figures): No statistical tests, confidence intervals, or effect sizes are reported for the observed declines in importance ratings across the 11 skills. The universal-decline claim therefore rests on raw mean differences alone; without significance testing it is unclear whether all 11 differences are reliable or whether some are within measurement noise.
  3. [Introduction / Survey Design] The 11 cognitive skills are introduced without explicit definitions, operationalizations, or validation (e.g., factor analysis or pilot testing). This undermines interpretation of the future-frame ratings, because students may have understood the items differently across temporal frames.
minor comments (3)
  1. [Abstract] Abstract: The phrase 'researcher-monitored and regulated quantitative survey' is vague; replace with concrete details on administration mode and oversight.
  2. [Introduction] The manuscript should cite prior literature on cognitive offloading and AI in education (e.g., studies on metacognition and tool use) to situate the 11-skill list.
  3. [Results] The figure or table presenting the 11 skills and their mean ratings across frames would benefit from error bars or standard deviations to convey variability.

Simulated Authors' Rebuttal

3 responses · 0 unresolved

We thank the referee for their constructive and detailed feedback, which identifies key areas where the manuscript can be strengthened for clarity and rigor. We address each major comment point by point below, indicating the specific revisions we will make to the manuscript.

read point-by-point responses
  1. Referee: [Methods] Methods section: The description of the survey provides no sample size, response rate, participant demographics, recruitment method, or inclusion criteria. These details are load-bearing for the claim that 'students expect all 11 cognitive skills to be of diminishing importance,' as without them it is impossible to evaluate selection bias or the extent to which the sample represents computing undergraduates.

    Authors: We agree that these methodological details are essential for assessing the validity and generalizability of the findings. Their omission in the initial submission was an oversight. In the revised manuscript, we will expand the Methods section to explicitly report the sample size, response rate, participant demographics (including year of study, gender, and institutional affiliation), recruitment method (e.g., via departmental channels and student organizations), and inclusion criteria (current undergraduate computing majors). This addition will directly address concerns regarding selection bias and sample representativeness. revision: yes

  2. Referee: [Results] Results section (and associated tables/figures): No statistical tests, confidence intervals, or effect sizes are reported for the observed declines in importance ratings across the 11 skills. The universal-decline claim therefore rests on raw mean differences alone; without significance testing it is unclear whether all 11 differences are reliable or whether some are within measurement noise.

    Authors: We concur that statistical testing is required to substantiate the observed patterns. We will revise the Results section to include paired statistical comparisons (e.g., t-tests or non-parametric equivalents) between the present and future frames for each of the 11 skills. The revised text will report p-values, effect sizes, and confidence intervals for the mean differences, allowing readers to evaluate the reliability of the declines beyond descriptive means alone. revision: yes
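The paired present-versus-future comparison the authors commit to can be sketched in a few lines. This is a minimal illustration on synthetic data; the sample size, rating distributions, and the magnitude of the decline are assumptions for demonstration, not values from the paper.

```python
# Sketch of a paired present-vs-future comparison for one skill's
# Likert ratings. All data are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_students = 80  # assumed sample size, not from the paper

# Synthetic 1-10 Likert ratings: future ratings shifted downward.
present = np.clip(rng.normal(7.5, 1.5, n_students).round(), 1, 10)
future = np.clip((present - rng.normal(1.0, 1.0, n_students)).round(), 1, 10)
diff = present - future

# Paired t-test (parametric) and Wilcoxon signed-rank (non-parametric fallback).
t_stat, t_p = stats.ttest_rel(present, future)
w_stat, w_p = stats.wilcoxon(present, future)

# Effect size: Cohen's d for paired samples (dz = mean diff / SD of diffs).
dz = diff.mean() / diff.std(ddof=1)

# 95% confidence interval for the mean difference.
ci = stats.t.interval(0.95, df=n_students - 1,
                      loc=diff.mean(), scale=stats.sem(diff))

print(f"mean decline {diff.mean():.2f}, t={t_stat:.2f} (p={t_p:.2g}), "
      f"Wilcoxon p={w_p:.2g}, dz={dz:.2f}, "
      f"95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

Running this once per skill (with a multiple-comparison correction across the 11 tests) would give exactly the p-values, effect sizes, and intervals the referee asks for.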

  3. Referee: [Introduction / Survey Design] The 11 cognitive skills are introduced without explicit definitions, operationalizations, or validation (e.g., factor analysis or pilot testing). This undermines interpretation of the future-frame ratings, because students may have understood the items differently across temporal frames.

    Authors: We acknowledge the value of explicit definitions and operationalizations for consistent respondent interpretation. In the revised manuscript, we will add clear definitions for each of the 11 skills in the Introduction, grounded in established computing education and cognitive psychology literature. We will also describe the survey item development process and any pilot testing used to refine wording and ensure the temporal frames were understood consistently. While a full factor analysis was not part of the original design, the added details will improve interpretability. revision: partial

Circularity Check

0 steps flagged

No circularity: purely descriptive survey reporting

full rationale

The paper's core finding—that students rated all 11 cognitive skills lower in the future frame than in the present—is obtained by direct collection and pairwise comparison of Likert-scale responses across three temporal conditions. No equations, fitted parameters, predictive models, or self-citations of theorems are used to generate this result; the claim follows immediately from tabulating and averaging the raw survey data. The methodology section describes standard survey administration and basic statistical tests with no load-bearing self-referential steps. This is a self-contained empirical report whose validity rests on sampling and response quality rather than any internal definitional loop.
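The tabulate-and-average step described above is simple enough to sketch directly. The skill names, respondent count, and per-frame rating means below are illustrative assumptions, not data from the paper.

```python
# Minimal sketch of the tabulation behind the core finding: mean rating
# per skill per temporal frame, then the present-to-future difference.
# All numbers are synthetic; only the computation mirrors the paper.
import numpy as np

rng = np.random.default_rng(0)
skills = ["critical thinking", "analysis", "synthesis"]  # subset of the 11
frames = ["past", "present", "future"]
n = 50  # synthetic respondents

# Assumed central tendency per frame, encoding the reported decline.
base = {"past": 8.0, "present": 7.0, "future": 5.5}

means = {}
for frame in frames:
    for skill in skills:
        ratings = np.clip(rng.normal(base[frame], 1.5, n).round(), 1, 10)
        means[(frame, skill)] = ratings.mean()

for skill in skills:
    decline = means[("present", skill)] - means[("future", skill)]
    print(f"{skill}: present {means[('present', skill)]:.2f} -> "
          f"future {means[('future', skill)]:.2f} (decline {decline:.2f})")
```

Nothing in this pipeline feeds its own output back in, which is why the circularity check comes up empty: the claim is read off the table of means.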

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The result rests on standard assumptions of survey research rather than new axioms or entities.

axioms (2)
  • domain assumption Respondents report their perceptions honestly and accurately
    The study treats self-ratings as valid measures of perceived importance.
  • domain assumption The 11 cognitive skills listed are representative and stable constructs
    The paper assumes these skills are well-defined and relevant across time periods.

pith-pipeline@v0.9.0 · 5579 in / 1147 out tokens · 63505 ms · 2026-05-10T15:31:24.349091+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

22 extracted references · 1 canonical work page · 1 internal anchor

  1. [1]

    Software engineering by and for humans in an ai era,

    S. Abrahão, J. Grundy, M. Pezzè, M.-A. Storey, and D. A. Tamburri, “Software engineering by and for humans in an ai era,” ACM Transactions on Software Engineering and Methodology, vol. 34, no. 5, pp. 1–46, 2025

  2. [2]

    The application of artificial intelligence in engineering education: A systematic review,

    C. Liu, G.-C. Wang, and H.-F. Wang, “The application of artificial intelligence in engineering education: A systematic review,” IEEE Access, 2025

  3. [3]

    The engineer of 2020 visions of engineering in the new century,

    S. W. Director, “The engineer of 2020 visions of engineering in the new century,” 2004

  4. [4]

    Use of case studies in engineering education: Assessment of changes in cognitive skills,

    C. S. Sankar, V. Varma, and P. Raju, “Use of case studies in engineering education: Assessment of changes in cognitive skills,” Journal of Professional Issues in Engineering Education and Practice, vol. 134, no. 3, pp. 287–296, 2008

  5. [5]

    Modeling of student academic achievement in engineering education using cognitive and non-cognitive factors,

    B. A. Al-Sheeb, A. Hamouda, and G. M. Abdella, “Modeling of student academic achievement in engineering education using cognitive and non-cognitive factors,” Journal of Applied Research in Higher Education, vol. 11, no. 2, pp. 178–198, 2019

  6. [6]

    Cognitive Skill - an overview — ScienceDirect Topics — sciencedirect.com,

    “Cognitive Skill - an overview — ScienceDirect Topics — sciencedirect.com,” https://www.sciencedirect.com/topics/psychology/cognitive-skill, [Accessed 21-01-2026]

  7. [7]

    A study of the relationship between learning styles and cognitive abilities in engineering students,

    E. Hames and M. Baker, “A study of the relationship between learning styles and cognitive abilities in engineering students,” European Journal of Engineering Education, vol. 40, no. 2, pp. 167–185, 2015

  8. [8]

    Cognitive skills development among undergraduate engineering students,

    H. Smith and B. M. Frank, “Cognitive skills development among undergraduate engineering students,” in 2020 ASEE Virtual Annual Conference Content Access, 2020

  9. [9]

    Defining Critical Thinking — criticalthinking.org,

    M. Scriven and R. Paul, “Defining Critical Thinking — criticalthinking.org,” https://www.criticalthinking.org/pages/defining-critical-thinking/766, 2007, [Accessed 21-01-2026]

  10. [10]

    Teaching critical thinking and problem solving skills,

    L. G. Snyder and M. J. Snyder, “Teaching critical thinking and problem solving skills,” The Journal of Research in Business Education, vol. 50, no. 2, p. 90, 2008

  11. [11]

    A literature review of critical thinking in engineering education,

    A. Ahern, C. Dominguez, C. McNally, J. J. O’Sullivan, and D. Pedrosa, “A literature review of critical thinking in engineering education,” Studies in Higher Education, vol. 44, no. 5, pp. 816–828, 2019

  12. [12]

    Navigating change: 2018 business council skills survey,

    M. Shepell, “Navigating change: 2018 business council skills survey,” 2018

  13. [13]

    Critical thinking in the university curriculum–the impact on engineering education,

    A. Ahern, T. O’Connor, G. McRuairc, M. McNamara, and D. O’Donnell, “Critical thinking in the university curriculum–the impact on engineering education,” European Journal of Engineering Education, vol. 37, no. 2, pp. 125–132, 2012

  14. [14]

    Ai tools in society: Impacts on cognitive offloading and the future of critical thinking,

    M. Gerlich, “Ai tools in society: Impacts on cognitive offloading and the future of critical thinking,” Societies, vol. 15, no. 1, p. 6, 2025

  15. [15]

    Exploring the nature of cognitive flexibility,

    T. Ionescu, “Exploring the nature of cognitive flexibility,” New ideas in psychology, vol. 30, no. 2, pp. 190–200, 2012

  16. [16]

    Adapting to the ai disruption: Reshaping the it landscape and educational paradigms,

    M. Ozer, Y. Kose, G. Kucukkaya, A. Mukasheva, and K. Ciris, “Adapting to the ai disruption: Reshaping the it landscape and educational paradigms,” in World Congress in Computer Science, Computer Engineering & Applied Computing. Springer, 2024, pp. 366–374

  17. [17]

    Using github copilot to solve simple programming problems,

    M. Wermelinger, “Using github copilot to solve simple programming problems,” in Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1, 2023, pp. 172–178

  18. [18]

    Generative ai in computing education: Perspectives of students and instructors,

    C. Zastudil, M. Rogalska, C. Kapp, J. Vaughn, and S. MacNeil, “Generative ai in computing education: Perspectives of students and instructors,” in 2023 IEEE Frontiers in Education Conference (FIE). IEEE, 2023, pp. 1–9

  19. [19]

    Programming is hard-or at least it used to be: Educational opportunities and challenges of ai code generation,

    B. A. Becker, P. Denny, J. Finnie-Ansley, A. Luxton-Reilly, J. Prather, and E. A. Santos, “Programming is hard-or at least it used to be: Educational opportunities and challenges of ai code generation,” in Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1, 2023, pp. 500–506

  20. [20]

    How do we respond to generative ai in education? open educational practices give us a framework for an ongoing process,

    A. Mills, M. Bali, and L. Eaton, “How do we respond to generative ai in education? open educational practices give us a framework for an ongoing process,” Journal of Applied Learning and Teaching, vol. 6, no. 1, pp. 16–30, 2023

  21. [21]

    Students’ reliance on ai in higher education: identifying contributing factors,

    G. Pitts, N. Rani, W. Mildort, and E.-M. Cook, “Students’ reliance on ai in higher education: identifying contributing factors,” in International Conference on Human-Computer Interaction. Springer, 2025, pp. 86–97

  22. [22]

    Trust and Reliance on AI in Education: AI Literacy and Need for Cognition as Moderators

    G. Pitts, N. Rani, and W. Mildort, “Trust and reliance on ai in education: Ai literacy and need for cognition as moderators,” arXiv preprint arXiv:2604.01114, 2026