AI Salary Drops and Large Transformers
Estimated Reading Time: 6 minutes
Taking a brief break from medicine, we discuss some recent trends in artificial intelligence. AI salaries have dropped, possibly due to open source democratization. At the same time, DeepMind and OpenAI have demonstrated dramatic advances by using AI systems to solve competition programming and mathematics problems. AI is not a one-size-fits-all term, with both increasingly commoditized basic skills and ever more sophisticated large scale AI models emerging simultaneously.
AI Salaries are Dropping
A recent report in IEEE Spectrum notes that AI salaries are dropping after peaking in 2020.
According to the article, the most valuable programming skills are now familiarity with newer tools like the open source enterprise search platform Solr or the unit testing framework Mockito. Systems software and production engineering remain challenging and sophisticated disciplines, so these shifts shouldn't be too surprising.
Years of improvements in open source infrastructure through tools like Keras and DeepChem, alongside expanding AI education, have made it progressively easier to carry out simple machine learning tasks, so the drop in salaries follows naturally. At the same time, the term "AI engineer" covers a broad range of skill levels, from bootcamp graduates to seasoned production engineers and prominent researchers. New fields in AI like differentiable physics require a broad understanding of multiple disciplines and are much further from being democratized.
Competitive Programming with AlphaCode
DeepMind put out an impressive post detailing its new AI system AlphaCode, which solves programming competition problems. The architecture diagram above details the structure of AlphaCode, which pretrains a transformer on GitHub code and fine-tunes it on code competition problems. Potential solutions are sampled from the model and filtered down to a small set of candidates, which are then evaluated.
The figure above from DeepMind illustrates the complexity of both the input and output spaces. An abstract natural language description of the programming problem is transformed into functional code by the system. The final system is able to match the median competition participant on DeepMind's evaluation set.
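The sample-and-filter stage can be sketched in plain Python: generate many candidate programs, execute each against the problem's example input/output pairs, and keep only those that pass. This is a minimal sketch, not DeepMind's implementation; the `solve` entry point and the `exec`-based harness are illustrative assumptions (the real system samples from a large transformer and runs untrusted code in a sandbox).

```python
def run_candidate(code, test_input):
    """Execute a candidate program and call its (assumed) 'solve' entry point."""
    namespace = {}
    exec(code, namespace)  # in practice this would run in a sandbox
    return namespace["solve"](test_input)

def filter_candidates(candidates, examples):
    """Keep only candidates that reproduce every example input/output pair."""
    survivors = []
    for code in candidates:
        try:
            if all(run_candidate(code, x) == y for x, y in examples):
                survivors.append(code)
        except Exception:
            continue  # crashing candidates are simply discarded
    return survivors

# Toy "sampled" candidates and example I/O pairs for a trivial problem.
candidates = [
    "def solve(x): return x + 1",
    "def solve(x): return x * 2",
]
examples = [(1, 2), (4, 8)]
print(filter_candidates(candidates, examples))  # only the doubling program passes both examples
```

The filtering step is what makes massive sampling practical: most sampled programs are wrong, but checking them against the provided examples is cheap compared to generating them.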
Intriguingly, AlphaCode’s performance is not explained by any one innovation. The figure above illustrates how many different innovations combine to improve AlphaCode’s performance. It is useful to compare the complexity of the figure above with the report that AI salaries are dropping from our previous section. Training a model of the scale and complexity of AlphaCode is a prohibitively challenging task that only a few entities globally can take on. At the same time, simple AI tasks like running a DeepChem graph convolutional network are routinely accomplished by talented high school students (notebook). I anticipate we will see a bifurcation in AI job market classifications to distinguish between these two very different skill levels.
Competition Mathematics from OpenAI
In a related advance, OpenAI announced its ability to solve math competition challenges using its latest AI system (source). A large language model is used to generate proofs in Lean (a theorem proving language) given high school math competition problem statements. OpenAI’s advance again highlights the capabilities of large modern AI systems to solve sophisticated challenges.
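To give a flavor of the target format, here is a toy Lean 3 theorem of the sort such a system emits: a formal statement paired with a machine-checkable proof term. The statement and proof are illustrative, far simpler than competition problems, and not drawn from OpenAI's benchmark.

```lean
-- A toy formal statement: addition on the natural numbers commutes.
-- The proof term `nat.add_comm a b` is checked by Lean's kernel.
theorem toy_comm (a b : ℕ) : a + b = b + a := nat.add_comm a b
```

Because every proof is verified by Lean's kernel, the language model can propose proof steps freely; only steps that actually type-check survive, much like AlphaCode's filtering of sampled programs.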
Weekly News Roundup
https://www.quantamagazine.org/mathematicians-prove-30-year-old-andre-oort-conjecture-20220203/: Mathematicians have solved the André-Oort conjecture.
https://spectrum.ieee.org/fault-tolerant-quantum-computing: The race is on to build a scalable fault tolerant quantum computer.
Feedback and Comments
Please feel free to email me directly (email@example.com) with your feedback and comments!
Deep Into the Forest is a newsletter by Deep Forest Sciences, Inc. We’re a deep tech R&D company building an AI-powered scientific discovery engine. Deep Forest Sciences also leads the development of the open source DeepChem ecosystem. Partner with us to apply our foundational AI technologies to hard real-world problems. Get in touch with us at firstname.lastname@example.org!
Author: Bharath Ramsundar, Ph.D.
Editor: Sandya Subramanian, Ph.D.