The iPhone XS phones and low code programming complexity

Posted on Monday, October 1, 2018 by RICHARD HARRIS, Executive Editor

The new iPhone XS devices are the most complex and sophisticated phones Apple has ever produced. Consider the sheer technology inside: the A12 Bionic processor is the first commercially available 7-nanometer chip for consumers, and it packs 6.9 billion transistors. Its eight-core Neural Engine can crunch five trillion operations every second (compared to the mere 600 billion per second of last year's A11 Bionic).

This sort of processing power hanging out in your pocket is something developers used to dream of, and that's before we even touch on what it does for the AI and ML engines inside applications.

Numerous other technical tidbits underscore the fact that this is one complex beast of a phone: the GPU, the Neural Engine, the Gigabit-class LTE radio, the camera system, and the screen resolutions.

For native app developers, this level of hardware is paired with newly exposed APIs we can call from our code. That complexity raises a question: can this level of capability be harnessed in low-code environments?
 
We recently had a conversation with Ryan Duguid, Chief Evangelist at Nintex, to get his take on the complexities of programming for a chip as sophisticated as the A12 Bionic, and on whether low-code has met its match.
 

ADM: The new A12 Bionic chip inside the latest iPhones is extremely complex and intelligent. Do you think the complexity of the new chip has overpowered what simple programming using low-code instructions is capable of?


Duguid: Not even remotely. At the end of the day, the goal of our entire industry is to continue to push the boundaries of what’s possible with technology, constantly innovating and making use of increased processing power, storage, and data transfer rates. At the same time, the goal we have here at Nintex is to continue to push the boundaries of what’s possible without having to resort to code. Why? Because we firmly believe that the way to help companies become truly digital is to tackle every problem, big and small, and that means putting tools in the hands of people who don’t know how to write code, or of developers who are looking for the most efficient way to solve a problem. As a result, we need to focus our efforts on taking advances in AI, machine learning, blockchain, and other emerging technologies, and figuring out what it looks like to expose them in a no-code or low-code environment. For example, we’ve already made it possible for our customers to leverage sentiment from Azure Cognitive Services to bring intelligence to processes targeting improvements in customer service, or to use Google’s Vision API to help field service workers identify faulty equipment in need of repair.
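As a hedged illustration of the kind of integration Duguid describes, the logic a low-code action might wrap around an Azure Text Analytics sentiment result could look like the sketch below. The payload shape mirrors the v3 sentiment response; the thresholds, routing labels, and function name are hypothetical, not Nintex's actual implementation.

```python
# Sketch: route a customer-service message based on a sentiment payload.
# The input imitates the shape of an Azure Text Analytics v3 sentiment
# response; the threshold and routing labels are hypothetical.

def route_by_sentiment(response: dict, escalate_below: float = 0.4) -> str:
    """Return a routing decision for the first analyzed document."""
    doc = response["documents"][0]
    scores = doc["confidenceScores"]
    # Escalate clearly negative feedback; otherwise acknowledge it normally.
    if doc["sentiment"] == "negative" or scores["positive"] < escalate_below:
        return "escalate-to-agent"
    return "auto-acknowledge"


sample = {
    "documents": [
        {
            "id": "1",
            "sentiment": "negative",
            "confidenceScores": {"positive": 0.05, "neutral": 0.15, "negative": 0.80},
        }
    ]
}

print(route_by_sentiment(sample))  # escalate-to-agent
```

In a low-code platform this branch would be drawn as a visual rule rather than written by hand, which is precisely the point: the service call and the response parsing are hidden behind a designer-friendly action.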

 

ADM: Low-code programming's reach extends only as far as the platform it interfaces with. With AI and ML just getting off the ground, what challenges do low-code platforms face when interfacing with AI systems?

 

Duguid: Had you put that to me back in 1995, I’d have agreed with you, but I think you’d be pleasantly surprised at just how far low-code or visual programming platforms have come. To that end, I’d argue that the reach of these platforms is impressive and only gets better as they integrate with the ever-increasing range of SaaS platforms. This work is accelerating thanks to the broad adoption of standards like REST, JSON, and OpenAPI, and thankfully, the leading providers of AI and ML services are adhering to these standards. That said, even with standards in place, one of the bigger challenges with these advanced services is the sophistication and dynamic nature of the output they provide. As an example, the Google Vision API provides a powerful set of image analysis capabilities including optical character recognition (OCR), handwriting recognition, logo detection, product search, face detection, landmark detection, and general image attributes. Depending on what you ask the API to tell you about an image, your results will vary significantly. A search for faces, for example, may return a collection of faces, their x,y coordinates in the image, the position of key facial features like eyes, eyebrows, and mouth, whether the person is happy or sad, whether they are wearing a hat…you name it. So when you’re making this kind of capability available with low to no code, you have to factor that sort of thing into the design-time experience.
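The variable output Duguid describes is visible in the Vision API's face-detection results: which fields appear depends on the request and the image, so any consumer has to tolerate missing keys. A minimal sketch of that kind of defensive parsing follows; the sample payload imitates the documented `faceAnnotations` shape, while the summary format is our own invention.

```python
# Sketch: summarize a Google Vision API face-detection response without
# assuming every field is present. The payload below imitates the
# documented faceAnnotations shape; the summary format is hypothetical.

def summarize_faces(response: dict) -> list:
    summaries = []
    for res in response.get("responses", []):
        for face in res.get("faceAnnotations", []):
            vertices = face.get("boundingPoly", {}).get("vertices", [])
            summaries.append({
                # Vertices may omit x or y when a coordinate is 0.
                "box": [(v.get("x", 0), v.get("y", 0)) for v in vertices],
                "joy": face.get("joyLikelihood", "UNKNOWN"),
                "headwear": face.get("headwearLikelihood", "UNKNOWN"),
            })
    return summaries


sample = {
    "responses": [{
        "faceAnnotations": [{
            "boundingPoly": {"vertices": [{"x": 10, "y": 20}, {"x": 90, "y": 20},
                                          {"x": 90, "y": 110}, {"x": 10, "y": 110}]},
            "joyLikelihood": "VERY_LIKELY",
            # headwearLikelihood deliberately omitted: fields come and go.
        }]
    }]
}

print(summarize_faces(sample))
```

A low-code design-time experience has to do the equivalent of these `.get()` defaults for the user, surfacing whichever fields a given request can produce without breaking when some of them are absent.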

 

ADM: Microsoft recently purchased Lobe to help make AI programming available to everyone. What is the benefit for developers here?

 

Duguid: Like all low-code or visual programming platforms, Lobe is designed to make some aspect of delivering software faster, more maintainable, and, in some cases, achievable with lower-cost resources. In this case, Lobe is designed to make it possible for mere mortals to incorporate advanced image and audio analysis capabilities into their own apps, and that’s a win, because this kind of capability is typically reserved for high-end professional developers. At the same time, it’s a common misconception that low-code or no-code platforms are designed only for non-developers, and that’s simply not the case. At the end of the day, professional developers can benefit just as much from this kind of capability, as it takes the grunt work out of performing tasks that are already well understood, freeing them up to focus on the problems that are unique to the solution they are tasked with delivering. On top of accelerating the development of the initial solution, it also makes the finished product far more maintainable, delivering a level of agility that is much harder to achieve when everything is custom code.
