Did anybody try or get the new iPad Air?

I’m just curious. I’m on an ageing iPad Pro 10.5 and I can’t use many of the synths (and FX) I’d like to unless I bounce them to audio (you know, synths like Model 15 or Model D).
Now, I’m pretty sure NS2, like others, uses only one core (correct me if I’m wrong, but so far that’s what I’ve been reading all over the net), so I wouldn’t go for the current Pro; that seems silly when we get the A14 in the Air. So my only concern (performance-wise) is RAM, and there the Air is the same as my Pro, that is 4 GB (I bet the Air’s is faster), so I should be OK unless the new system eats up more RAM in future.
Basically I just want to see/hear from somebody what it can handle, so I could reproduce the same thing on my iPad and see how much CPU difference to expect. Anyway, I guess any increase would be good, so I should probably just shut up and get one :DDD

And btw, what are your thoughts on the new Mac line with the M1? NS2 on a laptop/desktop, anyone? Technology-wise I think it might be a great shift, but there is one obvious problem, especially for pro users: a max of 16 GB RAM.

Hope you’re all having a great time and are not too distracted by all this mess that’s been happening out there ;)


Comments

  • I was actually thinking of popping for a new iPad Pro 11-inch, thinking there would be a huge performance increase in DAWs. Now you have me wondering. So if an app only utilizes one core, that would not translate into lots more AUv3s or lower latency settings? Or the ability to run more IAA apps without choking? Can anyone explain what all that processing power does for existing music apps? In particular NS2.

  • Hmm this is an interesting thing to think about. Anyone with the knowledge care to chime in here?

  • Single-core benchmarks for iPads. I’m not entirely sure which generation the new ones are in, but if you know the name of the processor, you can probably get a close enough idea of its power compared to your old device.

    https://browser.geekbench.com/ios_devices/11

  • Thanks @Stiksi . That link clarifies a lot. I can see why @Cray23 came to prefer the new iPad Air over the Pro; that’s where I am leaning now also. So that leaves the question of why apps like NS2 can only use one core. If a device has 4 cores and NS2 is running on one of them, and I then launch another app, say AUM, would iOS automatically run it on another core, or might it run on the same core as NS2 and slow it down? And what happens if you launch NS2 from within AUM? I.e., is core utilization load-balanced by the OS? And is it done at the app level, or at the task level within an app? I realize these are pretty technical questions, but any info you can point me to would be much appreciated.

  • Whoa, those benchmarks piss me off a little. I just bought the latest and greatest iPad Pro, and within 2 months it's already out-benchmarked. Yikes. WTF, Apple.

  • edited November 2020

    @boomer said:
    Thanks @Stiksi . That link clarifies a lot. I can see why @Cray23 came to prefer the new iPad Air over the Pro; that’s where I am leaning now also. So that leaves the question of why apps like NS2 can only use one core. If a device has 4 cores and NS2 is running on one of them, and I then launch another app, say AUM, would iOS automatically run it on another core, or might it run on the same core as NS2 and slow it down? And what happens if you launch NS2 from within AUM? I.e., is core utilization load-balanced by the OS? And is it done at the app level, or at the task level within an app? I realize these are pretty technical questions, but any info you can point me to would be much appreciated.

    iOS doesn't provide any way for an app to choose which cores, or how many cores, are used. It's all managed by iOS, and it's subject to Apple's decisions at any given time on how to balance performance with battery use and heat management.

    The very latest version of iOS introduces the possibility for apps to use more than one thread for audio processing, but only if older apps are heavily modified to implement it. Not all apps would even benefit from that. And even if they use multiple threads, they still don't have the ability to select which cores those threads get allocated to.

  • Thanks @number37 . So now we are getting to the heart of the question of whether more cores equal better performance for audio apps. I know that a thread can’t be split between cores. If we start with the assumption that iOS is going to allocate threads to cores efficiently, it would generally mean that an app dividing its tasks into more threads translates to better performance. But there is the problem of not knowing which thread will finish first. And for any task where timing is critical, like audio, I assume that’s why you say not all apps would benefit: audio needs to run in a single thread? Am I getting this right? If so, then my next question is: does iOS itself only allow one audio thread for the whole device? If different audio apps can run on their own thread, then it seems what multiple cores would provide is the ability to run more apps without impacting each other’s performance, but not necessarily help the performance of any one app. Whereas a faster processor but fewer cores would improve the performance of any one app if there are fewer apps running. Am I oversimplifying?

  • I’m not a developer. Be aware that much of what follows is surely poorly explained at best, and completely wrong at worst.

    So now we are getting to the heart of the question of whether more cores equal better performance for audio apps. I know that a thread can’t be split between cores.

    I’m not sure that’s correct. I’m not sure it’s incorrect either. I do know that the developer has no control whatsoever over which cores tasks are assigned to. The operating system does that.

    If we start with the assumption that iOS is going to allocate threads to cores efficiently, it would generally mean that an app dividing its tasks into more threads translates to better performance.

    Not a good assumption. The priority is on battery life and heat management, not performance.

    But there is the problem of not knowing which thread will finish first.

    Yes, and there’s a certain amount of overhead involved in coordinating all that.

    And for any task where timing is critical, like audio, I assume that’s why you say not all apps would benefit: audio needs to run in a single thread?

    Currently audio needs to run in a single real-time-safe thread per app. iOS 14 introduces some multi-threading mechanism that I don’t claim to understand. I’ll get into the weeds if I try to explain which types of apps I think can benefit and which can’t. I’m not a developer, so I’d just be pulling thoughts out of my butt.

    Am I getting this right? If so, then my next question is: does iOS itself only allow one audio thread for the whole device?

    No. One audio thread per app. Each app executes independently of all the others. A host passes audio off to apps and/or receives audio from them in chunks. The app needs to do all its work in its own audio thread within those handoff windows, or there are dropouts.

    If different audio apps can run on their own thread

    That’s not accurate. They are independent programs. A host doesn’t run an app in a thread. It instantiates the program (which runs separately in its own process), and the two communicate through inter-process communication.

    (macOS is a bit different in that AUv3 apps can be made to run in the host process.)

    then it seems what multiple cores would provide is the ability to run more apps without impacting each other’s performance, but not necessarily help the performance of any one app.

    I can’t say. I don’t understand it well enough. What I do know is programmers have no control over core allocation. Thread management does not equal core management.

    Whereas a faster processor but fewer cores would improve the performance of any one app if there are fewer apps running.

    I have no idea.

    Am I oversimplifying?

    I can pretty much guarantee that. ;)

    It’s kind of fun pretending like I understand more of this than I do. But I think I’ll step aside now.
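The per-app audio thread and its handoff windows come with a hard deadline, which is easy to put into numbers. A back-of-the-envelope sketch in plain Python (nothing iOS- or NS2-specific; the 48 kHz sample rate is just an example figure):

```python
# Time window an app's audio thread has to render one buffer before
# the next handoff, at a given sample rate. Missing this deadline is
# what produces dropouts. Illustration only, not iOS/NS2 code.
def render_window_ms(buffer_frames: int, sample_rate: int = 48_000) -> float:
    return buffer_frames / sample_rate * 1000.0

for frames in (64, 128, 256, 512, 1024):
    print(f"{frames:5d} frames -> {render_window_ms(frames):.2f} ms per buffer")
```

At a low-latency setting of 128 frames the engine has under 3 ms to render each buffer; at 1024 frames it has over 21 ms, which is one reason heavy projects often survive only at higher latency settings.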

  • LOL! Yeah, what was I thinking? =) But I think those last two statements, which neither of us knows the answer to, would go a long way toward a decision between an iPad Pro and the new iPad Air. More cores but a slower processor, vs. fewer cores and a faster processor? Better yet, just tell me which one to get if all I want it for is music =)

  • I’m pretty sure single-core performance and RAM amount are the number one considerations.

  • @Stiksi thanks. I should have just asked that simple question in the first place =) The new iPad Air 256 is now on my Christmas list! Anybody wanna give me one? ;) :) =) B)

  • So the newest iPad Air has really high single-core benchmarks but only 4 GB of RAM, compared to the latest iPad Pro, which has something like 3/4 of the processing power but 2 GB more RAM.

    Isn’t the processor far more important when you’re using mainly AUv3 plugins and want to run as many as you can without crackling?

    Is RAM less important if you aren’t using a lot of samples?

    I’m just curious because the iPad Pro I have now also has 4 GB of RAM, but only scores around 800 on the single-core benchmark.

    I bet next year’s iPad Pro will have at least the A14 chip, but who knows what month it will be released?

  • edited November 2020

    I believe I first saw this from dendy somewhere; it’s a great video about real-time audio and multi-core on iOS ;)
    I know about the 4 GB of RAM, but that’s the same as my old Pro, so that should be just fine ;) If I increase the latency to full it can handle more, so I guess RAM should be OK, as long as I’m not using tons of long samples.
    Well, I’ll find out in a couple of days; it’s on its way :D Will post results soon.

    And I wouldn’t be surprised if the next iPad Pro gets the M1.

  • @Cray23 said:
    I know about the 4 GB of RAM, but that’s the same as my old Pro, so that should be just fine ;) If I increase the latency to full it can handle more, so I guess RAM should be OK, as long as I’m not using tons of long samples.

    Side note: a few AUv3 apps crash at very high buffer settings (usually higher than 1024, but sometimes 512). 512 is the default max buffer size in some Apple developer examples, and not all developers think to enable higher settings.

  • @number37 Yeah, I don’t really like or even do work at high latency, hence I desperately need an upgrade :DDD But I have been forced a few times to go to high settings in some projects, and so far I have experienced only one issue, with one AUv3; I’m not sure if it was an FX or an instrument, but it was Spectrum (a nice set of AUv3 modules, by the way; if you’re not aware of them, check them out, they’re free ;))
    Anyway, I’ll run the same projects on both in a couple of days and post the results or differences here. (Pro 10.5 vs Air 2020)
    According to Geekbench it’s almost twice the performance, so I hope that translates into real workloads, especially real-time audio; I don’t really care about anything else.
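For a rough sense of what "almost twice" means, here is a trivial arithmetic sketch. The ~800 single-core score for the A10 is quoted earlier in this thread; the A14 figure is my own rough assumption, so treat the ratio as ballpark only:

```python
# Ballpark single-core speedup from benchmark scores.
# 800 for the A10 is quoted earlier in the thread; ~1580 for the A14
# is an assumed figure, so treat the result as a rough estimate only.
a10_single_core = 800
a14_single_core = 1580  # assumption, not from the thread

speedup = a14_single_core / a10_single_core
print(f"~{speedup:.2f}x single-core")
```

Real-world audio headroom won't map one-to-one onto benchmark scores, but a ratio like this sets a rough upper bound on what to hope for.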

  • @Cray23 I anxiously await your findings! I have earmarked money for an upgrade that is burning a hole in my pocket :) I am running a 2018 iPad with 32 GB. Trying to record at high latency is painful; the time lag between playing a note and hearing it drives me insane. I wonder if an upgrade would also affect another issue, zipper noise at mixdown, directly related to latency, described at the end of this thread... https://www.blipinteractive.co.uk/community/index.php?p=/discussion/1341/cc-data-not-being-recorded-or-played-back-as-entered#latest. There is an archive attached to test it with.

  • edited November 2020

    Hi there @boomer , I’m not entirely sure what causes the issue you’re experiencing :( I’ll try to replicate it, although I don’t have a wind controller. Does it use standard MIDI resolution, or some custom extended one? Or you could post that sound example in a project here and I’ll render it. Btw, did you try it on different devices?

    I had overlooked that the thread mentions an attached archive, so I’ll try that when I get the new iPad on Tuesday.

  • edited November 2020

    @boomer said:
    @Cray23 I anxiously await your findings! I have earmarked money for an upgrade that is burning a hole in my pocket :) I am running a 2018 iPad with 32 GB. Trying to record at high latency is painful; the time lag between playing a note and hearing it drives me insane. I wonder if an upgrade would also affect another issue, zipper noise at mixdown, directly related to latency, described at the end of this thread... https://www.blipinteractive.co.uk/community/index.php?p=/discussion/1341/cc-data-not-being-recorded-or-played-back-as-entered#latest. There is an archive attached to test it with.

    The mixdown is not done in real time, so it is not very probable that an upgrade would affect that, sorry. But you could well get a shorter buffer for recording and playback out of a more powerful device. Just how much is anyone’s guess.

  • @Stiksi Yeah, I misspoke a bit about the mixdown part. What I meant to ask is whether a low buffer would still exacerbate the MIDI zipper noise with a faster processor, regardless of mixdown or real-time playback. Since I can’t get my brain around why audio buffer size would impact MIDI zipper noise in the first place, I am grasping at straws. But you do give me hope with regard to recording with a lower buffer, even if I have to manually mix down at the higher one to reduce zipper noise.

  • edited November 2020

    @boomer Can you tell me what wind controller you use, and the exact iPad model (or just its processor), please, so I can see if I can replicate it with what I have at hand ;) and know more or less what to expect.
    Still waiting for the Air, but I thought I could try it on my old Pro first ;)

  • Hi @Cray23 . The wind controller is irrelevant at this point; that archive was created manually with just a few MIDI events. Hopefully the Obsidian patch will come down with the archive, as it best exposes the issue. If the archive does indeed reproduce the conditions, all you should need to do is play it back as is while changing the buffer from lowest to highest. If your device behaves the same as mine, you should hear distinct zipper noise at the lowest buffer and less at the higher ones. I have an iPad (6th generation) with 32 GB running iPadOS 14.2; it says Model A1893 on the back, and the processor is an A10. Thanks so much for checking this. Let me know if the archive does not transfer properly to your NS2; I think it should.

  • edited November 2020

    @boomer OK, I see. That automation is indeed just a line without any stepping, so it should be smooth. What I get on my old Pro is very obvious stepping/zipper, with an almost pluck-like sound to it, at very low latency, and slightly smoother at high latency (isn’t that the opposite of what you experience?). But all this only really happens at slower BPM; I slowed it down to 20 BPM, but once you get up around 130, and without resonance, it’s pretty much gone or hard to notice. I never noticed it before, as I barely go under 110 BPM. It’s strange that it doesn’t do it when you just twist the knob in real time :( But yeah, I don’t think a hardware change will make it disappear at all. I had similar results inside Cubasis 2 with Micrologue: the automation in Cubasis had lots of points and appeared smooth enough, but when I slowed it down and cranked up the resonance there was that obvious stepping as well :/ though I’m not sure it’s the same thing.

    I think I’ve realised the problem behind it. The macro knobs and XY pad only have 0-100% values, and if you look at the filter cutoff knob while twisting it, it has a much larger range than 0-99. So the automation is quantised to that resolution, which is why we get stepping/zipper from automation but not when we actually twist the cutoff knob itself. I believe that’s the issue: the resolution of the macro knobs, which I hope would be easy to fix. Still, I might be totally wrong; perhaps the only person who can shed some light on this is Matt himself :)

    Btw, it’s really obvious when you try to modulate the osc sync ratio; at 20 BPM it’s very obvious indeed, but even more so with rapid modulation. With osc sync you can sort of count the tones: if you go 4/4 16th notes and do a 0-to-100 automation over half a step, it sounds only 3 tones, and a whole step does 6, I believe. But again, twisting the osc sync ratio knob in real time sounds OK.
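The macro-resolution theory is easy to sketch numerically: quantizing a smooth automation ramp to about 100 levels leaves roughly 1% jumps in the control signal, plenty to hear on a slow resonant sweep. A toy Python illustration (my own sketch of the quantization idea, not NS2's actual internals):

```python
# Quantize a smooth 0..1 automation ramp to a fixed number of levels
# (as a 0-100% macro knob would) and report the largest jump between
# consecutive control updates. Illustration only, not NS2 internals.
def max_step(levels: int, updates: int = 100_000) -> float:
    ramp = [i / (updates - 1) for i in range(updates)]
    quantized = [round(v * (levels - 1)) / (levels - 1) for v in ramp]
    return max(abs(b - a) for a, b in zip(quantized, quantized[1:]))

print(max_step(101))     # 0-100 macro: jumps of ~0.01 (1%)
print(max_step(16384))   # 14-bit parameter: jumps of ~0.00006
```

At faster tempos each step lasts only a tiny fraction of the sweep and tends to be masked, which fits the observation that the zipper mostly shows up at low BPM.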

  • Thanks @Cray23 . Lower buffer = more zipper; that’s the weird part. The issue with automated mixdown is that it automatically uses the lowest possible buffer, which adds maximum zipper, so I have resorted to recording the output in real time at a high buffer into AudioShare. Your experience, with live MIDI response being almost fine but recording introducing zipper on playback, is the same as mine. As for tempo, I agree; I did it at 20 BPM to demonstrate clearly, but even at normal speeds (I will often go down into the 70s) the zipper is there on certain patches. What I have been doing as a workaround is setting the tempo to double what I want, e.g. if I want it to sound at 120 I set the tempo to 240. That seems to help, but the downside is that it messes up arpeggiators and other MIDI generators.

    What I was hoping is that a faster processor might reduce zipper noise at low buffers. I’m not very hopeful, but there’s no way to know until it’s tried. I am much more hopeful that the faster processor will enable at least recording at a low buffer, where performance timing matters most. I think the issue lies in an algorithm unique to NS2 for interpolating between recorded MIDI CC events; live interpolation might use something different. Unlike other sequencers, it appears that the number of recorded MIDI events does not matter: if you add events to that line in my demo, it does not seem to change the response at all. So changing what the knobs generate would not impact the recorded playback. The preceding is just guessing on my part, based on what I have observed, but I could be way off base.
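The observation that the number of recorded events does not matter is at least consistent with plain linear interpolation between CC points: extra events that sit on the same line leave the interpolated curve unchanged. A generic sketch of that idea (my own illustration, not NS2's actual playback algorithm):

```python
# Linear interpolation between sparse (time, value) MIDI CC events.
# Generic illustration of event interpolation, not NS2's algorithm.
def cc_value(events, t):
    """events: list of (time, value) pairs sorted by time; value at time t."""
    if t <= events[0][0]:
        return events[0][1]
    for (t0, v0), (t1, v1) in zip(events, events[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return events[-1][1]

ramp = [(0.0, 0.0), (4.0, 127.0)]                 # two events, one line
dense = [(0.0, 0.0), (2.0, 63.5), (4.0, 127.0)]   # extra point on the same line
print(cc_value(ramp, 3.0))   # 95.25
print(cc_value(dense, 3.0))  # still 95.25
```

If something like this is in play, smoothness would depend on how often the engine re-evaluates the line during playback rather than on the recorded data, which is one place a buffer-size dependence could sneak in.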

  • @boomer So it is behaving exactly the same on the new Air; that zipper, that is. The other thing I tested was Model 15, which I believe is one of the most CPU-heavy synths, at least of those I have. On my old Pro 10.5 one instance used 20% CPU, versus 14% on the Air, which disappointed me a bit at first.
    Then I tried one of my heavier projects, and that was much more promising: the old Pro was at 78% and unable to really play it, where the Air used only 32% without breaking a sweat. All of this was at the very low latency setting; I should have mentioned that at the top, I suppose ;)
    I’m a bit busy at the moment but will start using it more next week, so if I find something wrong with it I will post it here.

  • @Cray23 said:
    @boomer So it is behaving exactly the same on the new Air; that zipper, that is. The other thing I tested was Model 15, which I believe is one of the most CPU-heavy synths, at least of those I have. On my old Pro 10.5 one instance used 20% CPU, versus 14% on the Air, which disappointed me a bit at first.
    Then I tried one of my heavier projects, and that was much more promising: the old Pro was at 78% and unable to really play it, where the Air used only 32% without breaking a sweat. All of this was at the very low latency setting; I should have mentioned that at the top, I suppose ;)
    I’m a bit busy at the moment but will start using it more next week, so if I find something wrong with it I will post it here.

    It can be very tricky to get 1:1 comparisons between different iDevices, because you can never really tell when the device decides to switch from the low-power cores to the high-power ones.

  • Thanks for the info @Cray23 . I was not really expecting the zipper noise to improve, but I’m happy to hear about the overall performance improvement at low latency. Based on what I have been reading, and the video you posted above, I am not sure you can put much stock in the CPU numbers. I’d be curious what the numbers say if you add multiple instances of the Model D; it could be that they don’t go up much. Anyway, I think I am going to take the plunge on the Air.

  • @boomer I can try multiple instances of Model D for you; I should be home in a couple of hours, so I will try it then.
    @Stiksi Well, that might be the case; a single instance of Model 15 didn’t drop to half the CPU usage, so perhaps it was indeed running on an efficiency core. I’ll try many instances of it later today and see how that goes ;)

  • @boomer It can run 5 instances of Model D at the low latency setting; if I add a 6th it won’t play without crackles on medium, but it does at high latency. I used the following presets to test: Bass: Human Bass; Classics: G Thang; Classics: Electric New Man; Classics: Chariots; Keys: Dirty Keys; and Keys: Oasys, plus one Slate with the 808 factory kit. Every track has a compressor, and there is one NS2 delay and one NS2 reverb. CPU is around 67% at high latency with 6 instances, or 53% at low with 5. But if I switch up to high with 5 of them loaded, it jumps to 62%, and when going back down to low it crackles; if I then go to lowest and back up to low, it’s OK. Also, this iPad runs at 48 kHz.
    It can run 6 instances of Model 15 at low latency (some with polyphonic patches), which I thought it would not, so Model D might be more CPU-intensive. So, to summarise: we still can’t use 20+ instances of Model D in one project :( :DDD Maybe Moog will optimise it for the A14, if that’s anything they can actually do. Anyway, let me know if you want to try anything else, as I finally have a weekend to make music after a while :D
    Peace ✌️

  • Thanks @Cray23 . So if you A/B compare the same setup above, old Pro to new Air, is there a significant improvement, or would you say it’s about the same? You did say that you saw a significant improvement with an NS2 project, so if an A/B compare of Model D instances shows little difference, then we are left with “it depends on how the app was coded” when it comes to performance improvement. I will find all this out myself in a couple of weeks when my Air comes, but overall, do you think it was worth it? Is there anybody out there with an A12Z Pro and a new A14 Air who could do A/B testing of some audio apps and DAWs? There are lots of general performance numbers out there, but I can’t find much for specific audio/MIDI apps.

  • edited December 2020

    I don't think zipper noise is affected by device performance. I think it may be caused by NS2 not recording the automation at a high enough resolution, so it's basically quantized. At least that's what I remember reading in an earlier thread somewhere.
