
A glimpse into the future of passenger flight



A glimpse into the future of passenger flight © Courtesy of Hybrid Air Vehicles

Pilotless air taxis, robot-controlled airports and glass-bottomed airships may seem like the stuff of science fiction, but these futuristic technologies could be at an airport near you by as soon as 2030.

Peering into the not-so-distant future, we reveal how you could be travelling just six years from now....

Flying taxis and luxury airships – this is what air travel will look like in 2030 (msn.com)


4 hours ago, deicer said:

AI is taking over everything!

And that is the reason for the uproar.  I can only speak from experience within my extended family, some of whom work on teams behind some of the commonly known AIs.  What I hear is that much of the core engine code was never developed to be used without close supervision, yet that is precisely the trend things are on.  The sunk costs in what already 'works' mean that we are likely stuck with bug-riddled and hack-vulnerable code for the foreseeable future.

This, coupled with the bad habit humans have of trusting computed outputs (recalling 'children of the magenta'), suggests to me that it's only a matter of time before AI-generated aviation products are faulted in an incident or accident.  Do I want AI to have direct control of an aircraft?  Not with the AI as we know it today.

All IMO.

Vs


B.C.’s Harbour Air aims to buy 50 electric engines to convert seaplane fleet

By Simon Little  Global News
Posted April 23, 2024 1:29 pm
1 min read

B.C.’s Harbour Air has unveiled plans to buy 50 new electric engines to electrify its seaplane fleet.


The company made history with the 2019 test flight of the world’s first fully electric commercial aircraft and has conducted 78 subsequent test flights.

On Tuesday, it said it had signed a letter of intent with electric engine maker magniX to buy 50 magni650 electric engines.

In a media release, it said the engine maker would support work to get the engines validated by Transport Canada and gain Canadian and U.S. certification to have the magni650s installed in DHC-2 Beaver seaplanes.

[Video, 2:06: Canadian seaplane airline launches world's first commercial electric plane]

The companies are also looking to extend support to other aircraft models.

Harbour Air said it is aiming to build a west coast sustainable aviation hub, including offering electric conversion services to third parties.

The seaplane operator is aiming for commercial certification of its first electric aircraft by 2026.


On 4/20/2024 at 2:11 PM, Vsplat said:

....  it's only a matter of time before AI generated aviation products are faulted in an incident or accident.  Do I want AI to have direct control of an aircraft?  Not with the AI as we know it today.

Hi, Vsplat - Same gut inclination as you, but "gut" is doing some work there. So is "as we know it today". 

Alongside the risks of a headlong plunge into applying AI models is the mistake of comparing them to an idealized perfection (in this case, no accidents at all) rather than to the real world as we know it. Food for thought:

New data shows Waymo crashes a lot less than human drivers (understandingai.org)

The article refers to very limited early data, far from dispositive, & the stakes may be a bit different, but it does point towards maintaining an open mind :023:

Cheers, IFG - :b:


Agree fully, IFG.  I favour an open mind and critical thinking.  The old ways were not always the best, no matter how nostalgia paints them.  That doesn't mean the new ways are better, but it doesn't mean they aren't, either.

What disturbs me in some safety-critical AI implementations is the lack of transparency.  Humans fail, sometimes with warning signs, sometimes without.  For the most part, we have come to understand the baseline behaviour of a human pilot in a particular seat, and we can compare observed against expected behaviour to spot trouble ahead.  Clearly this does not always work (Germanwings comes to mind).

With an AI, we have a system where code may have been generated by other code, and generative AI products are often indecipherable to the humans attempting to do QA, so the response is simply to check outputs from a test regime.  If the results are good, the code must be right, right?  (And how many times have we seen on the line where the assumptions in the SOP or QRH didn't fit the circumstance?)  The result is that we can have an automated product throwing no flags as it develops and executes an incorrect plan faster than a human can detect or prevent a bad outcome.
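To make that concrete, here's a minimal sketch (every function name and number below is hypothetical, not taken from any real system) of how an output-only test regime can bless code whose hidden assumption is wrong everywhere the tests don't look:

```python
# Hypothetical example: an output-only test regime passes, yet the logic is
# wrong outside the tested envelope.

def required_descent_rate_fpm(altitude_to_lose_ft: float, distance_nm: float,
                              groundspeed_kt: float) -> float:
    """Return the descent rate (ft/min) needed to lose altitude over a distance.

    Bug for illustration: silently clamps distance to at least 5 NM, so the
    answer is badly wrong on short final, a case the test regime never probes.
    """
    distance_nm = max(distance_nm, 5.0)            # hidden, wrong assumption
    time_min = distance_nm / groundspeed_kt * 60
    return altitude_to_lose_ft / time_min


def run_output_tests() -> bool:
    """Output-only QA: compare results against expected values for a few cases."""
    cases = [
        # (altitude ft, distance NM, groundspeed kt, expected ft/min)
        (10000, 30, 300, 1666.7),
        (5000, 20, 240, 1000.0),
        (3000, 10, 180, 900.0),
    ]
    return all(abs(required_descent_rate_fpm(a, d, g) - exp) < 1.0
               for a, d, g, exp in cases)


if __name__ == "__main__":
    print("test regime:", "PASS" if run_output_tests() else "FAIL")   # PASS
    # Off the tested envelope, the hidden clamp gives a dangerously shallow answer:
    print(round(required_descent_rate_fpm(2000, 2, 140)), "ft/min")   # ~933, should be ~2333
```

All three test cases pass, so the QA looks green, yet the clamped distance gives a hopelessly shallow number on a short final nobody thought to test.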

Not saying for a minute that AI can't help.  Plenty of applications where it is a game changer in terms of health care and basic safety where a human monitor is a poor instrument.  Do I want to bet my life on something that is several layers of code deeper than the last human input?  Not just yet.

Vs


Could AI pull off a 'Sully'? Probably not; would it put a flyable aircraft on the bottom of the Atlantic? Probably not.

It won't have to be perfect, just consistently better.

At some point, lines will cross on a graph and there will be a new reality.


On 4/26/2024 at 2:47 PM, Airband said:

.... At some point, lines will cross on a graph and there will be a new reality.

As long as "AI" is not itself plotting that graph, of course :whistling:. Somewhere or another I've saved a partial transcript of a (very polite?!) interaction with ChatGPT on a pretty straightforward probabilities question. It was gracious about its accumulating errors, & in fairness ChatGPT doesn't represent all "AI", but "as we know it" for now, a handy saltshaker is in order.

Cheers, IFG :b:


On 4/26/2024 at 2:47 PM, Airband said:

Could AI pull off a 'Sully'? Probably not; would it put a flyable aircraft on the bottom of the Atlantic? Probably not.

It won't have to be perfect, just consistently better.

At some point, lines will cross on a graph and there will be a new reality.

Well, here's the thing. An AI instance is 'trained' on data and algorithms.  It's not a universally objective, omniscient entity; it has biases based on how it was built and informed.

When I think of all the times I've dealt with an agency or department that absolutely, positively believed that the best thing I could do was accept this (busted) aircraft or fly through that line of weather instead of boarding extra gas and taking the time to go around it, I wonder: what would an AI, which doesn't have the same attachment to its skin as I do, have done?   I think it would depend greatly on whose thumb was on the scale when the AI was trained.  The owner of that thumb would likely not be on board to live with the consequences of their decision.

Not the first time I've said this - the biggest safety measure the passengers have is the fact that the individuals in control are actually on the aircraft with them.  Break that link and it's only a matter of time before the physics of deviance play out.

FWIW

Vs

Edited by Vsplat

Try to play chess against a computer and watch it map out all the permutations and eventual outcomes before it makes a move.  The physical representation of the process is pretty impressive and gives a great perspective on the decision-making process.  I could easily envision an AI applying the same process to flying, but the weakness is qualifying what is and isn't acceptable in achieving a positive outcome.  How do you teach it judgement?
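For anyone curious what "mapping out the permutations" looks like stripped to the bone, here's a toy sketch of the same game-tree idea, assuming a trivial take-the-last-stick game rather than chess; the shape of the decision process, not the game, is the point:

```python
# Game-tree search in miniature: enumerate the legal moves, score the end
# positions, and back the values up the tree (toy game: take 1-3 sticks,
# whoever takes the last stick wins).

def minimax(sticks: int, maximizing: bool) -> int:
    """Return +1 if our side can force a win from here, -1 otherwise."""
    if sticks == 0:
        # The previous player took the last stick and won.
        return -1 if maximizing else 1
    outcomes = [minimax(sticks - take, not maximizing)
                for take in (1, 2, 3) if take <= sticks]
    return max(outcomes) if maximizing else min(outcomes)


def best_move(sticks: int) -> int:
    """Pick the move whose backed-up value is best for the player to move."""
    return max((take for take in (1, 2, 3) if take <= sticks),
               key=lambda take: minimax(sticks - take, maximizing=False))


if __name__ == "__main__":
    print(best_move(10))  # 2: leaves 8, a multiple of 4, which is a forced win
```

The judgement question is exactly what's missing here: the score at the bottom of the tree is handed to the machine, and everything above it is just enumeration.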


There are thousands of tasks that AI can be trained to perform, but as Vs has astutely pointed out, the result is only as good as the training. "Skin in the game" is often used to describe situations where one's own interest in survival drives decision-making and tends to make us more conservative. How do you teach a computer to have a moral need to stay alive?

A first attempt to train AI to detect skin cancers failed because the training was unintentionally biased by the images chosen to train the system. The first trials resulted in the machine deciding that any image that included a measuring device to depict the size of the lesion was cancer, while any image that did not have a measuring device was marked as "not cancer". That error was corrected, and now it's apparently doing an amazing job at diagnostics. My question would be: how many iterations of training would it take to teach a machine to sense and feel all of the "seat of the pants" things that we pilots use to help keep us alive?
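As a toy illustration of how that kind of bias sneaks in (the data and feature names below are made up purely to show the mechanism): hand a naive learner a training set where every cancerous image also happens to contain a ruler, and the ruler is the feature it will latch onto.

```python
# Hypothetical sketch of the ruler problem: a naive learner picks whichever
# single feature best separates the training labels, and the accidental
# artifact ("ruler") wins over the medically relevant features.

from collections import Counter

# Toy training set: (features, label). Every "cancer" image includes a ruler.
TRAIN = [
    ({"ruler": 1, "irregular_border": 1, "asymmetry": 1}, "cancer"),
    ({"ruler": 1, "irregular_border": 1, "asymmetry": 0}, "cancer"),
    ({"ruler": 1, "irregular_border": 0, "asymmetry": 1}, "cancer"),
    ({"ruler": 0, "irregular_border": 0, "asymmetry": 0}, "not cancer"),
    ({"ruler": 0, "irregular_border": 1, "asymmetry": 0}, "not cancer"),
    ({"ruler": 0, "irregular_border": 0, "asymmetry": 1}, "not cancer"),
]

def pick_shortcut_feature(data):
    """Return the single feature whose presence best predicts 'cancer'."""
    scores = Counter()
    for features, label in data:
        for name, value in features.items():
            if value == 1:
                scores[name] += 1 if label == "cancer" else -1
    return scores.most_common(1)[0][0]

if __name__ == "__main__":
    print(pick_shortcut_feature(TRAIN))
    # -> "ruler": the learner keys on the measuring device, not the lesion.
```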


On 4/30/2024 at 8:15 PM, Vsplat said:

 

Not the first time I've said this - the biggest safety measure the passengers have is the fact that the individuals in control are actually on the aircraft with them.  Break that link and it's only a matter of time before the physics of deviance play out.

FWIW

Vs

Way back in time, Simpson Air had placards on the dash in the aircraft:

"Pilot's Guarantee: If my ass gets there, so does yours"


29 minutes ago, W5 said:

Way back in time, Simpson Air had placards on the dash in the aircraft:

"Pilot's Guarantee: If my ass gets there, so does yours"

I worked at a place that had this sign on the dash:

"To ensure proficiency please do not tip the pilot if you are not completely satisfied with the flight". 

Worked wonders - made lots of tips.

