Sapiens sets out the modest goal of being a history of humankind. This sweeping ambition in itself made me somewhat suspicious. But the author presents interesting descriptions and interpretations of the major epochs in human existence, and I found myself enjoying the book for the most part.
One of the highlights of the book was the author's take on the agrarian revolution. Instead of describing the transition from hunter-gatherers to sedentary farming as a major milestone in furthering civilization, the author argues it should be seen as humankind stumbling into a disaster. The evidence suggests that hunter-gatherers tended to live good, robust lives. They worked only about 20 hours a week, and this work was active, varied, and probably to a large degree enjoyable. Just look at all the people in the modern world with hunter-gatherer-like hobbies: hunting, fishing, jogging, etc. Hunter-gatherers had varied food sources, so they were both healthier and less susceptible to hunger. Women tended to have fewer children, with more time between births.
Much unlike our modern stereotype of dull ogres in caves, hunter-gatherers needed to be smart, and their daily lives were filled with intellectual vigour. They needed to be able to discern edible from poisonous among the thousands of plants in their environment. They needed the strategy, patience and intellectual agility to stalk and kill prey with only primitive weapons.
The bands of hunter-gatherers who stumbled on the ability to cultivate some of the plants they found in the wild made a losing bet with the devil. Once agriculture became the main source of sustenance, Malthusian logic soon took over. The initial harvest bounties and sedentary lifestyles encouraged more children and, in turn, more mouths to feed. Soon, 20 hours a week of enjoyable work was exchanged for backbreaking labour in the fields, malnutrition and the constant threat of famine after a bad harvest.
There is a good chance that with the agrarian revolution, humankind actually got dumber. No longer were we required to use our outsized brains in a daily puzzle to find food and find our way. We filled our days with rote, uninteresting, and intellectually unchallenging work. A supple brain was no longer a trait necessary to pass on one's genes, and human intelligence likely suffered for it. The surprising conclusion: we are probably duller now than our hunter-gatherer ancestors.
The book has plenty of other material, from the first pages, where Neanderthals are described as intelligent, caring primates eradicated by the ruthless sapiens with their secret weapon of advanced language, all the way to descriptions of our modern civilisation alongside predictions for what is to come for humankind. This latter subject is also the theme of Harari's latest book, Homo Deus. But no matter how elegantly he summarises human existence, I am sceptical of anybody's claims to be able to tell the future of our kind.
Philip Tetlock is a psychologist who has studied expert opinion. To his own mild befuddlement, what most people take away from his research is that the average expert is no better at forecasting the future than, as he puts it, a chimp throwing a dart. This is enlightening, and in tune with today's prevailing skepticism of experts, especially the political kind. But it is only part of the story. The arguably more interesting story, and the topic of Superforecasting, is what defines those who are able to make accurate forecasts.
The book primarily discusses results from a large study Tetlock ran, funded by the US intelligence community, asking volunteers to make forecasts on complex political and international issues: Will North Korea launch an intercontinental ballistic missile in the next three years?, and so on. Tetlock credits US intelligence for going along with the research. Imagine the embarrassment if some Regular Joes managed to out-analyse and out-forecast trained intelligence analysts with access to classified information.
Luckily for US intelligence, the large majority of the volunteers could not out-analyse or out-forecast intelligence agents. Their forecasts were, like those of the experts Tetlock had studied before, no better than a chimp throwing darts. But the exciting part of the research was finding, and then analysing, the outliers. A small but not insignificant portion of the volunteers turned out to be really good at forecasting political and international events. Better, in fact, than the intelligence agencies themselves. Tetlock calls these the Superforecasters.
One side of this study that I liked is that it looked beyond average results. The headline could have been "average forecaster no better than a chimp". Instead, the author chose to look at the tail behaviour of his group. This reminds me of Nassim Taleb's argument in The Black Swan that characterising a host of behaviours by the average is misleading, if not dangerous, since many of the most important phenomena have fat-tailed distributions. That is to say, the extremes, while still rare, are not vanishingly so, and can play a disproportionate role in whatever system they are in. Big financial crashes are rare, but they do happen, and they can define the resulting economic and political systems. Superforecasters are rare, but they do exist, and perhaps they can tell us something important about the world.
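Taleb's fat-tail point can be made concrete with a small simulation (my own toy sketch, not from either book): draw the same number of samples from a thin-tailed and a heavy-tailed distribution and see how much of the total the single largest observation accounts for.

```python
import random

random.seed(42)
N = 10_000

# Thin-tailed: absolute values of a standard normal.
gauss = [abs(random.gauss(0, 1)) for _ in range(N)]
# Fat-tailed: Pareto with shape alpha = 1.1 (a very heavy tail).
pareto = [random.paretovariate(1.1) for _ in range(N)]

def max_share(xs):
    """Fraction of the total sum contributed by the single largest draw."""
    return max(xs) / sum(xs)

print(f"normal: largest draw is {max_share(gauss):.4%} of the total")
print(f"pareto: largest draw is {max_share(pareto):.4%} of the total")
```

With a thin tail, no single observation matters much; with a fat tail, one "black swan" draw can account for a sizeable share of the whole sum, which is exactly why averaging over such a distribution is misleading.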
So who are these superforecasters? They were not all geniuses with PhDs, though they did tend to have above-average intelligence. But natural intelligence was only a small part of the story. More important was a method of working. Superforecasters had to be hyper-aware of their own blind spots and constantly reevaluate what they knew. They also had to have a good sense of probabilities, something most people are notoriously bad at. Superforecasters could set probabilities of events and subsequently adjust them, not in chunks of 10 percentage points, but in steps of 5 or even 1.
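Tetlock's tournaments scored this fine-grained probability setting with the Brier score: the squared gap between a probability forecast and the 0/1 outcome, averaged over many questions. A minimal sketch (my own illustration with made-up numbers, not code or data from the book):

```python
def brier(forecast: float, outcome: int) -> float:
    """Brier score for one question: 0.0 is perfect, 1.0 is maximally wrong.
    With this single-probability form, a permanent 50% hedge scores 0.25."""
    return (forecast - outcome) ** 2

def avg_brier(forecasts, outcomes):
    """Average Brier score over a series of questions."""
    return sum(brier(f, o) for f, o in zip(forecasts, outcomes)) / len(outcomes)

# Hypothetical example: a forecaster working in 1-point steps whose nudges
# track the evidence, versus one stuck on round numbers.
outcomes = [1, 0, 1, 1]
fine     = [0.72, 0.18, 0.91, 0.64]
coarse   = [0.70, 0.20, 0.90, 0.60]
print(avg_brier(fine, outcomes), avg_brier(coarse, outcomes))
```

The finer forecaster wins here only because the extra precision reflects real information; granularity by itself earns nothing, which is the point about method over raw confidence.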
Since superforecasting seemingly relies more on method than on pure brain power, the author suggests that we could all get better at it. Though this is perhaps part of the pitch for selling the book in the self-help and business section of the bookstore. I suspect the combination of careful analysis and constant, self-doubting awareness is a rare set of personality traits that would be hard to learn to the degree that Superforecasters employ them.
I wondered how well I would do as a forecaster. A big chunk of my job is analysis: I write academic papers for a living. I have also worked for a macroeconomic forecasting firm. I should be at least better than average, right? Maybe. Admittedly, I am not always the most patient person, nor always the most detail-focused. I can also be stubborn on some points, and I can imagine myself hanging on to narratives or storylines too long. Tetlock's study is ongoing and continues to accept volunteers, so perhaps I will have to give it a go at some point.
I just read this well-known paper by Kahneman and Deaton. The pair got hold of detailed survey data from Gallup on subjective well-being. What was apparently new was that they distinguished between evaluation of life (how respondents rated their well-being overall) and emotional well-being (measures of how much stress, sadness and anger, or alternatively happiness and enjoyment, respondents reported).
The authors find that reported life evaluation grows proportionally with income on the log scale. In practice this means thinking of a percentage increase in income, rather than an absolute increase, as the relevant measure. This makes sense of course. A thousand dollars can make a big difference to the satisfaction of a line worker at McDonald's. It is a drop in the bucket for a partner at a prestigious law firm.
Emotional well-being also grows with income. But here the authors find an important difference from life evaluation: there is a cutoff point. Beyond a household income of about 75,000 dollars, increases in income no longer seem to increase emotional well-being. In fact, more income is even associated with more stress. So a lawyer making 100,000 dollars at a law firm who gets a promotion and now makes 130,000 dollars may well rate his life evaluation as higher. But that extra money does not make him feel happier, less sad or less stressed.
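The two findings can be captured in a stylized toy model (my own functional forms and numbers, not the authors' actual estimates): life evaluation grows with log income without limit, while emotional well-being grows with log income only up to a satiation point around 75,000 dollars.

```python
import math

SATIATION = 75_000  # rough household-income plateau reported by Kahneman and Deaton

def life_evaluation(income: float) -> float:
    """Stylized: rises with log income at every level."""
    return math.log(income)

def emotional_wellbeing(income: float) -> float:
    """Stylized: rises with log income, but flat beyond the satiation point."""
    return math.log(min(income, SATIATION))

# On the log scale, equal *percentage* raises buy equal gains in life
# evaluation: a 10% raise helps the line worker and the law partner alike.
gain_low = life_evaluation(22_000) - life_evaluation(20_000)
gain_high = life_evaluation(110_000) - life_evaluation(100_000)
print(round(gain_low, 4), round(gain_high, 4))  # identical gains

# But the lawyer's raise from 100k to 130k adds nothing emotionally:
print(emotional_wellbeing(130_000) - emotional_wellbeing(100_000))  # 0.0
```

The kink at the satiation point is the whole story: below it the two measures move together, above it they come apart.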
So to the old adage that money doesn't make you happy, the researchers say: sort of. It depends on how you define happiness. And it depends on whether you are barely scraping by, or moving from affluent to even more affluent. The results make me think of the classic paper on New York taxi drivers who were found to target their income. On a busy day with lots of fares (and a correspondingly high hourly rate), cab drivers tended to quit early. They had made what they intended to make. It seems these cabbies may have been onto something. Why spend more time working, when extra income above a certain level won't make you happier?
I arrived in Manchester a couple of weeks ago, ahead of a wedding I would be attending just outside the city. My expectations were not great. If I had a picture of Manchester, it was of a run-down industrial town, well past its glory days: think the British version of Detroit or Cleveland. From those low expectations, I was pleasantly surprised in many ways. But the city is also a series of missed development opportunities that could set it apart from its big brother London, to which I can't help comparing it.
First, the good. From the airport I was able to take a new light rail line all the way into the city centre. The line was modern, clean and easy to use. The light rail system seems fairly extensive and covers a large chunk of the metro area. Large swaths of the city centre have been handed over to the light rail lines. In cities the size of Manchester, light rail makes a lot of sense. It is an efficient use of the streetscape, and it gives residents and visitors a pleasant and easy-to-understand transportation option. I wish more cities would invest in good light rail: places like Stockholm and Oslo, or even bigger cities with undergrounds, like London and New York, could benefit from these systems.
We pulled into the Deansgate station downtown, in the Castlefield area of the city. This is the historic centre of the city, with archaeological finds going back to Roman times. More important to the modern shape of the city, it was also the centre of the expanding industrial city that took shape in the 1800s. This was the site of the world's first railway station and some of the original cotton mills that defined the industrial city. The area is now an exciting mix of refurbished brick buildings from the industrial era and new residential towers. Unlike London, the city does not seem allergic to new development. In this part of town particularly, the architecture is an exciting mix of old and new, and the area is clearly adding density. None of this "doesn't fit in" nonsense getting in the way of city-building.
I haven't bothered to check what flats cost in Manchester, but I imagine they are a relative bargain compared with London. Building plenty of new apartments certainly doesn't hurt in keeping prices affordable.
I can see a development path for Manchester as the anti-London: doing right all the things London does wrong. The considerable tech industry around the University of Manchester and the legacy of Manchester's storied industrial history could be the kernel of a vibrant economy that draws in the young and educated. Cheap rents and available space could draw artists and musicians, à la Berlin. Without knowing too much about the particulars, this sounds plausible from what I have seen so far.
But a great alternative city needs a corresponding transport system. The light rail is a good start, but for walking and biking, Manchester is an awful city. The car is, without contest, the dominant force on the roads.
I saw only one decent cycle track in my time in Manchester: along Oxford Road, connecting the university to downtown. But this protected track disappeared upon entering the city centre. Someone could cycle safely to the edge of the city centre, but not into it. Thereafter the cars rule: a ridiculous state of affairs. Even on this one cycle track, the planners were miserly with the space: a meter across, at a maximum. Nothing like the generous cycleways of Copenhagen.
I did see a fair number of cyclists, but they tended to be the type that dares brave heavy car and bus traffic: young men riding fast. The key demographic that often indicates a good bicycling city, women, was almost completely missing.
I had also planned to go for a morning run: a great way of seeing a city and getting a good start to the day. It seemed pointless. Unlike London, there were no good-sized parks in the city centre. The river did not provide any continuous track to run along either. A run would seemingly involve a series of starts and stops along the road, waiting for green men and trying to avoid being run down. Not exactly the refreshing start I had been hoping for.
Walking was nearly as bad. I only found one pedestrian plaza: about a kilometer along a mall. Sidewalks tended to be narrow and crowded. The old canals that run through parts of the city had potential, and indeed there were some nice routes. But in general this space seemed under-utilised too.
England is a flat, crowded country with a mystifying insistence on awful city planning. True enough, the weather can be horrid, but no worse than in cycling and walking all-stars Denmark and the Netherlands. While in England I talked to someone from Bristol, who told me it was England's alternative city. Perhaps they have figured out proper city transport.
Bowling Alone by Robert D. Putnam.
Bowling Alone, published in 2000, is cited so often that I knew the main thesis before reading a single page. Since a peak in the 1960s, Americans have become gradually less civic-minded and social. They belong to fewer active social, political and welfare groups and organisations. The sagging popularity of bowling leagues gives the book its title.
The first third of the book documents the case that civic and social engagement really did undergo a multi-decade decline in the latter part of the 20th century. The period from the end of World War II until the late 1960s was one of great civic engagement. The so-called "Greatest Generation" really was great in its drive to join organisations, engage in politics and work for the greater welfare. But the succeeding generations did not continue the trend. Once the facts are pointed out, they seem almost obvious. Younger generations, like my own, think of things like bridge clubs, Moose Lodges, and voting in primaries as something old people do, and that is because it is. Younger generations have neither filled the membership rolls of old clubs to the same degree nor created new clubs of their own.
The most interesting part of the book is Putnam's run-through of the possible causes of this disengagement. Sprawl is one factor. Since the 1950s boom in highway building, more and more people have moved away from dense cities and into suburban areas. Someone living in a city might live close to their local clubhouse and next door to friends and family. It was easy to swing by the club after work or on the weekend, and poker night was maybe just a block down. Suburban living is something else. Housing is separated by design from all other buildings, and driving becomes a necessity to get almost anywhere. People with long commutes are not keen on jumping back into their cars to drive into the city for a meeting.
I had thought that suburbanisation would be one of the main causes of disengagement. But Putnam assigns it relatively little of the blame. A bigger factor, he argues, is TV. The penetration of TVs into homes happened quickly in the 1950s, and the hours spent watching TV increased steadily over the following decades. Putnam points to evidence directly relating hours of TV watched to the level of disengagement. And it makes sense. Just from a time perspective, more time with the TV means less time for other things. He also makes a distinction between those who tune in to watch a particular program and those who simply flip the TV on and see what is showing. The latter tend to be much less engaged.
In some ways, I wish Putnam had gone further with the TV argument, as there seems to be something deeper behind it. Perhaps something from psychology that would explain why TV helped sap the social initiative out of several generations of Americans. It also occurs to me to ask whether there isn't something else underlying the relationship. Perhaps an interaction with suburbanisation, where TV is the only form of entertainment in these socially arid communities.
Putnam does not assign all or even most of the blame to TV, though. It is simply an accomplice. Instead he points to broader generational shifts. Members of the "Greatest Generation" continued to be involved in groups and civic life as they aged, got TVs and moved to the suburbs. Younger generations, almost uniformly and across time, were less involved. The author speculates that the shared experiences of world war and depression gave this generation a solidarity that is missing in younger generations, though it is hard to land on any general conclusion.
The state of working-class communities in the US today is bleak. Death rates have increased and life expectancy has decreased due to opioid abuse, alcohol abuse, and lifestyle diseases like type II diabetes and obesity. Median wages have declined over the last two decades as manufacturing has moved overseas. The election of Donald Trump is only the latest sign of the decay of large swaths of America.
Even though the book is now 17 years old, it seems more relevant than ever. It doesn't seem far-fetched to link the disengagement from face-to-face civic and social life to today's deteriorated quality of life in many parts of America. The polarisation of America, fed by cable news and talk radio, is also easily linked to a world where we no longer meet face-to-face to discuss politics and perhaps learn to see things from the other side.
Putnam's concluding chapter on what can be done to counter the trends is short and vague. A new civic movement needs to take place, though how we should go about creating one is not at all clear. What seems clear to me is that because the decline in our social and civic lives has been generational, a turn in the trend would also need to be generational. Retiring baby-boomers and middle-aged Gen-Xers cannot be expected to change their ways. But perhaps the much-maligned millennials, of which I am one, can begin to turn things around.
Messy, by Tim Harford
Let me start by saying that Tim Harford is one of my favourite economics writers. His weekly Financial Times column is almost always worth reading. His two books explaining micro- and macroeconomics are filled with great anecdotes and metaphors that illustrate important economic ideas, and I regularly use them in my teaching. In Messy, Harford has gone broader, well beyond economics, weaving together ideas and anecdotes from history, business, science and politics to argue that nice, clean, tidy solutions are not always best. Messy, even chaotic, strategies can often be the ticket to success.
I enjoyed this book - it is a delightful page-turner. I am also sympathetic to the main argument. The world is a messy, complicated place, and nice neat solutions, while seemingly satisfying, can have unintended consequences. But, perhaps fittingly, the book itself sometimes felt a bit messy, with only faint connections between the chapters and subjects. I recognised some of the serious ideas that lie behind many of the stories Harford tells. But these ideas, upon inspection, are often distinct from each other, and I haven't quite decided whether pulling them together under the banner of "messy" is appropriate. Finally, I suspect that readers looking for advice on how to succeed at life may be advised to look elsewhere.
In the acknowledgement section, Harford states that he has been working on the book for five years. With recent political developments, he must have been glad he had not handed it in earlier. Brexit and Trump have shown numbers and statistics folk, like myself, that there is a lot we don’t understand about this complicated world.
Trump is discussed in one of the later chapters, in relation to the military concept of the OODA loop: essentially, playing things by ear in complicated, chaotic situations rather than trying, and inevitably failing, to follow through on elaborate pre-planned strategies. Rommel's success in North Africa during WWII can be explained by his ability to play things by ear and to create chaotic situations that gave him an upper hand over his opponents. The same basic premise can be used to explain Trump's victory over both a large field of highly qualified Republicans in the primaries and the Clinton machine in the general election.
Generals sometimes go on to become civilian leaders and politicians, through elections and otherwise. I have my doubts that Rommel's create-chaos style would be equally successful in such a role. Perhaps a leader who puts a premium on preparation, planning and stability, like "No-Drama" Obama or Dwight Eisenhower, would be preferable. We are, of course, seeing how chaos plays out in the Oval Office currently. It is probably too early to draw any final conclusion, but things don't look promising.
The book dovetails nicely with political events, but it is not a book about politics. One of the topics Harford takes on that especially resonated with me was urban planning. Neatly planned grids, with wide roads and excessive signage, give the appearance of an orderly traffic system. But the neatness encourages cars to speed up and not pay attention to pedestrians. Suddenly you have the dangerous, noisy and unpleasant urban traffic familiar to everyone living in a modern city. Here, a seemingly more chaotic, mixed-use street layout (common in, for example, the Netherlands), where cars face obstacles and must watch out for pedestrians, can be both safer and considerably more enticing. This is an important point, and sure enough something several cities have been experimenting with. Unfortunately, the traffic engineers still hold sway here, and they are a stubborn bunch.
Other topics brought up in Messy include the dangers of relying on simple metrics, how a reliance on automated technology can dull our senses and create dangerous situations, and how rules requiring neat cubicles can destroy workplace morale. All these stories and the ideas behind them were interesting and engaging, and I understand how they can broadly be described as instances where messy solutions outperform neat ones. But the mechanisms behind them are quite distinct from one another. Allowing an office worker to keep a bunch of knick-knacks at their desk gives them a sense of control, while automation can create a false sense of security with catastrophic results. Simple metrics are dangerous because they are easily gamed and can create incentives for unwanted behaviour. I sometimes wondered if the book would have worked better had Harford focused on one or two underlying mechanisms to tie it together. On the other hand, the smorgasbord of ideas under a single broad theme made for an engaging and fast-paced read. Perhaps Harford was just practicing what he preaches and went for a messy, but engaging, solution.
Like a lot of people, especially those beyond their twenties, I have spent some time wondering what it will be like to get old. I wonder how my career will have gone, if I will be healthy, and if I will be happy. I wonder if I will still have a contented family life and a good relationship with my kid.
Reading Triumphs of Experience gave some hints, or at least some case studies to aspire to and pitfalls to avoid. The book is written by the psychologist George E. Vaillant, who was the head of what is known as the Grant Study. A cohort of more than a hundred young men who attended Harvard University as undergraduates in the early 1940s was followed up to the present day. Over the course of the study they filled out countless surveys and were interviewed occasionally. The participants and their families gave information on their upbringing, their health, their careers, family life and a host of other variables.
Despite all the potential factors and various outcomes that could have been analysed, Vaillant was able to narrow down the book to a few key points. An overarching theme is that adults do not stop maturing, growing and changing at 25, 30 or any arbitrary age for that matter. People grow, mature and change throughout their lives. Participants who had been miserable, in bad marriages and with failing careers at 50 could turn it around and be happy and vigorous at 75.
Early on, the author points out a strong connection between a happy childhood with warm, supportive parents and reported contentment and well-being as an adult. This is a result the author seems particularly enthusiastic about, and he comes back to it several times over the course of the book. On the other hand, parenting did not appear to significantly affect the health and longevity of the participants, an odd contrast.
A relatively large portion of the participants lived past 90. The common factors the study found behind this longevity were not necessarily surprising. Smoking and alcohol abuse were unmistakably and dominantly linked to shorter lives. To the author's own surprise, however, the social support participants had in old age played little role. To my surprise, neither did regular exercise, though it did correlate with reported good health in all cohorts.
Alcohol is a recurring theme, partly due to the grants the study received to study the subject. The author finds that alcoholism was a major factor in almost all of life's ills. Nearly 50 percent of failed marriages involved at least one partner with alcohol abuse problems. Poor health, unsatisfying careers, and general dissatisfaction with life were all associated with alcohol abuse. And alcohol abuse was not predicted by income or the warmth of parents. In this particular question of nature versus nurture, the study comes down squarely on genes: participants with alcoholism in the family had a higher chance of becoming alcoholics themselves.
A chapter of the book is also devoted to "adaptive coping", the theory of how we unconsciously deal with life's challenges. These coping mechanisms, or "defenses", are split into immature defenses, like passive-aggressive behaviour and acting out, and mature defenses, like humour and altruism. Participants who mastered the latter, rather than falling back on the former, lived more content and fulfilling lives. This chapter carried the most psychological jargon, and that switched on my skepticism. The terms seemed vague and the characteristics hard to pin down. I don't dismiss the author's assertions out of hand. Personalities and human character are exceedingly complex, and some specialised terminology and frameworks for analysing behaviour are needed. Still, I didn't come away convinced that what the author was describing was not the result of some underlying factor, like genetic predisposition.
As an empirical researcher myself, it was on the topic of genetic predisposition and disentangling causation from correlation that I perhaps felt the most skepticism. Especially in the early chapter on the role of childhood, the author fully admits that it is not possible to identify what is causal and what is simply correlation due to underlying genes. Do participants with warm parents report happy adulthoods because happy childhoods lead to happy adults? Or is it rather that happy, warm parents tend to have happy, warm kids who grow up to be happy, warm adults? While acknowledging this problem at the outset, the author regularly gives the findings a causal interpretation throughout the book. To really disentangle cause and effect, a twin study of development would probably be needed. These exist, and I wish the author had found some to cite.
Despite these misgivings, I found the book a joy to read. Even without firmly established causation, you can pretty safely take away a few lessons from the Grant Study. Don't smoke, and stay away from drinking too much alcohol. If you have alcoholism in your family, stay away completely. Exercise and a good network of friends will probably make you happy and healthy, but won't necessarily make you live a long time. Being nice and compassionate to your kids is probably a good thing.
I got my first mobile phone after I graduated from college and moved to New York for my first job. I joined Facebook a few years later, while beginning graduate school in Seattle. I remember being skeptical of both. The 50-dollar-a-month price tag of a mobile phone seemed steep. Facebook was something 14-year-old girls did. Still, I relented. I got my first smartphone a few years into my PhD in Bergen - this time with enthusiasm for all the cool things it could do - not just calls and texts, but also maps, email and taking pictures. Amazing! And it was. I daydream of going back to a simpler flip-phone, but what would I do when I get lost?
In hindsight, I am lucky that I got most of my formal education in before the electronic distractions multiplied. When I sat in the library during college, often until 10 or 11 at night, I could sit at one of the tables with my books and work uninterrupted. I could, and would, walk over to the library's PCs to check email and the news, but only occasionally. Some of my classes were less than engaging, but I came and tried to follow along. There was no internet-connected smartphone to compete for my attention. No one had a laptop with them, for any class, ever.
My students have it tougher. Everyone has a smartphone, and it is a source of easy, painless entertainment; a quick shot of dopamine. Hard, concentrated work that is frustrating and tiring has a tough time competing. I see this especially in my course for first-year students. Students will have their smartphones out, checking their email and Facebook accounts. Many have laptops open, sometimes for taking notes, but just as often for browsing the web. I have students work through exercises in class. Perhaps half attempt them. The rest take out their phones or laptops and get their shot of distraction instead. That's easier.
I have plenty of empathy. I struggle with exactly the same problems. My job, both in preparing for my classes and in doing research, requires periods of uninterrupted concentration. I aim to keep learning new things throughout my career and life, and this necessarily means a good dose of frustration, all the time. The draw towards browsing the news or checking the latest apartment listings (my wife and I are on the market) can be overpowering.
My strategy for myself and for my students is simple. Get rid of distractions. Tie myself to the mast, and try to get them to tie themselves to the mast.
I just bought a piece of software for 60 dollars that does a strange thing. It makes my computer less useful, but it makes me more useful. The software is called Freedom - an appropriate name. It turns off my internet for pre-set intervals. No mail, no Twitter, no nytimes.com. Just the task at hand. I am using it at this moment, writing this post. I usually set the timer for 30 to 45 minutes. This is a good chunk of time to really get immersed in a task. After that, it is good to take a break, walk around, and check out a distraction as a reward. Take that dopamine hit, but only after 45 minutes of work.
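Freedom itself blocks the network at the system level, which a few lines of script can't replicate; but the timing discipline it enforces is easy to sketch. A hypothetical toy (the function and its defaults are my own, not Freedom's):

```python
import time

WORK_MINUTES = 45   # long enough to get immersed, short enough to avoid exhaustion
BREAK_MINUTES = 10

def focus_session(work_min: float = WORK_MINUTES,
                  break_min: float = BREAK_MINUTES) -> None:
    """Announce a distraction-free interval, wait it out, then announce the break."""
    print(f"Focus for {work_min:g} minutes: no mail, no Twitter, no nytimes.com.")
    time.sleep(work_min * 60)
    print(f"Done. Take {break_min:g} minutes: walk around, collect the dopamine hit.")

# focus_session()            # one full 45-minute block
# focus_session(0.05, 0.02)  # a 3-second demo run
```

The point is the ritual, not the code: commit to the interval up front, and defer the reward until the timer says so.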
This is the approach recommended by psychologists who study the subject, as well as by all-around high achievers. Barbara Oakley describes herself as someone who failed high school math and has always struggled with the subject. She is also a professor of engineering. Drawing on her own experience as well as the academic literature, she wrote a book, "A Mind for Numbers", on how to effectively learn math and science. Most of the advice applies more broadly to all types of learning. Forty-five minutes seems to be something of a magic number: long enough to get immersed in a task and dive deep, but short enough to avoid exhaustion. Taking a break after a period of concentration is important. As many people have experienced, moments of insight often come during a period of distraction.
Oakley has another important point: students learn best when they themselves are actively involved in the learning - something anyone who has taken on a mathematical subject knows. Listening to a lecture can only get you so far. But here too, distractions take their toll, as the students who never work on the in-class assignments show. Facing an unfamiliar problem can be daunting; much easier to pull up the smartphone and get the dopamine hit from Facebook. Some students are simply less motivated than others. But I imagine that 15 years ago, when I was working on my bachelor's degree, a student who bothered to show up to class had no good reason not to also attempt the in-class exercises.
Here is the punchline: I am skeptical of technology in the classroom. I am not skeptical of technology itself; I teach programming to my third-year students, and that is perhaps the most valuable part of the course. I am skeptical of the superficial use of technology in the classroom as an engagement trick.
The good intention of engaging students often creates a distraction instead. Learning happens through deep concentration and effort, not a quiz on a smartphone app. When a student has a notebook, a calculator, and a few pencils in front of them, they are ready to work. Lectures are an imperfect learning format in a lot of ways. But they are perfectly timed 45-minute chunks that can be used for concentrated learning. From my last semester of teaching, I have one major regret about my first-year course. I should have been clear on the first day: leave your laptop and smartphone in the bag. Class time is an opportunity to do concentrated work in an otherwise distracted world.
I remember the shock of starting college. I had been a good student in high school, and had even managed to earn about a year's worth of college credit before I was done. I went into my freshman year with the confidence (arrogance) that college would not be any huge obstacle. I was wrong: going from high school to college was a pretty big jump. This feeling is nearly universal among new college students. What I am seeing when I teach is that what differentiates students is how they react to this challenge.
The course in business economics I teach has a reputation for being difficult. There is no single concept that is particularly hard to grasp; there is just a lot of new material to absorb for students who may never have studied economics or accounting before. I also have the real-estate students, who do not necessarily come in with a strong motivation to learn accounting or economics.
But the course in business economics is required for the degree, and the degree is required to gain the license as a real-estate broker. That, I imagined, should be motivation enough. But seeing how some students respond to meeting this academic challenge has been disheartening.
Early on, I told students that they should expect to be challenged, and that feeling some frustration was good; it is an important part of learning. But I saw many students nearly always taking the short-term, easy way out. Of around 80 students officially enrolled in the course, about 70 showed up the first day; by the end, between 35 and 45 were showing up to each class period. I regularly go through problem assignments in class, giving students a few minutes to try them on their own. Many, if not most, simply take out their mobile phones, waiting for me to go through the answers.
This could have something to do with me and my teaching. This is my first time teaching the course, and there are things I could have done better. But some of my most experienced colleagues, who are known as wonderful teachers, tell of similar experiences. My teaching mentor, with some 30+ years of experience, tells a story of asking a student why he did not show up to class. The answer: it was too hard. Some students, upon meeting a challenge, seem to almost instinctively avoid, deflect, and delay.
I've never considered myself particularly intellectually gifted. In fact, as an elementary school student I was placed in special sessions for pupils with learning disabilities. But I get joy out of mastering new skills and knowledge. Importantly, I have a high tolerance for frustration.
But I have also experienced being overwhelmed and giving up. In my second year of college I took an advanced course in mathematical theory called abstract algebra. I still don't really know what the course was about, even though I somehow passed. About halfway through, I remember feeling so frustrated that instead of showing up to class, I went for a long walk.
But that was the exception. Even in that course, I showed up for all the other class periods. I also took other challenging math courses, and ended up with a degree (major) in math. This included a heavy theory course my first year, real analysis, which I remember spending three to four hours per day on.
The point is not to bemoan the quality of today's students. I have also taught excellent students in Norway, for example at NTNU, the science and technology university. Rather, the observation is that what seems to be holding a lot of students back is not necessarily a lack of intelligence or preparation, though these can certainly play a role, but an inability to take on challenges. I'm not really sure where this comes from. Nor do I know whether, at this point, I can do anything about it. If at 19 or 20 they haven't learned how to deal with feelings of frustration or how to take on challenges, some stern words and encouragement from me seem unlikely to turn things around.
The conclusion seems gloomy. And maybe it is. On the other hand, the difference for me between mastering the material in one difficult math course and essentially giving up on the other had nothing to do with the material (at least I don't think it did). Instead, it had to do with the way the course was set up and the teaching of the professors. In the course I mastered, there was a certain confidence that if I put in the work, and got help when I needed it, I could figure things out. By the middle of the semester in the other course, this confidence had largely evaporated. Maybe there is a lesson there.
I have finished my first semester of teaching at my new job. I started the job back in March, but luckily I got the first few months to prepare. My teaching load is a minimum of 140 hours of classroom teaching, equivalent to about three full-credit courses. My first semester, I had 150 hours: my entire teaching load plus some. It was bound to be a busy semester.
I have had two courses. One was the introductory course in what is called business economics, which is really a soft introduction to financial and cost accounting for first-year students. Two sections of this are taught; I had the section for students aiming to complete a bachelor's degree in real estate. For many, if not most, of these students, this course would be, perhaps by far, the biggest academic challenge they had met so far. How best to teach this group is something I am still working out.
The second course was a double-credit course in macroeconomics, for third-year students finishing up their degree and wanting to write a bachelor's thesis on a topic within macroeconomics. This course tends to get motivated, well-prepared students. It also had its challenges, but more in the way of finding a way to challenge the students without overwhelming them.
On top of the general challenge of teaching a couple of courses for the first time, these are courses well outside my recent research and interests. The last time I took a course in macroeconomics was nearly 10 years ago, in my first year of graduate school at the University of Washington. I have never taken exactly that type of business economics course. And then there is the fact that I would be teaching both courses in Norwegian, a language I speak fluently but have never had a single year of formal schooling in.
When people ask how it has gone, I tell them that there were no major disasters. That counts as a win.
I am starting to emerge from a hectic teaching semester. I have all my teaching in the fall: about 150 hours in the classroom. This is also the first time I have taught these courses: business economics and an intermediate double-credit macro course. The kicker is that I am teaching it all in Norwegian, a language that is literally my mother tongue, but one I don't have even a single year of formal schooling in. It has been exhausting at times, but I have never learned so much in such a short amount of time.
In my macro course, the bulk of the materials are in English. When preparing for lectures, I read through the materials, jotting down notes in Norwegian, translating the gist of each passage in my head. This sounds cumbersome, and often it feels that way. But I wonder whether this process of reading in English, translating in my head, and jotting down Norwegian notes also lets me comprehend, and maybe remember, the information in a much more active way.
Most people have experienced reading a paragraph absent-mindedly, only to realize they have no idea what they just read. Making notes, and the pressure of having to explain what you are reading, helps a great deal in avoiding this. But when I am reading Norwegian material for my course, I find it altogether easier to just write down some sentences verbatim, without giving them the same thought. The translation step forces me to actively process the material in a way I might not otherwise do.
Certainly, some linguists or psychologists have studied something like this. I would be interested in finding out if my hunch can be backed up.
My op-ed in Adresseavisen
Gunnar Okstad writes in a commentary on April 16 that new foreign cables will cause Norwegian electricity customers to "bleed." No, they will hardly do that. Most forecasts show that power prices in Norway and Scandinavia will stay low, with or without foreign cables, thanks to large amounts of new wind power and small-scale hydropower and increased energy efficiency in Scandinavia.
Foreign power cables are also an important market opportunity for Norway. Great Britain, which we are building cables to, is in the midst of a massive build-out of wind power, especially offshore. That is why it is important for them to be connected to Norway, with our flexible hydropower, which we can easily switch on and off, thereby storing energy in the form of water in mountain reservoirs.
But it is not as if we will only export power through these cables. When a lot of wind power is being produced and consumption is low in Great Britain, we will also buy power on the cheap. Since we can store the energy, we can then sell that power back to Great Britain when the price is high. In my research I have studied exactly this trade with Denmark: Norway "stores" up to 40 percent of the wind power produced in Denmark. We buy the power when it is cheap and sell it back when it is expensive. The effect on the average price here in Norway is modest: less than a one percent increase on average. That is hardly going to make anyone "bleed."
In any case, if prices rose a little because of more cables, so what? Who actually profits from this trade? In Norway, most power companies are owned by the state or the municipalities. So, simply put, it is you and me who profit. The profits from Trønderenergi and Statkraft go to municipal and state coffers and help pay for kindergartens, schools, and elder care. Artificially low prices would, on the other hand, largely lead to inefficient use of electricity. Better good schools than heating the outdoors.
Sometimes it seems like technology is not helping me get things done, but rather getting in the way. The internet provides endless opportunities for distraction and time-wasting, and smartphones keep you constantly connected. In the face of this, I am regularly battling with myself to stay focused and to be productive at work. This is the problem taken up by Cal Newport, a computer science professor and productivity blogger, in his book "Deep Work."
There was a lot to like in this book. First, I enjoyed the myth-busting of the always-connected creative worker in the open-plan office. Doing hard, creative work requires not just inspiration, but also periods of long, concentrated, uninterrupted work and learning. I can seek out ideas and inspiration on Twitter or blogs, but once that inspiration is in place, I need to get work done.
Newport makes the case that learning difficult tools and skills is becoming essential to thriving in the modern economy, while at the same time distractions make it ever harder to gain those skills. He gives an overview of the research on the subject in neuroscience and psychology, which seems to back up his main point: difficult, creative work requires undisrupted concentration.
In some ways, this seems plainly obvious to me. Of course distraction-free concentration is a necessary tool for doing difficult, creative work. However, just in the last few months, I have met otherwise thoughtful people essentially claiming otherwise.
The business school I work for will soon start building a new campus, and the architect who was presenting the plans was strongly pushing an open-landscape layout for all employees - including academic staff. No one else is using offices anymore, seemed to be her main argument. Opposition from the faculty and staff was, unsurprisingly, unified. Finally, something almost all academics can agree on.
I was also recently at a pedagogy workshop hosted by my school. The instructors were enthusiastic about using all sorts of online tools to interact with students, including Facebook. I voiced my skepticism: aren't all these tools also a distraction from the concentrated work that students really need in order to learn difficult new concepts and tools? The answer was essentially that students today are used to Facebook and the internet, and can work productively with distractions present. Somehow, I doubt evolution has done such quick work on the human brain.
So there does seem to be a need to make the case that undistracted concentration is still important; perhaps even more important in today's world. Newport then goes on to suggest a series of steps people can take to begin engaging in difficult, concentrated work. This is where the book is at both its strongest and, at times, its weakest.
Newport manages to give some solid advice based on both research and widespread experience. The advice to find a quiet place, free from distractions, to do concentrated work for several hours at a time seemed sensible. As did the suggestion to try going without social media for a month. That emails don't (usually) need to be answered right away also probably needed to be said.
On the other hand, some of his advice goes a bit overboard and seems based mainly on his own experience. He tells of how, when he went for runs, he would focus his mind on difficult problems from work. I would prefer giving my mind a break when I am exercising. What's more, research tends to show the importance of taking breaks and letting your subconscious have a go at the problem. After running himself through a productivity gauntlet for a year, he did admit that he was exhausted.
On the whole, however, I think this was a worthwhile read. It inspired me to try to establish a habit of deep work in the latter part of my workday. After about 3pm, my office tends to get quiet, and I switch from teaching prep and administrative work to a few hours of research. I unplug the internet (if I can) and try to stay focused. It's still a struggle, but worth the effort.
The title makes it sound like a book about gang warfare, not urban infrastructure and city planning. Nonetheless, Street Fight gets across the gist of this book. The author, Janette Sadik-Khan, was New York City's transportation chief in the latter part of Michael Bloomberg's administration. Given the way she transformed the city's streetscape during her tenure, and the knock-on effects New York has on other cities, she may be the most influential city planner since Robert Moses.
I enjoyed reading about the city planning aspects of New York's transformation, but especially about the messy politics involved. The main message of the book is that even thoroughly effective and beneficial transport policies will evoke plenty of opposition. Effective cities need more than good ideas; they need politicians and technocrats willing to fight for them.
I lived in New York City from 2004 to 2006. By then Michael Bloomberg had banned smoking in all the city's bars and cafes. This, too, led to loud cries of opposition, with many in the nightlife industry claiming it would spell disaster for their businesses. Of course, this was ridiculous, and if anything the law led to more business, as the vast majority of New Yorkers and visitors who did not smoke could more easily enjoy a night out without breathing in lungfuls of fumes.
The pattern repeated itself when Sadik-Khan started to transform New York's streets. In a crowded, dense city like New York, where the large majority of people get around without a car, it makes perfect sense to devote more of the streetscape to walking, biking, and even sitting. Bloomberg's transportation department, headed by Sadik-Khan, started an extensive program of installing a city-wide protected bicycle network and creating extended pedestrian plazas, most notably in Times Square. Some bus routes got their own lanes, and car traffic was calmed and rationalized.
Some neighborhoods saw the improvements these changes would bring. But despite the extensive community outreach that the transportation department rolled out, many groups cried bloody murder. Taxi drivers, rank-and-file police, neighborhood associations, and, of course, opposition politicians all took their shots. Bike lanes were dangerous for pedestrians and would ruin home values. The new pedestrian plazas would drag Times Square back to its seedy past. Traffic would come to a standstill. These criticisms were all as ridiculous as the club owners claiming that the smoking ban would kill their businesses. But still, they made the headlines.
The stroke of genius from Sadik-Khan and her department was realizing that New Yorkers would believe their own eyes. Amazingly, many changes were implemented over the course of days and weeks, rather than years. Paint and planters were used as quick, temporary solutions. The thinking went: if a temporary solution was rolled out fast, and people could see with their own eyes the improvement it brought, opposition would quickly become muted. And largely, that is exactly what happened.
While Sadik-Khan undoubtedly made huge strides in making New York City more livable, I sometimes felt she was not ambitious enough. Several times she emphasizes how her department made an effort to preserve on-street parking. Yet she herself argues that on-street parking does little to help shops and businesses. In a city as dense as New York, with such expensive real estate, giving so much space to parked cars seems a colossal waste. Somehow, I suspect Sadik-Khan would agree. But I suppose it is easier for an academic in Norway to make such pronouncements. Politicians and city planners sometimes need to compromise.
My piece in Sysla Grønn:
Economics trumps ideology in views on renewable energy in the US.
If you follow American politics, you might think there is nothing the two political parties can agree on, and this has extended to energy policy as well.
The Democrats have supported renewable energy ever since Jimmy Carter installed solar panels on the roof of the White House.
On the other side, Republican policy has been marked by a deep-seated skepticism toward climate change and an energy agenda that can be summed up as "Drill, Baby, Drill!"
But in a political deal that got little attention, just a couple of weeks after last year's climate summit in Paris, Democrats and Republicans agreed to extend tax credits for those who invest in solar and wind power, as part of a larger budget compromise.
From disagreement to unanimous support
Renewable energy is going from being a partisan issue in the world's largest economy to something everyone supports.
As part of the economic stimulus package introduced in response to the 2008 financial crisis, investors could claim 30 percent of the investment costs of solar power as a tax credit.
But the credit was set to expire at the end of 2016, and given the parties' general unwillingness to cooperate, few believed it would be possible to extend the investment support for renewable energy.
The parties in Congress, however, surprised many by agreeing on a budget deal, and as part of it the tax credit will now be extended through 2019 and then gradually reduced until 2022.
Wind power producers have received a tax credit based on production, and this credit has repeatedly expired and then been temporarily renewed. This has led to a boom-and-bust investment pattern in the market.
Now wind power investors get a subsidy that will gradually taper off until 2020.
An extra 100 GW
For investors in new wind and solar power, the budget deal and the extension of the tax credits was an important, and unexpected, victory.
GTM Research, a consulting firm, estimates that it will lead to an extra 100 gigawatts of solar power investment in the US over the next few years, equivalent to the output of several dozen large nuclear power plants. Roughly the same amount of extra wind capacity is expected as well.
The Democrats have long supported renewable energy and the fight against climate change as part of their political platform. But the broad support from Republicans is surprising.
Many in the party, including nearly all of the current presidential candidates, have expressed skepticism about the need to fight climate change.
Admittedly, the Republicans could point to the lifting of the ban on exports of petroleum products as a victory for their side. But this ban, which had its origins in the energy crises of the 1970s and 80s, was outdated and had few strong supporters. Most US petroleum production will in any case go to supplying the enormous domestic demand.
There is, however, reason to believe that support for solar and wind power has begun to become one of the few issues in American politics that can be described as bipartisan. States with a lot of wind power, like Iowa and Texas, and solar power, like Arizona, Nevada, and North Carolina, are among the most politically conservative in the US.
Renewable energy has become a large and influential industry in these states, with many relatively well-paid jobs.
Political groups on the right that support investment in renewable energy have even emerged.
A group called the Green Tea Party was formed to fight restrictions on solar power that the utilities in Arizona and Nevada wanted to introduce.
The opportunity to generate one's own electricity in competition with a state-regulated monopoly was seen as fitting nicely with the right's ideology of free markets and individualism.
Economics outweighs ideology
But in the end, economics likely outweighs ideology. Wind and solar power have grown from niche technologies requiring large subsidies in a few small markets into major industries with broad markets and the potential for self-reinforcing technological and industrial development.
These markets and industries have given rise to direct and indirect lobbying that actively shapes policy. The result is a self-reinforcing cycle: larger and stronger industries push policy, which in turn fosters even stronger industries.
A 2015 report from MIT describes the dramatic change that has already taken place in the solar and wind power industries:
- Wind power costs fell by an average of 5% per year over the last 40 years
- Solar power costs, mainly photovoltaic panels, have fallen by an average of 10% per year
- Since 1976, prices of photovoltaic panels have fallen by 99%
- Wind power is now competitive in most places in the US
- Solar power is competitive in large parts of the sunny South and Southwest of the US
- Over the last thirty years, solar and wind power capacity has doubled every three years
- Over the period 2000-2014, the cost of avoiding carbon emissions by switching from coal to solar fell by 85%
The report also notes that if as much wind and solar power is installed in the future as all countries pledged ahead of the Paris meeting, solar capacity could increase fivefold and wind capacity threefold by 2030.
Further, the report finds that since the prices of solar and wind power tend to fall in step with installed capacity, a conservative and reasonable estimate is that the costs of wind and solar power could fall by a further 30 and 50 percent, respectively.
In that case, the cost of switching from coal to solar and wind would become negative.
That is, you would have to pay the owners of coal plants to keep producing electricity rather than shut the plants down and switch to wind or solar power.
In his 1992 presidential campaign, Bill Clinton used the phrase "It's the economy, stupid!" to describe what voters cared about.
Something similar could be said about the future of renewable energy in the world's largest economy. Neither the Democrats nor the Republicans can afford to ignore or stand in the way of a rapidly growing industry in which everyone from homeowners and farmers to the country's largest companies has a strong stake.
In a polarized political landscape, renewable energy may become one of the few issues that Democrats and Republicans agree on.
I was reminded recently of my favorite intuition for understanding what is going on with the Metropolis algorithm, which lies behind most versions of MCMC used in modern Bayesian analysis. The intuition is from John K. Kruschke in the book "Doing Bayesian Data Analysis."
Suppose an elected politician lives on a long chain of islands. He is constantly traveling from island to island, wanting to stay in the public eye. At the end of a grueling day of photo opportunities and fundraising, he has to decide whether to (i) stay on the current island, (ii) move to the adjacent island to the west, or (iii) move to the adjacent island to the east. His goal is to visit all the islands proportionally to their relative population, so that he spends the most time on the most populated islands, and proportionally less time on the less populated islands. Unfortunately, he holds his office despite having no idea what the total population of the island chain is, and he doesn't even know exactly how many islands there are! His entourage of advisers is capable of some minimal information gathering, however. When they are not busy fundraising, they can ask the mayor of the island they are on how many people are on the island. And, when the politician proposes to visit an adjacent island, they can ask the mayor of that adjacent island how many people are on that island.
The politician has a simple heuristic for deciding whether to travel to the proposed island. First, he flips a (fair) coin to decide whether to propose the adjacent island to the east or the adjacent island to the west. If the proposed island has a larger population than the current island, then he definitely goes to the proposed island. On the other hand, if the proposed island has a smaller population than the current island, then he goes to the proposed island only probabilistically, to the extent that the proposed island has a population as big as the current island. If the population of the proposed island is only half as big as that of the current island, the probability of going there is only 50%.
In more detail, denote the population of the proposed island as P_proposed, and the population of the current island as P_current. Then he moves to the less populated island with probability p_move = P_proposed / P_current. The politician does this by spinning a fair spinner marked on its circumference with uniform values from zero to one. If the pointed-to value is between zero and p_move, then he moves.
What’s amazing about this heuristic is that it works: In the long run, the probability that the politician is on any one of the islands exactly matches the relative population of the island!
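The heuristic is easy to check in a quick simulation. Here is a minimal sketch with a made-up chain of seven islands whose relative populations are simply 1 through 7 (summing to 28); in the long run, the visit frequencies should converge to each island's share of the total:

```python
import random

random.seed(42)  # for reproducibility

populations = [1, 2, 3, 4, 5, 6, 7]  # made-up relative populations

def metropolis_walk(populations, steps):
    """Simulate the island-hopping politician and count visits per island."""
    visits = [0] * len(populations)
    current = 0
    for _ in range(steps):
        visits[current] += 1
        # Flip a fair coin: propose the island to the west (-1) or east (+1).
        proposal = current + random.choice([-1, 1])
        if 0 <= proposal < len(populations):
            # Move with probability min(1, P_proposed / P_current).
            # A proposal off the end of the chain (population zero) is
            # always rejected, so the politician stays put.
            p_move = populations[proposal] / populations[current]
            if random.random() < p_move:
                current = proposal
    return visits

visits = metropolis_walk(populations, 200_000)
total = sum(visits)
for pop, v in zip(populations, visits):
    print(f"population {pop}: target {pop / 28:.3f}, observed {v / total:.3f}")
```

The same mechanism, with island populations replaced by an unnormalized posterior density, is what MCMC samplers exploit: you never need the total population (the normalizing constant), only the ratio between two points.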
Wind and solar are quickly gaining market share in the US electricity system. The picture below shows all existing solar plants and wind farms, excluding most small residential solar.
Long before solar and wind, the US was a huge hydropower country. I had expected hydropower to be concentrated in the West, but I was surprised to see how much there is in the rest of the country as well.
The text of my letter published in The Economist:
Schumpeter (October 10th) is correct in thinking that Norway will need a period of adjustment in the face of falling oil prices and diminishing production. But his disparagement of firms such as Statoil and Telenor, where the state has an ownership stake, is misguided. These firms are generally well run and have almost full independence, with little to no interference from politicians. You yourself have approvingly referred to Statoil as a “leading global company” (“The rich cousin”, February 2nd 2013) and as “a match for almost anyone” (“Big Oil’s bigger brothers”, October 29th 2011).
Schumpeter encourages Norway to “rediscover its Viking spirit”. Luckily, with partially state-owned firms in the vanguard, we are well on our way. You recently reported that Telenor “has rediscovered the Viking spirit of adventure, launching into foreign markets ranging from Bulgaria to Bangladesh” (“Mobile mania”, January 24th).
A lot of excitement surrounds the developments in solar and wind power, which have become competitive with traditional energy sources like coal and gas in many areas.
In the excitement, it may be easy to forget the "other" renewable: hydropower, perhaps the oldest form of energy and electricity generation.
Recently I dove into data from the Energy Information Administration (raw data here) as part of a research project with some collaborators; here are a few charts and thoughts.
In the US as a whole, hydro accounts for only about 7 percent of total generation, but as the figure below shows, some states have much more than others. Washington State, Oregon, and California all have lots of hydropower, as does New York State.
Hydro power is great: zero emissions, and if you have a reservoir, it is one of the most flexible forms of energy generation. You can "store" energy in the form of water in a reservoir, and you can adjust production to match conditions with little ramping time (the time it takes to increase production) and little to no energy loss.
But as the chart below shows, investment in new hydro capacity has been pretty much dead in the US for the last 15 years. Many of the potential sites for hydro power have already been built out, while building new plants runs into environmental restrictions. Competition from cheap gas has probably also played a substantial role.
But with more solar and wind coming online, the value of flexible hydro power is increasing. You can also see this in the data in the form of "uprates" or upgrades to existing hydro plants. The figure below shows planned and completed capacity additions to existing plants.
Looking more closely at the data, an interesting observation is that almost half of this new uprated capacity comes from pumped storage - which can use electricity to pump water uphill so that it can be released again when prices are higher. The power system is adjusting to new renewables.
Doing Bayesian Data Analysis is sometimes referred to as “the puppy book.” This is a direct reference to the pictures of the adorable puppies on the front cover. The puppies are, in turn, a strong signal that this is not a book meant for the hardcore mathematician and statistician looking for a rigorous guide to Bayesian statistical theory.
This is a book for your run-of-the-mill applied researcher who has heard about Bayesian analysis and statistics and would like to learn more about it and, importantly, how to do it. The book serves this purpose well. It provides the basic formulas and concepts of Bayesian analysis, and then plenty of detailed explanations, examples and code showing how to do a range of analyses.
As far as mathematics texts go, the book is chatty. Among technical types, this is not always a compliment. But I liked the long discussions that explained - in words - the logic of the models and results. For me, being able to explain mathematics intuitively and in plain English is an important part of feeling that I understand it. In that way, reading through this book helped clarify a lot of concepts that I had been aware of before, but that had felt somewhat vague.
The “chatty” style is especially important for Bayesian analysis because it reveals one of the main strengths of Bayesian methods - their intuitive appeal. Bayesian methods approach probability and uncertainty in a way that matches how many people naturally think of those concepts. In the frequentist tradition, the goal is to estimate some theoretical “true” parameter value, where uncertainty is represented as the randomness that comes from repeated draws of a sample of data. This theoretical construct is pretty abstract. In Bayesian analysis, the parameters are themselves uncertain, and the goal is to estimate an appropriate distribution for those parameters. Straightforward.
Compared to other texts and tutorials on Bayesian analysis that I have read, Doing Bayes unapologetically focuses on teaching modern Markov Chain Monte Carlo (MCMC) simulation tools rather than spending much time on explicit analytic solutions or grid approximations. While there may be some trade-off in mathematical depth, the book is clearly aimed at those wishing to do Bayesian analysis, rather than at the minutiae of the underlying theory. The flexibility and relative ease-of-use of simulation methods make them a natural focus.
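The basic MCMC idea is simple enough to sketch in a few lines. The following toy random-walk Metropolis sampler is my own illustration, not the book's code: it draws from the posterior of a coin's bias after 14 heads in 20 flips, under a uniform prior.

```python
import math
import random

def log_posterior(theta, heads, flips):
    """Log posterior of coin bias theta under a uniform (Beta(1,1)) prior."""
    if not 0 < theta < 1:
        return -math.inf  # zero posterior mass outside (0, 1)
    return heads * math.log(theta) + (flips - heads) * math.log(1 - theta)

def metropolis(heads, flips, n_samples=20_000, step=0.1, seed=42):
    """Random-walk Metropolis: propose a nearby theta, accept with prob min(1, ratio)."""
    rng = random.Random(seed)
    theta, samples = 0.5, []
    for _ in range(n_samples):
        proposal = theta + rng.gauss(0, step)
        # Accept when the log posterior ratio beats a log-uniform draw.
        if math.log(rng.random()) < (log_posterior(proposal, heads, flips)
                                     - log_posterior(theta, heads, flips)):
            theta = proposal
        samples.append(theta)
    return samples[n_samples // 2:]  # discard the first half as burn-in

draws = metropolis(heads=14, flips=20)
posterior_mean = sum(draws) / len(draws)
print(posterior_mean)
```

The chain's average lands near the analytic posterior mean of (14+1)/(20+2) ≈ 0.68 - the same answer a grid approximation or the exact Beta posterior would give, which is exactly why simulation is such an appealing general-purpose tool.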
Kruschke uses R and the Bayesian simulation language JAGS for most of the examples, but also provides some instruction for using the newer STAN. By a lot of accounts, STAN is on the cutting edge, and in some ways is also easier to use than JAGS. Therefore, in the examples I tried to work out, I focused on using STAN - which has a solid and easy-to-follow reference manual. STAN is new and still under development, so I understand why the author chose the more stable JAGS, but I would imagine that a second edition will move in the direction of STAN, and I would recommend that anyone interested in learning Bayesian simulation software start with it.
Kruschke is also not shy about boasting of the advantages of Bayesian analysis over typical frequentist hypothesis testing methods. A lot of this was good - and the book includes some of the best explanations and motivations for using Bayesian analysis that I have read.
Yet Kruschke also seems a bit too quick to overlook the advantages of approximation methods based on a frequentist viewpoint. Andrew Gelman - also a strong proponent of Bayesian methods - recommends, for example, using quick and efficient approximation methods for hierarchical models to zero in on an appropriate specification before putting together a fully Bayesian model - which can be more time-consuming to both set up and execute.
Moreover, while plenty of examples exist of a Bayesian analysis giving different and potentially better answers than estimation by, say, maximum likelihood, in most situations the different methods will give similar answers. Bayesian analysis has some strong advantages over traditional frequentist tools, but the best advice still seems to be: use the methods that best solve the problem.
On Friday I was at a small workshop at the University of Aberdeen on petroleum economics. Once again I heard someone state - as if it were plainly obvious - that the fall in oil prices would hurt surging renewables. It won’t. If anything, it will help.
First of all, oil and renewables generally do not compete, and where they do, renewables are so dominant that even a price fall of 50% is irrelevant. Long ago, most parts of the world stopped producing electricity by burning oil. After the energy crises of the late 1970s and early 1980s, using oil to create electricity was seen as too expensive and too risky - especially when abundant gas and coal could be found closer to home. Oil has essentially become a one-trick pony - transportation.
A few places still use oil to generate electricity. Hawaii has had little choice but to rely on oil to generate a big part of its electricity, that is, until solar power became cheap. Hawaii now aims to have 40 percent renewables by 2030 and 100 percent renewables by 2050. This is not purely out of concern for the environment - they will save huge amounts by not relying on imported oil - even at 50 dollars a barrel.
A recent corporate event here in Norway drives home why the drop in oil prices may even help renewables. Statoil - the partly state-owned oil giant - has been going through a bit of a crisis with the fall in the oil price. Investments in expensive Canadian oil sands, arctic drilling, and other challenging deep-water regions have suddenly become huge money pits (or better yet, money wells). One significant, but underreported change in the corporate structure was the establishment of a new renewables division that would report directly to the CEO.
Statoil isn’t new to renewables, but its renewables unit was, quite literally, part of the marketing division. Internally it was seen as a way to market Statoil as a clean company. The fact was, when the price of oil was over 100 dollars a barrel, the most profitable way for a company like Statoil to deploy its capital was towards finding more oil. Renewables were toys and marketing gimmicks.
However, with oil prices at close to half of their recent highs, that is not nearly as true as it was before. Suddenly, the profitability of deploying capital towards finding more oil is not much more than deploying it to build offshore wind turbines. And this, apparently, is exactly what Statoil intends to do. Already they are involved in several major offshore wind projects in the UK.
Now it looks like Statoil will pick up the pace of such investments. This will likely continue to be centered on the UK, where subsidies are the highest. But the costs of offshore wind power have been coming down quickly, with cost declines of more than 30 percent reported between first- and second-generation projects.
Statoil is not the only company with capital to deploy. Total is perhaps the oil company that has taken the threats and opportunities of renewable energy most seriously. It acquired the solar panel maker and developer SunPower - one of the largest and most efficient in the industry - as early as 2011. The fall in oil prices only encourages oil companies like Total to deploy their capital outside of their core oil-searching business.
Petro-states in the Middle East are also in the process of shifting where they deploy their capital. In Saudi Arabia, nearly 25 percent of oil production is gobbled up by domestic consumption - much of it in inefficient and expensive oil-fired power generators. No wonder the government has set lofty goals for getting a large share of its electricity from solar power.
For the last few decades, oil has been good business. Buyers - especially fast growing industrializing nations like China - seemed to be eager to consume ever more, even at sky high prices. Consumers also appeared to have few alternatives. That is changing - technology on both the production and consuming side has made oil much less valuable. Oil companies - sitting on huge amounts of physical, financial and human capital - are beginning to look for places to profitably deploy this capital outside their core business. The maturity of the renewables industry has perhaps hit a major milestone when even the oil companies are becoming eager investors.
First, an admission: I am an econ graduate school drop-out. I started in a PhD program in Economics at the University of Washington. But after the first year of heavy coursework that seemed to drain me of all the joy of learning, I decided to drop out. I hung on for a year more - getting a masters degree as a consolation prize, then moving to Norway (so dramatic!) and starting a PhD in Management Science - basically the applied mathematics division of a business school. I ended up enjoying this, finished a PhD, and now I am for the most part a happy post-doc doing fun things.
The most relevant part of this story is in the third sentence: econ graduate school wasn't fun - nor was it meant to be. Lectures, homework, and tests focused on memorizing a lot of information and mathematical derivations and then regurgitating them. Understanding, learning, and a sense of creation came a distant second, if at all.
In hindsight, what surprises me most about my experience in graduate school well into the age of the computer (this was 2006 - 2008) was that we barely touched one. In a few of the econometrics courses, we toyed around with some datasets. But theory courses - which made up most of the coursework - were strictly pen and paper.
This is where the online course Quantitative Economics - quant-econ.net - by John Stachurski comes in (officially, the course is listed as co-created by the economist big-shot Thomas Sargent, but Stachurski seems to have done most of the work). A practical, readable, hands-on guide to doing modern numerical simulations in Economics. It is the course I wish I had had as a graduate student.
The course starts out with a choice: Python or Julia - two modern, open-source programming languages, the former more general and better developed, the latter explicitly designed for scientific computing but still maturing. A great start, as most economists seem to be way behind on the prevailing open-source computation trends, with most relying on expensive and limited commercial software like MATLAB.
I plumped for the Python version of the lectures since I already knew the language and like its flexibility. The lectures begin with a solid introduction to Python. I have worked through a few other tutorials and books on Python, but I actually found this to be one of the better ones. I learned a lot of practical tricks, like how to manually read in messy data from a file.
After the first few chapters provide a solid grounding in general computation and programming in Python, there come two sections of applications. The first, introductory section covers a fairly broad spectrum of topics: Markov processes, dynamic programming, state-space models and the Kalman filter. Some of these I knew ahead of time; of others I had only a vague sense of what was going on. But working through the examples and problems was a great way to build intuition for how they work. The magic of learning by programming is that it is relatively easy to take a worked example and apply it with some of your own tweaks and applications. Compared to a pen-and-paper approach, the distance between doing a problem and being on the research frontier is small.
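To give a flavor of how small the step is from a worked example to your own experiment, here is a quick sketch (my own, not taken from the course) of the first of those topics: iterating a two-state Markov chain until its distribution settles down.

```python
# Two-state employment chain: state 0 = employed, state 1 = unemployed.
# Entry P[i][j] is the probability of moving from state i to state j;
# the transition probabilities are illustrative, not from the course.
P = [[0.95, 0.05],
     [0.50, 0.50]]

def update(dist, P):
    """One step of the chain: multiply the distribution (row vector) by P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0]  # start with everyone employed
for _ in range(100):
    dist = update(dist, P)

# The chain converges to its stationary distribution regardless of the start.
print([round(p, 3) for p in dist])  # prints [0.909, 0.091]
```

Tweaking the transition matrix and watching the long-run distribution respond is exactly the kind of hands-on experimentation the lectures encourage.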
I spent quite a lot of time working through the first two sections of the book, but I skipped the third section: advanced applications. My main interests are as a statistician and empiricist, and diving into these more complex and time-consuming models is probably mainly for those interested in writing papers directly using those methods. But that is the beauty of self-study - you can decide for yourself when to quit.
In the end, dabbling a few hours every day for a few months on quant-econ.net seemed to give a better starting point for actually doing research in quantitative economics than any single course I took in graduate school. Even for non-economists, the course is a good introduction to computational tools used in many fields. If I have one complaint, it is that little effort is made to give intuition behind some of the economic models that are presented. Since this course is focused on computation, that may not be that surprising. But this course has not lessened my suspicion of mathematically elegant economic theories with little real-world intuition.
From Bergens Tidende, April 27
In January next year, Bergen will introduce congestion pricing - or time-differentiated road tolls, as it is formally called. Both private individuals and businesses have complained that this is going to hurt. But that is the whole point.
Changed behavior - businesses choosing to relocate, or individuals deciding to switch from car to bus even when it is inconvenient - is exactly why congestion pricing is so effective. But successful policy must, to some degree, be fair. The municipality, the county and the state must now make an extra effort to ensure that good alternatives exist for everyone in Bergen.
If experiences from other cities are any guide, congestion pricing will have a significant effect on both the queues and the total volume of car traffic through the city center. London introduced a form of congestion pricing in 2003 and has since raised the charge to roughly 130 kroner for driving into the city core during rush hour. This has led to 30 percent less traffic in the core. Stockholm has a much smaller charge, but also saw a drop of nearly 30 percent in traffic. Congestion pricing works.
But if you walk around the streets of London, you quickly notice that the city is still dominated by cars. In some of the city's (and the world's) most expensive areas, street parking abounds. The parked cars are almost always expensive luxury brands; ordinary people mostly cannot afford to keep a car in London. Pedestrian streets are hard to find, and the bike lanes must be among the world's narrowest.
Stockholm, by contrast, has focused more on converting street space for pedestrians and cyclists. Stockholm is often ranked among the world's best cities for walking. The city has also invested heavily in its train and metro network, and even light rail has been proposed.
When Bergen introduces congestion pricing, it will free up capacity on the roads. This should be followed by a fair distribution of that freed-up capacity. Buses should get dedicated lanes on more of the main roads, and more of central Bergen's narrow, charming streets should be reserved for pedestrians and cyclists. And, yes, Bybanen should be extended, above ground, to Åsane and Fyllingsdalen. That is something everyone can enjoy, not just those who can afford to pay the charge.
As part of my recent reading binge on tech-related books, I just finished reading The Second Machine Age, written by two economists: Erik Brynjolfsson and Andrew McAfee. I started reading this immediately after finishing The Innovators - and this book touches on a lot of the same themes of technological advance. They even land on similar conclusions: the future belongs to those that can harness the power of the computer. But unlike the optimism of The Innovators, the authors of The Second Machine Age worry more about the economic effects of seemingly accelerating technological advance on economic inequality and even societal stability. It is in addressing some of these worries that I think this otherwise thoughtful book comes up a bit short.
The authors of the book are clearly in wonder and awe of emerging technology. They describe how computers are increasingly becoming competent at activities thought - until just a few years ago - to be the sole domain of humans. They describe the wonders of Google's self-driving cars and how IBM's Watson can parse the subtleties of the game show Jeopardy. It is here - in describing these new technologies and their potential - that the authors are at their most engaging.
While laying out a strong case for technological and, in turn, productivity optimism, the authors have a more uncertain and pessimistic view of the effects on the wider economy and especially inequality. Computers are increasingly able to do the jobs of typically middle-class professionals like accountants, administrators and even financial analysts. More so, digitization can create a winner-takes-all economy: everyone downloading the latest album from the superstar rather than going out to the local jazz-club.
The authors attempt to suggest policies and solutions to the problems that technology could bring, but it is here that I find the book most lacking. It isn’t that their policy prescriptions are bad, but rather that they are conventional and could have been taken from any introductory economics course. The authors seem to be arguing that technology will revolutionize the economy, yet that this revolution should have no effect on economic policy. That seems wrong to me.
For example, the authors advocate the negative income tax, effectively a way of subsidising low-wage work. Economists from both the left and the right think it is a good idea and Richard Nixon even tried to get one passed. But the solution has languished politically for decades. A more straightforward suggestion might be to increase the minimum wage. The standard economic argument against this is that it will lead to higher unemployment. But in the world with soaring productivity due to technological progress, the negative effect that a higher minimum wage would have on employment would seem modest. Countries like Norway, Denmark and Germany - all with high minimum wages and low unemployment - seem to support this view.
I think the book could also have speculated more on dynamics and trends that go hand-in-hand with increased technological progress. An important one is urbanisation. The industrial revolution led to an initial large-scale urbanisation. The car and post-war manufacturing boom led to suburbanisation and spreading cities, often with hollow cores.
The information age seems to be leading to a new round of dense urbanisation. This trend has many implications for the economy and jobs. City-dwellers tend to be much larger consumers of services - going out for coffee, getting a taxi, ordering take-out and sending out laundry. Many of these services are hard to farm out to machines. Coffee vending machines have been around for decades - but there are more baristas and more coffee shops now than any time before.
An important part of dealing with the negative effects of technological change is then to help promote the health of cities. Policies that encourage building and density in cities would be an important contribution. Height regulations and overly burdensome preservation regimes often have the effect of squeezing out many potential urban-dwellers by leading to skyrocketing prices. London and Paris are two of the worst sinners here. Policies to make urban living more attractive could also help - like reducing pollution and car traffic. Too many cities are places to drive in and out of, rather than to live in.
I recently finished reading The Innovators, by Walter Isaacson - and I really liked it. The book can be described as a series of biographical sketches of the dozens of figures who share responsibility for bringing about the digital age - from Charles Babbage and Ada Lovelace and their mechanical computer in the 1800s to the founders of Google. Isaacson does a good job of teasing out the strong personalities of these innovators, but he also manages to use a few themes to draw the book together. First and foremost, Isaacson stresses the importance of collaboration in essentially all important innovations - dispelling the myth of the lone genius.
Almost all the biographical portraits the author presents come in pairs or triplets. People of differing temperaments, backgrounds, and skill-sets manage to find each other and create something important. Jobs had Woz, Gates had Allen, and Moore had Noyce. Unsurprisingly, many of those collaborations were short-lived, as egos and differences caused splits. The author does mention a couple of lone inventors, but they only serve to reinforce the story. They may have had a great idea, and even a good start on an innovation - but without the right surroundings and collaborators, it often didn’t lead to anything lasting.
In the final chapter, Isaacson also takes the chance to speculate on how technology and computers will continue to evolve, and gives new meaning to his collaboration theme. He begins with the fears that computers will become so powerful and intelligent that they will become a threat to humankind - like HAL in 2001: A Space Odyssey or Skynet of The Terminator films. Isaacson rightfully downplays this idea. He gives a nice anecdote about chess-playing computers. IBM's Deep Blue beat the best human player in the world as early as 1997 - a digital generation ago. Yet he notes that in tournaments where players are free to enter in collaboration with a computer, it is not the best computer or the best human, nor even the combination of both, that wins. Instead it is often a combination of computers and people who are savvy about using them.
The point he leaves the reader with is that those best able to make use of computers will be best placed to succeed in a world with ever more computing power. This has important implications for a lot of fields, but especially education. I was fortunate enough to have several programming courses available to me in high school - 15 years ago. But when I started college, there was no requirement to take a computing course of any kind - and I studied math! Nor was there any requirement to take a computer science course when I studied Economics and Management Science at the graduate level, even though both fields increasingly rely on programming skills. As The Innovators compellingly lays out, computing and programming are becoming essential skills in the modern world - and the education system is lagging badly.
London is one of the world's great cities, built up, piecemeal, through two thousand years of history. London, in theory, should be a great city for walking and wandering: block to block, neighborhood to neighborhood, to and from work. But after three weeks here, I feel a palpable frustration that London is such a bad city for walking and biking. Cars - driving fast and paying little heed to pedestrians - dominate the roads. In a city with some of the most expensive real estate anywhere on earth, parking takes up huge swathes of space. Masks - to filter out the dangerous diesel fumes - are a common sight. Things may be getting better, but the pace is slow, and the hurdles large.
Even short walks in London entail brief periods of fear and tension. Taxi drivers - who I assume must be recruited from a population of psychopathic inmates - speed down London's narrow roads, swinging into side roads with barely a glance. They tend to accelerate towards crossing pedestrians, rather than slowing down. Other drivers appear to be imitating their behavior.
The street layout doesn't help. Even on heavily traveled streets, clearly marked crosswalks - showing that pedestrians have the right-of-way - are rare. The rule for most streets seems to be to cross as long as you don't get in the way of a car. Well-intentioned markings reminding foreigners to “look left” or “look right” reinforce the message that cars have the right of way. Pedestrians beware.
Pedestrian streets - the hallmark and highlight of many European capitals - are conspicuously missing in London. Most of the street space on Oxford Street - London's main shopping thoroughfare - is reserved for fast-moving motor traffic. If the store-owners had any sense, they would be campaigning furiously to turn over the entire street to pedestrians and increase the flow of customers.
Dense, well-functioning city centres are built around pedestrians. Cars, if present, are guests. London has it backwards.
I have biked in quite a few cities with varying quality of bicycle infrastructure. But I am thoroughly impressed by the brave Londoners who use bikes to get around the city, despite the horrid conditions. In some major junctions, bicycles make up nearly a quarter of the rush-hour vehicles. This despite roads and intersections that, until recently, were designed with no thought towards accommodating bicycles. London, with its flat terrain and mild weather, could easily rival Copenhagen or Amsterdam as a biking city. If only it felt safe to jump on a bike and pedal to your destination.
Some things are getting better - especially for bicyclists. “Cycling Superhighway” routes have been put in place over the last few years, and more are planned. Yet these are fairly half-assed projects. I came across a section filled with parked cars - apparently not an uncommon phenomenon. These “highways” also come to an abrupt stop as they near downtown. Recently, the Mayor's office announced an initiative to create complete north-south and east-west bicycle routes made up of protected bike paths. If the mayor actually carries this through, it would be a big boost. Yet the sign of a really good bicycling city is that it is possible to safely get nearly anywhere on a bicycle - not just along a few routes.
Absent from any discussions that I have heard are proposals for the radical measures necessary to really make London a city for pedestrians and bicyclists. Cities like Paris and Barcelona talk of completely closing off areas of downtown to cars. Others, like New York and Stockholm, aim for zero pedestrian fatalities by dramatically lowering speed limits and introducing more pedestrian zones. No such wide-scale reforms appear to be under discussion in London, let alone implemented. For a dense, flat, and vibrant city like London, this is tragic.
I just finished up Why Nations Fail by the economist Daron Acemoglu and the political scientist James Robinson. The main message is that what consistently characterizes rich, well-functioning societies is inclusive and pluralistic political and economic institutions - that is, governments for and by the people, and economies that provide everyone with the opportunity to succeed.
I had a bit of a “well, duh!” reaction to this argument. Of course institutions matter. And that these institutions are built-up due to seemingly arbitrary historical processes, as they argue, is also not surprising. For England, the birthplace of the industrial revolution, the authors discuss how both the Magna Carta and the Glorious Revolution were important mileposts in the building up of that country's political and economic institutions. Anyone who has taken a college course in European history would hardly find this to be a revelation.
What the authors don’t spend a whole lot of time explaining, though, is that their arguments might well be controversial among an influential group of people: economists. The typical economist's explanation for why some countries are rich and some poor centers around free markets, free trade, and access to capital. Institutions, surprisingly, don’t often feature in economists’ models. This becomes less surprising after realizing that up until the recent financial crisis, most macroeconomic models didn’t take into account a financial sector.
The real strength of this book, then, is that it gets out of the economists’ intellectual vacuum. The authors use material from political science, history, sociology and anthropology to explain, often through historical cases, how “inclusive” political and economic institutions are built up, and why that leads to sustained economic growth. Economic concepts like incentives, innovation and creative destruction are mainly used as the glue binding all the disparate parts.
One line, relatively early in the book, made a particular impact on me. The authors note how inclusive economic and political institutions give individuals the opportunity to pursue their ambitions and make use of their individual skills and abilities. A simple statement, with a lot of personal resonance.
I thought back to my time living in Stockholm and the frustration of finding an apartment in the highly regulated rental market there. The rental system in Stockholm is a (very) mild form of what the authors call an exclusionary economic institution. The rental system works well for the insiders - in the Stockholm case, those that have lived in Stockholm their whole lives and can get a nice apartment in a central location inexpensively because they have many years in the system.
For outsiders - young people from outside Stockholm, immigrants, and those staying for shorter periods - the system fails completely. I was forced onto the black market, paying exorbitant rent for a tiny apartment. The system probably deters a lot of young people from moving to the otherwise economically dynamic Stockholm region, and stops investment in much-needed housing. Economic growth is impeded. It also leads to a lot of unnecessary frustration.
Of course, on the whole, Sweden is a well-functioning place, with well-functioning institutions and a well-functioning democracy. The impact that kleptocratic governments like those in Zimbabwe or North Korea have on a society are orders of magnitude more destructive.
This is where I also have my biggest gripe with the book. It is rife with historical as well as present-day case studies and examples - but these almost exclusively feature elite figures like kings, presidents and warlords. I think the message of this book could have been a lot more powerful if the authors had spent more time showing how institutions matter to the normal, everyday people that the average reader can relate to. After all, that is, in the end, why economic growth is important - its impact on normal people.
My letter in the Financial Times, January 21st, 2015
Sir, The threat electric cars face from falling oil prices is overblown (“Electric cars and biofuels ‘likely to be biggest green victims of oil price falls’ ”, January 19). Electric cars have been developing in niche markets where their attractiveness has little to do with the price of oil.
In the past half year, Norway made up 35 per cent of all electric car sales in western Europe. They are attractive here because they avoid hefty vehicle registration and import fees, can use bus lanes and avoid paying tolls on roads. None of these benefits is threatened by lower oil prices.
Likewise, the waiting list of millionaires in California eager to get a Tesla is unlikely to be threatened by petrol that is $2 cheaper per gallon.
When automobiles first emerged at the beginning of the last century, their eventual success had little to do with the price of hay. The success of electric cars is unlikely to be dependent on the price of oil.
The oil price has fallen dramatically. Only six months ago the price was well over 100 dollars a barrel. Today it is under 50. Increased extraction of shale oil in the US is the explanation most often put forward. But there is another explanation that is rarely mentioned: we are in the early phase of a technological transport revolution.
Shale oil has had a significant effect on the global oil market and the American economy. Total oil production in the US has increased by more than 60 percent over the last five years, and nearly all of the extra production comes from shale fields in Texas and North Dakota. More oil is now being pumped in the US than in Saudi Arabia.
But the price of oil reflects not only supply and demand today, but also expectations of what supply and demand will be in the future. Here lies great risk for oil producers.
Fifty years ago, oil was used to generate electricity, for heating, to produce plastics and chemicals, and to make gasoline and diesel for transport. Today, however, oil is mainly used for transport.
Big changes are on the way, though. First, the way we use cars is beginning to change. Fewer and fewer young people choose to get a driver's license. Chauffeur services like Uber and car-sharing schemes like Bilringen have made the car unnecessary for many. In many cities there is an increased focus on walking, cycling and public transport.
But the most important change is the electrification of transport. If you wanted to buy an electric car from a well-known carmaker four years ago, you had only one choice - the Nissan Leaf. Today you can choose among more than 20 models.
Today's electric cars have very limited range, but battery technology is developing quickly. GM recently presented plans to launch an electric car in 2016 with a 300 kilometer range at a price of around 230,000 kroner. Tesla also has plans to develop a car with as good a range as its Model S, but at half the price.
If oil producers foresee a future where a large share of cars run on electricity, they may now choose to sell as much as possible. This could explain why we have seen increasing production from countries like Saudi Arabia and Russia over the past six months despite sharply falling oil prices. Perhaps they foresee a future with far fewer customers.
Many Bergen residents travel south at this time of year to enjoy warmth and sun. You expect the vacation to go well. Yet the vast majority buy travel insurance. Things can go wrong, and insuring yourself costs relatively little.
Climate skeptics argue that, despite climate change, Norway and the planet will probably be fine. Spending money on fighting climate change is therefore wasted. In Norway, things probably will go well. But there is great risk and uncertainty attached to altering the atmosphere. Some climate models show, for example, that the Gulf Stream could fail and Norway could get significantly colder weather. That probably will not happen, but it can be good to have insurance.
And the climate insurance actually costs quite little. To stabilize carbon dioxide at a relatively low level, the IPCC estimates a cost of around 0.06 percent of annual global economic growth. For a Norwegian earning 400,000 kroner a year, that comes to about 240 kroner this year. For comparison, I recently bought a burger and a beer in town and got a bill of 250 kroner. I count that as cheap insurance.
A Sense of Style by Steven Pinker is one of the most readable writing and style guides out there. This is not setting the bar particularly high, but I did genuinely enjoy reading through most of this book. Pinker presents a lot of common-sense tips for writing with improved clarity and grace. I felt the book got bogged down in grammatical and linguistic terminology at times - especially in the last chapter. Then again, perhaps this was required to satisfy the grammar-heads.
In my academic writing, I have had several referees and reviewers comment on my “narrative” style. The reviewers are not paying me a compliment. They are implying that this style - leading the reader through the topic of the paper in story-like form - is not appropriate for a serious academic paper. I felt vindicated by Pinker’s advocacy of Classical Style: the idea of the writer serving as a guide, directing the reader’s attention at whatever topic or idea is being explored.
Classical Style, Pinker points out, contradicts a lot of the silly dogma that has grown up around academic writing. Avoiding personal pronouns like “I” or “we” in a paper is a good example: writing “this research shows” rather than “I show”. Pinker argues that the use of personal pronouns stimulates a conversation, helping to engage the reader. I would add that they also convey a sense of academic honesty. Any piece of research is necessarily an imperfect collection of trade-offs and decisions about methods, data, and theory. A researcher who uses “I” is being more honest about their imperfect role in the research process.
Perhaps the thing I liked best about the book, and what many may find most frustrating, is how Pinker avoids simple formulas. Passive sentences are fine - in the right context. He does provide readers with a few guidelines - like ending sentences with their heaviest element. But for the most part, his advice is to trust your own sense of logic. Good advice - reading your prose out loud, having someone else read through it, putting it away for a while and then rereading it - is meant to get writers to activate their own intuitive sense of good writing.
There were times when I felt a bit bogged down by the book. The last chapter, where Pinker essentially goes through a list of grammatical Frequently Asked Questions, was hard to get through (I ended up skimming much of it). The idea of analyzing sentences with sentence trees, which he spends an entire chapter on, seemed better suited to linguistics students than to writers with aspirations of improving their prose. The formal language of grammar, of which there is a lot in this book, has always made my head go slightly numb. Still, I can imagine there are readers who would feel cheated if these things were not included in a book on writing and style. The book still had plenty to offer someone like me with little training or interest in the science of grammar.
I recently worked my way through Bayesian Methods for Hackers - I will call it Hacking Bayes from now on - by Cam Davidson-Pilon, an applied statistician, programmer and consultant. The “book” is an introduction to Bayesian analysis using the Python programming language and the PyMC package. I put “book” in quotes because the form is not typical of a book or ebook. It is written as an online IPython notebook, where executable code and figures are integrated into the text. The book is also freely available on the web.
Hacking Bayes is not meant simply to be read; you learn by working through the example code. Details of the nitty-gritty math are either saved for after the problems or left out completely. For those interested in a hands-on approach to learning Bayesian methods, this works great. For the most part, the examples and writing are clear and easy to follow. Perhaps the biggest advantage of this book is that it doubles as an introduction to the powerful PyMC Markov chain Monte Carlo simulation package for Python. The path from working through this book to doing your own analysis is short.
This is the second book on Bayesian methods with a computational and programming approach that I have worked through. The first was Think Bayes, by Allen Downey, which I also liked. But the two books are distinct. Downey spent a lot of time building up the computational tools from the ground up. Many of the problems he worked through were simple discrete problems, where the posterior could be calculated explicitly. And he wrote his own code for doing Bayesian updating, which readers could read through and decipher.
Hacking Bayes is different. Where Downey included a single chapter on simulation methods, Davidson-Pilon focuses nearly exclusively on simulation methods using the PyMC package. The reader misses out on the deeper understanding that comes from explicitly calculating the update of a prior, but the usefulness of these explicit methods is limited. Simulation methods, on the other hand, can be modified to tackle a broad range of problems. I think the trade-off between understanding and applicability is worth it in this case.
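To give a flavor of what "simulation methods" means here - this is my own toy sketch, not code from either book, and it uses plain Python rather than PyMC - a bare-bones Metropolis sampler can approximate a posterior with a few lines of code. Here it samples the mean of some made-up data, assuming a known spread and a flat prior:

```python
import math
import random

# Toy Metropolis sampler (my own sketch, not from the book):
# posterior for the mean mu of normal data with sigma = 1 and a flat prior.
data = [4.9, 5.2, 4.7, 5.1, 5.0]

def log_likelihood(mu):
    # Log of the normal likelihood, up to an additive constant
    return sum(-0.5 * (x - mu) ** 2 for x in data)

random.seed(42)
mu = 0.0            # deliberately bad starting value
samples = []
for _ in range(20_000):
    proposal = mu + random.gauss(0, 0.5)   # random-walk proposal
    # Accept with probability min(1, posterior ratio), computed in log space
    if math.log(random.random()) < log_likelihood(proposal) - log_likelihood(mu):
        mu = proposal
    samples.append(mu)

burned = samples[5_000:]                   # discard burn-in
print(sum(burned) / len(burned))           # close to the sample mean of the data
```

PyMC wraps this same idea - propose, accept or reject, collect samples - behind a model-specification interface, which is what makes the jump from the book's examples to your own problems so short.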
Hacking Bayes is not a polished, finished product. Typos pop up every once in a while, as do the author's notes to himself like “add something here later”. This roughness does not seriously detract from the book, and it seems in line with the gradual-improvement ethos of software engineers: put out a functional product, then improve it over time based on feedback.
My experience with this book, as with all other methodology books, is that simply reading through the text and running the scripts is not enough to become proficient. To get the most out of the book, you should try your own analysis on your own data, using the examples as templates. I am doing just that with some oil-well drilling data. Once I have something in roughly good form, I will put it up on my website.
The US Commerce Department recently decided to impose duties on Chinese and Taiwanese solar panels. You can read more here.
I think this is a horrible decision. My own research shows that the introduction of cheaper Chinese panels led to a boom in installations in the US. The industry that installs and maintains solar panels is much larger and employs more people than solar panel manufacturing - I have heard the figure that the installation industry employs about 3 to 4 times more people.
China is accused of dumping solar panels on the US - that is, subsidizing solar panel production and then exporting panels at prices below their true production costs. Meanwhile, the US subsidizes the installation of solar panels in the form of a 30 percent tax credit, and many states provide extra incentives on top of that.
Providing subsidies for a good and at the same time complaining that producers are selling the same good too cheaply should induce a serious case of cognitive dissonance. As I have argued before, if the Chinese want to subsidize US clean-energy production, let them!
The Innovator's Dilemma by Clayton Christensen is a book I had heard cited plenty of times, and its catchphrase, “disruptive innovation”, has been thrown around quite a bit. But after recently reading the book, I realized that Christensen's main point is actually pretty subtle. I also think the idea of disruptive innovation is powerful in helping to understand the changes that are buffeting the power industry.
Established companies that are disrupted by a new technology - think workstation computer makers who failed to see the PC era coming - were not victims of poor management and bad business models. In fact, exactly the opposite. Christensen argues that established firms that get blindsided by a disruptive innovation are doing exactly what they are supposed to be doing: listening to their customers, trying to expand their margins, grow their company, and improve their current technologies.
Like the first crop of underpowered, toy-like PCs, technologies that grow to be disruptive tend not to be particularly attractive to an established company's existing customers. Usually, the early technologies serve only a niche market - like the hobbyists who bought the first Apple PCs. A manager in an established firm can be excused for not pushing a technology that does not appeal to their customers, doesn't spur much growth and, on top of it all, doesn't appear to be particularly profitable. The problem for the established companies comes when these technologies follow an “s-curve” of technological change. The niche product improves rapidly, and by the time it is good enough to satisfy the mainstream market, the established firms find themselves hopelessly behind.
The utilities business has been essentially unchanged for 100 years. The basic model has been to build big, centralized power plants to take advantage of economies of scale. The plants should be dispatchable - when demand goes up, they should be able to ramp up production. And as a whole, the system needs to be 99.99 percent reliable.
No surprise then that renewable energy technologies have not historically been popular with big utilities. They were generally small-scale and geographically distributed, intermittent and not dispatchable. Early investors in wind turbines in places like Denmark, The Netherlands, and Germany tended to be small investors and cooperatives. The big energy companies stuck to what they knew best.
But things are changing. Both wind and solar power have now become so advanced and so cheap that they compete with, and increasingly out-compete, coal and natural gas plants. Onshore wind power in the US has recently been auctioned at 2.5 cents per kilowatt-hour (kWh). The utility that serves Austin, TX recently bought solar power generation for 5 cents per kWh. Those prices are lower than what natural gas and coal producers can afford to bid.
The biggest disruptions have so far taken place in Germany. Almost everyone was surprised by how fast capacity was built up following the introduction of generous feed-in tariffs. The big utilities in Germany - Eon, RWE and Vattenfall - invested heavily in traditional generation in the early 2000s. They were then hit by a combination of lower consumption due to the financial and economic crisis, the shutdown of their profitable nuclear reactors, and not least the boom in renewables that has pressed down market prices.
The utilities are trying their best to cope. RWE recently broke itself in two, splitting off its traditional gas, coal and nuclear generation. The hope is that a company squarely focused on renewables will be more nimble and less weighed down by legacy assets. Eon is also aiming to reinvent itself as an energy services company.
But other companies have a big head start. German firms that sell, install and manage solar panels are some of the most efficient in the world. And because of these efficient firms, the total system costs of solar power tend to be lower in Germany than in most other countries. The big energy utilities will have a hard time catching up to these nimble players.
The utilities are also looking to the government and regulators for a lifeboat. They argue that they should be paid simply for having capacity available in case of an emergency - so-called capacity payments. But the big centralized plants they own are unlikely to be well suited to providing short-term back-up and regulating power for renewables. Moreover, normal market prices should reflect long-term generation investment needs. If prices are so low that coal and gas plants are not profitable, then coal and gas plants are simply not needed. Capacity payments appear to be a dead end.
I visited Budapest for the first time a few weeks ago. It is a fun and interesting city. It can be described as being in either central or eastern Europe, but its language and culture are distinct from those of its Slavic- and German-speaking neighbors. Hungarian is most closely related to Finnish, though speakers of the two languages apparently cannot understand each other.
Budapest is a fun walking city - especially if you can handle some hills. We spent a big chunk of time wandering around the castle area as well as the Citadel, which sits atop the highest hill in the city centre. If we had stayed a day or two more, I think we would have taken a trip to the trails in the hills on the outskirts of the city, which are also supposed to be quite nice.
I had read that Budapest was one of the safest big cities in Europe. At the same time, the city's problems with poverty and alcohol abuse are visible all around. Parts of the city are magnificent and well kept - as modern and shining as the nicest parts of London or Paris. Others are completely run down. I was surprised by how many beautiful, central buildings appeared to be in complete disrepair and abandoned. This is a lasting legacy of the Soviet era, when many moved out of the central neighborhoods and little economic incentive existed to maintain the buildings. Now the costs of refurbishing the apartments are so high that few are willing to take it on. Sad, but also an incredible amount of potential lying right in a historic city center.
The food scene can probably be described as up-and-coming. We found a hip burger place by accident. We also ate at a chic restaurant called Zona, with a very reasonable three-course lunch (about 10 EUR), and found a cozy local cafe that served a solid breakfast and even had live guitar music. The rest was a mixed bag. We relied on a mix of personal recommendations and TripAdvisor, and we ended up in a lot of "Traditional Hungarian Food" type places with a lot of other tourists and heavy, somewhat bland dishes. Calling ahead to reserve tables at the hip places the foodie locals go to would probably be worth it and, from what we observed, not necessarily much more expensive. We did have Hungarian wine with almost every meal. The white wines we tasted were mild, tasty, and reasonably priced.
Getting around was easy. A solid subway system will take you most places, while a quaint tram system gives you a more scenic ride. I bought a three-day pass at the airport for about 15 euros. The system was straightforward and easy to use and took us pretty much anywhere we needed to go in the city.
Despite the solid public transit system, the city is clogged with traffic. Both sides of the Danube are taken up by multi-lane highways, and the city is criss-crossed by wide thoroughfares - a waste of historic and potentially beautiful real estate. The city is trying to jump on the bicycling trend. It has a bike-sharing system in place, and bike lanes appear sporadically. But the bicycling infrastructure still seemed half-baked. Bicycles were directed to share space with fast-moving buses in some places, and bike lanes were haphazardly painted on sidewalks with all sorts of obstacles in their way. Still, the intention was good, and in a few places you could even find solid, separated bike lanes of Danish or Dutch standard.
The city council has recently approved the construction of a 16-story high-rise in Møllendal. The decision has been controversial because the height of the proposed building conflicts with the area plans for Møllendal. In the central and attractive district of Møllendal, not just one high-rise should be built - many should. The myths about high-rises have been repeated so many times that even professionals take them as fact. High-rises would make central Bergen a more environmentally friendly, affordable and lively place to live. This should be prioritized over vague aesthetic excuses.
The first myth about high-rises is that you can fit just as many apartments using low-rise buildings. Plain arithmetic should be enough to see that this is not true. You fit many more apartments when you build 15 stories than when you build 5.
High-rise opponents often argue that low-rise buildings can be built more densely, but there is little reason why high-rises cannot also be built densely. One example is Nordnes, where you find densely built areas with heights between 8 and 15 stories.
Another myth is that building tall is expensive and that high-rises will lead to higher housing prices. In fact, the costs of building tall are relatively modest. At today's central housing prices, it is very profitable to build tall and make full use of expensive lots. Developers would gladly build tall if they were allowed to.
It is true that apartments in centrally located high-rises are attractive places to live, so the price of each individual apartment will be high. But building tall and dense in the center means room for many more apartments. This keeps prices down overall.
The third myth is that high-rises destroy the city's charm and character. I find the small wooden houses on Nordnes charming, but it would be wrong to say that central Bergen would be more charming if it consisted only of small wooden houses. Few could then live in the center. A lively and charming city is above all one where all kinds of people can live. To achieve that, we need to build tall and dense.
One of the experiences I was looking forward to on my trip to China was taking the bullet train from Beijing to Shanghai - a trip of about 1,200 km that can now be done comfortably by train in five hours. The network of express tracks that China has built over the last decade - with a top regulated speed of about 300 km/h - is an engineering marvel.
The Beijing to Shanghai route opened in 2010 after only two years of construction. It halved the train travel time between the cities and doubled rail capacity. It has likely taken a large chunk of airline traffic as well, though the express stations sit well outside the city centers - so the express trains lose the advantage traditional trains often have of taking you into the middle of the city and sparing you the trip out to the airport.
Coming from western Europe, where infrastructure projects seem to take ever longer to complete and almost as a rule go way over budget, the speed and efficiency of Chinese building is marvelous. But the Chinese have also run into problems. The top speed of the express trains was reduced from 350 km/h after a 2011 crash that left 40 dead.
Technological overconfidence has also led to China's version of the bridge to nowhere - the Shanghai maglev train to nowhere. A maglev train uses magnetic levitation to float over the guide rail. With little to no friction, speeds of well over 400 km/h can be reached. The technology was developed in the West and in Japan, but the Chinese have built the only operating commercial line, which runs to Shanghai's international airport.
The line was completed in 2004, but on average it runs at only 20 percent of its carrying capacity. Part of the problem is that it runs from the airport to an outer part of the city, from which another 20-minute subway trip is needed to reach the center. The train is also relatively expensive, costing 40 rmb - not much less than a taxi ride.
Plans were made in 2006 to extend the line, initially to Shanghai's other airport and neighboring train terminal, and potentially continuing on to the major city of Hangzhou. But the project was cancelled after protests from residents worried about noise and rumored radiation effects from the maglev line. Perhaps just as important, a traditional high-speed line to Hangzhou was completed as part of the large nationwide build-out - a good example of economies of scale trumping newer, fancier technologies.
I've now spent three days in Shanghai, a city that contrasts sharply with Beijing. As a port city, Shanghai has a long history of being outwardly focused, and you notice the Western influence on the city right away. At the same time, the city reflects the full scale and ambition of modern China.
Shanghai is the largest city in the world, with somewhere around 24 million people living within its city borders. That scale is on display everywhere you go. Even more amazing is that much of the modern infrastructure was built within the last decade or two. The modern, efficient and extremely well planned metro is an experience in itself. Unlike in the New York subway, where doors separate the cars of the train, all the cars are connected, so you can see down the full length of the inside of the train. But the trains are so long that you usually cannot see the end from inside.
Shanghai's history is entangled with Europe as well as other Asian nations, which creates some lively contrasts. On the famous Bund, a mishmash of low-rise buildings in traditional European architecture lines one side of the river, a relic of the times when foreigners controlled much of the trade through Shanghai and China. Here you could think you were in London or Berlin. On the opposite side of the river is the new ultra-high-rise financial district, where the second tallest building in the world is being built. You can guess which side the cameras are turned to. The Bund is the ultimate symbol of an emerging China overshadowing, and nearly literally rising above, Western nations.
One of my hopes about attending an energy economics conference in China was that I would get some clarity from insiders and experts on what often seems like an opaque Chinese energy policy. The conference is now over and the results are a mixed bag. We heard from both some mid- to high-ranking officials as well as senior professors and university administrators that presumably also have party contacts. In the end, anything close to clarity was hard to find, but you could read between the lines.
I was glad to see that it is not only Norwegian politicians who are good at double-talk. Several of the high-ranking speakers managed to emphasize the necessity of environmental protection while still insisting that a continued increase in coal consumption was necessary and that a peak in coal consumption and emissions was unlikely before 2030.
To me, this seemed pessimistic (or optimistic, depending on the viewpoint). One speaker referred to the environment as the ceiling on the level of coal consumption. In that case, it is hard to see how that ceiling has not already been reached. September is one of the better months for Beijing's air quality, yet most days a thick haze is present. The Chinese - including those with influence - are plainly aware of this everyday hazard. Already, plans are in place to reduce air pollution - primarily related to the burning of coal - around cities.
Other sources of energy are also quickly expanding. China has a goal of 15 percent renewables by 2020. Wind power has grown enormously over the last few years and now accounts for more generation than nuclear power, even though the Chinese have been developing nuclear for 40 years. Solar power is also expanding quickly, and China is now the world's largest market for solar panels. Nuclear power is likewise expected to expand greatly in the coming years.
And then there is the demand side. The economy is slowing, and energy use likely even more so. Seasonally adjusted power production decreased by 2 percent in August. Establishing a trend from any one month of data is dangerous, but taken together with other signs of slowing - like falling house prices - and the government's intention of making the economy more energy efficient and less investment heavy (and in turn less energy intensive), it seems plausible that the pace of power plant building will slow.
In the end, a convergence of forces seems to be happening. The economy is slowing and becoming less energy-intensive. The government is placing more emphasis on energy efficiency and environmental protection. Wind and solar power have become cheap - in many places cheaper than coal and gas. I wouldn't be surprised if, in a few years, 2014 is seen as the year coal consumption and emissions peaked.
Beijing is a sprawling place. A friend we met up with, who had been away in the US for eight years, said she barely recognized the city. When she left there had been three ring roads; soon there would be six. Getting around a city that is at once sprawling and packed can be a challenge - and Beijing has both good and bad elements.
Up until a few years ago, the bicycle dominated the street landscape. Unlike in other quickly developing cities, it has not been completely pushed aside by cars. Wide protected bike lanes are present on almost all the streets, and the bicycle is still a popular way to get around. The bike lanes are also surprisingly egalitarian - used by the poor as well as better-off youngsters on hip fixed-gear cycles.
Walking is a mixed experience in Beijing. Wide walking paths are present along most streets, but many drivers find them to be convenient places to park. I would have expected China to be better at enforcing parking laws. September is widely known as the most pleasant month in Beijing: comfortable temperatures and relatively little pollution - though the smog is still worryingly noticeable.
The subway system is amazing. So much of it has been built in just the last few years that a guidebook more than two years old is practically useless. You can get seemingly anywhere on the subway, and the trains run nearly continuously. But the cars are nearly always packed - and we have not even tried to use the subway during the morning and evening rush. Perhaps the most amazing thing about the subway is the price: 2 yuan - about 30 US cents - gets you from any one place in the system to any other. A bus ticket in Bergen costs 5 dollars.
Despite the efficiency of the subway, we have also taken our fair share of taxi trips, which are relatively cheap and can be an efficient way of getting around if the traffic is light. That is a big "if" though. Beijing traffic is notorious for being congested, and a 15 minute trip can easily turn into 45 minutes, which we experienced personally.
I arrived in Beijing on Thursday morning for a conference and some sightseeing. The first surprise was how easy it was to get through the airport. It took only about 10 minutes to get through immigration. My passport picture doesn't look much like me anymore - I had short hair, a light beard and different glasses at the time. The immigration agent gave me a few extra looks, but then waved me through without too much hassle. Getting into China is easier than getting into the US, in my experience so far.
I took a cab with a colleague to the conference hotel and was again surprised by how green the outskirts of the city were. Trees and greenery were everywhere. I had been expecting the industrial and residential sprawl you see around New York's airports. We arrived around midday, so traffic wasn't too bad either. As I had been told to expect, the cab didn't have any functioning seat belts. That was somewhat unnerving, but it turned out all right.
Usually these types of conferences use a handful of student assistants and maybe a secretary or two to help with the logistics. The Chinese, on the other hand, have no problem using considerably more resources. It feels like they have put at least 100 students to work helping with everything from registration to getting the guests from one venue to the other. All the students are helpful, eager, and seem somewhat sleep deprived.
This morning I was able to walk around the area a bit. The conference is located on the outskirts of the city, where many of the universities are clustered. The geosciences university, which is hosting the conference, sits in a kind of gated-off residential section. Cars drive slowly, and plenty of space is given over to walkers and bikers. Walking around is actually quite pleasant - especially considering the chaos I had expected.
The last couple of months I have been reading and working through the book Think Bayes by Allen B. Downey. First, one of the best things about this book in a world of overpriced textbooks: you can get the pdf for free from the website of Green Tea Press: http://www.greenteapress.com/thinkbayes/. In general, I can highly recommend this book, both for self-learners and for lecturers in statistics and analysis courses. The hands-on and accessible philosophy of this book is refreshing. After finishing the book, you have a good working intuition for what Bayesian analysis is and how to do it. As a bonus, you also get some lessons in good programming skills and practices. I have a few gripes, but they are relatively minor - and can be overcome by pairing the book with other resources out there.
Before working my way through this book, I knew of Bayesian statistics and analysis only in a very abstract sense. From my courses in mathematical statistics I knew what Bayes' theorem was, and I had even worked my way through the algebra of some Bayesian statistical models. But I had little sense of how to actually use Bayesian analysis in an applied setting.
Think Bayes works, especially as a self-learning text, because it recognizes that you learn best by actually doing an analysis. Nearly every chapter works through a problem in Python code. Explanation of the underlying logic and math comes after the code. This works much better than the typical theory-then-example layout of most textbooks. Implementation is the focus of the book, not something that, if you're lucky, is thrown into an appendix or an online compendium.
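To give a flavor of this code-first approach, here is a minimal, standalone Python sketch of a Bayesian update in the spirit of the book's early chapters (it is my own illustration, not the book's `Pmf` class): two bowls of cookies, a vanilla cookie is drawn, and we update our belief about which bowl it came from.

```python
# A minimal Bayesian update, in the spirit of Think Bayes (standalone sketch,
# not the book's own code). Bowl 1 holds 30 vanilla and 10 chocolate cookies;
# Bowl 2 holds 20 of each. We draw a vanilla cookie: which bowl was it from?

def update(prior, likelihood):
    """Multiply each hypothesis's prior by its likelihood, then renormalize."""
    posterior = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

prior = {'bowl 1': 0.5, 'bowl 2': 0.5}                  # equal prior belief
like_vanilla = {'bowl 1': 30 / 40, 'bowl 2': 20 / 40}   # P(vanilla | bowl)

posterior = update(prior, like_vanilla)
print(posterior)  # bowl 1: 0.6, bowl 2: 0.4
```

That short update function is essentially all the mechanics there are; the book's contribution is showing how the same idea scales up to real problems.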
After going through the book, I feel like I have a good starting point for implementing my own analysis - and I have a few ideas I would like to try out. I am also starting to work through the more comprehensive Bayesian Data Analysis by Andrew Gelman and his team of coauthors, and I am better able to follow along in that book after first having worked through Think Bayes.
If I have a gripe, it is that the author is a bit too good at following good programming practices. Downey makes the most of Python's object-oriented design, encapsulating a lot of the code in functions and classes and calling it as needed. While this is good programming practice, it makes it harder to follow the logic of the code. Especially in the later, more complicated problems, I would have preferred a more linear, all-in-one style. But, like I said, this is a small issue in an otherwise well-thought-out book on how to do Bayesian analysis.
After this spring's uproar over the light rail issue and Filip Rygg's resignation, a new city commissioner for urban development will soon be appointed. Rygg was an active champion of more sustainable and holistic urban development in Bergen - not least by working for better cycling infrastructure in the city. These are values the incoming commissioner for urban development, who will probably come from the Conservative Party (Høyre), should also prioritize. Unfortunately, Høyre has not profiled itself as a cycling party in Bergen, Oslo, or nationally. Yet there are several good reasons why it would be natural for Høyre to invest in cycling.
Freedom of choice and personal responsibility are two of Høyre's most important core values. Choosing how we get to work, the kindergarten, or the store is something we do several times a day, with consequences for the environment, our health, and not least our mood. Choosing the bicycle will be a largely irrelevant option for most people unless it feels safe and efficient. Of course not everyone will choose to cycle, but as many as possible should have the choice.
Norway suffers from an obesity and overweight epidemic, and most Norwegians need more exercise in their daily lives. The Norwegian Institute of Public Health reports that nearly 50 percent of Norwegians are overweight and nearly 20 percent are obese. The trend of increasing overweight and decreasing exercise is especially marked among children. In the long run this has major consequences for public health and public health spending. Using a bicycle for transportation is a perfect solution for those who want to invest in better health.
No one can or should be forced to use a bicycle, but conditions can be created so that it is an attractive and safe choice. Cycling is a cause Høyre should fight for.
Recently the Seattle Times ran a three-part story on Chinese coal production. In particular, China has plans to build a set of massive coal gasification plants - where coal is turned into natural gas in the interior of the country and then piped to electricity plants near the big coastal cities. The idea is that this will replace many of the coal plants that lie near the big coastal cities and which are a major contributor to the extreme air pollution that has become a major worry for the Communist Party.
The problem is that coal gasification plants emit much more carbon dioxide than normal coal plants - one study says 80 percent more. Catherine Wolfram of the University of California, Berkeley, worries that the planned big push for coal gasification might help relieve air pollution in Chinese cities, but in the process it may blow up efforts to get global emissions of greenhouse gases under control.
But despite plans on paper, a large-scale rollout of coal-to-gas plants in China is unlikely to happen. As the Seattle Times article details, the one plant that has been built so far, in Inner Mongolia, is plagued with problems. The plant requires massive amounts of water in a region that has little of it. The local pollution from the plant has stirred up discontent among the locals in the area, especially the minority ethnic Mongolians. But most of all, the coal-to-gas plants are likely a horrible economic investment.
Only a few years ago, the hot topic was carbon capture and storage plants that would make it possible to continue burning coal for electricity while, as the name implies, capturing the carbon emissions and storing (sequestering) them underground. A raft of projects made it to the drawing board, and a handful began construction. But reality soon began to bite. Capture and storage plants were expensive to build and required more than 30 percent of their own energy production to operate. Over the last several years in the US, a raft of standard coal plants have been shut down because they couldn't compete with cheaper natural gas. More expensive and less efficient capture and storage plants didn't have a chance.
The story appears to be repeating itself for coal-to-gas plants. While gasification plants have been touted as having the potential to clean up generation of electricity from coal and even increase efficiency by using combined cycle generation, the reality is that the plants are hugely expensive to build and use a significant amount of the energy they produce just to operate - nullifying any gained benefits of increased efficiency from a combined cycle gas turbine. Only one heavily subsidized commercial plant has ever been built in the US. That plant, built in the 1970s, was shut down less than a decade later. The experience of the one existing plant in China - plagued with technical problems, lack of water and local opposition - doesn't look much better.
China’s approach to energy has been to go all-in on everything - fossil fuels, nuclear, hydro and renewables. But at a certain point, some basic economic and environmental constraints will take hold. China currently uses more coal than the rest of the world combined. The expansion of Chinese coal use has pushed up coal prices and turned China from a net coal exporter to the largest coal importer.
On the other hand, the Chinese push into renewables has led to a dramatic fall in the prices of wind turbines and especially solar power. China is building more nuclear power stations than the rest of the world combined, including some experimental designs that could prove to be both safer and cheaper to operate. China is also estimated to have large deposits of shale gas that are being actively developed. Then there is the Chinese economy itself, whose growth has begun to moderate as it enters a more mature, consumer-led, less energy-intensive stage.
Coal use in China is in inevitable decline. A process that makes getting energy from coal more expensive and less efficient is unlikely to play a significant role in China's energy system.
I am gearing up for my post-postdoc job hunt and have begun drafting an application letter. This has made me think about my (few) experiences teaching so far and how those experiences have influenced my ideas about teaching and teaching effectively.
My teaching experience has so far been light. But as a PhD student I got the chance to not only be a lecturer for a course but to design the course from scratch. I co-taught the course Physics and Economics of Renewable Energy in the spring of 2010 and 2011 with a professor of Physics from a neighboring university.
My teaching method for that course was traditional - lectures in which I tried to summarize relevant theory and research, plus homework assignments for the students. The evaluations from students were mixed - some expressed an appreciation for the class, but many were critical. I seemed unsure about the material, they said. Some thought it would have been better if a real professor taught the course and not a lowly PhD student. In the end, I think I could have done a better job teaching the course - and as a whole I judge the experience as a failure. But it was a useful failure.
A major problem was that I taught in a manner that was different from the way that I myself best learn and do research. The reason that students judged me as being unsure about the material was that I was. I was far from sure that the material I was teaching was "correct." I think a lot of professors have an ability to convey confidence about what they are teaching. But in my opinion, it is often a false confidence.
When I begin a research project I start with a healthy skepticism of the existing base of knowledge. I insist on doing my own data-heavy, bottom-up analysis. I have realized that I need to teach in a manner that reflects this.
As a postdoc, I have not taught my own courses, but I have been a guest lecturer a couple of times. Instead of a traditional lecture-style setup, where the students are expected to dutifully sit and absorb the wisdom that I pass on to them, I turned the four hours of lectures into a data lab. Students installed and learned the basics of the statistical programming language R, loaded a few relevant data sets, made some figures, and otherwise carried out a basic data analysis.
I was quite happy with the results. Student evaluations were generally positive. One particularly gratifying comment was along the lines of "I have never done this kind of programming before, but it was a lot of fun!"
I plan on incorporating such a hands-on, experiential learning approach into all my future teaching. Lectures can now easily be recorded and put on the web - freeing up class time for exercises and labs. The emphasis on data and empirics strengthens understanding of theory by showing its applications. More so, I believe strongly that basic data analysis skills are becoming essential for a wide array of professionals in government, business and industry. Teaching data analysis allows me to go from conveying knowledge to allowing students to create their own knowledge.
In the absence of good data and appropriate techniques, economists have in the past relied on simplified theoretical models to make predictions about how changes in oil prices can be expected to affect production. Not surprisingly, most of these models predict that changes in price will affect production. Yet the extraction of oil is getting increasingly complex and expensive. A good case study is the petroleum industry on the Norwegian Continental Shelf, which has made Norway one of the richest countries in the world and by some estimates accounts for nearly one third of Norwegian GDP. But production in the challenging conditions of the North and Norwegian seas has always been complex and expensive. When you combine huge fixed costs, an overlapping quilt of regulations and high amounts of uncertainty, it is no longer obvious how a change in prices will affect production.
I just put out a working paper that takes up this subject. I don't rely on theoretical models or simulation but rather look at the actual data from the 77 currently or formerly producing oil fields on the Norwegian Continental Shelf. What I find might surprise many economists, but not people working in the industry. The main result is that changes in oil prices appear to have little to no concurrent effect on production from Norwegian oil fields. Here I define concurrent in a broad sense - within three years. The intuition behind this result is simple. It is hugely expensive to operate in the North Sea. The fixed costs alone for a single field are measured in the billions of dollars. Companies simply cannot afford to have spare capacity sitting idle so that they can increase production if prices increase. They are producing as much as they can given their current level of infrastructure.
While I don't find any concurrent reaction to oil prices in existing fields, I do find some lagged effects - after between 4 and 8 years. This suggests an investment-led reaction. Indeed, a look at data on investments indicates that changes in the oil price do quickly affect the level of investments. But here it is important to consider the different phases of an oil field - from the planning to the build-out to the depletion stage. Splitting the analysis into these stages suggests that price has little to no effect on fields in the depletion stage - neither concurrent nor lagged. Instead, the biggest effect of price appears to be in the planning stage. Again, the intuition is simple. Once plans have been finalized and the build-out begins, large amounts of capital have been allocated. Moreover, detailed plans must be submitted to and approved by the Norwegian Petroleum Directorate before a build-out can commence. Changes to these plans would likely incur costly delays. Thus decisions about the extent of the build-out and in turn the level of production from a field are mainly determined well before production begins.
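The idea of testing for lagged price effects can be sketched with a small distributed-lag regression. The data below are synthetic, with an invented 4-year lag effect of 0.5; this is purely illustrative and is not the paper's actual model specification, nor the Norwegian field data.

```python
# Illustrative distributed-lag regression: relating production changes to
# lagged price changes. Synthetic data only - the 4-year lag and the 0.5
# coefficient are invented for illustration, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 500
dprice = rng.normal(size=n)  # stand-in for yearly oil price changes

# Construct production changes that respond to price with a 4-year lag.
dprod = np.zeros(n)
dprod[4:] = 0.5 * dprice[:-4] + 0.1 * rng.normal(size=n - 4)

# Regress dprod_t on an intercept plus dprice_{t-1} ... dprice_{t-5}.
maxlag = 5
X = np.array([[1.0] + [dprice[t - k] for k in range(1, maxlag + 1)]
              for t in range(maxlag, n)])
y = dprod[maxlag:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# beta[4] (the 4-year lag) should come out near 0.5; the other lags near zero.
print(beta.round(2))
```

In the paper itself the estimation is of course done on real field-level data, but the mechanics of picking up an effect at a particular lag are the same.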
A few caveats should be mentioned. First, these results only apply to existing fields. Total production from a region can and often does respond to changes in the oil price by way of increased exploration as well as starting production from previously uneconomic fields. I do not consider these in this research. Moreover, price may affect production in existing fields in more diffuse ways that are not easily picked up by a regression model. The most important is the effect of technological change. An increase in the oil price will likely spur more investment in research and development, which in turn may lead to better technologies for oil extraction. Yet the relationship between increased prices, R&D and in turn oil production - while likely real - may be too inconsistent to accurately measure.
This research is timely. Followers of the Norwegian oil industry will note that the sector appears to be entering a period of turbulence. Production is declining yet investment levels are at all-time highs. Oil firms, notably Statoil, are trying to cut costs in order to maintain their level of profitability. The high level of optimism of just a couple of years ago seems to be quickly receding. With the notable exception of the Johan Sverdrup oil field, most of the large and giant oil fields were found decades ago. An increasing share of future investments in the industry will go towards existing fields. Yet this research shows that production from these fields seems to respond little to changes in prices.
Oil production from the Norwegian Continental Shelf has nearly halved since its peak in the year 2000. But the Norwegian Petroleum Directorate has optimistic projections of a leveling-off and even a slight rise in production over the next few years. The research behind this article has strengthened my skepticism of these optimistic projections. Even if oil prices were to rise strongly, production from Norwegian fields will likely continue to decline. The near-, medium- and long-term story of the Norwegian oil industry is one of decline.
I recently watched an interview of the new mayor of New York City, Bill de Blasio, by the comedian Jon Stewart on The Daily Show. One part of the interview caught my attention. The new mayor has made a case for banning the horse-drawn carriages that carry tourists around the roads near Central Park. In the interview, he made the animal welfare case that it wasn't ethical to have these horses out amid the heavy traffic. Jon Stewart then shot in a joke about how the same should apply to humans.
De Blasio seemed a bit speechless at first. The comment was intended as a light-hearted joke, but de Blasio is smart enough to recognize the logic - and the tangle he got himself into. De Blasio has been ambivalent - at best - about the steps taken by the Bloomberg administration to make New York more pedestrian and bicycle friendly. He has said he has mixed feelings about the new pedestrian plazas in Times Square, opposed a congestion charge for automobiles entering Manhattan, and opposed a proposed bike lane near his home in Brooklyn.
Humans and cars don't mix well - no better than horses and cars. The solution is to have more space for humans and bicyclists and less for cars. Easy.
Bloomberg News recently reported on what is being called "The Sharing Economy." In the article, the reporter suggests that technology that allows for sharing of consumer durables could be the next big boost in efficiency in the developed and developing world - in turn powering economic growth. Despite some hand-waving, so far so good. But from there the reporter gets stuck in the idea that a healthy economy makes a lot of stuff. Sharing will reduce the demand for manufactured stuff, and isn't this bad? The answer, often neglected by politicians and economists alike, is of course no. And while technology has helped make sharing easier, a much bigger factor is at play: cities.
Car sharing is the most prominent example of a new sharing model. Instead of buying a car and then letting it sit idle for at least 95 percent of the day, urban dwellers can join a car-share service. Here, companies such as Zipcar strategically place cars around a city. Customers can then rent the cars in hourly blocks over the internet and unlock the cars from their smartphones. The car is then used much more often than the average private automobile - the article cites a statistic saying that a car-share vehicle can replace 14 single-owner cars.
Next, the reporter makes a leap of logic and abstraction, scaling this argument up to other consumer durable goods - handbags were mentioned in the article. In the overly abstract economic wording of the article, the "capital" - that is, the physical stuff - is used much more efficiently in the overall economy. Higher efficiency - you can do more with the same amount of stuff - in turn powers more economic activity and higher GDP.
But then towards the end of the article, the reporter takes a U-turn. By reducing demand for vehicles, the reporter writes, car-sharing will perhaps hurt the economy. So the logic seems to go: sharing will improve the economy, unless it doesn't.
The reporter is clearly confused about the issue. But so, unfortunately, are a lot of people - including even Nobel Prize winners. Paul Krugman has written in his blog that the U.S. needs to get back to manufacturing stuff. In his mind, and many others, selling services - like a car-share - is inferior to selling stuff - like cars.
The making-stuff-is-best idea is wrong for at least a few reasons. Consider again the car-sharing example. An urban-dwelling young person usually has plenty of reasons for ditching a car in favor of a car-sharing service. Having a car can often be a hassle - finding parking, filling gas, washing the car, and taking it in for repairs. Some people enjoy doing these tasks, but I would bet most don't. With a car-sharing service, a person can skip all these pesky chores and pay someone else to do them. It still comes out cheaper, since the costs of car maintenance are spread among all the car-sharing customers. Jobs are created, and the young urban dweller can spend more of their time doing things they enjoy.
If car-sharing really took off, it could very well mean fewer cars are sold, and this would impact the car industry. But the flip side is that the young urban dwellers have more money in their pockets. They can then go out and use that money on restaurant meals, and have a few beers out on the town. More restaurants and bars open. This is what is meant by increased efficiency. The young urban dweller can still get out of town or pick up some furniture with the car-share service, but in addition he or she can also buy an extra restaurant meal or concert ticket.
But in a way, the sharing economy is not anything new. And as my example of the young urban dweller suggests, it has less to do with technology and more to do with cities. When I lived in New York City I would always deliver my laundry to a laundromat, and for about 11 dollars I could pick it up the next day - smelling fresh and nicely folded. It was great. As far as I know, no laundromats exist in Bergen, and I had to buy my own washing machine, which stands idle 99 percent of the time. What a waste! A laundromat, in other words, is really a washing-machine-share service. A restaurant could be considered a kitchen-share with some cooking thrown in. A cafe is an espresso-machine-share. These services are most often found in cities because that is where people live densely enough to make sharing services viable.
In a lot of ways The Sharing Economy is then really just The Urban Economy. And the urban economy works pretty well. Urban dwellers tend to be not only economically more efficient - getting more out of the stuff around them - but also more energy efficient. They are also probably happier. A lot of research has shown that physical things rarely lead to a sustained feeling of satisfaction or happiness. Experiences, on the other hand, can make us happier. Experiences like a good restaurant meal, a memorable concert, or maybe even the sight of nicely folded laundry.
I was in Minnesota over Christmas, visiting my family. This was my first time back in Minnesota in two years, and not coincidentally the first time I had driven in two years. And I did a lot of driving. A combination of staying in suburbs designed around cars and temperatures that dipped to 20 degrees below zero Celsius made driving a necessity. But there were signs that even here, transportation choice was improving.
I stayed with my mom in Rosemount - just south of Minneapolis. One surprising addition to the streetscape since the last time I was there was the appearance of bike lanes and bike-route signage. No one was using them in the 20-below temperatures, nor were they plowed even if a brave soul did venture out. Still, they appeared plenty wide and were well marked. Most people who live in the area have cars, but maybe the lanes could inspire people to occasionally use a bike to get around. In any case, I like the effort.
In the short run, what might have the bigger effect is the addition of a new Bus Rapid Transit (BRT) line running from the southern suburbs and connecting with Minneapolis' light rail line. The idea behind these lines is to mimic the regularity and transparency of a light rail line while using the existing street infrastructure. The specially designed buses run at regular and frequent intervals - a schedule is not needed. Even the stations are meant to mimic typical light rail stations. This is a big improvement over the previous bus service to the area. I once tried to use the local bus service to get from the airport to the southern suburbs, and it involved several transfers and a lot of waiting.
I was recently asked to write a short executive summary of some of my research for the program that funds my postdoc position. I dutifully complied, and even enjoyed trying to boil down my work and findings to a few easy-to-understand paragraphs. For the most part, the program leaders were happy with the results - except for one thing: they found it wrong that I kept saying 'I' in my report. I have heard it before: 'I' sounds too personal. Research is supposed to be impersonal, objective, pure! 'I' gives the dangerous impression that there is an actual, fallible human behind this grand scholarship. This is of course nonsense and leads to a lot of bad, hard-to-read writing, and perhaps worse, bad research.
One odd tradition many academics have adopted to eliminate the 'I' from research writing is to substitute in 'we'. This of course is quite natural when two or more authors write together. But I always find it strange when single-authored papers insist on using 'we'. In the English language, little ambiguity exists over when to use 'I' and 'we': 'I' is singular, 'we' is plural. Why do some academics feel the need to diverge from basic grammar? Nearly all of my papers so far have been single-authored. I have received plenty of feedback and advice along the way, but in the end, I take sole ownership and, more importantly, responsibility for what I do and write. Using 'I' is an honest indication of that.
Switching 'we' for 'I' in single-authored papers is a bit strange, but it doesn't really impact the readability and clarity of the prose. A worse sin is when authors try to make their writing sound more 'objective' or 'academic' by getting rid of the personal pronoun altogether. Instead of writing 'I show that x y z', you might see something like 'the research shows that x y z', or even worse, the passive 'it is shown that.' This type of writing is wrong and bad at many different levels. First and most importantly, it leads to passive, hard-to-understand prose. When you give inanimate objects the magical ability to act ('research shows'), it can quickly lead to ambiguity. Using a passive voice ('it is shown') leaves unclear what 'it' is, and the structure of passive sentences often becomes unwieldy. Finally, by eliminating the personal pronoun, the researcher is not being honest. Research, like any other craft, involves trade-offs and decisions. A researcher needs to be humble about the imperfections and ambiguity of their work and honest about the decisions and trade-offs they made. A failure to do this can lead to both bad writing and sloppy thinking. Good research does not benefit from either.
p.s. One place where it is appropriate for a single author to use 'we' is when including the reader: "We can see in the table that x y z."
Some goods and services are not well handled by free markets. Providing health care is one place where markets can often fail - all sorts of ethical and informational factors get in the way of a well-functioning market. Parking, on the other hand, is not one of these goods. But strangely, it is often treated as one - leading to a lot of frustration and waste. Smart city planners should treat parking like the private, excludable good it is; at the same time, they should question whether massive amounts of parking in crowded cities really is the best use of valuable space.
Oslo recently announced that it would take steps to start charging for around half of the city's 18,000 free parking spaces. This announcement came after pilot projects in certain areas of the city had shown that introducing paid parking was hugely popular - even among car owners. Where locals previously had to spend hours every week trolling through their neighborhood looking for an open parking space, after the introduction of paid parking, finding a space suddenly became massively easier. Given the relatively modest price of the parking permits, most motorists found this to be a fantastic trade-off.
Yet Oslo's actions, while a good start, are not nearly good enough. First, there is still the case of the remaining thousands of free parking spaces. More importantly, in a city where street parking takes up a large part of the public urban space, little emphasis has been given to alternative uses of this space. Compared to its Nordic capital-city neighbors like Copenhagen and Stockholm, Oslo has done an awful job of making the city bicycle-friendly. Removing just a thousand strategically placed parking spaces would free up space for the interconnected bicycle infrastructure that Oslo sorely needs. Another thousand would give space to wider sidewalks, more trees, perhaps even small playgrounds or expanded parks. Planners need to start seeing parking in terms of opportunity cost - not just what drivers are willing to pay for it, but also what else that space could be used for.
Removing parking spaces doesn't necessarily mean it will be harder to find parking either - but parking will, and should, become more expensive. Drivers who wish to use a street-level parking space should pay the going market price for it. The best way for a city to ensure proper pricing would be to hand over control of the parking either to a city-owned company or, in the form of a long-term lease, to a private company. These entities would operate the parking spaces and be in charge of setting an appropriate charge. Arguably, such an entity would have a form of local monopoly on street-level parking - but it would face competition from private parking garages and from people choosing buses, bikes and feet to get around. Certainly not a bad thing.
With the development of effective and inexpensive solar and wind power technology, investing in energy projects has gone from being the exclusive domain of big utilities to something open to anyone from a farmer to a suburban homeowner. Over the last couple of weeks, I have heard several interesting presentations that explore the actions of these new small-time energy investors.
I wrote a few weeks ago about the idea from Matti Liski of Aalto University in Finland that subsidies for renewable energy investment might make energy markets more competitive by encouraging new entrants - leading to lower prices for consumers and lower profits for big utilities. Loss-making utilities in Europe - hit by low electricity prices - make this idea seem entirely sensible.
Kristin Linnerud from the Norwegian Center for Environmental and Climate Research (CICERO) and some colleagues have asked another interesting question: how do these new small investors behave? In a paper currently out for review, she finds that small, inexperienced investors tend to be quicker to invest and to require lower returns. She explains this with reference to the psychology and behavioral economics literature - humans often make simplifying rule-of-thumb decisions in complex situations. A Norwegian farmer looking at investing in a small run-of-the-river plant will tend to evaluate whether it is likely to make money or not, rather than carry out more complex calculations involving potential risk.
Another interesting presentation was by Xiaozi Liu, currently of Greenpeace International. She presented an overview at NHH of crowdsourced funding of renewable energy projects. The idea here is that companies would solicit individuals to fund portions of a renewable energy project. A company in the US is already operating on this model - building and funding solar power plants and allowing individuals to invest relatively small amounts in return for a share of the proceeds. An interesting idea that came up during the discussion was that individuals might be willing to fund these types of projects at a lower expected return. Crowdsourcing might then be a way of reducing the financing costs of solar and other "green" energy investments.
The cost of wind and especially solar power equipment has dropped drastically over the last few years. Solar cells now account for a mere 20-30 percent of the total cost of a given installation. Reducing costs in other areas, especially financing, is then increasingly important, and there seems to be ample room for optimism that here too costs can come down further.
When I lived in New York City from 2004 to 2006, the thought of using a bicycle to get around didn't enter my mind. Roads were for cars, and only half-crazed bicycle messengers dared get around on two wheels. But in the seven years that have passed, New York has made astonishing strides in making the city both more bicycle friendly and more pedestrian friendly. This was done, essentially, with a philosophy of doing a quick, half-assed job while leaving more permanent solutions and the requisite planning for the long term. Other cities struggling to implement ambitious cycling and pedestrian goals could learn from this approach.
Janette Sadik-Khan, New York's transportation commissioner under Michael Bloomberg, explains how the city got results fast by using temporary solutions. Times Square was transformed in a matter of weeks into a true pedestrian plaza using paint, planters, and even lawn chairs. Protected bike lanes were created by moving car parking about a meter from the curb and splashing some green paint on the road. By using inexpensive and temporary solutions, the city could both test how well these measures worked before committing to permanent ones and make the permanent solutions politically feasible.
Here in Norway, the two largest cities - Oslo and Bergen - have had plans for creating interconnected bicycling networks for decades, but progress is depressingly slow. Bike paths are created a few kilometers at a time, if at all. The result is a patchwork of bike lanes and bike paths that can come to a halt without warning. Bicyclists are forced onto sidewalks - annoying pedestrians - or out into traffic - annoying drivers and endangering themselves. Planners and politicians have been known to excuse the slow progress by pointing out how little space there is in these urban centers. The dubious logic of prioritizing cars when space is tight does not seem to have dawned on them.
Norwegian politicians and planners should learn from New York's example. Use quick and temporary solutions in trial periods and gather as much data as possible. Leave the complicated planning and long-term solutions for another day. The politicians and planners will likely find that controversial bike lanes, reduced parking, and lower speed limits suddenly become less controversial as people experience their benefits.
A big decision was made for the future of Bergen yesterday. The members of the local conservative party held an internal vote on whether an extension of the light-rail line that will cross the city center and continue northwards should be built at street level across the historic "Bryggen" (wharf) or put in a tunnel. A large majority in the party voted for the tunnel option. This matters because the other parties in the city council are roughly evenly split on the issue, so the final decision comes down to the conservative party's votes. If the tunnel option ultimately gets the go-ahead, it will likely prove a costly mistake and an example of populist, misguided politics trumping good city planning.
Most of the debate has centered on the effect of the light rail on the UNESCO-listed Bryggen. The most extreme of the tunnel advocates claim that the light-rail line will ruin Bryggen. Considering that a heavily trafficked four-lane road currently runs where the light rail would go, this is clearly bluster. The exhaust, noise, and danger of over 10,000 cars and buses passing per day is an order of magnitude more damaging than 100 electric light-rail crossings. While tunnel advocates have dangled the possibility of a vehicle-free Bryggen, none have committed to a timeline. The only concrete plan for making Bryggen car-free comes from putting the light-rail line there. Without the light rail, much of the political capital for removing cars from Bryggen also disappears.
But the debate over the tunnel option has much wider, and arguably more important, implications than just the status of Bryggen. A tunnel in the center of the city would also mean a tunnel along a long stretch northward. A light-rail line over Bryggen, on the other hand, would continue along what is now a heavily trafficked highway. This highway is currently the source of much noise and air pollution, and it bisects the northern neighborhoods of Bergen. With the light rail at street level, this traffic would be diverted to a tunnel and ring roads - eliminating much of the noise and pollution.
Moreover, experience from the existing light-rail line shows that high-density developments spring up along the route. People want to live close to the light rail; no one wants to live close to a highway. By choosing the tunnel option, the city would be turning its back on a once-in-a-century opportunity to revitalize and improve a large swath of Bergen.
Finally, as the perpetually delayed and over-budget subways in Rome and on the east side of Manhattan show, building tunnels in dense city centers can be a complicated and messy undertaking. Some estimates already put the tunnel option at 2.5 billion kroner more than a street-level route, and it will, even in the best circumstances, add several years of planning and construction. And history says that with such a complex project, even the best-laid plans can go awry. In hindsight, members of the conservative party may come to regret voting for what could become a major boondoggle for Bergen.
It is a typically grim, gray, and drizzly November in Stockholm, but there are still plenty of people getting around on their bikes. The postal service in Stockholm is an enthusiastic user of utility bikes to deliver the mail. They are easy to get around on, and they never have a problem finding parking - which they need to do several hundred times over the course of a day. Technology is now making things even easier for these efficient post(wo)men. The picture below shows a post-bike with an electric assist - the first I have seen.
The drivers of the two expensive automobiles below have unintentionally made an excellent visual point about the use and abuse of valuable city space. They have parked in front of a city-bicycle stand - though the bikes are missing, since the system closes down for the winter in November. I counted 14 bicycle parking spaces along the length of these two cars.
On a related note, the city of Stockholm and the Swedish national government recently made big news by announcing an agreement on a major extension of the city's subway system. I'm usually for increased investment in public transit, but here I am skeptical. Subways are extremely expensive and slow to build. Moreover, Stockholm has obscene amounts of on-street parking and many wide, French-style avenues. A much more cost-effective and quicker solution would be to build a light-rail system on existing surface streets. The cost in space would mainly come in the form of parking for Land Rovers and BMWs.
Today I listened to a presentation of a theoretical paper and, to my surprise, I found it interesting. The paper is by Matti Liski from Aalto University in Finland, and a working paper can be found here. The idea is quite simple. Traditionally it has been hard to get started as an electricity company - there are high barriers to entry. Liski mentioned the uncertainty of future fuel costs as one such barrier, but I would guess that the cost of financing and operating a large coal, gas, or nuclear plant is an equally important factor, if not more so. High barriers to entry mean little competition in supply and, in turn, higher prices for consumers.
The paper then suggests that subsidies for renewable energy may, on the whole, not be that costly for society. The reason is that while subsidies may be expensive, they encourage new entrants into electricity markets, leading to a more competitive market and lower prices. Consumers pay for the subsidies, but they also reap the rewards of lower overall prices.
One of the reasons I like this paper is that it gels nicely with many of the stories coming out of the electricity sector at the moment. Big electricity companies complain that they are no longer able to make reliable profits, and several have racked up big losses over the last couple of years. Many clearly resent the subsidies and regulations that are eroding their market power. Just one example was a recent story in Bloomberg News about Tea Party advocates and green advocates teaming up against a local electricity monopoly to support solar power.
The large-scale introduction of renewables into electricity markets is leading to many surprising outcomes.
Some months ago I wrote in this blog that politicians should, and probably would, begin charging for the use of roads. I saw in the Financial Times today that Germany is doing just that... for foreigners. Well, officially everyone has to pay the fee for use of the autobahn, but residents will apparently get the money back as a tax rebate, since they have already paid for it through their road tax. A substantial percentage of Germans do not drive, and I wonder if making this implicit fee explicit will have political consequences - perhaps a push toward a true pay-to-use system, such as toll roads.
The buzz around the corridors of my workplace has recently been about sports cars. Perhaps not that surprising given the number of men in their 50s who work in my department. But one car in particular, the Tesla Model S, is generating most of the buzz. I know of two who have already placed orders and I wouldn’t be surprised if more followed. The Tesla is a big, fast and expensive sports car. But it is also fully electric and so it gets the label of “green” and benefits from some generous Norwegian government incentives.
Getting people to switch from gasoline and diesel cars to electric is probably, on the whole, a good thing. But heavily subsidizing sports cars for the well-off has obvious problems. The point of the heavy subsidies was to kick-start the market for electric cars. In Norway, that has largely been accomplished. The government should, and likely will, begin to roll back the incentives.
The incentives for purchasing an electric vehicle in Norway may be the most generous in the world. Electric car buyers avoid the hefty taxes and fees that typically double the price of a car. In addition, electric car drivers avoid road tolls and even get free passage on ferries. And then there is the fuel cost: Norway has some of the highest gasoline and diesel prices in the world, but electricity prices are moderate. Moreover, free charging is available in many locations.
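The fuel-cost advantage is easy to sketch with rough numbers. All the figures below are my own round-number assumptions for illustration, not data from this post:

```python
# Back-of-the-envelope fuel cost comparison, petrol vs. electric.
# All prices and consumption figures are illustrative assumptions.

petrol_price = 15.0   # NOK per litre (among the world's highest)
petrol_use = 0.06     # litres per km for a typical petrol car
elec_price = 1.0      # NOK per kWh (moderate Norwegian price)
elec_use = 0.2        # kWh per km for a typical electric car

petrol_cost_per_km = petrol_price * petrol_use   # 0.90 NOK/km
elec_cost_per_km = elec_price * elec_use         # 0.20 NOK/km

annual_km = 15_000  # assumed yearly driving distance
print(f"Petrol:   {petrol_cost_per_km * annual_km:,.0f} NOK/year")
print(f"Electric: {elec_cost_per_km * annual_km:,.0f} NOK/year")
```

Under these assumptions the running cost per kilometer is several times lower for the electric car - and that is before counting the avoided purchase taxes, tolls, and ferry fees.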
Back when electric cars were little more than supercharged golf carts from small producers such as Reva and Think!, these generous incentives led a few early adopters to start zipping around Norway's cities. But now that major manufacturers have begun making electric cars similar in size and utility to normal gas-powered cars, sales have taken off. The electric Nissan Leaf has been one of the best-selling cars in Norway over the last year - just behind VW's Golf. Now that the Tesla has gone on sale, it too has become one of the best-selling cars in Norway.
Electric cars have a solid foothold in Norway, and the infrastructure for them - like charging stations - is spreading quickly. But that means much of the rationale for the subsidies is also disappearing. Electric cars are not good for the environment, despite what their manufacturers might want you to believe; they are only less bad than their gas- or diesel-sipping peers. They still create congestion on roads and in dense city centers. They pose a danger to pedestrians and cyclists - maybe even more than noisier petrol cars. And they still cause emissions, albeit indirectly, through the electricity they consume.
There is also the issue of fairness. A wealthy individual looking to buy a second, electric car can skip import and value-added taxes; a student looking to buy a bicycle gets no such help. Moreover, if the government's aim is to reduce pollution and improve the environment, it would likely get much more bang for its buck by investing in better public transport and bicycling infrastructure.
I recently put out a NHH discussion paper titled “The Silver Lining of Price Spikes: How price spikes can help overcome the energy efficiency gap.” As the title suggests, the research question revolves around what is alternatively called the energy efficiency gap or paradox, which is the observation that consumers and businesses do not always invest in energy efficiency even when the returns to doing so appear to be ample.
The inspiration for the article came from my former master's student Louis Pauchon, who wrote a thesis comparing energy efficiency policies in Norway and France. A central part of his argument was that efforts to improve energy efficiency were more successful in Norway than in France because of the greater variability of prices in Norway. He presented a chart showing a correlation between heat pump installations and seasonal variation in prices. I suggested that he also look at whether there was a correlation between electricity prices and Google searches for heat pumps in Norway. He produced a chart that appeared to show just that.
My paper narrows in on the informational role that price spikes might play. I use data on Google searches and prices in Norway, along with a few other time series, and argue that price spikes, by creating attention around prices and energy efficiency, can serve a useful role.
To support my argument I first needed to define a price spike - something that is harder in practice than it might first appear, since no generally recognised definition exists. I skirted the issue by defining a range of price spikes: essentially, I first smoothed the prices to various degrees and then calculated deviations from these smoothed curves.
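The smooth-then-deviate idea can be sketched in a few lines. This is my own minimal illustration with a synthetic price series and an arbitrary smoothing window - not the paper's actual data, smoother, or window lengths:

```python
# Sketch of a spike measure: smooth the price series, then take
# deviations from the smoothed curve. Synthetic data for illustration.
import numpy as np

def price_deviations(prices, window):
    """Deviations of prices from a moving average of length `window`."""
    prices = np.asarray(prices, dtype=float)
    kernel = np.ones(window) / window
    smoothed = np.convolve(prices, kernel, mode="same")
    return prices - smoothed

# A year of synthetic daily prices: a random walk plus one large spike
rng = np.random.default_rng(0)
prices = 30 + np.cumsum(rng.normal(0, 0.5, 365))
prices[200] += 25  # inject a spike on day 200

deviations = price_deviations(prices, window=30)
print("Largest deviation found on day", int(np.argmax(deviations)))
```

Varying `window` corresponds to smoothing "to various degrees": a short window flags only the sharpest spikes, while a long window lets slower price run-ups count as deviations too.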
I then ran regressions to see if I could say with confidence that there really was a correlation between price spikes and searches for heat pumps. The results indicated that there was good reason to believe that a strong correlation exists between price spikes and searches for heat pumps no matter how narrowly I defined price spikes.
The question then remains whether it is correct to interpret this correlation as a causal, informational effect. First, the correlation could simply reflect that demand for heat pumps goes up when prices go up - a conclusion hardly worth a research paper.
To deal with this I included the smoothed terms in the regressions and argued that these should capture the price-demand effect, since consumers essentially pay a smoothed price - at a minimum, a monthly average. Moreover, if only the price-demand effect were at play, the estimated coefficient on the deviations should decrease as I narrow the definition of price spikes. But the opposite appears to happen - the estimated effect actually tended to become bigger the more narrowly price spikes were defined.
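The structure of that regression can be illustrated with a toy example: regress searches on both the smoothed price level (the price-demand channel) and the deviations (the attention channel). The data here are entirely synthetic and the coefficients are made up - this shows only the form of the specification, not the paper's results:

```python
# Toy version of the specification: searches ~ smoothed price + deviations.
# Synthetic data; coefficients 0.5 and 3.0 are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)
n = 365
smoothed = 30 + np.cumsum(rng.normal(0, 0.2, n))           # slow price level
spikes = rng.exponential(1.0, n) * (rng.random(n) < 0.05)  # rare deviations
searches = 0.5 * smoothed + 3.0 * spikes + rng.normal(0, 1, n)

# OLS with an intercept, the smoothed term, and the spike term
X = np.column_stack([np.ones(n), smoothed, spikes])
coef, *_ = np.linalg.lstsq(X, searches, rcond=None)
print(f"smoothed-price coefficient: {coef[1]:.2f}")
print(f"spike coefficient:          {coef[2]:.2f}")
```

In this setup, a significant coefficient on the spike term over and above the smoothed-price term is what would point to an attention effect rather than a pure price-demand response.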
This was what I would consider a fun paper to write. The methods and conclusions were all relatively simple, but the point that emerges has - as far as I know - not been discussed in the literature. It is not clear to me whether there are any important policy implications from this research. If forced to point to something, it might be that politicians and policymakers should be a bit more relaxed about price spikes. The news and media attention they generate - while uncomfortable for politicians - may be good for an energy-efficient society.