Apple Core Text Programming Guide

Source PDF:
http://developer.apple.com/library/ios/documentation/StringsTextFonts/Conceptual/CoreText_Programming/CoreText_Programming.pdf

Core Text Programming Guide
2010-03-03 | © 2010 Apple Inc. All Rights Reserved.

Contents

Introduction
  Organization of This Document
  See Also
Core Text Overview
  OS X Text Technologies
  Design Goals and Principles
  Core Text Features and Capabilities
    System Data Types and Services
    Core Text Input
    Characters and Glyphs
  Core Text Objects
    Layout Objects
    Font Objects
Common Operations
  Simple Paragraphs
  Simple Text Labels
  Columnar Layout
  Manual Line Breaking
  Font Creation and Storage
  Accessing Font Metrics
  Creating Related Fonts
Document Revision History

Figures and Listings

Core Text Overview
  Figure 1-1  Glyphs of the character A
  Figure 1-2  Ligatures
  Figure 1-3  Text layout data flow
  Figure 1-4  A frame object containing lines and glyph runs
  Figure 1-5  Creating a font from a font descriptor

Common Operations
  Listing 2-1  Typesetting a simple paragraph
  Listing 2-2  Typesetting a simple text label
  Listing 2-3  Performing columnar text layout
  Listing 2-4  Performing manual line breaking
  Listing 2-5  Creating a font descriptor from a name and point size
  Listing 2-6  Creating a font descriptor from a family and traits
  Listing 2-7  Creating a font from a font descriptor
  Listing 2-8  Serializing a font
  Listing 2-9  Creating a font from serialized data
  Listing 2-10  Calculating line height
  Listing 2-11  Getting glyphs for characters
  Listing 2-12  Changing traits of a font
  Listing 2-13  Converting a font to another family

Introduction

Core Text is an advanced, low-level technology for laying out text and handling fonts. It is designed for high performance and ease of use. The Core Text API, introduced in OS X v10.5, is accessible from all OS X application environments. It is also available in iOS 3.2.

The Core Text layout engine is designed specifically to make simple text layout operations easy to do and to avoid side effects. The Core Text font programming interface is complementary to the Core Text layout engine and is designed to handle Unicode fonts natively, unifying disparate OS X font facilities into a single comprehensive programming interface.

This document is intended for developers who need to do text layout and font handling at a low level. If you can develop your application using higher-level constructs, such as NSTextView, then you should use the Cocoa text system, introduced in Text System Overview. If, on the other hand, you need to render text directly into a Core Graphics context, then you should use Core Text. More information about the position of Core Text among other OS X text technologies is presented in "OS X Text Technologies."

Important: This document has not been updated to address the use of Core Text in iOS 3.2.

Organization of This Document

This document is organized into the following chapters:
● "Core Text Overview" describes the Core Text system in terms of its design goals and feature set. It also introduces the opaque types that encapsulate the text layout and font handling capabilities of the system.
● "Common Operations" presents snippets of code with commentary illustrating typical uses of the main Core Text opaque types.

See Also

In addition to this document, there are several that cover more specific aspects of Core Text or describe the software services used by Core Text.
● Core Text Reference Collection provides complete reference information for the Core Text layout and font API.
● CoreTextTest is a sample code project that shows how to use Core Text in the context of a complete Carbon application.
● CoreTextArcCocoa is a sample code project that illustrates the use of fonts, lines, and runs in a Core Text Cocoa application.
● Core Foundation Design Concepts and Core Foundation Framework Reference describe Core Foundation, a framework that provides abstractions for common data types and fundamental software services used by Core Text.

The following documents provide entry points to the documentation describing the Cocoa text system.
● Text System Overview gives an introduction to the Cocoa text system.
● Text Layout Programming Guide for Cocoa describes the Cocoa text layout engine.

Core Text Overview

The Core Text framework is an advanced, low-level technology for laying out text and handling fonts. Designed for high performance and ease of use, the Core Text layout engine is up to twice as fast as ATSUI (Apple Type Services for Unicode Imaging). The Core Text layout API is simple, consistent, and tightly integrated with Core Foundation, Core Graphics, and Cocoa.

The Core Text font API is complementary to the Core Text layout engine. Core Text font technology is designed to handle Unicode fonts natively, bridging the gap between Carbon and Cocoa font references, and providing efficient font handling for Core Text layout. Core Text brings the capabilities and coherent design of Cocoa text and fonts to a broader, lower-level client base.

OS X Text Technologies

The Macintosh operating system has provided sophisticated text handling and typesetting capabilities from its beginning. In fact, these features sparked the desktop publishing revolution. Core Text is the most modern text-handling technology on the platform. It is designed specifically for OS X and is written in C, so it can be called from any language in the system. It is positioned as a core technology to provide consistent, high-performance text services to other frameworks throughout the system, and the Core Text API is accessible to applications that need to use it directly. Core Text resides in the Application Services umbrella framework (ApplicationServices) so that it is callable from both Carbon and Cocoa and has all of the lower-level services it needs.

Core Text is not meant to replace the Cocoa text system, although it provides the underlying implementation for many Cocoa text technologies. If you can deal with high-level constructs, such as text views, you can probably use Cocoa. For this reason, Cocoa developers typically have no need to use Core Text directly. Carbon developers, on the other hand, will find Core Text faster and easier to use, in many cases, than preexisting OS X text layout and font APIs.

To decide whether Core Text is the right OS X text technology for your application, apply the following guidelines:
● If you can, use Cocoa text. The NSTextView class is the most advanced, full-featured, flexible text view in OS X. For small amounts of text, use NSTextField.
● To display web content in your application, use Web Kit.
● If you need to use Carbon only, consider using NSTextView with HICocoaView.
● If you need a lower-level API for drawing any kind of text into a Quartz graphics context (CGContext), consider using Core Text directly.
Generally speaking, Core Text is for applications that need a low-level text-handling technology correlating with the Core Graphics framework (Quartz). If you work directly with Quartz and you need to draw some text, use Core Text. If, for example, you have your own page layout engine (you have some text and you know where it needs to go in your view), you can use Core Text to generate the glyphs and position them relative to each other with all the features of fine typesetting, such as kerning, ligatures, line-breaking, and justification.

Design Goals and Principles

Core Text is designed to provide the following benefits:
● A comprehensive, unified set of text-layout and font APIs
● High performance and ease of use
● Tight integration with Cocoa, Core Foundation, and Core Graphics (Quartz)
● Native Unicode handling
● 64-bit application support
● Clean, simple, consistent API design
● Simple interfaces for simple operations
● A flexible interface to layout and glyph data
● A predictable cost structure and rational division of labor

A primary design goal of Core Text layout is to make simple things easy to do. So, for example, if you want to draw a paragraph of text or a simple text label on the screen, you don't need much code. A corollary principle of the Core Text design is that clients are not required to pay for features they don't use.

The objects defined by Core Text opaque types provide a progression from simplicity to complexity, in terms of their use and interface. That is, higher-level objects do more for you, and so they are easier to use (although they may be more complex internally). For example, the highest-level object in Core Text is the framesetter, which fills a path (defined by a CGPath object representing a rectangle) with text. The framesetter object uses other Core Text objects, such as typesetter, line, and glyph run objects, to accomplish its work: creating frame objects, which are lines of glyphs laid out within a shape. Clients who simply need to lay out a paragraph need only work with the framesetter. Clients who need to intervene in the text layout process at a lower level can deal with lower-level objects, such as line objects. Line objects can draw themselves individually or be used to obtain glyph information. With Core Text you use the highest-level object you can to get your job done.

Core Text Features and Capabilities

Core Text performs text layout and font access. The text layout engine generates glyphs from characters and positions the glyphs into glyph runs, lines, and multiline frames. It also provides glyph- and layout-related data, such as glyph positioning and measurement of lines and frames. The API handles character attributes and paragraph styles, including various types of tab styles and positioning.

The Core Text font API brings to Carbon developers the same capabilities enjoyed by Cocoa developers through NSFont and NSFontDescriptor. The API provides font viewing and selecting. It provides font references, font descriptors (objects that encapsulate font data sufficient to instantiate a font reference), and easy access to font data. It also provides support for multiple master fonts, font variations, font cascading, and font linking. The Core Text font API is designed to be very complete, so that you don't have to go to different layers to do what you need to do.
System Data Types and Services

Core Text uses system data types and services wherever possible, and you use the same conventions that pertain to the other core frameworks in OS X. So, for example, Core Text uses Core Foundation objects for many input and output parameters, enabling them to be retained, released, and stored in Core Foundation collection classes. Other objects handled by Core Text are provided by the Core Graphics framework, for example, CGPath objects. Moreover, because many Core Foundation objects are toll-free bridged with Cocoa Foundation objects, you can usually use Foundation objects in place of Core Foundation objects passed into Core Text functions. Use of these standard types and toll-free bridging ensure that you don't have to perform expensive type conversions to get data into and out of Core Text.

Core Text is built to work directly with Core Graphics, also known as Quartz, which is the high-speed graphics rendering engine that handles two-dimensional imaging at the lowest level in OS X. Quartz is the only way to get glyphs drawn at a fundamental level, and, because Core Text provides all data in a form directly usable by Quartz, the result is high-performance text rendering.

Core Text Input

The input type most basic to Core Text is the Core Foundation attributed string, represented by CFAttributedStringRef or its Cocoa counterpart, NSAttributedString, which are toll-free bridged. The attributes are key-value pairs that define style characteristics of the characters in the string, which are grouped in ranges that share the same attributes. Examples of text attributes are font and color. The attributes themselves are passed into attributed strings, and retrieved from them, using CFDictionary objects. (Though CFDictionaryRef and NSDictionary are also toll-free bridged, the individual attribute objects stored in the dictionary may not be.) The typesetting mechanism in Core Text uses the information in the attributed string to perform character-to-glyph conversion.

Characters and Glyphs

One of the most important capabilities of fine typesetting is character-to-glyph conversion. It is important to distinguish between characters and glyphs in discussing a text layout engine. Characters are essentially numbers representing code points in a character set or encoding scheme, such as Unicode, the character set used for all text in OS X. The Unicode standard provides a unique number for every character in every modern written language in the world, independent of the platform, program, and programming language being used.

A glyph is a graphic shape used to depict a character. Glyphs are also represented by numeric codes, called glyph codes, that are indexes into a particular font. Glyphs are selected during composition and layout processing by the character-to-glyph conversion process. There are any number of glyphs that correspond to a particular character. For example, the character "uppercase A" has different glyphs for different typefaces (such as Helvetica and Times) and type styles (such as bold and italic). Figure 1-1 shows various glyphs, all of which represent an "uppercase A."

Figure 1-1  Glyphs of the character A

Moreover, the correspondence between characters and glyphs is not one to one, and the context within which a character appears can affect the glyph chosen to represent it. For example, in many fonts an "f" and "l" appearing side-by-side in a character string are replaced by a ligature, which is a single glyph depicting the letters joined together. Figure 1-2 shows two examples of individual characters and the single-glyph ligatures often used when they are adjacent. Character-to-glyph conversion is a complex and difficult task that Core Text performs quickly and efficiently.

Figure 1-2  Ligatures
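To make the distinction concrete, character-to-glyph mapping can also be queried directly from a font object. The following minimal sketch is not one of this guide's numbered listings, and the font name and characters are arbitrary; it simply asks a CTFont for the glyph codes of a few characters.

// Sketch only: ask a font directly for the glyph codes of a few characters.
// This shortcut maps one character to one glyph, so it does not perform the
// contextual substitutions (such as ligatures) that the full layout engine applies.
CTFontRef font = CTFontCreateWithName(CFSTR("Helvetica"), 24.0, NULL);

UniChar characters[] = { 'A', 'f', 'l' };
CGGlyph glyphs[3];
CFIndex count = sizeof(characters) / sizeof(characters[0]);

// Returns true only if the font maps every character in the array.
bool mapped = CTFontGetGlyphsForCharacters(font, characters, glyphs, count);
if (mapped) {
    // glyphs[] now holds the glyph codes (indexes into this particular font).
}
CFRelease(font);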
Core Text Objects

Core Text objects are based on the corresponding opaque types defined by the framework. In the sections that follow, you learn how the primary Core Text objects interact to accomplish various client tasks.

Layout Objects

Layout objects make up the Core Text layout engine. This section discusses the primary layout objects: framesetter, frame, typesetter, line, and glyph run objects. In addition, this section briefly discusses the other Core Text layout objects: paragraph styles, text tabs, and glyph info objects.

Framesetters and Frames

The framesetter is the highest-level object in the Core Text layout engine, represented by the CTFramesetter opaque type. A framesetter generates text frames by filling a path with text. That is, CTFramesetter is an object factory for CTFrame objects that are ready to draw. The framesetter takes an attributed string object (CFAttributedString) and a shape descriptor object (CGPath) and calls into the typesetter to create line objects that fill that shape. The output is a frame object containing an array of lines. This array of lines is a paragraph, a multiline layout. The frame can draw itself directly into a graphics context. You can also retrieve the lines to manipulate before drawing. For example, you might adjust their positioning. Figure 1-3 shows the data flow among objects performing text layout.

Figure 1-3  Text layout data flow (a CTFramesetter takes a CFAttributedString and a CGPath, uses a CTTypesetter, and produces a CTFrame)

The framesetter applies paragraph styles to the frame text as it is laid out. Paragraph styles are represented in Core Text by objects storing attributes that affect paragraph layout. Among these attributes are alignment, tab stops, writing direction, line-breaking mode, and indentation settings.

It's advantageous to use the framesetter to perform the common operation of typesetting a multiline paragraph because it handles all of the details of producing frames, instantiating other objects, such as the typesetter, as needed. The CTFramesetter opaque type provides functions to create a framesetter with an attributed string, to create frame objects, and to return its typesetter. As with all Core Text objects, CTFramesetter can also return its Core Foundation type identifier.
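The essential calls are few. The following minimal sketch shows them in isolation; the attributed string, path, and context are assumed to be created elsewhere, and Listing 2-1 in "Common Operations" shows the complete version.

CFAttributedStringRef attrString; // assumed: styled text to lay out
CGPathRef path;                   // assumed: the area to fill, usually a rectangle
CGContextRef context;             // assumed: the Quartz context to draw into
// Initialize attrString, path, and context elsewhere.

// The three calls at the heart of framesetting: build a framesetter from the
// attributed string, fill the path with a frame of typeset lines, and draw it.
CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString(attrString);
CTFrameRef frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, NULL);
CTFrameDraw(frame, context);

CFRelease(frame);
CFRelease(framesetter);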
Typesetters

A typesetter performs the fundamental text layout operations of character-to-glyph conversion and positioning of those glyphs into lines. That is, it determines which glyphs to use and where to place them relative to each other, producing line objects. Typesetters are represented by the CTTypesetter opaque type.

The typesetter also suggests line breaks. It finds how many glyphs can fit within a single line within a given space. It then determines the length of the line by using word breaks, word wrapping, or finer-grained cluster breaks. Simple word wrapping is the default method of creating line breaks.

The framesetter instantiates a typesetter and uses it to create the line objects used to fill a frame. You can also use a typesetter directly, as described in "Manual Line Breaking."

Lines and Glyph Runs

A line object represents a line of text and is represented in Core Text by the CTLine opaque type. A CTLine object contains an array of glyph runs. Line objects are created by the typesetter during a framesetting operation and, like frames, can draw themselves directly into a graphics context. Line objects hold the glyphs that are the result of the text layout process, created from text and style information.

A line corresponds to a range of characters. It could be miles long or, more often, one of a series of lines contained within a paragraph. The paragraph is represented in Core Text by a CTFrame object, which contains the paragraph's line objects. Accordingly, you can retrieve line objects from their frame object.

A line object contains glyph-run objects, represented by the CTRun opaque type. A glyph run is a set of consecutive glyphs sharing the same attributes and direction. The typesetter creates glyph runs as it produces lines from character strings, attributes, and font objects. That is, a line is constructed of one or more glyph runs. Glyph runs can draw themselves into a graphics context, if desired, although most clients have no need to interact directly with glyph runs. Figure 1-4 shows the conceptual hierarchy of a frame object containing line objects that, in turn, contain glyph-run objects.

Figure 1-4  A frame object containing lines and glyph runs (a CTFrame contains CTLine objects, which are made up of CTRun glyph runs)

CTLine has a convenience method for creating a freestanding line independent of a frame, CTLineCreateWithAttributedString. You can use this method to create a line object directly from an attributed string without needing to create and manage a typesetter. Without a typesetter, however, there's no way to calculate line breaks, so this method is meant for a single line only (for example, creating a text label).

After you have a line object, you can do a number of things with it. For example, you can have the line create a justified or truncated copy of itself, and you can ask the line for pen offsets for various degrees of flushness. You can use these pen offsets to draw the line with left, right, or centered alignments. You can also ask the line for measurements, such as its image bounds and typographic bounds. Image bounds represent the rectangle tightly enclosing the graphic shapes of the glyphs actually appearing in the line. Typographic bounds include the height of the ascenders in the font and the depth of its descenders, regardless of whether those features appear in the glyphs in a given line.

Like a frame object, a line object is ready to draw. You simply set the text position in a Core Graphics context and have the line draw itself. Core Text uses the same placement strategy as Quartz, setting the origin of the text on the text baseline. In Quartz, you specify the location of text in user-space coordinates. The text matrix specifies the transform from text space to user space. The text position is stored in the tx and ty variables of the text matrix. When you first create a graphics context, it initializes the text matrix to the identity matrix; thus text-space coordinates are initially the same as user-space coordinates. Quartz conceptually concatenates the text matrix with the current transformation matrix and other parameters from the graphics state to produce the final text-rendering matrix, that is, the matrix actually used to draw the text on the page.
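A rough sketch of these line operations might look like the following. This is not one of the guide's numbered listings; the attributed string and context are assumed to exist, and the layout width and text position are arbitrary.

CFAttributedStringRef attrString; // assumed: a single-line styled string
CGContextRef context;             // assumed: the Quartz context to draw into
double availableWidth = 300.0;    // hypothetical layout width
// Initialize attrString and context elsewhere.

// Create a freestanding line, measure it, and draw it centered.
CTLineRef line = CTLineCreateWithAttributedString(attrString);

// Typographic bounds: the line width plus the font's ascent, descent, and leading.
CGFloat ascent, descent, leading;
double lineWidth = CTLineGetTypographicBounds(line, &ascent, &descent, &leading);

// Image bounds: the rectangle tightly enclosing the glyph shapes actually drawn.
CGRect imageBounds = CTLineGetImageBounds(line, context);

// Pen offset for centered text (flush factor 0.0 = left, 0.5 = centered, 1.0 = right).
double penOffset = CTLineGetPenOffsetForFlush(line, 0.5, availableWidth);

CGContextSetTextPosition(context, 10.0 + penOffset, 100.0);
CTLineDraw(line, context);
CFRelease(line);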
Other Layout Objects

In addition to the framesetter, frame, typesetter, and line objects, Core Text provides other objects to complete the text layout process: paragraph style, text tab, and glyph info objects.

Paragraph style objects encapsulate paragraph or ruler attributes in an attributed string and are represented by the CTParagraphStyle opaque type. A paragraph style object is a complex attribute value in an attributed string, storing a number of subattributes that affect paragraph layout for the characters of the string. Among these subattributes are alignment, tab stops, writing direction, line-breaking mode, and indentation settings.

The CTTextTab opaque type represents a tab stop in a paragraph style, storing an alignment type and location.

The CTGlyphInfo opaque type enables you to override a font's specified mapping from Unicode to the glyph ID.

Font Objects

Font objects are those Core Text objects dealing directly with fonts: the font reference itself, font descriptor objects, and font collection objects.

Fonts

Fonts provide assistance in laying out glyphs relative to one another and are used to establish the current font when drawing in a graphics context. The Core Text font opaque type (CTFont) is a specific font instance that encapsulates a lot of information. Its reference type, CTFontRef, is toll-free bridged with NSFont. When you create a CTFont object, you typically specify (or use a default) point size and transformation matrix, which gives the font instance specific characteristics. You can then query the font object for many kinds of information about the font at that particular point size, such as character-to-glyph mapping, encodings, font metric data, and glyph data, among other things. Font metrics are parameters such as ascent, descent, leading, cap height, x-height, and so on. Glyph data includes parameters such as bounding rectangles and glyph advances.

There are many ways to create font references. The preferred method is from a font descriptor using CTFontCreateWithFontDescriptor. You can also use a number of conversion APIs, depending on what you have to start with. For example, you can use the PostScript name of the typeface (CTFontCreateWithName), an ATS font reference (CTFontCreateWithPlatformFont), a Core Graphics font reference (CTFontCreateWithGraphicsFont), or a QuickDraw font reference (CTFontCreateWithQuickdrawInstance). There's also CTFontCreateUIFontForLanguage, which creates a reference for the user-interface font for the application you're using in the localization you're using.
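To make the metrics vocabulary concrete, the following minimal sketch, which is not one of the guide's numbered listings and uses an arbitrary font name and size, creates a font by PostScript name and reads a few of its metrics.

// Sketch only: create a font and query some of its metrics. Each accessor
// reports the metric scaled to this font instance's point size.
CTFontRef font = CTFontCreateWithName(CFSTR("Helvetica"), 18.0, NULL);

CGFloat ascent    = CTFontGetAscent(font);
CGFloat descent   = CTFontGetDescent(font);
CGFloat leading   = CTFontGetLeading(font);
CGFloat capHeight = CTFontGetCapHeight(font);
CGFloat xHeight   = CTFontGetXHeight(font);

// A common way to derive a default line height from these metrics.
CGFloat lineHeight = ascent + descent + leading;

CFRelease(font);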
Core Text font references provide a sophisticated, automatic font-substitution mechanism called font cascading. This mechanism takes font traits into account, so it does a better job than previous schemes of picking an appropriate font to substitute for a missing font. Font cascading is based on cascade lists, which are arrays of ordered font descriptors. There is a system default cascade list (which is polymorphic, based on the user's language setting and current font) and a font cascade list that is specified at font creation time. Using the information in the font descriptors, the cascading mechanism can match fonts according to style as well as matching characters. The CTFontCreateForString function uses cascade lists to pick an appropriate font to encode a given string. You specify and retrieve font cascade lists using the kCTFontCascadeListAttribute property.

Font Descriptors

Font descriptors, represented by the CTFontDescriptor opaque type, provide a mechanism to describe a font completely with a dictionary of attributes. CTFontDescriptorRef is toll-free bridged to NSFontDescriptor. The attributes are properties such as PostScript name, family, and style, and traits such as bold, italic, and monospace. The font descriptor can then be used to create or modify a CTFont object. Font descriptors can be serialized and stored in a document to provide persistence for fonts. Figure 1-5 illustrates the font system using a font descriptor to create a specific font instance.

Figure 1-5  Creating a font from a font descriptor (the font system resolves a CTFontDescriptor into a CTFont)

A font descriptor can also be considered as a query into the font system. You can create a font descriptor with an incomplete specification, that is, with one or just a few values in the attribute dictionary, and the system will choose the most appropriate font from those available. The system can also give you a complete list of font descriptors matching your query via CTFontDescriptorCreateMatchingFontDescriptors.

Font Collections

Font collections are unions of font descriptors, that is, groups of font descriptors taken as a single object. A font collection is represented by the CTFontCollection opaque type. Font collections provide the capabilities of font enumeration, access to global and custom font collections, and access to the font descriptors comprising the collection. You can, for example, create a font collection of all the fonts available in the system by calling CTFontCollectionCreateFromAvailableFonts, and you can use the collection to obtain an array of all of the member font descriptors. There is also a function that takes a callback parameter used to sort the returned array of font descriptors.
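A minimal sketch of that kind of enumeration might look like the following; it is not one of the guide's numbered listings, and printing each descriptor's PostScript name is just one example of reading a descriptor attribute.

// Sketch only: enumerate the font descriptors in the collection of all
// available fonts and read each descriptor's PostScript name attribute.
CTFontCollectionRef collection = CTFontCollectionCreateFromAvailableFonts(NULL);
CFArrayRef descriptors = CTFontCollectionCreateMatchingFontDescriptors(collection);

if (descriptors != NULL) {
    CFIndex count = CFArrayGetCount(descriptors);
    for (CFIndex i = 0; i < count; i++) {
        CTFontDescriptorRef descriptor =
            (CTFontDescriptorRef)CFArrayGetValueAtIndex(descriptors, i);
        CFStringRef name =
            (CFStringRef)CTFontDescriptorCopyAttribute(descriptor, kCTFontNameAttribute);
        if (name != NULL) {
            CFShow(name); // prints the PostScript name to the console
            CFRelease(name);
        }
    }
    CFRelease(descriptors);
}
CFRelease(collection);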
Common Operations

This chapter describes some common text-layout and font-handling operations and shows, through portions of sample code, how they can be accomplished using Core Text. In addition to the code fragments in this chapter, see the following sample code applications that use Core Text:
● CoreTextTest shows how to use Core Text to typeset blocks of text with varying attributes in the context of a complete Carbon application.
● CoreTextArcCocoa illustrates the use of Core Text fonts, lines, and runs in a Cocoa application that sets type along an arched path.

Simple Paragraphs

One of the most common operations in typesetting is laying out a multiline paragraph within an arbitrarily sized rectangular area. Core Text makes this operation easy, requiring only a few lines of Core Text–specific code. To lay out the paragraph, you need a graphics context to draw into, a rectangular path to provide the area where the text is laid out, and an attributed string. Most of the code in this example is required to create and initialize the context, path, and string. After that is done, Core Text requires only three lines of code to do the layout.

Listing 2-1 uses Cocoa to simplify initialization of the graphics context. To see how that operation is done in Carbon, see the CoreTextTest sample code or "Graphics Contexts" in the Quartz 2D Programming Guide.

Listing 2-1  Typesetting a simple paragraph

// Initialize a graphics context and set the text matrix to a known value.
CGContextRef context = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
CGContextSetTextMatrix(context, CGAffineTransformIdentity);

// Initialize a rectangular path.
CGMutablePathRef path = CGPathCreateMutable();
CGRect bounds = CGRectMake(10.0, 10.0, 200.0, 200.0);
CGPathAddRect(path, NULL, bounds);

// Initialize an attributed string.
CFStringRef string = CFSTR("We hold this truth to be self-evident, that everyone is created equal.");
CFMutableAttributedStringRef attrString = CFAttributedStringCreateMutable(kCFAllocatorDefault, 0);
CFAttributedStringReplaceString(attrString, CFRangeMake(0, 0), string);

// Create a color and add it as an attribute to the string.
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGFloat components[] = { 1.0, 0.0, 0.0, 0.8 };
CGColorRef red = CGColorCreate(rgbColorSpace, components);
CGColorSpaceRelease(rgbColorSpace);
CFAttributedStringSetAttribute(attrString, CFRangeMake(0, 50),
                               kCTForegroundColorAttributeName, red);

// Create the framesetter with the attributed string.
CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString(attrString);
CFRelease(attrString);

// Create the frame and draw it into the graphics context.
CTFrameRef frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, NULL);
CFRelease(framesetter);
CTFrameDraw(frame, context);
CFRelease(frame);

Simple Text Labels

Another very common typesetting operation is drawing a single line of text to use as a label for a user-interface element. In Core Text this requires only two lines of code: one to create the line object with an attributed string and another to draw the line into a graphics context.

Listing 2-2 omits initialization of the plain text string, font, and graphics context, but it shows how to create an attributes dictionary and use it to create the attributed string. To see how to create a Core Text font, see "Font Creation and Storage."

Listing 2-2  Typesetting a simple text label

CFStringRef string;
CTFontRef font;
CGContextRef context;
// Initialize string, font, and context.

CFStringRef keys[] = { kCTFontAttributeName };
CFTypeRef values[] = { font };

CFDictionaryRef attributes =
    CFDictionaryCreate(kCFAllocatorDefault, (const void**)&keys,
                       (const void**)&values, sizeof(keys) / sizeof(keys[0]),
                       &kCFTypeDictionaryKeyCallBacks,
                       &kCFTypeDictionaryValueCallBacks);

CFAttributedStringRef attrString =
    CFAttributedStringCreate(kCFAllocatorDefault, string, attributes);
CFRelease(string);
CFRelease(attributes);

CTLineRef line = CTLineCreateWithAttributedString(attrString);

// Set the text position and draw the line into the graphics context.
CGContextSetTextPosition(context, 10.0, 10.0);
CTLineDraw(line, context);
CFRelease(line);

Columnar Layout

Laying out text in multiple columns is another common typesetting operation. Strictly speaking, Core Text itself only performs the layout of one column at a time and does not calculate the column sizes or locations. You do those operations before calling Core Text to lay out the text within the rectangular path area you've calculated.
In this sample, Core Text, in addition to laying out the text in each column, also provides the subrange within the text string for each column.

Listing 2-3 mixes Cocoa method calls in Objective-C with function calls into Carbon frameworks and Core Text. It includes an implementation of the drawRect: method of NSView, which calls the local createColumns function, defined first in this listing. This code resides in an NSView subclass in a Cocoa document-based application. The NSView subclass includes an attributedString accessor method, which is not shown here but is called in this listing to return the attributed string to be laid out.

Listing 2-3  Performing columnar text layout

- (CFArrayRef)createColumns
{
    CGRect bounds = CGRectMake(0, 0, NSWidth([self bounds]), NSHeight([self bounds]));
    int column;
    CGRect* columnRects = (CGRect*)calloc(_columnCount, sizeof(*columnRects));

    // Start by setting the first column to cover the entire view.
    columnRects[0] = bounds;

    // Divide the columns equally across the frame's width.
    CGFloat columnWidth = CGRectGetWidth(bounds) / _columnCount;
    for (column = 0; column < _columnCount - 1; column++) {
        CGRectDivide(columnRects[column], &columnRects[column],
                     &columnRects[column + 1], columnWidth, CGRectMinXEdge);
    }

    // Inset all columns by a few pixels of margin.
    for (column = 0; column < _columnCount; column++) {
        columnRects[column] = CGRectInset(columnRects[column], 10.0, 10.0);
    }

    // Create an array of layout paths, one for each column.
    CFMutableArrayRef array = CFArrayCreateMutable(kCFAllocatorDefault,
                                                   _columnCount, &kCFTypeArrayCallBacks);
    for (column = 0; column < _columnCount; column++) {
        CGMutablePathRef path = CGPathCreateMutable();
        CGPathAddRect(path, NULL, columnRects[column]);
        CFArrayInsertValueAtIndex(array, column, path);
        CFRelease(path);
    }
    free(columnRects);
    return array;
}

- (void)drawRect:(NSRect)rect
{
    // Draw a white background.
    [[NSColor whiteColor] set];
    [NSBezierPath fillRect:[self bounds]];

    // Initialize the text matrix to a known value.
    CGContextRef context = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
    CGContextSetTextMatrix(context, CGAffineTransformIdentity);

    CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString(
        (CFAttributedStringRef)[self attributedString]);

    CFArrayRef columnPaths = [self createColumns];
    CFIndex pathCount = CFArrayGetCount(columnPaths);
    CFIndex startIndex = 0;
    int column;

    for (column = 0; column < pathCount; column++) {
        CGPathRef path = (CGPathRef)CFArrayGetValueAtIndex(columnPaths, column);

        // Create a frame for this column and draw it.
        CTFrameRef frame = CTFramesetterCreateFrame(framesetter,
                                                    CFRangeMake(startIndex, 0), path, NULL);
        CTFrameDraw(frame, context);

        // Start the next frame at the first character not visible in this frame.
        CFRange frameRange = CTFrameGetVisibleStringRange(frame);
        startIndex += frameRange.length;
        CFRelease(frame);
    }
    CFRelease(columnPaths);
}

Manual Line Breaking

You usually don't need to do manual line breaking unless you have a special hyphenation process or a similar requirement. A framesetter performs line breaking automatically.
Listing 2-4 shows how to create a typesetter, an object used by the framesetter, and use it directly to find appropriate line breaks and create a typeset line manually. This sample also shows how to center the line before drawing.

Listing 2-4  Performing manual line breaking

double width;
CGContextRef context;
CGPoint textPosition;
CFAttributedStringRef attrString;
// Initialize those variables.

// Create a typesetter using the attributed string.
CTTypesetterRef typesetter = CTTypesetterCreateWithAttributedString(attrString);

// Find a break for a line from the beginning of the string to the given width.
CFIndex start = 0;
CFIndex count = CTTypesetterSuggestLineBreak(typesetter, start, width);

// Use the returned character count (to the break) to create the line.
CTLineRef line = CTTypesetterCreateLine(typesetter, CFRangeMake(start, count));

// Get the offset needed to center the line.
float flush = 0.5; // centered
double penOffset = CTLineGetPenOffsetForFlush(line, flush, width);

// Move the given text drawing position by the calculated offset and draw the line.
CGContextSetTextPosition(context, textPosition.x + penOffset, textPosition.y);
CTLineDraw(line, context);

// Move the index beyond the line break.
start += count;

Font Creation and Storage

The example function in Listing 2-5 creates a font descriptor from a PostScript font name and a float specifying the point size.

Listing 2-5  Creating a font descriptor from a name and point size

CTFontDescriptorRef CreateFontDescriptorFromName(CFStringRef iPostScriptName, CGFloat iSize)
{
    assert(iPostScriptName != NULL);
    return CTFontDescriptorCreateWithNameAndSize(iPostScriptName, iSize);
}

The example function in Listing 2-6 creates a font descriptor from a font family name and font traits.

Listing 2-6  Creating a font descriptor from a family and traits

CTFontDescriptorRef CreateFontDescriptorFromFamilyAndTraits(CFStringRef iFamilyName,
    CTFontSymbolicTraits iTraits, CGFloat iSize)
{
    CTFontDescriptorRef descriptor = NULL;
    CFMutableDictionaryRef attributes;

    assert(iFamilyName != NULL);

    // Create a mutable dictionary to hold our attributes.
    attributes = CFDictionaryCreateMutable(kCFAllocatorDefault, 0,
                     &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    check(attributes != NULL);

    if (attributes != NULL) {
        CFMutableDictionaryRef traits;
        CFNumberRef symTraits;

        // Add a family name to our attributes.
        CFDictionaryAddValue(attributes, kCTFontFamilyNameAttribute, iFamilyName);

        // Create the traits dictionary.
        symTraits = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &iTraits);
        check(symTraits != NULL);

        if (symTraits != NULL) {
            // Create a dictionary to hold our traits values.
            traits = CFDictionaryCreateMutable(kCFAllocatorDefault, 0,
                         &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
            check(traits != NULL);

            if (traits != NULL) {
                // Add the symbolic traits value to the traits dictionary.
                CFDictionaryAddValue(traits, kCTFontSymbolicTrait, symTraits);

                // Add the traits attribute to our attributes.
                CFDictionaryAddValue(attributes, kCTFontTraitsAttribute, traits);
                CFRelease(traits);
            }
            CFRelease(symTraits);
        }
        // Create the font descriptor with our attributes and input size.
        descriptor = CTFontDescriptorCreateWithAttributes(attributes);
        check(descriptor != NULL);
        CFRelease(attributes);
    }
    // Return our font descriptor.
    return descriptor;
}

The example function in Listing 2-7 creates a font from a provided font descriptor. It calls CTFontCreateWithFontDescriptor, passing NULL for the matrix parameter to specify the default (identity) matrix.

Listing 2-7  Creating a font from a font descriptor

CTFontRef CreateFont(CTFontDescriptorRef iFontDescriptor, CGFloat iSize)
{
    check(iFontDescriptor != NULL);

    // Create the font from the font descriptor and input size. Pass
    // NULL for the matrix parameter to use the default matrix (identity).
    return CTFontCreateWithFontDescriptor(iFontDescriptor, iSize, NULL);
}

The example function in Listing 2-8 creates XML data to serialize a font for embedding in a document. Alternatively, and preferably, NSArchiver could be used. This is just one way to accomplish this task, but it preserves all data from the font needed to re-create the exact font at a later time.

Listing 2-8  Serializing a font

CFDataRef CreateFlattenedFontData(CTFontRef iFont)
{
    CFDataRef result = NULL;
    CTFontDescriptorRef descriptor;
    CFDictionaryRef attributes;

    check(iFont != NULL);

    // Get the font descriptor for the font.
    descriptor = CTFontCopyFontDescriptor(iFont);
    check(descriptor != NULL);

    if (descriptor != NULL) {
        // Get the font attributes from the descriptor. This should be enough
        // information to recreate the descriptor and the font later.
        attributes = CTFontDescriptorCopyAttributes(descriptor);
        check(attributes != NULL);

        if (attributes != NULL) {
            // If the attributes are a valid property list, directly flatten
            // the property list. Otherwise we may need to analyze the attributes
            // and remove or manually convert them to serializable forms.
            // This is left as an exercise for the reader.
            if (CFPropertyListIsValid(attributes, kCFPropertyListXMLFormat_v1_0)) {
                result = CFPropertyListCreateXMLData(kCFAllocatorDefault, attributes);
                check(result != NULL);
            }
        }
    }
    return result;
}

The example function in Listing 2-9 creates a font reference from flattened XML data. It shows how to unflatten font attributes and create a font with those attributes.

Listing 2-9  Creating a font from serialized data

CTFontRef CreateFontFromFlattenedFontData(CFDataRef iData)
{
    CTFontRef font = NULL;
    CFDictionaryRef attributes;
    CTFontDescriptorRef descriptor;

    check(iData != NULL);

    // Create our font attributes from the property list. We will create
    // an immutable object for simplicity, but if you needed to massage
    // the attributes or convert certain attributes from their serializable
    // form to the Core Text usable form, you could do it here.
    attributes = (CFDictionaryRef)CFPropertyListCreateFromXMLData(kCFAllocatorDefault,
                     iData, kCFPropertyListImmutable, NULL);
    check(attributes != NULL);

    if (attributes != NULL) {
        // Create the font descriptor from the attributes.
        descriptor = CTFontDescriptorCreateWithAttributes(attributes);
        check(descriptor != NULL);

        if (descriptor != NULL) {
            // Create the font from the font descriptor. We will use
            // 0.0 and NULL for the size and matrix parameters. This
            // causes the font to be created with the size and/or matrix
            // that exist in the descriptor, if present. Otherwise default
            // values are used.
            font = CTFontCreateWithFontDescriptor(descriptor, 0.0, NULL);
            check(font != NULL);
        }
    }
    return font;
}

Accessing Font Metrics

For every font, glyph designers provide a set of measurements, called metrics, which describe the spacing around each glyph in the font. The typesetter uses these metrics to determine glyph placement. Font metrics are parameters such as ascent, descent, leading, cap height, x-height, and so on. The sample functions in this section illustrate how to query a font for its font metric data.

The example function in Listing 2-10 shows how to use line metrics accessors to calculate the line height for a font. In most cases you should not need to do this yourself. If you have a CTLineRef object for a line of text, you could call CTLineGetTypographicBounds to get the line metrics for the line.

Listing 2-10  Calculating line height

CGFloat GetLineHeightForFont(CTFontRef iFont)
{
    CGFloat lineHeight = 0.0;

    check(iFont != NULL);

    // Get the ascent from the font, already scaled for the font's size
    lineHeight += CTFontGetAscent(iFont);

    // Get the descent from the font, already scaled for the font's size
    lineHeight += CTFontGetDescent(iFont);

    // Get the leading from the font, already scaled for the font's size
    lineHeight += CTFontGetLeading(iFont);

    return lineHeight;
}

The example function in Listing 2-11 demonstrates how to get glyphs for the characters in a string with a single font. Most of the time you should just use a CTLine object to get this information because one font may not encode the entire string. In addition, simple character-to-glyph mapping will not get the correct appearance for complex scripts. This simple glyph mapping may be appropriate if you are trying to display specific Unicode characters for a font.

Listing 2-11  Getting glyphs for characters

void GetGlyphsForCharacters(CTFontRef iFont, CFStringRef iString)
{
    UniChar *characters;
    CGGlyph *glyphs;
    CFIndex count;

    assert(iFont != NULL && iString != NULL);

    // Get our string length.
    count = CFStringGetLength(iString);

    // Allocate our buffers for characters and glyphs.
    characters = (UniChar *)malloc(sizeof(UniChar) * count);
    assert(characters != NULL);

    glyphs = (CGGlyph *)malloc(sizeof(CGGlyph) * count);
    assert(glyphs != NULL);

    // Get the characters from the string.
    CFStringGetCharacters(iString, CFRangeMake(0, count), characters);

    // Get the glyphs for the characters.
    CTFontGetGlyphsForCharacters(iFont, characters, glyphs, count);

    // Do something with the glyphs here; check for any characters that were not mapped.

    // Free our buffers
    free(characters);
    free(glyphs);
}

Creating Related Fonts

The example functions in this section show how to create new fonts that are related to an existing font. The example function in Listing 2-12 makes a font bold or unbold based on the value of the Boolean parameter passed with the function call. If the current font family does not have the requested style, the function returns NULL.

Listing 2-12  Changing traits of a font

CTFontRef CreateBoldFont(CTFontRef iFont, Boolean iMakeBold)
{
    CTFontSymbolicTraits desiredTrait = 0;
    CTFontSymbolicTraits traitMask;

    // If we are trying to make the font bold, set the desired trait
    // to be bold.
    if (iMakeBold)
        desiredTrait = kCTFontBoldTrait;

    // Mask off the bold trait to indicate that it is the only trait
    // desired to be modified. As CTFontSymbolicTraits is a bit field,
    // we could choose to change multiple traits if we desired.
    traitMask = kCTFontBoldTrait;

    // Create a copy of the original font with the masked trait set to the
    // desired value. If the font family does not have the appropriate style,
    // this will return NULL.
    return CTFontCreateCopyWithSymbolicTraits(iFont, 0.0, NULL, desiredTrait, traitMask);
}

The example function in Listing 2-13 converts a given font to a similar font in another font family, preserving traits if possible. It may return NULL.

Listing 2-13  Converting a font to another family

CTFontRef CreateFontConvertedToFamily(CTFontRef iFont, CFStringRef iFamily)
{
    // Create a copy of the original font with the new family. This call
    // attempts to preserve traits, and may return NULL if that is not possible.
    // Pass in 0.0 and NULL for size and matrix to preserve the values from
    // the original font.
    return CTFontCreateCopyWithFamily(iFont, 0.0, NULL, iFamily);
}

Document Revision History

This table describes the changes to Core Text Programming Guide.

Date        Notes
2010-03-03  Added this document for iOS 3.2.
2010-01-20  Removed links to deprecated documents and sample code.
2008-06-09  Revised Figure 1-3 “Text layout data flow” to show that a CGPath object is provided in the creation of a CTFrame object.
2008-02-08  Fixed bad link to sample code in Introduction.
2007-12-11  Made minor editorial corrections.
2007-07-16  New document that explains how to perform text layout and font-related operations using the Core Text programming interfaces.
OpenCL Programming Guide for Mac

Contents

About OpenCL for OS X
    At a Glance
    Prerequisites
    See Also
Developing OpenCL Programs Using Xcode
    Concepts
    Essential Development Tasks
Hello World!
    Creating An Application That Uses OpenCL In Xcode
    Compiling From the Command Line
    Debugging
Basic Programming Sample
    Basic Kernel Code Sample
    Basic Host Code Sample
Identifying Parallelizable Routines
Using Grand Central Dispatch With OpenCL
    Discovering Available Compute Devices
    Enqueueing A Kernel To A Dispatch Queue
    Determining the Characteristics Of A Kernel On A Device
    Obtaining the Kernel’s Workgroup Size
    Sample Code: Creating a Dispatch Queue
Creating and Managing Memory Objects in OS X OpenCL
    Overview
    Workflow
    Memory Visibility
    Memory Consistency
    Creating and Using Buffers in OpenCL
    Representing Data With Buffer Objects
    Allocating Memory For Buffer Objects In OS X v10.7
    Reading, Writing, and Copying Buffer Objects
    Kernel Support For Data Processing In OpenCL-C
    Releasing Buffer Objects
    Setting the finalizer
    Example: Allocating, Using, Releasing Buffer Objects
    Creating and Using Images in OpenCL
    Image Objects
    Example
    IOSurface and GL: What OpenCL Supports
    How the Kernel Interacts With Data
    Passing Data To a Kernel
    Accessing Buffer Objects From a Kernel
    Retrieving Results From a Kernel
OpenCL/OpenGL Interoperation: Data Sharing
    Sharegroups
    Synchronizing Access To Shared OpenCL/OpenGL Objects
    Example
Controlling OpenCL/OpenGL Interoperation With GCD
    Using GCD To Synchronize A Host With OpenCL
    Synchronizing A Host With OpenCL Using A Dispatch Semaphore
    Synchronizing Multiple Queues
Using IOSurfaces With OpenCL
    Creating Or Obtaining An IOSurface
    Creating An Image Object from An IOSurface
    Sharing the IOSurface With An OpenCL Device
Autovectorizer
    Features
    Without the Autovectorizer
    Writing Optimal Code For the CPU: Let the autovectorizer do the work for you
    Do
    Don’t
    What the autovectorizer does
    Vectorization Example
    Xcode
Improving Performance
    Before Optimizing Code
    Reducing Overhead
    Measuring Performance
    Measuring Performance On the Host
    Measuring Performance On Devices
    Estimating Optimal Performance
    Tuning OpenCL Code For the CPU
    In Practice
    Tuning OpenCL Code For the GPU
    In Practice
Document Revision History

Figures, Tables, and Listings

Developing OpenCL Programs Using Xcode
    Figure 1-1   OpenCL Development Process
Hello World!
    Figure 2-1   A simple OpenCL kernel in Xcode
    Figure 2-2   Build settings for kernel files
    Figure 2-3   OpenCL host code in Xcode
    Figure 2-4   Adding the OpenCL framework
    Figure 2-5   OpenCL framework was added
    Figure 2-6   Results
Basic Programming Sample
    Listing 3-1  Kernel code sample
    Listing 3-2  Host code sample
Identifying Parallelizable Routines
    Listing 4-1  Pseudocode that computes the final grade for each student
    Listing 4-2  The isolated grade average task
Using Grand Central Dispatch With OpenCL
    Listing 5-1  Creating a dispatch queue
    Listing 5-2  Obtaining workgroup information
Creating and Managing Memory Objects in OS X OpenCL
    Figure 6-1   Physical memory of an OpenCL system
    Listing 6-1  Sample host function creates buffers then calls kernel function
    Listing 6-2  Sample kernel squares an input array
    Listing 6-3  Creating a 2D image object
    Listing 6-4  An image-processing kernel function
    Listing 6-5  Sample host function creates images then calls kernel function
    Listing 6-6  Sample kernel swaps the red and green channels
OpenCL/OpenGL Interoperation: Data Sharing
    Figure 7-1   OpenGL and OpenCL share data using sharegroups
Controlling OpenCL/OpenGL Interoperation With GCD
    Figure 8-1   Rendering loop - each pass on the main thread creates a new frame for display
    Listing 8-1  Synchronizing the host with OpenCL processing
    Listing 8-2  Synchronizing a host with OpenCL using a dispatch semaphore
    Listing 8-3  Synchronizing multiple queues
Using IOSurfaces With OpenCL
    Listing 9-1  Creating an IOSurface-backed CL Image
    Listing 9-2  Extracting an Image From an IOSurface
Autovectorizer
    Figure 10-1  Before autovectorization: A simple float sent to the CPU and the GPU
    Listing 10-1 Passing single floats into a kernel
Improving Performance
    Figure 11-1  Memory copy speed in GB/s (read+write) vs buffer size
    Table 11-1   Benchmarks of boxAvgH5 variants
    Table 11-2   Benchmarks of boxAvgH5 variants where each work item processes 4 columns
    Listing 11-1 Using the gettimeofday function
    Listing 11-2 Sample benchmarking loop on the kernel
    Listing 11-3 Kernel for estimating performance
    Listing 11-4 The boxAvg kernel in two passes
    Listing 11-5 Modify the horizontal pass to compute one row per work item instead of one pixel
    Listing 11-6 Modify the algorithm to read fewer values per pixel and to incrementally update the sum
    Listing 11-7 Modify the horizontal pass by moving division and conditionals out of the inner loop
    Listing 11-8 Modify vertical pass to combine rows; each work item computes a block of rows
    Listing 11-9 Ensure the image width is always a multiple of 4
    Listing 11-10 A safer variant that will work for any image width
    Listing 11-11 Fused kernel
    Listing 11-12 Kernel before optimization
    Listing 11-13 Move the data to local memory
    Listing 11-14 Modify the kernel to compute several rows in each work item
    Listing 11-15 Provide a dedicated kernel for each value of RANGE
    Listing 11-16 Fastest variant: Unroll the inner loop and convert float data to float4

About OpenCL for OS X

OpenCL™ (Open Computing Language) is an open standard for cross-platform, parallel programming of modern processors such as multicore CPUs and programmable GPUs.
Introduced with OS X v10.6, OpenCL lets your application tap into the parallel computing power of these processors to improve performance and deliver features made possible by compute-intensive algorithms. OpenCL is composed of three parts: a C99-based kernel programming language, a powerful scheduling API, and a runtime that efficiently executes kernels on the CPU or GPU. Going beyond the standard, OS X v10.7 adds integration between OpenCL, Grand Central Dispatch, and Xcode, making it even easier to take advantage of the power of OpenCL in your application.

At a Glance

Using OpenCL is easier than ever in OS X v10.7:
● OpenCL is fully supported by Xcode. The Xcode offline compiler removes a configuration step that formerly had to be performed before a kernel could be run, and it helps you find bugs earlier in the development process. See “Hello World!” (page 12).
● You can write OpenCL functions in separate files and include them in your Xcode project. These files can be compiled as your application is built. This improves application performance because kernels need not be compiled while the application is running.
● OpenCL now integrates with Grand Central Dispatch, making it easier for you to focus on making your OpenCL kernels more efficient. See “Using Grand Central Dispatch With OpenCL” (page 32).
● The autovectorizer is used for compiling kernels that will run on the CPU. It can accelerate performance by up to four times without additional effort, and it allows you to write one kernel that runs efficiently on both a CPU and a GPU. It is invoked whether the openclc compiler is called from Xcode or the kernel is built at runtime. See “Autovectorizer” (page 79).
● You can, of course, continue to use code you’ve already written to the OpenCL 1.1 standard.

Prerequisites

This guide assumes that you program in C and have access to The OpenCL Specification. Although this guide discusses many key OpenCL API functions, it does not provide detailed information on the OpenCL API or the OpenCL-C programming language.

See Also

The OpenCL Specification, available from the Khronos Group at http://www.khronos.org/registry/cl/, provides information on the OpenCL standard.
The OpenCL Programming Guide by Aaftab Munshi, Benedict Gaster, Timothy G. Mattson, James Fung, and Dan Ginsburg is available from Pearson Education, Inc.
For more information about Grand Central Dispatch queues, see Concurrency Programming Guide: Dispatch Queues.

Developing OpenCL Programs Using Xcode

This chapter describes a streamlined process in which, using tools provided by OS X v10.7, you can include OpenCL kernels as resources in Xcode projects, compile them along with the rest of your application, and use Grand Central Dispatch as the queuing API for executing OpenCL commands and kernels on the CPU and GPU. If you need to create OpenCL programs at run time, with source loaded as a string or from a file, or if you want API-level control over queueing, see The OpenCL Specification, available from the Khronos Group at http://www.khronos.org/registry/cl/.

Concepts

In the OpenCL specification, computational processors are called devices. An OpenCL device has one or more compute units. A workgroup executes on a single compute unit. A compute unit is composed of one or more processing elements and local memory. A Macintosh computer has a single CPU and one or more GPUs.
The CPU on a Macintosh has multiple compute units, which is why it is called a multicore CPU. The number of compute units in a CPU limits the number of workgroups that can execute concurrently. CPUs commonly contain two to eight compute units, with the maximum increasing year by year. A graphics processing unit (GPU) typically contains many compute units—the GPUs in current Macintosh systems feature tens of compute units, and future GPUs may contain hundreds. As used by OpenCL, a CPU with eight compute units is considered a single device, as is a GPU with 100 compute units.

The OS X v10.7 implementation of the OpenCL API facilitates designing and coding data-parallel programs to run on both CPU and GPU devices. In a data-parallel program, the same program (or kernel) runs concurrently on different pieces of data; each invocation is called a work item and is given a work item ID. The work item IDs are organized in up to three dimensions (called an N-D range).

A kernel is essentially a function, written in the OpenCL language, that can be compiled for execution on any device that supports OpenCL. Although kernels are enqueued for execution by host applications written in C, C++, or Objective-C, a kernel must be compiled separately so that it is customized for the device on which it is going to run. You can write your OpenCL kernel source code in a separate file or include it inline in your host application source code. OpenCL kernels can be:
● Compiled at compile time, then run when queued by the host application, or
● Compiled and then run at runtime when queued by the host application, or
● Run from a previously built binary

A work item is a parallel execution of a kernel on some data; it is analogous to a thread. A kernel may be executed over hundreds of thousands of work items.

A workgroup is a set of work items. Each workgroup is executed on a compute unit. Workgroup dimensions determine how the input is operated upon in parallel. The application usually specifies the dimensions based on the size of the input. There are constraints: for example, there may be a maximum number of work items that can be launched for a certain kernel on a certain device.

The program that calls OpenCL functions to set up the context in which kernels run and that enqueues the kernels for execution is known as the host application. The host application is run by OS X on the CPU. The device on which the host application executes is known as the host device. Before kernels can be run, the host application typically completes the following steps (a minimal code sketch appears at the end of this section):
1. Determine what compute devices are available, if necessary.
2. Select compute devices appropriate for the application.
3. Create dispatch queues for selected compute devices.
4. Allocate the memory objects needed by the kernels for execution. (This step may occur earlier in the process, as convenient.)

Note that the host device (the CPU) can itself be an OpenCL device and can be used to execute kernels. The host application can enqueue commands to read from and write to memory objects. See “Creating and Managing Memory Objects in OS X OpenCL” (page 43). Memory objects are used to manipulate device memory. There are two types of memory objects used in OpenCL: buffer objects and image objects. Buffer objects can contain any type of data; image objects contain data organized into pixels in a given format.
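The host-side flow just described maps onto the gcl_ dispatch API used throughout this guide. The following fragment is a minimal sketch of that flow, assuming a placeholder data size of 1024 floats; it omits the kernel dispatch itself, which is covered line by line in “Basic Host Code Sample” (page 21), and uses only the calls discussed later in this document (gcl_create_dispatch_queue, gcl_malloc, gcl_free, dispatch_release).

#include <OpenCL/opencl.h>   // Pulls in the OpenCL and gcl_ declarations on OS X v10.7.

static void SetUpAndTearDownOpenCL(void)
{
    // Steps 1-3: ask GCD for an OpenCL-backed dispatch queue. Requesting a GPU
    // queue and falling back to the CPU device covers systems without a
    // compatible GPU.
    dispatch_queue_t queue = gcl_create_dispatch_queue(CL_DEVICE_TYPE_GPU, NULL);
    if (queue == NULL) {
        queue = gcl_create_dispatch_queue(CL_DEVICE_TYPE_CPU, NULL);
    }

    // Step 4: allocate the memory objects the kernels will read and write.
    // The 1024-float size is an assumption made for this sketch only.
    float host_input[1024] = { 0.0f };   // application data the kernel will process
    void* device_input  = gcl_malloc(sizeof(cl_float) * 1024, host_input,
                                     CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR);
    void* device_output = gcl_malloc(sizeof(cl_float) * 1024, NULL,
                                     CL_MEM_WRITE_ONLY);

    // ... enqueue kernel blocks on the queue here ...

    // Release the OpenCL memory objects and the queue when you are done.
    gcl_free(device_input);
    gcl_free(device_output);
    dispatch_release(queue);
}

Requesting a GPU-backed queue first and falling back to the CPU device mirrors the approach taken in Listing 3-2 (page 21).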
Essential Development Tasks

In OS X v10.7, the OpenCL development process includes these major steps:

Figure 1-1  OpenCL Development Process (Step 1: Parallelize; Step 2: Code kernel; Step 3: Code host; Step 4: Compile; Step 5: Execute; Step 6: Debug; Step 7: Optimize)

1. Identify the tasks to be parallelized. Determining how to parallelize your program effectively is often the hardest part of developing an OpenCL program. See “Identifying Parallelizable Routines” (page 29).
2. In Xcode, write your kernel functions. See “Basic Kernel Code Sample” (page 20).
3. In Xcode, write the host code that will be calling the kernel(s). See “Basic Host Code Sample” (page 21).
4. Compile using Xcode. See “Creating An Application That Uses OpenCL In Xcode” (page 12).
5. Execute.
6. Debug (if necessary). See “Debugging” (page 19).
7. Improve performance (if necessary). See “Improving Performance” (page 82).

Hello World!

Creating an OpenCL program in OS X v10.7 is easy with support built into Xcode. This chapter describes step by step how to create an OpenCL project in Xcode. If you already have a working OpenCL project, you need not regenerate it, but you can find information in this chapter about support for OpenCL now built into Xcode.

Creating An Application That Uses OpenCL In Xcode

To create a project that uses OpenCL in OS X v10.7:

1. Create your OpenCL project in Xcode as a new OS X project (empty is fine).

2. Place your kernel code in one or more .cl files in your Xcode project. You can place all your kernels into a single .cl file, or you can separate them as you choose. You can also include non-kernel code that will run on the same OpenCL device as the kernel in each .cl file. Each .cl file is compiled by default into three files containing bitcode for the i386, x86_64, and gpu_32 architectures. (You can change this using the OpenCL Architectures build setting.) At runtime your host application discovers what kind(s) of devices are available and determines which of the compiled kernels to enqueue and execute.

Figure 2-1  A simple OpenCL kernel in Xcode

3. You can set the following build settings for your kernel (.cl) files:

Figure 2-2  Build settings for kernel files

● OpenCL Compiler Version
  ● Compiler Version
    The OpenCL C compiler version supported by the platform. The default is OpenCL C 1.1.
    To set this parameter from the command line, use: -cl-std=CL1.1

● OpenCL - Architectures
  ● Valid Architectures
    A StringList specifying the list of the architectures for which the product will be built. This is usually set to a predefined build setting provided by the platform. The default is that the product is built for all three architectures.
    To set this parameter from the command line, use:
    ● -triple i386-applecl-darwin
    ● -triple x86_64-applecl-darwin
    ● -triple gpu_32-applecl-darwin
    (So to compile for the first two, the command line would read: -triple i386-applecl-darwin -triple x86_64-applecl-darwin)

● OpenCL - Preprocessing
  ● Preprocessor Macros
    Space-separated list of preprocessor macros of the form "foo" or "foo=bar".
    To set this parameter from the command line, use: -D

● OpenCL - Code Generation
  ● Use MAD
    Boolean.
    If true, allow expressions of the type a * b + c to be replaced by a Multiply-Add (MAD) instruction. If MAD is enabled, multistep instructions of the form a * b + c are performed in a single step, but the accuracy of the results may be compromised. For example, to optimize performance, some OpenCL devices implement MAD by truncating the result of the a * b operation before adding it to c. The default for this parameter is NO.
    To set this parameter from the command line, use: -cl-mad-enable

  ● Relax IEEE Compliance
    Boolean. If true, allows optimizations for floating-point arithmetic that may violate the IEEE 754 standard and the OpenCL numerical compliance requirements defined in section 7.4 for single-precision floating-point, section 9.3.9 for double-precision floating-point, and edge-case behavior in section 7.5 of the OpenCL 1.1 specification. This is intended to be a performance optimization. This option causes the preprocessor macro __FAST_RELAXED_MATH__ to be defined in the OpenCL program. The default is NO.
    To set this parameter from the command line, use: -cl-fast-relaxed-math

  ● Double as single
    Boolean. If true, double-precision floating-point expressions are treated as single-precision floating-point expressions. This option is available for GPUs only. The default is NO.
    To set this parameter from the command line, use: -cl-double-as-single

  ● Flush denorms to zero
    Boolean that controls how single-precision and double-precision denormalized numbers are handled. If specified as a build option, single-precision denormalized numbers may be flushed to zero; double-precision denormalized numbers may also be flushed to zero if the optional extension for double precision is supported. This is intended as a performance hint, and the OpenCL compiler can choose not to flush denorms to zero if the device supports single-precision (or double-precision) denormalized numbers. This option is ignored for single-precision numbers if the device does not support single-precision denormalized numbers, that is, if the CL_FP_DENORM bit is not set in CL_DEVICE_SINGLE_FP_CONFIG. This option is ignored for double-precision numbers if the device does not support double precision, or if it supports double precision but not double-precision denormalized numbers, that is, if the CL_FP_DENORM bit is not set in CL_DEVICE_DOUBLE_FP_CONFIG.
    This flag applies only to scalar and vector single-precision floating-point variables and to computations on those variables inside a program. It does not apply to reading from or writing to image objects. The default is NO.
    To set this parameter from the command line, use: -cl-denorms-are-zero

  ● Auto-vectorizer
    Auto-vectorizes the OpenCL kernels for the CPU. This setting takes effect only for the CPU. It makes it possible to write a single kernel that is portable and performant across CPUs and GPUs. The default is YES.
    To set this parameter from the command line, use: -cl-auto-vectorize-enable or -cl-autovectorize-disable

  ● Optimization Level
    You can choose whether to optimize for the smallest code size or for speed. The default is fast (O1) optimization.
    To set this parameter from the command line, use:
    ● -Os sets it to optimize for smallest code size
    ● -O or -O1 sets it to fast
    ● -O2 sets it to faster
    ● -O3 sets it to fastest
    ● -O0 sets it to not optimize
4. Place your host code in one or more .c files in your Xcode project.

Figure 2-3  OpenCL host code in Xcode

5. Link to the OpenCL framework.
   a. Click on the target.
   b. Click the Build Phases tab.
   c. Open Link Binary With Libraries.
   d. Click the + sign.
   e. Select OpenCL.framework from the dropdown.

Figure 2-4  Adding the OpenCL framework

   f. Press Add.

Figure 2-5  OpenCL framework was added

6. Build.

7. Run.

Figure 2-6  Results

See “Basic Programming Sample” (page 20) for a line-by-line description of the host and kernel code in the Hello World sample project.

Compiling From the Command Line

To compile from the command line, call openclc.

Debugging

Here are a few hints to help you debug your OpenCL application:
● Run your kernel on the CPU first. There is no memory protection on GPUs. If an index goes out of bounds on the GPU, it is likely to take the whole system down. If an index goes out of bounds on the CPU, it may crash the program that is running, but it will not take the whole system down.
● You can use the printf function from within your kernel.
● You can use the gdb debugger to look at the assembly code once you have built your program. See the GDB website.
● On the GPU, use explicit address range checks to look for out-of-range address accesses. (Remember: there is no memory protection on current GPUs.)

Basic Programming Sample

This chapter provides a tour through the code of a simple OpenCL application that performs calculations on a test data set. The code in Listing 3-2 (page 21) calls the kernel defined in Listing 3-1 (page 20). The kernel squares each value. Once the kernel completes its work, the host validates that every value was processed by the kernel.

Basic Kernel Code Sample

Listing 3-1 (page 20) is example kernel code.

Listing 3-1  Kernel code sample

////////////////////////////////////////////////////////////////////////////////
// Simple OpenCL kernel that squares an input array.
// This code is stored in a file called mykernel.cl.

kernel void square(global float* input, global float* output)          // [1]
{
    size_t i = get_global_id(0);
    output[i] = input[i] * input[i];
}

Notes:
1. Wrap your kernel code into a kernel block:

   kernel void kernelName(global float* inputParameterName,
                          global float* [anotherInputParameter], …,
                          global float* outputParameterName)
   {
       ...
   }

   Note: Kernels always return void.

Basic Host Code Sample

Listing 3-2 (page 21) is example code that would run on a host. It calls a kernel to square a set of values, then tests to ensure that the kernel processed all the data.

Listing 3-2  Host code sample

////////////////////////////////////////////////////////////////////////////////

#include <stdio.h>
#include <stdlib.h>

// This include pulls in everything you need to develop with OpenCL on OS X v10.7.
#include <OpenCL/opencl.h>

// Include the header file generated by Xcode. This header file contains the
// kernel block declaration.
#include "mykernel.cl.h"                                                // [1]

// Hardcoded number of values to test, for convenience.
#define NUM_VALUES 1024

// A utility function that checks that our kernel execution performs the
// requested work over the entire range of data.
static int validate(cl_float* input, cl_float* output)
{
    int i;
    for (i = 0; i < NUM_VALUES; i++) {
        // The kernel was supposed to square each value.
        if ( output[i] != (input[i] * input[i]) ) {
            fprintf(stdout, "Error: Element %d did not match expected output.\n", i);
            fprintf(stdout, " Saw %1.4f, expected %1.4f\n",
                    output[i], input[i] * input[i]);
            fflush(stdout);
            return 0;
        }
    }
    return 1;
}

int main(int argc, const char * argv[])
{
    int i;
    char name[128];

    // First, try to obtain a dispatch queue that can send work to the
    // GPU in our system.                                                // [2]
    dispatch_queue_t queue = gcl_create_dispatch_queue(CL_DEVICE_TYPE_GPU, NULL);

    // In the event that our system does NOT have an OpenCL-compatible GPU,
    // we can use the OpenCL CPU compute device instead.
    if (queue == NULL) {
        queue = gcl_create_dispatch_queue(CL_DEVICE_TYPE_CPU, NULL);
    }

    // This is not required, but let's print out the name of the device
    // we are using to do work. We could use the same function,
    // clGetDeviceInfo, to obtain all manner of information about the device.
    cl_device_id gpu = gcl_get_device_id_with_dispatch_queue(queue);
    clGetDeviceInfo(gpu, CL_DEVICE_NAME, 128, name, NULL);
    fprintf(stdout, "Created a dispatch queue using the %s\n", name);

    // Now we gin up some test data. This is typically the case: you have some
    // data in your application that you want to process with OpenCL. This
    // test_in buffer represents such data. Normally, this would come from
    // some REAL source, like a camera, a sensor, or some compiled collection
    // of statistics -- it just depends on the problem you want to solve.
    float* test_in = (float*)malloc(sizeof(cl_float) * NUM_VALUES);
    for (i = 0; i < NUM_VALUES; i++) {
        test_in[i] = (cl_float)i;
    }

    // Once the computation using CL is done, we'll want to read the results
    // back into our application's memory space. Allocate some space for that.
    float* test_out = (float*)malloc(sizeof(cl_float) * NUM_VALUES);

    // Our test kernel takes two parameters: an input float array and an
    // output float array. We can't send the application's buffers above, since
    // our CL device operates on its own memory space. Therefore, we allocate
    // OpenCL memory for doing the work. Notice that for the input array,
    // we specify CL_MEM_COPY_HOST_PTR and provide the fake input data we
    // created above. This tells OpenCL to copy over our data into its memory
    // space before it executes the kernel.                             // [3]
    void* mem_in = gcl_malloc(sizeof(cl_float) * NUM_VALUES, test_in,
                              CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR);

    // The output array is not initialized; we're going to fill it up when
    // we execute our kernel.                                            // [4]
    void* mem_out = gcl_malloc(sizeof(cl_float) * NUM_VALUES, NULL,
                               CL_MEM_WRITE_ONLY);

    // Dispatch your kernel block using one of the dispatch_ commands and the
    // queue we created above.                                           // [5]
    dispatch_sync(queue, ^{
        // Though we COULD pass NULL as the workgroup size, which would tell
        // OpenCL to pick the one it thinks is best, we can also ask
        // OpenCL for the suggested size and pass it ourselves.         // [6]
        size_t wgs;
        gcl_get_kernel_block_workgroup_info(square_kernel,
                                            CL_KERNEL_WORK_GROUP_SIZE,
                                            sizeof(wgs), &wgs, NULL);

        // The N-Dimensional Range over which we'd like to execute our
        // kernel. In our example case, we're operating on a 1D buffer, so
        // it makes sense that our range is 1D.
        cl_ndrange range = {
            1,                  // The number of dimensions to use.

            {0, 0, 0},          // The offset in each dimension. We want to
                                // process ALL of our data, so this is 0 for
                                // our test case.                       // [7]

            {NUM_VALUES, 0, 0}, // The global range -- this is how many items
                                // IN TOTAL in each dimension you want to
                                // process.

            {wgs, 0, 0}         // The local size of each workgroup. This
                                // determines the number of work items per
                                // workgroup. It indirectly affects the
                                // number of workgroups, since the global
                                // size / local size yields the number of
                                // workgroups. So in our test case, we will
                                // have NUM_VALUES / wgs workgroups.
        };

        // Calling the kernel is easy; you simply call it like a function,
        // passing the ndrange as the first parameter, followed by the expected
        // kernel parameters. Note that we cast the 'void*' here to the
        // expected OpenCL types. Remember -- if you use 'float' in your
        // kernel, that's a 'cl_float' from the application's perspective. // [8]
        square_kernel(&range, (cl_float*)mem_in, (cl_float*)mem_out);

        // Getting data out of the device's memory space is also easy; we
        // use gcl_memcpy. In this case, we take the output computed by the
        // kernel and copy it over to our application's memory space.   // [9]
        gcl_memcpy(test_out, mem_out, sizeof(cl_float) * NUM_VALUES);
    });

    // Now we can check to make sure our kernel really did what we asked
    // it to:
    if ( validate(test_in, test_out)) {
        fprintf(stdout, "All values were properly squared.\n");
    }

    // Don't forget to free up the CL device's memory when you're done.  // [10]
    gcl_free(mem_in);
    gcl_free(mem_out);

    // And the same goes for system memory, as usual.
    free(test_in);
    free(test_out);

    // Finally, release your queue just as you would any GCD queue.      // [11]
    dispatch_release(queue);
}

Notes:
1. Include the header file that contains the kernel block declaration. The name of the header file for a .cl file is the name of the .cl file with .h appended to it. For example, if the .cl file is named mykernel.cl, the header file you must include is mykernel.cl.h.
2. Call gcl_create_dispatch_queue to create the dispatch queue.
3. Create memory objects to hold input and output data, and write the input data to the input objects. Allocate an array on the OpenCL device from which to read kernel results back into host memory. Use gcl_malloc and make sure to use the OpenCL size of the datatype being returned (for example, gcl_malloc(sizeof(cl_float) * NUM_VALUES)). Because the CL device operates on its own memory space, allocate OpenCL memory for the input data upon which the kernel will work. Specify CL_MEM_COPY_HOST_PTR to tell OpenCL to copy over the input data from host memory into its memory space before it executes the kernel.
4. Allocate OpenCL memory in which the kernel will store its results.
5. Dispatch your kernel block using one of the dispatch commands and the queue you created above. In your dispatch call, you can specify workgroup parameters.
6. Describe the data-parallel range over which to execute the kernel.
You will describe the data-parallel range for the OpenCL kernel in the host code. The cl_ndrange structure is used to specify the data-parallel range. OpenCL always executes kernels in a data-parallel fashion—that is, instances of the same kernel (work items) execute on different portions of the total data set. See “Representing Data With Buffer Objects” (page 46). (If you want task-parallel execution, you must enqueue multiple kernels on different devices.) Each work item is responsible for executing the kernel once and operating on its assigned portion of the data set. It is your responsibility to tell OpenCL the total number of work items that you need to process all of your data. Because data sets are commonly organized in one, two, or three dimensions (representing such things as audio data streams, two- or three-dimensional images, or three-dimensional objects), you also need to indicate to OpenCL in how many dimensions your data extends (that is, how many coordinates to use for each data point).
● Determining the Data Dimensions
  The first step in preparing a kernel for execution is to identify the number of dimensions that you want to use to represent your data. For example, if your data represents a flat image that is m pixels wide by n pixels high, then you have a two-dimensional data set with each data point represented by its coordinates on the m and n axes. On the other hand, if you’re dealing with spatial data that involves the (x, y, z) position of nodes in three-dimensional space, you have a three-dimensional data set.
  Another way to look at the dimensionality of your data is in terms of nested loops in a traditional, non-parallel processing model. If you can loop through your entire data set with a single loop, then your data is one-dimensional. If you would use one loop nested in another, your data is two-dimensional, and if you would have loops nested three deep to cycle through all your data, your data is three-dimensional. Whatever your data is, it’s up to you to determine how many dimensions to use. As of OpenCL 1.0, dimensions greater than three are not supported.
● Determining the Number of Work Items
  The next step in preparing your kernel for execution is determining how many work items you’ll need to process all of your data. This is known as the global work size, and it defines the total number of work items needed for all dimensions combined. For one-dimensional data, the global work size equals the number of data items. For two-dimensional data with m data items in one dimension and n items in the second dimension, the global work size is n * m. Similarly, for three-dimensional data with x, y, and z work items in the three dimensions, the global work size is x * y * z. There is practically no limit on the number of work items, and this should be a large number (over 1000) for good performance on GPUs.
● Choosing a Workgroup Size
  When enqueuing a kernel to execute on a device, you can specify the size of the workgroup that you’d like OpenCL to use during execution. A workgroup is a collection of work items that execute on the same compute unit on the same OpenCL device. By providing OpenCL with a suggested workgroup size, you are telling it how you would like it to delegate the work items to the various computational units on the device. The work items executing in the same workgroup can share memory and execute synchronously.
  In order to take advantage of these features, however, you have to know the maximum workgroup size allowed by the OpenCL device on which your work items are executing. To get this information, use the gcl_get_kernel_block_workgroup_info function and request the CL_KERNEL_WORK_GROUP_SIZE property. This API is required to query the workgroup size of a kernel block for use in cl_ndrange.local_work_size; this is needed for good performance across devices, because workgroup sizes vary from device to device. This API must be called inside a block submitted to a Grand Central Dispatch queue created using gcl_create_dispatch_queue.
  If you don’t need to share data among work items, pass a NULL value to the local_work_size parameter when you enqueue your kernel for execution to have OpenCL determine the workgroup size for you. Doing so will ensure the most efficient use of the available devices.
  Note that you also need to use clGetDeviceInfo with the selector CL_DEVICE_MAX_WORK_ITEM_SIZES to get the maximum workgroup size in each dimension, and call the gcl_get_kernel_block_workgroup_info function with the selector CL_KERNEL_WORK_GROUP_SIZE to get the total workgroup size. Three conditions must be met for the local dimensions to be valid:
  a. The number of work items in each dimension (local_x, local_y, and local_z) in a single workgroup must be less than the values returned for the device from clGetDeviceInfo(CL_DEVICE_MAX_WORK_ITEM_SIZES).
  b. The total number of work items in a workgroup (local_x times local_y times local_z) must be less than or equal to the value returned by the gcl_get_kernel_block_workgroup_info(CL_KERNEL_WORK_GROUP_SIZE) function.
  c. The number of work items in each dimension in a single workgroup must divide evenly into the total number of work items in that dimension (global_n mod local_n = 0).
7. Always pass an offset for each of the three dimensions even though the workgroup may have fewer than three dimensions.
8. Call the kernel as you would call a function. Pass the ndrange as the first parameter, followed by the expected kernel parameters. Cast the void* types to the expected OpenCL types. Remember: if you use float in your kernel, that is a cl_float from the application’s perspective. The call to the kernel will look something like this:
   kernelName(&range, (cl_datatype*)inputArray, (cl_datatype*)outputArray);
9. Retrieve the data from the OpenCL device’s memory space with gcl_memcpy. The output computed by the kernel is copied over to the host application’s memory space.
10. Free the OpenCL memory objects.
11. Call dispatch_release on the dispatch queue you created with gcl_create_dispatch_queue once you are done with it.

Identifying Parallelizable Routines

The first step in using OpenCL to improve your application’s performance is to identify what portions of your application are appropriate for parallelization. Whereas in a general application you can spawn a separate thread for a task as long as the functions in the thread are re-entrant and you’re careful about how you synchronize data access, to achieve the level of parallelism for which OpenCL is ideal, it is much more important for the work items to be independent of each other. Although work items in the same workgroup can share local data, they execute synchronously, so no work item’s calculations depend on the result from another work item.
Parallelization works only when the tasks that you run in parallel do not depend on each other. For example, assume that you are writing a simple application that keeps track of the grades for students in a class. The application consists of two main tasks:
1. Compute the final grade for each student, assuming the final grade is the average of all of that student’s grades.
2. Obtain a class average by averaging the final grades of all students.
You cannot perform these two tasks in parallel because they are not independent of each other: to calculate the class average, you must have already calculated the final grade for each student. Despite the fact that you cannot perform task 1 and task 2 simultaneously, there is still an opportunity for parallelization. To see how it can be broken down, it helps to look at a basic pseudocode example for computing the final grade for each student serially.

Listing 4-1  Pseudocode that computes the final grade for each student

// assume 'class' is a collection of 'student' objects
foreach(student in class)
{
    // assume getGrades() returns a collection of integer grades
    grades = student.getGrades();
    sum = 0;
    count = 0;

    // iterate through each grade, adding it to sum
    foreach(grade in grades)
    {
        sum += grade;
        count++;
    }

    // cache the average grade in the student object
    student.averageGrade = sum / count;
}

The pseudocode in Listing 4-1 (page 29) proceeds through each student in the class, one by one, calculating the average of each student’s grades and caching it in the student object. Although this example computes each grade average one at a time, there is no reason that the grade averages for all the students couldn’t be calculated at the same time. Because the grades of one student do not affect the grades of another, you can calculate the grade averages for all the students at the same time instead of looping through the same set of instructions for each student, one at a time. This is the idea behind data parallelism.

Data parallelism consists of taking a single task (in this case, calculating a student’s average grade) and repeating it over multiple sets of data. Students’ grades do not affect each other; therefore, you can process them in parallel. To express this programmatically, you must first separate your task (calculating the grade average of a student) from your data (the students in the class). Listing 4-2 (page 30) shows how you can isolate the grade-averaging task.

Listing 4-2  The isolated grade average task

task calculateAverageGradeForStudent( student )
{
    // assume getGrades() returns a collection of integer grades
    grades = student.getGrades();
    sum = 0;
    count = 0;

    // iterate through each grade, adding it to sum
    foreach(grade in grades)
The challenge in parallelizing your application is identifying the tasks that you can distribute across multiple compute units. Sometimes, asin this example, the identification isrelatively trivial and requiresfew algorithmic changes. Other times, it might require designing a new algorithm from scratch that lends itself more readily to parallelization. Although there is no universal rule for parallelizing your application, there are a few tips you can keep in mind: ● Pay attention to loops. Often the opportunities for parallelization lie within a subroutine that is repeated over a range of results. ● Nested loops might be restructured as multi-dimensional parallel tasks. ● Find as many tasks as possible that do not depend on each other. Finding a group of routines that do not share memory or depend on each other’s results is usually a good indicator that you can perform them in parallel. If you have enough such tasks, you can consider writing a task-parallel OpenCL program. ● Due to the overhead of setting up a context and transferring data over a PCI bus, you must be processing a fairly large data set before you see any benefits from using OpenCL. The exact point at which you start to see benefits depends on the OpenCL implementation and the hardware being used, so you will have to experiment to see how fast you can get your algorithm to execute. In general, a high ratio of computation to data access and lots of mathematical computations are good for OpenCL programs. Identifying Parallelizable Routines 2012-07-23 | © 2012 Apple Inc. All Rights Reserved. 31Developers already use Grand Central Dispatch (GCD) queues to implement concurrency in their applications. OS X v10.7 adds the ability to enqueue work coded as OpenCL kernels to GCD queues backed by OpenCL compute devices. You can use GCD with OS X v10.7 OpenCL to: ● Investigate the computational environment in which your OpenCL application is running. You can use OS X v10.7 OpenCL to learn about the devicesin the system that would be best for performing particular OpenCL computations and to enqueue kernels to devices: ● You can find out about the computational power and technical characteristics of each OpenCL-capable device in the system. See “Discovering Available Compute Devices” (page 32). ● GCD can suggest which OpenCL device(s) would be best for running a particular kernel. ● You can obtain recommendations about how to configure the kernel - get the suggested optimal size of the workgroup for each kernel on any particular device. See “Obtaining the Kernel’s Workgroup Size” (page 35). ● Enqueue the kernel. ● Synchronize work between the host and OpenCL devices and synchronize work between devices. Your host can wait on completion of work in all queues (See “Using GCD To Synchronize A Host With OpenCL” (page 69)) or one queue can wait on completion of another queue (See “Synchronizing Multiple Queues” (page 75)). Discovering Available Compute Devices OpenCL kernels assume a Single instruction, Multiple Data (SIMD) parallel model of computation. This means (roughly) that you have a large amount of data divided into chunks, and you want the kernel to perform the same computation on each chunk. Some SIMD algorithms will execute better on a CPU rather than on a GPU, or on one GPU rather than another, depending on many factors. Tools in OS X version 7 and later facilitate discovery of the types of devices that are available to process data. A context is needed to share memory objects between devices. 
If you use the OS X v10.7 gcl_ APIs, you can simply retrieve and use the default global context; no context creation is needed.

Note: If you are using APIs defined in the OpenCL specification, you do need to create your own contexts.

An OpenCL context is similar to an OpenGL sharegroup. A sharegroup is a set of tools that allows blocks of memory to be accessed by both a GPU and a CPU. See "OpenCL/OpenGL Interoperation: Data Sharing".

When you retrieve the default global context in OS X v10.7 OpenCL, you can find out about the environment in which OpenCL kernels execute. The context includes the set of devices, the memory accessible to those devices, and one or more queues used to schedule execution of one or more kernels. From the context, you can discover the types of devices in the system and can obtain recommendations as to the optimal configuration for running a kernel. Your application can call on GCD to create a queue for a particular type of device or to create a queue for a specific device.

1. Call the gcl_get_context function to get the "global" OpenCL context that OS X v10.7 creates for you.

   Note: Because this context is created by OpenCL, you should not retain or release it. (You should retain and release any contexts that you explicitly create.)

2. Call the clGetContextInfo function (part of the standard OpenCL API) with the CL_CONTEXT_DEVICES parameter, passing the context you just obtained. This call returns a list of the IDs of the OpenCL devices attached to the context.

3. When you have the IDs of the devices in the context, you can call the clGetDeviceInfo function for each of the devices to obtain information about the device. The sample code in Listing 5-1 requests the vendor (the manufacturer) and the device name. You could also use the clGetDeviceInfo function to request more technical information, such as the number of compute cores, the cache line size, and so on. The types of information you can obtain are described in the OpenCL 1.1 specification. You can choose to send different types of work to a device depending upon its characteristics and capabilities.

Enqueueing A Kernel To A Dispatch Queue

You must use an OpenCL-compatible dispatch queue for your OpenCL work. You can create a queue for a particular device in the system or you can create a queue for a particular type of device. You can enqueue as many kernels on each queue as you choose, and you can create as many different queues as you would like:

● To create a dispatch queue that runs on any device of a particular type, call the gcl_create_dispatch_queue function, passing CL_DEVICE_TYPE_CPU, CL_DEVICE_TYPE_GPU, or CL_DEVICE_TYPE_ACCELERATOR as the first parameter.

  Note: The dispatch queue you create must be attached to a particular device type. You cannot create an OpenCL-compatible dispatch queue for the default device type (CL_DEVICE_TYPE_DEFAULT).

  OS X v10.7 OpenCL creates a dispatch queue that uses a GPU or CPU, depending upon the device type you specified. If more than one GPU is available, OS X v10.7 OpenCL enqueues the kernel on the device of the type you specify that has the largest number of compute cores.

  Note: If you've created your dispatch queue specifying CL_DEVICE_TYPE_GPU, you won't know which GPU is being used.
  Call the gcl_get_device_id_with_dispatch_queue function to find out which device is actually attached to a given dispatch queue.

● If you know exactly which OpenCL device ID you want to use because you've obtained it from the context and examined it with the clGetDeviceInfo function, call the gcl_create_dispatch_queue function with CL_DEVICE_TYPE_USE_ID and pass the ID of the device you want to use.

Both of these methods are illustrated in the sample code. See Listing 5-1.

Note: Always call the dispatch_release function on any dispatch queue you created with the gcl_create_dispatch_queue function once you are done with it. All of the example code contains this call.

Once you have created a queue, you can enqueue as many kernels onto that queue as necessary. Or, you can create additional queues with different characteristics. For more information about Grand Central Dispatch queues, see Concurrency Programming Guide: Dispatch Queues.

Determining the Characteristics Of A Kernel On A Device

To obtain information specific to a kernel/device pair, including how much private and local memory the kernel will consume on that device, as well as the workgroup size OpenCL considers optimal for execution, call the gcl_get_kernel_block_workgroup_info function. This information is useful when you are tuning performance for a particular device or debugging performance issues.

Obtaining the Kernel's Workgroup Size

To find out what OpenCL considers the best workgroup size for executing a kernel on a particular device, call the gcl_get_kernel_block_workgroup_info function. You can use this value as the cl_ndrange.local_work_size for a kernel on a particular device.

Note: You must call this API inside a block submitted to a GCD dispatch queue created using the gcl_create_dispatch_queue function.

In Listing 5-2, notice that we first execute this method in a block on a dispatch queue we've created with OpenCL, requesting the local memory size:

gcl_get_kernel_block_workgroup_info(square_kernel,
                                    CL_KERNEL_LOCAL_MEM_SIZE,
                                    sizeof(local_memsize),
                                    &local_memsize, NULL);

Then we call the gcl_get_kernel_block_workgroup_info function to ask OpenCL to return what it considers to be the optimal workgroup size for this kernel on this device:

gcl_get_kernel_block_workgroup_info(square_kernel, // this kernel
                                    CL_KERNEL_WORK_GROUP_SIZE,
                                    sizeof(workgroup_size),
                                    &workgroup_size, NULL);
fprintf(stdout, "Workgroup size: %ld\n", workgroup_size);

Finally, we call the gcl_get_kernel_block_workgroup_info function once more to ask OpenCL for a workgroup size multiple. This is a performance hint based on the capabilities of the underlying device:

gcl_get_kernel_block_workgroup_info(square_kernel, // this kernel
                                    CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE,
                                    sizeof(preferred_workgroup_size_multiple),
                                    &preferred_workgroup_size_multiple, NULL);

You can now use these workgroup values to craft an appropriate cl_ndrange structure to use in launching your kernel.

cl_ndrange range = {
    1,              // The number of dimensions to use.

    {0, 0, 0},      // The offset in each dimension. We want to
                    // process ALL of our data, so this is 0 for
                    // our test case.
                    // Always pass an offset for each of the
                    // three dimensions even though the workgroup
                    // may have fewer than three dimensions.

    {NUM_VALUES, 0, 0}, // The global range -- this is how many items
                    // IN TOTAL in each dimension you want to
                    // process.
                    // Always pass the global range for each of the
                    // three dimensions even though the workgroup
                    // may have fewer than three dimensions.

    {workgroup_size, 0, 0} // The local size of each workgroup. This
                    // determines the number of work items per
                    // workgroup. It indirectly affects the
                    // number of workgroups, since the global
                    // size / local size yields the number of
                    // workgroups. So in our test case, we will
                    // have NUM_VALUES/workgroup_size workgroups.
                    // Always pass the workgroup size for each of the
                    // three dimensions even though the workgroup
                    // may have fewer than three dimensions.
};

Sample Code: Creating a Dispatch Queue

Listing 5-1 demonstrates how to get the global OpenCL context and how to ask that context about the devices it contains. It also shows how to create a dispatch queue by asking for a device type (CPU or GPU), and by specifying the queue's OpenCL device directly. Listing 5-2 shows how to obtain workgroup information -- useful for obtaining peak performance -- from the kernel block.

Listing 5-1  Creating a dispatch queue

#include <stdio.h>

// Include OpenCL/opencl.h to include everything you need for OpenCL
// development on OS X v10.7.
#include <OpenCL/opencl.h>

// In this example, mykernel.cl.h is the header file that contains our kernel
// block declaration. This header file is generated by Xcode.
#include "mykernel.cl.h"

static void print_device_info(cl_device_id device) {
    char name[128];
    char vendor[128];
    clGetDeviceInfo(device, CL_DEVICE_NAME, 128, name, NULL);
    clGetDeviceInfo(device, CL_DEVICE_VENDOR, 128, vendor, NULL);
    fprintf(stdout, "%s : %s\n", vendor, name);
}

#pragma mark -
#pragma mark Hello World - Sample 1

// Demonstrates how to get the global OpenCL context, and how to ask that
// context about the devices it contains. It also shows how
// to create a dispatch queue by asking for a device type (CPU or GPU) and
// by specifying the queue's OpenCL device directly.
static void hello_world_sample1()
{
    int i;

    // Ask for the global OpenCL context:
    // Note: If you will not be enqueueing to a specific device, you do not
    // need to retrieve the context.
    cl_context context = gcl_get_context();

    // Query this context to see what kinds of devices are available to us.
    size_t length;
    cl_device_id devices[8];
    clGetContextInfo(context, CL_CONTEXT_DEVICES, sizeof(devices), devices, &length);

    // Walk over these devices, printing out some basic information. We could
    // query any of the information available about the device here.
    fprintf(stdout, "The following devices are available for use:\n");
    int num_devices = (int)(length / sizeof(cl_device_id));
    for (i = 0; i < num_devices; i++) {
        print_device_info(devices[i]);
    }

    // To do any work, you need to create a dispatch queue associated
    // with some OpenCL device. You can either let the system give you
    // a GPU -- perhaps the only GPU -- or the CPU device. Or, you can
    // create a dispatch queue with a cl_device_id you specify. This
    // device id comes from the OpenCL context, as above. Below are three
    // examples.
    // 1. Ask for a GPU-based dispatch queue; notice that we do not provide a
    // device id -- we let the system give us the most capable GPU.
    dispatch_queue_t gpu_queue = gcl_create_dispatch_queue(CL_DEVICE_TYPE_GPU, NULL);

    // Get the device from the queue, so we can ask OpenCL questions about it.
    // Note that we check to make sure there WAS an OpenCL-capable GPU in the
    // system by checking against a NULL return value.
    if (gpu_queue != NULL) {
        cl_device_id gpu_device = gcl_get_device_id_with_dispatch_queue(gpu_queue);
        fprintf(stdout, "\nAsking for CL_DEVICE_TYPE_GPU gives us:\n");
        print_device_info(gpu_device);
    } else {
        fprintf(stdout, "\nYour system does not contain an OpenCL-compatible GPU.\n");
    }

    // 2. Let's try the same thing for CL_DEVICE_TYPE_CPU. All Macintosh
    // systems have a CPU OpenCL device, so we don't have to worry about
    // checking for NULL, as we did in the case of a GPU.
    dispatch_queue_t cpu_queue = gcl_create_dispatch_queue(CL_DEVICE_TYPE_CPU, NULL);
    cl_device_id cpu_device = gcl_get_device_id_with_dispatch_queue(cpu_queue);
    fprintf(stdout, "\nAsking for CL_DEVICE_TYPE_CPU gives us:\n");
    print_device_info(cpu_device);

    // 3. Or perhaps you are in a situation where you want a specific device
    // from the list of devices you found on the context.
    // Notice the difference here:
    // We pass CL_DEVICE_TYPE_USE_ID and a device_id. We'll just use the
    // first device on the context from above, whatever that might be.
    dispatch_queue_t custom_queue =
        gcl_create_dispatch_queue(CL_DEVICE_TYPE_USE_ID, devices[0]);
    cl_device_id custom_device = gcl_get_device_id_with_dispatch_queue(custom_queue);
    fprintf(stdout, "\nAsking for CL_DEVICE_TYPE_USE_ID and our own device gives us:\n");
    print_device_info(custom_device);

    // Now we could use any of these dispatch queues to run some kernels!

    // Use the GCD API to free your queues.
    dispatch_release(custom_queue);
    dispatch_release(cpu_queue);
    if (gpu_queue != NULL) dispatch_release(gpu_queue);
}

Listing 5-2  Obtaining workgroup information

#pragma mark -
#pragma mark Hello World - Sample 2

// This listing shows how to obtain workgroup info --
// useful for obtaining peak performance -- from the kernel block.
static void hello_world_sample2()
{
    // Get a queue backed by a GPU for running our squaring kernel.
    dispatch_queue_t queue = gcl_create_dispatch_queue(CL_DEVICE_TYPE_GPU, NULL);

    // Did we get a GPU? If not, fall back to the CPU device.
    if (queue == NULL) {
        queue = gcl_create_dispatch_queue(CL_DEVICE_TYPE_CPU, NULL);
    }

    // In any case, print out the device we're using:
    fprintf(stdout, "\nExamining workgroup info for square_kernel on device ");
    print_device_info(gcl_get_device_id_with_dispatch_queue(queue));

    // Now find out what OpenCL thinks is the best workgroup size for
    // executing this kernel on this particular device. Notice that we have
    // to execute this method in a block, on a dispatch queue we've created
    // with OpenCL.
    dispatch_sync(queue, ^{
        size_t wgs, preferred_wgs_multiple;
        cl_ulong local_memsize, private_memsize;

        // The next two calls give us information about how much
        // memory, local and private, is used by the kernel on this
        // particular device.
        gcl_get_kernel_block_workgroup_info(square_kernel,
                                            CL_KERNEL_LOCAL_MEM_SIZE,
                                            sizeof(local_memsize),
                                            &local_memsize, NULL);
        fprintf(stdout, "Local memory size: %lld\n", local_memsize);

        gcl_get_kernel_block_workgroup_info(square_kernel,
                                            CL_KERNEL_PRIVATE_MEM_SIZE,
                                            sizeof(private_memsize),
                                            &private_memsize, NULL);
        fprintf(stdout, "Private memory size: %lld\n", private_memsize);

        // Here we ask OpenCL what it considers the optimal workgroup
        // size for this kernel on this device.
        gcl_get_kernel_block_workgroup_info(square_kernel,
                                            CL_KERNEL_WORK_GROUP_SIZE,
                                            sizeof(wgs), &wgs, NULL);
        fprintf(stdout, "Workgroup size: %ld\n", wgs);

        // Finally, we can ask OpenCL for a workgroup size multiple.
        // This is a performance hint.
        gcl_get_kernel_block_workgroup_info(square_kernel,
                                            CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE,
                                            sizeof(preferred_wgs_multiple),
                                            &preferred_wgs_multiple, NULL);
        fprintf(stdout, "Preferred workgroup size multiple: %ld\n",
                preferred_wgs_multiple);

        // You could now use these workgroup values to craft an
        // appropriate cl_ndrange structure for use in launching your kernel.
    });

    dispatch_release(queue);
}

int main(int argc, const char* argv[])
{
    hello_world_sample1();
    hello_world_sample2();
}

Creating and Managing Memory Objects in OS X OpenCL

This chapter provides an overview of how memory is used with OpenCL (because this differs from the way memory is used in conventional programs), describes how buffers and images are created and used in OpenCL, and provides information about using memory in situations such as with IOSurfaces and OpenGL textures.

Overview

Like all computational processes, processes that run on OpenCL devices consist of:

● Data. The data accessed by OpenCL instructions exists as memory buffers and cl_image memory objects. Use image objects to represent 2D or 3D images (see "Creating and Using Images in OpenCL"); use buffer objects to contain other types of generic data (see "Creating and Using Buffers in OpenCL").
● Instructions (in kernel functions) that manipulate the data.

Host memory is distinct from OpenCL memory, even if the two are physically contiguous. Kernel instructions can only access data in the memory of OpenCL devices. The host computer can read and write device memory, but only to set it up and retrieve results. During computation, a device looks only in device memory, and the host stays out of its way. In other words, in OpenCL, you launch a set of work items against a bolus of data. While this data might have been passed to the OpenCL device by the host, the data resides on the OpenCL device at the time of execution. A kernel cannot read or write host memory; it can only access data in its own separate memory area. For many devices (such as GPUs), the OpenCL memory is a physically distinct piece of silicon. For other devices, although the memory is physically on the same chip, it can only be read and written by the OpenCL kernel code.

Workflow

The basic workflow with OpenCL is:
1. Create memory objects for use by OpenCL.

   The host requests that memory be set aside for it on the device. The host can ask for as many memory objects as it wants, up to the memory available.

2. Initialize the contents of the memory objects.

   a. The host can pass data to the device to be stored in its memory objects. This is the data that the kernel will process. The host can also instruct the device to leave some memory objects uninitialized so that when the kernel runs on the device, it can fill these memory objects as its output.

   b. The host instructs the device to execute the kernel, passing it the memory objects it has created on the OpenCL device as arguments. The host will wait until the kernel is done.

3. Execute the kernel.

   The kernel runs on the device, processing the data in the input memory and producing output to be stored in the designated output memory.

4. Read results from the memory objects.

   When the host detects that the kernel has completed its tasks, it copies the results the kernel stored in the designated output memory into memory the host can access.

5. Destroy the memory objects.

   Once the host has retrieved the output data, it instructs the device to free up the memory it had set aside for the kernel to use.

Memory Visibility

In a typical multi-device environment, memory is distributed between devices. No device can access all memory. For example, an OpenCL kernel resides in a separate memory space from the host that calls it. In order for the kernel to access the data it is to process, the data must be moved into the device's memory. However, transferring data between memory areas to allow different devices to work can result in considerable overhead. Minimize the amount of data being transferred to optimize performance. The host specifies the memory space for a given buffer when it declares each kernel argument.

The memory spaces as an OpenCL device would see them are:

Figure 6-1  Physical memory of an OpenCL system (host memory on the host; global/constant memory on the OpenCL device; each workgroup has its own local memory; each work item has its own private memory)

● Private memory. Each work item has memory that only it can see. This is its private memory.
● Local memory. Local memory is memory that work items within a work group can share. Local memory is useful if more than one work item in a group needs to use a particular chunk of global memory. You can write your OpenCL program so that one work item loads from global memory to local memory, and then the rest of the work items that need that piece of data can use the "local" copy. It takes GPU devices much less time to access local memory than to access global memory.
● Global memory. This is the (relatively) massive chunk of device memory that all work items can "see". Any work item can read or write a buffer declared to be in global memory.
● Constant memory. This is a specialized section of global memory for data that you, the programmer, know will remain constant throughout the execution of your kernel. It is typically more limited in size than global memory, but on many devices it is faster to access than other global memory.
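These memory spaces correspond to the address-space qualifiers in OpenCL-C. The following kernel is a minimal sketch showing how global, constant, local, and private data typically appear in kernel code; the kernel name, arguments, and computation are illustrative assumptions and do not come from the listings in this guide.

kernel void scale(global float *data,            // global memory: visible to all work items
                  constant float *coefficients,  // constant memory: read-only table of values
                  local float *scratch)          // local memory: shared by one workgroup
{
    size_t gid = get_global_id(0);
    size_t lid = get_local_id(0);

    // 'value' is an ordinary automatic variable, so it lives in private memory.
    float value = data[gid];

    // Each work item uses its own slot of the workgroup's local scratch space.
    scratch[lid] = value * coefficients[0];

    data[gid] = scratch[lid];
}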
Memory Consistency

Changes that a work item makes to global or local memory are not necessarily visible to other work items in its workgroup right away; a system's memory consistency model determines when such changes become visible. OpenCL uses what is called a relaxed memory consistency model, which means that:

● Work items can access data within their own private memory, local memory, constant memory, and global memory.
● Work items can share local memory during the execution of a workgroup. However, memory is only guaranteed to be consistent after specific synchronization points.

If a work item needs to read something that another work item has written, you must place a barrier in your OpenCL code at the point where you want the memory to be consistent. The barrier stops each work item until all other work items in the workgroup have "caught up" to it. That is why it works: every work item in the workgroup has completed its writes by that point, so it is safe to go on and read anything any work item in the group has written. See "Using GCD To Synchronize A Host With OpenCL".

Note: You can create a barrier that applies to local memory or to global memory, but consistency only applies to work items within a work group. There is no such thing as a global memory barrier that makes all work items in an execution wait. OpenCL only guarantees memory consistency at a barrier within a work group.
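The following is a minimal sketch of this pattern: a workgroup reduction in which every work item writes to local memory and no work item reads another's value until a barrier has been reached. The kernel name and arguments are illustrative assumptions, and the sketch assumes the workgroup size is a power of two.

kernel void workgroup_sum(global const float *input,
                          global float *partial_sums,
                          local float *scratch)
{
    size_t gid = get_global_id(0);
    size_t lid = get_local_id(0);
    size_t group_size = get_local_size(0);

    // Each work item writes one element into local memory...
    scratch[lid] = input[gid];

    // ...and no work item reads its neighbors' values until every
    // work item in the workgroup has reached this barrier.
    barrier(CLK_LOCAL_MEM_FENCE);

    // Tree reduction: half the work items add, then a quarter, and so on.
    for (size_t stride = group_size / 2; stride > 0; stride /= 2) {
        if (lid < stride) {
            scratch[lid] += scratch[lid + stride];
        }
        barrier(CLK_LOCAL_MEM_FENCE);
    }

    // Work item 0 of each workgroup writes that group's partial sum.
    if (lid == 0) {
        partial_sums[get_group_id(0)] = scratch[0];
    }
}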
Creating and Using Buffers in OpenCL

Representing Data With Buffer Objects

The OpenCL programming interface provides buffer objects for representing generic data in your OpenCL programs. Instead of having to convert your data to the domain of a specific type of hardware, OpenCL enables you to transfer your data as is to an OpenCL device via buffer objects and to operate on the data using the same language features that you are accustomed to in C.

OpenCL is designed to share data efficiently with OpenGL. Wherever possible, data is shared between OpenCL and OpenGL programs; it is not copied. Because transmitting data is costly, it is best to minimize reads and writes as much as possible. By packaging all of your host data into a buffer object that can remain on the device, you reduce the amount of data traffic necessary to process your data.

Allocating Memory For Buffer Objects In OS X v10.7

If you need a cl_mem object (you will need one if you are going to use a standard OpenCL API call), call the gcl_malloc function to allocate the memory, then call the gcl_create_buffer_from_ptr function to convert the handle gcl_malloc returns for use with the standard OpenCL API. To create buffer objects:

● void * gcl_malloc(size_t bytes, void *host_ptr, cl_malloc_flags flags)

  The gcl_malloc function returns a void * which is a memory object handle. The bytes parameter is the number of bytes to be allocated. The host_ptr parameter is a pointer to existing host memory that can back the allocation when CL_MEM_USE_HOST_PTR is passed; otherwise it can be NULL. The flags parameter can be 0 or CL_MEM_USE_HOST_PTR.

  Note: The void * value returned cannot be used to directly access the memory region on the host CPU. To access this memory region for reading and writing on the host CPU, use APIs such as gcl_memcpy that can be passed in a block to the GCD APIs that queue tasks for dispatch.

● cl_mem gcl_create_buffer_from_ptr(void *ptr)

  The gcl_create_buffer_from_ptr function creates a cl_mem buffer object from a ptr returned by gcl_malloc. The cl_mem object returned can be used by standard OpenCL API calls, enabling objects to be shared between the gcl_ APIs and the OpenCL API. The cl_mem object returned references the data store associated with the ptr parameter.

  Note: Be sure to release cl_mem objects created using gcl_create_buffer_from_ptr before freeing the underlying pointer using gcl_free.

Reading, Writing, and Copying Buffer Objects

You can create memory objects outside a dispatch queue, and the memory objects you create do not have to be associated with any particular device. But before a memory object is accessed by OpenCL, it must be associated with the device the data will be moving into and out of. You associate a memory object with its device in the dispatch queue.

After you've created a buffer object, you can enqueue reads, writes, and copies. You can call the following functions from your host application. They can be passed in blocks to the Grand Central Dispatch APIs that queue tasks for dispatch, such as dispatch_async. They enable you to move data to and from a host.

void gcl_memcpy(void *dst, const void *src, size_t size);

void gcl_memcpy_rect(void *dst, const void *src,
                     const size_t dst_origin[3], const size_t src_origin[3],
                     const size_t region[3],
                     size_t dst_row_pitch, size_t dst_slice_pitch,
                     size_t src_row_pitch, size_t src_slice_pitch);

void *gcl_map_ptr(void *ptr, cl_map_flags map_flags, size_t cb);

void *gcl_map_image(cl_image image, cl_map_flags map_flags,
                    const size_t origin[3], const size_t region[3]);

void gcl_unmap(void *ptr);

Kernel Support For Data Processing In OpenCL-C

By associating your buffer object with specific kernel arguments, you make it possible to process your data from the context of a kernel function. For example, in "Example: Allocating, Using, Releasing Buffer Objects", notice how the code sample treats the input data pointer much as you would treat a pointer in C. In this example the input data is an array of float values, and you can process each element of the float array by indexing into the pointer. The example does little more than multiply a value by itself using the * operator, but OpenCL-C provides a wide array of data types and operators that enable you to perform more complex arithmetic.

Because OpenCL-C is based on C99, you are free to process your data in OpenCL-C functions much as you would in C. Aside from recursion and function pointers, there are not many language features that C has that OpenCL-C doesn't have. In fact, OpenCL-C provides several beneficial features that the C programming language does not offer natively, such as optimized image access functions. OpenCL-C has built-in support for vector intrinsics and offers vector data types. The operators in OpenCL-C are overloaded, and performing arithmetic between vector data types is syntactically equivalent to performing arithmetic between scalar values. Refer to The OpenCL Specification for more details on the built-in functions and facilities of the OpenCL-C language.
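Putting the pieces together, here is a minimal sketch of the host-side buffer lifecycle described above: allocate with gcl_malloc, move data with gcl_memcpy inside a block on an OpenCL dispatch queue, invoke the kernel block, and free with gcl_free. The queue setup, the buffer size, and the square_kernel block (assumed to be generated by Xcode from a .cl file) are assumptions made for this example.

#include <stdio.h>
#include <stdlib.h>
#include <OpenCL/opencl.h>
#include "mykernel.cl.h"   // assumed to declare square_kernel, generated by Xcode

#define COUNT 1024

static void square_on_device(void)
{
    dispatch_queue_t queue = gcl_create_dispatch_queue(CL_DEVICE_TYPE_GPU, NULL);
    if (queue == NULL)
        queue = gcl_create_dispatch_queue(CL_DEVICE_TYPE_CPU, NULL);

    // Host-side data.
    cl_float *host_in  = (cl_float *)malloc(sizeof(cl_float) * COUNT);
    cl_float *host_out = (cl_float *)malloc(sizeof(cl_float) * COUNT);
    for (unsigned int i = 0; i < COUNT; i++) host_in[i] = (cl_float)i;

    // Device-side buffer objects.
    void *dev_in  = gcl_malloc(sizeof(cl_float) * COUNT, NULL, 0);
    void *dev_out = gcl_malloc(sizeof(cl_float) * COUNT, NULL, 0);

    dispatch_sync(queue, ^{
        // Move the input data onto the device.
        gcl_memcpy(dev_in, host_in, sizeof(cl_float) * COUNT);

        cl_ndrange range = { 1, {0, 0, 0}, {COUNT, 0, 0}, {0, 0, 0} };
        square_kernel(&range, (cl_float *)dev_in, (cl_float *)dev_out);

        // Move the results back to the host.
        gcl_memcpy(host_out, dev_out, sizeof(cl_float) * COUNT);
    });

    gcl_free(dev_in);
    gcl_free(dev_out);
    free(host_in);
    free(host_out);
    dispatch_release(queue);
}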
Releasing Buffer Objects

To avoid memory leaks, free buffer objects when they are no longer needed. Call gcl_free to free buffer objects created using gcl_malloc:

void gcl_free(void *ptr);

The ptr parameter is the handle of the buffer object to be released.

OpenCL uses a reference-counting system to keep track of the memory objects currently being used. The reference count represents how many other objects hold references to a particular memory object. Any time you create a buffer object, it immediately receives a reference count of 1. Any time another object would like to maintain a reference to it, that object should increment the buffer object's reference count by calling the clRetainMemObject function. When an object wishes to relinquish its reference to a buffer object, it should call clReleaseMemObject. When the reference count for a buffer object reaches zero, OpenCL frees it, returning the memory to the system and making any persisting references to the buffer object invalid.

Setting the finalizer

A finalizer is a function member of a reference class that is called automatically by the garbage collector when destroying an object. To specify which finalizer function the garbage collector calls for any objects created by gcl_malloc or the gcl_create_*** APIs (such as gcl_create_image), call:

void gcl_set_finalizer(void *object,
                       void (*gcl_pfn_finalizer)(void *object, void *user_data),
                       void *user_data);

Example: Allocating, Using, Releasing Buffer Objects

In the following example, the host creates one input buffer and one output buffer, initializes the input buffer, calls the kernel to square each value in the input buffer, then checks the results.

Listing 6-1  Sample host function creates buffers then calls kernel function

#include <stdio.h>
#include <stdlib.h>
#include <OpenCL/opencl.h>

// Include the automatically-generated header which provides the kernel block
// declaration.
#include "kernels.cl.h"

#define COUNT 2048

static void display_device(cl_device_id device)
{
    char name_buf[128];
    char vendor_buf[128];
    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(char)*128, name_buf, NULL);
    clGetDeviceInfo(device, CL_DEVICE_VENDOR, sizeof(char)*128, vendor_buf, NULL);
    fprintf(stdout, "Using OpenCL device: %s %s\n", vendor_buf, name_buf);
}

static void buffer_test(const dispatch_queue_t dq)
{
    unsigned int i;

    // We'll use a semaphore to synchronize the host and OpenCL device.
    dispatch_semaphore_t dsema = dispatch_semaphore_create(0);

    // Create some input data on the _host_ ...
    cl_float* host_input = (float*)malloc(sizeof(cl_float) * COUNT);

    // ... and fill it with some initial data.
    for (i = 0; i < COUNT; i++)
        host_input[i] = (cl_float)i;

Creating and Using Images in OpenCL

#include <stdio.h>
#include <stdlib.h>
#include <OpenCL/opencl.h>

// Include the automatically-generated header which provides the kernel block
// declaration.
#include "kernels.cl.h"

#define COUNT 2048

static void display_device(cl_device_id device)
{
    char name_buf[128];
    char vendor_buf[128];
    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(char)*128, name_buf, NULL);
    clGetDeviceInfo(device, CL_DEVICE_VENDOR, sizeof(char)*128, vendor_buf, NULL);
    fprintf(stdout, "Using OpenCL device: %s %s\n", vendor_buf, name_buf);
}

static void image_test(const dispatch_queue_t dq)
{
    // As before, we use a dispatch semaphore to achieve synchronization between
    // the host application and the work done for us by the OpenCL device.
    dispatch_semaphore_t dsema = dispatch_semaphore_create(0);

    // Let's create a "fake" RGBA, 8-bit-per-channel image, solid red.
    // In a real program, you would use some real raster data.
    // Most OpenCL devices support a wide variety of image formats.
    unsigned int i;
    size_t height = 2048, width = 2048;
    unsigned int *pixels = (unsigned int*)malloc(sizeof(unsigned int) * width * height);
    for (i = 0; i < width*height; i++)
        pixels[i] = 0xFF0000FF;  // 0xAABBGGRR: 8 bits per channel, all red.

    // This image data is on the host side.
    // We need to create two OpenCL images in order to perform some
    // manipulations: one for the input and one for the output.

    // This describes the format of the image data.
    cl_image_format format;
    format.image_channel_order = CL_RGBA;
    format.image_channel_data_type = CL_UNSIGNED_INT8;

    cl_mem input_image  = gcl_create_image(&format, width, height, 1, NULL);
    cl_mem output_image = gcl_create_image(&format, width, height, 1, NULL);

    dispatch_async(dq, ^{
        // Our kernel is written such that each work item processes one pixel.
        // Thus, we execute over a two-dimensional range,
        // with the width and height of the image determining the dimensions
        // of execution.
        cl_ndrange range = {
            2,                // We're using a 2-dimensional execution.
            {0},              // Start at the beginning of the range.
            {width, height},  // Execute width * height work items.
            {0}               // And let OpenCL decide how to divide the work
                              // items into workgroups.
        };

        // Copy the host-side, initial pixel data to the image memory object on
        // the OpenCL device. We copy the whole image, but you could use the
        // origin and region parameters to specify an offset and sub-region of
        // the image, if you'd like.
        const size_t origin[3] = { 0, 0, 0 };
        const size_t region[3] = { width, height, 1 };
        gcl_copy_ptr_to_image(input_image, pixels, origin, region);

        // Do it!
        red_to_green_kernel(&range, input_image, output_image);

        // Read back the results; let's reuse the host-side buffer we started with.
        gcl_copy_image_to_ptr(pixels, output_image, origin, region);

        // Let the host know we're done.
        dispatch_semaphore_signal(dsema);
    });

    // Do other work, if you'd like...
    // ... but eventually, you will want to wait for OpenCL to finish up.
    dispatch_semaphore_wait(dsema, DISPATCH_TIME_FOREVER);

    // Alright -- we expect '0xFF00FF00' for each pixel. Solid green, all the way.
    int results_ok = 1;
    for (i = 0; i < width*height; i++) {
        if (pixels[i] != 0xFF00FF00) {
            fprintf(stdout,
                    "Oh no. Pixel %d was not correct. Expected 0xFF00FF00, saw %x\n",
                    i, pixels[i]);
            results_ok = 0;
            break;
        }
    }
    if (results_ok)
        fprintf(stdout, "Image results OK!\n");

    // Clean up device-side allocations.
    // Note that we use the "standard" OpenCL API here.
    clReleaseMemObject(input_image);
    clReleaseMemObject(output_image);

    // Clean up host-side allocations.
    free(pixels);
}

int main(int argc, const char * argv[])
{
    // Grab a CPU-based dispatch queue.
    dispatch_queue_t dq = gcl_create_dispatch_queue(CL_DEVICE_TYPE_CPU, NULL);
    if (!dq) {
        fprintf(stdout, "Unable to create a CPU-based dispatch queue.\n");
        exit(1);
    }

    // Display the OpenCL device associated with this dispatch queue.
    display_device(gcl_get_device_id_with_dispatch_queue(dq));

    image_test(dq);
    fprintf(stdout, "\nDone.\n\n");
    dispatch_release(dq);
}

Listing 6-6  Sample kernel swaps the red and green channels

// A simple kernel that swaps the red and green channels.
const sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE | CLK_FILTER_NEAREST;

kernel void red_to_green(read_only image2d_t input,
                         write_only image2d_t output)
{
    size_t x = get_global_id(0);
    size_t y = get_global_id(1);
    uint4 tap = read_imageui(input, sampler, (int2)(x,y));
    write_imageui(output, (int2)(x,y), tap.yxzw);
}

IOSurface and GL: What OpenCL Supports

How the Kernel Interacts With Data

Passing Data To a Kernel

Xcode uses your kernel code to automatically generate the kernel function prototype in the kernel header file. To pass data to a kernel, when you call the kernel from your host code, pass the memory objects as parameters, just as you would pass parameters to any other function. OpenCL kernel arguments can be scoped with a local or global qualifier, designating the memory storage for these arguments. In OS X v10.7, for arguments to OpenCL kernels that are declared with the local or __local address qualifier, the argument type used in the block declaration of the kernel is a size_t. Consider the following kernel, which has an argument declared with the local address qualifier:

kernel void foo(global float *a, local float *shared);

The extern declaration of this kernel block that is generated for you in the host code is:

extern void (^foo_kernel)(const cl_ndrange *ndrange, float *a, size_t shared);

Accessing Buffer Objects From a Kernel

Once the data has been enqueued, in order for a device to actually process this data, you have to make this data available to the work items that execute on the device. The following sections show you how to pass your data to the compute kernels for further processing. In your host application source code, it's your responsibility to:

● Prepare the input data.
● Create a buffer object of the appropriate size.
● Move the input data from host memory. You can do this using the clCreateBuffer function by pointing to the data on the host, or you can use the clEnqueueWriteBuffer function to enqueue a write from host memory.
● Associate the input data with the kernel's arguments. Use the clSetKernelArg function to do this.

Kernels written in OpenCL-C need a data structure to describe the data-parallel range over which to execute the kernel. In OS X v10.7, you use the cl_ndrange structure for this purpose:

typedef struct _cl_ndrange {
    size_t work_dim;
    size_t global_work_offset[3];
    size_t global_work_size[3];
    size_t local_work_size[3];
} cl_ndrange;
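Putting these pieces together, the following is a minimal sketch of how the foo kernel shown above might be invoked from the host. The queue, the buffer a, the range sizes, and the local-memory size are assumptions chosen for illustration; note that the local argument is passed as a byte count (a size_t) rather than as a pointer.

// A minimal sketch, assuming 'queue' was created with gcl_create_dispatch_queue
// and 'a' is a device buffer allocated with gcl_malloc.
dispatch_sync(queue, ^{
    cl_ndrange range = {
        1,              // one-dimensional range
        {0, 0, 0},      // no offset
        {1024, 0, 0},   // 1024 work items in total
        {64, 0, 0}      // 64 work items per workgroup
    };

    // The 'shared' argument is the number of bytes of local memory to
    // allocate for each workgroup: here, one float per work item.
    foo_kernel(&range, (float *)a, 64 * sizeof(float));
});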
Retrieving Results From a Kernel

To read the results back, call dispatch_sync. For example:

dispatch_sync(queue, ^{
    gcl_memcpy(ptr_c, device_c, num_floats * sizeof(float));
});

OpenCL/OpenGL Interoperation: Data Sharing

OpenGL (Open Graphics Library) is an API for writing applications that produce 2D and 3D computer graphics. See OpenGL. OpenCL/OpenGL interoperability enables applications to share data between OpenCL and OpenGL efficiently. You do not have to create multiple copies of the same content in OpenCL and OpenGL. An OpenCL memory object created from an OpenGL object and the original OpenGL object both refer to the same memory, and both GLSL (OpenGL Shading Language) shaders and OpenCL kernels can access the shared data. Another advantage of using OpenCL/OpenGL interoperability is that the overhead of passing data for compute/display purposes is greatly reduced. If the computation and rendering are performed on the GPU, the data need not be moved between the host and the GPU.

This chapter describes the OpenCL APIs that can be used to create OpenCL memory objects from OpenGL vertex buffer objects (VBOs), texture objects, and renderbuffer objects. An OpenCL buffer object may be created from an OpenGL buffer object. An OpenCL image object may be created from an OpenGL texture or renderbuffer object. To create an OpenCL memory object from an OpenGL object, an OpenCL context has to be created from an OpenGL share group (CGLShareGroup) object. An OpenGL share group object manages the OpenGL objects on the devices in the rendering context. When an OpenCL context is connected to an OpenGL share group object, both the OpenCL context and the OpenGL context can reference the same data objects.

Sharegroups

In the example illustrated in Figure 7-1, OpenCL is used to generate geometry on the GPU and OpenGL is used to render the shared geometry, also using the GPU. The OpenCL and OpenGL contexts reference the same sharegroup (CGLShareGroupObj). Both OpenCL and OpenGL see the same devices and can access the shared geometry. OpenGL sees the data as a VBO and OpenCL sees it as a buffer memory object.

Figure 7-1  OpenGL and OpenCL share data using sharegroups (the cl_context and gl_context on the GPU both reference the same CGLShareGroupObj, which holds the shared cl_mem buffer/VBO)

To use OpenCL/OpenGL interoperability:

1. Set the sharegroup:

   CGLContextObj cgl_context = CGLGetCurrentContext();
   CGLShareGroupObj sharegroup = CGLGetShareGroup(cgl_context);
   gcl_gl_set_sharegroup(sharegroup);
   ...

2. After the sharegroup has been set, you can create OpenCL memory objects from the existing OpenGL objects:

   ● Use the following API to create an OpenCL buffer object from an OpenGL buffer object:

     void * gcl_gl_create_ptr_from_buffer(GLuint bufobj);

   ● Use the following API to create an OpenCL image object from an OpenGL texture object:

     cl_image gcl_gl_create_image_from_texture(GLenum texture_target,
                                               GLint mip_level,
                                               GLuint texture);

   ● Use the following API to create an OpenCL 2D image object from an OpenGL renderbuffer object:

     cl_image gcl_gl_create_image_from_renderbuffer(GLuint render_buffer);

Synchronizing Access To Shared OpenCL/OpenGL Objects

To ensure data integrity, the application is responsible for synchronizing access to shared OpenCL/OpenGL objects by their respective APIs.
Failure to provide such synchronization may result in race conditions and other undefined behavior, including non-portability between implementations. For information about synchronizing OpenCL and OpenGL events and fences, see "Controlling OpenCL/OpenGL Interoperation With GCD".

Example

Controlling OpenCL/OpenGL Interoperation With GCD

An application running on a host (a CPU) can route work or data (possibly in disparate chunks) to a device using the standard OpenCL and OpenGL APIs and the OS X v10.7 extensions. While the device does the work it has been assigned, the host can continue working asynchronously. But at a certain point the host will need the results generated by the device performing the work it was assigned, so it will wait for the device to notify it that the assigned work has been completed.

OpenCL and OpenGL can also share work and data. Typically, OpenCL will be used to generate or modify buffer data which will then be rendered by OpenGL. Or, you might use OpenGL to create an image and then post-process it using OpenCL. In either case, you have to make sure you synchronize correctly. This chapter describes how to use GCD to synchronize:

● A host with OpenCL. See "Using GCD To Synchronize A Host With OpenCL".
● A host with OpenCL using a dispatch semaphore. See "Synchronizing A Host With OpenCL Using A Dispatch Semaphore".
● Multiple OpenCL queues. See "Synchronizing Multiple Queues".

You can still use the standard OpenCL and OpenGL APIs to obtain fine-grained synchronization when working on shared data, where you either:

● Call OpenGL then OpenCL
● Call OpenCL then OpenGL

See the OpenCL and OpenGL specifications for more information.

Using GCD To Synchronize A Host With OpenCL

In Listing 8-1, the host enqueues data in two queues to Grand Central Dispatch. The queued data is processed while the host continues to do its own work. When the host needs the results, it waits for both queues to complete their work.

Listing 8-1  Synchronizing the host with OpenCL processing

// Create a dispatch group so the host can wait for results from more than one kernel.
dispatch_group_t group = dispatch_group_create();

// Enqueue some of the data to the add_arrays_kernel on q0.
dispatch_group_async(group, q0, ^{
    // Because the call is asynchronous,
    // the host will not wait for the results.
    cl_ndrange ndrange = { 1, {0}, {N/2}, {0} };
    add_arrays_kernel(&ndrange, a, b, c);
});

// Enqueue some of the data to the add_arrays_kernel on q1.
dispatch_group_async(group, q1, ^{
    // Because the call is asynchronous,
    // the host will not wait for the results.
    cl_ndrange ndrange = { 1, {N/2}, {N/2}, {0} };
    add_arrays_kernel(&ndrange, a, b, c);
});

// Perform more work independent of the work being done by the kernels.
...

// At this point, the host needs the results before it can proceed.
// So it waits for the entire group (on both queues) to complete its work.
dispatch_group_wait(group, DISPATCH_TIME_FOREVER);

Synchronizing A Host With OpenCL Using A Dispatch Semaphore

The sample in Listing 8-2 illustrates how you can use OpenCL and OpenGL together in an application. In this example, we create two vertex buffer objects (VBOs) using OpenGL (not shown). These VBOs represent the positions of some objects in an N-body simulation.
We then create OpenCL memory objects from these VBOs (line [2]), which allows us to operate directly on the device memory containing this data in our OpenCL kernel. We update these positions according to our desired algorithm, expressed as a per-object operation in the included kernel, and then render the resulting VBO using OpenGL (commented, but not shown, at [4]).

Because we are updating positions using OpenCL on a dispatch queue that runs asynchronously with respect to the thread that does the OpenGL rendering, we need to take action to ensure that we do not render before the kernel has finished updating the positions. We use a mechanism that is common in GCD-compatible applications that require synchronization: a dispatch semaphore. Before entering the main loop, we create a dispatch_semaphore_t (line [1]). In the block that we submit to the dispatch queue created in OpenCL, just after our kernel call, we signal the semaphore. Meanwhile, the "main" thread of execution has been rolling along -- perhaps doing more work -- eventually arriving at the call to dispatch_semaphore_wait(...) (line [3]). The main thread stops at this point and waits until the post-kernel signal "flips" the semaphore. Once that occurs, the code can continue to the OpenGL rendering portion of the code, safe in the knowledge that the position update for this round is complete.

Figure 8-1  Rendering loop -- each pass on the main thread creates a new frame for display. (The main thread calls dispatch_async to submit the integration kernel to the OpenCL-created dispatch queue, may do other work, then blocks in dispatch_semaphore_wait until the kernel block signals that it is done; it then renders with OpenGL and calls glFlush.)

Listing 8-2  Synchronizing a host with OpenCL using a dispatch semaphore

// In this case, the kernel code will update the position of the vertex.
...

// The host code is:
// Create the dispatch semaphore. [1]

dispatch_queue_t queue;
dispatch_semaphore_t cl_gl_semaphore;
void *pos_gpu[2], *vel_gpu[2];
GLuint vbo[2];
float *host_pos_data, *host_vel_data;
int num_bodies;
int curr_read_index, curr_write_index;

// Extern OpenCL kernel declarations
extern void (^integrateNBodySystem_kernel)(const cl_ndrange *ndrange,
                                           float4 *newPos, float4 *newVel,
                                           float4 *oldPos, float4 *oldVel,
                                           float deltaTime, float damping,
                                           float softening, int numBodies,
                                           size_t sharedPos);

void initialize_cl()
{
    gcl_gl_set_sharegroup(CGLGetShareGroup(CGLGetCurrentContext()));

    // Create a CL dispatch queue.
    queue = gcl_create_dispatch_queue(CL_DEVICE_TYPE_GPU, NULL);

    // Create a dispatch semaphore used for CL/GL sharing.
    cl_gl_semaphore = dispatch_semaphore_create(0);

    // Create CL objects from GL VBOs that have already been created. [2]
    pos_gpu[0] = gcl_gl_create_ptr_from_buffer(vbo[0]);
    pos_gpu[1] = gcl_gl_create_ptr_from_buffer(vbo[1]);
    vel_gpu[0] = gcl_malloc(sizeof(float4)*num_bodies, NULL, 0);
    vel_gpu[1] = gcl_malloc(sizeof(float4)*num_bodies, NULL, 0);

    // Allocate and generate position and velocity data
    // in host_pos_data and host_vel_data.
    // ...

    // Initialize CL buffers with host position and velocity data.
    dispatch_async(queue, ^{
        gcl_memcpy(pos_gpu[curr_read_index], host_pos_data,
                   sizeof(float4)*num_bodies);
        gcl_memcpy(vel_gpu[curr_read_index], host_vel_data,
                   sizeof(float4)*num_bodies);
    });
}

void execute_cl_gl_main_loop()
{
    // Queue CL kernel to dispatch queue.
    dispatch_async(queue, ^{
        cl_ndrange ndrange = { 1, {0}, {num_bodies} };

        // Get the local workgroup size that the kernel can use for the
        // device associated with the queue.
        gcl_get_kernel_block_workgroup_info(integrateNBodySystem_kernel,
                                            CL_KERNEL_WORK_GROUP_SIZE,
                                            sizeof(size_t),
                                            &ndrange.local_work_size[0], NULL);

        // Queue CL kernel to dispatch queue.
        integrateNBodySystem_kernel(&ndrange,
                                    pos_gpu[curr_write_index],
                                    vel_gpu[curr_write_index],
                                    pos_gpu[curr_read_index],
                                    vel_gpu[curr_read_index],
                                    deltaTime, damping, softening, num_bodies,
                                    sizeof(float4)*ndrange.local_work_size[0]);

        // Signal the dispatch semaphore to indicate that
        // GL can now use the resources.
        dispatch_semaphore_signal(cl_gl_semaphore);
    });

    // Do work not related to the resources being used by CL in the dispatch block.

    // We need to use VBOs that are being used by CL, so wait for the CL commands
    // in the dispatch queue to be issued to the GPU's command buffer. [3]
    dispatch_semaphore_wait(cl_gl_semaphore, DISPATCH_TIME_FOREVER);

    // Bind the VBO that has been modified by the CL kernel.
    glBindBuffer(GL_ARRAY_BUFFER, vbo[curr_write_index]);

    // Now render with GL. [4]

    // Flush GL commands.
    glFlush();
}

void release_cl()
{
    gcl_free(pos_gpu[0]);
    gcl_free(pos_gpu[1]);
    gcl_free(vel_gpu[0]);
    gcl_free(vel_gpu[1]);
    dispatch_release(cl_gl_semaphore);
    dispatch_release(queue);
}

Synchronizing Multiple Queues

In Listing 8-3, the host enqueues data in two queues to Grand Central Dispatch. The second queue waits for the first queue to complete its processing before doing its work. The host application does not wait for completion of either queue.

Listing 8-3  Synchronizing multiple queues

// Create the dispatch group, which will consist of just the work
// that must be completed first.
dispatch_group_t group = dispatch_group_create();
dispatch_group_enter(group);

// Start work on the group.
dispatch_async(q0, ^{
    cl_ndrange ndrange = { 1, {0}, {N/2}, {0} };
    add_arrays_kernel(&ndrange, a, b, c);
    dispatch_group_leave(group);
});

// Simultaneously enqueue data on q1,
// but immediately wait until the work on q0 completes.
dispatch_async(q1, ^{
    // Wait for the work of the group to complete.
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    cl_ndrange ndrange = { 1, {N/2}, {N/2}, {0} };
    add_arrays_kernel(&ndrange, a, b, c);
});

// Host application does not wait.
Using IOSurfaces With OpenCL

An IOSurface is an abstraction for sharing image data. IOSurfaces are an efficient way to manage image memory: when you use an IOSurface and no copy is necessary, no time is wasted making a copy. An IOSurface transcends APIs, architectures, address spaces, and processes. You can get an ID for an IOSurface that can be passed from process to process, so that each of these completely separate programs can use one single IOSurface. This makes sharing an IOSurface between devices very easy. If you create an OpenCL image memory object from an existing IOSurface, you can modify the data contained in the IOSurface either in your "main program" running on the CPU, or in an OpenCL kernel running on either a GPU or a CPU.

Creating Or Obtaining An IOSurface

You can either create an IOSurface in code or request an IOSurface from another running process such as Photo Booth. The underlying texture transfer mechanism for an IOSurface combines GL_UNPACK_CLIENT_STORAGE_APPLE and GL_STORAGE_HINT_CACHED_APPLE. The transfer is done as a straight DMA to and from system memory and video memory, with no format conversions of any kind (other than some GPU-specific memory layout details). No matter how many different OpenGL contexts (in the same process or not) bind a texture to an IOSurface, they all share the same system memory and GPU memory copies of the data.

Creating An Image Object From An IOSurface

Once you've created or obtained an IOSurface, before you use it in OpenCL, you need to create an OpenCL image memory object from the IOSurface. When you create the memory object, you are not making a copy; the image memory object points at the same memory as the original IOSurface. This makes using the IOSurface very efficient. If you are using GCD to interact with the IOSurface, create the IOSurface-backed CL image as shown in Listing 9-1.

Listing 9-1  Creating an IOSurface-backed CL image

cl_image gcl_create_image(const cl_image_format *image_format,
                          size_t image_width,
                          size_t image_height,
                          size_t image_depth,
                          IOSurfaceRef io_surface);
// Creates a 2D image (depth = 0 or 1) or a 3D image (depth > 1).
// Can also be used to create an image from an IOSurfaceRef.

If you are using the standard OpenCL API and not using GCD to create an IOSurface-backed CL object, use clCreateImageFromIOSurface2D as shown in Listing 9-2.

Listing 9-2  Extracting an image from an IOSurface

cl_image_format image_format;
image_format.image_channel_order = CL_RGBA;
image_format.image_channel_data_type = CL_UNORM_INT8;

cl_mem image = clCreateImageFromIOSurface2D(context, flags, image_format,
                                            width, height, surface, &err);

Sharing the IOSurface With An OpenCL Device

Sharing an IOSurface in OpenCL is very simple. The key is to lock the IOSurface properly. If your CPU (host) is going to modify the IOSurface and then share it with an OpenCL device, you should lock the IOSurface before reading or writing to it, then unlock it before passing it to a kernel:

● The host creates or obtains the IOSurface and creates its CL image object.
● If the host will be writing to the IOSurface, the host write-locks the IOSurface: IOSurfaceLock(..., write type lock). If the host will only read from the IOSurface, the host read-locks it.
● The host writes to or reads from the IOSurface as necessary.
● The host unlocks the IOSurface: IOSurfaceUnlock(...). This tells the system that you changed the data. You can then use the IOSurface-backed image in OpenCL; the IOSurface object handles any necessary read locking internally for you.
● The host enqueues the OpenCL kernel, passing it the IOSurface-backed image.

The locking and unlocking are simply the minimal calls needed to give OS X enough information to ensure that each device always gets the latest, correct data. If you will be using OpenCL to modify the IOSurface, you don't have to lock it; just access the image memory object directly.
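The following is a minimal sketch of that sequence. It assumes an existing IOSurfaceRef named surface containing 8-bit RGBA pixels, an OpenCL dispatch queue named queue, a destination image output_image, and a hypothetical kernel block named my_kernel; none of these names come from this guide.

#include <IOSurface/IOSurface.h>
#include <OpenCL/opencl.h>

static void process_iosurface(IOSurfaceRef surface, dispatch_queue_t queue,
                              cl_mem output_image)
{
    size_t width  = IOSurfaceGetWidth(surface);
    size_t height = IOSurfaceGetHeight(surface);

    // The host writes to the IOSurface, so it takes a write lock first.
    IOSurfaceLock(surface, 0, NULL);
    void *base = IOSurfaceGetBaseAddress(surface);
    // ... fill or modify the pixel data at 'base' here ...
    (void)base;
    IOSurfaceUnlock(surface, 0, NULL);   // tell the system the data changed

    // Create an IOSurface-backed CL image; no copy is made.
    cl_image_format format = { CL_RGBA, CL_UNORM_INT8 };
    cl_mem input_image = gcl_create_image(&format, width, height, 1, surface);

    // Enqueue the kernel, passing the IOSurface-backed image.
    dispatch_sync(queue, ^{
        cl_ndrange range = { 2, {0}, {width, height}, {0} };
        my_kernel(&range, input_image, output_image);   // hypothetical kernel block
    });

    clReleaseMemObject(input_image);
}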
Autovectorizer

The autovectorizer detects operations in a scalar program that can be run in parallel and converts them into vector operations that can be handled efficiently by today's CPUs. The autovectorizer frees you to write simple scalar code. It then vectorizes that code for you so that its performance on the CPU is maximized while the same code runs on the GPU as well.

Note: Some GPUs also give higher performance when your code is vectorized. The autovectorizer does not operate on GPU code, but you can vectorize your GPU code manually. If you do manually vectorize your GPU code, test both vectorized and unvectorized versions to see which gives better performance on specific hardware.

Features

● Runs by default when compiling to the CPU.
● Packs work items together.
● Generates a loop over the entire workgroup.
● Can provide a performance improvement of up to the vector width of the CPU without additional effort.
● Allows you to write one scalar kernel that runs on the CPU or the GPU.

Without the Autovectorizer

A GPU will process scalar data efficiently, but the CPU needs vectorized data to keep it fully busy. This means that, without the autovectorizer, you either have to write multiple device-specific kernels that all perform the same function, or your performance will suffer.

OpenCL sees devices as having a number of compute cores and, within them, a number of processing elements. When scalar code runs on the CPU, it runs on each core but does not take advantage of the vector unit. For example, on an SSE4 machine, scalar code would run in one lane of the vector unit when it could be running in four lanes. The monitor would report that the CPU is completely busy because all the cores are running, but the CPU is actually only using a quarter of its vector width.

Figure 10-1  Before autovectorization: a simple float sent to the CPU and the GPU

If you pass simple floats into a kernel:

Listing 10-1  Passing single floats into a kernel

kernel void add_arrays(global float* a, global float* b, global float* c)
{
    size_t i = get_global_id(0);
    c[i] = a[i] + b[i];
}

the kernel will be doing a scalar addition, operating on one data element at a time. If you send the scalar float to the CPU and the GPU, the GPU becomes fully engaged in processing the data. On the CPU, although all the cores are busy, only one quarter of the vector width of the processing element in each core is used. If you instead pass float4* parameters to the kernel, that makes the addition a vector addition.
The addition is now CPU-only, specialized for that device. That would extract as much work as possible from the CPU but leave the GPU idle. In other words, without the autovectorizer, you would have to write multiple device-specific, non-scalar kernels, one for each type of device.

Writing Optimal Code For the CPU: Let the autovectorizer do the work for you

Do
● Write one simple (non-vectorized) kernel that can run on any device.

Don't
● Write device-specific optimizations.
● Write work item ID-dependent control flow, if possible. (If this occurs in many places in the code, it would likely prevent autovectorization from succeeding.)

What the autovectorizer does
● Runs by default whenever compiling kernels to a CPU.
● Packs work items together into vector instructions.
● Workgroup size can be increased if autovectorization is successful.
● Achieves performance improvements of up to the vector width of the CPU without additional effort on your part.

Vectorization Example

Xcode Setting: Auto-vectorizer
Type: Boolean
Default: YES
Command Line Flag: -cl-auto-vectorize-enable (if the setting is NO, the corresponding flag is -cl-auto-vectorize-disable)

Improving Performance

This chapter provides suggestions as to how to structure OpenCL code so that it runs most efficiently, describes how to measure the performance of OpenCL applications, and discusses what to expect and how to set performance objectives. It also provides an example of an iterative process of performance tuning of a simple image filter application.

Before Optimizing Code

Before you decide to optimize code, it is important to answer the following questions:
1. Does the code really need to be optimized? This is the most important question, and answering it is not trivial when the OpenCL code is used inside a large application. Answering this question is beyond the scope of this document, but it should be considered seriously before starting any optimization effort.
2. How do you measure the performance of the code?
3. What is the expected performance?

Reducing Overhead

Here are some general principles you can follow to improve the efficiency of your OpenCL code:
● Choose an efficient algorithm. OpenCL can take advantage of all the devices in the system, but only if the algorithms in your program are written to allow parallel processing. Consider the following when choosing an algorithm:
● The algorithm should be massively parallel, so that the computation can be carried out by a large number of independent work items. For data parallel calculations on a GPU, OpenCL works best when many work items are submitted to the device. When sending work to a CPU, which typically has fewer cores than the GPU, it is important to match the number of work items to the number of threads the CPU can effectively support.
● Most algorithms are memory-bound. Consequently, algorithms with the fewest memory accesses or algorithms with a high compute:memory ratio are usually best for OpenCL applications. The compute:memory ratio is the ratio between the number of floating-point operations and the number of bytes transferred to and from memory.
● OpenCL is most efficient on large datasets.
If possible, select an algorithm that works on large chunks of data or merge several smaller tasks into one. ● For data parallel calculations on a GPU, OpenCL works best where there are a lot of work items submitted to the device; however, some algorithms are much more efficient than others. ● Building an OpenCL program is computationally expensive and should ideally occur only once in a process. Be sure to take advantage of tools in OS X v10.7 that allow you to compile once and then run many times. If you do choose to compile a kernel during runtime, you will need to execute that kernel many times to amortize the cost of compiling it. You can save the binary after the first time the program is run and reuse the compiled code on subsequent invocations, but be prepared to recompile the kernel if the build fails because of an OpenCL revision or a change in the hardware of the host machine. You can also use bitcode generated by the OpenCL compiler instead of source code; compilation will be much faster and you won’t have to ship source code with your application. ● Moving data to or from OpenCL devices is expensive. OpenCL gives you complete control over allocation of memory and host-device memory transfers. Your program will run much faster if you allocate memory on the OpenCL device, move your data to the device, do as much computation as possible on the device, then move it off—rather than repeatedly going through write-compute-read cycles. ● Allocating and freeing OpenCL resources (memory objects, kernels, etc.) takes time. Reuse these objects whenever possible instead of releasing them and recreating them repeatedly. Note, however, that image objects can be reused only if they are the same size and pixel format as needed by the new image. ● Local memory is faster than global memory and private memory is even faster. When using memory on an OpenCL device, the local memory shared by all the work items in a single workgroup is faster than the global memory shared by all the workgroups on the device. Private memory, available only to a single work item, is even faster. ● Experiment with your code to find the kernel size that works best. Using smaller kernels can be efficient because each tiny kernel uses minimal resources and breaking a job down into many small kernels can allow for the creation of very large and efficient workgroups. On the other hand, starting each kernel does take between 10-100 µs. When each kernel exits, the the results must be stored in global memory. Because reading and writing to global memory is expensive, concatenating many small kernels into one large kernel may save considerable overhead. What kernel size is ideal for your application? To figure that out, you will have to experiment with your code to find the kernel size that provides optimal performance. Improving Performance Reducing Overhead 2012-07-23 | © 2012 Apple Inc. All Rights Reserved. 83● OpenCL events on the GPU are expensive. You can use eventsto coordinate execution between queues, but there is overhead to doing so. Use events only where needed; otherwise take advantage of the in-order properties of queues. ● When tuning for performance, it's really easy to introduce subtle errors that make the code run faster but produce bad output. After each iteration, always compare the output to a reference output computed on the host. For the same reason, be sure to keep all revisions in case you realize that you need to revert your code. 
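A minimal sketch of such a host-side reference check (the tolerance and the function name are illustrative):

#include <math.h>
#include <stddef.h>

// Compare device output against a reference computed on the host.
static int outputsMatch(const float *deviceOut, const float *referenceOut, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (fabsf(deviceOut[i] - referenceOut[i]) > 1e-5f)
            return 0;   // the tuned kernel no longer matches the reference
    }
    return 1;
}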
● Benchmark simple kernelsto estimate upper bounds and set optimization targets. See “Estimating Optimal Performance” (page 87). ● Use OpenCL built-in functions whenever possible. Optimal code will be generated for these functions. ● Balance precision and speed. GPUs are designed for graphics, where the requirements for precision are lower. The fastest variants are exposed in the OpenCL built-ins as fast_, half_, native_ functions. The program build options allow control of some speed optimizations. ● Take advantage of the memory subsystem of the device: ● When writing for the CPU, take advantage of the memory subsystem: reuse data while it’s still in L1 or L2 cache. To achieve this, use loop blocking and access memory in a cache-friendly pattern. ● On the GPU, the memory access pattern is the most important factor. Use faster memory levels (local memory, registers) to counter the effects of a sub-optimal pattern and to minimize accesses to the slower global memory. ● Avoid divergent execution: ● The CPU predicts the result of conditional jump instructions (corresponding to if, for, while, etc.) and starts processing the selected branch before knowing the effective result of the test. If the prediction is wrong, the entire pipeline needs to be flushed, and we lose some cycles. If possible, use conditional assignment instead. ● On the GPU, all threads scheduled together must execute the same code. As a consequence, when executing a conditional, all threads execute both branches, with their output disabled when they are in the wrong branch. It is best to avoid conditionals (replace them with a?x:y operators) or use built-in functions. ● Know what kind of device your code is executing on. OpenCL enables you to determine whether a device is a GPU or a CPU and how many devices are available. You can optimize your code for the hardware on which it is running. The same OpenCL code may run efficiently on both CPU and GPU, but optimal performance will usually require different code for each device. Improving Performance Reducing Overhead 2012-07-23 | © 2012 Apple Inc. All Rights Reserved. 84GPUs and CPUs have fundamentally different architectures and so require different optimizations for OpenCL. For example, whereas a CPU has a relatively small number of processing elements and a large amount of memory (both a large cache and a much larger amount of RAM available on the circuit board), a GPU has a relatively large number of processing elements and a comparatively small amount of memory. ● When writing for the CPU: ● Write simple scalar code first. The compiler and the autovectorizer work best on linear code and can generate near-optimal code with no effort required from you. If the autovectorizer providessub-optimal results, add vectors to the code by hand. ● Use the -cl-denorms-are-zero option in clBuildProgram, unless you need to use denormals (denormals are very small numbers with a slightly different floating-point representation). Denormals handling can be extremely slow (100x slower) and can lead to puzzling benchmark results. ● CPUs are not optimized for graphics processing. Avoid using images. CPUs provide no hardware acceleration for images, and image access is slower than the equivalent buffer access. See “Tuning OpenCL Code For the CPU” (page 89) for specific optimization strategies for CPUs. ● When writing for the GPU: ● Keep in mind that each family of GPUs has a unique architecture. To get the best possible performance from a GPU, you need to understand that GPU’s architecture. 
For example, for a particular GPU it might be more efficient to write to memory in blocks of a certain size, or it might be desirable to have the number of work items in each workgroup a multiple of a particular number. Consult the literature of the manufacturer of any GPU you wish to support to get details about that GPU’s architecture. This document considers only general principles that should be true for any GPU. ● Avoid slow host-device transfers: ● Aggregate several transfers into a single, larger one. ● Design algorithms to keep the data on the device as long as possible. ● Try to maximize the compute/memory ratio and the number of independent dependency chains by grouping the computation of several output elements into one single work item. The GPU has huge computing power and kernels will usually be memory-bound. ● Try to use image objectsinstead of buffers. For certain memory access patterns, the different hardware data path used when accessing images may be more efficient. This is especially the case when you use 16-bit floating-point data (half). See “Tuning OpenCL Code For the GPU” (page 99) for specific optimization strategies for GPUs. Improving Performance Reducing Overhead 2012-07-23 | © 2012 Apple Inc. All Rights Reserved. 85Measuring Performance Execution time of OpenCL commands can be measured on the host or on the device. It is best to measure performance on the host, because it is closer to the user-perceived execution time. Measuring Performance On the Host To measure OpenCL command execution time on the host: 1. Call the gettimeofday function to determine the start time. The gettimeofday function provides wall clock time with microsecond granularity: Listing 11-1 Using the gettimeofday function #include // Return wall clock time (s). double getRealTime() { struct timeval tv; gettimeofday(&tv,0); return (double)tv.tv_sec+1.0e-6*(double)tv.tv_usec; } 2. Call clFinish(queue) to block the host thread until all the OpenCL commandsin the queue are executed. 3. When OpenCL processing completes, call the getTimeOfDay function to determine the elapsed time. Measuring Performance On Devices The following APIs allows you to measure time taken for various OpenCL commands and kernels queued in a block to a dispatch queue. ● Start a timer Call this function to start the timer: cl_timer gcl_start_timer(void); ● Stop the timer Call this function to stop the timer and return the elapsed time in seconds between when the call to cl_start_timer associated with the timer parameter and when commands & kernelsin the block have finished execution: Improving Performance Measuring Performance 2012-07-23 | © 2012 Apple Inc. All Rights Reserved. 86double gcl_stop_timer(cl_timer timer); Measuring execution time of several consecutive calls to the same kernel(s) usually gives more reliable results. “Warming-up” the device also improves consistency of benchmarking results. Listing 11-2 (page 87) shows an example of benchmarking loop that can be included in kernel code: Listing 11-2 Sample benchmarking loop on the kernel const int iter = 10; // number of iterations to benchmark cl_timer blockTimer; for (int it=-2; it } clFinish(queue); gcl_stop_timer(blockTimer); // t = execution time for one iteration (s) Estimating Optimal Performance Before optimizing code, it is best to know what kind of performance is achievable. The main factor determining the execution speed of an OpenCL kernel is memory usage. This is the case for both CPU and GPU devices. 
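A benchmarking loop of the kind described under "Measuring Performance On Devices" might be structured as in the following sketch; the kernel dispatch, the queue, and the two warm-up iterations are assumptions for illustration:

const int iter = 10;                    // number of iterations to benchmark
cl_timer blockTimer;
for (int it = -2; it < iter; it++) {
    if (it == 0)
        blockTimer = gcl_start_timer(); // the first two (warm-up) iterations are not timed
    // ... enqueue the kernel(s) being benchmarked here ...
}
clFinish(queue);                        // wait for all queued work to finish
double t = gcl_stop_timer(blockTimer) / (double)iter;   // t = execution time for one iteration (s)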
Benchmarking the speed of the kernel function in Listing 11-3 (page 87) provides a way to estimate the memory speed of an OpenCL device. Listing 11-3 Kernel for estimating performance kernel void copyBuffer(global const float * in,global float * out) { int i = get_global_id(0); out[i] = in[i]; // R+W one float } Improving Performance Estimating Optimal Performance 2012-07-23 | © 2012 Apple Inc. All Rights Reserved. 87On our test machine we measured the following memory copy speeds for buffer sizes ranging from 1KiB to 16MiB: Figure 11-1 Memory copy speed in GB/s (read+write) vs buffer size 100 1 KiB 4 KiB 16 KiB 64 KiB 256 KiB 1 MiB 4 MiB 16 MiB 75 50 25 0 Standard C library memcpy (running on one single thread) The OpenCL code running on the CPU The OpenCL code running on the GPU Several interesting observations can be made from these curves: ● The measured cost of invoking the OpenCL kernels is in the 10-20 µs range, something like 50,000 CPU clock cycles. For small tasks, it will be larger or comparable to the actual cost of the computation. ● The memcpy curve showsthe 4 different levels of the CPU memory hierarchy: L1 cache (90 GB/s), L2 cache (50 GB/s), L3 cache (30 GB/s), and external memory (12 GB/s). ● The OpenCL GPU curve shows how GPU memory runs much faster than the CPU external memory. We reach 36 GB/s for this mobile GPU, and some desktop GPUs can reach 160 GB/s. Important: OpenCL is more efficient when data size increases. Try to process larger problems in fewer kernel calls. The asymptotic (maximum) value memory speed can be used to estimate the speed of a memory-bound algorithm on large data. Improving Performance Estimating Optimal Performance 2012-07-23 | © 2012 Apple Inc. All Rights Reserved. 88Take, for example, the following boxAvg kernel. It takes a single channel floating point image as input, and computes a single channel floating point image where each output pixel (x,y) is the average value of all pixels in a square box centered at (x,y). A w by h image is stored in a buffer float * A, where pixel (x,y) is stored in A[x+w*y]. Here is a first version of the code, before optimization: constant int RANGE = 2; kernel void boxAvg1(int w, int h, global const float * in, global float * out) { int x = get_global_id(0); // pixel to process is (x,y) int y = get_global_id(1); float sumA = 0.0f; // sum of pixel values float sum1 = 0.0f; // number of pixels for (int dy=-RANGE;dy<=RANGE;dy++) for (int dx=-RANGE;dx<=RANGE;dx++) { int xx = x + dx; int yy = y + dy; // Accumulate if inside image if (xx>=0 && xx=0 && yy=0 && xx=0 && yy=0 && xx= 0) { sumA -= inRow[x-RANGE-1]; sum1 -= 1.0f; } if (x+RANGE < w) { sumA += inRow[x+RANGE]; sum1 += 1.0f; } // insert x+RANGE // Store current value out[x+w*y] = sumA/sum1; Improving Performance Tuning OpenCL Code For the CPU 2012-07-23 | © 2012 Apple Inc. All Rights Reserved. 94} } In this variant, we have reduced the memory accesses from 6 to 3 per iteration of the x loop. The execution speed of this variant is now 1366 Mpix/s (and only 822 Mpix/s without the autovectorizer). This is 91% of our upper bound. We can move the conditionals and the division out of the x loop by splitting it into three parts: Listing 11-7 Modify the horizontal pass by moving division and conditionals out of the inner loop // Horizontal pass v4. 
Global work size: H kernel void boxAvgH4(int w, int h, global const float * in, global float * out) { int y = get_global_id(0); // row to process is y global const float * inRow = in + y*w; // beginning of input row global float * outRow = out + y*w; // beginning of output row float sumA = 0.0f; float sum1 = 0.0f; // Left border int x = -RANGE; for (; x<=RANGE; x++) { // Here, sumA corresponds to segment 0..x+RANGE-1, update to 0..x+RANGE. sumA += inRow[x+RANGE]; sum1 += 1.0f; if (x >= 0) outRow[x] = sumA/sum1; } // x is RANGE+1 here // Internal pixels float k = 1.0f/(float)(2*RANGE+1); // constant weight for internal pixels for (; x+RANGE= number of CPU cores. kernel void boxAvgV3(int w,int h,global const float * in,global float * out) { // Numer of rows to process in each work item (rounded up) int rowsPerItem = (h+get_global_size(0)-1)/get_global_size(0); int y0 = rowsPerItem * get_global_id(0); // we update the range Y0..Y1-1 int y1 = min(h, y0 + rowsPerItem); for (int y=y0; y= number of CPU cores. // AUX[w*global_size(0)] is temporary storage, 1 row for each work item. kernel void boxAvg2(int w, int h, global const float * in, global float * out, global float * aux) { // Number of rows to process in each work item (rounded up) int rowsPerItem = (h+get_global_size(0)-1)/get_global_size(0); int y0 = rowsPerItem * get_global_id(0); // we update the range Y0..Y1-1 int y1 = y0 + rowsPerItem; aux += get_global_id(0) * w; // point to our work item’s row of temporary storage float k = 1.0f/(float)(2*RANGE+1); // constant weight for internal pixels // Process our rows. We need to process extra RANGE rows before and after. for (int y=y0-RANGE; y=h) continue; // out of range // Compute horizontal pass in AUX. // The boxAvg4 code goes here on input row y // The output is stored in AUX[W]. // Accumulate this row on output rows Y-RANGE..Y+RANGE for (int dy=-RANGE; dy<=RANGE; dy++) { int yy = y + dy; if (yy < max(0, y0) || yy >= min(h, y1)) continue; // out of range // Get number of rows accumulated in row YY, to get the weight int nr = 1 + min(h-1, yy+RANGE)-max(0, yy-RANGE); float u = 1.0f/(float)nr; // Accumulate AUX in row YY global float4 * outRow4 = (global float4 *)(out + w*yy); global float4 * aux4 = (global float4 *)(aux); for (int x=0; x<(w/4); x++) outRow4[x] += u * aux4[x]; } } } Thisfused version runs at 1166 Mpix/s. Some rows will be processed twice,since we have to compute horizontal filters on rows y0-RANGE to y1+RANGE-1 to update output rows y0 to y1-1. At any given time during the execution, we will access one row in aux, one input row, and 2*RANGE+1 output rows. For a 4096x4096 image, each row is 16 KiB, and all 7 rows fit in L2 cache. Important: Merging two kernels called one after the other can reduce memory accesses, and works on smaller data chunks fitting in faster cache levels, instead of of forcing the two kernels to resort to communicate via via full round trips to external memory. Tuning OpenCL Code For the GPU The conditions to efficiently use a modern GPU are similar to the conditions we listed for the CPU, but with a few notable differences. Efficient GPU optimization requires: ● Scheduling a large number of work items to use all resources and hide execution latency. ● Using the GPU memory hierarchy efficiently. Improving Performance Tuning OpenCL Code For the GPU 2012-07-23 | © 2012 Apple Inc. All Rights Reserved. 99The number of workgroups and work items required to efficiently utilize a GPU is much higher than for a CPU. 
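To see the figures that drive this sizing for a particular device, you can query OpenCL directly. A sketch (it assumes the device ID was already obtained with clGetDeviceIDs):

#include <stdio.h>
#include <OpenCL/opencl.h>

// Print the device properties most relevant to choosing work sizes.
static void printWorkSizingInfo(cl_device_id device)
{
    cl_uint computeUnits = 0;
    size_t maxWorkGroupSize = 0;
    clGetDeviceInfo(device, CL_DEVICE_MAX_COMPUTE_UNITS,
                    sizeof(computeUnits), &computeUnits, NULL);
    clGetDeviceInfo(device, CL_DEVICE_MAX_WORK_GROUP_SIZE,
                    sizeof(maxWorkGroupSize), &maxWorkGroupSize, NULL);
    printf("compute units: %u, max work-group size: %zu\n",
           (unsigned)computeUnits, maxWorkGroupSize);
}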
Inside the GPU we have a few cores (typically 2 to 16). Each core schedules work items in small groups (64 work items in the test machine). All these items are executed at the same time and can issue up to five independent arithmetic ops. Instruction latency is much higher for the GPU. When all work items managed by the core are waiting for the result of a previously issued instruction still in the pipeline, the GPU core stalls, and we lose efficiency. One way to avoid this is to increase the number of work items; another solution is to have more independent dependency chains inside each work item. The GPU memory hierarchy can be seen as four levels with the following orders of magnitude for access bandwidth: ● Host memory accessed from the GPU through the PCI-Express bus, 10 GB/s ● OpenCL global memory, VRAM attached to the GPU, 100 GB/s ● OpenCL local memory, attached to each core, 1,000 GB/s ● OpenCL private memory, registers, 10,000 GB/s Memory management is explicit: the host code manages host-device transfers and each variable belongs to a unique address space (global, local, private, constant). The most important factor in OpenCL efficiency is the memory access pattern: at a given time, we may have hundreds of work items issuing a memory access instruction, each one with a different address. The hardware is optimized to processsome of these patterns very quickly. Other access patterns can lead to hardware conflicts. Hardware conflicts are resolved by serializing the accesses: they can’t occur in parallel,so the hardware schedules them one after the other. A bad access pattern can make code run up to 30x slower. A pattern where work item i accesses element x[i] of an array is fast. On the contrary, any pattern with a large stride (especially a power of 2), x[s*i], will be extremely slow when s becomes large enough. With such a large difference of bandwidth between the different layers, keeping reused data in the fastest levels is another key to efficiency. In particular, it is best to avoid host-device transfers: keep data resident on the GPU until it needs to be transferred to the host. If the output of OpenCL needs to be displayed, it is best to use CL/GL interoperability and have the output image mapped to an OpenGL texture. In Practice We will tune the same boxAvg code we used in “Tuning OpenCL Code For the CPU” (page 89), but this time for the GPU. We start from the same initial code for the horizontal pass as we did for the CPU: Improving Performance Tuning OpenCL Code For the GPU 2012-07-23 | © 2012 Apple Inc. All Rights Reserved. 100Listing 11-12 Kernel before optimization // Horizontal pass v1. Global work size: w x h kernel void boxAvgH1(int w,int h,global const float * in,global float * out) { int x = get_global_id(0); // pixel to process is (x,y) int y = get_global_id(1); float sumA = 0.0f; // sum of pixel values float sum1 = 0.0f; // number of pixels for (int dx=-RANGE;dx<=RANGE;dx++) { int xx = x + dx; // Accumulate if inside image if (xx>=0 && xx= 0 && xx < w)?in[xx+w*y]:0.0f; } // Block until all work items in the group finished updating AUX barrier(CLK_LOCAL_MEM_FENCE); // Compute our value float sumA = 0.0f; // sum of pixel values float sum1 = 0.0f; // number of pixels for (int dx=-RANGE;dx<=RANGE;dx++) { int xx = x + dx; Improving Performance Tuning OpenCL Code For the GPU 2012-07-23 | © 2012 Apple Inc. All Rights Reserved. 
102sumA += aux[iid+dx+RANGE]; // will add 0 if out or range sum1 += (xx >= 0 && xx < w)?1.0f:0.0f; } // Store output out[x+w*y] = sumA/sum1; } In this example we are using all the work items in the workgroup to copy a segment of the input row to the local memory buffer aux. Note that a barrier call is required to ensure all items in the group have actually finished updating aux before we use the buffer. This kernel runs slightly faster, at 1043 Mpix/s. It can be modified to process several consecutive rows inside each work item, or several consecutive columns. The corresponding benchmarks are: Table 11-1 Benchmarks of boxAvgH5 variants: pix/item Mpix/s 1x1 1044 1x2 1630 1x4 1577 2x1 1300 4x1 1356 8x1 1123 Performance here is significantly improved, but is still far from the copy kernel reference speed of 4500 Mpix/s. Let’s direct our attention to the vertical pass. If it proves to be much faster, we may be able to use it twice with additional transpositions, assuming we can transpose an image efficiently. The boxAvgV1 kernel presented in the CPU section runs at 884 Mpix/s. Let’s modify this kernel to compute several rows in each work item: Listing 11-14 Modify the kernel to compute several rows in each work item // Vertical pass v4. Global work size: W x any Improving Performance Tuning OpenCL Code For the GPU 2012-07-23 | © 2012 Apple Inc. All Rights Reserved. 103kernel void boxAvgV4(int w, int h, global const float * in, global float * out) { // Number of rows to compute int rowsPerItem = (h+get_global_size(1)-1) / get_global_size(1); int x = get_global_id(0); // column to process int y0 = rowsPerItem * get_global_id(1); // rows to process are y0..y1-1 int y1 = min(h,y0+rowsPerItem); float sumA = 0.0f; // sum of pixel values float sum1 = 0.0f; // number of pixels // Load values y0-RANGE-1..y0+RANGE-1 for (int y=max(0, y0-RANGE-1); y < min(h, y0+RANGE); y++) { sumA += in[x+w*y]; sum1 += 1.0f; } // Process our rows for (int y=y0; y= 0) { sumA -= in[x + w*yy]; sum1 -= 1.0f; } yy = y+RANGE; if (yy < h) { sumA += in[x + w*yy]; sum1 += 1.0f; } // Store value out[x+w*y] = sumA/sum1; } } This one runs at 2296 Mpix/s, and we read+write 2+1 float per pixel instead of 5+1. If we can provide a dedicated kernel for each value of RANGE, we can reduce this to 1+1 float per pixel, by keeping a “ring” of previous 2*RANGE+1 values in registers. Doing so, we won’t need to reload the value for yy = y-RANGE-1 to remove it from the sum. Here is the modified code: Listing 11-15 Provide a dedicated kernel for each value of RANGE // Register ring, RANGE=2 kernel void boxAvgV4_ring(int w,int h,global const float * in,global float * out) Improving Performance Tuning OpenCL Code For the GPU 2012-07-23 | © 2012 Apple Inc. All Rights Reserved. 104{ // Compute number of rows to compute int rowsPerItem = (h+get_global_size(1)-1)/get_global_size(1); int x = get_global_id(0); // column to process int y0 = rowsPerItem * get_global_id(1); // rows to process are y0..y1-1 int y1 = min(h, y0+rowsPerItem); float2 r0, r1, r2, r3, r4; // ring has 5 values // Load values y0-RANGE-1..y0+RANGE-1 in the ring int yy; r0 = r1 = r2 = r3 = r4 = (float2)(0.0f); yy = y0-2; if (yy>=0) r1 = (float2)(in[x + w*yy],1.0f); yy = y0-1; if (yy>=0) r2 = (float2)(in[x + w*yy],1.0f); yy = y0; r3 = (float2)(in[x + w*yy],1.0f); yy = y0+1; if (yy=0) { r1 = in[x + w*yy]; s1 = 1.0f; } yy = y0-1; if (yy>=0) { r2 = in[x + w*yy]; s2 = 1.0f; } Improving Performance Tuning OpenCL Code For the GPU 2012-07-23 | © 2012 Apple Inc. All Rights Reserved. 
106yy = y0; { r3 = in[x + w*yy]; s3 = 1.0f; } yy = y0+1; if (yy

<article author="John Doe">
    <para>This is a very short article.</para>
</article>
The parser would report the following series of events to its delegate: 1. Started parsing document 2. Found start tag for element article 3. Found attribute author of element article, value “John Doe” 4. Found start tag for element para 5. Found characters This is a very short article. 2010-03-24 | © 2004, 2010 Apple Inc. All Rights Reserved. 6 Parser Capabilities and Architecture6. Found end tag for element para 7. Found end tag for element article 8. Ended parsing document Both the tree-based and event-based parsing approaches have theirstrengths and disadvantages. It can require considerable amounts of memory to construct an internal tree representing an XML document, especially if that document is large. This problem is compounded if it becomes necessary to map the tree structure of the parsed document to a more strongly typed, application-specific tree structure. Event-driven parsing—because it deals with only one XML construct at a time and not all of them at once—consumes much less memory than tree-based parsing. It is ideal for situations where performance is a goal and modification of the parsed XML is not. One such application for event-driven parsing is searching a repository of XML documents (or even one XML document with multiple “records”) for specific elements and doing something with the element content. For example, you could use NSXMLParser to search the property-list preferences files on all machines in a Bonjour network to gather network-configuration information. Event-driven parsing is less suitable for tasks that require the XML to be subjected to extended user queries or to be modified and written back to a file. Event-driven parsers such as NSXMLParser also do not offer any help with validation (that is, it verifying whether XML conforms to the structuring rules as specified in a DTD or other schema). For these kinds of tasks, you need a DOM-style tree. However, you can construct your own internal tree structures using an event-driven parser such as NSXMLParser. In addition to reporting parsing events, an NSXMLParser object verifies that the XML or DTD is well-formed. For example, it checks whether a start tag for an element has a matching end tag or whether an attribute has a value assigned. If it encounters any such syntactical error, it stops parsing and informs the delegate. Although the parser “understands” only XML and DTD as markup languages, it can parse any XML-based language schema such as RELAX NG and XML Schema. Parser Capabilities and Architecture 2010-03-24 | © 2004, 2010 Apple Inc. All Rights Reserved. 7The essential steps for parsing an XML document using NSXMLParser are straightforward. It requires you complete the following general steps: 1. Locate the XML. Listing 1 shows code that lets the user select an XML file from a file-system browser (NSOpenPanel). 
Listing 1 Opening an XML file - (void)openXMLFile { NSArray *fileTypes = [NSArray arrayWithObject:@"xml"]; NSOpenPanel *oPanel = [NSOpenPanel openPanel]; NSString *startingDir = [[NSUserDefaults standardUserDefaults] objectForKey:@"StartingDirectory"]; if (!startingDir) startingDir = NSHomeDirectory(); [oPanel setAllowsMultipleSelection:NO]; [oPanel beginSheetForDirectory:startingDir file:nil types:fileTypes modalForWindow:[self window] modalDelegate:self didEndSelector:@selector(openPanelDidEnd:returnCode:contextInfo:) contextInfo:nil]; } - (void)openPanelDidEnd:(NSOpenPanel *)sheet returnCode:(int)returnCode contextInfo:(void *)contextInfo { NSString *pathToFile = nil; if (returnCode == NSOKButton) { pathToFile = [[[sheet filenames] objectAtIndex:0] copy]; } if (pathToFile) { NSString *startingDir = [pathToFile stringByDeletingLastPathComponent]; 2010-03-24 | © 2004, 2010 Apple Inc. All Rights Reserved. 8 XML Parsing Basics[[NSUserDefaults standardUserDefaults] setObject:startingDir forKey:@"StartingDirectory"]; [self parseXMLFile:pathToFile]; } } Although an XML file is the common case, the source of the XML might not be a file. You could receive the XML from another object as a property-list object (such as an NSDictionary) or as a stream of bytes over a network. In cases like these, you must convert the form of the XML to an NSData object before initializing the NSXMLParser instance (see following step) 2. Create and initialize an instance of NSXMLParser., ensuring that you set a delegate. Listing 2 illustrates how you might do this. Listing 2 Creating and initializing a NSXMLParser instance - (void)parseXMLFile:(NSString *)pathToFile { BOOL success; NSURL *xmlURL = [NSURL fileURLWithPath:pathToFile]; if (addressParser) // addressParser is an NSXMLParser instance variable [addressParser release]; addressParser = [[NSXMLParser alloc] initWithContentsOfURL:xmlURL]; [addressParser setDelegate:self]; [addressParser setShouldResolveExternalEntities:YES]; success = [addressParser parse]; // return value not used // if not successful, delegate is informed of error } In this method, the client object converts the path to the XML file to an NSURL object and then uses that object to initialize the NSXMLParser instance with initWithContentsOfURL:. It also sets the delegate to be itself and letsthe parser know it wantsto resolve external entities(such as external DTD declarations). Other NSXMLParser methodslet you set various namespace-related options. Finally, the clientsends parse to the NSXMLParser instance to have it begin parsing the XML. If the XML was in some form other than a file, you would convert it to an NSData object and then use the initWithData: initializer: addressParser = [[NSXMLParser alloc] initWithData:xmlData]; XML Parsing Basics 2010-03-24 | © 2004, 2010 Apple Inc. All Rights Reserved. 93. Implement the delegation methods that are of interest to you. When the NSXMLParser object parses the XML, it sends a message to its delegate for each XML construct it encounters (but only if the delegate implements the associated method). Implementations of these methods vary by type of construct: DTD declarations, namespace prefixes, elements, and so on. Elements are the most common type of XML construct processed;see “Handling XML Elements and Attributes” (page 11) for details. All parsing operations begin with the delegate receiving parserDidStartDocument: and end with the delegate receiving parserDidEndDocument: (assuming, of course,the delegate implementsthemethods). 
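A sketch of how a delegate might bracket a parse with these two methods (the items instance variable and the processParsedItems: method are hypothetical):

- (void)parserDidStartDocument:(NSXMLParser *)parser
{
    // Set up storage for whatever the parse will produce.
    items = [[NSMutableArray alloc] init];    // items: hypothetical NSMutableArray instance variable
}

- (void)parserDidEndDocument:(NSXMLParser *)parser
{
    // Hand off the result and release the resources created above.
    [self processParsedItems:items];          // hypothetical method
    [items release];
    items = nil;
}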
The former method offers an opportunity for allocating and setting up resources needed for the parsing operation; the latter method is a good place to release those resources and properly dispose of any result. 4. Handle any parsing errors. If the parser encounters an error, it stops parsing and invokes the delegation method parser:parseErrorOccurred:. Implement this method to interpret the error and inform the user. (All parser errors are nonrecoverable.) See “Handling Parsing Errors” (page 17) for further information. Memory management becomes a heightened concern when you are parsing XML. Processing the XML often requires you to create many objects; you should not allow these objects to accumulate in memory past their span of usefulness. One technique for dealing with these generated objects is for the delegate to create a local autorelease pools at the beginning of each implemented delegation method and release the autorelease pool just before returning. NSXMLParser managesthe memory for each object it creates and sendsto the delegate. XML Parsing Basics 2010-03-24 | © 2004, 2010 Apple Inc. All Rights Reserved. 10Generally, when you parse an XML document most of the processing involves elements and things related to elements, such as attributes and textual content. Elements hold most of the information in an XML document. When the NSXMLParser object traverses an element in an XML document, it sends at least three separate message to its delegate, in the following order: parser:didStartElement:namespaceURI:qualifiedName:attributes: parser:foundCharacters: parser:didEndElement:namespaceURI:qualifiedName: The parser might send the parser:foundCharacters: message multiple times for one element; however, if the characters consist of nothing but white-space characters (space, new line, tab, and similar characters) the parser sends parser:foundIgnorableWhitespace: instead. When you are parsing XML elements, an advanced technique you can adopt is to switch processing responsibilities among multiple delegates, each of which knows how to handle a certain type of element. For more information see “Using Multiple Delegates” (page 19). Design Considerations In an object-oriented environmentsuch as Cocoa, a common strategy for handling elementsisto map them—at the higher nesting levels, at least—to objects. Root elements and other top-level elements are frequently equivalent to collections represented in Cocoa by NSDictionary and NSArray objects. Other elements might readily map to one or more of an application’s custom model objects. However, not all elements are best expressed as objects. Some lower level and particularly “leaf” elements are more logically viewed as properties of their parent element (if that element maps to an object). And, of course, you would probably make the actual attributes of any element a property (that is, an instance variable) of the corresponding object. Notwithstanding these suggestions, there is no ready-made mapping formula, and indeed your application might not have to perform any element-to-object mapping to achieve its ends. These design decisions require some thought as well as some familiarity with the structure of the XML. 2010-03-24 | © 2004, 2010 Apple Inc. All Rights Reserved. 
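Returning to the memory-management technique mentioned earlier, a local autorelease pool inside a delegation method might look like this sketch (the element-handling work itself is omitted):

- (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    // ... create and use temporary objects while accumulating content ...
    [pool release];
}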
Handling an Element: An Example

The example code referred to in the following discussion processes an XML file containing personal-address information and converts that information into Address Book objects (ABPerson and ABMultiValue) that can be added to a specified user's address database. A portion of the XML looks like the following:

Listing 1 Some of the sample XML

<addresses owner="...">
    <person>
        <lastName>Doe</lastName>
        <firstName>John</firstName>
        <phone>(201) 345-6789</phone>
        <email>jdoe@foo.com</email>
        <address>
            <street>100 Main Street</street>
            <city>Somewhere</city>
            <state>New Jersey</state>
            <zip>07670</zip>
        </address>
    </person>
</addresses>
Let’s look at how the first three of these elements might be handled. When the parser first encounters these elements, it invokes the delegate’s parser:didStartElement:namespaceURI:qualifiedName:attributes: method. For the first two elements, the delegate creates an equivalent object. For the third element (lastName), the delegate sets an appropriate property of the second object. Listing 2 shows the delegate’s implementation for the start tags of the first three elements. Handling XML Elements and Attributes Handling an Element: An Example 2010-03-24 | © 2004, 2010 Apple Inc. All Rights Reserved. 12Listing 2 Implementing parser:didStartElement:namespaceURI:qualifiedName:attribute: - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName attributes:(NSDictionary *)attributeDict { if ( [elementName isEqualToString:@"addresses"]) { // addresses is an NSMutableArray instance variable if (!addresses) addresses = [[NSMutableArray alloc] init]; return; } if ( [elementName isEqualToString:@"person"] ) { // currentPerson is an ABPerson instance variable currentPerson = [[ABPerson alloc] init]; return; } if ( [elementName isEqualToString:@"lastName"] ) { [self setCurrentProperty:kABLastNameProperty]; return; } // .... continued for remaining elements .... } The delegate identifies the element passed in (elementName), then processes it accordingly: ● If it’s an addresses element (the root element) it creates a mutable array to hold the ABPerson objects. This mutable array is held as an instance variable. ● If it’s a person element, it creates an ABPerson object. This object is held as an instance variable named currentPerson. ● If it’s a lastName element, it sets an instance variable holding the current Address Book property; this value is a enum constant declared in the Address Book framework. The important action undertaken here is having a way (instance variables in this case) to track the current element throughout the parser’s traversal of it. One reason for this importance is the semantics of parser:foundCharacters:, most likely the next delegation method invoked. This method can be invoked Handling XML Elements and Attributes Handling an Element: An Example 2010-03-24 | © 2004, 2010 Apple Inc. All Rights Reserved. 13multiple times for the same element. In this method the delegate should append the characters passed in to the characters accumulated so far for the element. The NSMutableString method appendString: is useful for this purpose, as shown in Listing 3. Listing 3 Implementing parser:foundCharacters: - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string { if (!currentStringValue) { // currentStringValue is an NSMutableString instance variable currentStringValue = [[NSMutableString alloc] initWithCapacity:50]; } [currentStringValue appendString:string]; } Again the code uses an instance variable (currentStringValue) as a way to track and gather the content for the current element. If the parser encounters some white-space characters in the element content, it sends the message parser:foundIgnorableWhitespace: to give the delegate the opportunity to retain any white-space characters (such as tabs or new-lines). Finally, when the parser encounters the end tag of an element, it invokes the delegation method parser:didEndElement:namespaceURI:qualifiedName:. Listing 4 presents the approach taken by the delegate in the example code. 
Listing 4 Implementing parser:didEndElement:namespaceURI:qualifiedName: - (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName { // ignore root and empty elements if (( [elementName isEqualToString:@"addresses"]) || ( [elementName isEqualToString:@"address"] )) return; if ( [elementName isEqualToString:@"person"] ) { // addresses and currentPerson are instance variables [addresses addObject:currentPerson]; [currentPerson release]; return; } NSString *prop = [self currentProperty]; Handling XML Elements and Attributes Handling an Element: An Example 2010-03-24 | © 2004, 2010 Apple Inc. All Rights Reserved. 14// ... here ABMultiValue objects are dealt with ... if (( [prop isEqualToString:kABLastNameProperty] ) || ( [prop isEqualToString:kABFirstNameProperty] )) { [currentPerson setValue:(id)currentStringValue forProperty:prop]; } // currentStringValue is an instance variable [currentStringValue release]; currentStringValue = nil; } If the delegate determines that the end tag is for the person element, it adds the ABPerson object to the addresses array and releases the ABPerson object. If the end tag is for the lastName element (for example), the delegate uses the ABRecord method setValue:forProperty: to set the appropriate property in the ABPerson object (ABRecord isthe superclass of ABPerson). Finally, the instance variable holding the accumulated content for the element (currentStringValue) is released. Handling an Attribute The addresses element shown in the example XML in Listing 1 (page 12) includes an attribute: In this hypothetical case, the attribute allows the application parsing the XML to store the created Address Book information in a specific user directory on a multi-user system. The NSXMLParser object presents attributes of an element to the delegate in a dictionary in the final parameter of parser:didStartElement:namespaceURI:qualifiedName:attributes:. Listing 5 shows how the delegate in the example handles the owner attribute. Listing 5 Handling an attribute of an element - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName attributes:(NSDictionary *)attributeDict { Handling XML Elements and Attributes Handling an Attribute 2010-03-24 | © 2004, 2010 Apple Inc. All Rights Reserved. 15if ( [elementName isEqualToString:@"addresses"]) { // addresses is an NSMutableArray instance variable if (!addresses) addresses = [[NSMutableArray alloc] init]; NSString *thisOwner = [attributeDict objectForKey:@"owner"]; if (thisOwner) [self setOwner:thisOwner forAddresses:addresses]; return; // ... continued ... }} The delegate extracts the user name of the owner from the attributeDict dictionary using the attribute name (owner) as a key. It then invokes a private method that associates the owner with the imported Address Book data. Handling XML Elements and Attributes Handling an Attribute 2010-03-24 | © 2004, 2010 Apple Inc. All Rights Reserved. 16When the parser encounters a syntactical error or any other problem in an XML document that prevents it from being well-formed, it stops parsing and sends a message to its delegate. The delegate, if it implements the parser:parseErrorOccurred: method, receives this message. In its implementation it should display a message informing users what the problem is. 
The parsing error is fatal (that is, unrecoverable) so informing the user is all that you can realistically accomplish. With this information, the user might be able to fix the XML so the document can be successfully parsed. Listing 1 illustrates how you might implement parser:parseErrorOccurred:. Listing 1 Handling parsing errors - (void)parser:(NSXMLParser *)parser parseErrorOccurred:(NSError *)parseError { NSWindow *modWin = [self windowForSheet]; if (!modWin) modWin = [NSApp mainWindow]; NSAlert *parserAlert = [[NSAlert alloc] init]; [parserAlert setMessageText:@"Parsing Error!"]; [parserAlert setInformativeText:[NSString stringWithFormat:@"Error %i, Description: %@, Line: %i, Column: %i", [parseError code], [[parser parserError] localizedDescription], [parser lineNumber], [parser columnNumber]]]; [parserAlert addButtonWithTitle:@"OK"]; [parserAlert beginSheetModalForWindow:modWin modalDelegate:self didEndSelector:@selector(alertDidEnd:returnCode:contextInfo:) contextInfo:nil]; [parserAlert release]; } - (void)alertDidEnd:(NSAlert *)alert returnCode:(int)returnCode contextInfo:(void *)contextInfo { } 2010-03-24 | © 2004, 2010 Apple Inc. All Rights Reserved. 17 Handling Parsing ErrorsThe key line in this example is the one that constructs the NSAlert object’s informative text. This text includes the error code (an NSXMLParserErrorenum constant), a localized description of the error, and a line number and column (nesting level) number isolating the location of the error in the XML document. In the example, the delegate obtains this information from two different sources: from the parser object itself (provided in the first parameter of the method) or from the NSError object provided in the second parameter. From the parser object it can also get an NSError object, and from that it can get a localized description. However, the default localized description of NSError is rudimentary. You might want to provide your own localized description instead of relying on the one obtained from the NSError object. Sometimes parsing errors may require an application-specific interpretation. To implement a function or method for this purpose, you can use the NSXMLParserError constant defining the error to determine which custom key to use in the NSLocalizedString macro. (Of course, you must also create a strings file and do whatever else is necessary to internationalize your application.) Handling Parsing Errors 2010-03-24 | © 2004, 2010 Apple Inc. All Rights Reserved. 18For some XML documents, particularly large and complex documents, having a single delegate for the NSXMLParser object might not be the best approach. The code for handling all of the different parsing events can easily become overly intricate and hard to manage. One technique for making things more manageable is to share the work of handling parsing events among multiple delegates. Take as an example an application that constructs a DOM-style tree from elements as it encounters them. Starting from the root element, one element creates a child element and passes off control to it by setting it to be the delegate. That child element creates its children (and so on), each time resetting the delegate appropriately. If an element has no children, or if it’s a mixed element, it accumulates the textual content for itself. Finally, when the parser encounters an element’s end tag, the element sets the delegate to be its parent element. Listing 1 shows the pertinent code that accomplishes this processing. 
Listing 1 Resetting the delegate for the next element - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qualifiedName attributes:(NSDictionary *)attributeDict { // Element is a custom class for object representing element nodes // Creation of element sets child as delegate (see below) [self addChild:[Element elementWithName:elementName attributes:attributeDict parent:self children:nil parser:parser]]; } - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string { [self appendString:string]; } - (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName { Element *parent = [self parent]; 2010-03-24 | © 2004, 2010 Apple Inc. All Rights Reserved. 19 Using Multiple Delegates[parser setDelegate:parent]; // RESET DELEGATE TO PARENT } + (id)elementWithName:(NSString *)elementName attributes:(NSDictionary *)attributes parent:(Element *)parent children:(NSArray *)children parser:(NSXMLParser *)parser { return [[[[self class] alloc] initWithName:elementName attributes:attributes parent:parent children:children parser:parser] autorelease]; } - (id)initWithName:(NSString *)elementName attributes:(NSDictionary *)attributes parent:(id)parent children:(NSArray *)children parser:(NSXMLParser *)parser { self = [super init]; if (self) { [self setName:elementName]; if (attributes) { [self addAttributes:attributes]; } [self setParent:parent]; if (children) { [self addChildren:children]; } [parser setDelegate:self]; // CHILD SET AS DELEGATE } return self; } Another technique for managing multiple delegates is maintaining a number of delegate objects, each with its specialized role, in a collection such as an NSDictionary object. These objects would know who their child and parent elements are in any given context and so would be able to set the delegate for the next element (using the appropriate dictionary key) after their work with the current element has finished. Using Multiple Delegates 2010-03-24 | © 2004, 2010 Apple Inc. All Rights Reserved. 20Generally, if you wish to add or modify the content of an XML document, you must construct a static tree structure that completely represents the elements and other constructs in the document. Tree representations are also essential if you intend to validate an XML document against the DTD (or other language schema) that prescribes the logical structure of the document. When most developers want to construct DOM-style tree representations of XML documents, they use a tree-based parser, not a streaming parser such as NSXMLParser. (Tree-based parsing engines, however, are typically built on top of streaming parsers.) Nonetheless, that does not mean that you cannot create tree structures using an NSXMLParser instance. Although this article does not go into great detail about techniques for constructing XML tree structures using NSXMLParser, it outlines a general approach that you could take. Note: DOM (for Document Object Model) is a model proposed by the World Wide Web Consortium for describing XML and HTML documents using a standard set of objects. It also defines an interface for accessing and manipulating those objects, which represent (among other things) the elements of a document and the attributes associated with each element. The procedure discussed below does not make specific use of DOM, although there are similarities. 
You can represent any XML document as a hierarchical tree whose "nodes" are elements exhibiting relationships of parent and child with other elements. Each element can have one or more children and, with the exception of the root element, has exactly one parent element. The tree is anchored by a root element, which is the only element in the tree without a parent. The "leaf" nodes of the tree are typically those elements containing nothing but text, although they can also be mixed elements or empty elements. For example, consider the following short XML document:

<addresses>
    <person>
        <lastName>Doe</lastName>
        <firstName>John</firstName>
        <phone>(201) 345-6789</phone>
        <email>jdoe@foo.com</email>
        <address>
            <street>100 Main Street</street>
            <city>Somewhere</city>
            <state>New Jersey</state>
            <zip>07670</zip>
        </address>
    </person>
</addresses>
The following tree of element nodes represents this document: Figure 1 Tree representation of simple XML document addresses person lastName firstName phone email address street city state zip There are several possible ways to construct a tree representation of an XML document using NSXMLParser. This article, however, looks at a recursive, object-oriented approach that dynamically transfers delegation responsibilities among the objects representing the elements of a document. (This strategic shifting of the NSXMLParser delegate is discussed further in “Using Multiple Delegates” (page 19).) The programmatic result is doubly-linked lists of objects and arrays of objects; the abstract result is a tree representation of the document. The procedure for constructing a tree using this approach entails the following steps: 1. Create a class whose instances represent the elements of an XML document. The class should define the name of the element and its parent (one-to-one) and children (one-to-many) relationships; it should also encapsulate the attributes associated with the element. As a shorthand notation for this procedure, we’ll call this class MyElement. Constructing XML Tree Structures 2010-03-24 | © 2004, 2010 Apple Inc. All Rights Reserved. 222. From a top-level object in the application, load an XML document, create an NSXMLParser instance for it, assign the top-level object as delegate, and begin parsing the document (see “XML Parsing Basics” (page 8)). 3. The parser encounters the document’s root element first and sends parser:didStartElement:namespaceURI:qualifiedName:attributes: to its delegate. The delegate creates a MyElement object to represent thisroot element and setsits parent to nil. The method that creates and initializes the object also sets it to be the new delegate of the NSXMLParser instance. 4. The parser encounters the next element of the document—the first child of the root element—and again sendsthe delegate parser:didStartElement:namespaceURI:qualifiedName:attributes:. The delegate is now the MyElement object recently created to represent the root element. It creates another MyElement object to represent the new element (in the process setting the new object to be the delegate and setting itself to be the parent) and adds the new object to its list of children. 5. The new delegate receives the next parser:didStartElement:namespaceURI:qualifiedName:attributes: message, identifying its first child element, and it creates it and adds it to its list of children. 6. Thisrecursive descent through the first branch of the tree ends when the parser encounters“leaf” elements containing text, mixed content, or empty elements. If there is mixed content the descent is not truly over since parser:didStartElement:namespaceURI:qualifiedName:attributes: is sent to the delegate even after it receives parser:foundCharacters: for the current element. Processing depends on the kind of element: ● If it’s an empty element, processing skips ahead to the next step (end-element tag) ● If there is only text associated with the current element node, the delegate responds to the parser:foundCharacters: message by accumulating text (in sequential parser:foundCharacters: invocations). ● If there is mixed content, the delegate will process the text even after it receives messages notifying it of the start-element and end-element tags for the embedded elements. 
One way to handle this is to wrap the text in special text-element objects and insert these (in the proper order) in the element’s child list. 7. Finally,the parsersendsthe parser:didEndElement:namespaceURI:qualifiedName: to the delegate, notifying it that the element is now complete. The delegate sets the new delegate to be its parent and returns. 8. If the parent has more children elements, the parser sends it the next parser:didStartElement:namespaceURI:qualifiedName:attributes: message; the parent MyElement object creates a MyElement instance to represent its next child (in the process setting it to be the new delegate and setting itself to be the parent of the new MyElement) and adds the newly created Constructing XML Tree Structures 2010-03-24 | © 2004, 2010 Apple Inc. All Rights Reserved. 23object to its list of children. However, if the parent has no more children to add to its list (that is, it receives the parser:didEndElement:namespaceURI:qualifiedName: message instead) it sets the new delegate to be its parent and returns. 9. The procedure continues in this fashion until the entire XML document is processed and all branches of the tree are constructed. The objects that are the nodes of the tree (representing mostly elements) should be able to print themselves out as XML code. Your application should also implement an algorithm that asksthe objectsto print themselves in the proper document sequence. Constructing XML Tree Structures 2010-03-24 | © 2004, 2010 Apple Inc. All Rights Reserved. 24Validation is a procedure that ensures an XML document conforms to the rules governing its logical structure as specified in a language schema such as DTD (Document Type Definition). An XML document might be well-formed—that is, it obeys the syntactical rules of XML—and at the same time be invalid. For example, an element might include a child element when it issupposed to have only textual content, or a required attribute of an element might be missing. To perform validation it helpsto construct a tree of an XML document’sschema that is parallel to a tree structure representing the document’s actual content (see “Constructing XML Tree Structures” (page 21)). The schema tree presents a simple abstract view of how the document should be structured. Instead of nodes of objects representing the actual elements and text of the document, the schema tree contains nodes that express the rules by which the parts of the document can be combined. Validation tests the actual elements, attributes, and other parts of the document against the rules of the schema to see if the document conforms. If your application finds any violation of conformance, it can notify the user and perhaps require the user to fix the error. You can validate an XML document when it is first read and processed and later when users attempt to make any changes to it. Because the programmatic interface of NSXMLParser is designed to report only XML constructs and DTD declarations, this article focuses on that language schema. However, if you use an XML-based language schema, such as RELAX NG, then NSXMLParser can process the schema just it would as any XML file, reporting what it finds to its delegate. You can use the data you thereby acquire for validation. The sections on constructing rules focus primarily on element and attribute declarations because these are by far the most common and most important type of declaration. 
“Handling Other Declarations” (page 29) briefly discusses what to do with other kinds of declarations, such as those for entities and notations.

Using NSXMLParser to Handle DTD Declarations

The NSXMLParser class reports to its delegate DTD declarations it encounters in a document (assuming the delegate implements the necessary methods). If the language schema you use is DTD, NSXMLParser helps you acquire the data you need either for validation or for other purposes, such as enforcing correctness when dynamically constructing objects (for example, a menu template).

The DTD Delegation Methods

The NSXMLParser class defines a half dozen delegation methods that the parser invokes when it encounters a DTD declaration in an internal or external source. These methods are of the form parser:foundTypeDeclarationWithName:..., where the third parameter and any subsequent parameters depend on the type of declaration. The delegation methods related to DTD declarations are:

● parser:foundElementDeclarationWithName:model:
● parser:foundAttributeDeclarationWithName:forElement:type:defaultValue:
● parser:foundInternalEntityDeclarationWithName:value:
● parser:foundExternalEntityDeclarationWithName:publicID:systemID:
● parser:foundNotationDeclarationWithName:publicID:systemID:
● parser:foundUnparsedEntityDeclarationWithName:publicID:systemID:notationName:

Resolving External DTD Entities

An XML document, in the DOCTYPE declaration that occurs near its beginning, often identifies an external DTD file whose declarations prescribe its logical structure. For example, the following DOCTYPE declaration says that the DTD related to the root element “addresses” can be located by the system identifier “addresses.dtd”:

<!DOCTYPE addresses SYSTEM "addresses.dtd">

Often the system identifier assumes a standard file-system location for DTDs—for example, /System/Library/DTDs. At the start of processing, the NSXMLParser delegate is given an opportunity to resolve this external entity and give the parser a list of DTD declarations to parse:

1. When you prepare the NSXMLParser instance, send it the setShouldResolveExternalEntities: message with an argument of YES.

2. Implement the delegation method parser:resolveExternalEntityName:systemID: to return the declarations in the external DTD file as an NSData object.

If the DTD declarations are internal to an XML document, then the delegate receives the DTD-declaration messages automatically (assuming, of course, that it implements the related methods).

Constructing Rules for Elements

Just as elements are typically the most common kind of construct in an XML document, element declarations are the most common kind of declaration in a DTD. They express rules for the composition of elements from child elements, text, and other constituents. An element declaration has three parts: the !ELEMENT keyword, the element name, and a content model. The content model is everything after the name up to the terminating angle bracket. The content model can specify no content (EMPTY), any content (ANY, which is rare), textual content (#PCDATA), and child elements. It may identify child elements by name or by an entity reference (such as %plistObject;).
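Before looking at content models in more detail, the following is a minimal sketch (not from this guide) of how a delegate might record the declarations reported by the methods listed above. The class name MyDTDRecorder and its storage scheme are assumptions made purely for illustration.

#import <Foundation/Foundation.h>

// Hypothetical recorder (sketch only): stores the element and attribute-list
// declarations that NSXMLParser reports so that validation rules can be derived later.
@interface MyDTDRecorder : NSObject {
    NSMutableDictionary *elementModels;   // element name -> content model string
    NSMutableDictionary *attributeInfo;   // element name -> array of attribute dictionaries
}
@end

@implementation MyDTDRecorder
- (id)init {
    self = [super init];
    if (self) {
        elementModels = [[NSMutableDictionary alloc] init];
        attributeInfo = [[NSMutableDictionary alloc] init];
    }
    return self;
}

- (void)parser:(NSXMLParser *)parser foundElementDeclarationWithName:(NSString *)elementName
         model:(NSString *)model {
    // Keep the raw content model; a later pass can turn it into rule objects.
    [elementModels setObject:model forKey:elementName];
}

- (void)parser:(NSXMLParser *)parser foundAttributeDeclarationWithName:(NSString *)attributeName
    forElement:(NSString *)elementName type:(NSString *)type defaultValue:(NSString *)defaultValue {
    NSMutableArray *declarations = [attributeInfo objectForKey:elementName];
    if (declarations == nil) {
        declarations = [NSMutableArray array];
        [attributeInfo setObject:declarations forKey:elementName];
    }
    [declarations addObject:[NSDictionary dictionaryWithObjectsAndKeys:
        attributeName, @"name",
        (type ? type : @""), @"type",
        (defaultValue ? defaultValue : @""), @"default", nil]];
}

- (void)dealloc {
    [elementModels release];
    [attributeInfo release];
    [super dealloc];
}
@end

Such a recorder could serve as the parser delegate during a first pass over the DTD, before the document itself is checked against the derived rules.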
The model can also specify mixed content—that is, the element can contain text and child elements in any order. Through occurrence modifiers (*, +, ?) and other syntactical conventions, the content model can also specify the order of child elements, whether an element is required or optional, how many times an element may occur, and acceptable choices between elements. Occurrence modifiers can be applied to groups of elements (in parentheses) as well as to individual elements.

The job required for validation is to examine the content model of an element declaration and derive rules for the composition of that element. As one approach, you might design classes for each type of rule as well as for the scope of a rule (individual element or group of elements). You could then associate instances of a rule class with an element through the name of the element. During validation the instances are queried with regard to a current or potential member of an element. Table 1 lists the most important rules derivable from an element declaration’s content model.

Table 1 Possible rules for element validation

● Textual content only. Sample content model: (#PCDATA).
● Mixed content. Sample content model: (#PCDATA | bold | italic). Vertical bars in this case have a meaning different from choice; when #PCDATA is present, they mean that text and child elements can be intermixed.
● No content. Sample content model: EMPTY. For flag-type values.
● Required sequence. Sample content model: (name, address, phone). Commas indicate a prescribed sequence.
● Choice. Sample content model: (read | write | readwrite). Without #PCDATA being a member (see Mixed content), the vertical bars mean that one of the listed elements must be used.
● Occurs exactly once. Sample content model: (name, address, phone). No modifier punctuation mark. Can apply to an individual element or a group.
● Occurs zero or more times. Sample content model: (%plistObject;)*. Occurrence modifier is the asterisk (“*”). Can apply to an individual element or a group.
● Occurs one or more times. Sample content model: (property+). Occurrence modifier is the plus sign (“+”). Can apply to an individual element or a group.
● Occurs zero or one time. Sample content model: (%implementation;?). Occurrence modifier is the question mark (“?”). Can apply to an individual element or a group.

Constructing Rules for Attributes

Elements frequently have attributes associated with them, and consequently attribute-list declarations are frequently encountered in DTDs. Attribute-list declarations specify the rules for attributes using a syntax that is different from element declarations. They specify, in order, the associated element, the name of the attribute, the type of the attribute, and a default value. For example, the declaration

<!ATTLIST modifierMap defaultIndex NMTOKEN #REQUIRED>

states that the defaultIndex attribute, which is associated with the modifierMap element, is of type NMTOKEN (meaning that it must be a valid XML name); the #REQUIRED keyword given as the default value means that a value for the attribute must be supplied.

When an NSXMLParser instance encounters an attribute-list declaration, it sends parser:foundAttributeDeclarationWithName:forElement:type:defaultValue: to its delegate. Passed in as parameters are the attribute name, the associated element, the attribute type, and the default value. The rules for attributes derive from combinations of the last two parameters (type and default value).
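For instance, the #REQUIRED default can be captured in a small rule object. The following is a minimal sketch (hypothetical, not part of this guide); a validation pass could query it with the attribute dictionary that parser:didStartElement:namespaceURI:qualifiedName:attributes: delivers for each element.

#import <Foundation/Foundation.h>

// Hypothetical rule object for one declared attribute (sketch only).
@interface MyAttributeRule : NSObject {
    NSString *attributeName;
    BOOL required;   // YES if the declaration's default was #REQUIRED
}
- (id)initWithName:(NSString *)name required:(BOOL)isRequired;
- (BOOL)isSatisfiedByAttributes:(NSDictionary *)attributeDict;
@end

@implementation MyAttributeRule
- (id)initWithName:(NSString *)name required:(BOOL)isRequired {
    self = [super init];
    if (self) {
        attributeName = [name copy];
        required = isRequired;
    }
    return self;
}

// attributeDict is the dictionary the parser passes to
// parser:didStartElement:namespaceURI:qualifiedName:attributes:.
- (BOOL)isSatisfiedByAttributes:(NSDictionary *)attributeDict {
    if (required && [attributeDict objectForKey:attributeName] == nil) {
        return NO;   // a required attribute is missing
    }
    return YES;
}

- (void)dealloc {
    [attributeName release];
    [super dealloc];
}
@end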
Table 2 lists some of the possible rules that you can construct from attribute-list declarations.

Table 2 Possible rules for attribute validation

● Unique value. Type: ID. The attribute value must be unique in the XML document.
● Required value. Default: #REQUIRED. The value of the attribute must be specified in the document.
● Refers to unique attribute value. Type: IDREF or IDREFS. The value must refer to a valid ID-type value elsewhere in the document; IDREFS specifies a list of ID references (in parentheses).
● Valid XML name. Type: NMTOKEN or NMTOKENS. The value must be a valid XML name (including entity references); NMTOKENS specifies a list of XML names (in parentheses).
● Value is fixed. Default: #FIXED "value". The value must be “value”.
● Valid XML name in list. Type: an enumeration such as (name | address | phone). Attribute enumeration: the value must be one of the XML names in parentheses.
● Valid defined type in list. Type: NOTATION (tiff | gif | jpg). Attribute enumeration: the value must be one of the defined types in parentheses.

Handling Other Declarations

Other DTD declarations, such as those for entities and notations, are less common than element and attribute-list declarations. You can easily derive rule constructions for these other declarations after reviewing some DTD documentation. However, there are a couple of things to keep in mind:

● You need to record entity declarations in case they are used as part of the content model for an element declaration.
● Because notations can be made an attribute type, you should also keep track of them.

XML Glossary

This glossary defines some of the terms specific to XML, DTD, and related specifications and technologies. It focuses primarily on terms that are part of the names of methods and constants declared by the NSXMLParser, NSXMLNode, NSXMLDocument, NSXMLElement, NSXMLDTD, and NSXMLDTDNode classes.

atomic value A value with a simple type as defined by the XML Schema standard. The types include string, decimal, integer, float, double, Boolean, date, URI, array, and binary data. An XQuery query returns a sequence of items that can contain one or more nodes or atomic values.

attribute A property of an element expressed as a name-value pair. Attributes are used to encode data or provide metadata that is associated with an element. In the following example, “version” is the name of an attribute of the plist element and its value is "1.0":

<plist version="1.0">

attribute list declaration Identifies in a DTD an element that has attributes, the names of those attributes, what values the attributes may have, and default values. Example:

<!ATTLIST phone location (home | office | mobile) "home">

In this example, phone is the element name, location is the attribute name, (home | office | mobile) is the set of allowable values, and home is the default value.

canonical A form of an XML document in which it can be compared against another document for equivalence. If two documents with differing physical representations have the same canonical form, they are considered logically equivalent within the given application context. The canonical form of an XML document is defined by the World Wide Web Consortium at http://www.w3.org/TR/xml-c14n.

CDATA block A section of text that the parser should pass uninterpreted to the client application. It appears as element content. CDATA blocks are often used for code or data that contains “prohibited” characters, that is, characters of special syntactical significance to the parser (for example, “<” and “&”).
You can also use an entity reference to express any of these prohibited characters (for example, &lt; is a built-in entity reference for specifying the “escaped” < character).

content model The part of an element declaration that defines what the element may contain. A content model consists of the names of child elements, #PCDATA (indicating text), entity references, or EMPTY (indicating an empty element). Child elements and #PCDATA are enclosed within parentheses. Commas between child elements specify that the elements must occur in the given sequence. The vertical-bar character (“|”) instead of a comma indicates a logical OR relationship and can be used with #PCDATA. Occurrence modifiers can be applied to individual elements or groups of elements:

● “+” indicates the element or group can be repeated more than once but must occur at least once.
● “?” indicates the element or group is optional and may occur only once.
● “*” indicates the element or group is optional and can occur more than once.
● No modifier indicates that the element or group must occur exactly once.

Examples of content models:

(#PCDATA)
(%plistObject;)*
(lastName, middleInitial?, firstName, phone*)*

document order The order of XML markup constructs as they appear in a document. When you send the NSXMLNode messages nextNode (or previousNode) to each successive node object encountered in an NSXML tree, you are traversing the tree forward (or backward) in document order.

DOM (Document Object Model) An API for accessing and manipulating XML documents as tree structures. DOM derives from a World Wide Web Consortium recommendation for a general object model for storing hierarchically structured documents in memory.

DTD (Document Type Definition) A way to define the legal elements and other building blocks of an XML document.

element Markup tags that identify the nature of the content they surround. Elements have names and may contain textual data, child elements, processing instructions, comments, and CDATA blocks. An element has a single parent element, except for a document’s root element, which has no parent. An element may also have attributes and namespace prefixes associated with it. Elements can also be empty (that is, without content), and the developer can use them as flags.

element declaration Specifies in a DTD the name of an element and what is permitted as content of the element. The declaration may specify child elements, text, and entity references as content. It prescribes the order of child elements and (for single elements or for the entire group) whether it is required and whether it can appear multiple times. See also content model.

entity declaration Associates in a DTD a name with some piece of XML content that is identified by an entity reference. That content can be a literal value (such as one identified by a character reference), a variable value specified elsewhere in the DTD, or some textual or binary value referenced in an external file. The last type of entity is called an external entity.

entity and character reference A reference in text to an externally or internally declared entity declaration.
It must begin with an ampersand and end with a semicolon. You can refer to entities that you declare elsewhere. There are five predefined entities: &lt; (<), &gt; (>), &amp; (&), &apos; (single-quote character), and &quot; (double-quote character). Character references start with “&#” and are followed by numerical code points. Examples of references are &apos;, &gt;, and &#231; (ç); the first two are built-in entity references and the last is a character reference. See also unparsed entity.

model See content model.

namespace A URI (Universal Resource Identifier) that qualifies an element or attribute name so as to avoid name conflicts when a document contains XML from different sources. You declare a namespace in the start tag of an element by appending a prefix to the predefined xmlns attribute (separated by a colon) and then associating this attribute with the URI value (for example, a prefix “h” bound to an HTML-related URI). Thereafter, you need only use the namespace prefix (“h” in this example) with an element name (separated by a colon) to identify the element unambiguously. All child elements of the element with the namespace declaration are associated with the same namespace through the prefix. The prefix-element name combination (for example, h:table) is called a qualified name. A namespace declaration with no prefix after xmlns defines a default namespace, unless the value is an empty string, which means “no namespace.” The URI in a namespace declaration doesn’t have to point to anything; it is just a convenient way to get a unique name.

namespace prefix A prefix defined in a namespace declaration to identify the namespace a particular element is associated with. The namespace’s qualified name (xmlns:localname) appears only during output. All other operations, such as those that get or set a namespace node’s value, use the local name only. See also namespace.

normalize To coalesce all adjacent child text nodes into a single text node while removing empty text nodes. Normalization is highly recommended before performing XPath and XQuery queries.

notation Identifies by name the format either of an unparsed entity or of an element bearing a specific notation attribute; it can also identify the target of a processing instruction. A notation declaration gives a name to the notation and an external identifier that enables a parser or its client to locate a helper application that can process the data specified by the notation. Notations occur in attribute values, attribute-list declarations, and entity declarations.

processing instruction A construct that provides information to the application processing the XML document. The instructions could tell the application how, for example, to interpret the XML or display the results. Processing instructions can occur within elements or at the top level of a document. The first word of the processing instruction is called the target (its name), and everything else is its object value.

qualified name An element’s full name, consisting of prefix, colon, and local name. See also namespace.

sequence A collection of items, each of which can be a node or an atomic value. XQuery queries return a sequence (an NSArray in Cocoa), which may contain only a single item.

validation A procedure that checks an XML document against the logical structure described by declarations in the associated DTD (or other schema) to see if the XML conforms to it.
Some of the constraints involved in validation are proper element sequence and nesting, specification of required attributes, and correct attribute type. For example, if an element is supposed to have one or more child elements but doesn’t, the document containing the element is invalid. Before an XML document can be validated, it must first be well-formed.

unparsed entity An external resource referred to by entity reference whose contents may be binary data or text (including non-XML text). Each unparsed entity has a notation associated with it.

well-formed Refers to an XML document that obeys the syntax of XML. A parser cannot parse a document if its XML is not well-formed. Some of the checks for whether a document is well-formed are:

● Element start tags must have end tags (except for empty elements).
● Attribute values must be quoted.
● Parameter entities must be declared before they are used.
● Markup constructs appear only where permitted.

XHTML A more strictly prescribed version of HTML that makes it well-formed XML. XHTML is an official World Wide Web Consortium recommendation.

XPath An XML query language for locating nodes within an XML tree structure. It allows location paths, predicates, and general expressions in queries. The Cocoa implementation uses XPath 2.0, which is a World Wide Web Consortium recommendation. The NSXMLNode class enables XPath queries through its nodesForXPath:error: method. (Note that the NSXML classes do not support deprecated XPath 1.0 features such as the namespace axis.)

XQuery A flexible and powerful XML query language that lets you compose logically complex queries using operators, quantifiers, functions, and FLWOR expressions (referring to the keywords for, let, where, order by, and return). The NSXMLNode class enables XQuery 1.0 queries through its objectsForXQuery:error: method.

XSLT (Extensible Stylesheet Language Transformations) An XML application for transforming an XML document into another XML document or into an HTML, RTF, or plain-text document. The stylesheet used in a transformation has template rules, each consisting of a pattern and a template. The NSXMLDocument class permits access to XSLT through its objectByApplyingXSLT:arguments:error: and objectByApplyingXSLTAtURL:arguments:error: methods.

Document Revision History

This table describes the changes to Event-Driven XML Programming Guide.

● 2010-03-24: Updated example code to the new initializer pattern.
● 2008-09-09: Added a note about the introduction of namespace support in v10.4.
● 2006-12-05: Added a memory management guideline and corrected code examples. Updated the glossary of XML terms. Changed the title from “Event-Driven XML Parsing.” Changed “Rendezvous” to “Bonjour.”
● 2005-04-29: Updated the XML glossary to define additional terms primarily related to the NSXML set of classes. This glossary is shared with Tree-Based XML Programming Guide.
● 2004-07-27: Minor bug fix.
● 2004-01-21: First version of Event-Driven XML Parsing.
Date and Time Programming Guide

Contents

About Dates and Times 5
  At a Glance 5
    Creating and Using Date Objects to Represent Absolute Points in Time 5
    Working with Calendars and Date Components 6
    Performing Date and Time Calculations 6
    Working with Different Time Zones 6
    Special Considerations for Historical Dates 6
  How to Use this Document 7
  See Also 7
Dates 8
  Date Fundamentals 8
  Creating Date Objects 9
  Basic Date Calculations 10
Calendars, Date Components, and Calendar Units 11
  Calendar Basics 11
  Date Components and Calendar Units 12
  Converting between Dates and Date Components 12
  Converting from One Calendar to Another 14
Calendrical Calculations 16
  Adding Components to a Date 16
  Determining Temporal Differences 18
  Checking When a Date Falls 20
  Week-Based Calendars 21
Using Time Zones 23
  Creating Time Zones 23
  Application Default Time Zone 24
  Creating Dates with Time Zones 24
  Time Zones and Daylight Saving Time 25
Historical Dates 26
  The Gregorian Calendar Has No Year 0 26
  The Julian to Gregorian Transition 27
  Working with Eras with Backward Time Flow 27
Document Revision History 29
About Dates and Times

Date and time objects allow you to store references to particular instants in time. You can use date and time objects to perform calculations and comparisons that account for the corner cases of date and time calculations.

At a Glance

There are three main classes used for working with dates and times:

● NSDate allows you to represent an absolute point in time.
● NSCalendar allows you to represent a particular calendar, such as a Gregorian or Hebrew calendar. It provides the interface for most date-based calculations and allows you to convert between NSDate objects and NSDateComponents objects.
● NSDateComponents allows you to represent the components of a particular date, such as hour, minute, day, year, and so on.

In addition to these classes, NSTimeZone allows you to represent a geopolitical region’s time zone information. It eases the task of working across different time zones and performing calculations that may be affected by daylight saving time transitions.

Creating and Using Date Objects to Represent Absolute Points in Time

Date objects represent dates and times in Cocoa. Date objects allow you to store absolute points in time which are meaningful across locales, calendars, and time zones.

Relevant Chapters: “Dates” (page 8)

Working with Calendars and Date Components

Date components allow you to break a date down into the various parts that comprise it, such as day, month, year, hour, and so on. Calendars represent a particular form of reckoning time, such as the Gregorian calendar or the Chinese calendar. Calendar objects allow you to convert between date objects and date component objects, as well as from one calendar to another.

Relevant Chapters: “Calendars, Date Components, and Calendar Units” (page 11)

Performing Date and Time Calculations

Calendars and date components allow you to perform calculations such as the number of days or hours between two dates, or finding the Sunday in the current week. You can also add components to a date or check when a date falls.

Relevant Chapters: “Calendrical Calculations” (page 16)

Working with Different Time Zones

Time zone objects allow you to present absolute times as local—that is, wall clock—time. In addition to time offsets, they also keep track of daylight saving time differences.
Proper use of time zone objects can avoid issues such as miscalculation of elapsed time due to daylight saving time transitions or the user moving to a different time zone.

Relevant Chapters: “Using Time Zones” (page 23)

Special Considerations for Historical Dates

Dates in the past have a number of edge cases that do not exist for contemporary dates. These include issues such as dates that do not exist in a particular calendar (such as the lack of the year 0 in the Gregorian calendar) or calendar transitions (such as the Julian to Gregorian transition in the Middle Ages). There are also eras with seemingly backward time flow, such as BC dates in the Gregorian calendar.

Relevant Chapters: “Historical Dates” (page 26)

How to Use this Document

If your application keeps track of dates and times, read from “Dates” (page 8) to “Using Time Zones” (page 23). The NSDate, NSCalendar, NSDateComponents, and NSTimeZone classes described in these chapters work together to store, compare, and manipulate dates and times. If your application deals with dates in the past (particularly prior to the early 1900s), also read “Historical Dates” (page 26) to learn about some of the issues that can arise when dealing with dates in the past.

See Also

If you are new to Cocoa, read:

● Cocoa Fundamentals Guide, which introduces the basic concepts, terminology, architectures, and design patterns of the Cocoa frameworks and development environment.

If you display dates and times to users or create dates from user input, read:

● Data Formatting Guide, which explains how to create and format user-readable strings from date objects, and how to create date objects from formatted strings.

Dates

Date objects allow you to represent dates and times in a way that can be used for date calculations and conversions. As absolute points in time, date objects are meaningful across locales, time zones, and calendars.

Date Fundamentals

Cocoa represents dates and times as NSDate objects. NSDate is one of the fundamental Cocoa value objects. A date object represents an invariant point in time. Because a date is a point in time, it implies clock time as well as a day, so there is no way to define a date object to represent a day without a time. To understand how Cocoa handles dates, you must consider NSCalendar and NSDateComponents objects as well. In a nontechnical context, a point in time is usually represented by a combination of a clock time and a day on a particular calendar (such as the Gregorian or Hebrew calendar). Supporting different calendars is important for localization. In Cocoa, you use a particular calendar to decompose a date object into its date components such as year, month, day, hour, and minute. Conversely, you can use a calendar to create a date object from date components. Calendar and date component objects are described in more detail in “Calendars, Date Components, and Calendar Units” (page 11).

NSDate provides methods for creating dates, comparing dates, and computing intervals. Date objects are immutable. The standard unit of time for date objects is a floating-point value, typed as NSTimeInterval, expressed in seconds. This type makes possible a wide and fine-grained range of date and time values, giving precision within milliseconds for dates 10,000 years apart.
NSDate computes time as seconds relative to an absolute reference time: the first instant of January 1, 2001, Greenwich Mean Time (GMT). Dates before then are stored as negative numbers; dates after then are stored as positive numbers. The sole primitive method of NSDate, timeIntervalSinceReferenceDate, provides the basis for all the other methods in the NSDate interface. NSDate converts all date and time representations to and from NSTimeInterval values that are relative to the absolute reference date. Cocoa implements time according to the Network Time Protocol (NTP) standard, which is based on Coordinated Universal Time.

Creating Date Objects

If you want a date that represents the current time, you allocate an NSDate object and initialize it with init:

NSDate *now = [[NSDate alloc] init];

or use the NSDate class method date to create the date object. If you want some time other than the current time, you can use one of NSDate’s initWithTimeInterval... or dateWithTimeInterval... methods; typically, however, you use a more sophisticated approach employing a calendar and date components, as described in “Calendar Basics” (page 11). The initWithTimeInterval... methods initialize date objects relative to a particular time, which the method name describes. You specify (in seconds) how much more recent or how much further in the past you want your date object to be. To specify a date that occurs earlier than the method’s reference date, use a negative number of seconds.

Listing 1 defines two date objects. The tomorrow object is exactly 24 hours from the current date and time, and yesterday is exactly 24 hours earlier than the current date and time.

Listing 1 Creating dates with time intervals

NSTimeInterval secondsPerDay = 24 * 60 * 60;
NSDate *tomorrow = [[NSDate alloc] initWithTimeIntervalSinceNow:secondsPerDay];
NSDate *yesterday = [[NSDate alloc] initWithTimeIntervalSinceNow:-secondsPerDay];
[tomorrow release];
[yesterday release];

Listing 2 shows how to get new date objects with date-and-time values adjusted from existing date objects using dateByAddingTimeInterval:.

Listing 2 Creating dates by adding a time interval

NSTimeInterval secondsPerDay = 24 * 60 * 60;
NSDate *today = [[NSDate alloc] init];
NSDate *tomorrow, *yesterday;
tomorrow = [today dateByAddingTimeInterval:secondsPerDay];
yesterday = [today dateByAddingTimeInterval:-secondsPerDay];
[today release];

Basic Date Calculations

To compare dates, you can use the isEqualToDate:, compare:, laterDate:, and earlierDate: methods. These methods perform exact comparisons, which means they detect sub-second differences between dates. You may want to compare dates with a less fine granularity. For example, you may want to consider two dates equal if they are within a minute of each other. If this is the case, use timeIntervalSinceDate: to compare the two dates. The following code fragment shows how to use timeIntervalSinceDate: to see if two dates are within one minute (60 seconds) of each other.

if (fabs([date2 timeIntervalSinceDate:date1]) < 60) ...

To obtain the difference between a date object and another point in time, send a timeIntervalSince... message to the date object. For example, timeIntervalSinceNow gives you the time, in seconds, between the current time and the receiving date object.
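For example, the following fragment (a sketch, not one of this guide’s listings; the 90-second offset is arbitrary) uses the comparison methods described above.

NSDate *now = [NSDate date];
NSDate *later = [now dateByAddingTimeInterval:90]; // an arbitrary date 90 seconds from now

if ([now compare:later] == NSOrderedAscending) {
    NSLog(@"%@ is the later of the two dates", [now laterDate:later]);
}

// Treat two dates as equal if they are within a minute of each other.
if (fabs([later timeIntervalSinceDate:now]) < 60) {
    NSLog(@"The two dates are within a minute of each other");
}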
To get the component elements of a date, such as the day of the week, use an NSDateComponents object in conjunction with an NSCalendar object. This technique is described in “Calendar Basics” (page 11).

Calendars, Date Components, and Calendar Units

Calendar objects encapsulate information about systems of reckoning time in which the beginning, length, and divisions of a year are defined. You use calendar objects to convert between absolute times and date components such as years, days, or minutes.

Calendar Basics

NSCalendar provides an implementation of various calendars. It provides data for several different calendars, including Buddhist, Gregorian, Hebrew, Islamic, and Japanese (which calendars are supported depends on the release of the operating system—check the NSLocale class to determine which are supported on a given release). NSCalendar is closely associated with the NSDateComponents class, instances of which describe the component elements of a date required for calendrical computations.

Calendars are specified by constants in NSLocale. You can get the calendar for the user’s preferred locale most easily using the NSCalendar method currentCalendar; you can get the default calendar from any NSLocale object using the key NSLocaleCalendar. You can also create an arbitrary calendar object by specifying an identifier for the calendar you want. Listing 3 shows how to create a calendar object for the Japanese calendar and for the current user.

Listing 3 Creating calendar objects

NSCalendar *currentCalendar = [NSCalendar currentCalendar];
NSCalendar *japaneseCalendar = [[NSCalendar alloc] initWithCalendarIdentifier:NSJapaneseCalendar];
NSCalendar *usersCalendar = [[NSLocale currentLocale] objectForKey:NSLocaleCalendar];

Here, usersCalendar and currentCalendar are equal, although they are different objects.

Date Components and Calendar Units

You represent the component elements of a date—such as the year, day, and hour—using an NSDateComponents object. An NSDateComponents object can hold either absolute values or quantities of units (see “Adding Components to a Date” (page 16) for an example of using NSDateComponents to specify quantities of units). For date components objects to be meaningful, you need to know the associated calendar and purpose.

iOS Note: In iOS 4.0 and later, NSDateComponents objects can contain a calendar, a time zone, and a date object. This allows date components to be passed to or returned from a method and retain their meaning.

Day, week, weekday, month, and year numbers are generally 1-based, but there may be calendar-specific exceptions. Ordinal numbers, where they occur, are 1-based. Some calendars may have to map their basic unit concepts into the year/month/week/day/… nomenclature. The particular values of a unit are defined by each calendar and are not necessarily consistent with values for that unit in another calendar.

Listing 4 shows how you can create a date components object that you can use to create a date where the year unit is 2004, the month unit is 5, and the day unit is 6 (in the Gregorian calendar this is May 6, 2004). You can also use it to add 2004 year units, 5 month units, and 6 day units to an existing date. The value of weekday is undefined since it is not otherwise specified.
Listing 4 Creating a date components object

NSDateComponents *components = [[NSDateComponents alloc] init];
[components setDay:6];
[components setMonth:5];
[components setYear:2004];
NSInteger weekday = [components weekday]; // Undefined (== NSUndefinedDateComponent)

Converting between Dates and Date Components

To decompose a date into constituent components, you use the NSCalendar method components:fromDate:. In addition to the date itself, you need to specify the components to be returned in the NSDateComponents object. For this, the method takes a bit mask composed of Calendar Units constants. There is no need to specify any more components than those in which you are interested. Listing 5 shows how to calculate today’s day and weekday.

Listing 5 Getting a date’s components

NSDate *today = [NSDate date];
NSCalendar *gregorian = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar];
NSDateComponents *weekdayComponents = [gregorian components:(NSDayCalendarUnit | NSWeekdayCalendarUnit) fromDate:today];
NSInteger day = [weekdayComponents day];
NSInteger weekday = [weekdayComponents weekday];

This gives you the absolute components for a date. For example, if you ask for the year and day components for November 7, 2010, you get 2010 for the year and 7 for the day. If you instead want to know what number day of the year it is, you can use the ordinalityOfUnit:inUnit:forDate: method of the NSCalendar class.

It is also possible to create a date from components. You can configure an instance of NSDateComponents to specify the components of a date and then use the NSCalendar method dateFromComponents: to create the corresponding date object. You can provide as many components as you need (or choose to). When there is incomplete information to compute an absolute time, default values such as 0 and 1 are usually chosen by a calendar, but this is a calendar-specific choice. If you provide inconsistent information, calendar-specific disambiguation is performed (which may involve ignoring one or more of the parameters). Listing 6 shows how to create a date object to represent (in the Gregorian calendar) the first Monday in May, 2008.

Listing 6 Creating a date from components

NSDateComponents *components = [[NSDateComponents alloc] init];
[components setWeekday:2]; // Monday
[components setWeekdayOrdinal:1]; // The first Monday in the month
[components setMonth:5]; // May
[components setYear:2008];
NSCalendar *gregorian = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar];
NSDate *date = [gregorian dateFromComponents:components];

To guarantee correct behavior, you must make sure that the components used make sense for the calendar. Specifying “out of bounds” components—such as a day value of -6 or February 30 in the Gregorian calendar—produces undefined behavior.

You may want to create a date object without components such as years—to store your friend’s birthday, for instance. While it is not technically possible to create a yearless date, you can use date components to create a date object without a specified year, as in Listing 7.
Listing 7 Creating a yearless date

NSDateComponents *components = [[NSDateComponents alloc] init];
[components setMonth:11];
[components setDay:7];
NSCalendar *gregorian = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar];
NSDate *birthday = [gregorian dateFromComponents:components];

Note that birthday in this instance has the default value for the year, which in this case is 1 AD (though it is not guaranteed to always default to 1 AD). If you later convert this date back to components, or use an NSDateFormatter object to display it, make sure not to use the year value (as your friend may not appreciate being listed as that old). You can use the NSDateFormatter dateFormatFromTemplate:options:locale: method to create a yearless date formatter that adjusts to the user’s locale. For more information on date formatting, see Data Formatting Guide.

Converting from One Calendar to Another

To convert components of a date from one calendar to another—for example, from the Gregorian calendar to the Hebrew calendar—you first create a date object from the components using the first calendar, then you decompose the date into components using the second calendar. Listing 8 shows how to convert date components from one calendar to another.

Listing 8 Converting date components from one calendar to another

NSDateComponents *comps = [[NSDateComponents alloc] init];
[comps setDay:6];
[comps setMonth:5];
[comps setYear:2004];
NSCalendar *gregorian = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar];
NSDate *date = [gregorian dateFromComponents:comps];
[comps release];
[gregorian release];

NSCalendar *hebrew = [[NSCalendar alloc] initWithCalendarIdentifier:NSHebrewCalendar];
NSUInteger unitFlags = NSDayCalendarUnit | NSMonthCalendarUnit | NSYearCalendarUnit;
NSDateComponents *components = [hebrew components:unitFlags fromDate:date];
NSInteger day = [components day]; // 15
NSInteger month = [components month]; // 9
NSInteger year = [components year]; // 5764

Calendrical Calculations

NSDate provides the absolute scale and epoch for dates and times, which can then be rendered into a particular calendar for calendrical calculations or user display. To perform calendrical calculations, you typically need to get the component elements of a date, such as the year, the month, and the day. You should use the provided methods for dealing with calendrical calculations because they take into account corner cases like daylight saving time starting or ending and leap years.

Adding Components to a Date

You use the dateByAddingComponents:toDate:options: method to add components of a date (such as hours or months) to an existing date. You can provide as many components as you wish. Listing 9 shows how to calculate a date an hour and a half in the future.
Listing 9 An hour and a half from now

NSDate *today = [[NSDate alloc] init];
NSCalendar *gregorian = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar];
NSDateComponents *offsetComponents = [[NSDateComponents alloc] init];
[offsetComponents setHour:1];
[offsetComponents setMinute:30];
// Calculate when, according to Tom Lehrer, World War III will end
NSDate *endOfWorldWar3 = [gregorian dateByAddingComponents:offsetComponents toDate:today options:0];

Components to add can be negative. Listing 10 shows how you can get the Sunday in the current week (using a Gregorian calendar).

Listing 10 Getting the Sunday in the current week

NSDate *today = [[NSDate alloc] init];
NSCalendar *gregorian = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar];

// Get the weekday component of the current date
NSDateComponents *weekdayComponents = [gregorian components:NSWeekdayCalendarUnit fromDate:today];

/*
 Create a date components object to represent the number of days to subtract from the current date.
 The weekday value for Sunday in the Gregorian calendar is 1, so subtract 1 from the number of days
 to subtract from the date in question. (If today is Sunday, subtract 0 days.)
 */
NSDateComponents *componentsToSubtract = [[NSDateComponents alloc] init];
[componentsToSubtract setDay: 0 - ([weekdayComponents weekday] - 1)];
NSDate *beginningOfWeek = [gregorian dateByAddingComponents:componentsToSubtract toDate:today options:0];

/*
 Optional step: beginningOfWeek now has the same hour, minute, and second as the original date (today).
 To normalize to midnight, extract the year, month, and day components and create a new date from those components.
 */
NSDateComponents *components = [gregorian components:(NSYearCalendarUnit | NSMonthCalendarUnit | NSDayCalendarUnit) fromDate:beginningOfWeek];
beginningOfWeek = [gregorian dateFromComponents:components];

Sunday is not the beginning of the week in all locales. Listing 11 illustrates how you can calculate the first moment of the week (as defined by the calendar’s locale):

Listing 11 Getting the beginning of the week

NSDate *today = [[NSDate alloc] init];
NSCalendar *gregorian = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar];
NSDate *beginningOfWeek = nil;
BOOL ok = [gregorian rangeOfUnit:NSWeekCalendarUnit startDate:&beginningOfWeek interval:NULL forDate:today];

Determining Temporal Differences

There are a few ways to calculate the amount of time between dates. Depending on the context in which the calculation is made, the user likely expects different behavior, so whichever calculation you use, it should be clear to the user how the calculation is being performed. Because Cocoa implements time according to the NTP standard, these methods ignore leap seconds in the calculation. You use components:fromDate:toDate:options: to determine the temporal difference between two dates in units other than seconds (which you can calculate with the NSDate method timeIntervalSinceDate:). Listing 12 shows how to get the number of months and days between two dates using a Gregorian calendar.
Listing 12 Getting the difference between two dates

NSDate *startDate = ...;
NSDate *endDate = ...;
NSCalendar *gregorian = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar];
NSUInteger unitFlags = NSMonthCalendarUnit | NSDayCalendarUnit;
NSDateComponents *components = [gregorian components:unitFlags fromDate:startDate toDate:endDate options:0];
NSInteger months = [components month];
NSInteger days = [components day];

This method handles overflow as you may expect. If the fromDate: and toDate: parameters are a year and 3 days apart and you ask for only the days between them, it returns an NSDateComponents object with a value of 368 (or 369 in a leap year) for the day component. However, this method truncates the results of the calculation to the smallest unit supplied. For instance, if the fromDate: parameter corresponds to Jan 14, 2010 at 11:30 PM and the toDate: parameter corresponds to Jan 15, 2010 at 8:00 AM, there are only 8.5 hours between the two dates. If you ask for the number of days, you get 0, because 8.5 hours is less than 1 day. There may be situations where this should be 1 day. You have to decide which behavior your users expect in a particular case.

If you do need a calculation that returns the number of days, counted as the number of midnights between the two dates, you can use a category on NSCalendar similar to the one in Listing 13.

Listing 13 Days between two dates, as the number of midnights between

@implementation NSCalendar (MySpecialCalculations)
- (NSInteger)daysWithinEraFromDate:(NSDate *)startDate toDate:(NSDate *)endDate {
    NSInteger startDay = [self ordinalityOfUnit:NSDayCalendarUnit inUnit:NSEraCalendarUnit forDate:startDate];
    NSInteger endDay = [self ordinalityOfUnit:NSDayCalendarUnit inUnit:NSEraCalendarUnit forDate:endDate];
    return endDay - startDay;
}
@end

This approach works for other calendar units by specifying a different NSCalendarUnit value for the ordinalityOfUnit: parameter. For example, you can calculate the number of years based on the number of times Jan 1, 12:00 AM occurs between the two dates. Do not use this method for computing second differences, because it overflows NSInteger on 32-bit platforms. This method is valid only if you stay within the same era (in the Gregorian calendar this means that both dates must be AD or both must be BC). If you do need to compare dates across an era boundary, you can use something similar to the category in Listing 14.

Listing 14 Days between two dates in different eras

@implementation NSCalendar (MyOtherMethod)
- (NSInteger)daysFromDate:(NSDate *)startDate toDate:(NSDate *)endDate {
    NSCalendarUnit units = NSEraCalendarUnit | NSYearCalendarUnit | NSMonthCalendarUnit | NSDayCalendarUnit;
    NSDateComponents *comp1 = [self components:units fromDate:startDate];
    NSDateComponents *comp2 = [self components:units fromDate:endDate];
    [comp1 setHour:12];
    [comp2 setHour:12];
    NSDate *date1 = [self dateFromComponents:comp1];
    NSDate *date2 = [self dateFromComponents:comp2];
    return [[self components:NSDayCalendarUnit fromDate:date1 toDate:date2 options:0] day];
}
@end

This method creates components from the given dates, normalizes the time, and then compares the two dates. The calculation is more expensive than comparing dates within an era.
If you do not need to cross era boundaries, use the technique shown in Listing 13 (page 19) instead.

Checking When a Date Falls

If you need to determine whether a date falls within the current week (or within any other unit, for that matter), you can use the NSCalendar method rangeOfUnit:startDate:interval:forDate:. Listing 15 shows a method that determines whether a given date falls within this week. The week in this case is defined as the period from Sunday at midnight to the following Saturday just before midnight (in the Gregorian calendar).

Listing 15 Determining whether a date is this week

- (BOOL)isDateThisWeek:(NSDate *)date {
    NSDate *start;
    NSTimeInterval extends;
    NSCalendar *cal = [NSCalendar autoupdatingCurrentCalendar];
    NSDate *today = [NSDate date];
    BOOL success = [cal rangeOfUnit:NSWeekCalendarUnit startDate:&start interval:&extends forDate:today];
    if (!success) return NO;
    NSTimeInterval dateInSecs = [date timeIntervalSinceReferenceDate];
    NSTimeInterval dayStartInSecs = [start timeIntervalSinceReferenceDate];
    if (dateInSecs > dayStartInSecs && dateInSecs < (dayStartInSecs + extends)) {
        return YES;
    } else {
        return NO;
    }
}

This code uses NSTimeInterval values for the date to test and for the start of the week, and uses those values to determine whether the date falls in the current week.

Week-Based Calendars

A week-based calendar is defined by the weeks of a year. This can be complicated when the first week of the calendar overlaps the last week of the previous year’s calendar. In this case there are two important properties of the calendar:

1. What is the first day of the week?
2. How many days does a week near the beginning of the year have to have within the ordinary calendar year for it to be considered the first week in the week-based calendar year?

A week-based calendar’s first day of the year is on the first day of the week. The first week is preferred to be the week containing Jan 1 if that week satisfies the answer defined for the second point above. For example, suppose the first day of the week is defined as Monday, in a week-based calendar interpretation of the Gregorian calendar. Consider the 2009/2010 transition shown in Table 1 and Table 2:

Table 1 December 2009 Calendar
Sun 20, Mon 21, Tue 22, Wed 23, Thu 24, Fri 25, Sat 26
Sun 27, Mon 28, Tue 29, Wed 30, Thu 31

Table 2 January 2010 Calendar
Fri 1, Sat 2
Sun 3, Mon 4, Tue 5, Wed 6, Thu 7, Fri 8, Sat 9
Sun 10, Mon 11, Tue 12, Wed 13, Thu 14, Fri 15, Sat 16

Since the first day of the week is Monday, the 2010 week-based calendar year can begin on either December 28 or January 4. That is, December 30, 2009 (ordinary) could be December 30, 2010 (week-based). To choose between these two possibilities, there is the second criterion. The week Dec 28 - Jan 3 has 3 days in 2010; the week Jan 4 - Jan 10 has 7 days in 2010. If the minimum number of days in a first week is defined as 1, 2, or 3, the week of Dec 28 satisfies the first-week criterion and would be week 1 of the week-based calendar year 2010. Otherwise, the week of Jan 4 is the first week.

As another example, suppose you wanted to define your week-based calendar scheme such that the first week of the week-based calendar year is the week beginning with the first occurrence of the first day of the week in the ordinary calendar year.
Another way to put that is that you always want the first week of the week-based calendar year to be within the new ordinary calendar year; you never want your week-based calendar to start back in December of the previous ordinary year, as discussed in the previous example. In other words, you always want your week-based calendar to start on Jan 1 or later. In Table 2 (page 21), Monday January 4 is the first Monday of the ordinary year, so the week-based calendar begins on that day. What you are requesting, then, is that the first week of your week-based calendar be entirely within the new ordinary year, or that the minimum number of days in the first week be 7.

NSYearForWeekOfYearCalendarUnit is the year number of a week-based calendar interpretation of the calendar you’re working with, and the two properties of the week-based calendar discussed above correspond to the two NSCalendar properties firstWeekday and minimumDaysInFirstWeek.

Using Time Zones

Time zones can create numerous problems for applications. Consider the following situation. You are in New York and it is 12:30 AM. You have an application that displays all of the Major League Baseball games that happen tomorrow. Because tomorrow is different depending on the time zone, situations like this must be carefully accounted for. Fortunately, a little planning and the assistance of the NSTimeZone class ease this task considerably. NSTimeZone is an abstract class that defines the behavior of time zone objects. Time zone objects represent geopolitical regions; consequently, these objects have region names. Time zone objects also represent a temporal offset, either plus or minus, from Greenwich Mean Time (GMT), and an abbreviation (such as PST).

Creating Time Zones

Time zones affect the values of date components that are calculated by calendar objects for a given NSDate object. You can create an NSTimeZone object and use it to set the time zone of an NSCalendar object. By default, NSCalendar uses the default time zone for the application—or process—when the calendar object is created. Unless the default time zone has been otherwise set, it is the time zone set in System Preferences. In most cases, the user’s default time zone should be used when creating date objects. There are cases when it may be necessary to use arbitrary time zones. For example, the user may want to specify that an appointment is in Greenwich Mean Time, because it is during her business trip to London next week.

NSTimeZone provides several class methods to make time zone objects: timeZoneWithName:, timeZoneWithAbbreviation:, and timeZoneForSecondsFromGMT:. In most cases timeZoneWithName: provides the most accurate time zone, as it adjusts for daylight saving time; the trade-off is that you must know the location you are creating a time zone for more precisely. For a complete list of time zone names known to the system, you can use the knownTimeZoneNames class method:

NSArray *timeZoneNames = [NSTimeZone knownTimeZoneNames];

Application Default Time Zone

You can set the default time zone within your application using setDefaultTimeZone:. You can access this default time zone at any time with the defaultTimeZone class method. With the localTimeZone class method, you can get a time zone object that automatically updates itself to reflect changes to the default time zone.
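For example, the following fragment (a sketch, not one of this guide’s listings; the zone name is arbitrary) creates a time zone, makes it the application default, and retrieves the default both ways.

NSTimeZone *chicago = [NSTimeZone timeZoneWithName:@"America/Chicago"];
[NSTimeZone setDefaultTimeZone:chicago];

// defaultTimeZone returns the default as it is right now;
// localTimeZone returns an object that tracks later changes to the default.
NSTimeZone *currentDefault = [NSTimeZone defaultTimeZone];
NSTimeZone *trackingDefault = [NSTimeZone localTimeZone];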
Creating Dates with Time Zones

Time zones play an important part in determining when dates take place. Consider a simple calendar application that keeps track of appointments. For example, say you live in Chicago and you have a dentist appointment coming up at 10:00 AM on Tuesday. You will be in New York for Sunday and Monday, however. When you created that appointment, it was done with the mindset of an absolute time. That time is 10:00 AM Central Time; when you go to New York, the time should be presented as 11:00 AM because you are in a different time zone, but it is the same absolute time. On the other hand, if you create an appointment to wake up and exercise every morning at 7:00 AM, you do not want your alarm to go off at 1:00 PM simply because you are on a business trip to Dublin—or at 5:00 AM because you are in Los Angeles.

NSDate objects store dates in absolute time. For example, the date object created in Listing 16 represents 4:00 PM CDT, 5:00 PM EDT, and so on.

Listing 16  Creating a date from components using a specific time zone

NSCalendar *gregorian = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar];
[gregorian setTimeZone:[NSTimeZone timeZoneWithAbbreviation:@"CDT"]];
NSDateComponents *timeZoneComps = [[NSDateComponents alloc] init];
[timeZoneComps setHour:16];
// specify whatever day, month, and year is appropriate
NSDate *date = [gregorian dateFromComponents:timeZoneComps];

If you need to create a date that is independent of time zone, you can store the date as an NSDateComponents object—as long as you store some reference to the corresponding calendar. In iOS, NSDateComponents objects can contain a calendar, a time zone, and a date object. You can therefore store the calendar along with the components. If you use the date method of the NSDateComponents class to access the date, make sure that the associated time zone is up to date.

Time Zones and Daylight Saving Time

The NSTimeZone class also provides a number of instance methods to determine information about daylight saving time:

● isDaylightSavingTime determines whether daylight saving time is currently in effect.
● daylightSavingTimeOffset determines the current daylight saving time offset. For most time zones this is either zero or one hour.
● nextDaylightSavingTimeTransition determines when the next daylight saving time transition occurs.

There are also similarly named methods for determining this information for specific dates. If you are keeping track of events and appointments in your application, you can use this information to remind the user of upcoming daylight saving time transitions.

Historical Dates

There are a number of issues that can arise when dealing with dates in the past that do not arise for contemporary dates. These include dates that do not exist, previous eras in which time flows from higher year numbers to lower ones (such as BC dates in the Gregorian calendar), and calendar transitions (such as the transition from the Julian calendar to the Gregorian calendar).

The Gregorian Calendar Has No Year 0

In the Julian and Gregorian calendars represented by NSGregorianCalendar, there is no year 0. This means that the day following December 31, 1 BC is January 1, 1 AD.
All of the provided methods for calendrical calculations take this into account, but you may need to account for it when you are creating dates from components. If you attempt to create a date with year 0, it is instead 1 BC. In addition, if you create a date from components using a negative year value, it is created using astronomical year numbering, in which 0 corresponds to 1 BC, -1 corresponds to 2 BC, and so on. For example, the two dates created in Listing 17 equivalently represent May 7, 8 BC.

Listing 17  Using negative years to represent BC dates

NSCalendar *gregorian = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar];
NSDateComponents *bcDateComp = [[NSDateComponents alloc] init];
[bcDateComp setMonth:5];
[bcDateComp setDay:7];
[bcDateComp setYear:8];
[bcDateComp setEra:0];
NSDateComponents *astronDateComp = [[NSDateComponents alloc] init];
[astronDateComp setMonth:5];
[astronDateComp setDay:7];
[astronDateComp setYear:-7];
NSDate *bcDate = [gregorian dateFromComponents:bcDateComp];
NSDate *astronDate = [gregorian dateFromComponents:astronDateComp];

The Julian to Gregorian Transition

NSCalendar models the transition from the Julian to the Gregorian calendar in October 1582. During this transition, 10 days were skipped. This means that October 15, 1582 follows October 4, 1582. All of the provided methods for calendrical calculations take this into account, but you may need to account for it when you are creating dates from components. Dates created in the gap are pushed forward by 10 days; for example, October 8, 1582 is stored as October 18, 1582.

Some countries adopted the Gregorian calendar at various later times. Nevertheless, for consistency the change is modeled at the same time regardless of locale. If you need absolute historical accuracy for a particular locale, you can subtract the appropriate number of days from the date given by the Gregorian calendar. The number of days to subtract corresponds to the number of extra leap days in the Julian calendar: for each year divisible by 100 but not by 400, the Julian calendar falls another day behind the Gregorian calendar. If you need to create a Julian date, you must subtract the correct number of days from a Gregorian date (10 in the 1500s and 1600s, 11 in the 1700s, 12 in the 1800s, 13 in the 1900s and 2000s, and so on). You must also take into account the existence of leap days that aren't in the Gregorian calendar.

Working with Eras with Backward Time Flow

In the Gregorian calendar, time is divided into two eras, the BC era and the AD era. In the BC era, time seemingly flows backwards, that is, from higher year numbers to lower. However, days and months flow in the normal direction; for example, February 1 follows January 31. This can be confusing if you ask what day follows December 31, 7 BC. The correct answer is January 1, 6 BC. This example is illustrated in Listing 18.

Listing 18  Tomorrow in the BC era

NSCalendar *gregorian = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar];
NSDateComponents *dateBCComps = [[NSDateComponents alloc] init];
[dateBCComps setEra:0]; // Era 0 corresponds to BC
[dateBCComps setMonth:12];
[dateBCComps setDay:31];
[dateBCComps setYear:7];
NSDate *dateBC = [gregorian dateFromComponents:dateBCComps];
NSDateComponents *offsetDate = [[NSDateComponents alloc] init];
[offsetDate setDay:1];
NSDate *dateBC2 = [gregorian dateByAddingComponents:offsetDate toDate:dateBC options:0];

After this code executes, dateBC2 corresponds to January 1, 6 BC.

Document Revision History

This table describes the changes to Date and Time Programming Guide.

2011-06-06  Expanded Calendrical Calculations section. Added Historical Dates Section and Week-Based Year Section.
2010-02-24  Corrected code snippet.
2009-07-21  Added links to Cocoa Core Competencies.
2008-07-03  Moved information about NSCalendarDate to an appendix, rewrote articles to replace references to NSCalendarDate, and expanded content.
2007-09-04  Added a section about how to get date components using NSCalendar and NSDateComponents and a section about how to convert from one calendar to another. Removed information about converting a date to a string. See NSDateFormatter Class Reference for that information.
2007-03-06  Enhanced discussion of calendrical calculations using NSDateComponents.
2006-05-23  Added note regarding Julian and Gregorian calendars. Corrected typographical errors. Added a note about the use of width specifiers for calendar date format strings.
2006-02-07  Updated to include NSCalendar and NSDateFormatter changes introduced in OS X v10.4.
2005-08-11  Changed title from "Dates and Times." Corrected minor typographic error.
2002-11-12  Revision history was added to existing document. It will be used to record changes to the content of the document.

© 2002, 2011 Apple Inc. All Rights Reserved.
Window Programming Guide

Contents

Introduction
    Organization of This Document
    See Also
How Windows Work
How a Window is Displayed
How Modal Windows Work
How Panels Work
How Window Controllers Work
    Window Closing Behavior
Opening and Closing Windows
Window Layering and Types of Windows
    Window Layering
    Key and Main Windows
        The Key Window
        The Main Window
        Changing a Window's Status
Window Layers and Levels
    Window Levels
    Setting Ordering and Level Programmatically
Setting Window Collection Behavior
    Spaces Collection Behavior
    Exposé Collection Behavior
    Window Cycling Behavior
Sizing and Placing Windows
    Setting a Window's Size and Location
    Window Cascading
    Window Zooming
    Constraining a Window's Size and Location
Saving a Window's Position into the User's Defaults
Minimizing Windows
Using the Window Menu
Setting a Window's Appearance
    Setting a Window's Style
    Setting a Window's Color and Transparency
    Setting a Window's Color Space
    Setting a Window's Content Border Thickness
Setting a Window's Title and Represented File
Setting Attributes for the Window's Image
    Specifying How To Store the Window's Image
    Specifying Where To Store the Window's Image
    Specifying When the Window's Image Is Created
    Specifying Whether the Window's Image Persists When Offscreen
    Specifying the Depth Limit for the Window's Image
    Specifying Whether the Depth Limit Changes to the Screen's Capacity
    Specifying Whether Window Content Can Be Read or Written by Another Process
Handling Events in Windows
Using Keyboard Interface Control in Windows
Using the Window's Field Editor
Using Window Notifications and Delegate Methods
Dragging Images to and from Windows
Updating the Cursor Image in a Window
Caching Window Images
Document Revision History

Figures and Listings

Figure 1  Main, key, and inactive windows (in "Window Layering and Types of Windows")
Listing 1  Saving a window's frame automatically (in "Saving a Window's Position into the User's Defaults")

Introduction

An application displays windows on the screen that must be managed and coordinated. A window object corresponds to at most one on-screen window. The two principal functions of windows are to provide an area in which views can be placed and to accept and distribute the events the user sends through actions with the mouse and keyboard.
The term window sometimes refers to the Application Kit object and sometimes to the window server's window device; which meaning is intended is made clear in context. Panels are a special kind of window that typically serves an auxiliary function in an application, such as a utility window. This document is intended for Cocoa developers who need to work with windows and panels in their applications.

Organization of This Document

This programming topic describes how to use windows and panels. These articles give you basic information on the different types of windows and how they work:

● "How Windows Work" (page 9) describes the classes that define objects that manage and coordinate the windows an application displays.
● "How a Window is Displayed" (page 11) describes how window drawing is accomplished.
● "How Modal Windows Work" (page 12) describes the behavior of modal windows.
● "How Panels Work" (page 14) describes the various uses of panels.
● "How Window Controllers Work" (page 15) describes the relationship between a window and its controller.
● "Window Layering and Types of Windows" (page 18) describes window layering and the concepts of key and main windows, and how a window can avoid becoming key or main.
● "Window Layers and Levels" (page 22) describes window levels, and how to place a window in a specific level, such as the level for document windows, palettes, or tear-off menus.
● "Setting Window Collection Behavior" (page 24) describes how to set a window's behavior with Spaces, Exposé, and window cycling.

These articles describe how to use windows:

● "Opening and Closing Windows" (page 17) describes how to open and close, or just show and hide, a window.
● "Sizing and Placing Windows" (page 26) describes how to control a window's size and position, including how to set its minimum and maximum size, how to constrain it to the screen, how to cascade it so its title bar remains visible, how to zoom it as though the user pressed the zoom button, and how to center it on the screen.
● "Saving a Window's Position into the User's Defaults" (page 29) describes how to store a window's position in the user defaults system, so that it appears in the same location the next time the user starts the application.
● "Minimizing Windows" (page 30) describes how to replace a window with a smaller counterpart in the Dock.
● "Using the Window Menu" (page 31) describes how to place a window's name in the Window menu that appears in most Cocoa applications.

These articles describe how to change what a window looks like:

● "Setting a Window's Appearance" (page 32) describes how to choose whether to display a window's peripheral elements, including its title bar, close box, zoom box, or size box. It also describes how to set a window's background color and transparency.
● "Setting a Window's Title and Represented File" (page 34) describes how to set a window's title with either a string or the filename of the window's represented file.
● "Setting Attributes for the Window's Image" (page 35) describes how to set attributes for the window's device, which stores the window's image, including how the image is stored, when the image is created, and the image's color depth.

These articles describe how to handle a window's events:

● "Handling Events in Windows" (page 38) gives basic information on how a window handles events.
● "Using Keyboard Interface Control in Windows" (page 39) describes how to navigate between a window's fields using the Tab key and how to use the Return and Escape keys to select default buttons.
● "Using the Window's Field Editor" (page 40) describes how to use the window's text object, which is shared for light editing tasks.

These articles describe some advanced features of windows:

● "Using Window Notifications and Delegate Methods" (page 41) describes the notifications and delegate methods used when a window gains or loses key or main window status, minimizes, moves or resizes, becomes exposed, or closes.
● "Dragging Images to and from Windows" (page 42) describes what happens when the user wants to drag an object into or out of a window.
● "Updating the Cursor Image in a Window" (page 43) directs you to information on how to change the cursor image when the cursor is over a specified area in a view.
● "Caching Window Images" (page 44) describes how to temporarily cache a portion of a window's image so that it can be restored later. This is useful when highly dynamic drawing must be done over an otherwise static image of the window.

See Also

For additional information on specific types of windows and panels, you can also see the following programming topics:

● Sheet Programming Topics describes a dialog attached to a specific window, ensuring that a user never loses track of which window the dialog belongs to.
● Drawer Programming Topics describes a type of view that slides out from one side of a window.
● Toolbar Programming Topics for Cocoa describes a standard way to display a toolbar for a titled window below its title bar and provide users with a way to customize toolbars and save those customizations.
● Dialogs and Special Panels describes alert panels and other specialized types of panels, such as Font, Save, and Print panels.
● Document-Based App Programming Guide for Mac describes how to use the architecture supplied by AppKit to create applications that can create, open, load, and save multiple document files.
● Cocoa Event Handling Guide discusses the variety of ways your application objects can handle the events they receive.

How Windows Work

The NSWindow class defines objects that manage and coordinate the windows an application displays on the screen. A single NSWindow object corresponds to at most one onscreen window. The two principal functions of an NSWindow object are to provide an area in which NSView objects can be placed and to accept and distribute, to the appropriate views, the events the user instigates through actions with the mouse and keyboard. Note that the term window sometimes refers to the Application Kit object and sometimes to the window server's display device; which meaning is intended is made clear in context.

AppKit also defines an abstract subclass of NSWindow—NSPanel—that adds behavior more appropriate for auxiliary windows.

An NSWindow object is defined by a frame rectangle that encloses the entire window, including its title bar, border, and other peripheral elements (such as the resize control), and by a content rectangle that encloses just its content area. Both rectangles are specified in the screen coordinate system and are restricted to integer values. The frame rectangle establishes the window's base coordinate system.
This coordinate system is always aligned with and measured in the same increments as the screen coordinate system (in other words, the base coordinate system can't be rotated or scaled). The origin of the base coordinate system is the bottom-left corner of the window's frame rectangle.

Typically, you create windows using Interface Builder, which allows you to position them, set many of their attributes, and lay out their views. The programmatic work you do with windows more often involves bringing them on and off the screen; changing dynamic attributes such as the window's title; running modal windows to restrict user input; and assigning a delegate that can monitor certain of the window's actions, such as closing, zooming, and resizing. You can also create a window programmatically with one of its initializers by specifying, among other attributes, the size and location of its content rectangle. The frame rectangle is derived from the dimensions of the content rectangle.

When it's created, a window automatically creates two views: an opaque frame view that fills the frame rectangle and draws the border, title bar, other peripheral elements, and background, and a transparent content view that fills the content rectangle. The frame view and its peripheral elements are private objects that your application can't access directly. The content view is the "highest" accessible view in the window; you can replace the default content view with a view of your own creation using the setContentView: method. The window determines the placement of the content view; you can't position it using the NSView methods that begin with setFrame; you must use the NSWindow class's placement methods, as described in "Opening and Closing Windows" (page 17).

You add other views to the window as subviews of the content view or as subviews of any of the content view's subviews, and so on, via the addSubview: method of NSView. This tree of views is called the window's view hierarchy. When a window is told to display itself, it does so by sending display... messages to the top-level view in its view hierarchy. Because displaying is carried out in a determined order, the content view (which is drawn first) may be wholly or partially obscured by its subviews, and these subviews may be obscured by their subviews (and so on).

How a Window is Displayed

Displaying an NSWindow object begins with the drawing performed by its view objects, which accumulates in the window's display buffer or appears immediately on the screen. Windows, like NSView objects, can be displayed unconditionally or merely marked as needing display, using the display and setViewsNeedDisplay: methods, respectively. A displayIfNeeded message causes the window's views to display only if they've been marked as needing display.

Normally, any time a view is marked as needing display, the window makes note of this fact and automatically displays itself shortly thereafter. This automatic display is typically performed on each pass through the event loop, but can be turned off using the setAutodisplay: method. If you turn off autodisplay for a window, you're then responsible for displaying it whenever necessary.

A window's views can be drawn concurrently. You can use the methods allowsConcurrentViewDrawing and setAllowsConcurrentViewDrawing: to determine and set, respectively, whether or not a window draws its views concurrently.
By default, a window's views are drawn concurrently.

On each pass through the event loop, the application object invokes its updateWindows method, which sends an update message to each window. Subclasses of NSWindow can override this method to examine the state of the application and change their own state or appearance accordingly—enabling or disabling menus, buttons, and other controls based on the object that's selected, for example.

In addition to displaying itself on the screen, a window can print itself in its entirety, just as a view can. The print: method runs the application's Print panel and causes the window's frame view to print itself. The dataWithEPSInsideRect: method behaves similarly. For additional information, see Printing Programming Guide for OS X.

How Modal Windows Work

You can make a whole window or panel run in application-modal fashion, using the application's normal event loop machinery but restricting input to the modal window or panel. Modal operation is useful for windows and panels that require the user's attention before an action can proceed. Examples include error messages and warnings, as well as operations that require input, such as open dialogs, or dialogs that apply to multiple windows.

There are two mechanisms for operating an application-modal window or panel. The first, and simpler, is to invoke the runModalForWindow: method of NSApplication, which monopolizes events for the specified window until one of stopModal, abortModal, or stopModalWithCode: is invoked, typically by a button's action method. The stopModal method ends the modal status of the window or panel from within the event loop. It doesn't work if invoked from a method invoked by a timer or by a distributed object, because those mechanisms operate outside of the event loop. To terminate the modal loop in these situations, you can use abortModal. The stopModal method is typically invoked when the user clicks the OK button (or equivalent), abortModal when the user clicks the Cancel button (or presses the Escape key). These two methods are equivalent to stopModalWithCode: with the appropriate argument.

The second mechanism for operating a modal window or panel, called a modal session, allows the application to perform a long operation while it still sends events to the window or panel. Modal sessions are particularly useful for panels that allow the user to cancel or modify an operation. To begin a modal session, invoke beginModalSessionForWindow: on the application, which sets the window up for the session and returns an identifier used for other session-controlling methods. At this point, the application can run in a loop that performs the operation, invoking runModalSession: on the application object on each pass so that pending events can be dispatched to the modal window. This method returns a code indicating whether the operation should continue, stop, or abort, which is typically established by the methods described above for runModalForWindow:. After the loop concludes, you can remove the window from the screen and invoke endModalSession: on the application to restore the normal event loop.

Note: You can write a modal event loop for a view object so that the object has access to all events pertaining to a particular task, such as tracking the mouse in the view. For an example, see "Responding to User Events and Actions" in "Creating a Custom View".
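The following minimal sketch (not part of the original guide) shows the shape of such a modal session loop. The progressPanel outlet and the doWorkChunk helper are assumed placeholders for a real panel and one slice of the long-running operation.

- (void)runLongOperationModally {
    NSModalSession session = [NSApp beginModalSessionForWindow:progressPanel];
    for (;;) {
        // Dispatch pending events to the panel; a Cancel button's action
        // can end the loop by invoking stopModal or abortModal.
        if ([NSApp runModalSession:session] != NSRunContinuesResponse)
            break;
        // Perform one slice of the long-running operation (assumed helper);
        // it returns NO when there is no more work to do.
        if (![self doWorkChunk])
            break;
    }
    [NSApp endModalSession:session];
    [progressPanel orderOut:self];
}

This mirrors the pattern described above: runModalSession: keeps the panel responsive on every pass, and endModalSession: restores the normal event loop when the work finishes or is canceled.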
The normal behavior of a modal window or session is to exclude all other windows and panels from receiving events. For windows and panels that serve as general auxiliary controls, such as menus and the Font panel, this behavior is overly restrictive. The user must be able to use menu key equivalents (such as those for Cut and for Paste) and change the font of text in the modal window, and this requires that non-modal panels be able to receive events. To support this behavior, an NSWindow subclass overrides the worksWhenModal method to return YES. This allows the window to receive mouse and keyboard events even when a modal window is present. If a subclass needs to work when a modal window is present, it should generally be a subclass of NSPanel, not of NSWindow.

Modal windows and modal sessions provide different levels of control to the application and the user. Modal windows restrict all action to the window itself and any methods invoked from the window. Modal sessions allow the application to continue an operation while accepting input only through the modal session window. Beyond this, you can use distributed objects to perform background operations in a separate thread, while allowing the user to perform other actions with any part of the application. The background thread can communicate with the main thread, allowing the application to display the status of the operation in a non-modal panel, perhaps including controls to stop or affect the operation as it occurs. Note that because AppKit isn't thread-safe, the background thread should communicate with a designated object in the main thread that in turn interacts with AppKit.

Before OS X version 10.6, if a modal window was open, application termination was prevented if the user attempted to terminate that window's application. Beginning in OS X version 10.6, you can call setPreventsApplicationTerminationWhenModal: with a value of NO, and the window will not prevent application termination when modal. The current value of this property may be accessed by calling preventsApplicationTerminationWhenModal. The default value is NO.

How Panels Work

A panel is a special kind of window, typically serving an auxiliary function in an application. The NSPanel subclass of NSWindow adds a few special behaviors to windows in support of the role panels play:

● By default, panels are not released when they're closed, because they're usually lightweight and often reused.
● Onscreen panels, except for alert dialogs, are removed from the screen when the application isn't active and are restored when the application again becomes active. This reduces screen clutter. Specifically, the NSWindow implementation of the hidesOnDeactivate method returns NO, but the NSPanel implementation of the same method returns YES.
● Panels can become the key window, but they cannot become the main window.
● If a panel is the key window and has a close button, it closes itself when the user presses the Escape key.

In addition to these automatic behaviors, the NSPanel class allows you to configure certain other behaviors common to some kinds of panels:

● You can prevent a panel from becoming the key window unless the user clicks in a view that responds to typing. This prevents the key window from shifting to the panel unnecessarily. The setBecomesKeyOnlyIfNeeded: method controls this behavior.
● Palettes and similar panels can be made to float above standard windows and other panels. This prevents them from being covered and keeps them readily available to the user. The setFloatingPanel: method controls this behavior.
● A panel can be made to receive mouse and keyboard events even when another window or panel is being run modally or in a modal session. This permits actions in the panel to affect the modal window or panel. The setWorksWhenModal: method controls this behavior. See "How Modal Windows Work" (page 12) for more information on modal windows and panels.

How Window Controllers Work

A controller object (in this case, an instance of the NSWindowController class) manages a window; this object is usually stored in a nib file. This management entails the following:

● Loading and displaying the window
● Closing the window when appropriate
● Customizing the window's title
● Storing the window's frame (size and location) in the defaults database
● Cascading the window in relation to other document windows of the application

A window controller can manage a window by itself or as a participant in AppKit's document-based architecture, which also includes the NSDocument and NSDocumentController classes. In this architecture, a window controller is created and managed by a document (an instance of an NSDocument subclass) and, in turn, keeps a reference to the document. For a discussion of this architecture, see Document-Based App Programming Guide for Mac.

The relationship between a window controller and a nib file is important. Although a window controller can manage a programmatically created window, it usually manages a window in a nib file. The nib file can contain other top-level objects, including other windows, but the window controller's responsibility is this primary window. The window controller is usually the owner of the nib file, even when it is part of a document-based application.

For simple documents—that is, documents with only one nib file containing a window—you need do little directly with NSWindowController objects. AppKit creates one for you. However, if the default window controller is not sufficient, you can create a custom subclass of NSWindowController. For documents with multiple windows or panels, your document must create separate instances of NSWindowController (or of custom subclasses of NSWindowController), one for each window or panel. An example is a CAD application that has different windows for side, top, and front views of drawn objects. What you do in your NSDocument subclass determines whether the default NSWindowController object or separately created and configured NSWindowController objects are used.

Window Closing Behavior

When a window is closed and it is part of a document-based application, the document removes the window's window controller from its list of window controllers. This results in the system deallocating the window controller and the window, and possibly the NSDocument object itself. When a window controller is not part of a document-based application, closing the window does not by default result in the deallocation of the window or window controller. This is the desired behavior for a window controller that manages something like an inspector; you shouldn't have to load the nib file again and re-create the objects the next time the user requests the inspector.
If you want the closing of a window to make both window and window controller go away when it isn't part of a document, your subclass of NSWindowController can observe the NSWindowWillCloseNotification notification or, as the window delegate, implement the windowWillClose: method.

Opening and Closing Windows

This article describes how to open and close a window. Opening a window—that is, making a window visible—is normally accomplished by placing the window into the application's window list, by invoking one of the NSWindow methods such as makeKeyAndOrderFront: or orderFront:. If the appropriate attribute is set in Interface Builder, the window is also shown automatically when its nib file is loaded.

Closing a window involves explicit use of either the close method, which simply removes the window from the screen, or performClose:, which highlights the close button as though the user clicked it. Closing a window involves at least removing it from the screen, but may include disposing of it altogether. The setReleasedWhenClosed: method specifies whether a window releases itself when it receives a close message. A window's delegate is also notified when it's about to close, as described in "Using Window Notifications and Delegate Methods" (page 41).

You can also hide a window without closing it. The orderOut: method removes a window from the screen, and you can set a window to be removed from the screen automatically when its application isn't active using setHidesOnDeactivate:. The isVisible method returns whether a window is on or off the screen.

Window Layering and Types of Windows

Each window is placed on the screen by a particular application, and each application typically owns a variety of windows. Windows have numerous characteristics. They can be located onscreen or offscreen. Onscreen windows are placed on the screen in levels managed by the window server. Windows onscreen are ordered from front to back. Like sheets of paper loosely stacked together, windows in front can overlap, or even completely cover, those behind them. Each window has a unique position in the order. When two windows are placed side by side, one is still technically in front of the other.

If any window could be in front of any other window, then small but important windows—like menus and tool palettes—might get lost behind larger ones. Windows that require user action, like attention panels and pop-up lists, might disappear behind another window and go unnoticed. To prevent this, all the windows onscreen are organized into levels. When two windows belong to the same level, either one can be in front. When two windows belong to different levels, however, the one in the higher level is always above the other.

Onscreen windows can also carry a status: main or key. Offscreen windows are hidden or minimized in the Dock, and do not carry either status. Onscreen windows that are neither main nor key are inactive.

Window Layering

Each application and document window exists in its own layer, so documents from different applications can be interleaved. Clicking a window to bring it to the front doesn't disturb the layering order of any other window. A window's depth in the layers is determined by when the window was last accessed. When a user clicks an inactive document or chooses it from the Window menu, only that document, and any open utility windows, should be brought to the front.
Users can bring all windows of an application forward by clicking its icon in the Dock or by choosing Bring All to Front in the application's Window menu. These actions should bring forward all of the application's open windows, maintaining their onscreen location, size, and layering order within the application. For more information, see "UI Element Guidelines: Menus" in OS X Human Interface Guidelines.

Utility windows are always in the same layer: the top layer. They are visible only when their application is active.

Key and Main Windows

Windows have different looks based on how the user is interacting with them. The foremost document or application window that is the focus of the user's attention is referred to as the main window. Each application has only one main window at a given time. This main window often has key status as well. The main window is the principal focus of user actions for an application. Often, user actions in a modal key window (typically a panel such as the Font window or an Info window) have a direct effect on the main window.

Main and key windows are both active windows. Active windows are visually distinct from inactive windows in that their controls have color, while the controls in inactive windows do not. Inactive windows are windows the user has open but that are not in the foreground. Main and key windows are always in the foreground and their controls always have color. If the main and key windows are different windows, they are distinguished from one another by the look of their title bars. Note the visual distinctions between main, key, and inactive windows in Figure 1.

Figure 1  Main, key, and inactive windows (the figure shows an inactive window, a main window, and a key window)

A good example of the difference between key and main windows can be seen in most well-behaved Mac apps. Selecting "Save As..." in a text document, for example, displays a panel with a field in which to type the document's name and a pull-down menu of locations to save it. The panel represents the key window. It accepts your keyboard input (the file name), but directly affects the main window under it (by saving it to the location you specified). Once you save the document, the save panel disappears, the main window becomes key again, and it accepts keyboard input once more.

The Key Window

The key window responds to user input, whether from the keyboard, mouse, or alternative input devices, for an application and is the primary recipient of messages from menus and panels. Usually, a window is made key when the user clicks it. Each application can have only one key window at a given time.

Users expect to see their actions on the keyboard and mouse take effect not only in a particular application, but also in a particular window of that application. Each user action is associated with a window by the window server and AppKit. Before acting, the user needs to know which window will be affected; there should be no surprises. Since the mouse controls the pointer, it's quite easy for the user to determine which window a mouse action is associated with: it's whatever window the pointer is over. But the keyboard doesn't have a pointer, so there's no natural way to determine where typed characters will appear. To mark the key window for users, AppKit highlights its title bar.
You can think of the highlighting as a kind of pointer for the keyboard. It shifts from window to window as the key window changes. Key-window status also moves from application to application as the active application changes. Only one window on the screen is marked at a time, and it is in the active application. There's just one key window on the desktop. Even a system that has two screens, but only one keyboard, has at most one key window.

Note: A window doesn't have to become the key window to receive, and act on, keyboard shortcuts. It does, however, have to be a window in the active application.

Since the key window belongs to the active application, its highlighted title bar has the secondary effect of helping to show which application is currently active. The key window is the most prominently marked window in the active application, making it "key" in a second sense: it's the main focus of the user's attention on the screen.

The Main Window

The main window is the standard window where the user is currently working. The main window is not always the key window. There are times when a window other than the main window takes the focus of the input device, while the main window still remains the focus of the user's attention and of user actions carried out in panels and menus. For example, when a person is using an inspector, a Find dialog, or the Fonts or Colors windows, the document is the main window and the other window is the key window. The Find panel requires the user to supply information by typing it. Since the panel is the destination of the user's keystrokes, it's marked as the key window. But the panel is just an instrument through which users can do work in another window—the main window.

In a document-based application, the main window is the window for the current document. Whenever a standard window becomes the key window, it also becomes the main window. When key-window status shifts from a standard window to a panel, main-window status remains with the standard window. So that users can pick out the main window when it's not the key window, the Application Kit highlights its title bar and colors the window buttons. If the main window is also the key window, it has only the highlighting of the key window.

A menu command might affect either the key window or the main window, depending on the command. For example, the Paste command can be used to enter text in a Find panel. But the Save command saves the document displayed in the main window, and the Bold command turns the current selection in the main window bold. For this reason, user actions in a panel or menu are associated with both the key window and the main window:

● An action is first associated with the key window.
● If the key window is a panel and it can't handle the action, the action is next associated with the main window.

Note that this order of precedence is reflected in the way windows are highlighted: the key window is always marked, but the main window is marked only when it's not the key window. The main window is always in the same application as the key window, the active application.

Changing a Window's Status

Windows that are already onscreen automatically change their status as the key or main window based on the user's actions with the mouse and on how clicked views handle those mouse events.
You can also set the key and main windows programmatically by sending the relevant windows a makeKeyWindow or makeMainWindow message. Setting the key and main windows programmatically is particularly useful when creating a new window. Because making a window key is often combined with ordering the window to the front of the screen, the NSWindow class defines a convenience method, makeKeyAndOrderFront:, that performs both operations.

Not all windows are suitable as key or main windows. For example, a window that merely displays information and contains no objects that need to respond to events or action messages can completely forgo ever becoming the key window. Similarly, a window that acts as a floating palette of items that are only dragged out by mouse actions never needs to be the key window. Such a window can be defined as a subclass of NSWindow that overrides the methods canBecomeKeyWindow and canBecomeMainWindow to return NO instead of the default of YES. Defining a window this way prevents it from ever becoming the key or main window. Although the NSWindow class defines these methods, only subclasses of NSPanel typically refuse to accept key or main window status.

Window Layers and Levels

Windows can be placed on the screen in three dimensions. Besides horizontal and vertical placement, windows are layered back to front within distinct levels. Each application and document window exists in its own layer, so documents from different applications can be interleaved. Clicking a window to bring it to the front doesn't disturb the layering order of any other window. A window's depth in the layers is determined by when the window was last accessed. When a user clicks an inactive document or chooses it from the Window menu, only that document and any open utility windows should be brought to the front.

Window Levels

Windows are ordered within several distinct levels. Window levels group windows of similar type and purpose so that the more "important" ones (such as alert panels) appear in front of those of lesser importance. A window's level serves as a high-order bit to determine its position with regard to other windows. Windows can be reordered with respect to each other within a given level; a given window, however, cannot be layered above other windows in a higher level.

There are a number of predefined window levels, specified by constants defined by the NSWindow class. The levels you typically use are NSNormalWindowLevel, which specifies the default level; NSFloatingWindowLevel, which specifies the level for floating palettes; and NSScreenSaverWindowLevel, which specifies the level for a screen saver window. You might also use NSStatusWindowLevel for a status window, or NSModalPanelWindowLevel for a modal panel. If you need to implement your own popup menus, you use NSPopUpMenuWindowLevel. The remaining two levels, NSTornOffMenuWindowLevel and NSMainMenuWindowLevel, are reserved for system use.

Setting Ordering and Level Programmatically

You can use the orderWindow:relativeTo: method to order a window within its level in front of or in back of another window. You more typically use convenience methods to specify ordering, such as makeKeyAndOrderFront: (which also affects status), orderFront:, and orderBack:, as well as orderOut:, which removes a window from the screen. You use the isVisible method to determine whether a window is on or off the screen.
You can also set a window to be removed from the screen automatically when its application isn't active, using setHidesOnDeactivate:.

Typically you should have no need to programmatically set the level of a window, since Cocoa automatically determines the appropriate level for a window based on its characteristics. A utility panel, for example, is automatically assigned to NSFloatingWindowLevel. You can nevertheless set a window's level using the setLevel: method; for example, you can set the level of a standard window to NSFloatingWindowLevel if you want a utility window that looks like a standard window (for example, to act as an inspector). This has two disadvantages, however: first, it may violate the human interface guidelines; second, if you assign a window to a floating level, you must ensure that you also set it to hide on deactivation of your application, or reset its level when your application is hidden. Cocoa automatically takes care of the latter aspect for you if you use default window configurations.

There is currently no level specified to allow you to place a window above a screen saver window. If you need to do this (for example, to show an alert while a screen saver is running), you can set the window's level to be greater than that of the screen saver, as shown in the following example.

[aWindow setLevel:NSScreenSaverWindowLevel + 1];

Other than this specific case, you are discouraged from setting windows in custom levels, since this may lead to unexpected behavior.

Setting Window Collection Behavior

There are a number of different options that can be set regarding the window collection behavior of a window. They include a window's behavior when using Spaces, Exposé, and the "Cycle Through Windows" command. These options can be set using the setCollectionBehavior: method of NSWindow, by passing in at most one constant from each group, combined using the bitwise OR operator. The current options may be accessed via the collectionBehavior method.

Spaces Collection Behavior

There are three options that can be set for a window's Spaces collection behavior. The default is NSWindowCollectionBehaviorDefault, which allows the window to be associated with one space at a time. The second option is NSWindowCollectionBehaviorCanJoinAllSpaces. This option causes the window to appear on all spaces, like the menu bar. The third option is NSWindowCollectionBehaviorMoveToActiveSpace. This causes the window to switch to the active space when it is made active. Only one of these options may be used at a time.

If a window is currently associated with the active space, isOnActiveSpace returns YES; otherwise, it returns NO. Additionally, you can get an array of the window numbers of windows on one or all spaces using the windowNumbersWithOptions: method, specifying your desired options. The possible options are specified by NSWindowNumberListOptions.

Exposé Collection Behavior

There are also three options that can be set for a window's Exposé collection behavior. If a window has a window level of NSNormalWindowLevel, the default behavior is NSWindowCollectionBehaviorManaged, which causes the window to participate in both Spaces and Exposé. NSWindowCollectionBehaviorTransient causes the window to float in Spaces and be hidden in Exposé. This is the default behavior if the window level is not NSNormalWindowLevel.
The final option is NSWindowCollectionBehaviorStationary, which causes the window to be unaffected by Exposé; that is, it stays visible and does not move, like the desktop window. Only one of these options may be used at a time.

Window Cycling Behavior

There are two options: NSWindowCollectionBehaviorParticipatesInCycle and NSWindowCollectionBehaviorIgnoresCycle. These options cause the window to participate in the window cycle for the "Cycle Through Windows" menu option or not to participate in it, respectively.

Sizing and Placing Windows

This article describes how to control a window's size and position, including how to set a window's minimum and maximum size, how to constrain a window to the screen, how to cascade windows so their title bars remain visible, how to zoom a window as though the user pressed the zoom button, and how to center a window on the screen.

Setting a Window's Size and Location

The center method places a window in the most prominent location on the screen, one suitable for important messages and alert dialogs.

You can resize or reposition a window using setFrame:display: or setFrame:display:animate:—the former is equivalent to the latter with the animate flag NO. You might use these methods in particular to expand or contract a window to show or hide a subview (such as a control that may be exposed by clicking a disclosure triangle). If the animate argument in setFrame:display:animate: is YES, the method performs a smooth resize of the window, where the total time for the resize can be obtained by calling animationResizeTime:.

The user can resize windows by clicking and dragging the bottom-right corner of the window. While the user is resizing the window, inLiveResize returns YES; otherwise, it returns NO. The user can generally reposition windows by dragging only the title bar. If you want users to be able to drag your window by clicking elsewhere, you should override mouseDownCanMoveWindow so that it returns YES in any views that you want to be draggable window regions. The isMovable and setMovable: methods determine whether the user can move the window by clicking in its title bar or background.

To keep the window's top-left corner fixed when resizing, you must typically also reposition the origin, as illustrated in the following example.

- (IBAction)showAdditionalControls:sender {
    NSRect frame = [myWindow frame];
    if (frame.size.width <= MIN_WIDTH_WITH_ADDITIONS)
        frame.size.width = MIN_WIDTH_WITH_ADDITIONS;
    frame.size.height += ADDITIONS_HEIGHT;
    frame.origin.y -= ADDITIONS_HEIGHT;
Window Cascading If you use the Cocoa document architecture, you can use the setShouldCascadeWindows: method of NSWindowController to set whether the window, when it is displayed, should cascade in relation to other document windows(that is, have a slightly offset location so that the title bars of previously displayed windows are still visible). The default is true, so typically you have no additional work to perform. If you are not using the document architecture, you can use the cascadeTopLeftFromPoint: method of NSWindow to cascade windows yourself. The method returns a point shifted from the top-left corner of the window that can be passed to a subsequent invocation of cascadeTopLeftFromPoint: to position the next window so the title bars of both windows are fully visible. Window Zooming You use the zoom: method to toggle the size and location of a window between its standard state, as determined by the application, and its user state: a new size and location the user may have set by moving or resizing the window. Constraining a Window’s Size and Location You can use setContentMinSize: and setContentMaxSize: to limit the user’s ability to resize the window—note that you can still set it to any size programmatically. Similarly, you can use setContentAspectRatio: to keep a window’s width and height at the same proportions as the user resizes it, and setContentResizeIncrements: to make the window resize in discrete amountslarger than a single pixel. (Aspect ratio and resize increments are mutually exclusive attributes.) In general, you should use the Sizing and Placing Windows Constraining a Window’s Size and Location 2009-11-27 | © 2002, 2009 Apple Inc. All Rights Reserved. 27setContent... methodsinstead of those that affect the window’sframe (setAspectRatio:, setMaxSize:, and so on). These are preferred because they avoid confusion for windows with toolbars, and also are typically a better model since you control the content of the window but not the frame. You can use the constrainFrameRect:toScreen: method to adjust a proposed frame rectangle so that it lies on the screen in such a way that the user can move and resize a window. However, you should make sure your window fits onscreen before display. Note that any NSWindow with a title bar automatically constrains itself to the screen. The cascadeTopLeftFromPoint: method shifts the top left point by an amount that allows one window to be placed relative to another so that both their title bars are visible. Additionally, when a window is about to be resized, the window’s delegate will be sent a windowWillResize:toSize: message. You can implement that method in your delegate to easily control your window’s size. Sizing and Placing Windows Constraining a Window’s Size and Location 2009-11-27 | © 2002, 2009 Apple Inc. All Rights Reserved. 28A window can store its placement in the user defaults system, so that it appears in the same location the next time the user starts the application. The saveFrameUsingName: method stores the frame rectangle, and setFrameUsingName: setsit from the value in user defaults. You can also use the setFrameAutosaveName: method to have a window save the frame rectangle any time it changes. However, for the correct frame to be saved, you must ensure that the window controller for the window in question doesn’t cascade the windows under its charge. You accomplish this task by sending setShouldCascadeWindows:NO to the controller, as shown in Listing 1. 
Saving a Window's Position into the User's Defaults

A window can store its placement in the user defaults system, so that it appears in the same location the next time the user starts the application. The saveFrameUsingName: method stores the frame rectangle, and setFrameUsingName: sets it from the value in user defaults. You can also use the setFrameAutosaveName: method to have a window save the frame rectangle any time it changes. However, for the correct frame to be saved, you must ensure that the window controller for the window in question doesn't cascade the windows under its charge. You accomplish this task by sending setShouldCascadeWindows:NO to the controller, as shown in Listing 1.

Listing 1  Saving a window's frame automatically

NSWindow *window = <#the window in question#>;
// Tell the controller not to cascade its windows.
[[window windowController] setShouldCascadeWindows:NO];
// Specify the autosave name for the window.
[window setFrameAutosaveName:[window representedFilename]];

To expunge a frame rectangle from the defaults system, use the class method removeFrameUsingName:.

Minimizing Windows

When a user minimizes a window, it's removed from the screen and replaced with a smaller counterpart in the Dock. The miniaturize: and deminiaturize: methods reduce and reconstitute a window, and performMiniaturize: simulates the user clicking the window's minimize button. You can also set the image and title displayed in a freestanding mini-window by sending setMiniwindowImage: and setMiniwindowTitle: messages to the NSWindow object.

Using the Window Menu

Most Cocoa applications include the Window menu, which displays the titles of various of the application's windows. When you change a window's title, this change is automatically reflected in the Window menu. This menu automatically lists windows that have a title bar, are resizable, and can become the main window (as described in "Window Layering and Types of Windows" (page 18)).

Typically you can rely on the automatic updating provided by Cocoa. In rare circumstances, however, you might want to modify the default behavior. You can exclude a window that would otherwise be listed in the Window menu by sending it a setExcludedFromWindowsMenu:YES message. Because they cannot become main, NSPanel objects are excluded from the Window menu. An instance of an NSPanel subclass can be included in the menu by returning NO from its isExcludedFromWindowsMenu method and YES from its canBecomeMainWindow method. If you change a window's configuration such that it should be added to or removed from the Window menu, you can update the menu by sending the shared application instance addWindowsItem:title:filename: or removeWindowsItem:.

Setting a Window's Appearance

You usually configure most aspects of a window's appearance in Interface Builder. Sometimes, however, you may need to create a window programmatically, or alter its appearance after it has been created.

Setting a Window's Style

The peripheral elements that a window displays define its style. Though you can't access and manipulate them directly, you can determine at initialization whether a window has them by providing a style mask to the initializer. There are four possible style elements, specifiable by combining their mask values using the C bitwise OR operator:

Element                         Mask Value
A title bar                     NSTitledWindowMask
A close button                  NSClosableWindowMask
A minimize button               NSMiniaturizableWindowMask
A resize bar, border, or box    NSResizableWindowMask

You can also specify NSBorderlessWindowMask, in which case none of these style elements is used.

Typically, you set a window's appearance once, when it is first created. Sometimes, however, you want to enable or disable a button in the title bar to reflect changed context. To do this, you first retrieve the button from the window using the standardWindowButton: method of NSWindow and then set its enabled state, as in the following example.
NSButton *closeButton = [window standardWindowButton:NSWindowCloseButton];
[closeButton setEnabled:NO];

The constants required to access the standard title bar widgets are defined in the API reference for NSWindow.

Setting a Window's Color and Transparency

You can set a window's background color and transparency using the methods setBackgroundColor: and setAlphaValue:, respectively. You can set a window's background color to a non-opaque color. This does not affect the window's title bar; it only makes the background itself transparent if the window is not opaque, as illustrated in the following example.

[myWindow setOpaque:NO]; // YES by default
NSColor *semiTransparentBlue =
    [NSColor colorWithDeviceRed:0.0 green:0.0 blue:1.0 alpha:0.5];
[myWindow setBackgroundColor:semiTransparentBlue];

Views placed on a non-opaque window with a transparent background color retain their own opacity. If you want to make the entire window (including the title bar and views placed on the window) transparent, you should use setAlphaValue:.

Setting a Window's Color Space

You can set a window's color space using setColorSpace: and retrieve the window's current color space using colorSpace. NSColorSpace objects for use with setColorSpace: may be obtained using the class methods documented in NSColorSpace Class Reference.

Setting a Window's Content Border Thickness

Beginning in OS X version 10.5, windows automatically have a textured gradient applied to their backgrounds. The area on which the gradient is drawn is determined automatically. At times, however, this may not work correctly. If your window does not look correct with automatic gradient calculation, disable it by calling setAutorecalculatesContentBorderThickness:forEdge: with a value of NO and the edge for which to disable automatic calculation. The value of this property may be accessed using the method autorecalculatesContentBorderThicknessForEdge:. You can also set and access the content border thickness manually using setContentBorderThickness:forEdge: and contentBorderThicknessForEdge:, respectively.

Setting a Window's Title and Represented File

A titled window can display an arbitrary title or one derived from a filename. The setTitle: method puts an arbitrary string in the title bar. The setTitleWithRepresentedFilename: method formats a filename in the title bar in a readable format and associates the window with that file. You can set the associated file without changing the title using setRepresentedFilename:. You can use the association between the window and the file in any way you see fit. One convenience offered by the NSWindow class is marking the file as having been changed, so that the user can be prompted to save it when closing the window. The method for marking the document as having been changed is setDocumentEdited:. When the window closes, its delegate can call isDocumentEdited to check whether the file has been changed and therefore whether the document needs to be saved.

Additionally, starting in OS X version 10.5, you can set a window's represented document by URL using the setRepresentedURL: method. You can get the URL of the document currently represented by a window using the representedURL method. The window automatically uses the known icon for the file type of the specified file, if one exists.
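As a minimal sketch (not from the original document) of how these calls fit together, assuming self.fileName holds the document's path and that this object is the window's delegate:

[myWindow setTitleWithRepresentedFilename:self.fileName];
[myWindow setDocumentEdited:YES];   // call whenever the user changes the document

- (BOOL)windowShouldClose:(id)sender {
    // Sketch only: if the document has unsaved changes, ask the user what to do.
    if ([sender isDocumentEdited]) {
        return [self confirmCloseForWindow:sender];   // hypothetical helper
    }
    return YES;
}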
To customize the document icon, you can use the following code segment (where window is the window in question):

[[window standardWindowButton:NSWindowDocumentIconButton] setImage:customImage];

By default, a Command-click or Control-click on the rectangle containing a window's document icon button and title shows a path popup. To customize this behavior, you can implement window:shouldPopUpDocumentPathMenu: in your window's delegate. You can return NO from this method to stop the window from showing the path popup. You can also customize the document icon's default drag behavior by implementing window:shouldDragDocumentWithEvent:from:withPasteboard: in the window's delegate. You can return NO to prohibit dragging the document icon.

Setting Attributes for the Window's Image

Nearly every window has a corresponding window device in the window server. The window device holds the window's drawn image, and has two attributes determined by the window server and many attributes that the window controls. The window server assigns the window device a unique identifier (within an application). This is the window number, and it can be accessed using the windowNumber method. Each window also has a graphics state that most of its views share for drawing (views can create their own as well). The gState method returns its identifier.

The attributes under direct window control are the following:

● Backing store type, described in "Specifying How To Store the Window's Image" (page 35)
● Backing location, described in "Specifying Where To Store the Window's Image" (page 36)
● Window device creation, described in "Specifying When the Window's Image Is Created" (page 36)
● One shot, described in "Specifying Whether the Window's Image Persists When Offscreen" (page 37)
● Depth limit, described in "Specifying the Depth Limit for the Window's Image" (page 37)
● Dynamic depth limit, described in "Specifying Whether the Depth Limit Changes to the Screen's Capacity" (page 37)
● Content sharing, described in "Specifying Whether Window Content Can Be Read or Written by Another Process" (page 37)

Specifying How To Store the Window's Image

A window device's backing store type determines how the window's image is stored. It's set when the window is initialized and can be one of three types. A buffered window device renders all drawing into a display buffer and then flushes it to the screen. Always drawing to the buffer produces very smooth display, but can require significant amounts of memory. Buffered windows are best for displaying material that must be redrawn often, such as text. You must also use buffered windows if you want your windows to support transparency. A retained window device also uses a buffer, but draws directly to the screen where possible and to the buffer for any portions that are obscured. A nonretained window device has no buffer at all, and must redraw portions as they're exposed. Further, this redrawing is suspended when the window's display mechanism is preempted. For example, if the user drags a window across a nonretained window, the nonretained window is "erased" and isn't redrawn until the user releases the mouse. Both retained and nonretained windows are also subject to a flashing effect as individual drawing operations are performed, but their results get to the screen more quickly than those of buffered windows.
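Tying the style mask and backing store discussions together, a minimal sketch (not from the original document) of creating a titled, resizable, buffered window in code might look like this:

NSUInteger styleMask = NSTitledWindowMask | NSClosableWindowMask |
                       NSMiniaturizableWindowMask | NSResizableWindowMask;
NSWindow *window = [[NSWindow alloc] initWithContentRect:NSMakeRect(200.0, 200.0, 480.0, 320.0)
                                                styleMask:styleMask
                                                  backing:NSBackingStoreBuffered  // buffered window device
                                                    defer:YES];                   // create the window device lazily
[window setTitle:@"Untitled"];
[window makeKeyAndOrderFront:nil];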
You can change the backing store type between buffered and retained after initialization using the setBackingType: method.

Specifying Where To Store the Window's Image

The window server chooses whether to place the backing store for a buffered window in main memory or in video memory, choosing the location that provides the best overall performance. You can query the window server to determine where your window's backing store is located using the preferredBackingLocation method. You may choose to set a preferred location for a window's backing store using the setPreferredBackingLocation: method. While the window server is not required to respect this preferred backing location, it attempts to do so. You should not change the preferred backing location without testing how it affects the performance of your application.

Specifying When the Window's Image Is Created

The defer argument to the initializer specifies whether the window creates its window device immediately or only when it's moved on screen. Deferring creation of the window device can offer some performance gain for windows that aren't displayed immediately, because it reduces the amount of work that needs to be performed up front. Deferring creation of the window device is particularly useful when creation of the window itself can't be deferred or when a window is needed for purposes other than displaying content. Submenus with key equivalents, for example, must exist for the key equivalents to work, but may never actually be displayed.

Specifying Whether the Window's Image Persists When Offscreen

Memory can also be saved by destroying the window device when the window is removed from the screen. The setOneShot: method controls this behavior. One-shot window devices exist only when their windows are onscreen.

Specifying the Depth Limit for the Window's Image

Like the display hardware, a window device's buffer has a depth, or a limit to the memory allotted each pixel. Buffered and retained windows start out with the same depth as the main display or 16 bits, whichever is deeper. These settings stay in effect unless changed using the setDepthLimit: method, which takes as an argument a window depth limit created using the NSBestDepth function.

Specifying Whether the Depth Limit Changes to the Screen's Capacity

Keeping a window's depth at its richest preserves the displayed image, but may incur unnecessary memory overhead when the window buffer depth is deeper than the screen depth. You can use the setDynamicDepthLimit: method to tell a window to match the depth of the screen it's on. When it's moved to a new screen, a window with a dynamic depth limit adjusts its buffer to the new depth before redrawing. Making a window's depth limit dynamic overrides the limit set using setDepthLimit:, and removing the dynamic limit reverts the window to the default limit.
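As a minimal sketch (not from the original document), a transient utility panel (an assumed inspectorPanel) might combine several of these attributes:

[inspectorPanel setOneShot:YES];            // destroy the window device whenever the panel goes offscreen
[inspectorPanel setDynamicDepthLimit:YES];  // match the depth of whatever screen the panel is on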
Specifying Whether Window Content Can Be Read or Written by Another Process

The contents of your window can be made available to other processes. By default, the contents of your window can be read, but not written, by other processes. This allows system services to work with your window's contents and also allows other applications to capture a snapshot of your window's contents. You can override the default behavior using the setSharingType: method. Changing the sharing type to NSWindowSharingNone prevents other processes from capturing your window's image data. If you do this, however, your window will not be able to participate in a number of system services; therefore, this setting should be used with caution. If you set your window's sharing type to NSWindowSharingReadWrite, other processes can both read and modify the window's content.

Handling Events in Windows

As described in NSResponder Class Reference, most events coming into an application make their way to a window in a sendEvent: message. A key event is directed at the key window, while a mouse event is directed at whatever window lies under the pointer. If an event affects the window directly—resizing or moving it, for example—the window performs the appropriate operation itself and sends messages to its delegate informing it of its intentions, thus allowing your application to intercede. The window sends other events up its responder chain from the appropriate starting point: the first responder for a key event, the view under the pointer for a mouse event. These events are then typically handled by some view object in the window. See Cocoa Event Handling Guide for more information on how to intercept and handle events.

Using Keyboard Interface Control in Windows

A window's first responder is often a view object selected by the user clicking it. For text fields and other view objects (mainly subclasses of NSControl), the user can select the first responder with the keyboard using the Tab and Shift keys. The NSView class defines the methods for setting up and examining the loop of objects that the user can select in this manner. A view that's the first responder is called the key view, and the views that can become the key view in a window are linked together in the window's key view loop. You normally set up the key view loop using Interface Builder, establishing connections between the nextKeyView outlets of views in the window and setting the window's initialFirstResponder outlet to the view that you want selected when the window is first placed onscreen. If you do not set this outlet, the window sets up a key view loop (not necessarily the same as the one you would have specified) and picks a default initial first responder for you.

In addition to the key view loop, a window can have a default button cell, which uses the Return (or Enter) key as its key equivalent. The setDefaultButtonCell: method establishes this button cell; you can also set it in Interface Builder by setting a button cell's key equivalent to '\r'. The default button cell draws itself as a focal element for keyboard interface control unless another button cell is focused on; in that case, it temporarily draws itself as normal and disables its key equivalent. Another default key established by the NSWindow class is the Escape key, which immediately aborts a modal loop (described in "How Modal Windows Work" (page 12)). See NSResponder Class Reference for more information on keyboard interface control.
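A minimal sketch (not from the original document) of wiring the key view loop and default button in code rather than in Interface Builder, assuming nameField, addressField, and okButton outlets:

[nameField setNextKeyView:addressField];
[addressField setNextKeyView:okButton];
[okButton setNextKeyView:nameField];                              // close the loop
[myWindow setInitialFirstResponder:nameField];                    // selected when the window first appears
[myWindow setDefaultButtonCell:(NSButtonCell *)[okButton cell]];  // responds to Return/Enter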
Using the Window's Field Editor

Each window has a text object that is shared for light editing tasks. This object, the window's field editor, is inserted in the view hierarchy when an object needs to edit some text and removed when the object is finished. The field editor is used by NSTextField objects and other controls, for example, to edit the text that they display. The fieldEditor:forObject: method returns a window's field editor, after asking the delegate for a substitute using windowWillReturnFieldEditor:toObject:. You can override the fieldEditor:forObject: method of NSWindow in subclasses, or provide a delegate, to substitute a class of text object different from the NSTextView default, thereby customizing text editing in your application.

Using Window Notifications and Delegate Methods

The NSWindow class offers observers a rich set of notifications, which it broadcasts on such occurrences as gaining or losing key or main window status, minimizing, moving or resizing, becoming exposed, and closing. Each notification is matched to a delegate method, so a window's delegate is automatically registered for all notifications for which it has methods. The NSWindow class also offers its delegate a few other methods, such as windowShouldClose:, which requests approval to close; windowWillResize:toSize:, which allows the delegate to constrain the window's size; windowWillUseStandardFrame:defaultFrame:, which allows the delegate to set the window frame for zooming; and windowWillReturnFieldEditor:toObject:, which gives the delegate a chance to modify the field editor or substitute a different editor. See the individual notification and delegate method descriptions for more information.

Dragging Images to and from Windows

Although most dragging operations are initiated by and occur between view objects, the NSWindow class also defines an image-dragging method, dragImage:at:offset:event:pasteboard:source:slideBack:, in case the user wants to drag an object into or out of a window. A window can also serve as the destination for dragging operations, registering the types it accepts with registerForDraggedTypes: and unregisterDraggedTypes.

Updating the Cursor Image in a Window

You can change the cursor image when the cursor is within a specified area of a view in a window. To do this, use the NSTrackingArea class along with the cursorUpdate: method of the NSResponder class. For specifics, read "Using Tracking-Area Objects" in Cocoa Event Handling Guide. For details on the NSTrackingArea class itself, refer to NSTrackingArea Class Reference.
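As a minimal sketch (not from the original document), a custom view can adopt the tracking-area approach described above to show the I-beam cursor over its bounds:

- (void)awakeFromNib {
    // Ask for cursorUpdate: events whenever the pointer is over this view.
    NSTrackingArea *area = [[NSTrackingArea alloc] initWithRect:[self bounds]
        options:(NSTrackingCursorUpdate | NSTrackingActiveInKeyWindow | NSTrackingInVisibleRect)
        owner:self
        userInfo:nil];
    [self addTrackingArea:area];
}

- (void)cursorUpdate:(NSEvent *)event {
    [[NSCursor IBeamCursor] set];
}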
Caching Window Images

To support transitory drawing by views, the NSWindow class defines methods that temporarily cache a portion of its raster image so that it can be restored later. This feature is useful for situations where highly dynamic drawing must be done over the otherwise static image of the window. For example, in a drawing program where the user drags lines and other shapes directly onto a canvas, it's more efficient to restore the window's cached image and draw anew over that than to have all the views send display instructions to the window server. For more information, see the method descriptions for cacheImageInRect:, restoreCachedImage, and discardCachedImage.

Document Revision History

This table describes the changes to Window Programming Guide.

2009-11-27  Revised the article "Updating the Cursor Image in a Window" (page 43), previously titled "Setting Pointer Rectangles for Windows."
2009-05-15  Updated for OS X v10.6.
2009-02-04  Added information on the use of backing locations to improve performance.
2008-10-15  Provided links to delegate methods.
2006-10-03  Clarified the behavior of the setFrameAutosaveName: method in conjunction with a window's window controller.
2005-09-08  Added window-controller requirement for the NSWindow setFrameAutosaveName: method to "Saving a Window's Position into the User's Defaults" (page 29). Made correction to "Using the Windows Menu" article. Changed title from "Windows and Panels."
2004-08-31  Updated "Setting a Window's Appearance" (page 32) to cover enabling and disabling buttons in the title bar, and to discuss setting a window's background color and transparency.
2003-06-05  "Setting a Window's Level" renamed "Window Layers and Levels" (page 22) and augmented. "Changing the Key and Main Windows" renamed to "Window Layering and Types of Windows" (page 18) and augmented. Augmented "Sizing and Placing Windows" (page 26) to discuss animated resizing, window cascading, and constraining window size and position. Minor changes to "Using the Window Menu" (page 31). Clarified the concepts of key and main windows in "Window Layering and Types of Windows" (page 18).
2002-11-12  Revision history was added to existing topic.

© 2002, 2009 Apple Inc. All rights reserved.
AV Foundation Programming Guide

Contents

About the AV Foundation Framework 4
At a Glance 5
Representing and Using Media with AV Foundation 5
Concurrent Programming with AV Foundation 7
Prerequisites 8
Using Assets 9
Creating an Asset Object 9
Options for Initializing an Asset 9
Accessing the User's Assets 10
Preparing an Asset for Use 11
Getting Still Images From a Video 12
Generating a Single Image 13
Generating a Sequence of Images 14
Trimming and Transcoding a Movie 15
Reading and Writing Assets 17
Playback 18
Playing Assets 18
Handling Different Types of Asset 20
Playing an Item 21
Changing the Playback Rate 21
Seeking—Repositioning the Playhead 22
Playing Multiple Items 23
Monitoring Playback 23
Responding to a Change in Status 24
Tracking Readiness for Visual Display 25
Tracking Time 25
Reaching the End of an Item 26
Putting it all Together: Playing a Video File Using AVPlayerLayer 26
The Player View 27
A Simple View Controller 27
Creating the Asset 28
Responding to the Player Item's Status Change 30
Playing the Item 31
Media Capture 32
Use a Capture Session to Coordinate Data Flow 33
Configuring a Session 34
Monitoring Capture Session State 35
An AVCaptureDevice Object Represents an Input Device 35
Device Characteristics 36
Device Capture Settings 36
Configuring a Device 40
Switching Between Devices 41
Use Capture Inputs to Add a Capture Device to a Session 41
Use Capture Outputs to Get Output from a Session 42
Saving to a Movie File 43
Processing Frames of Video 46
Capturing Still Images 47
Showing the User What's Being Recorded 49
Video Preview 49
Showing Audio Levels 50
Putting it all Together: Capturing Video Frames as UIImage Objects 50
Create and Configure a Capture Session 51
Create and Configure the Device and Device Input 51
Create and Configure the Data Output 52
Implement the Sample Buffer Delegate Method 52
Starting and Stopping Recording 53
Time and Media Representations 54
Representation of Assets 54
Representations of Time 55
CMTime Represents a Length of Time 55
CMTimeRange Represents a Time Range 57
Representations of Media 58
Converting a CMSampleBuffer to a UIImage 59
Document Revision History 62

About the AV Foundation Framework

AV Foundation is one of several frameworks that you can use to play and create time-based audiovisual media. It provides an Objective-C interface you use to work on a detailed level with time-based audiovisual data. For example, you can use it to examine, create, edit, or reencode media files. You can also get input streams from devices and manipulate video during realtime capture and playback.

[Figure: AV Foundation in the iOS media architecture, below the higher-level UIKit and Media Player frameworks and above Core Audio, Core Media, and Core Animation; the iOS 3 audio classes are part of AV Foundation.]

You should typically use the highest-level abstraction available that allows you to perform the tasks you want. For example, in iOS:

● If you simply want to play movies, you can use the Media Player framework (MPMoviePlayerController or MPMoviePlayerViewController), or for web-based media you could use a UIWebView object.
● To record video when you need only minimal control over format, use the UIKit framework (UIImagePickerController).
Note, however, that some of the primitive data structures that you use in AV Foundation—including time-related data structures and opaque objects to carry and describe media data—are declared in the Core Media framework.

AV Foundation is available in iOS 4 and later, and in OS X 10.7 and later. This document describes AV Foundation as introduced in iOS 4.0. To learn about changes and additions to the framework in subsequent versions, you should also read the appropriate release notes:

● AV Foundation Release Notes describe changes made for iOS 5.
● AV Foundation Release Notes (iOS 4.3) describe changes made for iOS 4.3 and included in OS X 10.7.

Relevant Chapters: "Time and Media Representations" (page 54)

At a Glance

There are two facets to the AV Foundation framework: API related just to audio, which was available prior to iOS 4, and API introduced in iOS 4 and later. The older audio-related classes provide easy ways to deal with audio. They are described in Multimedia Programming Guide, not in this document.

● To play sound files, you can use AVAudioPlayer.
● To record audio, you can use AVAudioRecorder.

You can also configure the audio behavior of your application using AVAudioSession; this is described in Audio Session Programming Guide.

Representing and Using Media with AV Foundation

The primary class that the AV Foundation framework uses to represent media is AVAsset. The design of the framework is largely guided by this representation, so understanding its structure will help you to understand how the framework works. An AVAsset instance is an aggregated representation of a collection of one or more pieces of media data (audio and video tracks). It provides information about the collection as a whole, such as its title, duration, natural presentation size, and so on. AVAsset is not tied to a particular data format. AVAsset is the superclass of other classes used to create asset instances from media at a URL (see "Using Assets" (page 9)) and to create new compositions (see "Editing" (page 7)).

Each of the individual pieces of media data in the asset is of a uniform type and is called a track. In a typical simple case, one track represents the audio component and another represents the video component; in a complex composition, however, there may be multiple overlapping tracks of audio and video. Assets may also have metadata.

A vital concept in AV Foundation is that initializing an asset or a track does not necessarily mean that it is ready for use. It may require some time to calculate even the duration of an item (an MP3 file, for example, may not contain summary information). Rather than blocking the current thread while a value is being calculated, you ask for values and get an answer back asynchronously through a callback that you define using a block.

Relevant Chapters: "Using Assets" (page 9), "Time and Media Representations" (page 54)

Playback

AV Foundation allows you to manage the playback of assets in sophisticated ways. To support this, it separates the presentation state of an asset from the asset itself. This allows you to, for example, play two different segments of the same asset at the same time rendered at different resolutions. The presentation state for an asset is managed by a player item object; the presentation state of each track within an asset is managed by a player item track object.
Using the player item and player item tracks you can, for example, set the size at which the visual portion of the item is presented by the player, set the audio mix parameters and video composition settings to be applied during playback, or disable components of the asset during playback. You play player items using a player object, and direct the output of a player to a Core Animation layer. On iOS 4.1 and later, you can use a player queue to schedule playback of a collection of player items in sequence.

Relevant Chapters: "Playback" (page 18)

Reading, Writing, and Reencoding Assets

AV Foundation allows you to create new representations of an asset in several ways. You can simply reencode an existing asset, or—on iOS 4.1 and later—you can perform operations on the contents of an asset and save the result as a new asset. You use an export session to reencode an existing asset into a format defined by one of a small number of commonly used presets. If you need more control over the transformation, on iOS 4.1 and later you can use an asset reader and an asset writer object in tandem to convert an asset from one representation to another. Using these objects you can, for example, choose which of the tracks you want to be represented in the output file, specify your own output format, or modify the asset during the conversion process. To produce a visual representation of an audio waveform, for example, you use an asset reader to read the audio track of an asset.

Relevant Chapters: "Using Assets" (page 9)

Thumbnails

To create thumbnail images of video presentations, you initialize an instance of AVAssetImageGenerator using the asset from which you want to generate thumbnails. AVAssetImageGenerator uses the default enabled video track(s) to generate images.

Relevant Chapters: "Using Assets" (page 9)

Editing

AV Foundation uses compositions to create new assets from existing pieces of media (typically, one or more video and audio tracks). You use a mutable composition to add and remove tracks and to adjust their temporal orderings. You can also set the relative volumes and ramping of audio tracks, and set the opacity, and opacity ramps, of video tracks. A composition is an assemblage of pieces of media held in memory. When you export a composition using an export session, it's collapsed to a file. On iOS 4.1 and later, you can also create an asset from media such as sample buffers or still images using an asset writer.

Media Capture and Access to Camera

Recording input from cameras and microphones is managed by a capture session. A capture session coordinates the flow of data from input devices to outputs such as a movie file. You can configure multiple inputs and outputs for a single session, even when the session is running. You send messages to the session to start and stop data flow. In addition, you can use an instance of a preview layer to show the user what a camera is recording.

Relevant Chapters: "Media Capture" (page 32)

Concurrent Programming with AV Foundation

Callouts from AV Foundation—invocations of blocks, key-value observers, or notification handlers—are not guaranteed to be made on any particular thread or queue. Instead, AV Foundation invokes these handlers on threads or queues on which it performs its internal tasks. You are responsible for testing whether the thread or queue on which a handler is invoked is appropriate for the tasks you want to perform. If it's not (for example, if you want to update the user interface and the callout is not on the main thread), you must redirect the execution of your tasks to a safe thread or queue that you recognize, or that you create for the purpose.
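For example, a minimal sketch (not from the original document) of bouncing user-interface work from an AV Foundation callout to the main queue, using the asynchronous loading API described in "Using Assets" (page 9) and an assumed updateDurationLabel method:

[asset loadValuesAsynchronouslyForKeys:@[@"duration"] completionHandler:^{
    // This block may be invoked on an arbitrary queue; redirect UI work to the main queue.
    dispatch_async(dispatch_get_main_queue(), ^{
        [self updateDurationLabel];   // hypothetical UI update
    });
}];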
If you're writing a multithreaded application, you can use the NSThread method isMainThread or [[NSThread currentThread] isEqual:<#A stored thread reference#>] to test whether the invocation thread is a thread you expect to perform your work on. You can redirect messages to appropriate threads using methods such as performSelectorOnMainThread:withObject:waitUntilDone: and performSelector:onThread:withObject:waitUntilDone:modes:. You could also use dispatch_async to "bounce" to your blocks on an appropriate queue, either the main queue for UI tasks or a queue you have set up for concurrent operations. For more about concurrent operations, see Concurrency Programming Guide; for more about blocks, see Blocks Programming Topics.

Prerequisites

AV Foundation is an advanced Cocoa framework. To use it effectively, you must have:

● A solid understanding of fundamental Cocoa development tools and techniques
● A basic grasp of blocks
● A basic understanding of key-value coding and key-value observing
● For playback, a basic understanding of Core Animation (see Core Animation Programming Guide)

Using Assets

Assets can come from a file or from media in the user's iPod library or Photos library. Simply creating an asset object, though, does not necessarily mean that all the information that you might want to retrieve for that item is immediately available. Once you have a movie asset, you can extract still images from it, transcode it to another format, or trim the contents.

Creating an Asset Object

To create an asset to represent any resource that you can identify using a URL, you use AVURLAsset. The simplest case is creating an asset from a file:

NSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>;
AVURLAsset *anAsset = [[AVURLAsset alloc] initWithURL:url options:nil];

Options for Initializing an Asset

AVURLAsset's initialization methods take as their second argument an options dictionary. The only key used in the dictionary is AVURLAssetPreferPreciseDurationAndTimingKey. The corresponding value is a Boolean (contained in an NSValue object) that indicates whether the asset should be prepared to indicate a precise duration and provide precise random access by time. Getting the exact duration of an asset may require significant processing overhead. Using an approximate duration is typically a cheaper operation and sufficient for playback. Thus:

● If you only intend to play the asset, either pass nil instead of a dictionary, or pass a dictionary that contains the AVURLAssetPreferPreciseDurationAndTimingKey key and a corresponding value of NO (contained in an NSValue object).
● If you want to add the asset to a composition (AVMutableComposition), you typically need precise random access. Pass a dictionary that contains the AVURLAssetPreferPreciseDurationAndTimingKey key and a corresponding value of YES (contained in an NSValue object—recall that NSNumber inherits from NSValue):
NSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>;
NSDictionary *options = @{ AVURLAssetPreferPreciseDurationAndTimingKey : @YES };
AVURLAsset *anAssetToUseInAComposition = [[AVURLAsset alloc] initWithURL:url options:options];

Accessing the User's Assets

To access the assets managed by the iPod library or by the Photos application, you need to get a URL of the asset you want.

● To access the iPod library, you create an MPMediaQuery instance to find the item you want, then get its URL using MPMediaItemPropertyAssetURL. For more about the media library, see Multimedia Programming Guide.
● To access the assets managed by the Photos application, you use ALAssetsLibrary.

The following example shows how you can get an asset to represent the first video in the Saved Photos album.

ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
// Enumerate just the photos and videos group by using ALAssetsGroupSavedPhotos.
[library enumerateGroupsWithTypes:ALAssetsGroupSavedPhotos usingBlock:^(ALAssetsGroup *group, BOOL *stop) {
    // Within the group enumeration block, filter to enumerate just videos.
    [group setAssetsFilter:[ALAssetsFilter allVideos]];
    // For this example, we're only interested in the first item.
    [group enumerateAssetsAtIndexes:[NSIndexSet indexSetWithIndex:0]
                            options:0
                         usingBlock:^(ALAsset *alAsset, NSUInteger index, BOOL *innerStop) {
        // The end of the enumeration is signaled by asset == nil.
        if (alAsset) {
            ALAssetRepresentation *representation = [alAsset defaultRepresentation];
            NSURL *url = [representation url];
            AVAsset *avAsset = [AVURLAsset URLAssetWithURL:url options:nil];
            // Do something interesting with the AV asset.
        }
    }];
}
failureBlock: ^(NSError *error) {
    // Typically you should handle an error more gracefully than this.
    NSLog(@"No groups");
}];

Preparing an Asset for Use

Initializing an asset (or track) does not necessarily mean that all the information that you might want to retrieve for that item is immediately available. It may require some time to calculate even the duration of an item (an MP3 file, for example, may not contain summary information). Rather than blocking the current thread while a value is being calculated, you should use the AVAsynchronousKeyValueLoading protocol to ask for values and get an answer back later through a completion handler you define using a block. (AVAsset and AVAssetTrack conform to the AVAsynchronousKeyValueLoading protocol.)

You test whether a value is loaded for a property using statusOfValueForKey:error:. When an asset is first loaded, the value of most or all of its properties is AVKeyValueStatusUnknown. To load a value for one or more properties, you invoke loadValuesAsynchronouslyForKeys:completionHandler:. In the completion handler, you take whatever action is appropriate depending on the property's status. You should always be prepared for loading not to complete successfully, either because it failed for some reason (such as a network-based URL being inaccessible) or because the load was canceled.

NSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>;
AVURLAsset *anAsset = [[AVURLAsset alloc] initWithURL:url options:nil];
NSArray *keys = @[@"duration"];

[anAsset loadValuesAsynchronouslyForKeys:keys completionHandler:^() {
    NSError *error = nil;
    AVKeyValueStatus durationStatus = [anAsset statusOfValueForKey:@"duration" error:&error];
    switch (durationStatus) {
        case AVKeyValueStatusLoaded:
            [self updateUserInterfaceForDuration];
            break;
        case AVKeyValueStatusFailed:
            [self reportError:error forAsset:anAsset];
            break;
        case AVKeyValueStatusCancelled:
            // Do whatever is appropriate for cancelation.
            break;
    }
}];

If you want to prepare an asset for playback, you should load its tracks property. For more about playing assets, see "Playback" (page 18).

Getting Still Images From a Video

To get still images such as thumbnails from an asset for playback, you use an AVAssetImageGenerator object. You initialize an image generator with your asset. Initialization may succeed, though, even if the asset possesses no visual tracks at the time of initialization, so if necessary you should test whether the asset has any tracks with the visual characteristic using tracksWithMediaCharacteristic:.

AVAsset *anAsset = <#Get an asset#>;
if ([[anAsset tracksWithMediaCharacteristic:AVMediaCharacteristicVisual] count] > 0) {
    AVAssetImageGenerator *imageGenerator =
        [AVAssetImageGenerator assetImageGeneratorWithAsset:anAsset];
    // Implementation continues...
}

You can configure several aspects of the image generator; for example, you can specify the maximum dimensions for the images it generates and the aperture mode using maximumSize and apertureMode, respectively. You can then generate a single image at a given time, or a series of images. You must ensure that you keep a strong reference to the image generator until it has generated all the images.

Generating a Single Image

You use copyCGImageAtTime:actualTime:error: to generate a single image at a specific time. AV Foundation may not be able to produce an image at exactly the time you request, so you can pass as the second argument a pointer to a CMTime that upon return contains the time at which the image was actually generated.

AVAsset *myAsset = <#An asset#>;
AVAssetImageGenerator *imageGenerator =
    [[AVAssetImageGenerator alloc] initWithAsset:myAsset];

Float64 durationSeconds = CMTimeGetSeconds([myAsset duration]);
CMTime midpoint = CMTimeMakeWithSeconds(durationSeconds/2.0, 600);
NSError *error;
CMTime actualTime;

CGImageRef halfWayImage = [imageGenerator copyCGImageAtTime:midpoint actualTime:&actualTime error:&error];

if (halfWayImage != NULL) {
    NSString *actualTimeString = (NSString *)CFBridgingRelease(CMTimeCopyDescription(NULL, actualTime));
    NSString *requestedTimeString = (NSString *)CFBridgingRelease(CMTimeCopyDescription(NULL, midpoint));
    NSLog(@"Got halfWayImage: Asked for %@, got %@", requestedTimeString, actualTimeString);
    // Do something interesting with the image.
    CGImageRelease(halfWayImage);
}

Generating a Sequence of Images

To generate a series of images, you send the image generator a generateCGImagesAsynchronouslyForTimes:completionHandler: message. The first argument is an array of NSValue objects, each containing a CMTime, specifying the asset times for which you want images to be generated. The second argument is a block that serves as a callback invoked for each image that is generated. The block arguments provide a result constant that tells you whether the image was created successfully or whether the operation was canceled, and, as appropriate:

● The image.
● The time for which you requested the image and the actual time for which the image was generated.
● An error object that describes the reason generation failed.

In your implementation of the block, you should check the result constant to determine whether the image was created. In addition, you must ensure that you keep a strong reference to the image generator until it has finished creating the images.

AVAsset *myAsset = <#An asset#>;
// Assume: @property (strong) AVAssetImageGenerator *imageGenerator;
self.imageGenerator = [AVAssetImageGenerator assetImageGeneratorWithAsset:myAsset];

Float64 durationSeconds = CMTimeGetSeconds([myAsset duration]);
CMTime firstThird = CMTimeMakeWithSeconds(durationSeconds/3.0, 600);
CMTime secondThird = CMTimeMakeWithSeconds(durationSeconds*2.0/3.0, 600);
CMTime end = CMTimeMakeWithSeconds(durationSeconds, 600);
NSArray *times = @[[NSValue valueWithCMTime:kCMTimeZero],
                   [NSValue valueWithCMTime:firstThird],
                   [NSValue valueWithCMTime:secondThird],
                   [NSValue valueWithCMTime:end]];

[self.imageGenerator generateCGImagesAsynchronouslyForTimes:times
    completionHandler:^(CMTime requestedTime, CGImageRef image, CMTime actualTime,
                        AVAssetImageGeneratorResult result, NSError *error) {

    NSString *requestedTimeString = (NSString *)
        CFBridgingRelease(CMTimeCopyDescription(NULL, requestedTime));
    NSString *actualTimeString = (NSString *)
        CFBridgingRelease(CMTimeCopyDescription(NULL, actualTime));
    NSLog(@"Requested: %@; actual %@", requestedTimeString, actualTimeString);

    if (result == AVAssetImageGeneratorSucceeded) {
        // Do something interesting with the image.
    }

    if (result == AVAssetImageGeneratorFailed) {
        NSLog(@"Failed with error: %@", [error localizedDescription]);
    }
    if (result == AVAssetImageGeneratorCancelled) {
        NSLog(@"Canceled");
    }
}];

You can cancel the generation of the image sequence by sending the image generator a cancelAllCGImageGeneration message.

Trimming and Transcoding a Movie

You can transcode a movie from one format to another, and trim a movie, using an AVAssetExportSession object. An export session is a controller object that manages asynchronous export of an asset. You initialize the session using the asset you want to export and the name of an export preset that indicates the export options you want to apply (see allExportPresets). You then configure the export session to specify the output URL and file type, and optionally other settings such as the metadata and whether the output should be optimized for network use.

[Figure: an export session takes an asset and an export preset as input and writes the exported asset to a URL.]

You can check whether you can export a given asset using a given preset using exportPresetsCompatibleWithAsset:, as illustrated in this example:

AVAsset *anAsset = <#Get an asset#>;
NSArray *compatiblePresets = [AVAssetExportSession exportPresetsCompatibleWithAsset:anAsset];
if ([compatiblePresets containsObject:AVAssetExportPresetLowQuality]) {
    AVAssetExportSession *exportSession = [[AVAssetExportSession alloc]
        initWithAsset:anAsset presetName:AVAssetExportPresetLowQuality];
    // Implementation continues.
}

You complete configuration of the session by providing the output URL (the URL must be a file URL). AVAssetExportSession can infer the output file type from the URL's path extension; typically, however, you set it directly using outputFileType.
You can also specify additional properties such as the time range, a limit for the output file length, whether the exported file should be optimized for network use, and a video composition. The following example illustrates how to use the timeRange property to trim the movie:

exportSession.outputURL = <#A file URL#>;
exportSession.outputFileType = AVFileTypeQuickTimeMovie;

CMTime start = CMTimeMakeWithSeconds(1.0, 600);
CMTime duration = CMTimeMakeWithSeconds(3.0, 600);
CMTimeRange range = CMTimeRangeMake(start, duration);
exportSession.timeRange = range;

To create the new file, you invoke exportAsynchronouslyWithCompletionHandler:. The completion handler block is called when the export operation finishes; in your implementation of the handler, you should check the session's status to determine whether the export was successful, failed, or was canceled:

[exportSession exportAsynchronouslyWithCompletionHandler:^{
    switch ([exportSession status]) {
        case AVAssetExportSessionStatusFailed:
            NSLog(@"Export failed: %@", [[exportSession error] localizedDescription]);
            break;
        case AVAssetExportSessionStatusCancelled:
            NSLog(@"Export canceled");
            break;
        default:
            break;
    }
}];

You can cancel the export by sending the session a cancelExport message. The export will fail if you try to overwrite an existing file, or write a file outside of the application's sandbox. It may also fail if:

● There is an incoming phone call
● Your application is in the background and another application starts playback

In these situations, you should typically inform the user that the export failed, then allow the user to restart the export.

Reading and Writing Assets

You use an AVAssetReader when you want to perform an operation on the contents of an asset. For example, you might read the audio track of an asset to produce a visual representation of the waveform. To produce an asset from media such as sample buffers or still images, you use an AVAssetWriter object. You can use an asset reader and asset writer object in tandem to convert an asset from one representation to another. Using these objects you have more control over the conversion than you do with AVAssetExportSession; for example, you can choose which of the tracks you want to be represented in the output file, specify your own output format, or modify the asset during the conversion process.
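As a minimal sketch (not from the original document) of the reading side, the following reads decompressed PCM sample buffers from an asset's first audio track (for example, to build a waveform); the asset variable is assumed to hold a loaded AVAsset:

NSError *error = nil;
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:asset error:&error];
AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
NSDictionary *outputSettings = @{ AVFormatIDKey : @(kAudioFormatLinearPCM) };
AVAssetReaderTrackOutput *trackOutput =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:outputSettings];

if ([assetReader canAddOutput:trackOutput]) {
    [assetReader addOutput:trackOutput];
}
[assetReader startReading];

CMSampleBufferRef sampleBuffer;
while ((sampleBuffer = [trackOutput copyNextSampleBuffer])) {
    // Examine the audio data in the sample buffer (for example, to build a waveform).
    CFRelease(sampleBuffer);
}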
Playback

To control the playback of assets, you use an AVPlayer object. During playback, you can use an AVPlayerItem object to manage the presentation state of an asset as a whole, and an AVPlayerItemTrack object to manage the presentation state of an individual track. To display video, you use an AVPlayerLayer object.

Playing Assets

A player is a controller object that you use to manage playback of an asset, for example starting and stopping playback, and seeking to a particular time. You use an instance of AVPlayer to play a single asset. On iOS 4.1 and later, you can use an AVQueuePlayer object to play a number of items in sequence (AVQueuePlayer is a subclass of AVPlayer). A player provides you with information about the state of the playback so, if you need to, you can synchronize your user interface with the player's state. You typically direct the output of a player to a specialized Core Animation layer (an instance of AVPlayerLayer or AVSynchronizedLayer). To learn more about layers, see Core Animation Programming Guide.

Multiple player layers: You can create arbitrarily many AVPlayerLayer objects from a single AVPlayer instance, but only the most recently created such layer will display any video content onscreen.

Although ultimately you want to play an asset, you don't provide assets directly to an AVPlayer object. Instead, you provide an instance of AVPlayerItem. A player item manages the presentation state of an asset with which it is associated. A player item contains player item tracks—instances of AVPlayerItemTrack—that correspond to the tracks in the asset.

[Figure: an AVPlayerItem contains AVPlayerItemTrack objects corresponding to the AVAssetTrack objects of an AVAsset; an AVPlayer plays the player item, and an AVPlayerLayer displays it.]

This abstraction means that you can play a given asset using different players simultaneously, but rendered in different ways by each player. Using the item tracks, you can, for example, disable a particular track during playback (you might not want to play the sound component).

[Figure: a single AVAsset with video, right-audio, and left-audio tracks played through two separate AVPlayerItem/AVPlayer pairs, each at a different time (4:15 and 2:10) and with different tracks switched off.]

You can initialize a player item with an existing asset, or you can initialize a player item directly from a URL so that you can play a resource at a particular location (AVPlayerItem will then create and configure an asset for the resource). As with AVAsset, though, simply initializing a player item doesn't necessarily mean it's ready for immediate playback. You can observe (using key-value observing) an item's status property to determine if and when it's ready to play.

Handling Different Types of Asset

The way you configure an asset for playback may depend on the sort of asset you want to play. Broadly speaking, there are two main types: file-based assets, to which you have random access (such as from a local file, the camera roll, or the Media Library), and stream-based assets (HTTP Live Streaming format).

To load and play a file-based asset, there are several steps:

● Create an asset using AVURLAsset and load its tracks using loadValuesAsynchronouslyForKeys:completionHandler:.
● When the asset has loaded its tracks, create an instance of AVPlayerItem using the asset.
● Associate the item with an instance of AVPlayer.
● Wait until the item's status indicates that it's ready to play (typically you use key-value observing to receive a notification when the status changes).

This approach is illustrated in "Putting it all Together: Playing a Video File Using AVPlayerLayer" (page 26).

To create and prepare an HTTP live stream for playback, initialize an instance of AVPlayerItem using the URL. (You cannot directly create an AVAsset instance to represent the media in an HTTP Live Stream.)

NSURL *url = [NSURL URLWithString:@"<#Live stream URL#>"]; // You may find a test stream at .
self.playerItem = [AVPlayerItem playerItemWithURL:url];
[playerItem addObserver:self forKeyPath:@"status" options:0 context:&ItemStatusContext];
self.player = [AVPlayer playerWithPlayerItem:playerItem];

When you associate the player item with a player, it starts to become ready to play. When it is ready to play, the player item creates the AVAsset and AVAssetTrack instances, which you can use to inspect the contents of the live stream.
If you simply want to play a live stream, you can take a shortcut and create a player directly using the URL:

self.player = [AVPlayer playerWithURL:<#Live stream URL#>];
[player addObserver:self forKeyPath:@"status" options:0 context:&PlayerStatusContext];

As with assets and items, initializing the player does not mean it's ready for playback. You should observe the player's status property, which changes to AVPlayerStatusReadyToPlay when it is ready to play. You can also observe the currentItem property to access the player item created for the stream.

If you don't know what kind of URL you have, follow these steps:

1. Try to initialize an AVURLAsset using the URL, then load its tracks key. If the tracks load successfully, then you create a player item for the asset.
2. If step 1 fails, create an AVPlayerItem directly from the URL. Observe the player's status property to determine whether it becomes playable.

If either route succeeds, you end up with a player item that you can then associate with a player.

Playing an Item

To start playback, you send a play message to the player.

- (IBAction)play:sender {
    [player play];
}

In addition to simply playing, you can manage various aspects of the playback, such as the rate and the location of the playhead. You can also monitor the play state of the player; this is useful if you want to, for example, synchronize the user interface to the presentation state of the asset—see "Monitoring Playback" (page 23).

Changing the Playback Rate

You change the rate of playback by setting the player's rate property.

aPlayer.rate = 0.5;
aPlayer.rate = 2.0;

A value of 1.0 means "play at the natural rate of the current item." Setting the rate to 0.0 is the same as pausing playback—you can also use pause.

Seeking—Repositioning the Playhead

To move the playhead to a particular time, you generally use seekToTime:.

CMTime fiveSecondsIn = CMTimeMake(5, 1);
[player seekToTime:fiveSecondsIn];

The seekToTime: method, however, is tuned for performance rather than precision. If you need to move the playhead precisely, instead you use seekToTime:toleranceBefore:toleranceAfter:.

CMTime fiveSecondsIn = CMTimeMake(5, 1);
[player seekToTime:fiveSecondsIn toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];

Using a tolerance of zero may require the framework to decode a large amount of data. You should use zero only if you are, for example, writing a sophisticated media editing application that requires precise control.

After playback, the playhead is set to the end of the item, and further invocations of play have no effect. To position the playhead back at the beginning of the item, you can register to receive an AVPlayerItemDidPlayToEndTimeNotification from the item. In the notification's callback method, you invoke seekToTime: with the argument kCMTimeZero.

// Register with the notification center after creating the player item.
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(playerItemDidReachEnd:)
                                             name:AVPlayerItemDidPlayToEndTimeNotification
                                           object:<#The player item#>];

- (void)playerItemDidReachEnd:(NSNotification *)notification {
    [player seekToTime:kCMTimeZero];
}

Playing Multiple Items

On iOS 4.1 and later, you can use an AVQueuePlayer object to play a number of items in sequence. AVQueuePlayer is a subclass of AVPlayer.
You initialize a queue player with an array of player items: NSArray *items = <#An array of player items#>; AVQueuePlayer *queuePlayer = [[AVQueuePlayer alloc] initWithItems:items]; You can then play the queue using play, just as you would an AVPlayer object. The queue player plays each item in turn. If you want to skip to the next item, you send the queue player an advanceToNextItem message. You can modify the queue using insertItem:afterItem:, removeItem:, and removeAllItems. When adding a new item, you should typically check whether it can be inserted into the queue, using canInsertItem:afterItem:. You pass nil as the second argument to test whether the new item can be appended to the queue: AVPlayerItem *anItem = <#Get a player item#>; if ([queuePlayer canInsertItem:anItem afterItem:nil]) { [queuePlayer insertItem:anItem afterItem:nil]; } Monitoring Playback You can monitor a number of aspects of the presentation state of a player and the player item being played. This is particularly useful for state changes that are not under your direct control, for example: ● If the user uses multitasking to switch to a different application, a player’s rate property will drop to 0.0. ● If you are playing remotemedia, a playeritem’s loadedTimeRanges and seekableTimeRanges properties will change as more data becomes available. These properties tell you what portions of the player item’s timeline are available. ● A player’s currentItem property changes as a player item is created for an HTTP live stream. ● A player item’s tracks property may change while playing an HTTP live stream. This may happen if the stream offers different encodings for the content; the tracks change if the player switches to a different encoding. ● A player or player item’s status may change if playback fails for some reason. You can use key-value observing to monitor changes to values of these properties. Playback Playing Multiple Items 2011-10-12 | © 2011 Apple Inc. All Rights Reserved. 23Important: You should register for KVO change notifications and unregister from KVO change notifications on the main thread. This avoids the possibility of receiving a partial notification if a change is being made on another thread. AV Foundation invokes observeValueForKeyPath:ofObject:change:context: on the main thread, even if the change operation is made on another thread. Responding to a Change in Status When a player or player item’s status changes, it emits a key-value observing change notification. If an object is unable to play for some reason (for example, if the media services are reset), the status changes to AVPlayerStatusFailed or AVPlayerItemStatusFailed as appropriate. In thissituation, the value of the object’s error property is changed to an error object that describes why the object is no longer be able to play. AV Foundation does not specify what thread that the notification is sent on. If you want to update the user interface, you must make sure that any relevant code is invoked on the main thread. This example uses dispatch_async(3) OS X Developer Tools Manual Page to execute code on the main thread. - (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context { if (context == <#Player status context#>) { AVPlayer *thePlayer = (AVPlayer *)object; if ([thePlayer status] == AVPlayerStatusFailed) { NSError *error = [<#The AVPlayer object#> error]; // Respond to error: for example, display an alert sheet. 
return; } // Deal with other status change if appropriate. } // Deal with other change notifications if appropriate. [super observeValueForKeyPath:keyPath ofObject:object change:change context:context]; return; } Playback Monitoring Playback 2011-10-12 | © 2011 Apple Inc. All Rights Reserved. 24Tracking Readiness for Visual Display You can observe an AVPlayerLayer object’s readyForDisplay property to be notified when the layer has user-visible content. In particular, you might insert the player layer into the layer tree only when there is something for the user to look at, and perform a transition from Tracking Time To track changes in the position of the playhead in an AVPlayer object, you can use addPeriodicTimeObserverForInterval:queue:usingBlock: or addBoundaryTimeObserverForTimes:queue:usingBlock:. You might do this to, for example, update your user interface with information about time elapsed or time remaining, or perform some other user interface synchronization. ● With addPeriodicTimeObserverForInterval:queue:usingBlock:,the block you provide isinvoked at the interval you specify, and if time jumps, and when playback starts or stops. ● With addBoundaryTimeObserverForTimes:queue:usingBlock:, you pass an array of CMTimes contained in NSValue objects. The block you provide is invoked whenever any of those times is traversed. Both of the methods return an opaque object that serves as an observer. You must keep a strong reference to the returned object as long as you want the time observation block to be invoked by the player. You must also balance each invocation of these methods with a corresponding call to removeTimeObserver:. With both of these methods, AV Foundation does not guarantee to invoke your block for every interval or boundary passed. AV Foundation does not invoke a block if execution of a previously-invoked block has not completed. You must make sure, therefore, that the work you perform in the block does not overly tax the system. // Assume a property: @property (strong) id playerObserver; Float64 durationSeconds = CMTimeGetSeconds([<#An asset#> duration]); CMTime firstThird = CMTimeMakeWithSeconds(durationSeconds/3.0, 1); CMTime secondThird = CMTimeMakeWithSeconds(durationSeconds*2.0/3.0, 1); NSArray *times = @[[NSValue valueWithCMTime:firstThird], [NSValue valueWithCMTime:secondThird]]; self.playerObserver = [<#A player#> addBoundaryTimeObserverForTimes:times queue:NULL usingBlock:^{ NSString *timeDescription = (NSString *) Playback Monitoring Playback 2011-10-12 | © 2011 Apple Inc. All Rights Reserved. 25CFBridgingRelease(CMTimeCopyDescription(NULL, [self.player currentTime])); NSLog(@"Passed a boundary at %@", timeDescription); }]; Reaching the End of an Item You can register to receive an AVPlayerItemDidPlayToEndTimeNotification notification when a player item has completed playback: [[NSNotificationCenter defaultCenter] addObserver:<#The observer, typically self#> selector:@selector(<#The selector name#>) name:AVPlayerItemDidPlayToEndTimeNotification object:<#A player item#>]; Putting it all Together: Playing a Video File Using AVPlayerLayer This brief code example to illustrates how you can use an AVPlayer object to play a video file. It shows how to: ● Configure a view to use an AVPlayerLayer layer ● Create an AVPlayer object ● Create an AVPlayerItem object for a file-based asset, and use key-value observing to observe its status ● Respond to the item becoming ready to play by enabling a button ● Play the item, then restore the player’s head to the beginning. 
Note: To focus on the most relevant code, this example omits several aspects of a complete application, such as memory management and unregistering as an observer (for key-value observing or for the notification center). To use AV Foundation, you are expected to have enough experience with Cocoa to be able to infer the missing pieces. For a conceptual introduction to playback, skip to "Playing Assets" (page 18).

The Player View

To play the visual component of an asset, you need a view containing an AVPlayerLayer layer to which the output of an AVPlayer object can be directed. You can create a simple subclass of UIView to accommodate this:

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

@interface PlayerView : UIView
@property (nonatomic) AVPlayer *player;
@end

@implementation PlayerView
+ (Class)layerClass {
    return [AVPlayerLayer class];
}
- (AVPlayer *)player {
    return [(AVPlayerLayer *)[self layer] player];
}
- (void)setPlayer:(AVPlayer *)player {
    [(AVPlayerLayer *)[self layer] setPlayer:player];
}
@end

A Simple View Controller

Assume you have a simple view controller, declared as follows:

@class PlayerView;
@interface PlayerViewController : UIViewController
@property (nonatomic) AVPlayer *player;
@property (nonatomic) AVPlayerItem *playerItem;
@property (nonatomic, weak) IBOutlet PlayerView *playerView;
@property (nonatomic, weak) IBOutlet UIButton *playButton;
- (IBAction)loadAssetFromFile:sender;
- (IBAction)play:sender;
- (void)syncUI;
@end

The syncUI method synchronizes the button's state with the player's state:

- (void)syncUI {
    if ((self.player.currentItem != nil) &&
        ([self.player.currentItem status] == AVPlayerItemStatusReadyToPlay)) {
        self.playButton.enabled = YES;
    }
    else {
        self.playButton.enabled = NO;
    }
}

You can invoke syncUI in the view controller's viewDidLoad method to ensure a consistent user interface when the view is first displayed.

- (void)viewDidLoad {
    [super viewDidLoad];
    [self syncUI];
}

The other properties and methods are described in the remaining sections.

Creating the Asset

You create an asset from a URL using AVURLAsset. Creating the asset, however, does not necessarily mean that it's ready for use. To be used, an asset must have loaded its tracks. To avoid blocking the current thread, you load the asset's tracks asynchronously using loadValuesAsynchronouslyForKeys:completionHandler:. (The following example assumes your project contains a suitable video resource.)

- (IBAction)loadAssetFromFile:sender {
    NSURL *fileURL = [[NSBundle mainBundle] URLForResource:<#@"VideoFileName"#> withExtension:<#@"extension"#>];
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:fileURL options:nil];
    NSString *tracksKey = @"tracks";

    [asset loadValuesAsynchronouslyForKeys:@[tracksKey] completionHandler: ^{
        // The completion block goes here.
    }];
}

In the completion block, you create an instance of AVPlayerItem for the asset and set a player created with that item as the player for the player view. As with creating the asset, simply creating the player item does not mean it's ready to use. To determine when it's ready to play, you can observe the item's status. You trigger its preparation to play when you associate it with the player.
// Define this constant for the key-value observation context. static const NSString *ItemStatusContext; // Completion handler block. dispatch_async(dispatch_get_main_queue(), ^{ NSError *error; AVKeyValueStatus status = [asset statusOfValueForKey:tracksKey error:&error]; if (status == AVKeyValueStatusLoaded) { self.playerItem = [AVPlayerItem playerItemWithAsset:asset]; [self.playerItem addObserver:self forKeyPath:@"status" options:0 context:&ItemStatusContext]; [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(playerItemDidReachEnd:) name:AVPlayerItemDidPlayToEndTimeNotification Playback Putting it all Together: Playing a Video File Using AVPlayerLayer 2011-10-12 | © 2011 Apple Inc. All Rights Reserved. 29object:self.playerItem]; self.player = [AVPlayer playerWithPlayerItem:self.playerItem]; [self.playerView setPlayer:self.player]; } else { // You should deal with the error appropriately. NSLog(@"The asset's tracks were not loaded:\n%@", [error localizedDescription]); } }); Responding to the Player Item’s Status Change When the player item’s status changes, the view controller receives a key-value observing change notification. AV Foundation does not specify what thread that the notification is sent on. If you want to update the user interface, you must make sure that any relevant code is invoked on the main thread. This example uses dispatch_async(3) OS X Developer Tools Manual Page to queue a message on the main thread to synchronize the user interface. - (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context { if (context == &ItemStatusContext) { dispatch_async(dispatch_get_main_queue(), ^{ [self syncUI]; }); return; } [super observeValueForKeyPath:keyPath ofObject:object change:change context:context]; return; } Playback Putting it all Together: Playing a Video File Using AVPlayerLayer 2011-10-12 | © 2011 Apple Inc. All Rights Reserved. 30Playing the Item Playing the item is trivial: you send a play message to the player. - (IBAction)play:sender { [player play]; } This only playsthe item once, though. After playback, the player’s head isset to the end of the item, and further invocations of play will have no effect. To position the play head back at the beginning of the item, you can register to receive an AVPlayerItemDidPlayToEndTimeNotification from the item. In the notification’s callback method, invoke seekToTime: with the argument kCMTimeZero. // Register with the notification center after creating the player item. [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(playerItemDidReachEnd:) name:AVPlayerItemDidPlayToEndTimeNotification object:[self.player currentItem]]; - (void)playerItemDidReachEnd:(NSNotification *)notification { [self.player seekToTime:kCMTimeZero]; } Playback Putting it all Together: Playing a Video File Using AVPlayerLayer 2011-10-12 | © 2011 Apple Inc. All Rights Reserved. 31To manage the capture from a device such as a camera or microphone, you assemble objects to represent inputs and outputs, and use an instance of AVCaptureSession to coordinate the data flow between them. 
Minimally you need:
● An instance of AVCaptureDevice to represent the input device, such as a camera or microphone
● An instance of a concrete subclass of AVCaptureInput to configure the ports from the input device
● An instance of a concrete subclass of AVCaptureOutput to manage the output to a movie file or still image
● An instance of AVCaptureSession to coordinate the data flow from the input to the output

To show the user what a camera is recording, you can use an instance of AVCaptureVideoPreviewLayer (a subclass of CALayer). You can configure multiple inputs and outputs, coordinated by a single session:

[Figure: A single capture session coordinating two AVCaptureDeviceInput objects with an AVCaptureMovieFileOutput, an AVCaptureStillImageOutput, and an AVCaptureVideoPreviewLayer.]

For many applications, this is as much detail as you need. For some operations, however (if you want to monitor the power levels in an audio channel, for example), you need to consider how the various ports of an input device are represented, and how those ports are connected to the output.

A connection between a capture input and a capture output in a capture session is represented by an AVCaptureConnection object. Capture inputs (instances of AVCaptureInput) have one or more input ports (instances of AVCaptureInputPort). Capture outputs (instances of AVCaptureOutput) can accept data from one or more sources (for example, an AVCaptureMovieFileOutput object accepts both video and audio data). When you add an input or an output to a session, the session "greedily" forms connections between all the compatible capture inputs' ports and capture outputs.

[Figure: Capture connections in a capture session between the video and audio input ports of two capture device inputs and the AVCaptureMovieFileOutput and AVCaptureStillImageOutput outputs.]

You can use a capture connection to enable or disable the flow of data from a given input or to a given output. You can also use a connection to monitor the average and peak power levels in an audio channel.

Use a Capture Session to Coordinate Data Flow

An AVCaptureSession object is the central coordinating object you use to manage data capture. You use an instance to coordinate the flow of data from AV input devices to outputs. You add the capture devices and outputs you want to the session, then start data flow by sending the session a startRunning message, and stop recording by sending a stopRunning message.

AVCaptureSession *session = [[AVCaptureSession alloc] init];
// Add inputs and outputs.
[session startRunning];
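To make that flow concrete, here is a minimal sketch that assembles a session from the default video device and a movie file output; it assumes such a device exists, elides error handling, and uses only the steps covered in detail in the sections that follow:

AVCaptureSession *session = [[AVCaptureSession alloc] init];

// Add an input for the default video capture device.
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (input && [session canAddInput:input]) {
    [session addInput:input];
}

// Add an output that writes to a movie file.
AVCaptureMovieFileOutput *output = [[AVCaptureMovieFileOutput alloc] init];
if ([session canAddOutput:output]) {
    [session addOutput:output];
}

[session startRunning];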
Configuring a Session

You use a preset on the session to specify the image quality and resolution you want. A preset is a constant that identifies one of a number of possible configurations; in some cases the actual configuration is device-specific:

Symbol | Resolution | Comments
AVCaptureSessionPresetHigh | High | Highest recording quality. This varies per device.
AVCaptureSessionPresetMedium | Medium | Suitable for WiFi sharing. The actual values may change.
AVCaptureSessionPresetLow | Low | Suitable for 3G sharing. The actual values may change.
AVCaptureSessionPreset640x480 | 640x480 | VGA.
AVCaptureSessionPreset1280x720 | 1280x720 | 720p HD.
AVCaptureSessionPresetPhoto | Photo | Full photo resolution. This is not supported for video output.

For examples of the actual values these presets represent for various devices, see "Saving to a Movie File" (page 43) and "Capturing Still Images" (page 47).

If you want to set a size-specific configuration, you should check whether it is supported before setting it:

if ([session canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
    session.sessionPreset = AVCaptureSessionPreset1280x720;
}
else {
    // Handle the failure.
}

In many situations, you create a session and the various inputs and outputs all at once. Sometimes, however, you may want to reconfigure a running session, perhaps as different input devices become available or in response to a user request. This can present a challenge because, if you change settings one at a time, a new setting may be incompatible with an existing setting. To deal with this, you use beginConfiguration and commitConfiguration to batch multiple configuration operations into an atomic update. After calling beginConfiguration, you can, for example, add or remove outputs, alter the sessionPreset, or configure individual capture input or output properties. No changes are actually made until you invoke commitConfiguration, at which time they are applied together.

[session beginConfiguration];
// Remove an existing capture device.
// Add a new capture device.
// Reset the preset.
[session commitConfiguration];

Monitoring Capture Session State

A capture session posts notifications that you can observe to be notified, for example, when it starts or stops running, or when it is interrupted. You can register to receive an AVCaptureSessionRuntimeErrorNotification if a runtime error occurs. You can also interrogate the session's running property to find out if it is running, and its interrupted property to find out if it is interrupted.

An AVCaptureDevice Object Represents an Input Device

An AVCaptureDevice object abstracts a physical capture device that provides input data (such as audio or video) to an AVCaptureSession object. There is one object for each input device; for example, on an iPhone 3GS there is one video input for the camera and one audio input for the microphone, and on an iPhone 4 there are two video inputs—one for the front-facing camera, one for the back-facing camera—and one audio input for the microphone.

You can find out what capture devices are currently available using the AVCaptureDevice class methods devices and devicesWithMediaType:, and if necessary find out what features the devices offer (see "Device Capture Settings" (page 36)). The list of available devices may change, though. Current devices may become unavailable (if they're used by another application), and new devices may become available (if they're relinquished by another application). You should register to receive AVCaptureDeviceWasConnectedNotification and AVCaptureDeviceWasDisconnectedNotification notifications to be alerted when the list of available devices changes.

You add a device to a capture session using a capture input (see "Use Capture Inputs to Add a Capture Device to a Session" (page 41)).
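For example, a minimal sketch of registering for the connect and disconnect notifications described above might look like this (cameraWasConnected: and cameraWasDisconnected: are hypothetical handler methods that you would implement on self):

NSNotificationCenter *center = [NSNotificationCenter defaultCenter];
// Be told when a capture device appears (for example, is relinquished by another application).
[center addObserver:self selector:@selector(cameraWasConnected:)
               name:AVCaptureDeviceWasConnectedNotification object:nil];
// Be told when a capture device goes away.
[center addObserver:self selector:@selector(cameraWasDisconnected:)
               name:AVCaptureDeviceWasDisconnectedNotification object:nil];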
Device Characteristics

You can ask a device about several different characteristics. You can test whether it provides a particular media type or supports a given capture session preset using hasMediaType: and supportsAVCaptureSessionPreset: respectively. To provide information to the user, you can find out the position of the capture device (whether it is on the front or the back of the unit they're using) and its localized name. This may be useful if you want to present a list of capture devices to allow the user to choose one.

The following code example iterates over all the available devices and logs their names, and for video devices their position on the unit:

NSArray *devices = [AVCaptureDevice devices];
for (AVCaptureDevice *device in devices) {
    NSLog(@"Device name: %@", [device localizedName]);
    if ([device hasMediaType:AVMediaTypeVideo]) {
        if ([device position] == AVCaptureDevicePositionBack) {
            NSLog(@"Device position : back");
        }
        else {
            NSLog(@"Device position : front");
        }
    }
}

In addition, you can find out the device's model ID and its unique ID.

Device Capture Settings

Different devices have different capabilities; for example, some may support different focus or flash modes, and some may support focus on a point of interest.

Feature | iPhone 3G | iPhone 3GS | iPhone 4 (Back) | iPhone 4 (Front)
Focus mode | NO | YES | YES | NO
Focus point of interest | NO | YES | YES | NO
Exposure mode | YES | YES | YES | YES
Exposure point of interest | NO | YES | YES | YES
White balance mode | YES | YES | YES | YES
Flash mode | NO | NO | YES | NO
Torch mode | NO | NO | YES | NO

The following code fragment shows how you can find video input devices that have a torch mode and support a given capture session preset:

NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
NSMutableArray *torchDevices = [[NSMutableArray alloc] init];
for (AVCaptureDevice *device in devices) {
    if ([device hasTorch] && [device supportsAVCaptureSessionPreset:AVCaptureSessionPreset640x480]) {
        [torchDevices addObject:device];
    }
}

If you find multiple devices that meet your criteria, you might let the user choose which one they want to use. To display a description of a device to the user, you can use its localizedName property.

You use the various different features in similar ways. There are constants to specify a particular mode, and you can ask a device whether it supports a particular mode. In several cases you can observe a property to be notified when a feature is changing. In all cases, you should lock the device before changing the mode of a particular feature, as described in "Configuring a Device" (page 40).

Note: Focus point of interest and exposure point of interest are mutually exclusive, as are focus mode and exposure mode.

Focus modes
There are three focus modes:
● AVCaptureFocusModeLocked: the focal length is fixed. This is useful when you want to allow the user to compose a scene then lock the focus.
● AVCaptureFocusModeAutoFocus: the camera does a single scan focus then reverts to locked. This is suitable for a situation where you want to select a particular item on which to focus and then maintain focus on that item even if it is not the center of the scene.
● AVCaptureFocusModeContinuousAutoFocus: the camera continuously auto-focuses as needed.
You use the isFocusModeSupported: method to determine whether a device supports a given focus mode, then set the mode using the focusMode property. In addition, a device may support a focus point of interest. You test for support using focusPointOfInterestSupported. If it’ssupported, you set the focal point using focusPointOfInterest. You pass a CGPoint where {0,0} representsthe top left of the picture area, and {1,1} representsthe bottom right in landscape mode with the home button on the right—this applies even if the device is in portrait mode. You can use the adjustingFocus property to determine whether a device is currently focusing. You can observe the property using key-value observing to be notified when a device starts and stops focusing. If you change the focus mode settings, you can return them to the default configuration as follows: if ([currentDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) { CGPoint autofocusPoint = CGPointMake(0.5f, 0.5f); [currentDevice setFocusPointOfInterest:autofocusPoint]; [currentDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus]; } Exposure modes There are two exposure modes: ● AVCaptureExposureModeLocked: the exposure mode is fixed. ● AVCaptureExposureModeAutoExpose: the camera continuously changesthe exposure level as needed. You use the isExposureModeSupported: method to determine whether a device supports a given exposure mode, then set the mode using the exposureMode property. Media Capture An AVCaptureDevice Object Represents an Input Device 2011-10-12 | © 2011 Apple Inc. All Rights Reserved. 38In addition, a device may support an exposure point of interest. You test for support using exposurePointOfInterestSupported. If it’s supported, you set the exposure point using exposurePointOfInterest. You pass a CGPoint where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right in landscape mode with the home button on the right—this applies even if the device is in portrait mode. You can use the adjustingExposure property to determine whether a device is currently changing its exposure setting. You can observe the property using key-value observing to be notified when a device starts and stops changing its exposure setting. If you change the exposure settings, you can return them to the default configuration as follows: if ([currentDevice isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure]) { CGPoint exposurePoint = CGPointMake(0.5f, 0.5f); [currentDevice setExposurePointOfInterest:exposurePoint]; [currentDevice setExposureMode:AVCaptureExposureModeContinuousAutoExposure]; } Flash modes There are three flash modes: ● AVCaptureFlashModeOff: the flash will never fire. ● AVCaptureFlashModeOn: the flash will always fire. ● AVCaptureFlashModeAuto: the flash will fire if needed. You use hasFlash to determine whether a device has a flash. You use the isFlashModeSupported: method to determine whether a device supports a given flash mode, then set the mode using the flashMode property. Torch mode Torch mode is where a camera uses the flash continuously at a low power to illuminate a video capture. There are three torch modes: ● AVCaptureTorchModeOff: the torch is always off. ● AVCaptureTorchModeOn: the torch is always on. ● AVCaptureTorchModeAuto: the torch is switched on and off as needed. Media Capture An AVCaptureDevice Object Represents an Input Device 2011-10-12 | © 2011 Apple Inc. All Rights Reserved. 39You use hasTorch to determine whether a device has a flash. 
You use the isTorchModeSupported: method to determine whether a device supports a given torch mode, then set the mode using the torchMode property.

For devices with a torch, the torch only turns on if the device is associated with a running capture session.

White balance
There are two white balance modes:
● AVCaptureWhiteBalanceModeLocked: the white balance mode is fixed.
● AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance: the camera continuously changes the white balance as needed.

You use the isWhiteBalanceModeSupported: method to determine whether a device supports a given white balance mode, then set the mode using the whiteBalanceMode property.

You can use the adjustingWhiteBalance property to determine whether a device is currently changing its white balance setting. You can observe the property using key-value observing to be notified when a device starts and stops changing its white balance setting.

Configuring a Device

To set capture properties on a device, you must first acquire a lock on the device using lockForConfiguration:. This avoids making changes that may be incompatible with settings in other applications. The following code fragment illustrates how to approach changing the focus mode on a device by first determining whether the mode is supported, then attempting to lock the device for reconfiguration. The focus mode is changed only if the lock is obtained, and the lock is released immediately afterward.

if ([device isFocusModeSupported:AVCaptureFocusModeLocked]) {
    NSError *error = nil;
    if ([device lockForConfiguration:&error]) {
        device.focusMode = AVCaptureFocusModeLocked;
        [device unlockForConfiguration];
    }
    else {
        // Respond to the failure as appropriate.
    }
}

You should only hold the device lock if you need settable device properties to remain unchanged. Holding the device lock unnecessarily may degrade capture quality in other applications sharing the device.

Switching Between Devices

Sometimes you may want to allow the user to switch between input devices—for example, on an iPhone 4 they could switch from using the front camera to the back camera. To avoid pauses or stuttering, you can reconfigure a session while it is running; however, you should use beginConfiguration and commitConfiguration to bracket your configuration changes:

AVCaptureSession *session = <#A capture session#>;
[session beginConfiguration];
[session removeInput:frontFacingCameraDeviceInput];
[session addInput:backFacingCameraDeviceInput];
[session commitConfiguration];

When the outermost commitConfiguration is invoked, all the changes are made together. This ensures a smooth transition.

Use Capture Inputs to Add a Capture Device to a Session

To add a capture device to a capture session, you use an instance of AVCaptureDeviceInput (a concrete subclass of the abstract AVCaptureInput class). The capture device input manages the device's ports.

NSError *error;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
    // Handle the error appropriately.
}

You add inputs to a session using addInput:. If appropriate, you can check whether a capture input is compatible with an existing session using canAddInput:.
AVCaptureSession *captureSession = <#Get a capture session#>; AVCaptureDeviceInput *captureDeviceInput = <#Get a capture device input#>; if ([captureSession canAddInput:captureDeviceInput]) { [captureSession addInput:captureDeviceInput]; Media Capture Use Capture Inputs to Add a Capture Device to a Session 2011-10-12 | © 2011 Apple Inc. All Rights Reserved. 41} else { // Handle the failure. } See “Configuring a Session” (page 34) for more details on how you might reconfigure a running session. An AVCaptureInput vends one or more streams of media data. For example, input devices can provide both audio and video data. Each media stream provided by an input is represented by an AVCaptureInputPort object. A capture session uses an AVCaptureConnection object to define the mapping between a set of AVCaptureInputPort objects and a single AVCaptureOutput. Use Capture Outputs to Get Output from a Session To get output from a capture session, you add one or more outputs. An output is an instance of a concrete subclass of AVCaptureOutput; you use: ● AVCaptureMovieFileOutput to output to a movie file ● AVCaptureVideoDataOutput if you want to process frames from the video being captured ● AVCaptureAudioDataOutput if you want to process the audio data being captured ● AVCaptureStillImageOutput if you want to capture still images with accompanying metadata You add outputs to a capture session using addOutput:. You check whether a capture output is compatible with an existing session using canAddOutput:. You can add and remove outputs as you want while the session is running. AVCaptureSession *captureSession = <#Get a capture session#>; AVCaptureMovieFileOutput *movieInput = <#Create and configure a movie output#>; if ([captureSession canAddOutput:movieInput]) { [captureSession addOutput:movieInput]; } else { // Handle the failure. } Media Capture Use Capture Outputs to Get Output from a Session 2011-10-12 | © 2011 Apple Inc. All Rights Reserved. 42Saving to a Movie File You save movie data to a file using an AVCaptureMovieFileOutput object. (AVCaptureMovieFileOutput is a concrete subclass of AVCaptureFileOutput, which defines much of the basic behavior.) You can configure various aspects of the movie file output, such as the maximum duration of the recording, or the maximum file size. You can also prohibit recording if there is less than a given amount of disk space left. AVCaptureMovieFileOutput *aMovieFileOutput = [[AVCaptureMovieFileOutput alloc] init]; CMTime maxDuration = <#Create a CMTime to represent the maximum duration#>; aMovieFileOutput.maxRecordedDuration = maxDuration; aMovieFileOutput.minFreeDiskSpaceLimit = <#An appropriate minimum given the quality of the movie format and the duration#>; The resolution and bit rate for the output depend on the capture session’s sessionPreset. The video encoding is typically H.264 and audio encoding AAC. The actual values vary by device, as illustrated in the following table. 
Preset | iPhone 3G | iPhone 3GS | iPhone 4 (Back) | iPhone 4 (Front)
High | No video; Apple Lossless audio | 640x480, 3.5 Mbps | 1280x720, 10.5 Mbps | 640x480, 3.5 Mbps
Medium | No video; Apple Lossless audio | 480x360, 700 kbps | 480x360, 700 kbps | 480x360, 700 kbps
Low | No video; Apple Lossless audio | 192x144, 128 kbps | 192x144, 128 kbps | 192x144, 128 kbps
640x480 | No video; Apple Lossless audio | 640x480, 3.5 Mbps | 640x480, 3.5 Mbps | 640x480, 3.5 Mbps
1280x720 | No video; Apple Lossless audio | No video; 64 kbps AAC | 1280x720, 10.5 Mbps | No video; 64 kbps AAC
Photo | Not supported for video output | Not supported for video output | Not supported for video output | Not supported for video output

Starting a Recording

You start recording a QuickTime movie using startRecordingToOutputFileURL:recordingDelegate:. You need to supply a file-based URL and a delegate. The URL must not identify an existing file, because the movie file output does not overwrite existing resources. You must also have permission to write to the specified location. The delegate must conform to the AVCaptureFileOutputRecordingDelegate protocol, and must implement the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: method.

AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
NSURL *fileURL = <#A file URL that identifies the output location#>;
[aMovieFileOutput startRecordingToOutputFileURL:fileURL recordingDelegate:<#The delegate#>];

In the implementation of captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error:, the delegate might write the resulting movie to the camera roll. It should also check for any errors that might have occurred.
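For example, a minimal sketch of writing the finished movie to the camera roll using the Assets Library framework might look like this; it assumes you link against AssetsLibrary, that outputFileURL is the URL passed to the delegate method, and that the recording completed successfully:

#import <AssetsLibrary/AssetsLibrary.h>

ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:outputFileURL]) {
    [library writeVideoAtPathToSavedPhotosAlbum:outputFileURL
                                completionBlock:^(NSURL *assetURL, NSError *error) {
        if (error) {
            // Respond to the error, for example by alerting the user.
        }
    }];
}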
Ensuring the File Was Written Successfully

To determine whether the file was saved successfully, in the implementation of captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: you check not only the error, but also the value of the AVErrorRecordingSuccessfullyFinishedKey in the error's user info dictionary:

- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
        didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
        fromConnections:(NSArray *)connections
        error:(NSError *)error {

    BOOL recordedSuccessfully = YES;
    if ([error code] != noErr) {
        // A problem occurred: Find out if the recording was successful.
        id value = [[error userInfo] objectForKey:AVErrorRecordingSuccessfullyFinishedKey];
        if (value) {
            recordedSuccessfully = [value boolValue];
        }
    }
    // Continue as appropriate...

You should check the value of the AVErrorRecordingSuccessfullyFinishedKey in the error's user info dictionary because the file might have been saved successfully, even though you got an error. The error might indicate that one of your recording constraints was reached, for example AVErrorMaximumDurationReached or AVErrorMaximumFileSizeReached. Other reasons the recording might stop are:
● The disk is full—AVErrorDiskFull.
● The recording device was disconnected (for example, the microphone was removed from an iPod touch)—AVErrorDeviceWasDisconnected.
● The session was interrupted (for example, a phone call was received)—AVErrorSessionWasInterrupted.

Adding Metadata to a File

You can set metadata for the movie file at any time, even while recording. This is useful for situations where the information is not available when the recording starts, as may be the case with location information. Metadata for a file output is represented by an array of AVMetadataItem objects; you use an instance of its mutable subclass, AVMutableMetadataItem, to create metadata of your own.

AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
NSArray *existingMetadataArray = aMovieFileOutput.metadata;
NSMutableArray *newMetadataArray = nil;
if (existingMetadataArray) {
    newMetadataArray = [existingMetadataArray mutableCopy];
}
else {
    newMetadataArray = [[NSMutableArray alloc] init];
}

AVMutableMetadataItem *item = [[AVMutableMetadataItem alloc] init];
item.keySpace = AVMetadataKeySpaceCommon;
item.key = AVMetadataCommonKeyLocation;

CLLocation *location = <#The location to set#>;
item.value = [NSString stringWithFormat:@"%+08.4lf%+09.4lf/",
                  location.coordinate.latitude, location.coordinate.longitude];

[newMetadataArray addObject:item];
aMovieFileOutput.metadata = newMetadataArray;

Processing Frames of Video

An AVCaptureVideoDataOutput object uses delegation to vend video frames. You set the delegate using setSampleBufferDelegate:queue:. In addition to the delegate, you specify a serial queue on which the delegate methods are invoked. You must use a serial queue to ensure that frames are delivered to the delegate in the proper order. You should not pass the queue returned by dispatch_get_current_queue, since there is no guarantee as to which thread the current queue is running on. You can use the queue to modify the priority given to delivering and processing the video frames.

The frames are presented in the delegate method, captureOutput:didOutputSampleBuffer:fromConnection:, as instances of the CMSampleBuffer opaque type (see "Representations of Media" (page 58)). By default, the buffers are emitted in the camera's most efficient format. You can use the videoSettings property to specify a custom output format. The video settings property is a dictionary; currently, the only supported key is kCVPixelBufferPixelFormatTypeKey. The recommended pixel format choices for iPhone 4 are kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange or kCVPixelFormatType_32BGRA; for iPhone 3G the recommended pixel format choices are kCVPixelFormatType_422YpCbCr8 or kCVPixelFormatType_32BGRA. Both Core Graphics and OpenGL work well with the BGRA format:

AVCaptureVideoDataOutput *videoDataOutput = <#Get a video data output#>;
NSDictionary *newSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
videoDataOutput.videoSettings = newSettings;

Performance Considerations for Processing Video

You should set the session output to the lowest practical resolution for your application. Setting the output to a higher resolution than necessary wastes processing cycles and needlessly consumes power.

You must ensure that your implementation of captureOutput:didOutputSampleBuffer:fromConnection: is able to process a sample buffer within the amount of time allotted to a frame. If it takes too long and you hold onto the video frames, AV Foundation will stop delivering frames, not only to your delegate but also to other outputs such as a preview layer.

You can use the capture video data output's minFrameDuration property to ensure you have enough time to process a frame, at the cost of having a lower frame rate than would otherwise be the case.
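For example, assuming an existing video data output, a short sketch of capping delivery to roughly 10 frames per second looks like this (choose a rate appropriate to the cost of your processing):

AVCaptureVideoDataOutput *videoDataOutput = <#Get a video data output#>;
// Deliver at most one frame every tenth of a second.
videoDataOutput.minFrameDuration = CMTimeMake(1, 10);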
You might also ensure that the alwaysDiscardsLateVideoFrames property is set to YES (the default). This ensures that any late video frames are dropped rather than handed to you for processing. Alternatively, if you are recording and it doesn't matter if the output frames are a little late because you would prefer to get all of them, you can set the property value to NO. This does not mean that frames will not be dropped (that is, frames may still be dropped), but they may not be dropped as early, or as efficiently.

Capturing Still Images

You use an AVCaptureStillImageOutput output if you want to capture still images with accompanying metadata. The resolution of the image depends on the preset for the session, as illustrated in this table:

Preset | iPhone 3G | iPhone 3GS | iPhone 4 (Back) | iPhone 4 (Front)
High | 400x304 | 640x480 | 1280x720 | 640x480
Medium | 400x304 | 480x360 | 480x360 | 480x360
Low | 400x304 | 192x144 | 192x144 | 192x144
640x480 | N/A | 640x480 | 640x480 | 640x480
1280x720 | N/A | N/A | 1280x720 | N/A
Photo | 1600x1200 | 2048x1536 | 2592x1936 | 640x480

Pixel and Encoding Formats

Different devices support different image formats:

iPhone 3G | iPhone 3GS | iPhone 4
yuvs, 2vuy, BGRA, jpeg | 420f, 420v, BGRA, jpeg | 420f, 420v, BGRA, jpeg

You can find out what pixel and codec types are supported using availableImageDataCVPixelFormatTypes and availableImageDataCodecTypes respectively. You set the outputSettings dictionary to specify the image format you want, for example:

AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = @{ AVVideoCodecKey : AVVideoCodecJPEG };
[stillImageOutput setOutputSettings:outputSettings];

If you want to capture a JPEG image, you should typically not specify your own compression format. Instead, you should let the still image output do the compression for you, since its compression is hardware-accelerated. If you need a data representation of the image, you can use jpegStillImageNSDataRepresentation: to get an NSData object without re-compressing the data, even if you modify the image's metadata.

Capturing an Image

When you want to capture an image, you send the output a captureStillImageAsynchronouslyFromConnection:completionHandler: message. The first argument is the connection you want to use for the capture. You need to look for the connection whose input port is collecting video:

AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections) {
    for (AVCaptureInputPort *port in [connection inputPorts]) {
        if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
            videoConnection = connection;
            break;
        }
    }
    if (videoConnection) {
        break;
    }
}

The second argument to captureStillImageAsynchronouslyFromConnection:completionHandler: is a block that takes two arguments: a CMSampleBuffer containing the image data, and an error. The sample buffer itself may contain metadata, such as an Exif dictionary, as an attachment. You can modify the attachments if you want, but note the optimization for JPEG images discussed in "Pixel and Encoding Formats" (page 47).

[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
    completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        CFDictionaryRef exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
        if (exifAttachments) {
            // Do something with the attachments.
        }
        // Continue as appropriate.
    }];
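For example, if you requested JPEG output as shown above, a minimal sketch of turning the captured sample buffer into a UIImage inside the completion handler might look like this (error handling omitted):

NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:jpegData];
// Use the image, for example to update a thumbnail in your user interface.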
Showing the User What's Being Recorded

You can provide the user with a preview of what's being recorded by the camera using a preview layer, or by the microphone by monitoring the audio channel.

Video Preview

You can provide the user with a preview of what's being recorded using an AVCaptureVideoPreviewLayer object. AVCaptureVideoPreviewLayer is a subclass of CALayer (see Core Animation Programming Guide). You don't need any outputs to show the preview.

Unlike a capture output, a video preview layer maintains a strong reference to the session with which it is associated. This is to ensure that the session is not deallocated while the layer is attempting to display video. This is reflected in the way you initialize a preview layer:

AVCaptureSession *captureSession = <#Get a capture session#>;
CALayer *viewLayer = <#Get a layer from the view in which you want to present the preview#>;
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
[viewLayer addSublayer:captureVideoPreviewLayer];

In general, the preview layer behaves like any other CALayer object in the render tree (see Core Animation Programming Guide). You can scale the image and perform transformations, rotations, and so on just as you would any layer. One difference is that you may need to set the layer's orientation property to specify how it should rotate images coming from the camera. In addition, on iPhone 4 the preview layer supports mirroring (this is the default when previewing the front-facing camera).

Video Gravity Modes
The preview layer supports three gravity modes that you set using videoGravity:
● AVLayerVideoGravityResizeAspect: This preserves the aspect ratio, leaving black bars where the video does not fill the available screen area.
● AVLayerVideoGravityResizeAspectFill: This preserves the aspect ratio, but fills the available screen area, cropping the video when necessary.
● AVLayerVideoGravityResize: This simply stretches the video to fill the available screen area, even if doing so distorts the image.

Using "Tap to Focus" With a Preview
You need to take care when implementing tap-to-focus in conjunction with a preview layer. You must account for the preview orientation and gravity of the layer, and the possibility that the preview may be mirrored.

Showing Audio Levels

To monitor the average and peak power levels in an audio channel in a capture connection, you use an AVCaptureAudioChannel object. Audio levels are not key-value observable, so you must poll for updated levels as often as you want to update your user interface (for example, 10 times a second).

AVCaptureAudioDataOutput *audioDataOutput = <#Get the audio data output#>;
NSArray *connections = audioDataOutput.connections;
if ([connections count] > 0) {
    // There should be only one connection to an AVCaptureAudioDataOutput.
    AVCaptureConnection *connection = [connections objectAtIndex:0];
    NSArray *audioChannels = connection.audioChannels;
    for (AVCaptureAudioChannel *channel in audioChannels) {
        float avg = channel.averagePowerLevel;
        float peak = channel.peakHoldLevel;
        // Update the level meter user interface.
} } Putting it all Together: Capturing Video Frames as UIImage Objects This brief code example to illustrates how you can capture video and convert the frames you get to UIImage objects. It shows you how to: ● Create an AVCaptureSession object to coordinate the flow of data from an AV input device to an output Media Capture Putting it all Together: Capturing Video Frames as UIImage Objects 2011-10-12 | © 2011 Apple Inc. All Rights Reserved. 50● Find the AVCaptureDevice object for the input type you want ● Create an AVCaptureDeviceInput object for the device ● Create an AVCaptureVideoDataOutput object to produce video frames ● Implement a delegate for the AVCaptureVideoDataOutput object to process video frames ● Implement a function to convert the CMSampleBuffer received by the delegate into a UIImage object Note: To focus on the most relevant code, this example omits several aspects of a complete application, including memory management. To use AV Foundation, you are expected to have enough experience with Cocoa to be able to infer the missing pieces. Create and Configure a Capture Session You use an AVCaptureSession object to coordinate the flow of data from an AV input device to an output. Create a session, and configure it to produce medium resolution video frames. AVCaptureSession *session = [[AVCaptureSession alloc] init]; session.sessionPreset = AVCaptureSessionPresetMedium; Create and Configure the Device and Device Input Capture devices are represented by AVCaptureDevice objects; the class provides methods to retrieve an object for the input type you want. A device has one or more ports, configured using an AVCaptureInput object. Typically, you use the capture input in its default configuration. Find a video capture device, then create a device input with the device and add it to the session. AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]; NSError *error = nil; AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error]; if (!input) { // Handle the error appropriately. } [session addInput:input]; Media Capture Putting it all Together: Capturing Video Frames as UIImage Objects 2011-10-12 | © 2011 Apple Inc. All Rights Reserved. 51Create and Configure the Data Output You use an AVCaptureVideoDataOutput object to process uncompressed frames from the video being captured. You typically configure several aspects of an output. For video, for example, you can specify the pixel format using the videoSettings property, and cap the frame rate by setting the minFrameDuration property. Create and configure an output for video data and add it to the session; cap the frame rate to 15 fps by setting the minFrameDuration property to 1/15 second: AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init]; [session addOutput:output]; output.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) }; output.minFrameDuration = CMTimeMake(1, 15); The data output object uses delegation to vend the video frames. The delegate must adopt the AVCaptureVideoDataOutputSampleBufferDelegate protocol. When you set the data output’s delegate, you must also provide a queue on which callbacks should be invoked. dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL); [output setSampleBufferDelegate:self queue:queue]; dispatch_release(queue); You use the queue to modify the priority given to delivering and processing the video frames. 
Implement the Sample Buffer Delegate Method In the delegate class, implement the method (captureOutput:didOutputSampleBuffer:fromConnection:) that is called when a sample buffer is written. The video data output object delivers frames as CMSampleBuffers, so you need to convert from the CMSampleBuffer to a UIImage object. The function for this operation isshown in “Converting a CMSampleBuffer to a UIImage” (page 59). - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection { Media Capture Putting it all Together: Capturing Video Frames as UIImage Objects 2011-10-12 | © 2011 Apple Inc. All Rights Reserved. 52UIImage *image = imageFromSampleBuffer(sampleBuffer); // Add your code here that uses the image. } Remember that the delegate method is invoked on the queue you specified in setSampleBufferDelegate:queue:; if you want to update the user interface, you must invoke any relevant code on the main thread. Starting and Stopping Recording After configuring the capture session, you send it a startRunning message to start the recording. [session startRunning]; To stop recording, you send the session a stopRunning message. Media Capture Putting it all Together: Capturing Video Frames as UIImage Objects 2011-10-12 | © 2011 Apple Inc. All Rights Reserved. 53Time-based audio-visual data such as a movie file or a video stream is represented in the AV Foundation framework by AVAsset. Its structure dictates much of the framework works. Several low-level data structures that AV Foundation uses to represent time and media such as sample buffers come from the Core Media framework. Representation of Assets AVAsset is the core class in the AV Foundation framework. It provides a format-independent abstraction of time-based audiovisual data, such as a movie file or a video stream. In many cases, you work with one of its subclasses: you use the composition subclasses when you create new assets (see “Editing” (page 7)), and you use AVURLAsset to create a new asset instance from media at a given URL (including assetsfrom the MPMedia framework or the Asset Library framework—see “Using Assets” (page 9)). AVURLAsset AVMutableComposition AVComposition AVAsset NSObject An asset contains a collection of tracks that are intended to be presented or processed together, each of a uniform media type, including (but not limited to) audio, video, text, closed captions, and subtitles. The asset object providesinformation about whole resource,such asits duration or title, as well as hintsfor presentation, such as its natural size. Assets may also have metadata, represented by instances of AVMetadataItem. 2011-10-12 | © 2011 Apple Inc. All Rights Reserved. 54 Time and Media RepresentationsA track is represented by an instance of AVAssetTrack. In a typical simple case, one track represents the audio component and another represents the video component; in a complex composition, there may be multiple overlapping tracks of audio and video. AVAsset AVMetadataItem AVMetadataItem AVAssetTrack AVAssetTrack AVAssetTrack AVAssetTrack A track has a number of properties, such as its type (video or audio), visual and/or audible characteristics (as appropriate), metadata, and timeline (expressed in terms of its parent asset). A track also has an array of format descriptions. The array contains CMFormatDescriptions (see CMFormatDescriptionRef), each of which describes the format of media samples referenced by the track. 
A track that contains uniform media (for example, all encoded using to the same settings) will provide an array with a count of 1. A track may itself be divided into segments, represented by instances of AVAssetTrackSegment. A segment is a time mapping from the source to the asset track timeline. Representations of Time Time in AV Foundation is represented by primitive structures from the Core Media framework. CMTime Represents a Length of Time CMTime is a C structure that represents time as a rational number, with a numerator (an int64_t value), and a denominator (an int32_t timescale).Conceptually, the timescale specifies the fraction of a second each unit in the numerator occupies. Thusif the timescale is 4, each unit represents a quarter of a second; if the timescale is 10, each unit represents a tenth of a second, and so on. You frequently use a timescale of 600, since this is a common multiple of several commonly-used frame-rates: 24 frames per second (fps) for film, 30 fps for NTSC (used for TV in North America and Japan), and 25 fps for PAL (used for TV in Europe). Using a timescale of 600, you can exactly represent any number of frames in these systems. In addition to a simple time value, a CMTime can represent non-numeric values: +infinity, -infinity, and indefinite. It can also indicate whether the time been rounded at some point, and it maintains an epoch number. Time and Media Representations Representations of Time 2011-10-12 | © 2011 Apple Inc. All Rights Reserved. 55Using CMTime You create a time using CMTimeMake, or one of the related functions such as CMTimeMakeWithSeconds (which allows you to create a time using a float value and specify a preferred time scale). There are several functions for time-based arithmetic and to compare times, as illustrated in the following example. CMTime time1 = CMTimeMake(200, 2); // 200 half-seconds CMTime time2 = CMTimeMake(400, 4); // 400 quarter-seconds // time1 and time2 both represent 100 seconds, but using different timescales. if (CMTimeCompare(time1, time2) == 0) { NSLog(@"time1 and time2 are the same"); } Float64 float64Seconds = 200.0 / 3; CMTime time3 = CMTimeMakeWithSeconds(float64Seconds , 3); // 66.66... third-seconds time3 = CMTimeMultiply(time3, 3); // time3 now represents 200 seconds; next subtract time1 (100 seconds). time3 = CMTimeSubtract(time3, time1); CMTimeShow(time3); if (CMTIME_COMPARE_INLINE(time2, ==, time3)) { NSLog(@"time2 and time3 are the same"); } For a list of all the available functions, see CMTime Reference . Special Values of CMTime Core Media provides constants for special values: kCMTimeZero, kCMTimeInvalid, kCMTimePositiveInfinity, and kCMTimeNegativeInfinity. There are many ways, though in which a CMTime can, for example, represent a time that is invalid. If you need to test whether a CMTime is valid, or a non-numeric value, you should use an appropriate macro, such as CMTIME_IS_INVALID, CMTIME_IS_POSITIVE_INFINITY, or CMTIME_IS_INDEFINITE. CMTime myTime = <#Get a CMTime#>; if (CMTIME_IS_INVALID(myTime)) { Time and Media Representations Representations of Time 2011-10-12 | © 2011 Apple Inc. All Rights Reserved. 56// Perhaps treat this as an error; display a suitable alert to the user. } You should not compare the value of an arbitrary CMTime with kCMTimeInvalid. 
Representing a CMTime as an Object

If you need to use CMTimes in annotations or Core Foundation containers, you can convert a CMTime to and from a CFDictionary (see CFDictionaryRef) using CMTimeCopyAsDictionary and CMTimeMakeFromDictionary, respectively. You can also get a string representation of a CMTime using CMTimeCopyDescription.

Epochs

The epoch number of a CMTime is usually set to 0, but you can use it to distinguish unrelated timelines. For example, the epoch could be incremented on each cycle through a presentation loop, to differentiate time N in loop 0 from time N in loop 1.

CMTimeRange Represents a Time Range

CMTimeRange is a C structure that has a start time and a duration, both expressed as CMTimes. A time range does not include the time that is the start time plus the duration.

You create a time range using CMTimeRangeMake or CMTimeRangeFromTimeToTime. There are constraints on the values of the CMTimes’ epochs:
● CMTimeRanges cannot span different epochs.
● The epoch in a CMTime that represents a timestamp may be non-zero, but you can only perform range operations (such as CMTimeRangeGetUnion) on ranges whose start fields have the same epoch.
● The epoch in a CMTime that represents a duration should always be 0, and the value must be non-negative.

Working with Time Ranges

Core Media provides functions you can use to determine whether a time range contains a given time or another time range, or whether two time ranges are equal, and to calculate unions and intersections of time ranges, such as CMTimeRangeContainsTime, CMTimeRangeEqual, CMTimeRangeContainsTimeRange, and CMTimeRangeGetUnion.

Given that a time range does not include the time that is the start time plus the duration, the following expression always evaluates to false:

CMTimeRangeContainsTime(range, CMTimeRangeGetEnd(range))

For a list of all the available functions, see CMTimeRange Reference.

Special Values of CMTimeRange

Core Media provides constants for a zero-length range and an invalid range, kCMTimeRangeZero and kCMTimeRangeInvalid respectively. There are many ways, though, in which a CMTimeRange can be invalid, zero, or indefinite (for example, if one of the CMTimes is indefinite). If you need to test whether a CMTimeRange is valid, zero, or indefinite, you should use an appropriate macro: CMTIMERANGE_IS_VALID, CMTIMERANGE_IS_INVALID, CMTIMERANGE_IS_EMPTY, or CMTIMERANGE_IS_INDEFINITE.

CMTimeRange myTimeRange = <#Get a CMTimeRange#>;
if (CMTIMERANGE_IS_EMPTY(myTimeRange)) {
    // The time range is zero.
}

You should not compare the value of an arbitrary CMTimeRange with kCMTimeRangeInvalid.

Representing a CMTimeRange as an Object

If you need to use CMTimeRanges in annotations or Core Foundation containers, you can convert a CMTimeRange to and from a CFDictionary (see CFDictionaryRef) using CMTimeRangeCopyAsDictionary and CMTimeRangeMakeFromDictionary, respectively. You can also get a string representation of a CMTimeRange using CMTimeRangeCopyDescription.

Representations of Media

Video data and its associated metadata are represented in AV Foundation by opaque objects from the Core Media framework. Core Media represents video data using CMSampleBuffer (see CMSampleBufferRef). CMSampleBuffer is a Core Foundation-style opaque type; an instance contains the sample buffer for a frame of video data as a Core Video pixel buffer (see CVPixelBufferRef).
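As a quick illustration (a minimal sketch, assuming sampleBuffer is a valid CMSampleBufferRef delivered by, for example, a capture output), a sample buffer carries timing information that you can read alongside the media data:

CMSampleBufferRef sampleBuffer = <#Get a sample buffer#>;
// The time at which the frame should be presented.
CMTime presentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
CMTimeShow(presentationTime);
// The duration of the media in the buffer.
CMTime duration = CMSampleBufferGetDuration(sampleBuffer);
CMTimeShow(duration);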
You access the pixel buffer from a sample buffer using CMSampleBufferGetImageBuffer:

CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(<#A CMSampleBuffer#>);

From the pixel buffer, you can access the actual video data. For an example, see “Converting a CMSampleBuffer to a UIImage” (page 59).

In addition to the video data, you can retrieve a number of other aspects of the video frame:
● Timing information. You get accurate timestamps for both the original presentation time and the decode time using CMSampleBufferGetPresentationTimeStamp and CMSampleBufferGetDecodeTimeStamp, respectively.
● Format information. The format information is encapsulated in a CMFormatDescription object (see CMFormatDescriptionRef). From the format description, you can get, for example, the pixel type and video dimensions using CMVideoFormatDescriptionGetCodecType and CMVideoFormatDescriptionGetDimensions, respectively.
● Metadata. Metadata are stored in a dictionary as an attachment. You use CMGetAttachment to retrieve the dictionary:

CMSampleBufferRef sampleBuffer = <#Get a sample buffer#>;
CFDictionaryRef metadataDictionary = CMGetAttachment(sampleBuffer, CFSTR("MetadataDictionary"), NULL);
if (metadataDictionary) {
    // Do something with the metadata.
}

Converting a CMSampleBuffer to a UIImage

The following function shows how you can convert a CMSampleBuffer to a UIImage object. You should consider your requirements carefully before using it. Performing the conversion is a comparatively expensive operation. It is appropriate, for example, to create a still image from a frame of video data taken every second or so. You should not use this as a means to manipulate every frame of video coming from a capture device in real time.

UIImage *imageFromSampleBuffer(CMSampleBufferRef sampleBuffer) {

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer.
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the number of bytes per row for the pixel buffer.
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height.
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space.
    static CGColorSpaceRef colorSpace = NULL;
    if (colorSpace == NULL) {
        colorSpace = CGColorSpaceCreateDeviceRGB();
        if (colorSpace == NULL) {
            // Handle the error appropriately; unlock the pixel buffer before bailing out.
            CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
            return nil;
        }
    }

    // Get the base address of the pixel buffer.
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the data size for contiguous planes of the pixel buffer.
    size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);

    // Create a Quartz direct-access data provider that uses data we supply.
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, baseAddress, bufferSize, NULL);
    // Create a bitmap image from data supplied by the data provider.
    CGImageRef cgImage = CGImageCreate(width, height, 8, 32, bytesPerRow,
        colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
        dataProvider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(dataProvider);

    // Create and return an image object to represent the Quartz image.
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return image;
}

Document Revision History

This table describes the changes to AV Foundation Programming Guide.

2011-10-12: Updated for iOS 5 to include references to release notes.
2011-04-28: First release for OS X v10.7.
2010-09-08: First version of a document that describes a low-level framework you use to play, inspect, create, edit, capture, and transcode media assets.
2010-08-16: TBD
Preferences and Settings Programming Guide

Contents

About Preferences and Settings
    At a Glance
        You Decide What Preferences You Want to Expose
        Apps Provide Their Own Preferences Interface
        Apps Access Preferences Using the User Defaults Object
        iCloud Stores Shared Preference and Configuration Data
        Defaults Are Grouped into Domains in OS X
        A Settings Bundle Manages Preferences for iOS Apps
    See Also
About the User Defaults System
    What Makes a Good Preference?
    Providing a Preference Interface
    The Organization of Preferences
        The Argument Domain
        The Application Domain
        The Global Domain
        The Languages Domains
        The Registration Domain
    Viewing Preferences Using the Defaults Tool
Accessing Preference Values
    Registering Your App’s Default Preferences
    Getting and Setting Preference Values
    Synchronizing and Detecting Preference Changes
    Managing Preferences Using Cocoa Bindings
    Managing Preferences Using Core Foundation
        Setting a Preference Value Using Core Foundation
        Getting a Preference Value Using Core Foundation
Storing Preferences in iCloud
    Strategies for Using the iCloud Key-Value Store
    Configuring Your App to Use the Key-Value Store
    Accessing Values in the Key-Value Store
    Defining the Scope of Key-Value Store Changes
Implementing an iOS Settings Bundle
    The Settings App Interface
    The Settings Bundle
    The Settings Page File Format
        Hierarchical Preferences
        Localized Resources
    Creating and Modifying the Settings Bundle
        Adding the Settings Bundle
        Preparing the Settings Page for Editing
        Configuring a Settings Page: A Tutorial
        Creating Additional Settings Page Files
    Debugging Preferences for Simulated Apps
Document Revision History

Figures, Tables, and Listings

About the User Defaults System
    Table 1-1  Options for displaying preferences to the user
    Table 1-2  Search order for domains
Accessing Preference Values
    Listing 2-1  Registering default preference values
    Listing 2-2  Writing a simple default
    Listing 2-3  Reading a simple default
Storing Preferences in iCloud
    Listing 3-1  Updating local preference values using iCloud
Implementing an iOS Settings Bundle
    Figure 4-1  Organizing preferences using child panes
    Figure 4-2  Formatted contents of the Root.plist file
    Figure 4-3  A root Settings page
    Table 4-1  Preference control types
    Table 4-2  Contents of the Settings.bundle directory
    Table 4-3  Root-level keys of a preferences Settings page file

About Preferences and Settings

Preferences are pieces of information that you store persistently and use to configure your app. Apps often expose preferences to users so that they can customize the appearance and behavior of the app. Most preferences are stored locally using the Cocoa preferences system—known as the user defaults system. Apps can also store preferences in a user’s iCloud account using the key-value store.

The user defaults system and key-value store are both designed for storing simple data types—strings, numbers, dates, Boolean values, URLs, data objects, and so forth—in a property list. The use of a property list also means you can organize your preference data using array and dictionary types. It is also possible to store other objects in a property list by encoding them into an NSData object first.
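As an example of that last point, here is a minimal sketch (the key name and URL are illustrative only) of encoding an object into an NSData value before storing it in the user defaults system, and decoding it again when the preference is read:

// Encode a non-property-list object (here, an NSURL) into an NSData object.
NSURL *homepageURL = [NSURL URLWithString:@"http://www.example.com/"];
NSData *urlData = [NSKeyedArchiver archivedDataWithRootObject:homepageURL];
[[NSUserDefaults standardUserDefaults] setObject:urlData forKey:@"HomepageURL"];

// Decode the object again on the way out.
NSData *storedData = [[NSUserDefaults standardUserDefaults] dataForKey:@"HomepageURL"];
NSURL *storedURL = storedData ? [NSKeyedUnarchiver unarchiveObjectWithData:storedData] : nil;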
At a Glance

Apps integrate preferences in several ways, including programmatically at various points throughout your code and as part of the user interface. Preferences are supported in both iOS and Mac apps.

You Decide What Preferences You Want to Expose

Preferences are different for each app, and it is up to you to decide what parts of your app you want to make configurable. Configuration involves checking the value of a stored preference from your code and taking action based on that value. Thus, the preference value itself should always be simple and have a specific meaning that is then implemented by your app.

Relevant section: “What Makes a Good Preference?” (page 8)

Apps Provide Their Own Preferences Interface

Because each app’s preferences are different, the app itself is responsible for deciding how best to present those preferences to the user, if at all. Both iOS and OS X provide some standard places for you to incorporate a preferences interface, but you are still responsible for designing that interface and displaying it at the appropriate time.

Relevant section: “Providing a Preference Interface” (page 8)

Apps Access Preferences Using the User Defaults Object

Apps access locally stored preferences using a user defaults object, which is either an NSUserDefaults object (iOS and OS X) or an NSUserDefaultsController object (OS X only). In addition to retrieving preference values, apps can use this object to register default values for preferences and manage other aspects of the preferences system.

Relevant chapter: “Accessing Preference Values” (page 13)

iCloud Stores Shared Preference and Configuration Data

Apps that support iCloud can put some of their preference data in the user’s iCloud account and make it available to instances of the app running on the user’s other devices. You use this capability to supplement (not replace) your app’s existing preferences data and provide a more coherent experience across the user’s devices. For example, a magazine app might store information about the page number and issue last read by the user so that the app running on a different device can show that same page.

Relevant chapter: “Storing Preferences in iCloud” (page 19)

Defaults Are Grouped into Domains in OS X

OS X preferences are grouped by domains so that system preferences can be differentiated from app preferences. Splitting preferences in this manner lets the user specify some preferences globally and then override one or more of those preferences inside an app.

Relevant section: “The Organization of Preferences” (page 9)

A Settings Bundle Manages Preferences for iOS Apps

In iOS, apps can display preferences from the Settings app, which is a good place to put preferences that the user does not need to configure frequently. To display preferences in the Settings app, an app’s bundle must include a special resource called a Settings bundle that defines the preferences to display, the proper way to display them, and the information needed to record the user’s selections.

Note: Apps are not required to use a Settings bundle to manage all preferences. For preferences that the user is likely to change frequently, the app can display its own custom interface for managing those preferences.
Relevant chapter: “Implementing an iOS Settings Bundle” (page 24)

See Also

For information about property lists, see Property List Programming Guide. For more advanced information about using Core Foundation to manage preferences, see Preferences Programming Topics for Core Foundation.

About the User Defaults System

The user defaults system manages the storage of preferences for each user. Most preferences are stored persistently and therefore do not change between subsequent launch cycles of your app. Apps use preferences to track user-initiated and program-initiated configuration changes.

What Makes a Good Preference?

When defining your app’s preferences, it is better to use simple values and data types whenever possible. The preferences system is built around property-list data types such as strings, numbers, and dates. Although you can use an NSData object to store arbitrary objects in preferences, doing so is not recommended in most cases.

Storing objects persistently means that your app has to decode that object at some point. In the case of preferences, a stored object means decoding the object every time you access the preference. It also means that a newer version of your app has to ensure that it is able to decode objects created and written to disk using an earlier version of your app, which is potentially error prone.

A better approach for preferences is to store simple strings and values and use them to create the objects your app needs. Storing simple values means that your app can always access the value. The only thing that changes from release to release is the interpretation of the simple value and the objects your app creates in response.

Providing a Preference Interface

For user-facing preferences, Table 1-1 lists the options for displaying those preferences to the user. As you can see from this table, most options involve the creation of a custom user interface for managing and presenting preferences. If you are creating an iOS app, you can use a Settings bundle to present preferences, but you should do so only for settings the user changes infrequently.

Table 1-1  Options for displaying preferences to the user
Preference                          iOS                OS X
Frequently changed preferences      Custom UI          Custom UI
Infrequently changed preferences    Settings bundle    Custom UI

Note: Examples of preferences that might change frequently include things like the volume levels or control options of a game. Examples of preferences that might change infrequently are the email address and server settings in the Mail app.

For iOS apps, it is ultimately up to you to decide whether it is appropriate to expose preferences from the Settings app or from inside your app. Preferences in Mac apps should be accessible from a Preferences menu item in the app menu. Cocoa apps created using the Xcode templates provide such a menu item for you automatically. It is your responsibility to present an appropriate user interface when the user chooses this menu item. You can provide that user interface by defining an action method in your app delegate that displays a custom preferences window and connecting that action method to the menu item in Interface Builder.

There is no standard way to display custom preferences from inside an iOS app.
You can integrate preferences in many ways, including using a separate tab in a tab-bar interface or using a custom button from one of your app’s screens. Preferences should generally be presented using a distinct view controller so that changes in preferences can be recorded when that view controller is dismissed by the user.

The Organization of Preferences

Preferences are grouped into domains, each of which has a name and a specific usage. For example, there’s a domain for app-specific preferences and another for systemwide preferences that apply to all apps. All preferences are stored and accessed on a per-user basis. There is no support for sharing preferences between users.

Each preference has three components:
● The domain in which it is stored
● Its name (specified as an NSString object)
● Its value, which can be any property-list object (NSData, NSString, NSNumber, NSDate, NSArray, or NSDictionary)

The lifetime of a preference depends on which domain you store it in. Some domains store preferences persistently by writing them to the user’s defaults database. Such preferences continue to exist from one app launch to the next. Other domains store preferences in a more volatile way, preserving preference values only for the life of the corresponding user defaults object.

A search for the value of a given preference proceeds through the domains in an NSUserDefaults object’s search list. Only domains in the search list are searched, and they are searched in the order shown in Table 1-2, starting with the NSArgumentDomain domain. A search ends when a preference with the specified name is found. If multiple domains contain the same preference, the value is taken from the domain nearest the beginning of the search list.

Table 1-2  Search order for domains
Domain                                              State
NSArgumentDomain                                    volatile
Application (identified by the app’s identifier)    persistent
NSGlobalDomain                                      persistent
Languages (identified by the language names)        volatile
NSRegistrationDomain                                volatile

The Argument Domain

The argument domain comprises values set from command-line arguments (if you started the app from the command line) and is identified by the NSArgumentDomain constant. Values set from the command line are automatically placed into this domain by the system. To add a value to this domain, specify the preference name on the command line (preceded by a hyphen) and follow it with the corresponding value. For example, the following command launches Xcode and sets the value of its IndexOnOpen preference to NO:

localhost> Xcode.app/Contents/MacOS/Xcode -IndexOnOpen NO

Preferences set from the command line temporarily override the established values stored in the user’s defaults database. In the preceding example, setting the IndexOnOpen preference to NO prevents Xcode from indexing projects automatically, even if the preference is set to YES in the user defaults database.

The Application Domain

The application domain contains app-specific preferences that are stored in the user defaults database of the current user. When you use the shared NSUserDefaults object (or an NSUserDefaultsController object in OS X) to write preferences, those preferences are automatically placed in this domain. Because this domain is app-specific, the contents of the domain are tied to your app’s bundle identifier.
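For instance, a minimal sketch (the key name is borrowed from the Xcode example above) of writing a preference that ends up in the application domain:

// Values written through the shared user defaults object are stored
// persistently in the application domain for the current user.
NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
[defaults setBool:NO forKey:@"IndexOnOpen"];

// Reading the value back searches the domains in the order shown in Table 1-2,
// so a command-line (argument domain) override would still take precedence.
BOOL indexOnOpen = [defaults boolForKey:@"IndexOnOpen"];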
The contents of this domain are stored in a file that is managed by the system. Currently, this file is located in the $HOME/Library/Preferences/ directory, where $HOME is either the app’s home directory or the user’s home directory (depending on the platform and whether your app is in a sandbox). The name of the user defaults database file is <bundle identifier>.plist, where <bundle identifier> is your app’s bundle identifier. You should not modify this file directly, but you can inspect it during debugging to make sure preference values are being written by your app.

The Global Domain

The global domain contains preferences that are applicable to all apps and is identified by the NSGlobalDomain constant. This domain is typically used by system frameworks to store system-wide values and should not be used by your app to store app-specific values. If you want to change the value of a preference in the global domain, write that same preference to the application domain with the new value.

Examples of how the system frameworks use this domain:
● Instances of the NSRulerView class store the user’s preferred measurement units in the AppleMeasurementUnits key. Using this storage location causes ruler views in all apps to use the same units.
● The system uses the AppleLanguages key to store the user’s preferred languages as an array of strings. For example, a user could specify English as the preferred language, followed by Spanish, French, German, Italian, and Swedish.

The Languages Domains

For each language in the AppleLanguages preference, the system records language-specific preference values in a domain whose name is based on the language name. Each language-specific domain contains preferences for the corresponding locale. Many classes in the Foundation framework (such as the NSDate, NSDateFormatter, NSTimeZone, NSString, and NSScanner classes) use this locale information to modify their behavior. For example, when you request a string representation of an NSCalendarDate object, the NSCalendarDate object uses the locale information to find the names of months and the days of the week for the user’s preferred language.

The Registration Domain

The registration domain defines the set of default values to use if a given preference is not set explicitly in one of the other domains. At launch time, an app can call the registerDefaults: method of NSUserDefaults to specify a default set of values for important preferences. When an app launches for the first time, most preferences have no values, so retrieving them would yield undefined results. Registering a set of default values ensures that your app always has a known good set of values to operate on.

The contents of the registration domain can be set only by using the registerDefaults: method.

Viewing Preferences Using the Defaults Tool

In OS X, the defaults command-line tool provides a way for you to examine the contents of the user defaults database. During app development, you might use this tool to validate the preferences your app is writing to disk. To do that, you would use a command of the following form from the Terminal app:

defaults read <domain name>

To read the contents of the global domain, you would use the following command:

defaults read NSGlobalDomain

For more information about using the defaults tool to read and write preference values, see the defaults man page.
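For example, assuming a hypothetical bundle identifier of com.example.MyApp and an illustrative key name, you might inspect and seed that app’s domain like this:

# Read all of the preferences in the app's domain.
defaults read com.example.MyApp
# Write a Boolean preference value into the app's domain.
defaults write com.example.MyApp CacheDataAggressively -bool YES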
Accessing Preference Values

You use the NSUserDefaults class to gain access to your app’s preferences. Each app is provided with a single instance of this class, accessible from the standardUserDefaults class method. You use the shared user defaults object to:
● Specify any default values for your app’s preferences at launch time.
● Get and set individual preference values stored in the app domain.
● Remove preference values.
● Examine the contents of the volatile preference domains.

Mac apps that use Cocoa bindings can use an NSUserDefaultsController object to set and get preferences automatically. You typically add such an object to the same nib file you use for displaying user-facing preferences. You bind your user interface controls to items in the user defaults controller, which handles the process of getting and setting values in the user defaults database.

Preference values must be one of the standard property-list object types: NSData, NSString, NSNumber, NSDate, NSArray, or NSDictionary. The NSUserDefaults class also provides built-in methods for storing NSURL objects as preference values. For more information about property lists and their contents, see Property List Programming Guide.

Registering Your App’s Default Preferences

At launch time, an app should register default values for any preferences that it expects to be present and valid. When you request the value of a preference that has never been set, the methods of the NSUserDefaults class return default values that are appropriate for the data type. For numerical scalar values, this typically means returning 0, but for strings and other objects it means returning nil. If these standard default values are not appropriate for your app, you can register your own default values using the registerDefaults: method. This method places your custom default values in the NSRegistrationDomain domain, which causes them to be returned when a preference is not explicitly set.

When calling the registerDefaults: method, you must provide a dictionary of all the default values you need to register. Listing 2-1 shows an example where an iOS app registers its default values early in the launch cycle. You can register default values at any time, of course, but you should always register them before attempting to retrieve any preference values.

Listing 2-1  Registering default preference values

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // Register the preference defaults early.
    NSDictionary *appDefaults = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                                            forKey:@"CacheDataAggressively"];
    [[NSUserDefaults standardUserDefaults] registerDefaults:appDefaults];

    // Other initialization...
    return YES;
}

When registering default values for scalar types, use an NSNumber object to specify the value for the number. If you want to register a preference whose value is a URL, use the archivedDataWithRootObject: method of NSKeyedArchiver to encode the URL in an NSData object first. Although you can use a similar technique for other types of objects, you should avoid doing so when a simpler option is available.

Getting and Setting Preference Values

You get and set preference values using the methods of the NSUserDefaults class. This class has methods for getting and setting preferences with scalar values of type Boolean, integer, float, and double.
It also has methods for getting and setting preferences whose value is an object of type NSData, NSDate, NSString, NSNumber, NSArray, NSDictionary, or NSURL.

There are two situations where you might get preference values and one where you might set them:
● Get preference values:
    ● When you need to use the value to configure your app’s behavior.
    ● When you need to display the value in your preferences interface.
● Set preference values when the user changes them in your preferences interface.

The following code shows how you might get a preference value in your code. In this example, the code retrieves the value of the CacheDataAggressively key, which is a custom key that the app might use to determine its caching strategy. Code like this can be used anywhere to handle custom configuration of your app. If you wanted to display this particular preference value to the user, you would use similar code to configure the controls of your preferences interface.

if ([[NSUserDefaults standardUserDefaults] boolForKey:@"CacheDataAggressively"]) {
    // Delete the backup file.
}

To set a preference value programmatically, you call the corresponding setter methods of NSUserDefaults. When setting object values, you must use the setObject:forKey: method. When calling this method, you must make sure that the object is one of the standard property-list types. The following example sets some preferences based on the state of the app’s preferences interface.

NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
if ([cacheAggressivelyButton state] == NSOnState) {
    // The user wants to cache files aggressively.
    [defaults setBool:YES forKey:@"CacheDataAggressively"];
    [defaults setObject:[NSDate dateWithTimeIntervalSinceNow:(3600 * 24 * 7)]
                 forKey:@"CacheExpirationDate"]; // Set a 1-week expiration
} else {
    // The user wants to use lazy caching.
    [defaults setBool:NO forKey:@"CacheDataAggressively"];
    [defaults removeObjectForKey:@"CacheExpirationDate"];
}

You do not have to display a preferences interface to manage all values. Your app can use preferences to cache interesting information. For example, NSWindow objects store their current location in the user defaults system. This data allows them to return to the same location the next time the user starts the app.

Synchronizing and Detecting Preference Changes

Because the NSUserDefaults class caches values, it is sometimes necessary to synchronize the cached values with the current contents of the user defaults database. Your app is not always the only entity modifying the user defaults database. In iOS, the Settings app can modify the values of preferences for apps that have a Settings bundle. In OS X, the system and other apps might modify preference values in response to user actions. For example, if the user changes preferred languages, the system writes the new values to the user defaults database.

In OS X v10.5 and later, the shared NSUserDefaults object synchronizes its caches automatically at periodic intervals. However, apps can call the synchronize method manually to force an update of the cached values.

To detect when changes to a preference value occur, apps can also register for the notification NSUserDefaultsDidChangeNotification.
The shared NSUserDefaults object sends this notification to your app whenever it detects a change to a preference located in one of the persistent domains. You can use this notification to respond to changes that might impact your user interface. For example, you could use it to detect changes to the user’s preferred language and update your app content appropriately.

Managing Preferences Using Cocoa Bindings

Mac apps can use Cocoa bindings to set preference values directly from their user interfaces. Modifying preferences using bindings involves adding an NSUserDefaultsController object to the appropriate nib files and binding the values of your controls to the preference values in the user defaults database. When your app shows the interface, the user defaults controller automatically loads values from the user defaults database and uses them to set the value of controls. Similarly, when the user changes the value in a control, the user defaults controller updates the value in the user defaults database.

For more information on how to use the NSUserDefaultsController class to bind preference values to your user interface, see “User Defaults and Bindings” in Cocoa Bindings Programming Topics.

Managing Preferences Using Core Foundation

The Core Foundation framework provides its own set of interfaces for accessing preferences stored in the user defaults database. Like the NSUserDefaults class, you can use Core Foundation functions to get and set preference values and synchronize the user defaults database. Unlike NSUserDefaults, you can use the Core Foundation functions to write preferences for different apps and on different computers. Note that modifying some preferences domains (those not belonging to the current app and user) requires root privileges (or admin privileges prior to OS X v10.6); for information on how to gain suitable privileges, see Authorization Services Programming Guide. Writing outside the app domain is not possible for apps installed in a sandbox.

For information about the Core Foundation functions for getting and setting preferences, see Preferences Utilities Reference.

Setting a Preference Value Using Core Foundation

Preferences are stored as key-value pairs. The key must be a CFString object, but the value can be any Core Foundation property-list value (see Property List Programming Topics for Core Foundation), including the container types. For example, you might have a key called defaultWindowWidth that defines the width in pixels of any new windows that your app creates. Its value would most likely be of type CFNumber. You might also decide to combine window width and height into a single preference called defaultWindowSize and make its value be a CFArray object containing two CFNumber objects.

The code in Listing 2-2 demonstrates how to create a simple preference for the app MyTextEditor. The example sets the default text color for the app to blue.

Listing 2-2  Writing a simple default

CFStringRef textColorKey = CFSTR("defaultTextColor");
CFStringRef colorBLUE = CFSTR("BLUE");

// Set up the preference.
CFPreferencesSetAppValue(textColorKey, colorBLUE, kCFPreferencesCurrentApplication);
// Write out the preference data.
CFPreferencesAppSynchronize(kCFPreferencesCurrentApplication);

Notice that CFPreferencesSetAppValue by itself is not sufficient to create the new preference.
A call to CFPreferencesAppSynchronize is required to actually save the value. If you are writing multiple preferences, it is more efficient to sync only once, after the last value has been set, than to sync after each individual value is set. For example, if you implement a preference pane, you might synchronize only when the user presses an OK button. In other cases you might not want to sync at all until the app quits—although note that if the app crashes, all unsaved preferences settings will be lost.

Getting a Preference Value Using Core Foundation

The simplest way to locate and retrieve a preference value is to use the CFPreferencesCopyAppValue function. This call searches through the various preference domains in order until it finds the key you have specified. If a preference has been set in a less specific domain—Any Application, for example—its value is retrieved with this call if a more specific version cannot be found. Listing 2-3 shows how to retrieve the text color preference saved in Listing 2-2 (page 17).

Listing 2-3  Reading a simple default

CFStringRef textColorKey = CFSTR("defaultTextColor");
CFStringRef textColor;

// Read the preference.
textColor = (CFStringRef)CFPreferencesCopyAppValue(textColorKey, kCFPreferencesCurrentApplication);
// When finished with the value, you must release it:
// CFRelease(textColor);

All values returned from preferences are immutable, even if you have just set the value using a mutable object.

Storing Preferences in iCloud

An app can use the iCloud key-value store to share small amounts of data with other instances of itself on the user’s other computers and iOS devices. The key-value store is intended for simple data types like those you might use for preferences. For example, a magazine app might store the current issue and page number being read by the user so that other instances of the app can open to the same page when launched. You should not use this store for large amounts of data or for complex data types.

To use the iCloud key-value store, do the following:
1. In Xcode, configure the com.apple.developer.ubiquity-kvstore-identifier entitlement for your app.
2. In your code, create the shared NSUbiquitousKeyValueStore object and register for change notifications.
3. Use the methods of NSUbiquitousKeyValueStore to get and set values.

Key-value data in iCloud is limited to simple property-list types (strings, numbers, dates, and so on).

Strategies for Using the iCloud Key-Value Store

The key-value store is not intended for storing large amounts of data. It is intended for storing configuration data, preferences, and small amounts of app-related data. To help you decide whether the key-value store is appropriate for your needs, consider the following:
● Each app is limited to 1 MB of total space in the key-value store. (There is also a separate per-key limit of 1 MB, and a maximum of 1024 keys are allowed.) Thus, you cannot use the key-value store to share large amounts of data.
● The key-value store supports only property-list types. Property-list types include simple types such as NSNumber, NSString, and NSDate objects. You can also store raw blocks of data in NSData objects and arrange all of the types using NSArray and NSDictionary objects.
● The key-value store is intended for storing data that changes infrequently.
If the apps on a device make frequent changes to the key-value store, the system may defer the synchronization of some changes in order to minimize the number of round trips to the server. The more frequently apps make changes, the more likely it is that later changes will be deferred and not show up on other devices right away.

● The key-value store is not a replacement for preferences or other local techniques for saving the same data. The purpose of the key-value store is to share data between apps, but if iCloud is not enabled or is not available on a given device, you still might want to keep a local copy of the data.

If you are using the key-value store to share preferences, one approach is to store the actual values in the user defaults database and synchronize them using the key-value store. (If you do not want to use the preferences system, you could also save the changes in a custom property-list file or some other local storage.) When you change the value of a key locally, write that change to both the user defaults database and to the iCloud key-value store at the same time. To receive changes from external sources, add an observer for the notification NSUbiquitousKeyValueStoreDidChangeExternallyNotification and use your handler method to detect which keys changed externally and update the corresponding data in the user defaults database. By doing this, your user defaults database always contains the correct configuration values. The iCloud key-value store simply becomes a mechanism for ensuring that the user defaults database has the most recent changes.

Configuring Your App to Use the Key-Value Store

In order to use the key-value store, an app must be explicitly configured with the com.apple.developer.ubiquity-kvstore-identifier entitlement. You use Xcode to enable this entitlement and specify its value for your app:
1. In your Xcode project, select the target for your app.
2. In the Summary tab, enable the Entitlements option.
3. Specify a value for the iCloud Key-Value Store field.

When you enable entitlements, Xcode automatically fills in a default value for the iCloud Key-Value Store field that is based on the bundle identifier of your app. For most apps, the default value is what you want. However, if your app shares its key-value storage with another app, you must specify the bundle identifier for the other app instead. For example, if you have a lite version of your app, you might want it to use the same key-value store as the paid version.

Enabling the entitlement is all you have to do to use the shared NSUbiquitousKeyValueStore object. As long as the entitlement is configured and contains a valid value, the key-value store object writes its data to the appropriate location in the user’s iCloud account. If there is a problem attaching to the specified iCloud container, any attempts to read or write key values will fail. To ensure the key-value store is configured properly and accessible, you should execute code similar to the following early in your app’s launch cycle:

NSUbiquitousKeyValueStore* store = [NSUbiquitousKeyValueStore defaultStore];
[[NSNotificationCenter defaultCenter] addObserver:self
        selector:@selector(updateKVStoreItems:)
        name:NSUbiquitousKeyValueStoreDidChangeExternallyNotification
        object:store];
[store synchronize];

Creating the key-value store object early in your app’s launch cycle is recommended because it ensures that your app receives updates from iCloud in a timely manner. The best way to determine whether changes have been made to keys and values is to register for the notification NSUbiquitousKeyValueStoreDidChangeExternallyNotification. At launch time, you should call the synchronize method manually to detect whether any changes were made externally. You do not need to call that method at other times during your app’s execution.

For more information about how to configure entitlements for an iOS app, see “Configuring Apps” in Tools Workflow Guide for iOS.

Accessing Values in the Key-Value Store

You get and set key-value store values using the methods of the NSUbiquitousKeyValueStore class. This class has methods for getting and setting preferences with scalar values of type Boolean, long long, and double. It also has methods for getting and setting keys whose values are NSData, NSDate, NSString, NSNumber, NSArray, or NSDictionary objects.

If you are using the key-value store as a way to update locally stored preferences, you could use code similar to that in Listing 3-1 to coordinate updates to the user defaults database. This example assumes that you use the same key names and corresponding values in both iCloud and the user defaults database. It also assumes that you previously registered the updateKVStoreItems: method as the method to call in response to the notification NSUbiquitousKeyValueStoreDidChangeExternallyNotification.

Listing 3-1  Updating local preference values using iCloud

- (void)updateKVStoreItems:(NSNotification*)notification {
    // Get the list of keys that changed.
    NSDictionary* userInfo = [notification userInfo];
    NSNumber* reasonForChange = [userInfo objectForKey:NSUbiquitousKeyValueStoreChangeReasonKey];
    NSInteger reason = -1;

    // If a reason could not be determined, do not update anything.
    if (!reasonForChange)
        return;

    // Update only for changes from the server.
    reason = [reasonForChange integerValue];
    if ((reason == NSUbiquitousKeyValueStoreServerChange) ||
        (reason == NSUbiquitousKeyValueStoreInitialSyncChange)) {
        // If something is changing externally, get the changes
        // and update the corresponding keys locally.
        NSArray* changedKeys = [userInfo objectForKey:NSUbiquitousKeyValueStoreChangedKeysKey];
        NSUbiquitousKeyValueStore* store = [NSUbiquitousKeyValueStore defaultStore];
        NSUserDefaults* userDefaults = [NSUserDefaults standardUserDefaults];

        // This loop assumes you are using the same key names in both
        // the user defaults database and the iCloud key-value store.
        for (NSString* key in changedKeys) {
            id value = [store objectForKey:key];
            [userDefaults setObject:value forKey:key];
        }
    }
}

Defining the Scope of Key-Value Store Changes

Every call to one of the NSUbiquitousKeyValueStore methods is treated as a single atomic transaction. When transferring the data for that transaction to iCloud, the whole transaction either fails or succeeds. If it succeeds, all of the keys are written to the store; if it fails, no keys are written. There is no partial writing of keys to the store.
When a failure occurs, the system also generates a NSUbiquitousKeyValueStoreDidChangeExternallyNotification notification that containsthe reason for the failure. If you are using the key-value store, you should use that notification to detect possible problems. Storing Preferences in iCloud Defining the Scope of Key-Value Store Changes 2012-03-01 | © 2012 Apple Inc. All Rights Reserved. 22If you have a group of keys whose values must all be updated at the same time in order to be valid, save them together in a single transaction. To write multiple keys and values in a single transaction, create an NSDictionary object with all of the keys and values. Then write the dictionary object to the key-value store using the setDictionary:forKey: method. Writing an entire dictionary of changes ensures that all of the keys are written or none of them are. Storing Preferences in iCloud Defining the Scope of Key-Value Store Changes 2012-03-01 | © 2012 Apple Inc. All Rights Reserved. 23In iOS, the Foundation framework provides the low-level mechanism for storing the preference data. Apps then have two options for presenting preferences: ● Display preferences inside the app. ● Use a Settings bundle to manage preferences from the Settings app. Which option you choose depends on how you expect users to interact with preferences. The Settings bundle is generally the preferred mechanism for displaying preferences. However, games and other apps that contain configuration options or other frequently accessed preferences might want to present them inside the app instead. Regardless of how you present them, you use the NSUserDefaults class to access preference values from your code. This chapter focuses on the creation of a Settings bundle for your app. A Settings bundle contains files that describe the structure and presentation style of your preferences. The Settings app uses this information to create an entry for your app and to display your custom preference pages. For guidelines on how to manage and present settings and configuration options, see iOS Human Interface Guidelines. The Settings App Interface The Settings app implements a hierarchical set of pages for navigating app preferences. The main page of the Settings app liststhe system and third-party apps whose preferences can be customized. Selecting a third-party app takes the user to the preferences for that app. Every app with a Settings bundle has at least one page of preferences, referred to as the main page . If your app has only a few preferences, the main page may be the only one you need. If the number of preferences gets too large to fit on the main page, however, you can create child pages that link off the main page or other child pages. There is no specific limit to the number of child pages you can create, but you should strive to keep your preferences as simple and easy to navigate as possible. The contents of each page consists of one or more controls that you configure. Table 4-1 lists the types of controls supported by the Settings app and describes how you might use each type. The table also lists the raw key name stored in the configuration files of your Settings bundle. 2012-03-01 | © 2012 Apple Inc. All Rights Reserved. 24 Implementing an iOS Settings BundleTable 4-1 Preference control types Controltype Description The text field type displays a title (optional) and an editable text field. You can use this type for preferences that require the user to specify a custom string value. The key for this type is PSTextFieldSpecifier. 
Text field The title type displays a read-only string value. You can use thistype to display read-only preference values. (If the preference contains cryptic or nonintuitive values, this type lets you map the possible values to custom strings.) The key for this type is PSTitleValueSpecifier. Title The toggle switch type displays an ON/OFF toggle button. You can use this type to configure a preference that can have only one of two values. Although you typically use this type to represent preferences containing Boolean values, you can also use it with preferences containing non-Boolean values. The key for this type is PSToggleSwitchSpecifier. Toggle switch The slider type displays a slider control. You can use this type for a preference that represents a range of values. The value for this type is a real number whose minimum and maximum value you specify. The key for this type is PSSliderSpecifier. Slider The multivalue type lets the user select one value from a list of values. You can use this type for a preference that supports a set of mutually exclusive values. The values can be of any type. The key for this type is PSMultiValueSpecifier. Multivalue The group type is for organizing groups of preferences on a single page. The group type does not represent a configurable preference. It simply contains a title string that is displayed immediately before one or more configurable preferences. The key for this type is PSGroupSpecifier. Group The child pane type lets the user navigate to a new page of preferences. You use this type to implement hierarchical preferences. For more information on how you configure and use this preference type, see “Hierarchical Preferences” (page 27). The key for this type is PSChildPaneSpecifier. Child pane For detailed information about the format of each preference type, see Settings Application Schema Reference . To learn how to create and edit Settings page files, see “Creating and Modifying the Settings Bundle” (page 29). Implementing an iOS Settings Bundle The Settings App Interface 2012-03-01 | © 2012 Apple Inc. All Rights Reserved. 25The Settings Bundle A Settings bundle hasthe name Settings.bundle and residesin the top-level directory of your app’s bundle. This bundle contains one or more Settings page files that describe the individual pages of preferences. It may also include other support files needed to display your preferences, such as images or localized strings. Table 4-2 lists the contents of a typical Settings bundle. Table 4-2 Contents of the Settings.bundle directory Item name Description The Settings page file containing the preferences for the root page. The name of thisfile must be Root.plist. The contents of thisfile are described in more detail in “The Settings Page File Format” (page 27). Root.plist If you build a set of hierarchical preferences using child panes, the contents for each child pane are stored in a separate Settings page file. You are responsible for naming these files and associating them with the correct child pane. Additional .plist files These directories store localized string resources for your Settings page files. Each directory contains a single strings file, whose title is specified in your Settings page file. The strings files provide the localized strings to display for your preferences. One or more .lproj directories If you use the slider control, you can store the images for your slider in the top-level directory of the bundle. 
Additional images In addition to the Settings bundle, the app bundle can contain a custom icon for your app settings. The Settings app displays the icon you provide next to the entry for your app preferences. For information about app icons and how you specify them, see iOS App Programming Guide . When the Settings app launches, it checks each custom app for the presence of a Settings bundle. For each custom bundle it finds, it loadsthat bundle and displaysthe corresponding app’s name and icon in the Settings main page. When the user taps the row belonging to your app, Settings loads the Root.plist Settings page file for your Settings bundle and uses that file to build your app’s main page of preferences. In addition to loading your bundle’s Root.plist Settings page file, the Settings app also loads any language-specific resources for that file, as needed. Each Settings page file can have an associated .strings file containing localized values for any user-visible strings. As it prepares your preferences for display, the Settings app looksforstring resourcesin the user’s preferred language and substitutesthem in your preferences page prior to display. Implementing an iOS Settings Bundle The Settings Bundle 2012-03-01 | © 2012 Apple Inc. All Rights Reserved. 26The Settings Page File Format Each Settings page file is stored in the iPhone Settings property-list file format, which is a structured file format. The simplest way to edit Settings page files is to use the built-in editor facilities of Xcode; see “Preparing the Settings Page for Editing” (page 29). You can also edit property-list files using the Property List Editor app that comes with the Xcode tools. Note: Xcode converts any XML-based property files in your project to binary format when building your app. This conversion saves space and is done for you automatically. The root element of each Settings page file contains the keys listed in Table 4-3. Only one key is actually required, but it is recommended that you include both of them. Table 4-3 Root-level keys of a preferences Settings page file Key Type Value The value for this key is an array of dictionaries, with each dictionary containing the information for a single control. For a list of control types, see Table 4-1 (page 25). For a description of the keys associated with each control, see Settings Application Schema Reference . PreferenceSpecifiers Array (required) The name of the strings file associated with this file. A copy of this file (with appropriate localized strings) should be located in each of your bundle’s language-specific project directories. If you do not include this key, the strings in this file are not localized. For information on how these strings are used, see “Localized Resources” (page 28). StringsTable String Hierarchical Preferences If you plan to organize your preferences hierarchically, each page you define must have its own separate .plist file. Each .plist file contains the set of preferences displayed only on that page. Your app’s main preferences page is always stored in a file called Root.plist. Additional pages can be given any name you like. To specify a link between a parent page and a child page, you include a child pane control in the parent page. A child pane control creates a row that, when tapped, displays a new page of settings. The File key of the child pane control identifies the name of the .plist file with the contents of the child page. 
The Title key identifies the title of the child page; this title is also used as the text of the control used to display the child page. The Settings app automatically provides navigation controls on the child page to allow the user to navigate back to the parent page.

Figure 4-1 shows how this hierarchical set of pages works. The left side of the figure shows the .plist files, and the right side shows the relationships between the corresponding pages.

Figure 4-1 Organizing preferences using child panes (Root.plist, Sounds.plist, and General.plist on the left; the root, Sounds, and General pages they define on the right)

For more information about child pane controls and their associated keys, see Settings Application Schema Reference.

Localized Resources

Because preferences contain user-visible strings, you should provide localized versions of those strings with your Settings bundle. Each page of preferences can have an associated .strings file for each localization supported by your bundle. When the Settings app encounters a key that supports localization, it checks the appropriately localized .strings file for a matching key. If it finds one, it displays the value associated with that key.

When looking for localized resources such as .strings files, the Settings app follows the same rules that other iOS apps follow. It first tries to find a localized version of the resource that matches the user’s preferred language setting. If no such resource exists, an appropriate fallback language is selected. For information about the format of strings files, language-specific project directories, and how language-specific resources are retrieved from bundles, see Internationalization Programming Topics.

Creating and Modifying the Settings Bundle

Xcode provides a template for adding a Settings bundle to your current project. The default Settings bundle contains a Root.plist file and a default language directory for storing any localized resources. You can expand this bundle as needed to include additional property list files and resources needed by your Settings bundle.

Adding the Settings Bundle

To add a Settings bundle to your Xcode project:

1. Choose File > New > New File.
2. Under iOS, choose Resource, and then select the Settings Bundle template.
3. Name the file Settings.bundle.

In addition to adding a new Settings bundle to your project, Xcode automatically adds that bundle to the Copy Bundle Resources build phase of your app target. Thus, all you have to do is modify the property list files of your Settings bundle and add any needed resources.

The new Settings bundle has the following structure:

Settings.bundle/
    Root.plist
    en.lproj/
        Root.strings

Preparing the Settings Page for Editing

Before editing any of the property-list files in your Settings bundle, you should configure the Xcode editor to format the contents of those files as iPhone settings. Xcode does this automatically for the Root.plist file, but you may need to format additional property-list files manually. To format a file as iPhone Settings, do the following:

1. Select the file.
2. Control-click the editor window and choose Property List Type > iPhone Settings plist if it is not already chosen.

Formatting a property list makes it easier to understand and edit the file’s contents. Xcode substitutes human-readable strings (as shown in Figure 4-2) that are appropriate for the selected format.

Figure 4-2 Formatted contents of the Root.plist file

Configuring a Settings Page: A Tutorial

This section shows you how to configure a Settings page to display the controls you want. The goal of the tutorial is to create a page like the one in Figure 4-3. If you have not yet created a Settings bundle for your project, you should do so as described in “Adding the Settings Bundle” (page 29) before proceeding with these steps.

Figure 4-3 A root Settings page

1. Disclose the Preference Items key to display the default items that come with the template.

2. Change the title of Item 0 to Sound.
● Disclose Item 0 of Preference Items.
● Change the value of the Title key from Group to Sound.
● Leave the Type key set to Group.
● Click the disclosure triangle of the item to hide its contents.

3. Create the first toggle switch for the renamed Sound group.
● Select Item 2 (the toggle switch item) of Preference Items and choose Edit > Cut.
● Select Item 0 and choose Edit > Paste. (This moves the toggle switch item in front of the text field item.)
● Disclose the toggle switch item to reveal its configuration keys.
● Change the value of the Title key to Play Sounds.
● Change the value of the Identifier key to play_sounds_preference.
● Click the disclosure triangle of the item to hide its contents.

4. Create a second toggle switch for the Sound group.
● Select Item 1 (the Play Sounds toggle switch).
● Choose Edit > Copy.
● Choose Edit > Paste to place a copy of the toggle switch right after the first one.
● Disclose the new toggle switch item to reveal its configuration keys.
● Change the value of its Title key to 3D Sound.
● Change the value of its Identifier key to 3D_sound_preference.
● Click the disclosure triangle of the item to hide its contents.

At this point, you have finished the first group of settings and are ready to create the User Info group.

5. Change Item 3 into a Group control and name it User Info.
● Click Item 3 in the Preference Items. This displays a pop-up menu with a list of item types.
● From the pop-up menu, choose Group to change the type of the control.
● Disclose the contents of Item 3.
● Set the value of the Title key to User Info.
● Click the disclosure triangle of the item to hide its contents.

6. Create the Name field.
● Select Item 4 in the Preference Items.
● Using the pop-up menu, change its type to Text Field.
● Set the value of the Title key to Name.
● Set the value of the Identifier key to user_name.
● Click the disclosure triangle of the item to hide its contents.

7. Create the Experience Level settings.
● Select Item 4.
● Control-click the editor window and select Add Row to add a new item.
● Set the type of the new item to Multi Value.
● Disclose the item’s contents and set its title to Experience Level, its identifier to experience_preference, and its default value to 0.
● With the Default Value key selected, Control-click and select Add Row to add a Titles array.
● Select the Titles array and press Return to add a new subitem.
● Add two more subitems to create a total of three items.
● Set the values of the subitems to Beginner, Expert, and Master.
● Hide the key’s subitems.
● Add a new item for the Values array.
● Add three subitems to the Values array and set their values to 0, 1, and 2.
● Hide the contents of Item 5.

8. Add the final group to your settings page.
● Create a new item and set its type to Group and its title to Gravity.
● Create another new item and set its type to Slider, its identifier to gravity_preference, its default value to 1, and its maximum value to 2.

Creating Additional Settings Page Files

The Settings Bundle template includes the Root.plist file, which defines your app’s top Settings page. To define additional Settings pages, you must add additional property list files to your Settings bundle. To add a property list file to your Settings bundle in Xcode, do the following:

1. Choose File > New > New File.
2. Under iOS, select Resource, and then select the Property List template.
3. Select the new file to display its contents in the editor.
4. Control-click the editor pane and choose Property List Type > iPhone Settings plist to format the contents.
5. Control-click the editor pane again and choose Add Row to add a new key.
6. Add and configure any additional keys you need.

After adding a new Settings page to your Settings bundle, you can edit the page’s contents as described in “Configuring a Settings Page: A Tutorial” (page 31). To display the settings for your page, you must reference it from a child pane control as described in “Hierarchical Preferences” (page 27).

Note: In Xcode 4, adding a property-list file to your project does not automatically associate it with your Settings bundle. You must use the Finder to move any additional property-list files into your Settings bundle.

Debugging Preferences for Simulated Apps

When running your app, iOS Simulator stores any preferences values for your app in ~/Library/Application Support/iOS Simulator/User/Applications/<App ID>/Library/Preferences, where <App ID> is a programmatically generated directory name that iOS uses to identify your app. Each time you build your app, Xcode preserves your app preferences and other relevant library files. If you want to remove the current preferences for testing purposes, you can delete the app from Simulator or choose Reset Contents and Settings from the iOS Simulator menu.

Document Revision History

This table describes the changes to Preferences and Settings Programming Guide.

2012-03-01: Updated the document to reflect new limits for key and value sizes. Updated the document to include information about Settings bundles and iOS in general. Also incorporated iCloud information.
2011-10-12: Removed the articles on storing NSColor objects and using Cocoa bindings and now link to their locations instead.
Changed document name from User Defaults Programming Topics. 2007-10-31 Updated information about periodic autosave behavior. 2007-01-08 Corrected typos and capitalization mistakes. Added overview of procedure forstoring non-property-list objectsin user defaults, and linked to related article. 2006-11-07 2006-09-05 Made small additions to the content. Changed title from "User Defaults." Expanded explanation of user defaults in introduction. Noted requirement that a default’s value must be a property list value at the beginning of the “Using NSUserDefaults” article. Included an article that describes the use of NSUserDefaultsController. Corrected minor typographical errors. 2005-08-11 2004-02-03 Added article “Storing NSColor in User Defaults”. Linked to the Core Foundation Preferences Programming Topic, which was also incorrectly named. 2003-05-09 2012-03-01 | © 2012 Apple Inc. All Rights Reserved. 35 Document Revision HistoryDate Notes Added link in limitations area to CFPreferences. Corrected class name in Defaults Domains Concept. 2003-01-13 Revision history was added to existing topic. It will be used to record changes to the content of the topic. 2002-11-12 Document Revision History 2012-03-01 | © 2012 Apple Inc. All Rights Reserved. 36Apple Inc. © 2012 Apple Inc. All rights reserved. No part of this publication may be reproduced, stored in a retrievalsystem, or transmitted, in any form or by any means, mechanical, electronic, photocopying, recording, or otherwise, without prior written permission of Apple Inc., with the following exceptions: Any person is hereby authorized to store documentation on a single computer for personal use only and to print copies of documentation for personal use provided that the documentation contains Apple’s copyright notice. No licenses, express or implied, are granted with respect to any of the technology described in this document. Apple retains all intellectual property rights associated with the technology described in this document. This document is intended to assist application developers to develop applications only for Apple-labeled computers. Apple Inc. 1 Infinite Loop Cupertino, CA 95014 408-996-1010 Apple, the Apple logo, Cocoa, Finder, iPhone, Mac, OS X, and Xcode are trademarks of Apple Inc., registered in the U.S. and other countries. .Mac and iCloud are service marks of Apple Inc., registered in the U.S. and other countries. iOS is a trademark or registered trademark of Cisco in the U.S. and other countries and is used under license. Even though Apple has reviewed this document, APPLE MAKES NO WARRANTY OR REPRESENTATION, EITHER EXPRESS OR IMPLIED, WITH RESPECT TO THIS DOCUMENT, ITS QUALITY, ACCURACY, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.ASARESULT, THISDOCUMENT IS PROVIDED “AS IS,” AND YOU, THE READER, ARE ASSUMING THE ENTIRE RISK AS TO ITS QUALITY AND ACCURACY. IN NO EVENT WILL APPLE BE LIABLE FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL,OR CONSEQUENTIAL DAMAGES RESULTING FROM ANY DEFECT OR INACCURACY IN THIS DOCUMENT, even if advised of the possibility of such damages. THE WARRANTY AND REMEDIES SET FORTH ABOVE ARE EXCLUSIVE AND IN LIEU OF ALL OTHERS, ORAL OR WRITTEN, EXPRESS OR IMPLIED. No Apple dealer, agent, or employee is authorized to make any modification, extension, or addition to this warranty. Some states do not allow the exclusion or limitation of implied warranties or liability for incidental or consequential damages, so the above limitation or exclusion may not apply to you. 
This warranty gives you specific legal rights, and you may also have other rights which vary from state to state. OpenGL Programming Guide for MacContents About OpenGL for OS X 11 At a Glance 11 OpenGL Is a C-based, Platform-Neutral API 12 Different Rendering Destinations Require Different Setup Commands 12 OpenGL on Macs Exists in a Heterogenous Environment 12 OpenGL Helps Applications Harness the Power of Graphics Processors 13 Concurrency in OpenGL Applications Requires Additional Effort 13 Performance Tuning Allows Your Application to Provide an Exceptional User Experience 14 How to Use This Document 14 Prerequisites 15 See Also 15 OpenGL on the Mac Platform 17 OpenGL Concepts 17 OpenGL Implements a Client-Server Model 18 OpenGL Commands Can Be Executed Asynchronously 18 OpenGL Commands Are Executed In Order 19 OpenGL Copies Client Data at Call-Time 19 OpenGL Relies on Platform-Specific Libraries For Critical Functionality 19 OpenGL in OS X 20 Accessing OpenGL Within Your Application 21 OpenGL APIs Specific to OS X 22 Apple-Implemented OpenGL Libraries 23 Terminology 24 Renderer 24 Renderer and Buffer Attributes 24 Pixel Format Objects 24 OpenGL Profiles 25 Rendering Contexts 25 Drawable Objects 25 Virtual Screens 26 Offline Renderer 31 Running an OpenGL Program in OS X 31 Making Great OpenGL Applications on the Macintosh 33 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 2Drawing to a Window or View 35 General Approach 35 Drawing to a Cocoa View 36 Drawing to an NSOpenGLView Class: A Tutorial 37 Drawing OpenGL Content to a Custom View 40 Optimizing OpenGL for High Resolution 44 Enable High-Resolution Backing for an OpenGL View 44 Set Up the Viewport to Support High Resolution 45 Adjust Model and Texture Assets 46 Check for Calls Defined in Pixel Dimensions 46 Tune OpenGL Performance for High Resolution 47 Use a Layer-Backed View to Overlay Text on OpenGL Content 48 Use an Application Window for Fullscreen Operation 49 Convert the Coordinate Space When Hit Testing 49 Drawing to the Full Screen 50 Creating a Full-Screen Application 50 52 Drawing Offscreen 53 Rendering to a Framebuffer Object 53 Using a Framebuffer Object as a Texture 54 Using a Framebuffer Object as an Image 58 Rendering to a Pixel Buffer 60 Setting Up a Pixel Buffer for Offscreen Drawing 61 Using a Pixel Buffer as a Texture Source 61 Rendering to a Pixel Buffer on a Remote System 63 Choosing Renderer and Buffer Attributes 64 OpenGL Profiles (OS X v10.7) 64 Buffer Size Attribute Selection Tips 65 Ensuring That Back Buffer Contents Remain the Same 66 Ensuring a Valid Pixel Format Object 66 Ensuring a Specific Type of Renderer 67 Ensuring a Single Renderer for a Display 68 Allowing Offline Renderers 69 OpenCL 70 Deprecated Attributes 70 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 
3 ContentsWorking with Rendering Contexts 72 Update the Rendering Context When the Renderer or Geometry Changes 72 Tracking Renderer Changes 73 Updating a Rendering Context for a Custom Cocoa View 73 Context Parameters Alter the Context’s Behavior 76 Swap Interval Allows an Application to Synchronize Updates to the Screen Refresh 76 Surface Opacity Specifies How the OpenGL Surface Blends with Surfaces Behind It 77 Surface Drawing Order Specifies the Position of the OpenGL Surface Relative to the Window 77 Determining Whether Vertex and Fragment Processing Happens on the GPU 78 Controlling the Back Buffer Size 78 Sharing Rendering Context Resources 79 Determining the OpenGL Capabilities Supported by the Renderer 83 Detecting Functionality 83 Guidelines for Code That Checks for Functionality 87 OpenGL Renderer Implementation-Dependent Values 88 OpenGL Application Design Strategies 89 Visualizing OpenGL 89 Designing a High-Performance OpenGL Application 91 Update OpenGL Content Only When Your Data Changes 94 Synchronize with the Screen Refresh Rate 96 Avoid Synchronizing and Flushing Operations 96 Using glFlush Effectively 97 Avoid Querying OpenGL State 98 Use Fences for Finer-Grained Synchronization 98 Allow OpenGL to Manage Your Resources 99 Use Double Buffering to Avoid Resource Conflicts 100 Be Mindful of OpenGL State Variables 101 Replace State Changes with OpenGL Objects 102 Use Optimal Data Types and Formats 102 Use OpenGL Macros 103 Best Practices for Working with Vertex Data 104 Understand How Vertex Data Flows Through OpenGL 105 Techniques for Handling Vertex Data 107 Vertex Buffers 107 Using Vertex Buffers 108 Buffer Usage Hints 110 Flush Buffer Range Extension 113 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 4 ContentsVertex Array Range Extension 113 Vertex Array Object 116 Best Practices for Working with Texture Data 118 Using Extensions to Improve Texture Performance 119 Pixel Buffer Objects 121 Apple Client Storage 124 Apple Texture Range and Rectangle Texture 125 Combining Client Storage with Texture Ranges 127 Optimal Data Formats and Types 128 Working with Non–Power-of-Two Textures 129 Creating Textures from Image Data 131 Creating a Texture from a Cocoa View 131 Creating a Texture from a Quartz Image Source 133 Getting Decompressed Raw Pixel Data from a Source Image 135 Downloading Texture Data 136 Double Buffering Texture Data 137 Customizing the OpenGL Pipeline with Shaders 139 Shader Basics 141 Advanced Shading Extensions 142 Transform Feedback 142 GPU Shader 4 143 Geometry Shaders 143 Uniform Buffers 143 Techniques for Scene Antialiasing 144 Guidelines 145 General Approach 145 Hinting for a Specific Antialiasing Technique 147 Concurrency and OpenGL 148 Identifying Whether an OpenGL Application Can Benefit from Concurrency 149 OpenGL Restricts Each Context to a Single Thread 149 Strategies for Implementing Concurrency in OpenGL Applications 150 Multithreaded OpenGL 150 Perform OpenGL Computations in a Worker Task 151 Use Multiple OpenGL Contexts 153 Guidelines for Threading OpenGL Applications 154 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 
5 ContentsTuning Your OpenGL Application 155 Gathering and Analyzing Baseline Performance Data 156 Using OpenGL Driver Monitor to Measure Stalls 161 Identifying Bottlenecks with Shark 161 Legacy OpenGL Functionality by Version 163 Version 1.1 163 Version 1.2 164 Version 1.3 165 Version 1.4 165 Version 1.5 166 Version 2.0 166 Version 2.1 167 Updating an Application to Support the OpenGL 3.2 Core Specification 168 Removed Functionality 168 Extension Changes on OS X 169 Setting Up Function Pointers to OpenGL Routines 171 Obtaining a Function Pointer to an Arbitrary OpenGL Entry Point 171 Initializing Entry Points 172 Document Revision History 175 Glossary 179 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 6 ContentsFigures, Tables, and Listings OpenGL on the Mac Platform 17 Figure 1-1 OpenGL provides the reflections in iChat 17 Figure 1-2 OpenGL client-server model 18 Figure 1-3 Graphics platform model 18 Figure 1-4 MacOS X OpenGL driver model 20 Figure 1-5 Layers of OpenGL for OS X 21 Figure 1-6 The programing interfaces used for OpenGL content 22 Figure 1-7 Data flow through OpenGL 26 Figure 1-8 A virtual screen displays what the user sees 27 Figure 1-9 Two virtual screens 28 Figure 1-10 A virtual screen can represent more than one physical screen 29 Figure 1-11 Two virtual screens and two graphics cards 30 Figure 1-12 The flow of data through OpenGL 31 Drawing to a Window or View 35 Figure 2-1 OpenGL content in a Cocoa view 35 Figure 2-2 The output from the Golden Triangle program 39 Listing 2-1 The interface for MyOpenGLView 37 Listing 2-2 Include OpenGL/gl.h 38 Listing 2-3 The drawRect: method for MyOpenGLView 38 Listing 2-4 Code that draws a triangle using OpenGL commands 38 Listing 2-5 The interface for a custom OpenGL view 40 Listing 2-6 The initWithFrame:pixelFormat: method 41 Listing 2-7 The lockFocus method 42 Listing 2-8 The drawRect method for a custom view 42 Listing 2-9 Detaching the context from a drawable object 43 Optimizing OpenGL for High Resolution 44 Figure 3-1 Enabling high-resolution backing for an OpenGL view 45 Figure 3-2 A text overlay scales automatically for standard resolution (left) and high resolution (right) 48 Listing 3-1 Setting up the viewport for drawing 45 Drawing to the Full Screen 50 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 
7Figure 4-1 Drawing OpenGL content to the full screen 50 Drawing Offscreen 53 Listing 5-1 Setting up a framebuffer for texturing 57 Listing 5-2 Setting up a renderbuffer for drawing images 59 Choosing Renderer and Buffer Attributes 64 Table 6-1 Renderer types and pixel format attributes 67 Listing 6-1 Using the CGL API to create a pixel format object 66 Listing 6-2 Setting an NSOpenGLContext object to use a specific display 68 Listing 6-3 Setting a CGL context to use a specific display 69 Working with Rendering Contexts 72 Figure 7-1 A fixed size back buffer and variable size front buffer 79 Figure 7-2 Shared contexts attached to the same drawable object 80 Figure 7-3 Shared contexts and more than one drawable object 80 Listing 7-1 Handling context updates for a custom view 74 Listing 7-2 Using CGL to set up synchronization 76 Listing 7-3 Using CGL to set surface opacity 77 Listing 7-4 Using CGL to set surface drawing order 77 Listing 7-5 Using CGL to check whether the GPU is processing vertices and fragments 78 Listing 7-6 Using CGL to set up back buffer size control 79 Listing 7-7 Setting up an NSOpenGLContext object for sharing 81 Listing 7-8 Setting up a CGL context for sharing 82 Determining the OpenGL Capabilities Supported by the Renderer 83 Table 8-1 Common OpenGL renderer limitations 88 Table 8-2 OpenGL shader limitations 88 Listing 8-1 Checking for OpenGL functionality 84 Listing 8-2 Setting up a valid rendering context to get renderer functionality information 86 OpenGL Application Design Strategies 89 Figure 9-1 OpenGL graphics pipeline 90 Figure 9-2 OpenGL client-server architecture 91 Figure 9-3 Application model for managing resources 92 Figure 9-4 Single-buffered vertex array data 100 Figure 9-5 Double-buffered vertex array data 101 Listing 9-1 Setting up a Core Video display link 94 Listing 9-2 Setting up synchronization 96 Listing 9-3 Disabling state variables 102 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 8 Figures, Tables, and ListingsListing 9-4 Using CGL macros 103 Best Practices for Working with Vertex Data 104 Figure 10-1 Vertex data sets can be quite large 104 Figure 10-2 Vertex data path 105 Figure 10-3 Immediate mode requires a copy of the current vertex data 105 Listing 10-1 Submitting vertex data using glDrawElements. 
106 Listing 10-2 Using the vertex buffer object extension with dynamic data 109 Listing 10-3 Using the vertex buffer object extension with static data 110 Listing 10-4 Geometry with different usage patterns 111 Listing 10-5 Using the vertex array range extension with dynamic data 115 Listing 10-6 Using the vertex array range extension with static data 116 Best Practices for Working with Texture Data 118 Figure 11-1 Textures add realism to a scene 118 Figure 11-2 Texture data path 119 Figure 11-3 Data copies in an OpenGL program 120 Figure 11-4 The client storage extension eliminates a data copy 124 Figure 11-5 The texture range extension eliminates a data copy 126 Figure 11-6 Combining extensions to eliminate data copies 127 Figure 11-7 Normalized and non-normalized coordinates 129 Figure 11-8 An image segmented into power-of-two tiles 130 Figure 11-9 Using an image as a texture for a cube 131 Figure 11-10 Single-buffered data 137 Figure 11-11 Double-buffered data 138 Listing 11-1 Using texture extensions for a rectangular texture 127 Listing 11-2 Using texture extensions for a power-of-two texture 128 Listing 11-3 Building an OpenGL texture from an NSView object 132 Listing 11-4 Using a Quartz image as a texture source 134 Listing 11-5 Getting pixel data from a source image 135 Listing 11-6 Code that downloads texture data 136 Customizing the OpenGL Pipeline with Shaders 139 Figure 12-1 OpenGL fixed-function pipeline 139 Figure 12-2 OpenGL shader pipeline 140 Listing 12-1 Loading a Shader 141 Techniques for Scene Antialiasing 144 Table 13-1 Antialiasing hints 147 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 9 Figures, Tables, and ListingsConcurrency and OpenGL 148 Figure 14-1 CPU processing and OpenGL on separate threads 152 Figure 14-2 Two contexts on separate threads 153 Listing 14-1 Enabling the multithreaded OpenGL engine 151 Tuning Your OpenGL Application 155 Figure 15-1 Output produced by the top application 157 Figure 15-2 The OpenGL Profiler window 158 Figure 15-3 A statistics window 159 Figure 15-4 A Trace window 160 Figure 15-5 The graph view in OpenGL Driver Monitor 161 Legacy OpenGL Functionality by Version 163 Table A-1 Functionality added in OpenGL 1.1 163 Table A-2 Functionality added in OpenGL 1.2 164 Table A-3 Functionality added in OpenGL 1.3 165 Table A-4 Functionality added in OpenGL 1.4 165 Table A-5 Functionality added in OpenGL 1.5 166 Table A-6 Functionality added in OpenGL 2.0 166 Table A-7 Functionality added in OpenGL 2.1 167 Updating an Application to Support the OpenGL 3.2 Core Specification 168 Table B-1 Extensions described in this guide 169 Setting Up Function Pointers to OpenGL Routines 171 Listing C-1 Using NSLookupAndBindSymbol to obtain a symbol for a symbol name 172 Listing C-2 Using NSGLGetProcAddress to obtain an OpenGL entry point 173 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 10 Figures, Tables, and ListingsOpenGL is an open, cross-platform graphics standard with broad industry support. OpenGL greatly eases the task of writing real-time 2D or 3D graphics applications by providing a mature, well-documented graphics processing pipeline that supports the abstraction of current and future hardware accelerators. OpenGL client OpenGL server Graphics hardware Application OpenGL framework OpenGL driver Runs on GPU Runs on CPU At a Glance OpenGL is an excellent choice for graphics development on the Macintosh platform because it offers the following advantages: ● Reliable Implementation. 
The OpenGL client-server model abstracts hardware details and guarantees consistent presentation on any compliant hardware and software configuration. Every implementation of OpenGL adheres to the OpenGL specification and must pass a set of conformance tests. ● Performance. Applications can harness the considerable power of the graphics hardware to improve rendering speeds and quality. 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 11 About OpenGL for OS X● Industry acceptance. The specification for OpenGL is controlled by the Khronos Group, an industry consortium whose members include many of the major companies in the computer graphics industry, including Apple. In addition to OpenGL for OS X, there are OpenGL implementations for Windows, Linux, Irix, Solaris, and many game consoles. OpenGL Is a C-based, Platform-Neutral API Because OpenGL is a C-based API, it is extremely portable and widely supported. As a C API, it integrates seamlessly with Objective-C based Cocoa applications. OpenGL provides functions your application uses to generate 2D or 3D images. Your application presents the rendered images to the screen or copies them back to its own memory. The OpenGL specification does not provide a windowing layer of its own. It relies on functions defined by OS X to integrate OpenGL drawing with the windowing system. Your application creates an OS X OpenGL rendering context and attaches a rendering target to it (known as a drawable object). The rendering context manages OpenGL state changes and objects created by calls to the OpenGL API. The drawable object is the final destination for OpenGL drawing commands and is typically associated with a Cocoa window or view. Relevant Chapters: “OpenGL on the Mac Platform” (page 17) Different Rendering Destinations Require Different Setup Commands Depending on whether your application intends to draw OpenGL content to a window, to draw to the entire screen, or to perform offscreen image processing, it takes different steps to create the rendering context and associate it with a drawable object. Relevant Chapters: “Drawing to a Window or View” (page 35), “Drawing to the Full Screen” (page 50) and “Drawing Offscreen” (page 53) OpenGL on Macs Exists in a Heterogenous Environment Macs support different types of graphics processors, each with different rendering capabilities, supporting versions of OpenGL from 1.x through OpenGL 3.2. When creating a rendering context, your application can accept a broad range of renderers or it can restrict itself to devices with specific capabilities. Once you have a context, you can configure how that context executes OpenGL commands. About OpenGL for OS X At a Glance 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 12OpenGL on the Mac is not only a heterogenous environment, but it is also a dynamic environment. Users can add or remove displays, or take a laptop running on battery power and plug it into a wall. When the graphics environment on the Mac changes, the renderer associated with the context may change. Your application must handle these changes and adjust how it uses OpenGL. Relevant Chapters: “Choosing Renderer and Buffer Attributes” (page 64), “Working with Rendering Contexts” (page 72), and “Determining the OpenGL Capabilities Supported by the Renderer” (page 83) OpenGL Helps Applications Harness the Power of Graphics Processors Graphics processors are massively parallelized devices optimized for graphics operations. 
To access that computing power adds additional overhead because data must move from your application to the GPU over slower internal buses. Accessing the same data simultaneously from both your application and OpenGL is usually restricted. To get great performance in your application, you must carefully design your application to feed data and commands to OpenGL so that the graphics hardware runs in parallel with your application. A poorly tuned application may stall either on the CPU or the GPU waiting for the other to finish processing. When you are ready to optimize your application’s performance, Apple provides both general-purpose and OpenGL-specific profiling tools that make it easy to learn where your application spends its time. Relevant Chapters: “Optimizing OpenGL for High Resolution” (page 44), “OpenGL on the Mac Platform” (page 17),“OpenGL Application Design Strategies” (page 89), “Best Practices for Working with Vertex Data” (page 104), “Best Practicesfor Working with Texture Data” (page 118), “Customizing the OpenGL Pipeline with Shaders” (page 139), and “Tuning Your OpenGL Application” (page 155) Concurrency in OpenGL Applications Requires Additional Effort Many Macs ship with multiple processors or multiple cores, and future hardware is expected to add more of each. Designing applications to take advantage of multiprocessing is critical. OpenGL places additional restrictions on multithreaded applications. If you intend to add concurrency to an OpenGL application, you must ensure that the application does not access the same context from two different threads at the same time. About OpenGL for OS X At a Glance 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 13Relevant Chapters: “Concurrency and OpenGL” (page 148) Performance Tuning Allows Your Application to Provide an Exceptional User Experience Once you’ve improved the performance of your OpenGL application and taken advantage of concurrency, put some of the freed processing power to work for you. Higher resolution textures, detailed models, and more complex lighting and shading algorithms can improve image quality. Full-scene antialiasing on modern graphics hardware can eliminate many of the “jaggies” common on lower resolution images. Relevant Chapters: “Customizing the OpenGL Pipeline with Shaders” (page 139),“Techniques for Scene Antialiasing” (page 144) How to Use This Document If you have never programmed in OpenGL on the Mac, you should read this book in its entirety, starting with “OpenGL on the Mac Platform” (page 17). Critical Mac terminology is defined in that chapter as well as in the “Glossary” (page 179). If you already have an OpenGL application running on the Mac, but have not yet updated it for OS X v10.7, read “Choosing Renderer and Buffer Attributes” (page 64) to learn how to choose an OpenGL profile for your application. To find out how to update an existing OpenGL app for high resolution, see “Optimizing OpenGL for High Resolution” (page 44). Once you have OpenGL content in your application, read “OpenGL Application Design Strategies” (page 89) to learn fundamental patterns for implementing high-performance OpenGL applications, and the chapters that follow to learn how to apply those patterns to specific OpenGL problems. About OpenGL for OS X How to Use This Document 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 
14Important: Although this guide describes how to create rendering contexts that support OpenGL 3.2, most code examples and discussion in the rest of the book describe the earlier legacy versions of OpenGL. See “Updating an Application to Support the OpenGL 3.2 Core Specification” (page 168) for more information on migrating your application to OpenGL 3.2. Prerequisites This guide assumes that you have some experience with OpenGL programming, but want to learn how to apply that knowledge to create software for the Mac. Although this guide provides advice on optimizing OpenGL code, it does not provide entry-level information on how to use the OpenGL API. If you are unfamiliar with OpenGL, you should read “OpenGL on the Mac Platform” (page 17) to get an overview of OpenGL on the Mac platform, and then read the following OpenGL programming guide and reference documents: ● OpenGL Programming Guide, by Dave Shreiner and the Khronos OpenGL Working Group; otherwise known as "The Red book.” ● OpenGL Shading Language , by Randi J. Rost, is an excellent guide for those who want to write programs that compute surface properties (also known as shaders). ● OpenGL Reference Pages. Before reading this document, you should be familiar with Cocoa windows and views asintroduced in Window Programming Guide and View Programming Guide . See Also Keep these reference documents handy as you develop your OpenGL program for OS X: ● NSOpenGLView Class Reference , NSOpenGLContext Class Reference , NSOpenGLPixelBuffer Class Reference , and NSOpenGLPixelFormat Class Reference provide a complete description of the classes and methods needed to integrate OpenGL content into a Cocoa application. ● CGL Reference describes low-level functions that can be used to create full-screen OpenGL applications. ● OpenGL Extensions Guide provides information about OpenGL extensions supported in OS X. The OpenGL Foundation website, http://www.opengl.org, provides information on OpenGL commands, the Khronos OpenGL Working Group, logo requirements, OpenGL news, and many other topics. It's a site that you'll want to visit regularly. Among the many resources it provides, the following are important reference documents for OpenGL developers: About OpenGL for OS X Prerequisites 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 15● OpenGL Specification provides detailed information on how an OpenGL implementation is expected to handle each OpenGL command. ● OpenGL Reference describes the main OpenGL library. ● OpenGL GLU Reference describes the OpenGL Utility Library, which contains convenience functions implemented on top of the OpenGL API. ● OpenGL GLUT Reference describes the OpenGL Utility Toolkit, a cross-platform windowing API. ● OpenGL API Code and Tutorial Listings provides code examples for fundamental tasks, such as modeling and texture mapping, as well as for advanced techniques, such as high dynamic range rendering (HDRR). About OpenGL for OS X See Also 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 16You can tell that Apple has an implementation of OpenGL on its platform by looking at the user interface for many of the applications that are installed with OS X. The reflections built into iChat (Figure 1-1) provide one of the more notable examples. The responsiveness of the windows, the instant results of applying an effect in iPhoto, and many other operations in OS X are due to the use of OpenGL. OpenGL is available to all Macintosh applications. 
OpenGL for OS X is implemented as a set of frameworks that contain the OpenGL runtime engine and its drawing software. These frameworks use platform-neutral virtual resourcesto free your programming as much as possible from the underlying graphics hardware. OS X provides a set of application programming interfaces (APIs) that Cocoa applications can use to support OpenGL drawing. Figure 1-1 OpenGL provides the reflections in iChat This chapter provides an overview of OpenGL and the interfaces your application uses on the Mac platform to tap into it. OpenGL Concepts To understand how OpenGL fits into OS X and your application, you should first understand how OpenGL is designed. 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 17 OpenGL on the Mac PlatformOpenGL Implements a Client-Server Model OpenGL uses a client-server model, as shown in Figure 1-2. When your application calls an OpenGL function, it talks to an OpenGL client. The client delivers drawing commands to an OpenGL server. The nature of the client, the server, and the communication path between them is specific to each implementation of OpenGL. For example, the server and clients could be on different computers, or they could be different processes on the same computer. Figure 1-2 OpenGL client-server model Application OpenGL client OpenGL server A client-server model allows the graphics workload to be divided between the client and the server. For example, all Macintosh computersship with dedicated graphics hardware that is optimized to perform graphics calculations in parallel. Figure 1-3 shows a common arrangement of CPUs and GPUs. With this hardware configuration, the OpenGL client executes on the CPU and the server executes on the GPU. Figure 1-3 Graphics platform model CPU RAM Core Core GPU RAM Core Core Core Core Core Core System OpenGL Commands Can Be Executed Asynchronously A benefit of the OpenGL client-server model is that the client can return control to the application before the command has finished executing. An OpenGL client may also buffer or delay execution of OpenGL commands. If OpenGL required all commands to complete before returning control to the application, then either the CPU or the GPU would be idle waiting for the other to provide it data, resulting in reduced performance. OpenGL on the Mac Platform OpenGL Concepts 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 18Some OpenGL commandsimplicitly or explicitly require the client to wait untilsome or all previously submitted commands have completed. OpenGL applicationsshould be designed to reduce the frequency of client-server synchronizations. See “OpenGL Application Design Strategies” (page 89) for more information on how to design your OpenGL application. OpenGL Commands Are Executed In Order OpenGL guarantees that commands are executed in the order they are received by OpenGL. OpenGL Copies Client Data at Call-Time When an application calls an OpenGL function, the OpenGL client copies any data provided in the parameters before returning control to the application. For example, if a parameter points at an array of vertex data stored in application memory, OpenGL must copy that data before returning. Therefore, an application is free to change memory it owns regardless of calls it makes to OpenGL. The data that the client copies is often reformatted before it is transmitted to the server. Copying, modifying, and transmitting parameters to the server adds overhead to calling OpenGL. 
Applications should be designed to minimize copy overhead. OpenGL Relies on Platform-Specific Libraries For Critical Functionality OpenGL provides a rich set of cross-platform drawing commands, but does not define functions to interact with an operating system’s graphics subsystem. Instead, OpenGL expects each implementation to define an interface to create rendering contexts and associate them with the graphics subsystem. A rendering context holds all of the data stored in the OpenGL state machine. Allowing multiple contexts allows the state in one machine to be changed by an application without affecting other contexts. Associating OpenGL with the graphic subsystem usually means allowing OpenGL content to be rendered to a specific window. When content is associated with a window, the implementation creates whatever resources are required to allow OpenGL to render and display images. OpenGL on the Mac Platform OpenGL Concepts 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 19OpenGL in OS X OpenGL in OS X implementsthe OpenGL client-server model using a common OpenGL framework and plug-in drivers. The framework and driver combine to implement the client portion of OpenGL, as shown in Figure 1-4. Dedicated graphics hardware provides the server. Although this is the common scenario, Apple also provides a software renderer implemented entirely on the CPU. Figure 1-4 MacOS X OpenGL driver model OpenGL client OpenGL server Graphics hardware Application OpenGL framework OpenGL driver Runs on GPU Runs on CPU OS X supports a display space that can include multiple dissimilar displays, each driven by different graphics cards with different capabilities. In addition, multiple OpenGL renderers can drive each graphics card. To accommodate this versatility, OpenGL for OS X is segmented into well-defined layers: a window system layer, OpenGL on the Mac Platform OpenGL in OS X 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 20a framework layer, and a driver layer, as shown in Figure 1-5. This segmentation allows for plug-in interfaces to both the window system layer and the framework layer. Plug-in interfaces offer flexibility in software and hardware configuration without violating the OpenGL standard. Figure 1-5 Layers of OpenGL for OS X Software GLD plug-in ATI GLD plug-in NVIDIA GLD plug-in Intel GLD plug-in Application Hardware Window system layer Common OpenGL framework Driver layer NSOpenGL CGL OpenGL The window system layer is an OS X–specific layer that your application uses to create OpenGL rendering contexts and associate them with the OS X windowing system. The NSOpenGL classes and Core OpenGL (CGL) API also provide some additional controlsfor how OpenGL operates on that context. See “OpenGL APIs Specific to OS X” (page 22) for more information. Finally, this layer also includes the OpenGL libraries—GL, GLU, and GLUT. (See “Apple-Implemented OpenGL Libraries” (page 23) for details.) The common OpenGL framework layer is the software interface to the graphics hardware. This layer contains Apple's implementation of the OpenGL specification. The driver layer contains the optional GLD plug-in interface and one or more GLD plug-in drivers, which may have different software and hardware support capabilities. The GLD plug-in interface supports third-party plug-in drivers, allowing third-party hardware vendors to provide drivers optimized to take best advantage of their graphics hardware. 
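One consequence of this layered design is that several renderers, hardware and software, can be available for a single display. For illustration, the following minimal sketch (not a listing from this guide) shows how an application can enumerate them with CGL; the function name listRenderers and the choice of properties queried are illustrative.

#import <OpenGL/OpenGL.h>
#import <ApplicationServices/ApplicationServices.h>
#import <stdio.h>

static void listRenderers(void) {
    CGLRendererInfoObj info;
    GLint rendererCount = 0;

    // Ask CGL for every renderer that can drive the main display.
    CGLQueryRendererInfo(CGDisplayIDToOpenGLDisplayMask(CGMainDisplayID()),
                         &info, &rendererCount);

    for (GLint i = 0; i < rendererCount; i++) {
        GLint rendererID = 0, accelerated = 0;
        CGLDescribeRenderer(info, i, kCGLRPRendererID, &rendererID);
        CGLDescribeRenderer(info, i, kCGLRPAccelerated, &accelerated);
        printf("Renderer 0x%x is %s\n", rendererID,
               accelerated ? "hardware accelerated" : "software");
    }
    CGLDestroyRendererInfo(info);
}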
Accessing OpenGL Within Your Application The programming interfacesthat your application callsfall into two categories—those specific to the Macintosh platform and those defined by the OpenGL Working Group. The Apple-specific programming interfaces are what Cocoa applications use to communicate with the OS X windowing system. These APIs don't create OpenGL content, they manage content, direct it to a drawing destination, and control various aspects of the rendering OpenGL on the Mac Platform Accessing OpenGL Within Your Application 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 21operation. Your application calls the OpenGL APIs to create content. OpenGL routines accept vertex, pixel, and texture data and assemble the data to create an image. The final image resides in a framebuffer, which is presented to the user through the windowing-system specific API. Figure 1-6 The programing interfaces used for OpenGL content OpenGL engine and drivers GLUT CGL OpenGL NSOpenGL classes GLUT application Cocoa application OpenGL APIs Specific to OS X OS X offers two easy-to-use APIs that are specific to the Macintosh platform: the NSOpenGL classes and the CGL API. Throughout this document, these APIs are referred to as the Apple-specific OpenGL APIs. Cocoa provides many classes specifically for OpenGL: ● The NSOpenGLContext class implements a standard OpenGL rendering context. ● The NSOpenGLPixelFormat class is used by an application to specify the parameters used to create the OpenGL context. ● The NSOpenGLView class is a subclass of NSView that uses NSOpenGLContext and NSOpenGLPixelFormat to display OpenGL content in a view. Applicationsthatsubclass NSOpenGLView do not need to directly subclass NSOpenGLPixelFormat or NSOpenGLContext. Applications that need customization or flexibility, can subclass NSView and create NSOpenGLPixelFormat and NSOpenGLContext objects manually. ● The NSOpenGLLayer class allows your application to integrate OpenGL drawing with Core Animation. ● The NSOpenGLPixelBuffer class provides hardware-accelerated offscreen drawing. OpenGL on the Mac Platform Accessing OpenGL Within Your Application 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 22The Core OpenGL API (CGL) residesin the OpenGL framework and is used to implement the NSOpenGL classes. CGL offersthe most direct accessto system functionality and providesthe highest level of graphics performance and control for drawing to the full screen. CGL Reference provides a complete description of this API. Apple-Implemented OpenGL Libraries OS X also provides the full suite of graphics libraries that are part of every implementation of OpenGL: GL, GLU, GLUT, and GLX. Two of these—GL and GLU—provide low-level drawing support. The other two—GLUT and GLX—support drawing to the screen. Your application typically interfaces directly with the core OpenGL library (GL), the OpenGL Utility library (GLU), and the OpenGL Utility Toolkit (GLUT). The GL library provides a low-level modular API that allows you to define graphical objects. Itsupportsthe core functions defined by the OpenGL specification. It providessupport for two fundamental types of graphics primitives: objects defined by sets of vertices, such as line segments and simple polygons, and objects that are pixel-based images, such as filled rectangles and bitmaps. The GL API does not handle complex custom graphical objects; your application must decompose them into simpler geometries. 
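As a practical note, access to the APIs and libraries described in this section comes through a small set of headers and the OpenGL framework. The sketch below lists the headers an OS X OpenGL application typically imports; which ones you need depends on the API layer you adopt, and the selection shown is an assumption rather than a required configuration.

// Typical imports for an OS X OpenGL application (illustrative selection).
#import <Cocoa/Cocoa.h>      // NSOpenGLView, NSOpenGLContext, NSOpenGLPixelFormat
#import <OpenGL/OpenGL.h>    // CGL types and functions
#import <OpenGL/gl.h>        // Core GL entry points
#import <OpenGL/glu.h>       // GLU utility routines
// #import <GLUT/glut.h>     // Only for GLUT-based applications
// Link against the OpenGL framework (plus Cocoa or GLUT, as appropriate).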
The GLU library combines functions from the GL library to support more advanced graphics features. It runs on all conforming implementations of OpenGL. GLU is capable of creating and handling complex polygons (including quartic equations), processing nonuniform rational b-spline curves (NURBs), scaling images, and decomposing a surface to a series of polygons (tessellation). The GLUT library provides a cross-platform API for performing operations associated with the user windowing environment—displaying and redrawing content, handling events, and so on. It isimplemented on most UNIX, Linux, and Windows platforms. Code that you write with GLUT can be reused across multiple platforms. However, such code is constrained by a generic set of user interface elements and event-handling options. This document does not show how to use GLUT. The GLUTBasics sample project shows you how to get started with GLUT. GLX is an OpenGL extension that supports using OpenGL within a window provided by the X Window system. X11 for OS X is available as an optional installation. (It's not shown in Figure 1-6 (page 22).) See OpenGL Programming for the X Window System, published by Addison Wesley for more information. This document does not show how to use these libraries. For detailed information, either go to the OpenGL Foundation website http://www.opengl.org or see the most recent version of "The Red book"—OpenGL Programming Guide, published by Addison Wesley. OpenGL on the Mac Platform Accessing OpenGL Within Your Application 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 23Terminology There are a number of termsthat you’ll want to understand so that you can write code effectively using OpenGL: renderer, renderer attributes, buffer attributes, pixel format objects, rendering contexts, drawable objects, and virtual screens. As an OpenGL programmer, some of these may seem familiar to you. However, understanding the Apple-specific nuances of these terms will help you get the most out of OpenGL on the Macintosh platform. Renderer A renderer isthe combination of the hardware and software that OpenGL usesto execute OpenGL commands. The characteristics of the final image depend on the capabilities of the graphics hardware associated with the renderer and the device used to display the image. OS X supports graphics accelerator cards with varying capabilities, as well as a software renderer. It is possible for multiple renderers, each with different capabilities or features, to drive a single set of graphics hardware. To learn how to determine the exact features of a renderer, see “Determining the OpenGL Capabilities Supported by the Renderer” (page 83). Renderer and Buffer Attributes Your application uses renderer and buffer attributes to communicate renderer and buffer requirements to OpenGL. The Apple implementation of OpenGL dynamically selectsthe best renderer for the current rendering task and doesso transparently to your application. If your application has very specific rendering requirements and wants to control renderer selection, it can do so by supplying the appropriate renderer attributes. Buffer attributes describe such things as color and depth buffer sizes, and whether the data is stereoscopic or monoscopic. Renderer and buffer attributes are represented by constants defined in the Apple-specific OpenGL APIs. OpenGL uses the attributes you supply to perform the setup work needed prior to drawing content. 
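For example, a minimal CGL sketch of creating such an object follows. The function name createPixelFormat and the attribute values are illustrative, not a recommendation, and error handling is omitted.

#import <OpenGL/OpenGL.h>

// Build a pixel format object describing the buffers and renderers we want.
static CGLPixelFormatObj createPixelFormat(void) {
    CGLPixelFormatAttribute attribs[] = {
        kCGLPFADoubleBuffer,                            // request a back buffer
        kCGLPFADepthSize, (CGLPixelFormatAttribute)24,  // 24-bit depth buffer
        kCGLPFAAccelerated,                             // consider only hardware renderers
        (CGLPixelFormatAttribute)0
    };
    CGLPixelFormatObj pixelFormat = NULL;
    GLint virtualScreenCount = 0;   // number of matching renderer/display combinations

    // CGL fills in a pixel format object listing every renderer and display
    // combination (virtual screen) that satisfies the attributes.
    CGLChoosePixelFormat(attribs, &pixelFormat, &virtualScreenCount);
    return pixelFormat;             // release later with CGLDestroyPixelFormat
}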
“Drawing to a Window or View” (page 35) provides a simple example that shows how to use renderer and buffer attributes. “Choosing Renderer and Buffer Attributes” (page 64) explains how to choose renderer and buffer attributes to achieve specific rendering goals. Pixel Format Objects A pixel format describes the format for pixel data storage in memory. The description includes the number and order of components as well as their names (typically red, blue, green and alpha). It also includes other information, such as whether a pixel contains stencil and depth values. A pixel format object is an opaque data structure that holds a pixel format along with a list of renderers and display devices that satisfy the requirements specified by an application. OpenGL on the Mac Platform Terminology 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 24Each of the Apple-specific OpenGL APIs defines a pixel format data type and accessor routines that you can use to obtain the information referenced by this object. See “Virtual Screens” (page 26) for more information on renderer and display devices. OpenGL Profiles OpenGL profiles are new in OS X 10.7. An OpenGL profile is a renderer attribute used to request a specific version of the OpenGL specification. When your application provides an OpenGL profile as part of its renderer attributes, it only receives renderers that provide the complete feature set promised by that profile. The render can implement a different version of the OpenGL so long asthe version itsuppliesto your application provides the same functionality that your application requested. Rendering Contexts A rendering context, or simply context, contains OpenGL state information and objects for your application. State variables include such things as drawing color, the viewing and projection transformations, lighting characteristics, and material properties. State variables are set per context. When your application creates OpenGL objects (for example, textures), these are also associated with the rendering context. Although your application can maintain more than one context, only one context can be the current context in a thread. The current context is the rendering context that receives OpenGL commands issued by your application. Drawable Objects A drawable object refers to an object allocated by the windowing system that can serve as an OpenGL framebuffer. A drawable object is the destination for OpenGL drawing operations. The behavior of drawable objects is not part of the OpenGL specification, but is defined by the OS X windowing system. A drawable object can be any of the following: a Cocoa view, offscreen memory, a full-screen graphics device, or a pixel buffer. OpenGL on the Mac Platform Terminology 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 25Note: A pixel buffer (pbuffer) is an OpenGL buffer designed for hardware-accelerated offscreen drawing and as a source for texturing. An application can render an image into a pixel buffer and then use the pixel buffer as a texture for other OpenGL commands. Although pixel buffers are supported on Apple’s implementation of OpenGL, Apple recommends you use framebuffer objects instead. See “Drawing Offscreen” (page 53) for more information on offscreen rendering. Before OpenGL can draw to a drawable object, the object must be attached to a rendering context. The characteristics of the drawable object narrow the selection of hardware and software specified by the rendering context. 
Apple’s OpenGL automatically allocates buffers, creates surfaces, and specifies which renderer is the current renderer. The logical flow of data from an application through OpenGL to a drawable object is shown in Figure 1-7. The application issues OpenGL commands that are sent to the current rendering context. The current context, which contains state information, constrains how the commands are interpreted by the appropriate renderer. The renderer converts the OpenGL primitives to an image in the framebuffer. (See also “Running an OpenGL Program in OS X ” (page 31).) Figure 1-7 Data flow through OpenGL Rendered Image Application Possible renderers OpenGL buffers Current Drawable objects CONTEXT Virtual Screens The characteristics and quality of the OpenGL content that the user sees depend on both the renderer and the physical display used to view the content. The combination of renderer and physical display is called a virtual screen. This important concept has implications for any OpenGL application running on OS X. A simple system, with one graphics card and one physical display, typically has two virtual screens. One virtual screen consists of a hardware-based renderer and the physical display and the other virtual screen consists of a software-based renderer and the physical display. OS X provides a software-based renderer as a fallback. It's possible for your application to decline the use of thisfallback. You'llsee how in “Choosing Renderer and Buffer Attributes” (page 64). OpenGL on the Mac Platform Terminology 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 26The green rectangle around the OpenGL image in Figure 1-8 surrounds a virtual screen for a system with one graphics card and one display. Note that a virtual screen is not the physical display, which is why the green rectangle is drawn around the application window thatshowsthe OpenGL content. In this case, it isthe renderer provided by the graphics card combined with the characteristics of the display. Figure 1-8 A virtual screen displays what the user sees Graphics card Virtual screen OpenGL on the Mac Platform Terminology 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 27Because a virtual screen is not simply the physical display, a system with one display can use more than one virtualscreen at a time, asshown in Figure 1-9. The green rectangles are drawn to point out each virtualscreen. Imagine that the virtual screen on the right side uses a software-only renderer and that the one on the left uses a hardware-dependent renderer. Although this is a contrived example, it illustrates the point. Figure 1-9 Two virtual screens Graphics card Virtual screen 2 (Software renderer) Virtual screen 1 (Hardware renderer) OpenGL on the Mac Platform Terminology 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 28It's also possible to have a virtualscreen that can represent more than one physical display. The green rectangle in Figure 1-10 is drawn around a virtual screen that spans two physical displays. In this case, the same graphics hardware drives a pair of identical displays. A mirrored display also has a single virtual screen associated with multiple physical displays. Figure 1-10 A virtual screen can represent more than one physical screen Dual-headed graphics card Identical displays Virtual screen The concept of a virtualscreen is particularly important when the user drags an image from one physicalscreen to another. 
When this happens, the virtual screen may change, and with it, a number of attributes of the imaging process, such as the current renderer, may change. With the dual-headed graphics card shown in Figure 1-10 (page 29), dragging between displays preserves the same virtual screen. However, Figure 1-11 shows the case for which two displays represent two unique virtual screens. Not only are the two graphics cards different, but it's possible that the renderer, buffer attributes, and pixel characteristics are different. A change in any of these three items can result in a change in the virtual screen.

When the user drags an image from one display to another, and the virtual screen is the same for both displays, the image quality should appear similar. However, for the case shown in Figure 1-11, the image quality can be quite different.

Figure 1-11 Two virtual screens and two graphics cards

OpenGL for OS X transparently manages rendering across multiple monitors. A user can drag a window from one monitor to another, even though their display capabilities may be different or they may be driven by dissimilar graphics cards with dissimilar resolutions and color depths.

OpenGL dynamically switches renderers when the virtual screen that contains the majority of the pixels in an OpenGL window changes. When a window is split between multiple virtual screens, the framebuffer is rasterized entirely by the renderer driving the screen that contains the largest segment of the window. The regions of the window on the other virtual screens are drawn by copying the rasterized image. When the entire OpenGL drawable object is displayed on one virtual screen, there is no performance impact from multiple monitor support.

Applications need to track virtual screen changes and, if appropriate, update the current application state to reflect changes in renderer capabilities. See “Working with Rendering Contexts” (page 72).

Offline Renderer

An offline renderer is one that is not currently associated with a display. For example, a graphics processor might be powered down to conserve power, or there might not be a display hooked up to the graphics card. Offline renderers are not normally visible to your application, but your application can enable them by adding the appropriate renderer attribute. Taking advantage of offline renderers is useful because it gives the user a seamless experience when they plug in or remove displays.

For more information about configuring a context to see offline renderers, see “Choosing Renderer and Buffer Attributes” (page 64). To enable your application to switch to a renderer when a display is attached, see “Update the Rendering Context When the Renderer or Geometry Changes” (page 72).

Running an OpenGL Program in OS X

Figure 1-12 shows the flow of data in an OpenGL program, regardless of the platform that the program runs on.

Figure 1-12 The flow of data through OpenGL

Per-vertex operations include such things as applying transformation matrices to add perspective or to clip, and applying lighting effects.
Per-pixel operations include such things as color conversion and applying blur and distortion effects. Pixels destined for textures are sent to texture assembly, where OpenGL stores textures until it needs to apply them onto an object.

OpenGL rasterizes the processed vertex and pixel data, meaning that the data are converted to fragments. A fragment encapsulates all the values for a pixel, including color, depth, and sometimes texture values. These values are used during antialiasing and any other calculations needed to fill shapes and to connect vertices.

Per-fragment operations include applying environment effects, depth and stencil testing, and performing other operations such as blending and dithering. Some operations—such as hidden-surface removal—end the processing of a fragment. OpenGL draws fully processed fragments into the appropriate location in the framebuffer.

The dashed arrows in Figure 1-12 indicate reading pixel data back from the framebuffer. They represent operations performed by OpenGL functions such as glReadPixels, glCopyPixels, and glCopyTexImage2D.

So far you've seen how OpenGL operates on any platform. But how do Cocoa applications provide data to OpenGL for processing? A Mac application must perform these tasks:

● Set up a list of buffer and renderer attributes that define the sort of drawing you want to perform. (See “Renderer and Buffer Attributes” (page 24).)

● Request the system to create a pixel format object that contains a pixel format that meets the constraints of the buffer and renderer attributes and a list of all suitable combinations of displays and renderers. (See “Pixel Format Objects” (page 24) and “Virtual Screens” (page 26).)

● Create a rendering context to hold state information that controls such things as drawing color, view and projection matrices, characteristics of light, and conventions used to pack pixels. When you set up this context, you must provide a pixel format object because the rendering context needs to know the set of virtual screens that can be used for drawing. (See “Rendering Contexts” (page 25).)

● Bind a drawable object to the rendering context. The drawable object is what captures the OpenGL drawing sent to that rendering context. (See “Drawable Objects” (page 25).)

● Make the rendering context the current context. OpenGL automatically targets the current context. Although your application might have several rendering contexts set up, only the current one is the active one for drawing purposes.

● Issue OpenGL drawing commands.

● Flush the contents of the rendering context. This causes previously submitted commands to be rendered to the drawable object and displays them to the user.

The tasks described in the first five bullet items are platform-specific. “Drawing to a Window or View” (page 35) provides simple examples of how to perform them. As you read other parts of this document, you'll see there are a number of other tasks that, although not mandatory for drawing, are really quite necessary for any application that wants to use OpenGL to perform complex 3D drawing efficiently on a wide variety of Macintosh systems.
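A minimal sketch of these platform-specific tasks using the NSOpenGL classes is shown below. It assumes an existing NSView named view (a hypothetical name), uses example attributes, and omits error handling and memory management; the comments map to the task list above.

#import <Cocoa/Cocoa.h>
#include <OpenGL/gl.h>

// Buffer and renderer attributes (example values).
NSOpenGLPixelFormatAttribute attrs[] = { NSOpenGLPFADoubleBuffer, NSOpenGLPFADepthSize, 24, 0 };

// Pixel format object: the matching combinations of renderers and displays (virtual screens).
NSOpenGLPixelFormat *format = [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs];

// Rendering context created from the pixel format object.
NSOpenGLContext *context = [[NSOpenGLContext alloc] initWithFormat:format shareContext:nil];

// Bind a drawable object; here, a hypothetical Cocoa view named view.
[context setView:view];

// Make the context current so it receives the OpenGL commands that follow.
[context makeCurrentContext];

// Issue OpenGL drawing commands.
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);

// Flush the context; for a double-buffered context this presents the rendered frame.
[context flushBuffer];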
Making Great OpenGL Applications on the Macintosh

OpenGL lets you create applications with outstanding graphics performance as well as a great user experience—but neither of these things comes for free. Your application performs best when it works with OpenGL rather than against it. With that in mind, here are guidelines you should follow to create high-performance, future-looking OpenGL applications:

● Ensure your application runs successfully with offline renderers and multiple graphics cards. Apple ships many sophisticated hardware configurations. Your application should handle renderer changes seamlessly. You should test your application on a Mac with multiple graphics processors and include tests for attaching and removing displays. For more information on how to implement hot plugging correctly, see “Working with Rendering Contexts” (page 72).

● Avoid finishing and flushing operations. Pay particular attention to OpenGL functions that force previously submitted commands to complete. Synchronizing the graphics hardware to the CPU may result in dramatically lower performance. Performance is covered in detail in “OpenGL Application Design Strategies” (page 89).

● Use multithreading to improve the performance of your OpenGL application. Many Macs support multiple simultaneous threads of execution. Your application should take advantage of concurrency. Well-behaved applications can take advantage of concurrency in just a few lines of code. See “Concurrency and OpenGL” (page 148).

● Use buffer objects to manage your data. Vertex buffer objects (VBOs) allow OpenGL to manage your application’s vertex data. Using vertex buffer objects gives OpenGL more opportunities to cache vertex data in a format that is friendly to the graphics hardware, improving application performance. For more information see “Best Practices for Working with Vertex Data” (page 104) and the sketch after this list. Similarly, pixel buffer objects (PBOs) should be used to manage your image data. See “Best Practices for Working with Texture Data” (page 118).

● Use framebuffer objects (FBOs) when you need to render to offscreen memory. Framebuffer objects allow your application to create offscreen rendering targets without many of the limitations of platform-dependent interfaces. See “Rendering to a Framebuffer Object” (page 53).

● Generate objects before binding them. Earlier versions of OpenGL allowed your application to create its own object names before binding them. However, you should avoid this. Always use the OpenGL API to generate object names.

● Migrate your OpenGL applications to OpenGL 3.2. The OpenGL 3.2 Core profile provides a clean break from earlier versions of OpenGL in favor of a simpler shader-based pipeline. For better compatibility with future hardware and OS X releases, migrate your applications away from legacy versions of OpenGL. Many of the recommendations listed above are required when your application uses OpenGL 3.2.

● Harness the power of Apple’s development tools. Apple provides many tools that help create OpenGL applications and analyze and tune their performance. Learning how to use these tools helps you create fast, reliable applications. “Tuning Your OpenGL Application” (page 155) describes many of these tools.
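As a rough sketch of the buffer-object and generate-before-bind guidelines above, the following code creates a vertex buffer object and uploads vertex data to it once. The vertex data and the GL_STATIC_DRAW usage hint are only examples; the calls shown are standard OpenGL.

#include <OpenGL/gl.h>

// Example vertex data: three 2D positions for a single triangle.
static const GLfloat triangleVertices[] = {
     0.0f,  0.6f,
    -0.2f, -0.3f,
     0.2f, -0.3f
};

static GLuint CreateTriangleVBO(void)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);               // Generate the name first...
    glBindBuffer(GL_ARRAY_BUFFER, vbo);  // ...then bind it.
    // Upload the data once; GL_STATIC_DRAW hints that it rarely changes,
    // which lets OpenGL keep it in a format friendly to the graphics hardware.
    glBufferData(GL_ARRAY_BUFFER, sizeof(triangleVertices), triangleVertices, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    return vbo;
}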
Drawing to a Window or View

The OpenGL programming interface provides hundreds of drawing commands that drive graphics hardware. It doesn't provide any commands that interface with the windowing system of an operating system. Without a windowing system, the 3D graphics of an OpenGL program are trapped inside the GPU. Figure 2-1 shows a cube drawn to a Cocoa view.

Figure 2-1 OpenGL content in a Cocoa view

This chapter shows how to display OpenGL drawing onscreen using the APIs provided by OS X. (This chapter does not show how to use GLUT.) The first section describes the overall approach to drawing onscreen and provides an overview of the functions and methods used by each API.

General Approach

To draw your content to a view or a layer, your application uses the NSOpenGL classes from within the Cocoa application framework. While the CGL API is used by your application only to create full-screen content, every NSOpenGLContext object contains a CGL context object. This object can be retrieved from the NSOpenGLContext when your application needs to reference it directly. To show the similarities between the two, this chapter discusses both the NSOpenGL classes and the CGL API.

To draw OpenGL content to a window or view using the NSOpenGL classes, you need to perform these tasks:

1. Set up the renderer and buffer attributes that support the OpenGL drawing you want to perform. Each of the OpenGL APIs in OS X has its own set of constants that represent renderer and buffer attributes. For example, the all-renderers attribute is represented by the NSOpenGLPFAAllRenderers constant in Cocoa and the kCGLPFAAllRenderers constant in the CGL API.

2. Request, from the operating system, a pixel format object that encapsulates pixel storage information and the renderer and buffer attributes required by your application. The returned pixel format object contains all possible combinations of renderers and displays available on the system that your program runs on and that meet the requirements specified by the attributes. The combinations are referred to as virtual screens. (See “Virtual Screens” (page 26).) There may be situations for which you want to ensure that your program uses a specific renderer. “Choosing Renderer and Buffer Attributes” (page 64) discusses how to set up an attributes array that guarantees the system passes back a pixel format object that uses only that renderer. If an error occurs, your application may receive a NULL pixel format object. Your application must handle this condition.

3. Create a rendering context and bind the pixel format object to it. The rendering context keeps track of state information that controls such things as drawing color, view and projection matrices, characteristics of light, and conventions used to pack pixels. Your application needs a pixel format object to create a rendering context.

4. Release the pixel format object. Once the pixel format object is bound to a rendering context, its resources are no longer needed.

5. Bind a drawable object to the rendering context. For a windowed context, this is typically a Cocoa view.

6. Make the rendering context the current context. The system sends OpenGL drawing to whichever rendering context is designated as the current one. It's possible for you to set up more than one rendering context, so you need to make sure that the one you want to draw to is the current one.

7. Perform your drawing.
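For comparison, here is a hedged sketch of the same sequence expressed with the CGL API. The attribute list is an example, error handling is minimal, and attaching a drawable is left as a comment because it depends on how your application presents its content.

#include <OpenGL/OpenGL.h>

// Example renderer and buffer attributes; adjust to the drawing you want to perform.
CGLPixelFormatAttribute attrs[] = {
    kCGLPFAAccelerated,
    kCGLPFADoubleBuffer,
    kCGLPFADepthSize, (CGLPixelFormatAttribute)24,
    (CGLPixelFormatAttribute)0
};

CGLPixelFormatObj pixelFormat = NULL;
GLint numVirtualScreens = 0;

// Request a pixel format object; a NULL result means no renderer satisfies the attributes.
if (CGLChoosePixelFormat(attrs, &pixelFormat, &numVirtualScreens) == kCGLNoError && pixelFormat != NULL) {
    CGLContextObj context = NULL;
    CGLCreateContext(pixelFormat, NULL, &context);

    // The pixel format object is no longer needed once the context exists.
    CGLDestroyPixelFormat(pixelFormat);

    // Attach a drawable here, then make the context current and draw;
    // call CGLFlushDrawable(context) to present each frame.
    CGLSetCurrentContext(context);
}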
The specific functions or methods that you use to perform each of the steps are discussed in the sections that follow.

Drawing to a Cocoa View

There are two ways to draw OpenGL content to a Cocoa view. If your application has modest drawing requirements, then you can use the NSOpenGLView class. See “Drawing to an NSOpenGLView Class: A Tutorial” (page 37).

If your application is more complex and needs to support drawing to multiple rendering contexts, you may want to consider subclassing the NSView class. For example, if your application supports drawing to multiple views at the same time, you need to set up a custom NSView class. See “Drawing OpenGL Content to a Custom View” (page 40).

Drawing to an NSOpenGLView Class: A Tutorial

The NSOpenGLView class is a lightweight subclass of the NSView class that provides convenience methods for setting up OpenGL drawing. An NSOpenGLView object maintains an NSOpenGLPixelFormat object and an NSOpenGLContext object into which OpenGL calls can be rendered. It provides methods for accessing and managing the pixel format object and the rendering context, and handles notification of visible region changes. An NSOpenGLView object does not support subviews. You can, however, divide the view into multiple rendering areas using the OpenGL function glViewport.

This section provides step-by-step instructions for creating a simple Cocoa application that draws OpenGL content to a view. The tutorial assumes that you know how to use Xcode and Interface Builder. If you have never created an application using the Xcode development environment, see Getting Started with Tools.

1. Create a Cocoa application project named Golden Triangle.

2. Add the OpenGL framework to your project.

3. Add a new file to your project using the Objective-C class template. Name the file MyOpenGLView.m and create a header file for it.

4. Open the MyOpenGLView.h file and modify the file so that it looks like the code shown in Listing 2-1 to declare the interface.

Listing 2-1 The interface for MyOpenGLView

#import <Cocoa/Cocoa.h>

@interface MyOpenGLView : NSOpenGLView
{
}
- (void) drawRect: (NSRect) bounds;
@end

5. Save and close the MyOpenGLView.h file.

6. Open the MyOpenGLView.m file and include the OpenGL/gl.h header, as shown in Listing 2-2.

Listing 2-2 Include OpenGL/gl.h

#import "MyOpenGLView.h"
#include <OpenGL/gl.h>

@implementation MyOpenGLView
@end

7. Implement the drawRect: method as shown in Listing 2-3, adding the code after the @implementation statement. The method sets the clear color to black and clears the color buffer in preparation for drawing. Then, drawRect: calls your drawing routine, which you’ll add next. The OpenGL command glFlush draws the content provided by your routine to the view.

Listing 2-3 The drawRect: method for MyOpenGLView

-(void) drawRect: (NSRect) bounds
{
    glClearColor(0, 0, 0, 0);
    glClear(GL_COLOR_BUFFER_BIT);
    drawAnObject();
    glFlush();
}

8. Add the code to perform your drawing. In your own application, you'd perform whatever drawing is appropriate. But for the purpose of learning how to draw OpenGL content to a view, add the code shown in Listing 2-4. This code draws a 2D, gold-colored triangle, whose dimensions are not quite the dimensions of a true golden triangle, but good enough to show how to perform OpenGL drawing.
Make sure that you insert this routine before the drawRect: method in the MyOpenGLView.m file.

Listing 2-4 Code that draws a triangle using OpenGL commands

static void drawAnObject()
{
    glColor3f(1.0f, 0.85f, 0.35f);
    glBegin(GL_TRIANGLES);
    {
        glVertex3f(  0.0,  0.6, 0.0);
        glVertex3f( -0.2, -0.3, 0.0);
        glVertex3f(  0.2, -0.3, 0.0);
    }
    glEnd();
}

9. Open the MainMenu.xib in Interface Builder.

10. Change the window’s title to Golden Triangle.

11. Drag an NSOpenGLView object from the Library to the window. Resize the view to fit the window.

12. Change the class of this object to MyOpenGLView.

13. Open the Attributes pane of the inspector for the view, and take a look at the renderer and buffer attributes that are available to set. These settings save you from setting attributes programmatically. Only those attributes listed in the Interface Builder inspector are set when the view is instantiated. If you need additional attributes, you need to set them programmatically.

14. Build and run your application. You should see content similar to the triangle shown in Figure 2-2.

Figure 2-2 The output from the Golden Triangle program

This example is extremely simple. In a more complex application, you'd want to do the following:

● Replace the immediate-mode drawing commands with commands that persist your vertex data inside OpenGL. See “OpenGL Application Design Strategies” (page 89).

● In the interface for the view, declare a variable that indicates whether the view is ready to accept drawing. A view is ready for drawing only if it is bound to a rendering context and that context is set to be the current one.

● Cocoa does not call initialization routines for objects created in Interface Builder. If you need to perform any initialization tasks, do so in the awakeFromNib method for the view. Note that because you set attributes in the inspector, there is no need to set them up programmatically unless you need additional ones. There is also no need to create a pixel format object programmatically; it is created and loaded when Cocoa loads the nib file.

● Your drawRect: method should test whether the view is ready to draw into. You need to provide code that handles the case when the view is not ready to draw into.

● OpenGL is at its best when doing real-time and interactive graphics. Your application needs to provide a timer or support user interaction. For more information about creating animation in your OpenGL application, see “Synchronize with the Screen Refresh Rate” (page 96).

Drawing OpenGL Content to a Custom View

This section provides an overview of the key tasks you need to perform to customize the NSView class for OpenGL drawing. Before you create a custom view for OpenGL drawing, you should read “Creating a Custom View” in View Programming Guide.

When you subclass the NSView class to create a custom view for OpenGL drawing, you override any Quartz drawing or other content that is in that view. To set up a custom view for OpenGL drawing, subclass NSView and create two private variables—one which is an NSOpenGLContext object and the other an NSOpenGLPixelFormat object, as shown in Listing 2-5.
Listing 2-5 The interface for a custom OpenGL view

@class NSOpenGLContext, NSOpenGLPixelFormat;

@interface CustomOpenGLView : NSView
{
    @private
    NSOpenGLContext*     _openGLContext;
    NSOpenGLPixelFormat* _pixelFormat;
}
+ (NSOpenGLPixelFormat*)defaultPixelFormat;
- (id)initWithFrame:(NSRect)frameRect pixelFormat:(NSOpenGLPixelFormat*)format;
- (void)setOpenGLContext:(NSOpenGLContext*)context;
- (NSOpenGLContext*)openGLContext;
- (void)clearGLContext;
- (void)prepareOpenGL;
- (void)update;
- (void)setPixelFormat:(NSOpenGLPixelFormat*)pixelFormat;
- (NSOpenGLPixelFormat*)pixelFormat;
@end

In addition to the usual accessor methods for the private variables (openGLContext, setOpenGLContext:, pixelFormat, and setPixelFormat:), you need to implement the following methods:

● + (NSOpenGLPixelFormat*) defaultPixelFormat
Use this method to allocate and initialize the NSOpenGLPixelFormat object.

● - (void) clearGLContext
Use this method to clear and release the NSOpenGLContext object.

● - (void) prepareOpenGL
Use this method to initialize the OpenGL state after creating the NSOpenGLContext object.

You need to override the update and initWithFrame: methods of the NSView class.

● update calls the update method of the NSOpenGLContext class.

● initWithFrame:pixelFormat: retains the pixel format and sets up the notification NSViewGlobalFrameDidChangeNotification. See Listing 2-6.

Listing 2-6 The initWithFrame:pixelFormat: method

- (id)initWithFrame:(NSRect)frameRect pixelFormat:(NSOpenGLPixelFormat*)format
{
    self = [super initWithFrame:frameRect];
    if (self != nil) {
        _pixelFormat = [format retain];
        [[NSNotificationCenter defaultCenter] addObserver:self
                selector:@selector(_surfaceNeedsUpdate:)
                name:NSViewGlobalFrameDidChangeNotification
                object:self];
    }
    return self;
}

- (void) _surfaceNeedsUpdate:(NSNotification*)notification
{
    [self update];
}

If the custom view is not guaranteed to be in a window, you must also override the lockFocus method of the NSView class. See Listing 2-7. This method makes sure that the view is locked prior to drawing and that the context is the current one.

Listing 2-7 The lockFocus method

- (void)lockFocus
{
    NSOpenGLContext* context = [self openGLContext];

    [super lockFocus];
    if ([context view] != self) {
        [context setView:self];
    }
    [context makeCurrentContext];
}

The reshape method is not supported by the NSView class. You need to update bounds in the drawRect: method, which should take the form shown in Listing 2-8.

Listing 2-8 The drawRect: method for a custom view

- (void)drawRect:(NSRect)dirtyRect
{
    NSOpenGLContext* context = [self openGLContext];
    [context makeCurrentContext];
    // Perform drawing here
    [context flushBuffer];
}

There may be other methods that you want to add. For example, you might consider detaching the context from the drawable object when the custom view is moved from the window, as shown in Listing 2-9.

Listing 2-9 Detaching the context from a drawable object

- (void)viewDidMoveToWindow
{
    [super viewDidMoveToWindow];
    if ([self window] == nil)
        [[self openGLContext] clearDrawable];
}

Optimizing OpenGL for High Resolution

OpenGL is a pixel-based API, so the NSOpenGLView class does not provide high-resolution surfaces by default.
Because adding more pixels to renderbuffers has performance implications, you must explicitly opt in to support high-resolution screens. It’s easy to enable high-resolution backing for an OpenGL view. When you do, you’ll want to perform a few additional tasks to ensure the best possible high-resolution experience for your users.

Enable High-Resolution Backing for an OpenGL View

You can opt in to high resolution by calling the method setWantsBestResolutionOpenGLSurface: when you initialize the view, and supplying YES as an argument:

[self setWantsBestResolutionOpenGLSurface:YES];

If you don’t opt in, the system magnifies the rendered results.

The wantsBestResolutionOpenGLSurface property is relevant only for views to which an NSOpenGLContext object is bound. Its value does not affect the behavior of other views. For compatibility, wantsBestResolutionOpenGLSurface defaults to NO, providing a 1-pixel-per-point framebuffer regardless of the backing scale factor for the display the view occupies. Setting this property to YES for a given view causes AppKit to allocate a higher-resolution framebuffer when appropriate for the backing scale factor and target display.

To function correctly with wantsBestResolutionOpenGLSurface set to YES, a view must perform correct conversions between view units (points) and pixel units as needed. For example, the common practice of passing the width and height of [self bounds] to glViewport() will yield incorrect results at high resolution, because the parameters passed to the glViewport() function must be in pixels. As a result, you’ll get only partial instead of complete coverage of the render surface. Instead, use the backing store bounds:

[self convertRectToBacking:[self bounds]];

You can also opt in to high resolution by enabling the Supports Hi-Res Backing setting for the OpenGL view in Xcode, as shown in Figure 3-1.

Figure 3-1 Enabling high-resolution backing for an OpenGL view

Set Up the Viewport to Support High Resolution

The viewport dimensions are in pixels relative to the OpenGL surface. Pass the width and height to glViewport and use 0,0 for the x and y offsets. Listing 3-1 shows how to get the view dimensions in pixels and take the backing store size into account.

Listing 3-1 Setting up the viewport for drawing

- (void)drawRect:(NSRect)rect // NSOpenGLView subclass
{
    // Get view dimensions in pixels
    NSRect backingBounds = [self convertRectToBacking:[self bounds]];

    GLsizei backingPixelWidth  = (GLsizei)(backingBounds.size.width),
            backingPixelHeight = (GLsizei)(backingBounds.size.height);

    // Set viewport
    glViewport(0, 0, backingPixelWidth, backingPixelHeight);

    // draw…
}

You don’t need to perform rendering in pixels, but you do need to be aware of the coordinate system you want to render in. For example, if you want to render in points, this code will work:

glOrtho(NSWidth(bounds), NSHeight(bounds),...)

Adjust Model and Texture Assets

If you opt in to high-resolution drawing, you also need to adjust the model and texture assets of your app. For example, when running on a high-resolution display, you might want to choose larger models and more detailed textures to take advantage of the increased number of pixels. Conversely, on a standard-resolution display, you can continue to use smaller models and textures.
If you create and cache textures when you initialize your app, you might want to consider a strategy that accommodates changing the texture based on the resolution of the display.

Check for Calls Defined in Pixel Dimensions

These functions use pixel dimensions:

● glViewport (GLint x, GLint y, GLsizei width, GLsizei height)
● glScissor (GLint x, GLint y, GLsizei width, GLsizei height)
● glReadPixels (GLint x, GLint y, GLsizei width, GLsizei height, ...)
● glLineWidth (GLfloat width)
● glRenderbufferStorage (..., GLsizei width, GLsizei height)
● glTexImage2D (..., GLsizei width, GLsizei height, ...)

Tune OpenGL Performance for High Resolution

Performance is an important factor when determining whether to support high-resolution content. The quadrupling of pixels that occurs when you opt in to high resolution requires more work by the fragment processor. If your app performs many per-fragment calculations, the increase in pixels might reduce its frame rate. If your app runs significantly slower at high resolution, consider the following options:

● Optimize fragment shader performance. (See “Tuning Your OpenGL Application” (page 155).)

● Choose a simpler algorithm to implement in your fragment shader. This reduces the quality of each individual pixel to allow for rendering the overall image at a higher resolution.

● Use a fractional scale factor between 1.0 and 2.0. A scale factor of 1.5 provides better quality than a scale factor of 1.0, but it needs to fill fewer pixels than an image scaled to 2.0.

● Multisample antialiasing can be costly with marginal benefit at high resolution. If you are using it, you might want to reconsider.

The best solution depends on the needs of your OpenGL app; you should test more than one of these options and choose the approach that provides the best balance between performance and image quality.

Use a Layer-Backed View to Overlay Text on OpenGL Content

When you draw standard controls and Cocoa text to a layer-backed view, the system handles scaling the contents of that layer for you. You need to perform only a few steps to set and use the layer. Compare the controls and text in standard and high resolutions, as shown in Figure 3-2. The text looks the same on both without any additional work on your part.

Figure 3-2 A text overlay scales automatically for standard resolution (left) and high resolution (right)

To set up a layer-backed view for OpenGL content

1. Set the wantsLayer property of your NSOpenGLView subclass to YES. Enabling the wantsLayer property of an NSOpenGLView object activates layer-backed rendering of the OpenGL view. Drawing a layer-backed OpenGL view proceeds mostly normally through the view’s drawRect: method. The layer-backed rendering mode uses its own NSOpenGLContext object, which is distinct from the NSOpenGLContext that the view uses for drawing in non-layer-backed mode. AppKit automatically creates this context and assigns it to the view by invoking the setOpenGLContext: method. The view’s openGLContext accessor will return the layer-backed OpenGL context (rather than the non-layer-backed context) while the view is operating in layer-backed mode.
2. Create the layer content either as a XIB file or programmatically. The controls shown in Figure 3-2 were created in a XIB file by subclassing NSBox and using static text with a variety of standard controls. Using this approach allows the NSBox subclass to ignore mouse events while still allowing the user to interact with the OpenGL content.

3. Add the layer to the OpenGL view by calling the addSublayer: method.

Use an Application Window for Fullscreen Operation

For the best user experience, if you want your app to run full screen, create a window that covers the entire screen. This approach offers two advantages:

● The system provides optimized context performance.

● Users will be able to see critical system dialogs above your content.

You should avoid changing the display mode of the system.

Convert the Coordinate Space When Hit Testing

Always convert window event coordinates when performing hit testing in OpenGL. The locationInWindow method of the NSEvent class returns the receiver’s location in the base coordinate system of the window. You then need to call the convertPoint:fromView: method to get the local coordinates for the OpenGL view.

NSPoint aPoint = [theEvent locationInWindow];
NSPoint localPoint = [myOpenGLView convertPoint:aPoint fromView:nil];

Drawing to the Full Screen

In OS X, you have the option to draw to the entire screen. This is a common scenario for games and other immersive applications, and OS X applies additional optimizations to improve the performance of full-screen contexts.

Figure 4-1 Drawing OpenGL content to the full screen

OS X v10.6 and later automatically optimize the performance of screen-sized windows, allowing your application to take complete advantage of the window server environment on OS X. For example, critical operating system dialogs may be displayed over your content when necessary. For information about high-resolution and full-screen drawing, see “Use an Application Window for Fullscreen Operation” (page 49).

Creating a Full-Screen Application

Creating a full-screen context is very simple. Your application should follow these steps:

1. Create a screen-sized window on the display you want to take over:

NSRect mainDisplayRect = [[NSScreen mainScreen] frame];
NSWindow *fullScreenWindow = [[NSWindow alloc] initWithContentRect:mainDisplayRect
                                styleMask:NSBorderlessWindowMask
                                backing:NSBackingStoreBuffered
                                defer:YES];

2. Set the window level to be above the menu bar:

[fullScreenWindow setLevel:NSMainMenuWindowLevel+1];

3. Perform any other window configuration you desire:

[fullScreenWindow setOpaque:YES];
[fullScreenWindow setHidesOnDeactivate:YES];

4. Create a view with a double-buffered OpenGL context and attach it to the window:

NSOpenGLPixelFormatAttribute attrs[] =
{
    NSOpenGLPFADoubleBuffer,
    0
};
NSOpenGLPixelFormat* pixelFormat = [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs];

NSRect viewRect = NSMakeRect(0.0, 0.0, mainDisplayRect.size.width, mainDisplayRect.size.height);

MyOpenGLView *fullScreenView = [[MyOpenGLView alloc] initWithFrame:viewRect pixelFormat:pixelFormat];

[fullScreenWindow setContentView:fullScreenView];
5. Show the window:

[fullScreenWindow makeKeyAndOrderFront:self];

That’s all you need to do. Your content is in a window that is above most other content, but because it is in a window, OS X can still show critical UI elements above your content when necessary (such as error dialogs).

When there is no content above your full-screen window, OS X automatically attempts to optimize this context’s performance. For example, when your application calls flushBuffer on the NSOpenGLContext object, the system may swap the buffers rather than copying the contents of the back buffer to the front buffer. These performance optimizations are not applied when your application adds the NSOpenGLPFABackingStore attribute to the context. Because the system may choose to swap the buffers rather than copy them, your application must completely redraw the scene after every call to flushBuffer. For more information on NSOpenGLPFABackingStore, see “Ensuring That Back Buffer Contents Remain the Same” (page 66).

Avoid changing the display resolution from that chosen by the user. If your application needs to render data at a lower resolution for performance reasons, you can explicitly create a back buffer at the desired resolution and allow OpenGL to scale those results to the display. See “Controlling the Back Buffer Size” (page 78).

Drawing Offscreen

OpenGL applications may want to use OpenGL to render images without actually displaying them to the user. For example, an image processing application might render the image, then copy that image back to the application and save it to disk. Another useful strategy is to create intermediate images that are used later to render additional content. For example, your application might want to render an image and use it as a texture in a future rendering pass. For best performance, offscreen targets should be managed by OpenGL. Having OpenGL manage offscreen targets allows you to avoid copying pixel data back to your application, except when this is absolutely necessary.

OS X offers two useful options for creating offscreen rendering targets:

● Framebuffer objects. The OpenGL framebuffer extension allows your application to create fully supported offscreen OpenGL framebuffers. Framebuffer objects are fully supported as a cross-platform extension, so they are the preferred way to create offscreen rendering targets. See “Rendering to a Framebuffer Object” (page 53).

● Pixel buffer drawable objects. Pixel buffer drawable objects are an Apple-specific technology for creating an offscreen target. Each of the Apple-specific OpenGL APIs provides routines to create an offscreen hardware-accelerated pixel buffer. Pixel buffers are recommended for use only when framebuffer objects are not available. See “Rendering to a Pixel Buffer” (page 60).

Rendering to a Framebuffer Object

The OpenGL framebuffer extension (GL_EXT_framebuffer_object) allows applications to create offscreen rendering targets from within OpenGL. OpenGL manages the memory for these framebuffers.

Note: Extensions are available on a per-renderer basis. Before you use framebuffer objects you must check each renderer to make sure that it supports the extension. See “Detecting Functionality” (page 83) for more information.
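The note above says to verify that the current renderer supports the extension before using it. The sketch below shows one simple way to do that for a legacy (pre-OpenGL 3.2) context by scanning the extension string; the helper name is hypothetical, a production check should match whole tokens rather than substrings, and the check must be repeated whenever the virtual screen (renderer) changes.

#include <OpenGL/gl.h>
#include <string.h>

// Returns non-zero if the named extension appears in the current renderer's
// extension string. Substring matching is good enough for a sketch only.
static int HasExtension(const char *name)
{
    const char *extensions = (const char *)glGetString(GL_EXTENSIONS);
    return (extensions != NULL) && (strstr(extensions, name) != NULL);
}

// Usage: take the framebuffer object path only when the extension is present.
// if (HasExtension("GL_EXT_framebuffer_object")) { /* set up the FBO */ }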
A framebuffer object (FBO) is similar to a drawable object, except a drawable object is a window-system-specific object, whereas a framebuffer object is a window-agnostic object that's defined in the OpenGL standard. After drawing to a framebuffer object, it is straightforward to read the pixel data to the application, or to use it as source data for other OpenGL commands.

Framebuffer objects offer a number of benefits:

● They are window-system independent, which makes porting code easier.

● They are easy to set up and save memory. There is no need to set up attributes and obtain a pixel format object.

● They are associated with a single OpenGL context, whereas each pixel buffer must be bound to a context.

● You can switch between them faster since there is no context switch as with pixel buffers. Because all commands are rendered by a single context, no additional serialization is required.

● They can share depth buffers; pixel buffers cannot.

● You can use them for 2D pixel images and texture images.

Completeness is a key concept to understanding framebuffer objects. Completeness is a state that indicates whether a framebuffer object meets all the requirements for drawing. You test for this state after performing all the necessary setup work. If a framebuffer object is not complete, it cannot be used as the destination for rendering operations or as a source for read operations.

Completeness is dependent on many factors that are not possible to condense into one or two statements, but these factors are thoroughly defined in the OpenGL specification for the framebuffer object extension. The specification describes the requirements for internal formats of images attached to the framebuffer, how to determine if a format is color-, depth-, and stencil-renderable, as well as other requirements.

Prior to using framebuffer objects, read the OpenGL specification, which not only defines the framebuffer object API, but provides detailed definitions of all the terms necessary to understand their use and shows several code examples.

The remainder of this section provides an overview of how to use a framebuffer as either a texture or an image. The functions used to set up textures and images are slightly different. The API for images uses the renderbuffer terminology defined in the OpenGL specification. A renderbuffer image is simply a 2D pixel image. The API for textures uses texture terminology, as you might expect. For example, one of the calls for setting up a framebuffer object for a texture is glFramebufferTexture2DEXT, whereas the call for setting up a framebuffer object for an image is glFramebufferRenderbufferEXT. You'll see how to set up a simple framebuffer object for each type of drawing, starting first with textures.

Using a Framebuffer Object as a Texture

These are the basic steps needed to set up a framebuffer object for drawing a texture offscreen:

1. Make sure the framebuffer extension (GL_EXT_framebuffer_object) is supported on the system that your code runs on. See “Determining the OpenGL Capabilities Supported by the Renderer” (page 83).

2. Check the renderer limits.
For example, you might want to call the OpenGL function glGetIntegerv to check the maximum texture size (GL_MAX_TEXTURE_SIZE) or find out the maximum number of color buffers you can attach to the framebuffer object (GL_MAX_COLOR_ATTACHMENTS_EXT).

3. Generate a framebuffer object name by calling the following function:

void glGenFramebuffersEXT (GLsizei n, GLuint *ids);

n is the number of framebuffer object names that you want to create. On return, *ids points to the generated names.

4. Bind the framebuffer object name to a framebuffer target by calling the following function:

void glBindFramebufferEXT(GLenum target, GLuint framebuffer);

target should be the constant GL_FRAMEBUFFER_EXT. framebuffer is set to an unused framebuffer object name. On return, the framebuffer object is initialized to the state values described in the OpenGL specification for the framebuffer object extension. Each attachment point of the framebuffer is initialized to the attachment point state values described in the specification. The number of attachment points is equal to GL_MAX_COLOR_ATTACHMENTS_EXT plus 2 (for depth and stencil attachment points). Whenever a framebuffer object is bound, drawing commands are directed to it instead of being directed to the drawable associated with the rendering context.

5. Generate a texture name.

void glGenTextures(GLsizei n, GLuint *textures);

n is the number of texture object names that you want to create. On return, *textures points to the generated names.

6. Bind the texture name to a texture target.

void glBindTexture(GLenum target, GLuint texture);

target is the type of texture to bind. texture is the texture name you just created.

7. Set up the texture environment and parameters.

8. Define the texture by calling the appropriate OpenGL function to specify the target, level of detail, internal format, dimensions, border, pixel data format, and texture data storage.

9. Attach the texture to the framebuffer by calling the following function:

void glFramebufferTexture2DEXT (GLenum target, GLenum attachment, GLenum textarget, GLuint texture, GLint level);

target must be GL_FRAMEBUFFER_EXT. attachment must be one of the attachment points of the framebuffer: GL_STENCIL_ATTACHMENT_EXT, GL_DEPTH_ATTACHMENT_EXT, or GL_COLOR_ATTACHMENTn_EXT, where n is a number from 0 to GL_MAX_COLOR_ATTACHMENTS_EXT-1. textarget is the texture target. texture is an existing texture object. level is the mipmap level of the texture image to attach to the framebuffer.

10. Check to make sure that the framebuffer is complete by calling the following function:

GLenum glCheckFramebufferStatusEXT(GLenum target);

target must be the constant GL_FRAMEBUFFER_EXT. This function returns a status constant. You must test to make sure that the constant is GL_FRAMEBUFFER_COMPLETE_EXT. If it isn't, see the OpenGL specification for the framebuffer object extension for a description of the other constants in the status enumeration.

11. Render content to the texture. You must make sure to bind a different texture to the framebuffer object or disable texturing before you render content. If you render to a framebuffer object texture attachment with that same texture currently bound and enabled, the result is undefined.
12. To draw the contents of the texture to a window, make the window the target of all rendering commands by calling the function glBindFramebufferEXT and passing the constant GL_FRAMEBUFFER_EXT and 0. The window is always specified as 0.

13. Use the texture attachment as a normal texture by binding it, enabling texturing, and drawing.

14. Delete the texture.

15. Delete the framebuffer object by calling the following function:

void glDeleteFramebuffersEXT (GLsizei n, const GLuint *framebuffers);

n is the number of framebuffer objects to delete. *framebuffers points to an array that contains the framebuffer object names.

Listing 5-1 shows code that performs these tasks. This example creates and draws to a single framebuffer object.

Listing 5-1 Setting up a framebuffer for texturing

GLuint framebuffer, texture;
GLenum status;
glGenFramebuffersEXT(1, &framebuffer);

// Set up the FBO with one texture attachment
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebuffer);
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, TEXWIDE, TEXHIGH, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, texture, 0);
status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT) {
    // Handle error here
}

// Your code to draw content to the FBO
// ...

// Make the window the target
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

// Your code to use the contents of the FBO
// ...

// Tear down the FBO and texture attachment
glDeleteTextures(1, &texture);
glDeleteFramebuffersEXT(1, &framebuffer);

Using a Framebuffer Object as an Image

There is a lot of similarity between setting up a framebuffer object for drawing images and setting one up to draw textures. These are the basic steps needed to set up a framebuffer object for drawing a 2D pixel image (a renderbuffer image) offscreen:

1. Make sure the framebuffer extension (EXT_framebuffer_object) is supported on the renderer that your code runs on.

2. Check the renderer limits. For example, you might want to call the OpenGL function glGetIntegerv to find out the maximum number of color buffers (GL_MAX_COLOR_ATTACHMENTS_EXT).

3. Generate a framebuffer object name by calling the function glGenFramebuffersEXT.

4. Bind the framebuffer object name to a framebuffer target by calling the function glBindFramebufferEXT.

5. Generate a renderbuffer object name by calling the following function:

void glGenRenderbuffersEXT (GLsizei n, GLuint *renderbuffers);

n is the number of renderbuffer object names to create. *renderbuffers points to storage for the generated names.

6. Bind the renderbuffer object name to a renderbuffer target by calling the following function:

void glBindRenderbufferEXT (GLenum target, GLuint renderbuffer);

target must be the constant GL_RENDERBUFFER_EXT. renderbuffer is the renderbuffer object name generated previously.
7. Create data storage and establish the pixel format and dimensions of the renderbuffer image by calling the following function:

void glRenderbufferStorageEXT (GLenum target, GLenum internalformat, GLsizei width, GLsizei height);

target must be the constant GL_RENDERBUFFER_EXT. internalformat is the pixel format of the image. The value must be RGB, RGBA, DEPTH_COMPONENT, STENCIL_INDEX, or one of the other formats listed in the OpenGL specification. width is the width of the image, in pixels. height is the height of the image, in pixels.

8. Attach the renderbuffer to a framebuffer target by calling the function glFramebufferRenderbufferEXT.

void glFramebufferRenderbufferEXT(GLenum target, GLenum attachment, GLenum renderbuffertarget, GLuint renderbuffer);

target must be the constant GL_FRAMEBUFFER_EXT. attachment should be one of the attachment points of the framebuffer: GL_STENCIL_ATTACHMENT_EXT, GL_DEPTH_ATTACHMENT_EXT, or GL_COLOR_ATTACHMENTn_EXT, where n is a number from 0 to GL_MAX_COLOR_ATTACHMENTS_EXT–1. renderbuffertarget must be the constant GL_RENDERBUFFER_EXT. renderbuffer should be set to the name of the renderbuffer object that you want to attach to the framebuffer.

9. Check to make sure that the framebuffer is complete by calling the following function:

GLenum glCheckFramebufferStatusEXT(GLenum target);

target must be the constant GL_FRAMEBUFFER_EXT. This function returns a status constant. You must test to make sure that the constant is GL_FRAMEBUFFER_COMPLETE_EXT. If it isn't, see the OpenGL specification for the framebuffer object extension for a description of the other constants in the status enumeration.

10. Render content to the renderbuffer.

11. To access the contents of the renderbuffer object, bind the framebuffer object and then use OpenGL functions such as glReadPixels or glCopyTexImage2D.

12. Delete the framebuffer object with its renderbuffer attachment.

Listing 5-2 shows code that sets up and draws to a single renderbuffer object. Your application can set up more than one renderbuffer object if it requires them.

Listing 5-2 Setting up a renderbuffer for drawing images

GLuint framebuffer, renderbuffer;
GLenum status;

// Set the width and height appropriately for your image
GLuint imageWidth = 1024, imageHeight = 1024;

// Set up a FBO with one renderbuffer attachment
glGenFramebuffersEXT(1, &framebuffer);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebuffer);
glGenRenderbuffersEXT(1, &renderbuffer);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, renderbuffer);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGBA8, imageWidth, imageHeight);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                             GL_RENDERBUFFER_EXT, renderbuffer);
status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT) {
    // Handle errors
}

// Your code to draw content to the renderbuffer
// ...

// Your code to use the contents
// ...

// Make the window the target
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

// Delete the renderbuffer attachment
glDeleteRenderbuffersEXT(1, &renderbuffer);

Rendering to a Pixel Buffer

The OpenGL extension string GL_APPLE_pixel_buffer provides hardware-accelerated offscreen rendering to a pixel buffer. A pixel buffer is typically used as a texture source. It can also be used for remote rendering.
Important: Pixel buffers are deprecated starting with OS X v10.7 and are not supported by the OpenGL 3.2 Core profile; use framebuffer objects instead.

You must create a rendering context for each pixel buffer. For example, if you want to use a pixel buffer as a texture source, you create one rendering context attached to the pixel buffer and a second context attached to a window or view.

The first step in using a pixel buffer is to create it. The Apple-specific OpenGL APIs each provide a routine for this purpose:

● The NSOpenGLPixelBuffer method initWithTextureTarget:textureInternalFormat:textureMaxMipMapLevel:pixelsWide:pixelsHigh:

● The CGL function CGLCreatePBuffer

Each of these routines requires that you provide a texture target, an internal format, a maximum mipmap level, and the width and height of the texture.

The texture target must be one of these OpenGL texture constants: GL_TEXTURE_2D for a 2D texture, GL_TEXTURE_RECTANGLE_ARB for a rectangular (not power-of-two) texture, or GL_TEXTURE_CUBE_MAP for a cube map texture.

The internal format specifies how to interpret the data for texturing operations. You can supply any of these options: GL_RGB (each pixel is a three-component group), GL_RGBA (each pixel is a four-component group), or GL_DEPTH_COMPONENT (each pixel is a single depth component).

The maximum mipmap level should be 0 for a pixel buffer that does not have a mipmap. The value that you supply should not exceed the actual maximum number of mipmap levels that can be represented with the given width and height.

Note that none of the routines that create a pixel buffer allocate the storage needed. The storage is allocated by the system at the time that you attach the pixel buffer to a rendering context.

Setting Up a Pixel Buffer for Offscreen Drawing

After you create a pixel buffer, the general procedure for using a pixel buffer for drawing is similar to the way you set up windows and views for drawing:

1. Specify renderer and buffer attributes.

2. Obtain a pixel format object.

3. Create a rendering context and make it current.

4. Attach a pixel buffer to the context using the appropriate Apple OpenGL attachment function:

● The setPixelBuffer:cubeMapFace:mipMapLevel:currentVirtualScreen: method of the NSOpenGLContext class instructs the receiver to render into a pixel buffer.

● The CGL function CGLSetPBuffer attaches a CGL rendering context to a pixel buffer.

5. Draw, as you normally would, using OpenGL.

Using a Pixel Buffer as a Texture Source

Pixel buffers let you perform direct texturing without incurring the cost of extra copies. After drawing to a pixel buffer, you can create a texture by following these steps:

1. Generate a texture name by calling the OpenGL function glGenTextures.

2. Bind the named texture to a target by calling the OpenGL function glBindTexture.

3. Set the texture parameters by calling an OpenGL function such as glTexParameteri.

4. Set up the pixel buffer as the source for the texture by calling one of the following Apple OpenGL functions:

● The setTextureImageToPixelBuffer:colorBuffer: method of the NSOpenGLContext class attaches the image data in the pixel buffer to the texture object currently bound by the receiver.

● The CGL function CGLTexImagePBuffer binds the contents of a CGL pixel buffer as the data source for a texture object.
The context that you attach to the pixel buffer is the target rendering context: the context that uses the pixel buffer as the source of the texture data. Each of these routines requires a source parameter, which is an OpenGL constant that specifies the source buffer to texture from. The source parameter must be a valid OpenGL buffer, such as GL_FRONT, GL_BACK, or GL_AUX0, and should be compatible with the buffer attributes used to create the OpenGL context associated with the pixel buffer. This means that the pixel buffer must possess the buffer in question for texturing to succeed. For example, if the buffer attribute used with the pixel buffer is only single buffered, then texturing from the GL_BACK buffer will fail.

If you modify content of any pixel buffer that contains mipmap levels, you must call the appropriate Apple OpenGL function again (setTextureImageToPixelBuffer:colorBuffer: or CGLTexImagePBuffer) before drawing with the pixel buffer to ensure that the content is synchronized with OpenGL. To synchronize the content of pixel buffers without mipmaps, simply rebind to the texture object using glBindTexture.

5. Draw primitives using the appropriate texture coordinates. (See "The Red book"—OpenGL Programming Guide—for details.)

6. Call glFlush to cause all drawing commands to be executed.

7. When you no longer need the texture object, call the OpenGL function glDeleteTextures.

8. Set the current context to NULL using one of the Apple OpenGL routines:

● The makeCurrentContext method of the NSOpenGLContext class

● The CGL function CGLSetCurrentContext

9. Destroy the pixel buffer by calling CGLDestroyPBuffer.

10. Destroy the context by calling CGLDestroyContext.

11. Destroy the pixel format by calling CGLDestroyPixelFormat.

You might find these guidelines useful when using pixel buffers for texturing:

● You cannot make OpenGL texturing calls that modify pixel buffer content (such as glTexSubImage2D or glCopyTexImage2D) with the pixel buffer as the destination. You can use texturing commands to read data from a pixel buffer, such as glCopyTexImage2D, with the pixel buffer texture as the source. You can also use OpenGL functions such as glReadPixels to read the contents of a pixel buffer directly from the pixel buffer context.

● Texturing can fail to produce the intended results without reporting an error. You must make sure that you enable the proper texture target, set a compatible filter mode, and adhere to other requirements described in the OpenGL specification.

● You are not required to set up context sharing when you texture from a pixel buffer. You can have different pixel format objects and rendering contexts for both the pixel buffer and the target drawable object, without sharing resources, and still texture using a pixel buffer in the target context.

Rendering to a Pixel Buffer on a Remote System

Follow these steps to render to a pixel buffer on a remote system. The remote system does not need to have a display attached to it.

1. When you set the renderer and buffer attributes, include the remote pixel buffer attribute kCGLPFARemotePBuffer.

2. Log in to the remote machine using the ssh command to ensure security.

3. Run the application on the target system.

4. Retrieve the content.
Choosing Renderer and Buffer Attributes

Renderer and buffer attributes determine the renderers that the system chooses for your application. Each of the Apple-specific OpenGL APIs provides constants that specify a variety of renderer and buffer attributes. You supply a list of attribute constants to one of the Apple OpenGL functions for choosing a pixel format object. The pixel format object maintains a list of renderers that meet the requirements defined by those attributes.

In a real-world application, selecting attributes is an art because you don't know the exact combination of hardware and software that your application will run on. An attribute list that is too restrictive may miss out on future capabilities or may fail to return renderers on some systems. For example, if you specify a buffer of a specific depth, your application won't be able to take advantage of a larger buffer when more memory is available in the future. In this case, you might specify a required minimum and direct OpenGL to use the maximum available.

Although you might specify attributes that make your OpenGL content look and run its best, you also need to consider whether your application should run on a less-capable system with less speed or detail. If tradeoffs are acceptable, set the attributes accordingly.

OpenGL Profiles (OS X v10.7)

When your application is running on OS X v10.7, it should always include the kCGLPFAOpenGLProfile attribute, followed by a constant for the profile whose functionality your application requires. A profile affects different parts of OpenGL in OS X:

● A profile requires that a specific version of the OpenGL API be provided by the renderer. The renderer may implement a different version of the OpenGL specification only if that version implements the same functions and constants required by the profile; typically, this means a renderer that supports a later version of the OpenGL specification that did not remove or alter behavior specified in the version your application requested.
● The profile alters the list of OpenGL extensions returned by the renderer. For example, extensions whose functionality is provided by the version of the OpenGL specification you requested are not also returned in the list of extensions.
● On OS X, the profile affects which other renderer and buffer attributes may be included in the attribute list.

Follow these guidelines to choose an OpenGL profile:

● If you are developing a new OS X v10.7 application, implement your OpenGL functionality using the OpenGL 3.2 Core profile; include the kCGLOGLPVersion_3_2_Core constant. The OpenGL 3.2 Core profile is defined by Khronos and explicitly removes deprecated features described in earlier versions of the OpenGL specification; further, the core profile prohibits these features from being added back into OpenGL using extensions. OpenGL 3.2 Core represents a complete break from the fixed-function pipeline of OpenGL 1.x in favor of a clean, lean, shader-based pipeline. When you use the OpenGL 3.2 Core profile on OS X, legacy extensions are removed wherever their functionality is already provided by OpenGL 3.2. Further, pixel and buffer format attributes that are marked as deprecated may not be used in conjunction with the OpenGL 3.2 Core profile.
● If you are updating an existing OS X application, include the kCGLOGLPVersion_Legacy constant.
The legacy profile provides the same functionality found in earlier versions of OS X, with no changes. It continues to support older extensions as well as deprecated pixel and buffer format attributes. No new functionality will be added to the legacy profile in future versions of OS X.
● If you want to use OpenGL 3.2 in your application but also want to support earlier versions of OS X or Macs that lack hardware support for OpenGL 3.2, you must implement multiple OpenGL rendering options in your application. On OS X v10.7, your application should first test whether OpenGL 3.2 is supported. If it is, create a context and provide it to your OpenGL 3.2 rendering path; otherwise, search for a pixel format using the legacy profile instead. For more information on migrating an application to OpenGL 3.2, see "Updating an Application to Support the OpenGL 3.2 Core Specification" (page 168).

Buffer Size Attribute Selection Tips

Follow these guidelines to choose buffer attributes that specify buffer size:

● To choose color, depth, and accumulation buffers that are greater than or equal to a size you specify, use the minimum policy attribute (NSOpenGLPFAMinimumPolicy or kCGLPFAMinimumPolicy).
● To choose color, depth, and accumulation buffers that are closest to a size you specify, use the closest policy attribute (NSOpenGLPFAClosestPolicy or kCGLPFAClosestPolicy).
● To choose the largest color, depth, and accumulation buffers available, use the maximum policy attribute (NSOpenGLPFAMaximumPolicy or kCGLPFAMaximumPolicy). As long as you pass a value that is greater than 0, this attribute specifies the use of color, depth, and accumulation buffers that are the largest size possible.

Ensuring That Back Buffer Contents Remain the Same

When your application uses a double-buffered context, it displays the rendered image by calling a function to flush the image to the screen (the NSOpenGLContext class's flushBuffer method or the CGL function CGLFlushDrawable). When the image is displayed, the contents of the back buffer are not preserved. The next time your application wants to update the back buffer, it must completely redraw the scene.

Your application can add a backing store attribute (NSOpenGLPFABackingStore or kCGLPFABackingStore) to preserve the contents of the buffer after the back buffer is flushed. Adding this attribute disables some optimizations that the system can perform, which may impact the performance of your application.

Ensuring a Valid Pixel Format Object

The pixel format routines (the initWithAttributes: method of the NSOpenGLPixelFormat class and the CGLChoosePixelFormat function) return a pixel format object that you use to create a rendering context. The buffer and renderer attributes that you supply to the pixel format routine determine the characteristics of the OpenGL drawing sent to the rendering context. If the system can't find at least one pixel format that satisfies the constraints specified by the attribute array, it returns NULL for the pixel format object. In this case, your application should have an alternative that ensures it can obtain a valid object.

One alternative is to set up your attribute array with the least restrictive attribute first and the most restrictive attribute last. Then it is fairly easy to adjust the attribute list and make another request for a pixel format object.
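As one example of this fallback strategy (and of the profile guidelines above), the following sketch first asks for an OpenGL 3.2 Core profile pixel format and, if none is available, retries with the legacy profile. It is a sketch only; the surrounding attribute choices are illustrative and error handling is omitted.

#include <OpenGL/OpenGL.h>

static CGLPixelFormatObj chooseFormatPreferringCore32(void)
{
    /* Most restrictive request first: the OpenGL 3.2 Core profile. */
    CGLPixelFormatAttribute attribs[] = {
        kCGLPFAOpenGLProfile, (CGLPixelFormatAttribute)kCGLOGLPVersion_3_2_Core,
        kCGLPFAAccelerated,
        kCGLPFADoubleBuffer,
        kCGLPFAColorSize, (CGLPixelFormatAttribute)24,
        kCGLPFADepthSize, (CGLPixelFormatAttribute)16,
        (CGLPixelFormatAttribute)0
    };
    CGLPixelFormatObj pixelFormat = NULL;
    GLint numPixelFormats = 0;

    CGLChoosePixelFormat(attribs, &pixelFormat, &numPixelFormats);
    if (pixelFormat == NULL) {
        /* No 3.2 Core renderer is available: fall back to the legacy profile. */
        attribs[1] = (CGLPixelFormatAttribute)kCGLOGLPVersion_Legacy;
        CGLChoosePixelFormat(attribs, &pixelFormat, &numPixelFormats);
    }
    return pixelFormat;   /* may still be NULL; the caller must check */
}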
The code in Listing 6-1 illustrates the same fallback technique using the CGL API, this time relaxing a supersampling requirement. Notice that the initial attribute list is set up with the supersample attribute last. If the function CGLChoosePixelFormat returns NULL for the pixel format object, the code clears the supersample attribute (terminating the list early) and tries again.

Listing 6-1 Using the CGL API to create a pixel format object

int last_attribute = 6;
CGLPixelFormatAttribute attribs[] =
{
    kCGLPFAAccelerated,
    kCGLPFAColorSize, 24,
    kCGLPFADepthSize, 16,
    kCGLPFADoubleBuffer,
    kCGLPFASupersample,
    0
};

CGLPixelFormatObj pixelFormatObj;
GLint numPixelFormats;

CGLChoosePixelFormat (attribs, &pixelFormatObj, &numPixelFormats);

if (pixelFormatObj == NULL) {
    attribs[last_attribute] = 0;   // drop the supersample requirement
    CGLChoosePixelFormat (attribs, &pixelFormatObj, &numPixelFormats);
}

if (pixelFormatObj == NULL) {
    // Your code to notify the user and take action.
}

Ensuring a Specific Type of Renderer

There are times when you want to ensure that you obtain a pixel format that supports a specific renderer type, such as a hardware-accelerated renderer. Table 6-1 lists attributes that support specific types of renderers. The table reflects the following tips for setting up pixel formats:

● To select only hardware-accelerated renderers, use both the accelerated and no-recovery attributes.
● To use only the floating-point software renderer, use the appropriate generic floating-point constant.
● To render to system memory, use the offscreen pixel attribute. Note that this rendering option does not use hardware acceleration.
● To render offscreen with hardware acceleration, specify a pixel buffer attribute. (See "Rendering to a Pixel Buffer" (page 60).)

Table 6-1 Renderer types and pixel format attributes

Renderer type                    Cocoa                                               CGL
Hardware-accelerated onscreen    NSOpenGLPFAAccelerated, NSOpenGLPFANoRecovery       kCGLPFAAccelerated, kCGLPFANoRecovery
Software (floating-point)        NSOpenGLPFARendererID, kCGLRendererGenericFloatID   kCGLPFARendererID, kCGLRendererGenericFloatID
System memory (not accelerated)  NSOpenGLPFAOffScreen                                kCGLPFAOffScreen
Hardware-accelerated offscreen   NSOpenGLPFAPixelBuffer                              kCGLPFAPBuffer

Ensuring a Single Renderer for a Display

In some cases you may want to use a specific hardware renderer and nothing else. Because the OpenGL framework normally provides a software renderer as a fallback in addition to whatever hardware renderer it chooses, you need to prevent OpenGL from choosing the software renderer as an option. To do this, specify the no-recovery attribute for a windowed drawable object.

Limiting a context to a specific display, and thus a single renderer, has its risks. If your application runs on a system that uses more than one display, dragging a windowed drawable object from one display to the other is likely to yield a less than satisfactory result: either rendering fails, or OpenGL uses the specified renderer and then copies the result to the second display. The same unsatisfactory result happens when attaching a full-screen context to another display. If you choose to use the hardware renderer associated with a specific display, you need to add code that detects and handles display changes.
The code examples that follow show how to use each of the Apple-specific OpenGL APIs to set up a context that uses a single renderer. Listing 6-2 shows how to set up an NSOpenGLPixelFormat object that supports a single renderer. The attribute NSOpenGLPFANoRecovery directs OpenGL not to provide the fallback option of the software renderer.

Listing 6-2 Setting an NSOpenGLContext object to use a specific display

#import <Cocoa/Cocoa.h>

+ (NSOpenGLPixelFormat*)defaultPixelFormat
{
    NSOpenGLPixelFormatAttribute attributes [] = {
        NSOpenGLPFAScreenMask, 0,
        NSOpenGLPFANoRecovery,
        NSOpenGLPFADoubleBuffer,
        (NSOpenGLPixelFormatAttribute)nil
    };
    CGDirectDisplayID display = CGMainDisplayID ();
    // Adds the display mask attribute for the selected display
    attributes[1] = (NSOpenGLPixelFormatAttribute) CGDisplayIDToOpenGLDisplayMask (display);
    return [[(NSOpenGLPixelFormat *)[NSOpenGLPixelFormat alloc]
                initWithAttributes:attributes] autorelease];
}

Listing 6-3 shows how to use CGL to set up a context that uses a single renderer. The attribute kCGLPFANoRecovery ensures that OpenGL does not provide the fallback option of the software renderer.

Listing 6-3 Setting a CGL context to use a specific display

#include <OpenGL/OpenGL.h>

CGLPixelFormatAttribute attribs[] = {
    kCGLPFADisplayMask, 0,
    kCGLPFANoRecovery,
    kCGLPFADoubleBuffer,
    0
};
CGLPixelFormatObj pixelFormat = NULL;
GLint numPixelFormats = 0;
CGLContextObj cglContext = NULL;
CGDirectDisplayID display = CGMainDisplayID ();
// Adds the display mask attribute for the selected display
attribs[1] = CGDisplayIDToOpenGLDisplayMask (display);
CGLChoosePixelFormat (attribs, &pixelFormat, &numPixelFormats);

Allowing Offline Renderers

Adding the attribute NSOpenGLPFAAllowOfflineRenderers allows OpenGL to include offline renderers in the list of virtual screens returned in the pixel format object. Apple recommends that you include this attribute, because it allows your application to work better in environments where renderers come and go, such as when a new display is plugged into a Mac.

If your application includes NSOpenGLPFAAllowOfflineRenderers in the list of attributes, your application must also watch for display changes and update its rendering context. See "Update the Rendering Context When the Renderer or Geometry Changes" (page 72).

OpenCL

If your application uses OpenCL to perform other computations, you may want to find an OpenGL renderer that also supports OpenCL. To do this, add the attribute NSOpenGLPFAAcceleratedCompute to the pixel format attribute list. Adding this attribute restricts the list of renderers to those that also support OpenCL. More information on OpenCL can be found in the OpenCL Programming Guide for Mac.

Deprecated Attributes

Several renderer and buffer attributes are no longer recommended, either because they are too narrowly focused or because they are no longer useful. Your application should move away from using any of these attributes:

● The robust attribute (NSOpenGLPFARobust or kCGLPFARobust) specifies only those renderers that do not have any failure modes associated with a lack of video card resources.
● The multiple-screen attribute (NSOpenGLPFAMultiScreen or kCGLPFAMultiScreen) specifies only those renderers that can drive more than one screen at a time.
● The multiprocessing-safe attribute (kCGLPFAMPSafe) specifies only those renderers that are thread safe. This attribute is deprecated in OS X because all renderers can accept commands for threads running on a second processor. However, this does not mean that all renderers are thread safe or reentrant. See "Concurrency and OpenGL" (page 148).
● The compliant attribute (NSOpenGLPFACompliant or kCGLPFACompliant) specifies only OpenGL-compliant renderers. All OS X renderers are OpenGL-compliant, so this attribute is no longer useful.
● The full-screen attribute (kCGLPFAFullScreen) requested a special full-screen context, and the window attribute (kCGLPFAWindow) required the context to support windowed drawing. OS X no longer requires a special full-screen context; it automatically provides the same performance benefits with a properly formatted window.
● The offscreen buffer attribute (kCGLPFAOffScreen) selects renderers capable of rendering to offscreen memory. Instead, use a framebuffer object as the rendering target and read the final results back to application memory.
● The pixel buffer attributes (kCGLPFAPBuffer and kCGLPFARemotePBuffer) are no longer recommended; use framebuffer objects instead.
● The auxiliary buffers attribute (kCGLPFAAuxBuffers) specifies the number of auxiliary buffers your application requires. Auxiliary buffers are not supported by the OpenGL 3.2 Core profile. Because auxiliary buffers are not supported, the kCGLPFAAuxDepthStencil attribute that modifies them is also deprecated.
● The accumulation buffer size attribute (kCGLPFAAccumSize) specifies the desired size for the accumulation buffer. Accumulation buffers are not supported by the OpenGL 3.2 Core profile.

Important: Your application may not use any of the deprecated attributes in conjunction with a profile other than the legacy profile; if you do, pixel format creation fails.

Working with Rendering Contexts

A rendering context is a container for state information. When you designate a rendering context as the current rendering context, subsequent OpenGL commands modify that context's state, objects attached to that context, or the drawable object associated with that context.

The actual drawing surfaces are never owned by the rendering context; they are created, as needed, when the rendering context is attached to a drawable object. You can attach multiple rendering contexts to the same drawing surfaces, and each context maintains its own drawing state.

"Drawing to a Window or View" (page 35), "Drawing to the Full Screen" (page 50), and "Drawing Offscreen" (page 53) show how to create a rendering context and attach it to a drawable object. This chapter describes advanced ways to interact with rendering contexts.

Update the Rendering Context When the Renderer or Geometry Changes

A renderer change can occur when the user drags a window from one display to another or when a display is attached or removed. Geometry changes occur when the display mode changes or when a window is resized or moved. If your application uses an NSOpenGLView object to maintain the context, the context is updated automatically. An application that creates a custom view to hold the rendering context must track the appropriate system events and update the context when the geometry or display changes.
Updating a rendering context notifies it of geometry changes; it doesn't flush content. Calling an update function updates the attached drawable objects and ensures that the renderer is properly updated for any virtual screen changes. If you don't update the rendering context, you may see rendering artifacts.

The routine that you call for updating determines how events related to renderer and geometry changes are handled. For applications that use or subclass NSOpenGLView, Cocoa calls the update method automatically. Applications that create an NSOpenGLContext object manually must call the update method of NSOpenGLContext directly. For a full-screen Cocoa application, calling the setFullScreen method of NSOpenGLContext ensures that depth, size, or display changes take effect.

Your application must update the rendering context after the system event but before drawing to the context. If the drawable object is resized, you may want to issue a glViewport command to ensure that the content scales properly.

Note: Some system-level events (such as display mode changes) that require a context update could reallocate the buffers of the context; you therefore need to redraw the entire scene after all context updates.

It's important that you don't update rendering contexts more than necessary. Your application should respond to system-level events and notifications rather than updating every frame. For example, you'll want to respond to window move and resize operations and to display configuration changes such as a color depth change.

Tracking Renderer Changes

It's fairly straightforward to track geometry changes, but how are renderer changes tracked? This is where the concept of a virtual screen becomes important (see "Virtual Screens" (page 26)). A change in the virtual screen indicates a renderer change, a change in renderer capability, or both. When your application detects a window resize event, window move event, or display change, it should check for a virtual screen change and respond to it so that the current application state reflects any changes in renderer capabilities.

Each of the Apple-specific OpenGL APIs has a routine that returns the current virtual screen number:

● The currentVirtualScreen method of the NSOpenGLContext class
● The CGLGetVirtualScreen function

The virtual screen number is an index into the list of virtual screens that were set up specifically for the pixel format object used for the rendering context. The number is unique to that list but is meaningless otherwise. When the renderer changes, the limits and extensions available to OpenGL may also change. Your application should retest the capabilities of the renderer and use them to choose its rendering algorithms appropriately. See "Determining the OpenGL Capabilities Supported by the Renderer" (page 83).

Updating a Rendering Context for a Custom Cocoa View

If you subclass NSView instead of using the NSOpenGLView class, your application must update the rendering context. That's due to a slight difference between the events normally handled by the NSView class and those handled by the NSOpenGLView class. Cocoa does not call a reshape method for the NSView class when the size changes, because that class does not export a reshape method to override. Instead, you need to perform reshape operations directly in your drawRect: method, looking for changes in view bounds prior to drawing content.
Using this approach provides results that are equivalent to using the reshape method of the NSOpenGLView class.

Listing 7-1 is a partial implementation of a custom view that shows how to handle context updates. The update method is called after move, resize, and display-change events and when the surface needs updating. The class adds an observer for the notification NSViewGlobalFrameDidChangeNotification, which is posted whenever an NSView object that has attached surfaces (that is, NSOpenGLContext objects) resizes, moves, or changes coordinate offsets.

It's slightly more complicated to handle changes in the display configuration. For that, you need to register for the notification NSApplicationDidChangeScreenParametersNotification through the NSApplication class. This notification is posted whenever the configuration of any of the displays attached to the computer is changed (either programmatically or when the user changes the settings in the interface).

Listing 7-1 Handling context updates for a custom view

#import <Cocoa/Cocoa.h>
#import <OpenGL/OpenGL.h>
#import <OpenGL/gl.h>

@class NSOpenGLContext, NSOpenGLPixelFormat;

@interface CustomOpenGLView : NSView
{
    @private
    NSOpenGLContext* _openGLContext;
    NSOpenGLPixelFormat* _pixelFormat;
}
- (id)initWithFrame:(NSRect)frameRect pixelFormat:(NSOpenGLPixelFormat*)format;
- (void)update;
@end

@implementation CustomOpenGLView

- (id)initWithFrame:(NSRect)frameRect pixelFormat:(NSOpenGLPixelFormat*)format
{
    self = [super initWithFrame:frameRect];
    if (self != nil) {
        _pixelFormat = [format retain];
        [[NSNotificationCenter defaultCenter] addObserver:self
                selector:@selector(_surfaceNeedsUpdate:)
                name:NSViewGlobalFrameDidChangeNotification
                object:self];
    }
    return self;
}

- (void)dealloc
{
    [[NSNotificationCenter defaultCenter] removeObserver:self
            name:NSViewGlobalFrameDidChangeNotification
            object:self];
    [self clearGLContext];
    [super dealloc];
}

- (void)update
{
    if ([_openGLContext view] == self) {
        [_openGLContext update];
    }
}

- (void) _surfaceNeedsUpdate:(NSNotification*)notification
{
    [self update];
}

@end

Context Parameters Alter the Context's Behavior

A rendering context has a variety of parameters that you can set to suit the needs of your OpenGL drawing. Some of the most useful, and often overlooked, context parameters are discussed in this section: swap interval, surface opacity, surface drawing order, and back-buffer size control.

Each of the Apple-specific OpenGL APIs provides a routine for setting and getting rendering context parameters:

● The setValues:forParameter: method of the NSOpenGLContext class takes as arguments a list of values and a list of parameters.
● The CGLSetParameter function takes as parameters a rendering context, a constant that specifies an option, and a value for that option.

Some parameters need to be enabled for their values to take effect. The reference documentation for a parameter indicates whether it needs to be enabled. See NSOpenGLContext Class Reference and CGL Reference.
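For example, here is a minimal Cocoa sketch of the parameter-setting routine, assuming an existing NSOpenGLContext named context; it sets the swap interval parameter discussed in the next subsection, and the same pattern applies to the other parameters covered below.

GLint sync = 1;   // swap buffers only during the vertical retrace
[context setValues:&sync forParameter:NSOpenGLCPSwapInterval];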
Swap Interval Allows an Application to Synchronize Updates to the Screen Refresh

If the swap interval is set to 0 (the default), buffers are swapped as soon as possible, without regard to the vertical refresh rate of the monitor. If the swap interval is set to any other value, the buffers are swapped only during the vertical retrace of the monitor. For more information, see "Synchronize with the Screen Refresh Rate" (page 96).

You can use the following constants to specify that you are setting the swap interval value:

● For Cocoa, use NSOpenGLCPSwapInterval.
● If you are using the CGL API, use kCGLCPSwapInterval, as shown in Listing 7-2.

Listing 7-2 Using CGL to set up synchronization

GLint sync = 1;
// ctx must be a valid context
CGLSetParameter (ctx, kCGLCPSwapInterval, &sync);

Surface Opacity Specifies How the OpenGL Surface Blends with Surfaces Behind It

OpenGL surfaces are typically rendered as opaque, so the background color for pixels with alpha values of 0.0 is the surface background color. If you set the value of the surface opacity parameter to 0, the contents of the surface are blended with the contents of surfaces behind the OpenGL surface. This operation is equivalent to OpenGL blending with a source contribution proportional to the source alpha and a background contribution proportional to 1 minus the source alpha. A value of 1 means the surface is opaque (the default); 0 means completely transparent.

You can use the following constants to specify that you are setting the surface opacity value:

● For Cocoa, use NSOpenGLCPSurfaceOpacity.
● If you are using the CGL API, use kCGLCPSurfaceOpacity, as shown in Listing 7-3.

Listing 7-3 Using CGL to set surface opacity

GLint opaque = 0;
// ctx must be a valid context
CGLSetParameter (ctx, kCGLCPSurfaceOpacity, &opaque);

Surface Drawing Order Specifies the Position of the OpenGL Surface Relative to the Window

A value of 1 means that the position is above the window; a value of -1 specifies a position below the window. When you have overlapping views, setting the order to -1 causes OpenGL to draw underneath, and 1 causes OpenGL to draw on top. This parameter is useful for drawing user interface controls on top of an OpenGL view.

You can use the following constants to specify that you are setting the surface drawing order value:

● For Cocoa, use NSOpenGLCPSurfaceOrder.
● If you are using the CGL API, use kCGLCPSurfaceOrder, as shown in Listing 7-4.

Listing 7-4 Using CGL to set surface drawing order

GLint order = -1; // below window
// ctx must be a valid context
CGLSetParameter (ctx, kCGLCPSurfaceOrder, &order);

Determining Whether Vertex and Fragment Processing Happens on the GPU

CGL provides two parameters for checking whether the system is using the GPU for processing: kCGLCPGPUVertexProcessing and kCGLCPGPUFragmentProcessing. To check vertex processing, pass the vertex constant to the CGLGetParameter function; to check fragment processing, pass the fragment constant to CGLGetParameter. Listing 7-5 demonstrates how to use these parameters.

Important: Although you can perform these queries at any time, keep in mind that such queries force an internal state validation, which can impact performance.
For best performance, do not use these queries inside your drawing loop. Instead, perform the queries once at initialization or context setup time to determine whether OpenGL is using the CPU or the GPU for processing, and then act appropriately in your drawing loop.

Listing 7-5 Using CGL to check whether the GPU is processing vertices and fragments

BOOL gpuProcessing;
GLint fragmentGPUProcessing, vertexGPUProcessing;
CGLGetParameter (CGLGetCurrentContext(), kCGLCPGPUFragmentProcessing, &fragmentGPUProcessing);
CGLGetParameter (CGLGetCurrentContext(), kCGLCPGPUVertexProcessing, &vertexGPUProcessing);
gpuProcessing = (fragmentGPUProcessing && vertexGPUProcessing) ? YES : NO;

Controlling the Back Buffer Size

Normally, the back buffer is the same size as the window or view that it's drawn into, and it changes size when the window or view changes size. For a window that is 720×480 pixels, the OpenGL back buffer is sized to match. If the window grows to 1024×768 pixels, for example, then the back buffer is resized as well. If you do not want this behavior, use the back buffer size control parameter.

Using this parameter fixes the size of the back buffer and lets the system scale the image automatically when it moves the data to a variable-size buffer (see Figure 7-1). The size of the back buffer remains fixed at the size that you set up, regardless of whether the image is resized to display larger onscreen.

You can use the following constants to specify that you are setting the surface backing size:

● If you are using the CGL API, use kCGLCPSurfaceBackingSize and kCGLCESurfaceBackingSize, as shown in Listing 7-6.

Listing 7-6 Using CGL to set up back buffer size control

GLint dim[2] = {720, 480};
// ctx must be a valid context
CGLSetParameter (ctx, kCGLCPSurfaceBackingSize, dim);
CGLEnable (ctx, kCGLCESurfaceBackingSize);

Figure 7-1 A fixed size back buffer and variable size front buffer

Sharing Rendering Context Resources

A rendering context does not own the drawing objects attached to it, which leaves open the option for sharing. Rendering contexts can share resources and can be attached to the same drawable object (see Figure 7-2) or to different drawable objects (see Figure 7-3). You set up context sharing (either with more than one drawable object or with another context) at the time you create a rendering context.

Contexts can share object resources and their associated object state by indicating a shared context at context creation time. Shared contexts share all texture objects, display lists, vertex programs, fragment programs, and buffer objects created before and after sharing is initiated. The state of the objects is also shared, but other context state is not, such as the current color, texture coordinate settings, matrix and lighting settings, rasterization state, and texture environment settings. You need to duplicate context state changes as required, but you need to set up individual objects only once.

Figure 7-2 Shared contexts attached to the same drawable object

When you create an OpenGL context, you can designate another context whose object resources you want to share. All sharing is peer to peer.
Shared resources are reference-counted and thus are maintained until explicitly released or until the last context sharing the resource is released.

Not every context can be shared with every other context. Both contexts must use the same OpenGL profile, and you must also ensure that both contexts share the same set of renderers. You meet these requirements by ensuring that each context uses the same virtual screen list, using either of the following techniques:

● Use the same pixel format object to create all the rendering contexts that you want to share.
● Create pixel format objects using attributes that narrow down the choice to a single display. This practice ensures that the virtual screen is identical for each pixel format object.

Figure 7-3 Shared contexts and more than one drawable object

Setting up shared rendering contexts is very straightforward. Each Apple-specific OpenGL API provides functions with an option to specify a context to share in its context creation routine:

● Use the share argument for the initWithFormat:shareContext: method of the NSOpenGLContext class. See Listing 7-7.
● Use the share parameter for the function CGLCreateContext. See Listing 7-8 (page 82).

Listing 7-7 ensures the same virtual screen list by using the same pixel format object for each of the shared contexts.

Listing 7-7 Setting up an NSOpenGLContext object for sharing

#import <Cocoa/Cocoa.h>

+ (NSOpenGLPixelFormat*)defaultPixelFormat
{
    NSOpenGLPixelFormatAttribute attributes [] = {
        NSOpenGLPFADoubleBuffer,
        (NSOpenGLPixelFormatAttribute)nil
    };
    return [(NSOpenGLPixelFormat *)[NSOpenGLPixelFormat alloc]
                initWithAttributes:attributes];
}

- (NSOpenGLContext*)openGLContextWithShareContext:(NSOpenGLContext*)context
{
    if (_openGLContext == NULL) {
        _openGLContext = [[NSOpenGLContext alloc]
                              initWithFormat:[[self class] defaultPixelFormat]
                              shareContext:context];
        [_openGLContext makeCurrentContext];
        [self prepareOpenGL];
    }
    return _openGLContext;
}

- (void)prepareOpenGL
{
    // Your code here to initialize the OpenGL state
}

Listing 7-8 ensures the same virtual screen list by using the same pixel format object for each of the shared contexts.

Listing 7-8 Setting up a CGL context for sharing

#include <OpenGL/OpenGL.h>

CGLPixelFormatAttribute attribs[] = {kCGLPFADoubleBuffer, 0};
CGLPixelFormatObj pixelFormat = NULL;
GLint numPixelFormats = 0;
CGLContextObj cglContext1 = NULL;
CGLContextObj cglContext2 = NULL;
CGLChoosePixelFormat (attribs, &pixelFormat, &numPixelFormats);
CGLCreateContext (pixelFormat, NULL, &cglContext1);
CGLCreateContext (pixelFormat, cglContext1, &cglContext2);

Determining the OpenGL Capabilities Supported by the Renderer

One of the benefits of using OpenGL is that it is extensible. An extension is typically introduced by one or more vendors and later accepted by the OpenGL Working Group. Some extensions are promoted from vendor-specific extensions to extensions shared by more than one vendor, sometimes even being incorporated into the core OpenGL API. Extensions allow OpenGL to embrace innovation, but they require you to verify that the OpenGL functionality you want to use is available.
Because extensions can be introduced at the vendor level, more than one extension can provide the same basic functionality. There might also be an ARB-approved extension that has functionality similar to that of a vendor-specific extension. Your application should prefer core functionality or ARB-approved extensions over those specific to a particular vendor, when both are offered by the same renderer; this makes it easier to transparently support new renderers from other vendors.

As particular functionality becomes widely adopted, it can be moved into the core OpenGL API by the ARB. As a result, functionality that you want to use could be available as an extension, as part of the core API, or both. For example, the ability to combine texture environments is supported through the GL_ARB_texture_env_combine and GL_EXT_texture_env_combine extensions, and it is also part of the core OpenGL 1.3 API. Although each has similar functionality, they use different syntax. You may need to check in several places (the core OpenGL API version and the extension strings) to determine whether a specific renderer supports functionality that you want to use.

Detecting Functionality

OpenGL has two types of commands: those that are part of the core API and those that are part of an extension to OpenGL. Your application first needs to check the version of the core OpenGL API and then check for the available extensions. Keep in mind that OpenGL functionality is available on a per-renderer basis. For example, a software renderer might not support fog effects even though fog effects are available in an OpenGL extension implemented by a hardware vendor on the same system. For this reason, it's important that you check for functionality on a per-renderer basis.

Regardless of what functionality you are checking for, the approach is the same. You need to call the OpenGL function glGetString twice. The first time, pass the GL_VERSION constant; the function returns a string that specifies the version of OpenGL. The second time, pass the GL_EXTENSIONS constant; the function returns a pointer to an extension name string. The extension name string is a space-delimited list of the OpenGL extensions that are supported by the current renderer. This string can be rather long, so do not allocate a fixed-length buffer for the return value of glGetString; use the returned pointer and evaluate the string in place.

Pass the extension name string to the function gluCheckExtension, along with the name of the extension you want to check for. The gluCheckExtension function returns a Boolean value that indicates whether or not the extension is available for the current renderer.

If an extension becomes part of the core OpenGL API, OpenGL continues to export the name strings of the promoted extensions. It also continues to support the previous versions of any extension that has been exported in earlier versions of OS X. Because extensions are not typically removed, the methodology you use today to check for a feature works in future versions of OS X.

Checking for functionality, although fairly straightforward, involves writing a fair amount of code. The best way to check for OpenGL functionality is to implement a capability-checking function that you call when your program starts up, and then again any time the renderer changes. Listing 8-1 shows a code excerpt that checks for a few extensions.
A detailed explanation for each numbered line of code appears following the listing.

Listing 8-1 Checking for OpenGL functionality

GLint maxRectTextureSize;
GLint myMaxTextureUnits;
GLint myMaxTextureSize;
const GLubyte * strVersion;
const GLubyte * strExt;
float myGLVersion;
GLboolean isVAO, isTexLOD, isColorTable, isFence, isShade, isTextureRectangle;
strVersion = glGetString (GL_VERSION); // 1
sscanf((char *)strVersion, "%f", &myGLVersion);
strExt = glGetString (GL_EXTENSIONS); // 2
glGetIntegerv(GL_MAX_TEXTURE_UNITS, &myMaxTextureUnits); // 3
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &myMaxTextureSize); // 4
isVAO = gluCheckExtension ((const GLubyte*)"GL_APPLE_vertex_array_object", strExt); // 5
isFence = gluCheckExtension ((const GLubyte*)"GL_APPLE_fence", strExt); // 6
isShade = gluCheckExtension ((const GLubyte*)"GL_ARB_shading_language_100", strExt); // 7
isColorTable = gluCheckExtension ((const GLubyte*)"GL_SGI_color_table", strExt) ||
               gluCheckExtension ((const GLubyte*)"GL_ARB_imaging", strExt); // 8
isTexLOD = gluCheckExtension ((const GLubyte*)"GL_SGIS_texture_lod", strExt) ||
           (myGLVersion >= 1.2); // 9
isTextureRectangle = gluCheckExtension ((const GLubyte*)"GL_EXT_texture_rectangle", strExt);
if (isTextureRectangle)
    glGetIntegerv (GL_MAX_RECTANGLE_TEXTURE_SIZE_EXT, &maxRectTextureSize);
else
    maxRectTextureSize = 0; // 10

Here is what the code does:

1. Gets a string that specifies the version of OpenGL.
2. Gets the extension name string.
3. Calls the OpenGL function glGetIntegerv to get the value of the attribute passed to it, which in this case is the maximum number of texture units.
4. Gets the maximum texture size.
5. Checks whether vertex array objects are supported.
6. Checks for the Apple fence extension.
7. Checks for support for version 1.0 of the OpenGL shading language.
8. Checks for RGBA-format color lookup table support. In this case, the code needs to check for the vendor-specific string and for the ARB string. If either is present, the functionality is supported.
9. Checks for an extension related to the texture level-of-detail (LOD) parameter. In this case, the code needs to check for the vendor-specific string and for the OpenGL version. If the vendor string is present or the OpenGL version is greater than or equal to 1.2, the functionality is supported.
10. Gets the OpenGL limit for rectangle textures. For some extensions, such as the rectangle texture extension, it may not be enough to check whether the functionality is supported; you may also need to check the limits. You can use glGetIntegerv and related functions (glGetBooleanv, glGetDoublev, glGetFloatv) to obtain a variety of parameter values.

You can extend this example to make a comprehensive functionality-checking routine for your application. For more details, see the GLCheck.c file in the Cocoa OpenGL sample application.

The code in Listing 8-2 shows one way to query the current renderer. It uses the CGL API, which can be called from Cocoa applications. In reality, you need to iterate over all displays and all renderers for each display to get a true picture of the OpenGL functionality available on a particular system.
You also need to update your functionality snapshot each time the list of displays or the display configuration changes.

Listing 8-2 Setting up a valid rendering context to get renderer functionality information

#include <OpenGL/OpenGL.h>
#include <ApplicationServices/ApplicationServices.h>

CGDirectDisplayID display = CGMainDisplayID (); // 1
CGOpenGLDisplayMask myDisplayMask = CGDisplayIDToOpenGLDisplayMask (display); // 2

{ // Check capabilities of display represented by display mask
    CGLPixelFormatAttribute attribs[] = {kCGLPFADisplayMask, myDisplayMask, 0}; // 3
    CGLPixelFormatObj pixelFormat = NULL;
    GLint numPixelFormats = 0;
    CGLContextObj myCGLContext = 0;
    CGLContextObj curr_ctx = CGLGetCurrentContext (); // 4
    CGLChoosePixelFormat (attribs, &pixelFormat, &numPixelFormats); // 5
    if (pixelFormat) {
        CGLCreateContext (pixelFormat, NULL, &myCGLContext); // 6
        CGLDestroyPixelFormat (pixelFormat); // 7
        CGLSetCurrentContext (myCGLContext); // 8
        if (myCGLContext) {
            // Check for capabilities and functionality here
        }
    }
    CGLDestroyContext (myCGLContext); // 9
    CGLSetCurrentContext (curr_ctx); // 10
}

Here's what the code does:

1. Gets the display ID of the main display.
2. Maps the display ID to an OpenGL display mask.
3. Fills a pixel format attributes array with the display mask attribute and the mask value.
4. Saves the current context so that it can be restored later.
5. Gets the pixel format object for the display. The numPixelFormats parameter specifies how many pixel formats are listed in the pixel format object.
6. Creates a context based on the first pixel format in the list supplied by the pixel format object. Only one renderer will be associated with this context. In your application, you would need to iterate through all pixel formats for this display.
7. Destroys the pixel format object when it is no longer needed.
8. Sets the current context to the newly created, single-renderer context. Now you are ready to check for the functionality supported by the current renderer. See Listing 8-1 (page 84) for an example of functionality-checking code.
9. Destroys the context because it is no longer needed.
10. Restores the previously saved context as the current context, thus ensuring no intrusion upon the user.

Guidelines for Code That Checks for Functionality

The guidelines in this section ensure that your functionality-checking code is thorough yet efficient.

● Don't rely on what's in a header file. A function declaration in a header file does not ensure that a feature is supported by the current renderer. Neither does linking against a stub library that exports a function.
● Make sure that a renderer is attached to a valid rendering context before you check the functionality of that renderer.
● Check the API version or the extension name string for the current renderer before you issue OpenGL commands.
● Check only once per renderer. After you've determined that the current renderer supports an OpenGL command, you don't need to check for that functionality again for that renderer.
● Make sure that you are aware of whether a feature is being used as part of the core OpenGL API or as an extension. When a feature is implemented both as part of the core OpenGL API and as an extension, it uses different constants and function names.
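One way to honor the "check only once per renderer" guideline is to cache the results in a small structure and refresh it only when the virtual screen changes, as described earlier under "Tracking Renderer Changes." This is a sketch only, not the GLCheck.c implementation; the RendererCaps structure and refreshCapsIfNeeded function are invented names for illustration.

#include <OpenGL/OpenGL.h>
#include <OpenGL/gl.h>
#include <OpenGL/glu.h>

typedef struct {
    GLint     maxTextureSize;
    GLboolean hasFence;
    /* ...other capabilities your application cares about... */
} RendererCaps;

static RendererCaps caps;
static GLint lastVirtualScreen = -1;

/* Call this after window move/resize events and display-change notifications. */
static void refreshCapsIfNeeded(CGLContextObj ctx)
{
    GLint screen = 0;
    CGLGetVirtualScreen(ctx, &screen);
    if (screen == lastVirtualScreen)
        return;                      /* same renderer; the cached values are still valid */
    lastVirtualScreen = screen;

    const GLubyte *strExt = glGetString(GL_EXTENSIONS);
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &caps.maxTextureSize);
    caps.hasFence = gluCheckExtension((const GLubyte *)"GL_APPLE_fence", strExt);
}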
OpenGL Renderer Implementation-Dependent Values

The OpenGL specification defines implementation-dependent values that state the limits of what an OpenGL implementation is capable of. For example, the maximum size of a texture and the number of texture units are both common implementation-dependent values that an application is expected to check. Each of these values has a minimum that all conforming OpenGL implementations are expected to support. If your application's usage exceeds these minimums, it must check the limit first and fail gracefully if the implementation cannot provide the limit desired: your application may need to load smaller textures, disable a rendering feature, or choose a different implementation.

Although the specification provides a comprehensive list of these limitations, a few stand out in most OpenGL applications. Table 8-1 lists values that applications should test if they require more than the minimum values in the specification.

Table 8-1 Common OpenGL renderer limitations

Maximum size of the texture       GL_MAX_TEXTURE_SIZE
Number of depth buffer planes     GL_DEPTH_BITS
Number of stencil buffer planes   GL_STENCIL_BITS

The limit on the size and complexity of your shaders is a key area you need to test. All graphics hardware supports limited memory to pass attributes into the vertex and fragment shaders. Your application must either keep its usage below the minimums defined in the specification, or it must check the shader limitations documented in Table 8-2 and choose shaders that are within those limits.

Table 8-2 OpenGL shader limitations

Maximum number of vertex attributes                           GL_MAX_VERTEX_ATTRIBS
Maximum number of uniform vertex vectors                      GL_MAX_VERTEX_UNIFORM_COMPONENTS
Maximum number of uniform fragment vectors                    GL_MAX_FRAGMENT_UNIFORM_COMPONENTS
Maximum number of varying vectors                             GL_MAX_VARYING_FLOATS
Maximum number of texture units usable in a vertex shader     GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS
Maximum number of texture units usable in a fragment shader   GL_MAX_TEXTURE_IMAGE_UNITS

OpenGL Application Design Strategies

OpenGL performs many complex operations (transformations, lighting, clipping, texturing, environmental effects, and so on) on large data sets. The size of your data and the complexity of the calculations performed on it can impact performance, making your stellar 3D graphics shine less brightly than you'd like. Whether your application is a game using OpenGL to provide immersive real-time images to the user or an image processing application more concerned with image quality, use the information in this chapter to help you design your application.

Visualizing OpenGL

The most common way to visualize OpenGL is as a graphics pipeline, as shown in Figure 9-1 (page 90). Your application sends vertex and image data, configuration and state changes, and rendering commands to OpenGL. Vertices are processed, assembled into primitives, and rasterized into fragments. Each fragment is calculated and merged into the framebuffer. The pipeline model is useful for identifying exactly what work your application must perform to generate the results you want. OpenGL allows you to customize each stage of the graphics pipeline, either through customized shader programs or by configuring a fixed-function pipeline through OpenGL function calls.
In most implementations, each pipeline stage can act in parallel with the others. This is a key point: if any one pipeline stage performs too much work, the other stages sit idle waiting for it to complete. Your design should balance the work performed in each pipeline stage against the capabilities of the renderer. When you tune your application's performance, the first step is usually to determine which stage the application is bottlenecked in, and why.

Figure 9-1 OpenGL graphics pipeline (application; vertex stage: transform and lighting, primitive assembly, clipping; fragment stage: texturing, fog, alpha, stencil, and depth tests; framebuffer blending)

Another way to visualize OpenGL is as a client-server architecture, as shown in Figure 9-2 (page 91). OpenGL state changes, texture and vertex data, and rendering commands must all travel from the application to the OpenGL client. The client transforms these items so that the graphics hardware can understand them and then forwards them to the GPU. Not only do these transformations add overhead, but the bandwidth between the CPU and the graphics hardware is often lower than other parts of the system.

To achieve great performance, an application must reduce the frequency of calls it makes to OpenGL, minimize the transformation overhead, and carefully manage the flow of data between the application and the graphics hardware. For example, OpenGL provides mechanisms that allow some kinds of data to be cached in dedicated graphics memory. Caching reusable data in graphics memory reduces the overhead of transmitting data to the graphics hardware.

Figure 9-2 OpenGL client-server architecture (the application and the OpenGL framework and driver run on the CPU; the OpenGL server runs on the GPU in the graphics hardware)

Designing a High-Performance OpenGL Application

To summarize, a well-designed OpenGL application needs to:

● Exploit parallelism in the OpenGL pipeline.
● Manage data flow between the application and the graphics hardware.

Figure 9-3 shows a suggested process flow for an application that uses OpenGL to perform animation to the display.

Figure 9-3 Application model for managing resources (create static resources at launch; in the render loop, update dynamic resources, execute rendering commands, read back results if needed, and present to the display; free up resources at shutdown)

When the application launches, it creates and initializes any static resources it intends to use in the renderer, encapsulating those resources into OpenGL objects where possible. The goal is to create any object that can remain unchanged for the runtime of the application. This trades increased initialization time for better rendering performance. Ideally, complex commands or batches of state changes should be replaced with OpenGL objects that can be switched in with a single function call. For example, configuring the fixed-function pipeline can take dozens of function calls; replace it with a graphics shader that is compiled at initialization time, and you can switch to a different program with a single function call. In particular, OpenGL objects that are expensive to create or modify should be created as static objects.
The rendering loop processes all of the items you intend to render to the OpenGL context, then swaps the buffers to display the results to the user. In an animated scene, some data needs to be updated for every frame. In the inner rendering loop shown in Figure 9-3, the application alternates between updating rendering resources (possibly creating or modifying OpenGL objects in the process) and submitting rendering commands that use those resources. The goal of this inner loop is to balance the workload so that the CPU and GPU work in parallel, without blocking each other by using the same resources simultaneously.

A further goal for the inner loop is to avoid copying data back from the graphics processor to the CPU. Operations that require the CPU to read results back from the graphics hardware are sometimes necessary, but in general they should be used sparingly. If those results are also used to render the current frame, as shown in the middle rendering loop, this can be very slow: copying data from the GPU to the CPU often requires that some or all previously submitted drawing commands have completed.

After the application submits all drawing commands needed in the frame, it presents the results to the screen. Alternatively, a non-interactive application might read the final image back to the CPU, but this is also slower than presenting results to the screen, so it should be done only for results that must be read back to the application. For example, you might copy the image in the back buffer to save it to disk.

Finally, when your application is ready to shut down, it deletes static and dynamic resources to make more hardware resources available to other applications. If your application is moved to the background, releasing resources to other applications is also good practice.

To summarize the important characteristics of this design:

● Create static resources whenever practical.
● The inner rendering loop alternates between modifying dynamic resources and submitting rendering commands. Enough work should be included in this loop that, by the time the application needs to read or write any OpenGL object, the graphics processor has finished processing any commands that used it.
● Avoid reading intermediate rendering results into the application.

The rest of this chapter provides useful OpenGL programming techniques to implement the features of this rendering loop. Later chapters demonstrate how to apply these general techniques to specific areas of OpenGL programming.

● "Update OpenGL Content Only When Your Data Changes" (page 94)
● "Avoid Synchronizing and Flushing Operations" (page 96)
● "Allow OpenGL to Manage Your Resources" (page 99)
● "Use Optimal Data Types and Formats" (page 102)
● "Use Double Buffering to Avoid Resource Conflicts" (page 100)
● "Be Mindful of OpenGL State Variables" (page 101)
● "Use OpenGL Macros" (page 103)
● "Replace State Changes with OpenGL Objects" (page 102)

Update OpenGL Content Only When Your Data Changes

OpenGL applications should avoid recomputing a scene when the data has not changed. This is especially important on portable devices, where power conservation is critical to maximizing battery life.
You can ensure that your application draws only when necessary by following a few simple guidelines:

● If your application is rendering animation, use a Core Video display link to drive the animation loop. Listing 9-1 (page 94) provides code that allows your application to be notified when a new frame needs to be displayed. This code also synchronizes image updates to the refresh rate of the display. See "Synchronize with the Screen Refresh Rate" (page 96) for more information.
● If your application does not animate its OpenGL content, allow the system to regulate drawing. For example, in Cocoa call the setNeedsDisplay: method when your data changes.
● If your application does not use a Core Video display link, you should still advance an animation only when necessary. To determine when to draw the next frame of an animation, calculate the difference between the current time and the start of the last frame, and use the difference to determine how much to advance the animation. You can use the Core Foundation function CFAbsoluteTimeGetCurrent to obtain the current time.

Listing 9-1 Setting up a Core Video display link

@interface MyView : NSOpenGLView
{
    CVDisplayLinkRef displayLink; // display link for managing rendering thread
}
@end

- (void)prepareOpenGL
{
    // Synchronize buffer swaps with vertical refresh rate
    GLint swapInt = 1;
    [[self openGLContext] setValues:&swapInt forParameter:NSOpenGLCPSwapInterval];

    // Create a display link capable of being used with all active displays
    CVDisplayLinkCreateWithActiveCGDisplays(&displayLink);

    // Set the renderer output callback function
    CVDisplayLinkSetOutputCallback(displayLink, &MyDisplayLinkCallback, self);

    // Set the display link for the current renderer
    CGLContextObj cglContext = [[self openGLContext] CGLContextObj];
    CGLPixelFormatObj cglPixelFormat = [[self pixelFormat] CGLPixelFormatObj];
    CVDisplayLinkSetCurrentCGDisplayFromOpenGLContext(displayLink, cglContext, cglPixelFormat);

    // Activate the display link
    CVDisplayLinkStart(displayLink);
}

// This is the renderer output callback function
static CVReturn MyDisplayLinkCallback(CVDisplayLinkRef displayLink,
                                      const CVTimeStamp* now,
                                      const CVTimeStamp* outputTime,
                                      CVOptionFlags flagsIn,
                                      CVOptionFlags* flagsOut,
                                      void* displayLinkContext)
{
    CVReturn result = [(MyView*)displayLinkContext getFrameForTime:outputTime];
    return result;
}

- (CVReturn)getFrameForTime:(const CVTimeStamp*)outputTime
{
    // Add your drawing code here
    return kCVReturnSuccess;
}

- (void)dealloc
{
    // Release the display link
    CVDisplayLinkRelease(displayLink);
    [super dealloc];
}

Synchronize with the Screen Refresh Rate

Tearing is a visual anomaly caused when part of the current frame overwrites previous frame data in the framebuffer before the current frame is fully rendered on the screen. To avoid tearing, applications use a double-buffered context and synchronize buffer swaps with the screen refresh rate (sometimes called VBL, vertical blank, or vsync).

Note: During development, it's best to disable synchronization so that you can more accurately benchmark your application. Enable synchronization when you are ready to deploy your application.
Synchronize with the Screen Refresh Rate

Tearing is a visual anomaly caused when part of the current frame overwrites previous frame data in the framebuffer before the current frame is fully rendered on the screen. To avoid tearing, applications use a double-buffered context and synchronize buffer swaps with the screen refresh rate (sometimes called VBL, vertical blank, or vsync).

Note: During development, it's best to disable synchronization so that you can more accurately benchmark your application. Enable synchronization when you are ready to deploy your application.

The refresh rate of the display limits how often the screen can be refreshed. The screen can be updated only at integer fractions of that rate. For example, a CRT display that has a refresh rate of 60 Hz can support screen update rates of 60 Hz, 30 Hz, 20 Hz, and 15 Hz. LCD displays do not have a vertical retrace in the CRT sense and are typically considered to have a fixed refresh rate of 60 Hz.

After you tell the context to swap the buffers, OpenGL must defer any rendering commands that follow that swap until after the buffers have successfully been exchanged. Applications that attempt to draw to the screen during this waiting period waste time that could be spent performing other drawing operations or saving battery life and minimizing fan operation.

Listing 9-2 shows how an NSOpenGLView object can synchronize with the screen refresh rate; you can use a similar approach if your application uses CGL contexts. It assumes that you set up the context for double buffering. The swap interval can be set only to 0 or 1. If the swap interval is set to 1, the buffers are swapped only during the vertical retrace.

Listing 9-2 Setting up synchronization

GLint swapInterval = 1;
[[self openGLContext] setValues:&swapInterval forParameter:NSOpenGLCPSwapInterval];
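For a CGL-based application, the corresponding call is CGLSetParameter with the kCGLCPSwapInterval parameter. The following minimal sketch assumes cglContext is a valid CGLContextObj for a double-buffered context.

#include <OpenGL/OpenGL.h>

GLint swapInterval = 1;
CGLSetParameter(cglContext, kCGLCPSwapInterval, &swapInterval);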
Avoid Synchronizing and Flushing Operations

OpenGL is not required to execute most commands immediately. Often, commands are queued to a command buffer and read and executed by the hardware at a later time. Usually, OpenGL waits until the application has queued up a significant number of commands before sending the buffer to the hardware; allowing the graphics hardware to execute commands in batches is often more efficient. However, some OpenGL functions must flush the buffer immediately. Other functions not only flush the buffer, but also block until previously submitted commands have completed before returning control to the application. Your application should restrict the use of flushing and synchronizing commands to those cases where that behavior is necessary. Excessive use of flushing or synchronizing commands adds additional stalls waiting for the hardware to finish rendering. On a single-buffered context, flushing may also cause visual anomalies, such as flickering or tearing.

These situations require OpenGL to submit the command buffer to the hardware for execution:
● The function glFlush waits until commands are submitted but does not wait for the commands to finish executing.
● The function glFinish waits for all previously submitted commands to complete executing.
● Functions that retrieve OpenGL state (for example, glGetError) also wait for submitted commands to complete.
● Buffer swapping routines (the flushBuffer method of the NSOpenGLContext class or the CGLFlushDrawable function) implicitly call glFlush. Note that when using the NSOpenGLContext class or the CGL API, the term flush actually refers to a buffer-swapping operation. For single-buffered contexts, glFlush and glFinish are equivalent to a swap operation, since all rendering is taking place directly in the front buffer.
● The command buffer is full.

Using glFlush Effectively

Most of the time you don't need to call glFlush to move image data to the screen. There are only a few cases that require you to call the glFlush function:
● Your application submits rendering commands that use a particular OpenGL object, and it intends to modify that object in the near future. If you attempt to modify an OpenGL object that has pending drawing commands, your application may be forced to wait until those commands have been completed. In this situation, calling glFlush ensures that the hardware begins processing commands immediately. After flushing the command buffer, your application should perform work that does not need that resource. It can perform other work (even modifying other OpenGL objects).
● Your application needs to change the drawable object associated with the rendering context. Before you can switch to another drawable object, you must call glFlush to ensure that all commands written in the command queue for the previous drawable object have been submitted.
● Two contexts share an OpenGL object. After submitting any OpenGL commands, call glFlush before switching to the other context.
● To keep drawing synchronized across multiple threads and prevent command buffer corruption, each thread should submit its rendering commands and then call glFlush.

Avoid Querying OpenGL State

Calls to glGet*(), including glGetError(), may require OpenGL to execute previous commands before retrieving any state variables. This synchronization forces the graphics hardware to run in lockstep with the CPU, reducing opportunities for parallelism. Your application should keep shadow copies of any OpenGL state that you need to query, and maintain these shadow copies as you change the state.

When errors occur, OpenGL sets an error flag that you can retrieve with the function glGetError. During development, it's crucial that your code contains error checking routines, not only for the standard OpenGL calls, but for the Apple-specific functions provided by the CGL API. If you are developing a performance-critical application, retrieve error information only in the debugging phase. Calling glGetError excessively in a release build degrades performance.
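One simple way to shadow state is to wrap the calls that change it so that your application records the last value it set and talks to OpenGL only when the value actually changes. This is just a sketch of the idea, not an API provided by OpenGL:

// Shadow a single piece of OpenGL state (blending, in this case) instead of querying it.
static GLboolean blendingEnabled = GL_FALSE;

static void SetBlendingEnabled(GLboolean enable)
{
    if (enable != blendingEnabled) {    // touch OpenGL only when the value changes
        if (enable)
            glEnable(GL_BLEND);
        else
            glDisable(GL_BLEND);
        blendingEnabled = enable;       // keep the shadow copy up to date
    }
}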
Use Fences for Finer-Grained Synchronization

Avoid using glFinish in your application, because it waits until all previously submitted commands are completed before returning control to your application. Instead, you should use the fence extension (APPLE_fence). This extension was created to provide the level of granularity that is not provided by glFinish. A fence is a token used to mark the current point in the command stream. When used correctly, it allows you to ensure that a specific series of commands has been completed. A fence helps coordinate activity between the CPU and the GPU when they are using the same resources.

Follow these steps to set up and use a fence:
1. At initialization time, create the fence object by calling the function glGenFencesAPPLE.
GLuint myFence;
glGenFencesAPPLE(1, &myFence);
2. Call the OpenGL functions that must complete prior to the fence.
3. Set up the fence by calling the function glSetFenceAPPLE. This function inserts a token into the command stream and sets the fence state to false.
void glSetFenceAPPLE(GLuint fence);
fence specifies the token to insert. For example:
glSetFenceAPPLE(myFence);
4. Call glFlush to force the commands to be sent to the hardware. This step is optional, but recommended to ensure that the hardware begins processing OpenGL commands.
5. Perform other work in your application.
6. Wait for all OpenGL commands issued prior to the fence to complete by calling the function glFinishFenceAPPLE.
glFinishFenceAPPLE(myFence);
As an alternative to calling glFinishFenceAPPLE, you can call glTestFenceAPPLE to determine whether the fence has been reached. The advantage of testing the fence is that your application does not block waiting for the fence to complete. This is useful if your application can continue processing other work while waiting for the fence to trigger.
glTestFenceAPPLE(myFence);
7. When your application no longer needs the fence, delete it by calling the function glDeleteFencesAPPLE.
glDeleteFencesAPPLE(1, &myFence);

There is an art to determining where to insert a fence in the command stream. If you insert a fence for too few drawing commands, you risk having your application stall while it waits for drawing to complete. You'll want to set a fence so your application operates as asynchronously as possible without stalling.

The fence extension also lets you synchronize buffer updates for objects such as vertex arrays and textures. For that you call the function glFinishObjectAPPLE, supplying an object name along with the token. For detailed information on this extension, see the OpenGL specification for the Apple fence extension.

Allow OpenGL to Manage Your Resources

OpenGL allows many data types to be stored persistently inside OpenGL. Creating OpenGL objects to store vertex, texture, or other forms of data allows OpenGL to reduce the overhead of transforming the data and sending it to the graphics processor. If data is used more frequently than it is modified, OpenGL can substantially improve the performance of your application.

OpenGL allows your application to hint how it intends to use the data. These hints allow OpenGL to make an informed choice of how to process your data. For example, static data might be placed in high-speed graphics memory directly connected to the graphics processor. Data that changes frequently might be kept in main memory and accessed by the graphics hardware through DMA.

Use Double Buffering to Avoid Resource Conflicts

Resource conflicts occur when your application and OpenGL want to access a resource at the same time. When one participant attempts to modify an OpenGL object being used by the other, one of two problems results:
● The participant that wants to modify the object blocks until it is no longer in use. Then the other participant is not allowed to read from or write to the object until the modifications are complete. This is safe, but these can be hidden bottlenecks in your application.
● Some extensions allow OpenGL to access application memory that can be simultaneously accessed by the application. In this situation, synchronizing between the two participants is left to the application to manage. Your application calls glFlush to force OpenGL to execute commands and uses a fence or glFinish to ensure that no commands that access that memory are pending.

Whether your application relies on OpenGL to synchronize access to a resource, or it manually synchronizes access, resource contention forces one of the participants to wait, rather than allowing them both to execute in parallel.
Figure 9-4 demonstrates this problem. There is only a single buffer for vertex data, which both the application and OpenGL want to use, so the application must wait until the GPU finishes processing commands before it modifies the data.

Figure 9-4 Single-buffered vertex array data

To solve this problem, your application could fill this idle time with other processing, even other OpenGL processing that does not need the objects in question. If you need to process more OpenGL commands, the solution is to create two of the same resource type and let each participant access a resource. Figure 9-5 illustrates the double-buffered approach. While the GPU operates on one set of vertex array data, the CPU is modifying the other. After the initial startup, neither processing unit is idle. This example uses a fence to ensure that access to each buffer is synchronized.

Figure 9-5 Double-buffered vertex array data

Double buffering is sufficient for most applications, but it requires that both participants finish processing their commands before a swap can occur. For a traditional producer-consumer problem, more than two buffers may prevent a participant from blocking. With triple buffering, the producer and consumer each have a buffer, with a third idle buffer. If the producer finishes before the consumer finishes processing commands, it takes the idle buffer and continues to process commands. In this situation, the producer idles only if the consumer falls badly behind.
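A minimal sketch of the double-buffered pattern in Figure 9-5 follows, using the APPLE_fence extension with one fence per buffer to track when the GPU has finished with that buffer. FillVertexArray, DrawWithVertexArray, and ApplicationShouldQuit are hypothetical application functions, not OpenGL calls.

GLuint fences[2];
glGenFencesAPPLE(2, fences);
glSetFenceAPPLE(fences[0]);
glSetFenceAPPLE(fences[1]);

int current = 0;
while (!ApplicationShouldQuit()) {
    glFinishFenceAPPLE(fences[current]);   // wait until the GPU is done with this buffer
    FillVertexArray(current);              // CPU writes new vertex data into it
    DrawWithVertexArray(current);          // submit commands that read that data
    glSetFenceAPPLE(fences[current]);      // mark the end of the commands that use it
    glFlush();                             // hand the commands to the hardware
    current = 1 - current;                 // alternate buffers each frame
}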
Be Mindful of OpenGL State Variables

The hardware has one current state, which is compiled and cached. Switching state is expensive, so it's best to design your application to minimize state switches.

Don't set a state that's already set. Once a feature is enabled, it does not need to be enabled again. Calling an enable function more than once does nothing except waste time, because OpenGL does not check the state of a feature when you call glEnable or glDisable. For instance, if you call glEnable(GL_LIGHTING) more than once, OpenGL does not check to see if the lighting state is already enabled. It simply updates the state value even if that value is identical to the current value.

You can avoid setting a state more than necessary by using dedicated setup or shutdown routines rather than putting such calls in a drawing loop. Setup and shutdown routines are also useful for turning on and off features that achieve a specific visual effect, for example, when drawing a wire-frame outline around a textured polygon.

If you are drawing 2D images, disable all irrelevant state variables, similar to what's shown in Listing 9-3.

Listing 9-3 Disabling state variables

glDisable(GL_DITHER);
glDisable(GL_ALPHA_TEST);
glDisable(GL_BLEND);
glDisable(GL_STENCIL_TEST);
glDisable(GL_FOG);
glDisable(GL_TEXTURE_2D);
glDisable(GL_DEPTH_TEST);
glPixelZoom(1.0, 1.0);
// Disable other state variables as appropriate.

Replace State Changes with OpenGL Objects

The “Be Mindful of OpenGL State Variables” (page 101) section suggests that reducing the number of state changes can improve performance. Some OpenGL extensions also allow you to create objects that collect multiple OpenGL state changes into an object that can be bound with a single function call. Where such techniques are available, they are recommended. For example, configuring the fixed-function pipeline requires many function calls to change the state of the various operators. Not only does this incur overhead for each function called, but the code is more complex and difficult to manage. Instead, use a shader. A shader, once compiled, can have the same effect but requires only a single call to glUseProgram. Other examples of objects that take the place of multiple state changes include the “Vertex Array Range Extension” (page 113) and “Uniform Buffers” (page 143).

Use Optimal Data Types and Formats

If you don't use data types and formats that are native to the graphics hardware, OpenGL must convert those data types into a format that the graphics hardware understands.

For vertex data, use GLfloat, GLshort, or GLubyte data types. Most graphics hardware handles these types natively.

For texture data, you'll get the best performance if you use the following format and data type combination:
GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV

These format and data type combinations also provide acceptable performance:
GL_BGRA, GL_UNSIGNED_SHORT_1_5_5_5_REV
GL_YCBCR_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE

The combination of GL_RGBA and GL_UNSIGNED_BYTE needs to be swizzled by many cards when the data is loaded, so it's not recommended.

Use OpenGL Macros

OpenGL performs a global context and renderer lookup for each command it executes to ensure that all OpenGL commands are issued to the correct rendering context and renderer. There is significant overhead associated with these lookups; applications that have extremely high call frequencies may find that the overhead measurably affects performance. OS X allows your application to use macros to provide a local context variable and cache the current renderer in that variable. You get more benefit from using macros when your code makes millions of function calls per second.

Before implementing this technique, consider carefully whether you can redesign your application to perform fewer function calls. Frequently changing OpenGL state, pushing or popping matrices, or even submitting one vertex at a time are all examples of techniques that should be replaced with more efficient operations.

You can use the CGL macro header (CGL/CGLMacro.h) if your application uses CGL from a Cocoa application. You must define the local variable cgl_ctx to be equal to the current context. Listing 9-4 shows what's needed to set up macro use for the CGL API. First, you need to include the correct macro header. Then, you must set the current context.
Listing 9-4 Using CGL macros

#include <CGL/CGLMacro.h>        // include the header
CGL_MACRO_DECLARE_VARIABLES      // set the current context

glBegin(GL_QUADS);               // This code now uses the macro
// draw here
glEnd();

Best Practices for Working with Vertex Data

Complex shapes and detailed 3D models require large amounts of vertex data to describe them in OpenGL. Moving vertex data from your application to the graphics hardware incurs a performance cost that can be quite large depending on the size of the data set.

Figure 10-1 Vertex data sets can be quite large

Applications that use large vertex data sets can adopt one or more of the strategies described in “OpenGL Application Design Strategies” (page 89) to optimize how vertex data is delivered to OpenGL. This chapter expands on those best practices with specific techniques for working with vertex data.

Understand How Vertex Data Flows Through OpenGL

Understanding how vertex data flows through OpenGL is important to choosing strategies for handling the data. Vertex data enters the vertex stage, where it is processed by either the built-in fixed-function vertex stage or a custom vertex shader.

Figure 10-2 Vertex data path

Figure 10-3 takes a closer look at the vertex data path when using immediate mode. Without any optimizations, your vertex data may be copied at various points in the data path. If your application uses immediate mode to submit each vertex separately, calls to OpenGL first modify the current vertex, which is copied into the command buffer whenever your application makes a glVertex* call. This is expensive not only in terms of copy operations, but also in the function overhead needed to specify each vertex.

Figure 10-3 Immediate mode requires a copy of the current vertex data

The OpenGL commands glDrawRangeElements, glDrawElements, and glDrawArrays render multiple geometric primitives from array data, using very few subroutine calls. Listing 10-1 shows a typical implementation. Your application creates a vertex structure that holds all the elements for each vertex. For each element, you enable a client array and provide a pointer and offset to OpenGL so that it knows how to find those elements.

Listing 10-1 Submitting vertex data using glDrawElements

typedef struct _vertexStruct
{
    GLfloat position[2];
    GLubyte color[4];
} vertexStruct;

void DrawGeometry()
{
    const vertexStruct vertices[] = {...};
    const GLubyte indices[] = {...};

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, sizeof(vertexStruct), &vertices[0].position);
    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(vertexStruct), &vertices[0].color);

    glDrawElements(GL_TRIANGLE_STRIP, sizeof(indices)/sizeof(GLubyte), GL_UNSIGNED_BYTE, indices);
}

Each time you call glDrawElements, OpenGL must copy all of the vertex data into the command buffer, which is later copied to the hardware. The copy overhead is still expensive.
Techniques for Handling Vertex Data

Avoiding unnecessary copies of your vertex data is critical to application performance. This section summarizes common techniques for managing your vertex data using either built-in functionality or OpenGL extensions. Before using these techniques, you must ensure that the necessary functions are available to your application. See “Detecting Functionality” (page 83).
● Avoid the use of glBegin and glEnd to specify your vertex data. The function and copying overhead makes this path useful only for very small data sets. Also, applications written with glBegin and glEnd are not portable to OpenGL ES on iOS.
● Minimize data type conversions by supplying OpenGL data types for vertex data. Use GLfloat, GLshort, or GLubyte data types because most graphics processors handle these types natively. If you use some other type, then OpenGL may need to perform a costly data conversion.
● The preferred way to manage your vertex data is with vertex buffer objects. Vertex buffer objects are buffers owned by OpenGL that hold your vertex information. These buffers allow OpenGL to place your vertex data into memory that is accessible to the graphics hardware. See “Vertex Buffers” (page 107) for more information.
● If vertex buffer objects are not available, your application can search for the GL_APPLE_vertex_array_range and APPLE_fence extensions. Vertex array ranges allow you to prevent OpenGL from copying your vertex data into the command buffer. Instead, your application must avoid modifying or deleting the vertex data until OpenGL finishes executing drawing commands. This solution requires more effort from the application, and is not compatible with other platforms, including iOS. See “Vertex Array Range Extension” (page 113) for more information.
● Complex vertex operations require many array pointers to be enabled and set before you call glDrawElements. The GL_APPLE_vertex_array_object extension allows your application to consolidate a group of array pointers into a single object. Your application switches multiple pointers by binding a single vertex array object, reducing the overhead of changing state. See “Vertex Array Object” (page 116).
● Use double buffering to reduce resource contention between your application and OpenGL. See “Use Double Buffering to Avoid Resource Conflicts” (page 100).
● If you need to compute new vertex information between frames, consider using vertex shaders and buffer objects to perform and store the calculations.

Vertex Buffers

Vertex buffers are available as a core feature starting in OpenGL 1.5, and on earlier versions of OpenGL through the vertex buffer object extension (GL_ARB_vertex_buffer_object). Vertex buffers are used to improve the throughput of static or dynamic vertex data in your application.

A buffer object is a chunk of memory owned by OpenGL. Your application reads from or writes to the buffer using OpenGL calls such as glBufferData, glBufferSubData, and glGetBufferSubData. Your application can also gain a pointer to this memory, an operation referred to as mapping a buffer. OpenGL prevents your application and itself from simultaneously using the data stored in the buffer.
When your application maps a buffer or attempts to modify it, OpenGL may block until previous drawing commands have completed.

Using Vertex Buffers

You can set up and use vertex buffers by following these steps:
1. Call the function glGenBuffers to create a new name for a buffer object.
void glGenBuffers(GLsizei n, GLuint *buffers);
n is the number of buffers you wish to create identifiers for. buffers specifies a pointer to memory to store the buffer names.
2. Call the function glBindBuffer to bind an unused name to a buffer object. After this call, the newly created buffer object is initialized with a memory buffer of size zero and a default state. (For the default setting, see the OpenGL specification for ARB_vertex_buffer_object.)
void glBindBuffer(GLenum target, GLuint buffer);
target must be set to GL_ARRAY_BUFFER. buffer specifies the unique name for the buffer object.
3. Fill the buffer object by calling the function glBufferData. Essentially, this call uploads your data to the GPU.
void glBufferData(GLenum target, GLsizeiptr size, const GLvoid *data, GLenum usage);
target must be set to GL_ARRAY_BUFFER. size specifies the size of the data store. data points to the source data. If this is not NULL, the source data is copied to the data store of the buffer object. If NULL, the contents of the data store are undefined. usage is a constant that provides a hint as to how your application plans to use the data stored in the buffer object. These examples use GL_STREAM_DRAW, which indicates that the application plans to both modify and draw using the buffer, and GL_STATIC_DRAW, which indicates that the application will define the data once but use it to draw many times. For more details on buffer hints, see “Buffer Usage Hints” (page 110).
4. Enable the vertex array by calling glEnableClientState and supplying the GL_VERTEX_ARRAY constant.
5. Point to the contents of the vertex buffer object by calling a function such as glVertexPointer. Instead of providing a pointer, you provide an offset into the vertex buffer object.
6. To update the data in the buffer object, your application calls glMapBuffer. Mapping the buffer prevents the GPU from operating on the data, and gives your application a pointer to memory it can use to update the buffer.
void *glMapBuffer(GLenum target, GLenum access);
target must be set to GL_ARRAY_BUFFER. access indicates the operations you plan to perform on the data. You can supply GL_READ_ONLY, GL_WRITE_ONLY, or GL_READ_WRITE.
7. Write vertex data to the pointer received from the call to glMapBuffer.
8. When your application has finished modifying the buffer contents, call the function glUnmapBuffer. You must supply GL_ARRAY_BUFFER as the parameter to this function. Once the buffer is unmapped, the pointer is no longer valid, and the buffer's contents are uploaded again to the GPU.

Listing 10-2 shows code that uses the vertex buffer object extension for dynamic data. This example overwrites all of the vertex data during every draw operation.
Listing 10-2 Using the vertex buffer object extension with dynamic data

// To set up the vertex buffer object extension
#define BUFFER_OFFSET(i) ((char*)NULL + (i))
glBindBuffer(GL_ARRAY_BUFFER, myBufferName);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, stride, BUFFER_OFFSET(0));

// When you want to draw using the vertex data
draw_loop {
    glBufferData(GL_ARRAY_BUFFER, bufferSize, NULL, GL_STREAM_DRAW);
    my_vertex_pointer = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
    GenerateMyDynamicVertexData(my_vertex_pointer);
    glUnmapBuffer(GL_ARRAY_BUFFER);
    PerformDrawing();
}

Listing 10-3 shows code that uses the vertex buffer object extension with static data.

Listing 10-3 Using the vertex buffer object extension with static data

// To set up the vertex buffer object extension
#define BUFFER_OFFSET(i) ((char*)NULL + (i))
glBindBuffer(GL_ARRAY_BUFFER, myBufferName);
glBufferData(GL_ARRAY_BUFFER, bufferSize, NULL, GL_STATIC_DRAW);
GLvoid* my_vertex_pointer = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
GenerateMyStaticVertexData(my_vertex_pointer);
glUnmapBuffer(GL_ARRAY_BUFFER);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, stride, BUFFER_OFFSET(0));

// When you want to draw using the vertex data
draw_loop {
    PerformDrawing();
}

Buffer Usage Hints

A key advantage of buffer objects is that the application can provide information on how it uses the data stored in each buffer. For example, Listing 10-2 and Listing 10-3 differentiated between the case where the data is expected to never change (GL_STATIC_DRAW) and the case where the buffer data changes regularly (GL_STREAM_DRAW). The usage parameter allows an OpenGL renderer to alter its strategy for allocating the vertex buffer to improve performance. For example, static buffers may be allocated directly in GPU memory, while dynamic buffers may be stored in main memory and retrieved by the GPU via DMA.

If OpenGL ES compatibility is useful to you, you should limit your usage hints to one of three usage cases:
● GL_STATIC_DRAW should be used for vertex data that is specified once and never changed. Your application should create these vertex buffers during initialization and use them repeatedly until your application shuts down.
● GL_DYNAMIC_DRAW should be used when the buffer is expected to change after it is created. Your application should still allocate these buffers during initialization and periodically update them by mapping the buffer.
● GL_STREAM_DRAW is used when your application needs to create transient geometry that is rendered and then discarded. This is most useful when your application must dynamically change vertex data every frame in a way that cannot be performed in a vertex shader. To use a stream vertex buffer, your application initially fills the buffer using glBufferData, then alternates between drawing using the buffer and modifying the buffer.

Other usage constants are detailed in the vertex buffer specification. If different elements in your vertex format have different usage characteristics, you may want to split the elements into one structure for each usage pattern and allocate a vertex buffer for each. Listing 10-4 shows how to implement this.
In this example, position data is expected to be the same in each frame, while color data may be animated in every frame.

Listing 10-4 Geometry with different usage patterns

typedef struct _vertexStatic
{
    GLfloat position[2];
} vertexStatic;

typedef struct _vertexDynamic
{
    GLubyte color[4];
} vertexDynamic;

// Separate buffers for static and dynamic data.
GLuint staticBuffer;
GLuint dynamicBuffer;
GLuint indexBuffer;

const vertexStatic staticVertexData[] = {...};
vertexDynamic dynamicVertexData[] = {...};
const GLubyte indices[] = {...};

void CreateBuffers()
{
    glGenBuffers(1, &staticBuffer);
    glGenBuffers(1, &dynamicBuffer);
    glGenBuffers(1, &indexBuffer);

    // Static position data
    glBindBuffer(GL_ARRAY_BUFFER, staticBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(staticVertexData), staticVertexData, GL_STATIC_DRAW);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

    // Dynamic color data
    // While not shown here, the expectation is that the data in this buffer changes between frames.
    glBindBuffer(GL_ARRAY_BUFFER, dynamicBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(dynamicVertexData), dynamicVertexData, GL_DYNAMIC_DRAW);
}

void DrawUsingVertexBuffers()
{
    glBindBuffer(GL_ARRAY_BUFFER, staticBuffer);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, sizeof(vertexStatic), (void*)offsetof(vertexStatic, position));

    glBindBuffer(GL_ARRAY_BUFFER, dynamicBuffer);
    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(vertexDynamic), (void*)offsetof(vertexDynamic, color));

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
    glDrawElements(GL_TRIANGLE_STRIP, sizeof(indices)/sizeof(GLubyte), GL_UNSIGNED_BYTE, (void*)0);
}

Flush Buffer Range Extension

When your application unmaps a vertex buffer, the OpenGL implementation may copy the full contents of the buffer to the graphics hardware. If your application changes only a subset of a large buffer, this is inefficient. The APPLE_flush_buffer_range extension allows your application to tell OpenGL exactly which portions of the buffer were modified, allowing it to send only the changed data to the graphics hardware.

To use the flush buffer range extension, follow these steps:
1. Turn on the flush buffer extension by calling glBufferParameteriAPPLE.
glBufferParameteriAPPLE(GL_ARRAY_BUFFER, GL_BUFFER_FLUSHING_UNMAP_APPLE, GL_FALSE);
This disables the normal flushing behavior of OpenGL.
2. Before you unmap a buffer, call glFlushMappedBufferRangeAPPLE for each range of the buffer that was modified by the application.
void glFlushMappedBufferRangeAPPLE(GLenum target, GLintptr offset, GLsizeiptr size);
target is the type of buffer being modified; for vertex data it's GL_ARRAY_BUFFER. offset is the offset into the buffer for the modified data. size is the length of the modified data, in bytes.
3. Call glUnmapBuffer. OpenGL unmaps the buffer, but it is required to update only the portions of the buffer your application explicitly marked as changed.

For more information see the APPLE_flush_buffer_range specification.
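Putting these steps together, a flush-buffer-range update might look like the following sketch. It assumes myBufferName is an existing vertex buffer object, that only modifiedSize bytes starting at modifiedOffset were changed, and that UpdatePartOfVertexData is a hypothetical application helper.

glBindBuffer(GL_ARRAY_BUFFER, myBufferName);
glBufferParameteriAPPLE(GL_ARRAY_BUFFER, GL_BUFFER_FLUSHING_UNMAP_APPLE, GL_FALSE);

GLvoid *ptr = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
UpdatePartOfVertexData(ptr, modifiedOffset, modifiedSize);                      // modify only part of the buffer
glFlushMappedBufferRangeAPPLE(GL_ARRAY_BUFFER, modifiedOffset, modifiedSize);   // mark the changed range
glUnmapBuffer(GL_ARRAY_BUFFER);                                                 // only the marked range must be updated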
Vertex Array Range Extension

The vertex array range extension (APPLE_vertex_array_range) lets you define a region of memory for your vertex data. The OpenGL driver can optimize memory usage by creating a single memory mapping for your vertex data. You can also provide a hint as to how the data should be stored: cached or shared. The cached option specifies that the vertex data should be cached in video memory. The shared option indicates that data should be mapped into a region of memory that allows the GPU to access the vertex data directly using DMA transfer. This option is best for dynamic data. If you use shared memory, you'll need to double buffer your data.

You can set up and use the vertex array range extension by following these steps:
1. Enable the extension by calling glEnableClientState and supplying the GL_VERTEX_ARRAY_RANGE_APPLE constant.
2. Allocate storage for the vertex data. You are responsible for maintaining storage for the data.
3. Define an array of vertex data by calling a function such as glVertexPointer. You need to supply a pointer to your data.
4. Optionally set up a hint about handling the storage of the array data by calling the function glVertexArrayParameteriAPPLE.
GLvoid glVertexArrayParameteriAPPLE(GLenum pname, GLint param);
pname must be GL_VERTEX_ARRAY_STORAGE_HINT_APPLE. param is a hint that specifies how your application expects to use the data. OpenGL uses this hint to optimize performance. You can supply either GL_STORAGE_SHARED_APPLE or GL_STORAGE_CACHED_APPLE. The default value is GL_STORAGE_SHARED_APPLE, which indicates that the vertex data is dynamic and that OpenGL should use optimization and flushing techniques suitable for this kind of data. If you expect the supplied data to be static, use GL_STORAGE_CACHED_APPLE so that OpenGL can optimize appropriately.
5. Call the OpenGL function glVertexArrayRangeAPPLE to establish the data set.
void glVertexArrayRangeAPPLE(GLsizei length, GLvoid *pointer);
length specifies the length of the vertex array range, typically in bytes. pointer points to the base of the vertex array range.
6. Draw with the vertex data using standard OpenGL vertex array commands.
7. If you need to modify the vertex data, set a fence object after you've submitted all the drawing commands. See “Use Fences for Finer-Grained Synchronization” (page 98).
8. Perform other work so that the GPU has time to process the drawing commands that use the vertex array.
9. Call glFinishFenceAPPLE to gain access to the vertex array.
10. Modify the data in the vertex array.
11. Call glFlushVertexArrayRangeAPPLE.
void glFlushVertexArrayRangeAPPLE(GLsizei length, GLvoid *pointer);
length specifies the length of the vertex array range, in bytes. pointer points to the base of the vertex array range.

For dynamic data, each time you change the data, you need to keep it synchronized by calling glFlushVertexArrayRangeAPPLE. You supply as parameters an array size and a pointer to an array, which can be a subset of the data, as long as it includes all of the data that changed. Contrary to the name of the function, glFlushVertexArrayRangeAPPLE doesn't actually flush data like the OpenGL function glFlush does. It simply makes OpenGL aware that the data has changed.

Listing 10-5 shows code that sets up and uses the vertex array range extension with dynamic data.
It overwrites all of the vertex data during each iteration through the drawing loop. The call to the glFinishFenceAPPLE command guarantees that the CPU and the GPU don't access the data at the same time. Although this example calls the glFinishFenceAPPLE function almost immediately after setting the fence, in reality you need to separate these calls to allow parallel operation of the GPU and CPU. To see how that's done, read “Use Double Buffering to Avoid Resource Conflicts” (page 100).

Listing 10-5 Using the vertex array range extension with dynamic data

// To set up the vertex array range extension
glVertexArrayParameteriAPPLE(GL_VERTEX_ARRAY_STORAGE_HINT_APPLE, GL_STORAGE_SHARED_APPLE);
glVertexArrayRangeAPPLE(buffer_size, my_vertex_pointer);
glEnableClientState(GL_VERTEX_ARRAY_RANGE_APPLE);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, my_vertex_pointer);
glSetFenceAPPLE(my_fence);

// When you want to draw using the vertex data
draw_loop {
    glFinishFenceAPPLE(my_fence);
    GenerateMyDynamicVertexData(my_vertex_pointer);
    glFlushVertexArrayRangeAPPLE(buffer_size, my_vertex_pointer);
    PerformDrawing();
    glSetFenceAPPLE(my_fence);
}

Listing 10-6 shows code that uses the vertex array range extension with static data. Unlike the setup for dynamic data, the setup for static data includes using the hint for cached data. Because the data is static, it's unnecessary to set a fence.

Listing 10-6 Using the vertex array range extension with static data

// To set up the vertex array range extension
GenerateMyStaticVertexData(my_vertex_pointer);
glVertexArrayParameteriAPPLE(GL_VERTEX_ARRAY_STORAGE_HINT_APPLE, GL_STORAGE_CACHED_APPLE);
glVertexArrayRangeAPPLE(array_size, my_vertex_pointer);
glEnableClientState(GL_VERTEX_ARRAY_RANGE_APPLE);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, stride, my_vertex_pointer);

// When you want to draw using the vertex data
draw_loop {
    PerformDrawing();
}

For detailed information on this extension, see the OpenGL specification for the vertex array range extension.

Vertex Array Object

Look at the DrawUsingVertexBuffers function in Listing 10-4 (page 111). It configures buffer pointers for position, color, and indexing before calling glDrawElements. A more complex vertex structure may require additional buffer pointers to be enabled and changed before you can finally draw your geometry. If your application swaps frequently between multiple configurations of elements, changing these parameters adds significant overhead to your application. The APPLE_vertex_array_object extension allows you to combine a collection of buffer pointers into a single OpenGL object, allowing you to change all the buffer pointers by binding a different vertex array object.

To use this extension, follow these steps during your application's initialization routines:
1. Generate a vertex array object for a configuration of pointers you wish to use together.
void glGenVertexArraysAPPLE(GLsizei n, GLuint *arrays);
n is the number of arrays you wish to create identifiers for. arrays specifies a pointer to memory to store the array names.
glGenVertexArraysAPPLE(1, &myArrayObject);
2. Bind the vertex array object you want to configure.
void glBindVertexArrayAPPLE(GLuint array);
array is the identifier for an array that you received from glGenVertexArraysAPPLE.
glBindVertexArrayAPPLE(myArrayObject);
3. Call the pointer routines (glColorPointer and so forth) that you would normally call inside your rendering loop. When a vertex array object is bound, these calls change the currently bound vertex array object instead of the default OpenGL state.
glBindBuffer(GL_ARRAY_BUFFER, staticBuffer);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, sizeof(vertexStatic), (void*)offsetof(vertexStatic, position));
...
4. Repeat the previous steps for each configuration of vertex pointers.
5. Inside your rendering loop, replace the calls to configure the array pointers with a call to bind the vertex array object.
glBindVertexArrayAPPLE(myArrayObject);
glDrawArrays(...);
6. If you need to get back to the default OpenGL behavior, call glBindVertexArrayAPPLE and pass in 0.
glBindVertexArrayAPPLE(0);
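The following sketch ties these steps together using the buffers and vertex structures from Listing 10-4. The indexCount variable is a hypothetical placeholder for the number of indices to draw; the element array buffer is bound at draw time rather than relying on it being captured by the vertex array object.

GLuint myArrayObject;

void CreateVertexArrayObject()
{
    glGenVertexArraysAPPLE(1, &myArrayObject);
    glBindVertexArrayAPPLE(myArrayObject);

    glBindBuffer(GL_ARRAY_BUFFER, staticBuffer);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, sizeof(vertexStatic), (void*)offsetof(vertexStatic, position));

    glBindBuffer(GL_ARRAY_BUFFER, dynamicBuffer);
    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(vertexDynamic), (void*)offsetof(vertexDynamic, color));

    glBindVertexArrayAPPLE(0);                   // restore the default vertex array state
}

void DrawUsingVertexArrayObject()
{
    glBindVertexArrayAPPLE(myArrayObject);       // one call replaces all the pointer setup above
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
    glDrawElements(GL_TRIANGLE_STRIP, indexCount, GL_UNSIGNED_BYTE, (void*)0);
    glBindVertexArrayAPPLE(0);
}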
Best Practices for Working with Texture Data

Textures add realism to OpenGL objects. They help objects defined by vertex data take on the material properties of real-world objects, such as wood, brick, metal, and fur. Texture data can originate from many sources, including images. Many of the same techniques your application uses on vertex data can also be used to improve texture performance.

Figure 11-1 Textures add realism to a scene

Textures start as pixel data that flows through an OpenGL program, as shown in Figure 11-2.

Figure 11-2 Texture data path

The precise route that texture data takes from your application to its final destination can impact the performance of your application. The purpose of this chapter is to provide techniques you can use to ensure optimal processing of texture data in your application. This chapter
● shows how to use OpenGL extensions to optimize performance
● lists optimal data formats and types
● provides information on working with textures whose dimensions are not a power of two
● describes creating textures from image data
● shows how to download textures
● discusses using double buffers for texture data

Using Extensions to Improve Texture Performance

Without any optimizations, texture data flows through an OpenGL program as shown in Figure 11-3. Data from your application first goes to the OpenGL framework, which may make a copy of the data before handing it to the driver. If your data is not in a native format for the hardware (see “Optimal Data Formats and Types” (page 128)), the driver may also make a copy of the data to convert it to a hardware-specific format for uploading to video memory. Video memory, in turn, can keep a copy of the data. Theoretically, there could be four copies of your texture data throughout the system.

Figure 11-3 Data copies in an OpenGL program

Data flows at different rates through the system, as shown by the size of the arrows in Figure 11-3. The fastest data transfer happens between VRAM and the GPU. The slowest transfer occurs between the OpenGL driver and VRAM. Data moves between the application and the OpenGL framework, and between the framework and the driver, at the same "medium" rate. Eliminating any of the data transfers, but the slowest one in particular, will improve application performance.

There are several extensions you can use to eliminate one or more data copies and control how texture data travels from your application to the GPU:
● GL_ARB_pixel_buffer_object allows your application to use OpenGL buffer objects to manage texture and image data. As with vertex buffer objects, they allow your application to hint how a buffer is used and to decide when data is copied to OpenGL.
● GL_APPLE_client_storage allows you to prevent OpenGL from copying your texture data into the client. Instead, OpenGL keeps the memory pointer you provided when creating the texture. Your application must keep the texture data at that location until the referencing OpenGL texture is deleted.
● GL_APPLE_texture_range, along with a storage hint, either GL_STORAGE_CACHED_APPLE or GL_STORAGE_SHARED_APPLE, allows you to specify a single block of texture memory and manage it as you see fit.
● GL_ARB_texture_rectangle provides support for non-power-of-two textures.

Here are some recommendations:
● If your application requires optimal texture upload performance, use GL_APPLE_client_storage and GL_APPLE_texture_range together to manage your textures.
● If your application requires optimal texture download performance, use pixel buffer objects.
● If your application requires cross-platform techniques, use pixel buffer objects for both texture uploads and texture downloads.
● Use GL_ARB_texture_rectangle when your source images do not have power-of-two dimensions.

The sections that follow describe the extensions and show how to use them.

Pixel Buffer Objects

Pixel buffer objects are a core feature of OpenGL 2.1 and are also available through the GL_ARB_pixel_buffer_object extension. The procedure for setting up a pixel buffer object is almost identical to that of vertex buffer objects.

Using Pixel Buffer Objects to Efficiently Load Textures

1. Call the function glGenBuffers to create a new name for a buffer object.
void glGenBuffers(GLsizei n, GLuint *buffers);
n is the number of buffers you wish to create identifiers for. buffers specifies a pointer to memory to store the buffer names.
2. Call the function glBindBuffer to bind an unused name to a buffer object. After this call, the newly created buffer object is initialized with a memory buffer of size zero and a default state. (For the default setting, see the OpenGL specification for ARB_vertex_buffer_object.)
void glBindBuffer(GLenum target, GLuint buffer);
target should be set to GL_PIXEL_UNPACK_BUFFER to use the buffer as the source of pixel data. buffer specifies the unique name for the buffer object.
3. Create and initialize the data store of the buffer object by calling the function glBufferData. Essentially, this call uploads your data to the GPU.
void glBufferData(GLenum target, GLsizeiptr size, const GLvoid *data, GLenum usage);
target must be set to GL_PIXEL_UNPACK_BUFFER. size specifies the size of the data store. data points to the source data.
If this is not NULL, the source data is copied to the data store of the buffer object. If NULL, the contents of the data store are undefined. usage is a constant that provides a hint as to how your application plans to use the data store. For more details on buffer hints, see “Buffer Usage Hints” (page 110).
4. Whenever you call glDrawPixels, glTexSubImage2D, or similar functions that read pixel data from the application, those functions use the data in the bound pixel buffer object instead.
5. To update the data in the buffer object, your application calls glMapBuffer. Mapping the buffer prevents the GPU from operating on the data, and gives your application a pointer to memory it can use to update the buffer.
void *glMapBuffer(GLenum target, GLenum access);
target must be set to GL_PIXEL_UNPACK_BUFFER. access indicates the operations you plan to perform on the data. You can supply GL_READ_ONLY, GL_WRITE_ONLY, or GL_READ_WRITE.
6. Modify the texture data using the pointer provided by glMapBuffer.
7. When you have finished modifying the texture, call the function glUnmapBuffer. You should supply GL_PIXEL_UNPACK_BUFFER. Once the buffer is unmapped, your application can no longer access the buffer's data through the pointer, and the buffer's contents are uploaded again to the GPU.
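Assembled into one place, a texture upload through a pixel buffer object might look like the following sketch. The width, height, imageSize, and myTextureName values and the FillTextureData helper are illustrative placeholders, not values defined by OpenGL.

GLuint pixelBuffer;
glGenBuffers(1, &pixelBuffer);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBuffer);
glBufferData(GL_PIXEL_UNPACK_BUFFER, imageSize, NULL, GL_STREAM_DRAW);

GLvoid *pixels = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
FillTextureData(pixels);                        // write BGRA pixel data into the mapped buffer
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

// Because a pixel unpack buffer is bound, the last argument is an offset into
// that buffer rather than a pointer to application memory.
glBindTexture(GL_TEXTURE_2D, myTextureName);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, (GLvoid*)0);

glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);        // unbind so later calls read client memory again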
Using Pixel Buffer Objects for Asynchronous Pixel Transfers

glReadPixels normally blocks until previous commands have completed, which includes the slow process of copying the pixel data to the application. However, if you call glReadPixels while a pixel buffer object is bound, the function returns immediately. It does not block until you actually map the pixel buffer object to read its content.
1. Call the function glGenBuffers to create a new name for a buffer object.
void glGenBuffers(GLsizei n, GLuint *buffers);
n is the number of buffers you wish to create identifiers for. buffers specifies a pointer to memory to store the buffer names.
2. Call the function glBindBuffer to bind an unused name to a buffer object. After this call, the newly created buffer object is initialized with a memory buffer of size zero and a default state. (For the default setting, see the OpenGL specification for ARB_vertex_buffer_object.)
void glBindBuffer(GLenum target, GLuint buffer);
target should be set to GL_PIXEL_PACK_BUFFER to use the buffer as the destination for pixel data. buffer specifies the unique name for the buffer object.
3. Create and initialize the data store of the buffer object by calling the function glBufferData.
void glBufferData(GLenum target, GLsizeiptr size, const GLvoid *data, GLenum usage);
target must be set to GL_PIXEL_PACK_BUFFER. size specifies the size of the data store. data points to the source data. If this is not NULL, the source data is copied to the data store of the buffer object. If NULL, the contents of the data store are undefined. usage is a constant that provides a hint as to how your application plans to use the data store. For more details on buffer hints, see “Buffer Usage Hints” (page 110).
4. Call glReadPixels or a similar function. The function inserts a command to read the pixel data into the bound pixel buffer object and then returns.
5. To take advantage of asynchronous pixel reads, your application should perform other work.
6. To retrieve the data in the pixel buffer object, your application calls glMapBuffer. This blocks OpenGL until the previously queued glReadPixels command completes, maps the data, and provides a pointer to your application.
void *glMapBuffer(GLenum target, GLenum access);
target must be set to GL_PIXEL_PACK_BUFFER. access indicates the operations you plan to perform on the data. You can supply GL_READ_ONLY, GL_WRITE_ONLY, or GL_READ_WRITE.
7. Read the pixel data using the pointer provided by glMapBuffer.
8. When you no longer need the pixel data, call the function glUnmapBuffer. You should supply GL_PIXEL_PACK_BUFFER. Once the buffer is unmapped, the data is no longer accessible to your application.

Using Pixel Buffer Objects to Keep Data on the GPU

There is no difference between a vertex buffer object and a pixel buffer object except for the target to which they are bound. An application can take the results in one buffer and use them as another buffer type. For example, you could use the pixel results from a fragment shader and reinterpret them as vertex data in a future pass, without ever leaving the GPU:
1. Set up your first pass and submit your drawing commands.
2. Bind a pixel buffer object and call glReadPixels to fetch the intermediate results into a buffer.
3. Bind the same buffer as a vertex buffer.
4. Set up the second pass of your algorithm and submit your drawing commands.
Keeping your intermediate data inside the GPU when performing multiple passes can result in great performance increases.

Apple Client Storage

The Apple client storage extension (APPLE_client_storage) lets you provide OpenGL with a pointer to memory that your application allocates and maintains. OpenGL retains a pointer to your data but does not copy the data. Because OpenGL references your data, your application must retain its copy of the data until all referencing textures are deleted. By using this extension you can eliminate the OpenGL framework copy, as shown in Figure 11-4. Note that a texture width must be a multiple of 32 bytes for OpenGL to bypass the copy operation from the application to the OpenGL framework.

Figure 11-4 The client storage extension eliminates a data copy

The Apple client storage extension defines a pixel storage parameter, GL_UNPACK_CLIENT_STORAGE_APPLE, that you pass to the OpenGL function glPixelStorei to specify that your application retains storage for textures. The following code sets up client storage:
glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);
For detailed information, see the OpenGL specification for the Apple client storage extension.

Apple Texture Range and Rectangle Texture

The Apple texture range extension (APPLE_texture_range) lets you define a region of memory used for texture data. Typically you specify an address range that encompasses the storage for a set of textures. This allows the OpenGL driver to optimize memory usage by creating a single memory mapping for all of the textures. You can also provide a hint as to how the data should be stored: cached or shared. The cached hint specifies that the texture data should be cached in video memory. This hint is recommended when you have textures that you plan to use multiple times or that use linear filtering.
The shared hint indicates that data should be mapped into a region of memory that enables the GPU to access the texture data directly (via DMA) without the need to copy it. This hint is best when you are using large images only once, perform nearest-neighbor filtering, or need to scale down the size of an image.

The texture range extension defines the following routine for making a single memory mapping for all of the textures used by your application:
void glTextureRangeAPPLE(GLenum target, GLsizei length, GLvoid *pointer);
target is a valid texture target, such as GL_TEXTURE_2D. length specifies the number of bytes in the address space referred to by the pointer parameter. pointer points to the address space that your application provides for texture storage.

You provide the hint parameter and a parameter value to the OpenGL function glTexParameteri. The possible values for the storage hint parameter (GL_TEXTURE_STORAGE_HINT_APPLE) are GL_STORAGE_CACHED_APPLE or GL_STORAGE_SHARED_APPLE.

Some hardware requires texture dimensions to be a power of two before the hardware can upload the data using DMA. The rectangle texture extension (ARB_texture_rectangle) was introduced to allow texture targets for textures of any dimensions, that is, rectangle textures (GL_TEXTURE_RECTANGLE_ARB). You need to use the rectangle texture extension together with the Apple texture range extension to ensure OpenGL uses DMA to access your texture data. These extensions allow you to bypass the OpenGL driver, as shown in Figure 11-5.

Note that OpenGL does not use DMA for a power-of-two texture target (GL_TEXTURE_2D). So, unlike the rectangular texture, the power-of-two texture will incur one additional copy and performance won't be quite as fast. The performance typically isn't an issue because games, which are the applications most likely to use power-of-two textures, load textures at the start of a game or level and don't upload textures in real time as often as applications that use rectangular textures, which usually play video or display images. The next section has code examples that use the texture range and rectangle textures together with the Apple client storage extension.

Figure 11-5 The texture range extension eliminates a data copy

For detailed information on these extensions, see the OpenGL specification for the Apple texture range extension and the OpenGL specification for the ARB texture rectangle extension.

Combining Client Storage with Texture Ranges

You can use the Apple client storage extension along with the Apple texture range extension to streamline the texture data path in your application. When used together, OpenGL moves texture data directly into video memory, as shown in Figure 11-6. The GPU directly accesses your data (via DMA). The setup is slightly different for rectangular and power-of-two textures. The code examples in this section upload textures to the GPU. You can also use these extensions to download textures; see “Downloading Texture Data” (page 136).
Figure 11-6 Combining extensions to eliminate data copies

Listing 11-1 shows how to use the extensions for a rectangular texture. After enabling the texture rectangle extension you need to bind the rectangular texture to a target. Next, set up the storage hint. Call glPixelStorei to set up the Apple client storage extension. Finally, call the function glTexImage2D with a rectangular texture target and a pointer to your texture data.

Note: The texture rectangle extension limits what can be done with rectangular textures. To understand the limitations in detail, read the OpenGL extension for texture rectangles. See "Working with Non–Power-of-Two Textures" (page 129) for an overview of the limitations and an alternative to using this extension.

Listing 11-1 Using texture extensions for a rectangular texture

glEnable (GL_TEXTURE_RECTANGLE_ARB);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, id);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_STORAGE_HINT_APPLE,
                GL_STORAGE_CACHED_APPLE);
glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA, sizex, sizey, 0, GL_BGRA,
             GL_UNSIGNED_INT_8_8_8_8_REV, myImagePtr);

Setting up a power-of-two texture to use these extensions is similar to what's needed to set up a rectangular texture, as you can see by looking at Listing 11-2. The difference is that the GL_TEXTURE_2D texture target replaces the GL_TEXTURE_RECTANGLE_ARB texture target.

Listing 11-2 Using texture extensions for a power-of-two texture

glBindTexture(GL_TEXTURE_2D, myTextureName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_STORAGE_HINT_APPLE,
                GL_STORAGE_CACHED_APPLE);
glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, sizex, sizey, 0, GL_BGRA,
             GL_UNSIGNED_INT_8_8_8_8_REV, myImagePtr);

Optimal Data Formats and Types

The best format and data type combinations to use for texture data are:

● GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV
● GL_BGRA, GL_UNSIGNED_SHORT_1_5_5_5_REV
● GL_YCBCR_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE

The combination GL_RGBA and GL_UNSIGNED_BYTE needs to be swizzled by many cards when the data is loaded, so it's not recommended.

Working with Non–Power-of-Two Textures

OpenGL is often used to process video and images, which typically have dimensions that are not a power of two. Until OpenGL 2.0, the texture rectangle extension (ARB_texture_rectangle) provided the only option for a rectangular texture target. This extension, however, imposes the following restrictions on rectangular textures:

● You can't use mipmap filtering with them.
● You can use only these wrap modes: GL_CLAMP, GL_CLAMP_TO_EDGE, and GL_CLAMP_TO_BORDER.
● The texture cannot have a border.
● The texture uses non-normalized texture coordinates. (See Figure 11-7.)

OpenGL 2.0 adds another option for a rectangular texture target through the ARB_texture_non_power_of_two extension, which supports these textures without the limitations of the ARB_texture_rectangle extension. Before using it, you must check to make sure the functionality is available. You'll also want to consult the OpenGL specification for the non-power-of-two extension.
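If you need to test for this support at run time, one common approach is to look for the extension in the renderer's extension string. This is a minimal sketch, not code from this guide; it assumes a valid, current OpenGL context.

#include <OpenGL/gl.h>
#include <OpenGL/glu.h>

// Returns GL_TRUE if the current renderer exports ARB_texture_non_power_of_two.
GLboolean rendererSupportsNPOT(void)
{
    const GLubyte *extensions = glGetString(GL_EXTENSIONS);
    return gluCheckExtension((const GLubyte *)"GL_ARB_texture_non_power_of_two",
                             extensions);
}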
Figure 11-7 Normalized and non-normalized coordinates

If your code runs on a system that does not support either the ARB_texture_rectangle or the ARB_texture_non_power_of_two extension, you have these options for working with rectangular images:

● Use the GLU function gluScaleImage to scale the image so that it fits in a rectangle whose dimensions are a power of two. Drawing the image from the properly sized rectangle back into a polygon that has the correct aspect ratio for the image undoes the effect of the scaling.

Note: This option can result in the loss of some data. But if your application runs on hardware that doesn't support the ARB_texture_rectangle extension, you may need to use this option.

● Segment the image into power-of-two rectangles, as shown in Figure 11-8, by using one image buffer and different texture pointers. Notice how the sides and corners of the image shown in Figure 11-8 are segmented into increasingly smaller rectangles to ensure that every rectangle has dimensions that are a power of two. Special care may be needed at the borders between each segment to avoid filtering artifacts if the texture is scaled or rotated.

Figure 11-8 An image segmented into power-of-two tiles

Creating Textures from Image Data

OpenGL on the Macintosh provides several options for creating high-quality textures from image data. OS X supports floating-point pixel values, multiple image file formats, and a variety of color spaces. You can import a floating-point image into a floating-point texture. Figure 11-9 shows an image used to texture a cube.

Figure 11-9 Using an image as a texture for a cube

For Cocoa, you need to provide a bitmap representation. You can create an NSBitmapImageRep object from the contents of an NSView object. You can use the Image I/O framework (see CGImageSource Reference). This framework has support for many different file formats, floating-point data, and a variety of color spaces. Furthermore, it is easy to use. You can import image data as a texture simply by supplying a CFURL object that specifies the location of the texture. There is no need for you to convert the image to an intermediate integer RGB format.

Creating a Texture from a Cocoa View

You can use the NSView class or a subclass of it for texturing in OpenGL. The process is to first store the image data from an NSView object in an NSBitmapImageRep object so that the image data is in a format that can be readily used as texture data by OpenGL. Then, after setting up the texture target, you supply the bitmap data to the OpenGL function glTexImage2D. Note that you must have a valid, current OpenGL context set up.

Note: You can't create an OpenGL texture from image data that's provided by a view created from the following classes: NSProgressIndicator, NSMovieView, and NSOpenGLView. This is because these views do not use the window backing store, which is what the method initWithFocusedViewRect: reads from.

Listing 11-3 shows a routine that uses this process to create a texture from the contents of an NSView object.
A detailed explanation for each numbered line of code appears following the listing.

Listing 11-3 Building an OpenGL texture from an NSView object

-(void)myTextureFromView:(NSView*)theView textureName:(GLuint*)texName
{
    NSBitmapImageRep *bitmap = [theView
        bitmapImageRepForCachingDisplayInRect:[theView visibleRect]];   // 1
    int samplesPerPixel = 0;

    [theView cacheDisplayInRect:[theView visibleRect]
               toBitmapImageRep:bitmap];                                // 2
    samplesPerPixel = [bitmap samplesPerPixel];                         // 3
    glPixelStorei(GL_UNPACK_ROW_LENGTH,
                  [bitmap bytesPerRow]/samplesPerPixel);                // 4
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);                              // 5

    if (*texName == 0)                                                  // 6
        glGenTextures(1, texName);

    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, *texName);                  // 7
    glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER,
                    GL_LINEAR);                                         // 8
    if (![bitmap isPlanar] &&
        (samplesPerPixel == 3 || samplesPerPixel == 4)) {               // 9
        glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0,
                     samplesPerPixel == 4 ? GL_RGBA8 : GL_RGB8,
                     [bitmap pixelsWide],
                     [bitmap pixelsHigh],
                     0,
                     samplesPerPixel == 4 ? GL_RGBA : GL_RGB,
                     GL_UNSIGNED_BYTE,
                     [bitmap bitmapData]);
    }
    else {
        // Your code to report unsupported bitmap data
    }
}

Here's what the code does:

1. Allocates an NSBitmapImageRep object.
2. Initializes the NSBitmapImageRep object with bitmap data from the current view.
3. Gets the number of samples per pixel.
4. Sets the appropriate unpacking row length for the bitmap.
5. Sets the byte-aligned unpacking that's needed for bitmaps that are 3 bytes per pixel.
6. If a texture object is not passed in, generates a new texture object.
7. Binds the texture name to the texture target.
8. Sets filtering so that it does not use a mipmap, which would be redundant for the texture rectangle extension.
9. Checks to see if the bitmap is nonplanar and is either a 24-bit RGB bitmap or a 32-bit RGBA bitmap. If so, retrieves the pixel data using the bitmapData method, passing it along with other appropriate parameters to the OpenGL function for specifying a 2D texture image.

Creating a Texture from a Quartz Image Source

Quartz images (CGImageRef data type) are defined in the Core Graphics framework (ApplicationServices/CoreGraphics.framework/CGImage.h), while the image source data type for reading image data and creating Quartz images from an image source is declared in the Image I/O framework (ApplicationServices/ImageIO.framework/CGImageSource.h). Quartz provides routines that read a wide variety of image data.

To use a Quartz image as a texture source, follow these steps:

1. Create a Quartz image source by supplying a CFURL object to the function CGImageSourceCreateWithURL.
2. Create a Quartz image by extracting an image from the image source, using the function CGImageSourceCreateImageAtIndex.
3. Extract the image dimensions using the functions CGImageGetWidth and CGImageGetHeight. You'll need these to calculate the storage required for the texture.
4. Allocate storage for the texture.
5. Create a color space for the image data.
6. Create a Quartz bitmap graphics context for drawing. Make sure to set up the context for premultiplied alpha.
7. Draw the image to the bitmap context.
8. Release the bitmap context.
9. Set the pixel storage mode by calling the function glPixelStorei.
10. Create and bind the texture.
11. Set up the appropriate texture parameters.
12. Call glTexImage2D, supplying the image data.
13. Free the image data.

Listing 11-4 shows a code fragment that performs these steps. Note that you must have a valid, current OpenGL context.

Listing 11-4 Using a Quartz image as a texture source

CGImageSourceRef myImageSourceRef = CGImageSourceCreateWithURL(url, NULL);
CGImageRef myImageRef = CGImageSourceCreateImageAtIndex(myImageSourceRef, 0, NULL);
GLuint myTextureName;
size_t width = CGImageGetWidth(myImageRef);
size_t height = CGImageGetHeight(myImageRef);
CGRect rect = {{0, 0}, {width, height}};
void *myData = calloc(width * 4, height);
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef myBitmapContext = CGBitmapContextCreate(myData,
        width, height, 8, width * 4, space,
        kCGBitmapByteOrder32Host | kCGImageAlphaPremultipliedFirst);
CGContextSetBlendMode(myBitmapContext, kCGBlendModeCopy);
CGContextDrawImage(myBitmapContext, rect, myImageRef);
CGContextRelease(myBitmapContext);
glPixelStorei(GL_UNPACK_ROW_LENGTH, width);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &myTextureName);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, myTextureName);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA8, width, height, 0,
             GL_BGRA_EXT, GL_UNSIGNED_INT_8_8_8_8_REV, myData);
free(myData);

For more information on using Quartz, see Quartz 2D Programming Guide, CGImage Reference, and CGImageSource Reference.

Getting Decompressed Raw Pixel Data from a Source Image

You can use the Image I/O framework together with a Quartz data provider to obtain decompressed raw pixel data from a source image, as shown in Listing 11-5. You can then use the pixel data for your OpenGL texture. The data has the same format as the source image, so you need to make sure that you use a source image that has the layout you need. Alpha is not premultiplied for the pixel data obtained in Listing 11-5, but alpha is premultiplied for the pixel data you get when using the code described in "Creating a Texture from a Cocoa View" (page 131) and "Creating a Texture from a Quartz Image Source" (page 133).

Listing 11-5 Getting pixel data from a source image

CGImageSourceRef myImageSourceRef = CGImageSourceCreateWithURL(url, NULL);
CGImageRef myImageRef = CGImageSourceCreateImageAtIndex(myImageSourceRef, 0, NULL);
CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(myImageRef));
void *pixelData = (void *)CFDataGetBytePtr(data);

Downloading Texture Data

A texture download operation uses the same data path as an upload operation except that the data path is reversed. Downloading transfers texture data, using direct memory access (DMA), from VRAM into a texture that can then be accessed directly by your application. You can use the Apple client storage, texture range, and texture rectangle extensions for downloading, just as you would for uploading.

To download texture data using the Apple client storage, texture range, and texture rectangle extensions:

● Bind a texture name to a texture target.
● Set up the extensions.
● Call the function glCopyTexSubImage2D to copy a texture subimage from the specified window coordinates.
This call initiates an asynchronous DMA transfer to system memory the next time you call a flush routine. The CPU doesn't wait for this call to complete.

● Call the function glGetTexImage to transfer the texture into system memory. Note that the parameters must match the ones that you used to set up the texture when you called the function glTexImage2D. This call is the synchronization point; it waits until the transfer is finished.

Listing 11-6 shows a code fragment that downloads a rectangular texture that uses cached memory. Your application processes data between the glCopyTexSubImage2D and glGetTexImage calls. How much processing? Enough so that your application does not need to wait for the GPU.

Listing 11-6 Code that downloads texture data

glBindTexture(GL_TEXTURE_RECTANGLE_ARB, myTextureName);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_STORAGE_HINT_APPLE,
                GL_STORAGE_SHARED_APPLE);
glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA, sizex, sizey, 0, GL_BGRA,
             GL_UNSIGNED_INT_8_8_8_8_REV, myImagePtr);
glCopyTexSubImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, 0, 0, 0, 0,
                    image_width, image_height);
glFlush();
// Do other work processing here, using a double or triple buffer
glGetTexImage(GL_TEXTURE_RECTANGLE_ARB, 0, GL_BGRA,
              GL_UNSIGNED_INT_8_8_8_8_REV, pixels);

Double Buffering Texture Data

When you use any technique that allows the GPU to access your texture data directly, such as the texture range extension, it's possible for the GPU and CPU to access the data at the same time. To avoid such a collision, you must synchronize the GPU and the CPU. The simplest way is shown in Figure 11-10: your application works on the data, flushes it to the GPU, and waits until the GPU is finished before working on the data again.

One technique for ensuring that the GPU is finished executing commands before your application sends more data is to insert a token into the command stream and use that to determine when the CPU can touch the data again, as described in "Use Fences for Finer-Grained Synchronization" (page 98). Figure 11-10 uses the fence extension command glFinishObject to synchronize buffer updates for a stream of single-buffered texture data. Notice that when the CPU is processing texture data, the GPU is idle. Similarly, when the GPU is processing texture data, the CPU is idle. It's much more efficient for the GPU and CPU to work asynchronously than to work synchronously. Double buffering data is a technique that allows you to process data asynchronously, as shown in Figure 11-11 (page 138).

Figure 11-10 Single-buffered data

To double buffer data, you must supply two sets of data to work on. Note in Figure 11-11 that while the GPU is rendering one frame of data, the CPU processes the next. After the initial startup, neither processing unit is idle. Using the glFinishObject function provided by the fence extension ensures that buffer updating is synchronized.
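A rough sketch of the double-buffered pattern shown in Figure 11-11 (below) follows. It is illustrative only and is not code from this guide: the texture names, dimensions, and the fillTextureData and drawFrameUsingTexture routines are hypothetical, and it assumes the APPLE_fence function glFinishObjectAPPLE, which waits for all submitted commands that reference a given object.

GLuint textures[2];   // two textures, updated and drawn in alternation
GLvoid *pixels[2];    // client-storage backing memory for each texture
int frame = 0;

for (;;) {
    int current = frame % 2;

    // Wait until the GPU has finished drawing from this texture (an earlier frame)
    // before the CPU overwrites its backing memory.
    glFinishObjectAPPLE(GL_TEXTURE, textures[current]);

    fillTextureData(pixels[current], frame);   // CPU work for this frame
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, textures[current]);
    glTexSubImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, 0, 0, width, height,
                    GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels[current]);

    drawFrameUsingTexture(textures[current]);  // submit this frame's drawing commands
    glFlush();                                 // let the GPU start while the CPU moves on

    frame++;
}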
Figure 11-11 Double-buffered data

Customizing the OpenGL Pipeline with Shaders

OpenGL 1.x used fixed functions to deliver a useful graphics pipeline to application developers. To configure the various stages of the pipeline shown in Figure 12-1, applications called OpenGL functions to tweak the calculations that were performed for each vertex and fragment. Complex algorithms required multiple rendering passes and dozens of function calls to configure the calculations that the programmer desired. Extensions offered new configuration options, but did not change the complex nature of OpenGL programming.

Figure 12-1 OpenGL fixed-function pipeline

Starting with OpenGL 2.0, some stages of the OpenGL pipeline can be replaced with shaders. A shader is a program written in a special shading language. This program is compiled by OpenGL and uploaded directly into the graphics hardware. Figure 12-2 shows where your applications can hook into the pipeline with shaders.

Figure 12-2 OpenGL shader pipeline

Shaders offer a considerable number of advantages to your application:

● Shaders give you precise control over the operations that are performed to render your images.
● Shaders allow algorithms to be written in a terse, expressive format. Rather than writing complex blocks of configuration calls to implement a mathematical operation, you write code that expresses the algorithm directly.
● Older graphics processors implemented the fixed-function pipeline in hardware or microcode, but now graphics processors are general-purpose computing devices. The fixed-function pipeline is itself implemented as a shader.
● Shaders allow longer and more complex algorithms to be implemented using a single rendering pass. Because you have extensive control over the pipeline, it is also easier to implement multipass algorithms without requiring the data to be read back from the GPU.
● Your application can switch between different shaders with a single function call. In contrast, configuring the fixed-function pipeline incurs significant function-call overhead.

If your application uses the fixed-function pipeline, a critical task is to replace that fixed-function processing with shaders.

If you are new to shaders, OpenGL Shading Language, by Randi J. Rost, is an excellent guide for those looking to learn more about writing shaders and integrating them into your application. The rest of this chapter provides some boilerplate code, briefly describes the extensions that implement shaders, and discusses tools that Apple provides to assist you in writing shaders.
Shader Basics

OpenGL 2.0 offers vertex and fragment shaders to take over the processing of those two stages of the graphics pipeline. These same capabilities are also offered by the ARB_shader_objects, ARB_vertex_shader, and ARB_fragment_shader extensions. Vertex shading is available on all hardware running OS X v10.5 or later. Fragment shading is available on all hardware running OS X v10.6 and the majority of hardware running OS X v10.5.

Creating a shader program is an expensive operation compared to other OpenGL state changes. Listing 12-1 presents a typical strategy to load, compile, and verify a shader program.

Listing 12-1 Loading a shader

/** Initialization-time for shader **/
GLuint shader, prog;
GLint i, logLen = 0;
GLchar log[1024];
const GLchar *shaderText = "... shader text ...";

// Create IDs for the program and the shader
prog = glCreateProgram();
shader = glCreateShader(GL_VERTEX_SHADER);

// Define shader text
glShaderSource(shader, 1, &shaderText, NULL);

// Compile shader
glCompileShader(shader);

// Associate shader with program
glAttachShader(prog, shader);

// Link program
glLinkProgram(prog);

// Validate program
glValidateProgram(prog);

// Check the status of the compile/link
glGetProgramiv(prog, GL_INFO_LOG_LENGTH, &logLen);
if (logLen > 0)
{
    // Show any errors as appropriate
    glGetProgramInfoLog(prog, sizeof(log), &logLen, log);
    fprintf(stderr, "Prog Info Log: %s\n", log);
}

// Retrieve all uniform locations that are determined during link phase
// (uniformCt, uniformName, and uniformLoc are the application's own arrays)
for (i = 0; i < uniformCt; i++)
{
    uniformLoc[i] = glGetUniformLocation(prog, uniformName[i]);
}

// Retrieve all attrib locations that are determined during link phase
for (i = 0; i < attribCt; i++)
{
    attribLoc[i] = glGetAttribLocation(prog, attribName[i]);
}

/** Render stage for shaders **/
glUseProgram(prog);

This code loads the text source for a vertex shader, compiles it, and adds it to the program. A more complex example might also attach fragment and geometry shaders. The program is linked and validated for correctness. Finally, the program retrieves information about the inputs to the shader and stores them in its own arrays. When the application is ready to use the shader, it calls glUseProgram to make it the current shader.

For best performance, your application should create shaders when your application is initialized, and not inside the rendering loop. Inside your rendering loop, you can quickly switch in the appropriate shaders by calling glUseProgram. For best performance, use the vertex array object extension to also switch in the vertex pointers. See "Vertex Array Object" (page 116) for more information.

Advanced Shading Extensions

In addition to the standard shaders, some Macs offer additional shading extensions to reveal advanced hardware capabilities. Not all of these extensions are available on all hardware, so you need to assess whether the features of each extension are worth implementing in your application.

Transform Feedback

The EXT_transform_feedback extension is available on all hardware running OS X v10.5 or later. With the feedback extension, you can capture the results of the vertex shader into a buffer object, which can be used as an input to future commands. This is similar to the pixel buffer object technique described in "Using Pixel Buffer Objects to Keep Data on the GPU" (page 124), but more directly captures the results you desire.
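A rough sketch of the capture pattern follows. It is illustrative only and is not code from this guide: prog is a linked program object such as the one built in Listing 12-1, feedbackBuffer and vertexCount are hypothetical, the varying name capturedValue is assumed to be declared by your vertex shader, and the EXT-suffixed entry points and tokens come from the EXT_transform_feedback specification; check that specification for the exact API your renderer exports.

// Before linking: name the vertex shader output(s) to capture.
const GLchar *varyings[] = { "capturedValue" };
glTransformFeedbackVaryingsEXT(prog, 1, varyings, GL_INTERLEAVED_ATTRIBS_EXT);
glLinkProgram(prog);

// Bind a buffer object to receive the captured results.
glBindBufferBaseEXT(GL_TRANSFORM_FEEDBACK_BUFFER_EXT, 0, feedbackBuffer);

// Capture while drawing; afterward the buffer can be bound as a vertex buffer
// and used as input to a later pass.
glBeginTransformFeedbackEXT(GL_POINTS);
glDrawArrays(GL_POINTS, 0, vertexCount);
glEndTransformFeedbackEXT();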
GPU Shader 4

The EXT_gpu_shader4 extension extends the OpenGL shading language to offer new operations, including:

● Full integer support.
● A built-in shader variable to reference the current vertex.
● A built-in shader variable to reference the current primitive. This makes it easier to use the same static vertex data to render multiple primitives, using a shader and uniform variables to customize each instance of that primitive.
● Unfiltered texture fetches using integer coordinates.
● Querying the size of a texture within a shader.
● Offset texture lookups.
● Explicit gradient and LOD texture lookups.
● Depth cubemaps.

Geometry Shaders

The EXT_geometry_shader4 extension allows you to create geometry shaders. A geometry shader accepts transformed vertices and can add or remove vertices before passing them down to the rasterizer. This allows the application to add or remove geometry based on the calculated values in the vertex. For example, given a triangle and its neighboring vertices, your application could emit additional vertices to create a more accurate appearance of a curved surface.

Uniform Buffers

The EXT_bindable_uniform extension allows your application to allocate buffer objects and use them as the source for uniform data in your shaders. Instead of relying on a single block of uniform memory supplied by OpenGL, your application allocates buffer objects using the same API that it uses to implement vertex buffer objects ("Vertex Buffers" (page 107)). Instead of making a function call for each uniform variable you want to change, you can swap all of the uniform data by binding to a different uniform buffer.

Techniques for Scene Antialiasing

Aliasing is the bane of the digital domain. In the early days of the personal computer, jagged edges and blocky graphics were accepted by the user simply because not much could be done to correct them. Now, with faster hardware and higher-resolution displays, there are several antialiasing techniques that can smooth edges to achieve a more realistic scene.

OpenGL supports antialiasing that operates at the level of lines and polygons as well as at the level of the full scene. This chapter discusses techniques for full scene antialiasing (FSAA). If your application needs point or line antialiasing instead of full scene antialiasing, use the built-in OpenGL point and line antialiasing functions. These are described in Section 3.4.2 of the OpenGL Specification.

The three antialiasing techniques in use today are multisampling, supersampling, and alpha channel blending:

● Multisampling defines a technique for sampling pixel content at multiple locations for each pixel. This is a good technique to use if you want to smooth polygon edges.
● Supersampling renders at a much higher resolution than what's needed for the display. Prior to drawing the content to the display, OpenGL scales and filters the content to the appropriate resolution. This is a good technique to use when you want to smooth texture interiors in addition to polygon edges.
● Alpha channel blending uses the alpha value of a fragment to control how to blend the fragment with the pixel values that are already in the framebuffer. It's a good technique to use when you want to ensure that foreground and background images are composited smoothly.

The ARB_multisample extension defines a specification for full scene antialiasing.
It describes multisampling and alpha channel sampling. The specification does not specifically mention supersampling, but its wording doesn't preclude it. The antialiasing methods that are available depend on the hardware, and the actual implementation depends on the vendor. Some graphics cards support antialiasing using a mixture of multisampling and supersampling. The methodology used to select the samples can vary as well. Your best approach is to query the renderer to find out exactly what is supported. OpenGL lets you provide a hint to the renderer as to which antialiasing technique you prefer. Hints are available as renderer attributes that you supply when you create a pixel format object.

A smaller subset of renderers support the EXT_framebuffer_blit and EXT_framebuffer_multisample extensions. These extensions allow your application to create multisampled offscreen framebuffer objects, render detailed scenes to them, and control precisely when the multisampled renderbuffer is resolved to a single displayable color per pixel.

Guidelines

Keep the following in mind when you set up full scene antialiasing:

● Although a system may have enough VRAM to accommodate a multisample buffer, a large buffer can affect the ability of OpenGL to maintain a properly working texture set. Keep in mind that the buffers associated with the rendering context—depth and stencil—increase in size by a factor equal to the number of samples per pixel.
● The OpenGL driver allocates the memory needed for the multisample buffer; your application should not allocate this memory.
● Any antialiasing algorithm that operates on the full scene requires additional computing resources. There is a tradeoff between performance and quality. For that reason, you may want to provide a user interface that allows the user to enable and disable FSAA, or to choose the level of quality for antialiasing.
● The commands glEnable(GL_MULTISAMPLE) and glDisable(GL_MULTISAMPLE) are ignored on some hardware because some graphics cards have the feature enabled all the time. That doesn't mean that you should not call these commands, because you'll certainly need them on hardware that doesn't ignore them.
● A hint as to the variant of sampling you want is a suggestion, not a command. Not all hardware supports all types of antialiasing. Other hardware mixes multisampling with supersampling techniques. The driver dictates the type of antialiasing that's actually used in your application.
● The best way to find out which sample modes are supported is to call the CGL function CGLDescribeRenderer with the renderer property kCGLRPSampleModes or kCGLRPSampleAlpha. You can also determine how many samples the renderer supports by calling CGLDescribeRenderer with the renderer property kCGLRPMaxSamples.

General Approach

The general approach to setting up full scene antialiasing is as follows:

1. Check to see what's supported. Not all renderers support the ARB multisample extension, so you need to check for this functionality (see "Detecting Functionality" (page 83)). To find out what type of antialiasing a specific renderer supports, call the function CGLDescribeRenderer. Supply the renderer property kCGLRPSampleModes to find out whether the renderer supports multisampling and supersampling. Supply kCGLRPSampleAlpha to see whether the renderer supports alpha sampling.
You can choose to exclude unsupported hardware from the pixel format search by specifying only the hardware that supports multisample antialiasing. Keep in mind that if you exclude unsupported hardware, the unsupported displays will not render anything. If you include unsupported hardware, OpenGL uses normal aliased rendering for the unsupported displays and multisampled rendering for supported displays.

2. Include these buffer attributes in the attributes array:

● The appropriate sample buffer attribute constant (NSOpenGLPFASampleBuffers or kCGLPFASampleBuffers) along with the number of multisample buffers. At this time the specification allows only one multisample buffer.
● The appropriate samples constant (NSOpenGLPFASamples or kCGLPFASamples) along with the number of samples per pixel. You can supply 2, 4, 6, or more depending on what the renderer supports and the amount of VRAM available. The value that you supply affects the quality, memory use, and speed of the multisampling operation. For fastest performance, and to use the least amount of video memory, specify 2 samples. When you need more quality, specify 4 or more.
● The no recovery attribute (NSOpenGLPFANoRecovery or kCGLPFANoRecovery). Although enabling this attribute is not mandatory, it's recommended to prevent OpenGL from falling back to the software renderer. Multisampled antialiasing performance is slow in the software renderer.

3. Optionally provide a hint for the type of antialiasing you want—multisampling, supersampling, or alpha sampling. See "Hinting for a Specific Antialiasing Technique" (page 147).

4. Enable multisampling with the following command:

glEnable(GL_MULTISAMPLE);

Regardless of the enabled state, OpenGL always uses the multisample buffer if you supply the appropriate buffer attributes when you set up the pixel format object. If you haven't supplied the appropriate attributes, enabling multisampling has no effect. When multisampling is disabled, all coverage values are set to 1, which gives the appearance of rendering without multisampling. Some graphics hardware leaves multisampling enabled all the time. However, don't rely on hardware to have multisampling enabled; use glEnable to programmatically turn on this feature.

5. Optionally provide hints for the rendering algorithm. You perform this optional step only if you want OpenGL to compute coverage values by a method other than uniformly weighting samples and averaging them. Some hardware supports a multisample filter hint through an OpenGL extension—GL_NV_multisample_filter_hint. This hint allows an OpenGL implementation to use an alternative method of resolving the color of multisampled pixels. You can specify that OpenGL uses faster or nicer rendering by calling the OpenGL function glHint, passing the constant GL_MULTISAMPLE_FILTER_HINT_NV as the target parameter and GL_FASTEST or GL_NICEST as the mode parameter. Hints allow the hardware to optimize the output if it can. There is no performance penalty or returned error for issuing a hint that's not supported. For more information, see the OpenGL extension registry for NV_multisample_filter_hint.

Hinting for a Specific Antialiasing Technique

When you set up your renderer and buffer attributes for full scene antialiasing, you can specify a hint to prefer one antialiasing technique over the others. If the underlying renderer does not have sufficient resources to support what you request, OpenGL ignores the hint. If you do not supply the appropriate buffer attributes when you create a pixel format object, then the hint does nothing.
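For example, a CGL pixel format that requests one multisample buffer, four samples per pixel, no software fallback, and a preference for multisampling might be set up as in the sketch below. The attribute values are illustrative; choose them based on what your renderers support.

CGLPixelFormatAttribute attribs[] = {
    kCGLPFAAccelerated,
    kCGLPFADoubleBuffer,
    kCGLPFAColorSize, (CGLPixelFormatAttribute)24,
    kCGLPFADepthSize, (CGLPixelFormatAttribute)16,
    kCGLPFASampleBuffers, (CGLPixelFormatAttribute)1,   // one multisample buffer
    kCGLPFASamples, (CGLPixelFormatAttribute)4,         // four samples per pixel
    kCGLPFANoRecovery,                                  // avoid the software renderer fallback
    kCGLPFAMultisample,                                 // hint: prefer multisampling
    (CGLPixelFormatAttribute)0
};

CGLPixelFormatObj pixelFormat = NULL;
GLint virtualScreenCount = 0;
CGLChoosePixelFormat(attribs, &pixelFormat, &virtualScreenCount);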
Table 13-1 lists the hinting constants available for the NSOpenGLPixelFormat class and CGL.

Table 13-1 Antialiasing hints

Multisampling: NSOpenGLPFAMultisample, kCGLPFAMultisample
Supersampling: NSOpenGLPFASupersample, kCGLPFASupersample
Alpha blending: NSOpenGLPFASampleAlpha, kCGLPFASampleAlpha

Concurrency and OpenGL

Concurrency is the notion of multiple things happening at the same time. In the context of computers, concurrency usually refers to executing tasks on more than one processor at the same time. By performing work in parallel, tasks complete sooner, and applications become more responsive to the user. The good news is that well-designed OpenGL applications already exhibit a specific form of concurrency—concurrency between application processing on the CPU and OpenGL processing on the GPU. Many of the techniques introduced in "OpenGL Application Design Strategies" (page 89) are aimed specifically at creating OpenGL applications that exhibit great CPU-GPU parallelism. However, modern computers not only contain a powerful GPU, but also contain multiple CPUs. Sometimes those CPUs have multiple cores, each capable of performing calculations independently of the others. It is critical that applications be designed to take advantage of concurrency where possible. Designing a concurrent application means decomposing the work your application performs into subtasks and identifying which tasks can safely operate in parallel and which tasks must be executed sequentially—that is, which tasks are dependent on either resources used by other tasks or results returned from those tasks.

Each process in OS X is made up of one or more threads. A thread is a stream of execution that runs code for the process. Multicore systems offer true concurrency by allowing multiple threads to execute simultaneously. Apple offers both traditional threads and a feature called Grand Central Dispatch (GCD). Grand Central Dispatch allows you to decompose your application into smaller tasks without requiring the application to manage threads. GCD allocates threads based on the number of cores available on the system and automatically schedules tasks to those threads. At a higher level, Cocoa offers NSOperation and NSOperationQueue to provide an Objective-C abstraction for creating and scheduling units of work. On OS X v10.6, operation queues use GCD to dispatch work; on OS X v10.5, operation queues create threads to execute your application's tasks.

This chapter does not attempt to describe these technologies in detail. Before you consider how to add concurrency to your OpenGL application, you should first read Concurrency Programming Guide. If you plan on managing threads manually, you should also read Threading Programming Guide. Regardless of which technique you use, there are additional restrictions when calling OpenGL on multithreaded systems. This chapter helps you understand when multithreading improves your OpenGL application's performance, the restrictions OpenGL places on multithreaded applications, and common design strategies you might use to implement concurrency in an OpenGL application.
Some of these design techniques can get you an improvement in just a few lines of code.

Identifying Whether an OpenGL Application Can Benefit from Concurrency

Creating a multithreaded application requires significant effort in the design, implementation, and testing of your application. Threads also add complexity and overhead to an application. For example, your application may need to copy data so that it can be handed to a worker thread, or multiple threads may need to synchronize access to the same resources. Before you attempt to implement concurrency in an OpenGL application, you should optimize your OpenGL code in a single-threaded environment using the techniques described in "OpenGL Application Design Strategies" (page 89). Focus on achieving great CPU-GPU parallelism first and then assess whether concurrent programming can provide an additional performance benefit.

A good candidate has either or both of the following characteristics:

● The application performs many tasks on the CPU that are independent of OpenGL rendering. Games, for example, simulate the game world, calculate artificial intelligence for computer-controlled opponents, and play sound. You can exploit parallelism in this scenario because many of these tasks are not dependent on your OpenGL drawing code.
● Profiling your application has shown that your OpenGL rendering code spends a lot of time in the CPU. In this scenario, the GPU is idle because your application is incapable of feeding it commands fast enough. If your CPU-bound code has already been optimized, you may be able to improve its performance further by splitting the work into tasks that execute concurrently.

If your application is blocked waiting for the GPU, and has no work it can perform in parallel with its OpenGL drawing commands, then it is not a good candidate for concurrency. If the CPU and GPU are both idle, then your OpenGL needs are probably simple enough that no further tuning is useful. For more information on how to determine where your application spends its time, see "Tuning Your OpenGL Application" (page 155).

OpenGL Restricts Each Context to a Single Thread

Each thread in an OS X process has a single current OpenGL rendering context. Every time your application calls an OpenGL function, OpenGL implicitly looks up the context associated with the current thread and modifies the state or objects associated with that context. OpenGL is not reentrant. If you modify the same context from multiple threads simultaneously, the results are unpredictable. Your application might crash or it might render improperly. If for some reason you decide to set more than one thread to target the same context, then you must synchronize threads by placing a mutex around all OpenGL calls to the context, such as gl* and CGL*. OpenGL commands that block—such as fence commands—do not synchronize threads.

GCD and NSOperationQueue objects can both execute your tasks on a thread of their choosing. They may create a thread specifically for that task, or they may reuse an existing thread. But in either case, you cannot guarantee which thread executes the task. For an OpenGL application, that means:

● Each task must set the context before executing any OpenGL commands.
● Your application must ensure that two tasks that access the same context are not allowed to execute concurrently.

Strategies for Implementing Concurrency in OpenGL Applications

A concurrent OpenGL application should focus on CPU parallelism so that OpenGL can provide more work to the GPU. Here are a few recommended strategies for implementing concurrency in an OpenGL application:

● Decompose your application into OpenGL and non-OpenGL tasks that can execute concurrently. Your OpenGL rendering code executes as a single task, so it still executes in a single thread. This strategy works best when your application has other tasks that require significant CPU processing.
● If performance profiling reveals that your application spends a lot of CPU time inside OpenGL, you can move some of that processing to another thread by enabling multithreading in the OpenGL engine. The advantage of this method is its simplicity; enabling the multithreaded OpenGL engine takes just a few lines of code. See "Multithreaded OpenGL" (page 150).
● If your application spends a lot of CPU time preparing data to send to OpenGL, you can divide the work between tasks that prepare rendering data and tasks that submit rendering commands to OpenGL. See "Perform OpenGL Computations in a Worker Task" (page 151).
● If your application has multiple scenes it can render simultaneously or work it can perform in multiple contexts, it can create multiple tasks, with one OpenGL context per task. If the contexts can share the same resources, you can use context sharing when the contexts are created to share surfaces or OpenGL objects: display lists, textures, vertex and fragment programs, vertex array objects, and so on. See "Use Multiple OpenGL Contexts" (page 153).

Multithreaded OpenGL

Whenever your application calls OpenGL, the renderer processes the parameters to put them in a format that the hardware understands. The time required to process these commands varies depending on whether the inputs are already in a hardware-friendly format, but there is always some overhead in preparing commands for the hardware.

If your application spends a lot of time performing calculations inside OpenGL, and you've already taken steps to pick ideal data formats, your application might gain an additional benefit by enabling multithreading inside the OpenGL engine. The multithreaded OpenGL engine automatically creates a worker thread and transfers some of its calculations to that thread. On a multicore system, this allows internal OpenGL calculations performed on the CPU to run in parallel with your application, improving performance. Synchronizing functions continue to block the calling thread. Listing 14-1 shows the code required to enable the multithreaded OpenGL engine.

Listing 14-1 Enabling the multithreaded OpenGL engine

CGLError err = 0;
CGLContextObj ctx = CGLGetCurrentContext();

// Enable the multithreading
err = CGLEnable(ctx, kCGLCEMPEngine);

if (err != kCGLNoError) {
    // Multithreaded execution may not be available
    // Insert your code to take appropriate action
}

Note: Enabling or disabling multithreaded execution causes OpenGL to flush previous commands as well as incurring the overhead of setting up the additional thread. You should enable or disable multithreaded execution in an initialization function rather than in the rendering loop.
Enabling multithreading comes at a cost—OpenGL must copy parameters to transmit them to the worker thread. Because of this overhead, you should always test your application with and without multithreading enabled to determine whether it provides a substantial performance improvement.

Perform OpenGL Computations in a Worker Task

Some applications perform lots of calculations on their data before passing that data down to the OpenGL renderer. For example, the application might create new geometry or animate existing geometry. Where possible, such calculations should be performed inside OpenGL. For example, vertex shaders and the transform feedback extension might allow you to perform these calculations entirely within OpenGL. This takes advantage of the greater parallelism available inside the GPU, and reduces the overhead of copying results between your application and OpenGL.

The approach described in Figure 9-3 (page 92) alternates between updating OpenGL objects and executing rendering commands that use those objects. OpenGL renders on the GPU in parallel with your application's updates running on the CPU. If the calculations performed on the CPU take more processing time than those on the GPU, then the GPU spends more time idle. In this situation, you may be able to take advantage of parallelism on systems with multiple CPUs. Split your OpenGL rendering code into separate calculation and processing tasks, and run them in parallel. Figure 14-1 shows a clear division of labor. One task produces data that is consumed by the second and submitted to OpenGL.

Figure 14-1 CPU processing and OpenGL on separate threads

For best performance, your application should avoid copying data between the tasks. For example, rather than calculating the data in one task and copying it into a vertex buffer object in the other, map the vertex buffer object in the setup code and hand the pointer directly to the worker task.

If your application can further decompose the modifications task into subtasks, you may see better benefits. For example, assume two or more vertex buffers, each of which needs to be updated before submitting drawing commands. Each can be recalculated independently of the others. In this scenario, the modification of each buffer becomes an operation, using an NSOperationQueue object to manage the work:

1. Set the current context.
2. Map the first buffer.
3. Create an NSOperation object whose task is to fill that buffer.
4. Queue that operation on the operation queue.
5. Perform steps 2 through 4 for the other buffers.
6. Call waitUntilAllOperationsAreFinished on the operation queue.
7. Unmap the buffers.
8. Execute rendering commands.

On a multicore system, multiple threads of execution may allow the buffers to be filled simultaneously. Steps 7 and 8 could even be performed by a separate operation queued onto the same operation queue, provided that operation sets the proper dependencies.
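The same division of work can also be expressed with Grand Central Dispatch instead of NSOperationQueue. The following is an illustrative sketch only; the fillVertexData routine is hypothetical, and the OpenGL calls are assumed to run on the thread that owns the current context.

#include <dispatch/dispatch.h>
#include <OpenGL/gl.h>

static void fillVertexData(GLvoid *ptr, size_t bufferIndex);  // hypothetical CPU-side computation

void updateBuffersConcurrently(const GLuint *buffers, GLvoid **mapped, size_t count)
{
    // Map each vertex buffer object on the thread that owns the OpenGL context.
    for (size_t i = 0; i < count; i++) {
        glBindBuffer(GL_ARRAY_BUFFER, buffers[i]);
        mapped[i] = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
    }

    // Fill the buffers concurrently; dispatch_apply returns only after every iteration finishes.
    dispatch_apply(count, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                   ^(size_t i) { fillVertexData(mapped[i], i); });

    // Unmap the buffers on the context's thread, then submit rendering commands.
    for (size_t i = 0; i < count; i++) {
        glBindBuffer(GL_ARRAY_BUFFER, buffers[i]);
        glUnmapBuffer(GL_ARRAY_BUFFER);
    }
}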
Use Multiple OpenGL Contexts

If your application has multiple scenes that can be rendered in parallel, you can use a context for each scene you need to render. Create one context for each scene and assign each context to an operation or task. Because each task has its own context, all can submit rendering commands in parallel.

The Apple-specific OpenGL APIs also provide the option for sharing data between contexts, as shown in Figure 14-2. Shared resources are automatically set up as mutual exclusion (mutex) objects. Notice that thread 2 draws to a pixel buffer that is linked to the shared state as a texture. Thread 1 can then draw using that texture.

Figure 14-2 Two contexts on separate threads

This is the most complex model for designing an application. Changes to objects in one context must be flushed so that other contexts see the changes. Similarly, when your application finishes operating on an object, it must flush those commands before exiting, to ensure that all rendering commands have been submitted to the hardware.

Guidelines for Threading OpenGL Applications

Follow these guidelines to ensure successful threading in an application that uses OpenGL:

● Use only one thread per context. OpenGL commands for a specific context are not thread safe. You should never have more than one thread accessing a single context simultaneously.
● Contexts that are on different threads can share object resources. For example, it is acceptable for one context in one thread to modify a texture, and a second context in a second thread to modify the same texture. The shared object handling provided by the Apple APIs automatically protects against thread errors, as long as your application follows the "one thread per context" guideline.
● When you use an NSOpenGLView object with OpenGL calls that are issued from a thread other than the main one, you must set up mutex locking. Mutex locking is necessary because, unless you override the default behavior, the main thread may need to communicate with the view for such things as resizing. Applications that use Objective-C with multithreading can lock contexts using the functions CGLLockContext and CGLUnlockContext. If you want to perform rendering in a thread other than the main one, you can lock the context that you want to access and safely execute OpenGL commands; a short sketch follows this list. The locking calls must be placed around all of your OpenGL calls in all threads. CGLLockContext blocks the thread it is on until all other threads have unlocked the same context using the function CGLUnlockContext. You can use CGLLockContext recursively. Context-specific CGL calls by themselves do not require locking, but you can guarantee serial processing for a group of calls by surrounding them with CGLLockContext and CGLUnlockContext. Keep in mind that calls from the OpenGL API (the API provided by the Khronos OpenGL Working Group) require locking.
● Keep track of the current context. When switching threads it is easy to switch contexts inadvertently, which causes unforeseen effects on the execution of graphics commands. You must set a current context when switching to a newly created thread.
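The locking pattern mentioned in the third guideline looks roughly like the sketch below when rendering from a secondary thread. It is illustrative only; how you obtain the CGLContextObj (for an NSOpenGLView, from its NSOpenGLContext) and what you draw are up to your application.

// ctx is the CGLContextObj that the secondary thread renders with.
CGLLockContext(ctx);

// OpenGL calls that target this context are safe between the lock and unlock.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... draw ...
glFlush();

CGLUnlockContext(ctx);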
Tuning Your OpenGL Application

After you design and implement your application, it is important that you spend some time analyzing its performance. The key to performance tuning your OpenGL application is to successively refine the design and implementation of your application. You do this by alternating between measuring your application, identifying where the bottleneck is, and removing the bottleneck.

If you are unfamiliar with general performance issues on the Macintosh platform, you will want to read Getting Started with Performance and Performance Overview. Performance Overview contains general performance tips that are useful to all applications. It also describes most of the performance tools provided with OS X. Next, take a close look at Instruments. Instruments consolidates many measurement tools into a single comprehensive performance-tuning application.

There are two tools other than OpenGL Profiler that are specific to OpenGL development—OpenGL Driver Monitor and OpenGL Shader Builder. OpenGL Driver Monitor collects real-time data from the hardware. OpenGL Shader Builder provides immediate feedback on vertex and fragment programs that you write.

For more information on these tools, see:

● OpenGL Tools for Serious Graphics Development
● Optimizing with Shark: Big Payoff, Small Effort
● Instruments User Guide
● Shark User Guide
● Real world profiling with the OpenGL Profiler
● OpenGL Driver Monitor User Guide
● OpenGL Shader Builder User Guide

The following books contain many techniques for getting the most performance from the GPU:

● GPU Gems: Programming Techniques, Tips and Tricks for Real Time Graphics, Randima Fernando. In particular, "Graphics Pipeline Performance" is a critical article for understanding how to find the bottlenecks in your OpenGL application.
● GPU Gems 2: Programming Techniques for High-Performance Graphics and General-Purpose Computation, Matt Pharr and Randima Fernando.

This chapter focuses on two main topics:

● "Gathering and Analyzing Baseline Performance Data" (page 156) shows how to use top and OpenGL Profiler to obtain and interpret baseline performance data.
● "Identifying Bottlenecks with Shark" (page 161) discusses the patterns of usage that the Shark performance tool can make apparent and that indicate places in your code that you may want to improve.

Gathering and Analyzing Baseline Performance Data

Analyzing performance is a systematic process that starts with gathering baseline data. OS X provides several applications that you can use to assess baseline performance for an OpenGL application:

● top is a command-line utility that you run in the Terminal window. You can use top to assess how much CPU time your application consumes.
● OpenGL Profiler is an application that determines how much time an application spends in OpenGL. It also provides function traces that you can use to look for redundant calls.
● OpenGL Driver Monitor lets you gather real-time data on the operation of the GPU and lets you look at information (OpenGL extensions supported, buffer modes, sample modes, and so forth) for the available renderers. For more information, see OpenGL Tools for Serious Graphics Development.

This section shows how to use top along with OpenGL Profiler to analyze where to spend your optimization efforts—in your OpenGL code, your other application code, or in both. You'll see how to gather baseline data and how to determine the relationship of OpenGL performance to overall application performance.

1. Launch your OpenGL application.
2. Open a Terminal window and place it side by side with your application window.

3. In the Terminal window, type top and press Return. You'll see output similar to that shown in Figure 15-1. The top program indicates the amount of CPU time that an application uses. The CPU time serves as a good baseline value for gauging how much tuning your code needs. Figure 15-1 shows the percentage of CPU time for the OpenGL application GLCarbon1C (highlighted). Note that this application uses 31.5% of CPU resources.

Figure 15-1 Output produced by the top application

4. Open the OpenGL Profiler application, located in /Developer/Applications/Graphics Tools/. In the window that appears, select the options to collect a trace and include backtraces, as shown in Figure 15-2.

Figure 15-2 The OpenGL Profiler window

5. Select the option "Attach to application", then select your application from the Application list. You may see small pauses or stutters in the application, particularly when OpenGL Profiler is collecting a function trace. This is normal and does not significantly affect the performance statistics. The glitches are due to the large amount of data that OpenGL Profiler is writing out.

6. Click Suspend to stop data collection.

7. Open the Statistics and Trace windows by choosing them from the Views menu. Figure 15-3 provides an example of what the Statistics window looks like. Figure 15-4 (page 160) shows a Trace window. The estimated percentage of time spent in OpenGL is shown at the bottom of Figure 15-3. Note that for this example, it is 28.91%. The higher this number, the more time the application is spending in OpenGL and the more opportunity there may be to improve application performance by optimizing OpenGL code. You can use the amount of time spent in OpenGL along with the CPU time to calculate a ratio of the application time versus OpenGL time. This ratio indicates where to spend most of your optimization efforts.

Figure 15-3 A Statistics window

8. In the Trace window, look for duplicate function calls and redundant or unnecessary state changes. Look for back-to-back function calls with the same or similar data. These are areas that can typically be optimized. Functions that are called more than necessary include glTexParameter, glPixelStore, glEnable, and glDisable. For most applications, these functions can be called once from a setup or state modification routine and called only when necessary. It's generally good practice to keep state changes out of rendering loops (which can be seen in the function trace as the same sequence of state changes and drawing over and over again) as much as possible and to use separate routines to adjust state as necessary. Look at the time value to the left of each function call to determine the cost of the call.

Figure 15-4 A Trace window

9. Determine what the performance gain would be if it were possible to reduce the time to execute all OpenGL calls to zero.
For example, take the performance data from the GLCarbon1C application used in thissection to determine the performance attributable to the OpenGL calls. Total Application Time (from top) = 31.5% Total Time in OpenGL (from OpenGL Profiler) = 28.91% At first glance, you might think that optimizing the OpenGL code could improve application performance by almost 29%, thusreducing the total application time by 29%. Thisisn't the case. Calculate the theoretical performance increase by multiplying the total CPU time by the percentage of time spent in OpenGL. The theoretical performance improvement for this example is: 31.5 X .2891 = 9.11% If OpenGL took no time at all to execute, the application would see a 9.11% increase in performance. So, if the application runs at 60 frames per second (FPS), it would perform as follows: New FPS = previous FPS * (1 +(% performance increase)) = 60 fps *(1.0911) = 65.47 fps The application gains almost 5.5 frames per second by reducing OpenGL from 28.91% to 0%. This shows that the relationship of OpenGL performance to application performance is not linear. Simply reducing the amount of time spent in OpenGL may or may not offer any noticeable benefit in application performance. Tuning Your OpenGL Application Gathering and Analyzing Baseline Performance Data 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 160Using OpenGL Driver Monitor to Measure Stalls You can use OpenGL Driver Monitor to measure how long the CPU waits for the GPU, as shown in Figure 15-5. OpenGL Driver Monitor is useful for analyzing other parameters as well. You can choose which parameters to monitor simply by clicking a parameter name from the drawer shown in the figure. Figure 15-5 The graph view in OpenGL Driver Monitor Identifying Bottlenecks with Shark Shark is an extremely useful tool for identifying places in your code that are slow and could benefit from optimization. Once you learn the basics, you can use it on your OpenGL applications to identify bottlenecks. There are three issues to watch out for in Shark when using it to analyze OpenGL performance: ● Costly data conversions. If you notice the glgProcessPixels call (in the libGLImage.dylib library) showing up in the analysis, it's an indication that the driver is not handling a texture upload optimally. The call is used when your application makes a glTexImage or glTexSubImage call using data that is in a nonnative format for the driver, which meansthe data must be converted before the driver can upload it. You can improve performance by changing your data so that it is in a native format for the driver. See “Use Optimal Data Types and Formats” (page 102). Tuning Your OpenGL Application Identifying Bottlenecks with Shark 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 161Note: If your data needs only to be swizzled, glgProcessPixels performs the swizzling reasonably fast although not as fast as if the data didn't need swizzling. But non-native data formats are converted one byte at a time and incurs a performance cost that is best to avoid. ● Time in the mach_kernel library. If you see time spent waiting for a timestamp or waiting for the driver, it indicates that your application is waiting for the GPU to finish processing. You see this during a texture upload, for example. ● Misleading symbols. You may see a symbol, such as glgGetString, that appears to be taking time but shouldn't be taking time in your application. 
Thatsometimes happens because the underlying optimizations performed by the system don't have any symbols attached to them on the driver side. Without a symbol to display, Shark shows the last symbol. You need to look for the call that your application made prior to that symbol and focus your attention there. You don't need to concern yourself with the calls that were made "underneath" your call. Tuning Your OpenGL Application Identifying Bottlenecks with Shark 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 162OpenGL functionality changes with each version of the OpenGL API. This appendix describes the functionality that was added with each version. See the official OpenGL specification for detailed information. The functionality for each version is guaranteed to be available through the OpenGL API even if a particular renderer does not support all of the extensions in a version. For example, a renderer that claims to support OpenGL 1.3 might not export the GL_ARB_texture_env_combine or GL_EXT_texture_env_combine extensions. It's important that you query both the renderer version and extension string to make sure that the renderer supports any functionality that you want to use. Note: It's possible for vendor and ARB extensions to provide similar functionality. As particular functionality becomes widely adopted, it can be moved into the core OpenGL API. As a result, functionality that you want to use could be included as an extension, as part of the core API, or both. You should read the extensions and the core OpenGL specifications carefully to see the differences. Furthermore, as an extension is promoted, the API associated with that functionality can change. For more information,see “Determining the OpenGL Capabilities Supported by the Renderer” (page 83). In the following tables, the extensions describe the feature that the core functionality is based on. The core functionality might not be the same as the extension. For example, compare the core texture crossbar functionality with the extension that it's based on. Version 1.1 Table A-1 Functionality added in OpenGL 1.1 Functionality Extension Copy texture and subtexture GL_EXT_copy_texture and GL_EXT_subtexture Logical operation GL_EXT_blend_logic_op Polygon offset GL_EXT_polygon_offset Texture image formats GL_EXT_texture 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 163 Legacy OpenGL Functionality by VersionFunctionality Extension Texture objects GL_EXT_texture_object Texture proxies GL_EXT_texture Texture replace environment GL_EXT_texture Vertex array GL_EXT_vertex_array There were a number of other minor changes outlined in Appendix C section 9 of the OpenGL specification. See http://www.opengl.org. Version 1.2 Table A-2 Functionality added in OpenGL 1.2 Functionality Extension BGRA pixel formats GL_EXT_bgra GL_SGI_color_table , GL_EXT_color_subtable, GL_EXT_convolution,GL_HP_convolution_border_modes, GL_SGI_color_matrix, GL_EXT_histogram, GL_EXT_blend_minmax, and GL_EXT_blend_subtract Imaging subset (optional) Normal rescaling GL_EXT_rescale_normal Packed pixel formats GL_EXT_packed_pixels Separate specular color GL_EXT_separate_specular_color Texture coordinate edge clamping GL_SGIS_texture_edge_clamp Texture level of detail control GL_SGIS_texture_lod Three-dimensional texturing GL_EXT_texture3D Vertex array draw element range GL_EXT_draw_range_elements Legacy OpenGL Functionality by Version Version 1.2 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 
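For instance, under a legacy (pre-Core profile) context you can read the version and extension strings directly, as the introduction to this appendix recommends. The following is a minimal sketch; the helper name is illustrative, and it assumes a valid OpenGL context is current on the calling thread.

#include <OpenGL/gl.h>
#include <OpenGL/glu.h>

// Minimal sketch: read the renderer's version string and test for the
// optional imaging subset by checking the extension string.
static GLboolean MyRendererSupportsImaging (void)
{
    const GLubyte *version    = glGetString (GL_VERSION);     // for example, "2.1 ..."
    const GLubyte *extensions = glGetString (GL_EXTENSIONS);  // space-separated names
    (void) version;  // examine the version string as needed for your feature set
    return gluCheckExtension ((const GLubyte *)"GL_ARB_imaging", extensions);
}
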
164Note: The imaging subset might not be present on all implementations; you must verify by checking for the ARB_imaging extension. OpenGL 1.2.1 introduced ARB extensions with no specific core API changes. Version 1.3 Table A-3 Functionality added in OpenGL 1.3 Functionality Extension Compressed textures GL_ARB_texture_compression Cube map textures GL_ARB_texture_cube_map Multisample GL_ARB_multisample Multitexture GL_ARB_multitexture Texture add environment mode GL_ARB_texture_env_add Texture border clamp GL_ARB_texture_border_clamp Texture combine environment mode GL_ARB_texture_env_combine Texture dot3 environment mode GL_ARB_texture_env_dot3 Transpose matrix GL_ARB_transpose_matrix Version 1.4 Table A-4 Functionality added in OpenGL 1.4 Functionality Extension Automatic mipmap generation GL_SGIS_generate_mipmap Blend function separate GL_ARB_blend_func_separate Blend squaring GL_NV_blend_square Depth textures GL_ARB_depth_texture Legacy OpenGL Functionality by Version Version 1.3 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 165Functionality Extension Fog coordinate GL_EXT_fog_coord Multiple draw arrays GL_EXT_multi_draw_arrays Point parameters GL_ARB_point_parameters Secondary color GL_EXT_secondary_color Separate blend functions GL_EXT_blend_func_separate, GL_EXT_blend_color Shadows GL_ARB_shadow Stencil wrap GL_EXT_stencil_wrap Texture crossbar environment mode GL_ARB_texture_env_crossbar Texture level of detail bias GL_EXT_texture_lod_bias Texture mirrored repeat GL_ARB_texture_mirrored_repeat Window raster position GL_ARB_window_pos Version 1.5 Table A-5 Functionality added in OpenGL 1.5 Functionality Extension Buffer objects GL_ARB_vertex_buffer_object Occlusion queries GL_ARB_occlusion_query Shadow functions GL_EXT_shadow_funcs Version 2.0 Table A-6 Functionality added in OpenGL 2.0 Functionality Extension Multiple render targets GL_ARB_draw_buffers Legacy OpenGL Functionality by Version Version 1.5 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 166Functionality Extension Non–power-of-two textures GL_ARB_texture_non_power_of_two Point sprites GL_ARB_point_sprite Separate blend equation GL_EXT_blend_equation_separate GL_ATI_separate_stencil GL_EXT_stencil_two_side Separate stencil Shading language GL_ARB_shading_language_100 Shader objects GL_ARB_shader_objects GL_ARB_fragment_shader GL_ARB_vertex_shader Shader programs Version 2.1 Table A-7 Functionality added in OpenGL 2.1 Functionality Extension Pixel buffer objects GL_ARB_pixel_buffer_object sRGB textures GL_EXT_texture_sRGB Legacy OpenGL Functionality by Version Version 2.1 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 167The OpenGL 3.0 specification deprecated many areas of functionality defined in earlier versions of the OpenGL specification. The OpenGL 3.2 Core profile explicitly removesthese deprecated features and adjusts other parts of the specification to provide a streamlined, clean programming interface to OpenGL. Use this chapter to assist you in migrating your application away from this deprecated functionality. Removed Functionality The features that were removed from OpenGL are described in in Appendix E of the OpenGL 3.2 Core specification, and you should use that as the definitive guide for the changes you need to make in your application. Here is a summary of most significant areas that changed: ● If your application uses the fixed-function pipeline, it must be rewritten to use shaders instead. 
● If your application uses shaders, you must rewrite your shaders to use OpenGL Shading Language 1.5; many built-in shader variables provided in earlier versions of the OpenGL Shading Language were explicitly removed from the OpenGL Shading Language 1.5 specification. Similarly, your application may no longer provide vertex data using the fixed-function routines; all vertex attributes are now specified as generic vertex attributes. ● Your application must explicitly generate object names using the OpenGL API. ● Vertex data must be provided to OpenGL using buffer objects. ● The built-in matrix stack functionality from earlier versions of OpenGL has been removed; you must recreate this functionality using shader inputs. ● Support for auxiliary and accumulation buffers has been removed; use framebuffer objects instead. ● Your application no longer fetches the list of extensions as a single string. Instead, you first fetch the number of extensions and then separately fetch each extension string. 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 168 Updating an Application to Support the OpenGL 3.2 Core SpecificationExtension Changes on OS X OpenGL 3.2 providesfunctionality that earlier versions ofOpenGL provided through extensions.Other extensions that were previously supported on OS X are no longer supported when your application uses the OpenGL 3.2 Core profile. Table B-1 lists extensions described elsewhere in this guide; use this table to determine whether the extension is supported, and if not, what equivalent functionality is supported. Table B-1 Extensions described in this guide Extension Status Obsolete. Use the ARB_Sync functionality provided by OpenGL 3.2 (Core). APPLE_fence ARB_vertex_buffer_object Functionality provided by OpenGL 3.2 (Core). Obsolete. Use the ARB_vertex_array_object functionality provided by OpenGL 3.2 (Core). APPLE_vertex_array_object Obsolete. Use the ARB_map_buffer_range functionality provided by OpenGL 3.2 (Core). APPLE_vertex_array_range Obsolete. Use the ARB_map_buffer_range functionality provided by OpenGL 3.2 (Core). APPLE_flush_buffer_range APPLE_client_storage Supported. APPLE_texture_range Supported. ARB_texture_rectangle Functionality provided by OpenGL 3.2 (Core). ARB_shader_objects Functionality provided by OpenGL 3.2 (Core). ARB_vertex_shader Functionality provided by OpenGL 3.2 (Core). ARB_fragment_shader Functionality provided by OpenGL 3.2 (Core). EXT_transform_feedback Functionality provided by OpenGL 3.2 (Core). EXT_gpu_shader4 Obsolete. Functionality included in GLSL 1.5 EXT_geometry_shader4 Functionality provided by OpenGL 3.2 (Core). Obsolete. Use the ARB_uniform_buffer_object functionality provided by OpenGL 3.2 (Core). EXT_bindable_uniform ARB_pixel_buffer_object Functionality provided by OpenGL 3.2 (Core). Updating an Application to Support the OpenGL 3.2 Core Specification Extension Changes on OS X 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 169Extension Status Obsolete. Use the ARB_framebuffer_object functionality provided by OpenGL 3.2 (Core). EXT_framebuffer_object APPLE_pixel_buffer Obsolete. Use framebuffer objects instead. Obsolete. Use multisampled renderbuffers to precisely control multisampling. NV_multisample_filter_hint Updating an Application to Support the OpenGL 3.2 Core Specification Extension Changes on OS X 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 
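As noted in the list of removed functionality earlier in this chapter, an application that uses the 3.2 Core profile no longer reads GL_EXTENSIONS as a single string; it first queries the number of extensions and then fetches each name by index. A minimal sketch of that check follows; the helper name is illustrative, and it assumes a 3.2 Core context is current.

#include <OpenGL/gl3.h>
#include <string.h>

// Illustrative sketch: search the indexed extension list exposed by the
// OpenGL 3.2 Core profile for a single extension name.
static GLboolean MyCoreProfileHasExtension (const char *name)
{
    GLint count = 0;
    glGetIntegerv (GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; i++) {
        const GLubyte *extension = glGetStringi (GL_EXTENSIONS, (GLuint) i);
        if (extension != NULL && strcmp (name, (const char *) extension) == 0)
            return GL_TRUE;
    }
    return GL_FALSE;
}
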
Function pointers to OpenGL routines allow you to deploy your application across multiple versions of OS X regardless of whether the entry point is supported at link time or runtime. This practice also provides support for code that needs to run cross-platform, in both OS X and Windows.

Note: If you are deploying your application only in OS X v10.4 or later, you do not need to read this chapter. Instead, consider the alternative, which is to set the gcc attribute that allows weak linking of symbols. Keep in mind, however, that weak linking may impact your application's performance. For more information, see "Frameworks and Weak Linking".

This appendix discusses the tasks needed to set up and use function pointers as entry points to OpenGL routines:
● "Obtaining a Function Pointer to an Arbitrary OpenGL Entry Point" (page 171) shows how to write a generic routine that you can reuse for any OpenGL application on the Macintosh platform.
● "Initializing Entry Points" (page 172) describes how to declare function pointer type definitions and initialize them with the appropriate OpenGL command entry points for your application.

Obtaining a Function Pointer to an Arbitrary OpenGL Entry Point

Getting a pointer to an OpenGL entry point function is fairly straightforward from Cocoa. You can use the Dynamic Loader function NSLookupAndBindSymbol to get the address of an OpenGL entry point. Keep in mind that getting a valid function pointer means that the entry point is exported by the OpenGL framework; it does not guarantee that a particular routine is supported and valid to call from within your application. You still need to check for OpenGL functionality on a per-renderer basis, as described in "Detecting Functionality" (page 83).

Listing C-1 shows how to use NSLookupAndBindSymbol from within the function MyNSGLGetProcAddress. When provided a symbol name, this application-defined function returns the appropriate function pointer from the global symbol table. A detailed explanation for each numbered line of code appears following the listing.

Listing C-1 Using NSLookupAndBindSymbol to obtain a symbol for a symbol name

#import <mach-o/dyld.h>
#import <stdlib.h>
#import <string.h>

void * MyNSGLGetProcAddress (const char *name)
{
    NSSymbol symbol;
    char *symbolName;
    symbolName = malloc (strlen (name) + 2);             // 1
    strcpy (symbolName + 1, name);                       // 2
    symbolName[0] = '_';                                 // 3
    symbol = NULL;
    if (NSIsSymbolNameDefined (symbolName))              // 4
        symbol = NSLookupAndBindSymbol (symbolName);
    free (symbolName);                                   // 5
    return symbol ? NSAddressOfSymbol (symbol) : NULL;   // 6
}

Here's what the code does:
1. Allocates storage for the symbol name plus an underscore character ('_') and the terminating null character. The underscore character is part of the UNIX C symbol-mangling convention, so make sure that you provide storage for it.
2. Copies the symbol name into the string variable, starting at the second character, to leave room for prefixing the underscore character.
3. Copies the underscore character into the first character of the symbol name string.
4. Checks to make sure that the symbol name is defined, and if it is, looks up the symbol.
5. Frees the symbol name string because it is no longer needed.
6. Returns the appropriate pointer if successful, or NULL if not successful. Before using this pointer, you should make sure that it is valid.
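For example, a caller might guard against a NULL result before calling through the pointer. The brief sketch below is illustrative (the entry point chosen is arbitrary); Listing C-2 in the next section shows the fuller pattern.

typedef void (*glBlendEquationProcPtr)(GLenum mode);

glBlendEquationProcPtr pfglBlendEquation =
    (glBlendEquationProcPtr) MyNSGLGetProcAddress ("glBlendEquation");
if (pfglBlendEquation != NULL) {
    pfglBlendEquation (GL_FUNC_ADD);  // safe to call through the pointer
} else {
    // The symbol is not exported by the OpenGL framework; take another path.
}
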
Initializing Entry Points

Listing C-2 shows how to use the MyNSGLGetProcAddress function from Listing C-1 (page 172) to obtain a few OpenGL entry points. A detailed explanation for each numbered line of code appears following the listing.

Listing C-2 Using MyNSGLGetProcAddress to obtain an OpenGL entry point

#import "MyNSGLGetProcAddress.h" // 1

static void InitEntryPoints (void);
static void DeallocEntryPoints (void);

// Function pointer type definitions
typedef void (*glBlendColorProcPtr)(GLclampf red, GLclampf green,
                                    GLclampf blue, GLclampf alpha);
typedef void (*glBlendEquationProcPtr)(GLenum mode);
typedef void (*glDrawRangeElementsProcPtr)(GLenum mode, GLuint start,
                                    GLuint end, GLsizei count,
                                    GLenum type, const GLvoid *indices);

glBlendColorProcPtr pfglBlendColor = NULL; // 2
glBlendEquationProcPtr pfglBlendEquation = NULL;
glDrawRangeElementsProcPtr pfglDrawRangeElements = NULL;

static void InitEntryPoints (void) // 3
{
    pfglBlendColor = (glBlendColorProcPtr) MyNSGLGetProcAddress ("glBlendColor");
    pfglBlendEquation = (glBlendEquationProcPtr) MyNSGLGetProcAddress ("glBlendEquation");
    pfglDrawRangeElements = (glDrawRangeElementsProcPtr) MyNSGLGetProcAddress ("glDrawRangeElements");
}

// -------------------------

static void DeallocEntryPoints (void) // 4
{
    pfglBlendColor = NULL;
    pfglBlendEquation = NULL;
    pfglDrawRangeElements = NULL;
}

Here's what the code does:
1. Imports the header file that contains the MyNSGLGetProcAddress function from Listing C-1 (page 172).
2. Declares function pointers for the functions of interest. Note that each function pointer uses the prefix pf to distinguish it from the function it points to. Although using this prefix is not a requirement, it's best to avoid using the exact function names.
3. Initializes the entry points. This function repeatedly calls the MyNSGLGetProcAddress function to obtain function pointers for each of the functions of interest: glBlendColor, glBlendEquation, and glDrawRangeElements.
4. Sets each of the function pointers to NULL when they are no longer needed.

Document Revision History

This table describes the changes to OpenGL Programming Guide for Mac.

2012-07-23: Updated with information on supporting high-resolution displays.
2011-06-06: Added new context options.
2010-11-15: Fixed a few small errors in the texture chapter. Updated the recommendations on when to use each texture uploading and downloading technique. Updated the code for creating a texture from a view's contents to use newer, better supported techniques.
2010-06-14: Corrected texture creation code snippets.
2010-03-24: Minor updates and clarifications. Substantial revisions to describe behaviors for OpenGL on OS X v10.5 and OS X v10.6. Removed information on obsolete and deprecated behaviors.
2010-02-24: Corrected errors in code listings. Pixel format attribute lists should be terminated with 0, not NULL. One call to glTexImage2D had an incorrect number of parameters.
2009-08-28: Updated the Cocoa OpenGL tutorial and made numerous other minor changes.
2008-06-09: Fixed compilation errors in Listing 8-1 (page 84). Added "Getting Decompressed Raw Pixel Data from a Source Image" (page 135).
Updated links to OpenGL extensions. 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 175 Document Revision HistoryDate Notes Made several minor edits. 2007-12-04 Corrected minor typographical and technical errors. Added “Ensuring That Back Buffer Contents Remain the Same” (page 66). Revised “Deprecated Attributes” (page 70). 2007-08-07 Fixed several technical issues. 2007-05-29 Fixed a broken link. 2007-05-17 Fixed a few technical inaccuracies in the code listings. Changed attribs to attributes in Listing 6-2 (page 68). Fixed drawRect method implementation in “Drawing to a Window or View” (page 35). 2006-12-20 Fixed minor errors. Added information concerning the Apple client storage extension. Fixed a typographical error. 2006-11-07 Added information about performance issues and processor queries. See “Determining Whether Vertex and Fragment Processing Happens on the GPU” (page 78). 2006-10-03 Added a section on checking for GPU processing. Added “Determining Whether Vertex and Fragment Processing Happens on the GPU” (page 78). Fixed a number of minor typos in the code and in the text. 2006-09-05 Fixed minor technical problems. 2006-07-24 Made minor technical and typograhical changes throughout. Added information to “Surface Drawing Order Specifies the Position of the OpenGL Surface Relative to the Window” (page 77). Document Revision History 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 176Date Notes Changed glCopyTexSubImage to glCopyTexSubImage2D in “Downloading Texture Data” (page 136). Made minor improvements to Listing 11-6 (page 136). Removed information about 1-D textures. 2006-06-28 Made several minor technical corrections. Redirected links to the OpenGL specification for the framebuffer object extension so that they point to the SGI Open Source website, which hosts the most up-to-date version of this specification. Removed the logic operation blending entry from Table A-6 (page 166) because this functionality is not available in OpenGL 2.0. 2006-05-23 First version. This document replaces Macintosh OpenGL Programming Guide and AGL Programming Guide . This document incorporates information from the following Technical Notes: TN2007 “The CGDirectDisplay API” TN2014 “Insights on OpenGL” TN2080 “Understanding and Detecting OpenGL Functionality” TN2093 “OpenGL Performance Optimization: The Basics” This document incorporates information from the following Technical Q&As: Technical Q&A OGL01 “aglChoosePixelFormat, The Inside Scoop” Technical Q&A OGL02 “Correct Setup of an AGLDrawable” Technical Q&A QA1158 “glFlush() vs. glFinish()” Technical Q&A QA1167 “Using Interface Builder's NSOpenGLView or Custom View objects for an OpenGL application” Technical Q&A QA1188 “GetProcAdress and OpenGL Entry Points” Document Revision History 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 177Date Notes Technical Q&A QA1209 “Updating OpenGL Contexts” Technical Q&A QA1248 “Context Sharing Tips” Technical Q&A QA1268 “Sharpening Full Scene Anti-Aliasing Details” Technical Q&A QA1269 “OS X OpenGL Interfaces” Technical Q&A QA1325 “Creating an OpenGL texture from an NSView” Document Revision History 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 178This glossary containstermsthat are used specifically for the Apple implementation of OpenGL and a few terms that are common in graphics programming. 
For definitions of additional OpenGL terms, see OpenGL Programming Guide, by the Khronos OpenGL Working Group aliased Said of graphics whose edges appear jagged; can be remedied by performing antialiasing operations. antialiasing In graphics, a technique used to smooth and soften the jagged (or aliased) edges that are sometimes apparent when graphical objects such as text, line art, and images are drawn. ARB The Khronos OpenGL Working Group, which is the group that oversees the OpenGL specification and extensions to it. attach To establish a connection between two existing objects. Compare bind. bind To create a new object and then establish a connection between that object and a rendering context. Compare attach. bitmap A rectangular array of bits. bitplane A rectangular array of pixels. buffer A block of memory dedicated to storing a specific kind of data, such as depth values, green color values, stencil index values, and color index values. CGL (Core OpenGL) framework The Apple framework for using OpenGL graphics in OS X applications that need low-level access to OpenGL. clipping An operation that identifies the area of drawing. Anything not in the clipping region is not drawn. clip coordinates The coordinate system used for view-volume clipping. Clip coordinates are applied after applying the projection matrix and prior to perspective division. color lookup table A table of values used to map color indexes into actual color values. completeness A state that indicates whether a framebuffer object meets all the requirements for drawing. context A set of OpenGL state variables that affect how drawing is performed for a drawable object attached to that context. Also called a rendering context. culling Eliminating parts of a scene that can't be seen by the observer. current context The rendering context to which OpenGL routes commands issued by your application. current matrix A matrix used by OpenGL to transform coordinates in one system to those of another system, such as the modelview matrix, the perspective matrix, and the texture matrix. GL shading language allows user-defined matrices. 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 179 Glossarydepth In OpenGL, refers to the z coordinate and specifies how far a pixel lies from the observer. depth buffer A block of memory used to store a depth value for each pixel. The depth buffer is used to determine whether or not a pixel can be seen by the observer. Those that are hidden are typically removed. display list A list of OpenGL commands that have an associated name and that are uploaded to the GPU, preprocessed, and then executed at a later time. Display lists are often used for computing-intensive commands. double buffering The practice of using a front and back color buffer to achieve smooth animation. The back buffer is not displayed, but swapped with the front buffer. drawable object In OS X, an object allocated outside of OpenGL that can serve as an OpenGL framebuffer. A drawable object can be any of the following: a window, a view, a pixel buffer, offscreen memory, or a full-screen graphics device. See also framebuffer object extension A feature of OpenGL that's not part of the OpenGL core API and therefore not guaranteed to be supported by every implementation of OpenGL. The naming conventions used for extensions indicate how widely accepted the extension is. The name of an extension supported only by a specific company includes an abbreviation of the company name. 
If more then one company adoptsthe extension,the extension name is changed to include EXT instead of a company abbreviation. If the Khronos OpenGL Working Group approves an extension, the extension name changes to include ARB instead of EXT or a company abbreviation. eye coordinates The coordinate system with the observer at the origin. Eye coordinates are produced by the modelview matrix and passed to the projection matrix. fence A token used by the GL_APPLE_fence extension to determine whether a given command has completed or not. filtering A process that modifies an image by combining pixels or texels. fog An effect achieved by fading colors to a background color based on the distance from the observer. Fog provides depth cues to the observer. fragment The color and depth values for a single pixel; can also include texture coordinate values. A fragment is the result of rasterizing primitives. framebuffer The collection of buffers associated with a window or a rendering context. framebuffer attachable image The rendering destination for a framebuffer object. framebuffer object An OpenGL extension that allows rendering to a destination other than the usual OpenGL buffers or destinations provided by the windowing system. A framebuffer object (FBO) contains state information for the OpenGL framebuffer and its set of images. A framebuffer object is similar to a drawable object, except that a drawable object is a window-system specific object whereas a framebuffer object is a window-agnostic object. The context that's bound to a framebuffer object can be bound to a window-system-provided drawable object for the purpose of displaying the content associated with the framebuffer object. frustum The region of space that is seen by the observer and that is warped by perspective division. Glossary 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 180FSAA (full scene antialiasing) A technique that takes multiple samples at a pixel and combinesthem with coverage values to arrive at a final fragment. gamma correction A function that changes color intensity valuesto correct for the nonlinear response of the eye or of a display. GLU Graphics library utilities. GL Graphics library. GLUT Graphics Library Utilities Toolkit, which is independent of the window system. In OS X, GLUT is implemented on top of Cocoa. GLX An OpenGL extension that supports using OpenGL within a window provided by the X Window system. image A rectangular array of pixels. immediatemode The practice ofOpenGL executing commands at the time an application issues them. To prevent commands from being issued immediately, an application can use a display list. interleaved data Arrays of dissimilar data that are grouped together, such as vertex data and texture coordinates. Interleaving can speed data retrieval. mipmaps A set of texture maps, provided at various resolutions, whose purpose is to minimize artifacts that can occur when a texture is applied to a geometric primitive whose onscreen resolution doesn't match the source texture map. Mipmapping derivesfromthe latin phrasemultumin parvo , which means "many things in a small place." modelview matrix A 4 X 4 matrix used by OpenGL to transforms points, lines, polygons, and positions from object coordinates to eye coordinates. mutex A mutual exclusion object in a multithreaded application. NURBS (nonuniform rational basis spline) A methodology use to specify parametric curves and surfaces. 
packing Converting pixel color components from a buffer into the format needed by an application. pbuffer See pixel buffer. pixel A picture element; the smallest element that the graphics hardware can display on the screen. A pixel is made up of all the bits at the location x , y , in all the bitplanes in the framebuffer. pixel buffer A type of drawable object that allows the use of offscreen buffers as sources for OpenGL texturing. Pixel buffers allow hardware-accelerated rendering to a texture. pixel depth The number of bits per pixel in a pixel image. pixel format A format used to store pixel data in memory. The format describesthe pixel components (that is, red, blue, green, alpha), the number and order of components, and other relevant information,such as whether a pixel containsstencil and depth values. primitives The simplest elements in OpenGL—points, lines, polygons, bitmaps, and images. projection matrix A matrix that OpenGL uses to transform points, lines, polygons, and positionsfrom eye coordinates to clip coordinates. rasterization The process of converting vertex and pixel data to fragments, each of which corresponds to a pixel in the framebuffer. renderbuffer A rendering destination for a 2D pixel image, used for generalized offscreen rendering, as defined in the OpenGL specification for the GL_EXT_framebuffer_object extension. Glossary 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 181renderer A combination of hardware and software that OpenGL uses to create an image from a view and a model. The hardware portion of a renderer is associated with a particular display device and supports specific capabilities, such as the ability to support a certain color depth or buffering mode. A renderer that uses only software is called a software renderer and is typically used as a fallback. rendering context A container forstate information. rendering pipeline The order of operations used by OpenGL to transform pixel and vertex data to an image in the framebuffer. render-to-texture An operation that draws content directly to a texture target. RGBA Red, green, blue, and alpha color components. shader A programthat computessurface properties. shading language A high-level language, accessible in C, used to produce advanced imaging effects. stencil buffer Memory used specifically for stencil testing. A stencil test is typically used to identify masking regions, to identify solid geometry that needs to be capped, and to overlap translucent polygons. surface The internal representation of a single buffer that OpenGL actually drawsto and readsfrom. For windowed drawable objects, thissurface is what the OS X window server uses to composite OpenGL content on the desktop. tearing A visual anomaly caused when part of the current frame overwrites previous frame data in the framebuffer before the current frame is fully rendered on the screen. tessellation An operation that reduces a surface to a mesh of polygons, or a curve to a sequence of lines. texel A texture element used to specify the color to apply to a fragment. texture Image data used to modify the color of rasterized fragments; can be one-, two-, or threedimensional or be a cube map. texture mapping The process of applying a texture to a primitive. texture matrix A 4 x 4 matrix that OpenGL uses to transform texture coordinates to the coordinates that are used for interpolation and texture lookup. texture object An opaque data structure used to store all data related to a texture. 
A texture object can include such things as an image, a mipmap, and texture parameters (width, height, internal format, resolution, wrapping modes, and so forth). vertex A three-dimensional point. A set of vertices specify the geometry of a shape. Vertices can have a number of additional attributes such as color and texture coordinates. See vertex array. vertex array A data structure that stores a block of data thatspecifiessuch things as vertex coordinates, texture coordinates, surface normals, RGBA colors, color indices, and edge flags. virtual screen A combination of hardware, renderer, and pixel format that OpenGL selects as suitable for an imaging task. When the current virtual screen changes, the current renderer typically changes. Glossary 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 182Apple Inc. © 2004, 2012 Apple Inc. All rights reserved. No part of this publication may be reproduced, stored in a retrievalsystem, or transmitted, in any form or by any means, mechanical, electronic, photocopying, recording, or otherwise, without prior written permission of Apple Inc., with the following exceptions: Any person is hereby authorized to store documentation on a single computer for personal use only and to print copies of documentation for personal use provided that the documentation contains Apple’s copyright notice. No licenses, express or implied, are granted with respect to any of the technology described in this document. Apple retains all intellectual property rights associated with the technology described in this document. This document is intended to assist application developers to develop applications only for Apple-labeled computers. Apple Inc. 1 Infinite Loop Cupertino, CA 95014 408-996-1010 Apple, the Apple logo, Carbon, Cocoa, iChat, Instruments, iPhoto, Logic, Mac, Macintosh, Objective-C, OS X, Pages, Quartz, and Xcode are trademarks of Apple Inc., registered in the U.S. and other countries. OpenCL is a trademark of Apple Inc. DEC is a trademark of Digital Equipment Corporation. OpenGL is a registered trademark of Silicon Graphics, Inc. UNIX is a registered trademark of The Open Group. X Window System is a trademark of the Massachusetts Institute of Technology. iOS is a trademark or registered trademark of Cisco in the U.S. and other countries and is used under license. Even though Apple has reviewed this document, APPLE MAKES NO WARRANTY OR REPRESENTATION, EITHER EXPRESS OR IMPLIED, WITH RESPECT TO THIS DOCUMENT, ITS QUALITY, ACCURACY, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.ASARESULT, THISDOCUMENT IS PROVIDED “AS IS,” AND YOU, THE READER, ARE ASSUMING THE ENTIRE RISK AS TO ITS QUALITY AND ACCURACY. IN NO EVENT WILL APPLE BE LIABLE FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL,OR CONSEQUENTIAL DAMAGES RESULTING FROM ANY DEFECT OR INACCURACY IN THIS DOCUMENT, even if advised of the possibility of such damages. THE WARRANTY AND REMEDIES SET FORTH ABOVE ARE EXCLUSIVE AND IN LIEU OF ALL OTHERS, ORAL OR WRITTEN, EXPRESS OR IMPLIED. No Apple dealer, agent, or employee is authorized to make any modification, extension, or addition to this warranty. Some states do not allow the exclusion or limitation of implied warranties or liability for incidental or consequential damages, so the above limitation or exclusion may not apply to you. This warranty gives you specific legal rights, and you may also have other rights which vary from state to state. 
View Programming Guide for iOSContents About Windows and Views 7 At a Glance 7 Views Manage Your Application’s Visual Content 7 Windows Coordinate the Display of Your Views 8 Animations Provide the User with Visible Feedback for Interface Changes 8 The Role of Interface Builder 8 See Also 9 View and Window Architecture 10 View Architecture Fundamentals 10 View Hierarchies and Subview Management 11 The View Drawing Cycle 12 Content Modes 13 Stretchable Views 15 Built-In Animation Support 16 View Geometry and Coordinate Systems 17 The Relationship of the Frame, Bounds, and Center Properties 18 Coordinate System Transformations 20 Points Versus Pixels 21 The Runtime Interaction Model for Views 23 Tips for Using Views Effectively 25 Views Do Not Always Have a Corresponding View Controller 25 Minimize Custom Drawing 26 Take Advantage of Content Modes 26 Declare Views as Opaque Whenever Possible 26 Adjust Your View’s Drawing Behavior When Scrolling 26 Do Not Customize Controls by Embedding Subviews 27 Windows 28 Tasks That Involve Windows 28 Creating and Configuring a Window 29 Creating Windows in Interface Builder 29 Creating a Window Programmatically 30 Adding Content to Your Window 30 2011-03-08 | © 2011 Apple Inc. All Rights Reserved. 2Changing the Window Level 31 Monitoring Window Changes 31 Displaying Content on an External Display 32 Handling Screen Connection and Disconnection Notifications 33 Configuring a Window for an External Display 35 Configuring the Screen Mode of an External Display 37 Views 38 Creating and Configuring View Objects 38 Creating View Objects Using Interface Builder 39 Creating View Objects Programmatically 39 Setting the Properties of a View 40 Tagging Views for Future Identification 42 Creating and Managing a View Hierarchy 42 Adding and Removing Subviews 43 Hiding Views 46 Locating Views in a View Hierarchy 47 Translating, Scaling, and Rotating Views 47 Converting Coordinates in the View Hierarchy 50 Adjusting the Size and Position of Views at Runtime 51 Being Prepared for Layout Changes 51 Handling Layout Changes Automatically Using Autoresizing Rules 52 Tweaking the Layout of Your Views Manually 54 Modifying Views at Runtime 54 Interacting with Core Animation Layers 56 Changing the Layer Class Associated with a View 56 Embedding Layer Objects in a View 57 Defining a Custom View 58 Checklist for Implementing a Custom View 58 Initializing Your Custom View 59 Implementing Your Drawing Code 60 Responding to Events 62 Cleaning Up After Your View 63 Animations 64 What Can Be Animated? 64 Animating Property Changes in a View 66 Starting Animations Using the Block-Based Methods 66 Starting Animations Using the Begin/Commit Methods 68 Nesting Animation Blocks 72 2011-03-08 | © 2011 Apple Inc. All Rights Reserved. 3 ContentsImplementing Animations That Reverse Themselves 73 Creating Animated Transitions Between Views 73 Changing the Subviews of a View 74 Replacing a View with a Different View 76 Linking Multiple Animations Together 77 Animating View and Layer Changes Together 77 Document Revision History 80 2011-03-08 | © 2011 Apple Inc. All Rights Reserved. 
4 ContentsFigures, Tables, and Listings View and Window Architecture 10 Figure 1-1 Architecture of the views in a sample application 11 Figure 1-2 Content mode comparisons 14 Figure 1-3 Stretching the background of a button 15 Figure 1-4 Coordinate system orientation in UIKit 17 Figure 1-5 Relationship between a view's frame and bounds 19 Figure 1-6 Rotating a view and its content 21 Figure 1-7 UIKit interactions with your view objects 23 Table 1-1 Screen dimensions for iOS-based devices 22 Windows 28 Listing 2-1 Registering for screen connect and disconnect notifications 33 Listing 2-2 Handling connect and disconnect notifications 34 Listing 2-3 Configuring a window for an external display 35 Views 38 Figure 3-1 Layered views in the Clock application 43 Figure 3-2 Rotating a view 45 degrees 49 Figure 3-3 Converting values in a rotated view 51 Figure 3-4 View autoresizing mask constants 53 Table 3-1 Usage of some key view properties 40 Table 3-2 Autoresizing mask constants 52 Listing 3-1 Adding a view to a window 44 Listing 3-2 Adding views to an existing view hierarchy 45 Listing 3-3 Adding a custom layer to a view 57 Listing 3-4 Initializing a view subclass 59 Listing 3-5 A drawing method 61 Listing 3-6 Implementing the dealloc method 63 Animations 64 Table 4-1 Animatable UIView properties 64 Table 4-2 Methods for configuring animation blocks 69 Listing 4-1 Performing a simple block-based animation 66 Listing 4-2 Creating an animation block with custom options 67 2011-03-08 | © 2011 Apple Inc. All Rights Reserved. 5Listing 4-3 Performing a simple begin/commit animation 69 Listing 4-4 Configuring animation parameters using the begin/commit methods 70 Listing 4-5 Nesting animations that have different configurations 72 Listing 4-6 Swapping an empty text view for an existing one 74 Listing 4-7 Changing subviews using the begin/commit methods 75 Listing 4-8 Toggling between two views in a view controller 76 Listing 4-9 Mixing view and layer animations 78 2011-03-08 | © 2011 Apple Inc. All Rights Reserved. 6 Figures, Tables, and ListingsIn iOS, you use windows and views to present your application’s content on the screen. Windows do not have any visible content themselves but provide a basic container for your application’s views. Views define a portion of a window that you want to fill with some content. For example, you might have views that display images, text, shapes, or some combination thereof. You can also use views to organize and manage other views. At a Glance Every application has at least one window and one view for presenting its content. UIKit and other system frameworks provide predefined viewsthat you can use to present your content. These viewsrange from simple buttons and text labels to more complex views such as table views, picker views, and scroll views. In places where the predefined views do not provide what you need, you can also define custom views and manage the drawing and event handling yourself. Views Manage Your Application’s Visual Content A view is an instance of the UIView class (or one of its subclasses) and manages a rectangular area in your application window. Views are responsible for drawing content, handling multitouch events, and managing the layout of any subviews. Drawing involves using graphics technologies such as Core Graphics, OpenGL ES, or UIKit to draw shapes, images, and text inside a view’s rectangular area. A view responds to touch events in its rectangular area either by using gesture recognizers or by handling touch events directly. 
In the view hierarchy, parent views are responsible for positioning and sizing their child views and can do so dynamically. This ability to modify child views dynamically lets your views adjust to changing conditions, such as interface rotations and animations. You can think of views as building blocks that you use to construct your user interface. Rather than use one view to present all of your content, you often use several views to build a view hierarchy. Each view in the hierarchy presents a particular portion of your user interface and is generally optimized for a specific type of content. For example, UIKit has views specifically for presenting images, text and other types of content. 2011-03-08 | © 2011 Apple Inc. All Rights Reserved. 7 About Windows and ViewsRelevant Chapters: “View and Window Architecture” (page 10) “Views” (page 38) Windows Coordinate the Display of Your Views A window is an instance of the UIWindow class and handles the overall presentation of your application’s user interface. Windows work with views (and their owning view controllers) to manage interactions with, and changes to, the visible view hierarchy. For the most part, your application’s window never changes. After the window is created, it stays the same and only the views displayed by it change. Every application has at least one window that displays the application’s user interface on a device’s main screen. If an external display is connected to the device, applications can create a second window to present content on that screen as well. Relevant Chapters: “Windows” (page 28) Animations Provide the User with Visible Feedback for Interface Changes Animations provide users with visible feedback about changes to your view hierarchy. The system defines standard animationsfor presenting modal views and transitioning between different groups of views. However, many attributes of a view can also be animated directly. For example, through animation you can change the transparency of a view, its position on the screen, its size, its background color, or other attributes. And if you work directly with the view’s underlying Core Animation layer object, you can perform many other animations as well. Relevant Chapters: “Animations” (page 64) The Role of Interface Builder Interface Builder is an application that you use to graphically construct and configure your application’s windows and views. Using Interface Builder, you assemble your views and place them in a nib file, which is a resource file that stores a freeze-dried version of your views and other objects. When you load a nib file at runtime, the objects inside it are reconstituted into actual objects that your code can then manipulate programmatically. Interface Builder greatly simplifiesthe work you have to do in creating your application’s user interface. Because support for Interface Builder and nib files is incorporated throughout iOS, little effort is required to incorporate nib files into your application’s design. About Windows and Views The Role of Interface Builder 2011-03-08 | © 2011 Apple Inc. All Rights Reserved. 8For more information about how to use Interface Builder, see Interface Builder User Guide . For information about how view controllers manage the nib files containing their views, see “Custom View Controllers” in View Controller Programming Guide for iOS . See Also Because views are very sophisticated and flexible objects, it would be impossible to cover all of their behaviors in one document. 
However, other documents are available to help you learn about other aspects of managing views and your user interface as a whole. ● View controllers are an important part of managing your application’s views. A view controller presides over all of the viewsin a single view hierarchy and facilitatesthe presentation of those views on the screen. For more information about view controllers and the role they play, see View Controller Programming Guide for iOS . ● Views are the key recipients of gesture and touch events in your application. For more information about using gesture recognizers and handling touch events directly, see Event Handling Guide for iOS . ● Custom views must use the available drawing technologies to render their content. For information about using these technologies to draw within your views, see Drawing and Printing Guide for iOS . ● In places where the standard view animations are notsufficient, you can use Core Animation. For information about implementing animations using Core Animation, see Core Animation Programming Guide . About Windows and Views See Also 2011-03-08 | © 2011 Apple Inc. All Rights Reserved. 9Views and windows present your application’s user interface and handle the interactions with that interface. UIKit and other system frameworks provide a number of views that you can use as-is with little or no modification. You can also define custom views for places where you need to present content differently than the standard views allow. Whether you use the system views or create your own custom views, you need to understand the infrastructure provided by the UIView and UIWindow classes. These classes provide sophisticated facilities for managing the layout and presentation of views. Understanding how those facilities work is important for making sure your views behave appropriately when changes occur in your application. View Architecture Fundamentals Most of the things you might want to do visually are done with view objects—instances of the UIView class. A view object defines a rectangular region on the screen and handles the drawing and touch events in that region. A view can also act as a parent for other views and coordinate the placement and sizing of those views. The UIView class does most of the work in managing these relationships between views, but you can also customize the default behavior as needed. Views work in conjunction with Core Animation layers to handle the rendering and animating of a view’s content. Every view in UIKit is backed by a layer object (usually an instance of the CALayer class), which manages the backing store for the view and handles view-related animations. Most operations you perform should be through the UIView interface. However, in situations where you need more control over the rendering or animation behavior of your view, you can perform operations through its layer instead. To understand the relationship between views and layers, it helps to look at an example. Figure 1-1 shows the view architecture from the ViewTransitions sample application along with the relationship to the underlying Core Animation layers. The views in the application include a window (which is also a view), a generic UIView object that acts as a container view, an image view, a toolbar for displaying controls, and a bar button item (which is not a view itself but which manages a view internally). (The actual ViewTransitions sample application includes an additional image view that is used to implement transitions. 
For simplicity, and because that view is usually hidden, it is not included in Figure 1-1.) Every view has a corresponding layer object that can be 2011-03-08 | © 2011 Apple Inc. All Rights Reserved. 10 View and Window Architectureaccessed from that view’s layer property. (Because a bar button item is not a view, you cannot access its layer directly.) Behind those layer objects are Core Animation rendering objects and ultimately the hardware buffers used to manage the actual bits on the screen. Figure 1-1 Architecture of the views in a sample application UIKit views Core Animation layers UIImageView UIView UIWindow UIToolbar UIBarButtonItem (internal view) The use of Core Animation layer objects has important implications for performance. The actual drawing code of a view object is called as little as possible, and when the code is called, the results are cached by Core Animation and reused as much as possible later. Reusing already-rendered content eliminates the expensive drawing cycle usually needed to update views. Reuse of this content is especially important during animations, where the existing content can be manipulated. Such reuse is much less expensive than creating new content. View Hierarchies and Subview Management In addition to providing its own content, a view can act as a container for other views. When one view contains another, a parent-child relationship is created between the two views. The child view in the relationship is known asthe subview and the parent view is known asthe superview. The creation of thistype of relationship has implications for both the visual appearance of your application and the application’s behavior. Visually, the content of a subview obscures all or part of the content of its parent view. If the subview is totally opaque, then the area occupied by the subview completely obscures the corresponding area of the parent. If the subview is partially transparent, the content from the two viewsis blended together prior to being displayed View and Window Architecture View Architecture Fundamentals 2011-03-08 | © 2011 Apple Inc. All Rights Reserved. 11on the screen. Each superview stores its subviews in an ordered array and the order in that array also affects the visibility of each subview. If two sibling subviews overlap each other, the one that was added last (or was moved to the end of the subview array) appears on top of the other. The superview-subview relationship also impacts several view behaviors. Changing the size of a parent view has a ripple effect that can cause the size and position of any subviews to change too. When you change the size of a parent view, you can control the resizing behavior of each subview by configuring the view appropriately. Other changes that affect subviews include hiding a superview, changing a superview’s alpha (transparency), or applying a mathematical transform to a superview’s coordinate system. The arrangement of views in a view hierarchy also determines how your application responds to events. When a touch occurs inside a specific view, the system sends an event object with the touch information directly to that view for handling. However, if the view does not handle a particular touch event, it can pass the event object along to its superview. If the superview does not handle the event, it passes the event object to its superview, and so on up the responder chain. Specific views can also pass the event object to an intervening responder object, such as a view controller. 
If no object handles the event, it eventually reaches the application object, which generally discards it. For more information about how to create view hierarchies,see “Creating and Managing a View Hierarchy” (page 42). The View Drawing Cycle The UIView class uses an on-demand drawing model for presenting content. When a view first appears on the screen, the system asks it to draw its content. The system captures a snapshot of this content and uses that snapshot as the view’s visual representation. If you never change the view’s content, the view’s drawing code may never be called again. The snapshot image is reused for most operations involving the view. If you do change the content, you notify the system that the view has changed. The view then repeats the process of drawing the view and capturing a snapshot of the new results. When the contents of your view change, you do not redraw those changes directly. Instead, you invalidate the view using either the setNeedsDisplay or setNeedsDisplayInRect: method. These methods tell the system that the contents of the view changed and need to be redrawn at the next opportunity. The system waits until the end of the current run loop before initiating any drawing operations. This delay gives you a chance to invalidate multiple views, add or remove views from your hierarchy, hide views, resize views, and reposition views all at once. All of the changes you make are then reflected at the same time. View and Window Architecture View Architecture Fundamentals 2011-03-08 | © 2011 Apple Inc. All Rights Reserved. 12Note: Changing a view’s geometry does not automatically cause the system to redraw the view’s content. The view’s contentMode property determines how changes to the view’s geometry are interpreted. Most content modes stretch or reposition the existing snapshot within the view’s boundaries and do not create a new one. For more information about how content modes affect the drawing cycle of your view, see “Content Modes” (page 13). When the time comes to render your view’s content, the actual drawing process varies depending on the view and its configuration. System views typically implement private drawing methods to render their content. Those same system views often expose interfaces that you can use to configure the view’s actual appearance. For custom UIView subclasses, you typically override the drawRect: method of your view and use that method to draw your view’s content. There are also other ways to provide a view’s content, such as setting the contents of the underlying layer directly, but overriding the drawRect: method is the most common technique. For more information about how to draw content for custom views, see “Implementing Your Drawing Code” (page 60). Content Modes Each view has a content mode that controls how the view recycles its content in response to changes in the view’s geometry and whether it recycles its content at all. When a view is first displayed, it renders its content as usual and the results are captured in an underlying bitmap. After that, changes to the view’s geometry do not always cause the bitmap to be recreated. Instead, the value in the contentMode property determines whether the bitmap should be scaled to fit the new bounds or simply pinned to one corner or edge of the view. The content mode of a view is applied whenever you do the following: ● Change the width or height of the view’s frame or bounds rectangles. ● Assign a transform that includes a scaling factor to the view’s transform property. 
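For example, an image view can be told to scale its existing snapshot proportionally when its bounds change, while a custom view that must repaint can opt in to redrawing instead. The view variables in this brief sketch are illustrative:

// Reuse the rendered snapshot, scaled proportionally, when the bounds change.
imageView.contentMode = UIViewContentModeScaleAspectFit;

// Force drawRect: to be called in response to geometry changes.
customView.contentMode = UIViewContentModeRedraw;
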
Content Modes

Each view has a content mode that controls how the view recycles its content in response to changes in the view’s geometry and whether it recycles its content at all. When a view is first displayed, it renders its content as usual and the results are captured in an underlying bitmap. After that, changes to the view’s geometry do not always cause the bitmap to be recreated. Instead, the value in the contentMode property determines whether the bitmap should be scaled to fit the new bounds or simply pinned to one corner or edge of the view. The content mode of a view is applied whenever you do the following:
● Change the width or height of the view’s frame or bounds rectangles.
● Assign a transform that includes a scaling factor to the view’s transform property.

By default, the contentMode property for most views is set to UIViewContentModeScaleToFill, which causes the view’s contents to be scaled to fit the new frame size. Figure 1-2 shows the results that occur for some content modes that are available. As you can see from the figure, not all content modes result in the view’s bounds being filled entirely, and those that do might distort the view’s content.

Figure 1-2 Content mode comparisons (UIViewContentModeScaleToFill is distorting; UIViewContentModeScaleAspectFit, UIViewContentModeScaleAspectFill, and UIViewContentModeLeft are nondistorting)

Content modes are good for recycling the contents of your view, but you can also set the content mode to the UIViewContentModeRedraw value when you specifically want your custom views to redraw themselves during scaling and resizing operations. Setting your view’s content mode to this value forces the system to call your view’s drawRect: method in response to geometry changes. In general, you should avoid using this value whenever possible, and you should certainly not use it with the standard system views. For more information about the available content modes, see UIView Class Reference.

Stretchable Views

You can designate a portion of a view as stretchable so that when the size of the view changes only the content in the stretchable portion is affected. You typically use stretchable areas for buttons or other views where part of the view defines a repeatable pattern. The stretchable area you specify can allow for stretching along one or both axes of the view. Of course, when stretching a view along two axes, the edges of the view must also define a repeatable pattern to avoid any distortion. Figure 1-3 shows how this distortion manifests itself in a view. The color from each of the view’s original pixels is replicated to fill the corresponding area in the larger view.

Figure 1-3 Stretching the background of a button

You specify the stretchable area of a view using the contentStretch property. This property accepts a rectangle whose values are normalized to the range 0.0 to 1.0. When stretching the view, the system multiplies these normalized values by the view’s current bounds and scale factor to determine which pixel or pixels need to be stretched. The use of normalized values alleviates the need for you to update the contentStretch property every time the bounds of your view change.

The view’s content mode also plays a role in determining how the view’s stretchable area is used. Stretchable areas are only used when the content mode would cause the view’s content to be scaled. This means that stretchable views are supported only with the UIViewContentModeScaleToFill, UIViewContentModeScaleAspectFit, and UIViewContentModeScaleAspectFill content modes. If you specify a content mode that pins the content to an edge or corner (and thus does not actually scale the content), the view ignores the stretchable area.

Note: The use of the contentStretch property is recommended over the creation of a stretchable UIImage object when specifying the background for a view. Stretchable views are handled entirely in the Core Animation layer, which typically offers better performance.
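The following sketch shows one way such a stretchable area could be configured. The image name is hypothetical, and the normalized values are illustrative only.

// A sketch of configuring a stretchable button background.
UIImageView *buttonBackground = [[[UIImageView alloc]
    initWithImage:[UIImage imageNamed:@"button_background.png"]] autorelease];

// Stretchable areas apply only when the content mode scales the content.
buttonBackground.contentMode = UIViewContentModeScaleToFill;

// Stretch only the middle 60% of the content along each axis; the values
// are normalized to the range 0.0 to 1.0.
buttonBackground.contentStretch = CGRectMake(0.2, 0.2, 0.6, 0.6);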
Built-In Animation Support

One of the benefits of having a layer object behind every view is that you can animate many view-related changes easily. Animations are a useful way to communicate information to the user and should always be considered during the design of your application. Many properties of the UIView class are animatable—that is, semiautomatic support exists for animating from one value to another. To perform an animation for one of these animatable properties, all you have to do is:
1. Tell UIKit that you want to perform an animation.
2. Change the value of the property.

Among the properties you can animate on a UIView object are the following:
● frame—Use this to animate position and size changes for the view.
● bounds—Use this to animate changes to the size of the view.
● center—Use this to animate the position of the view.
● transform—Use this to rotate or scale the view.
● alpha—Use this to change the transparency of the view.
● backgroundColor—Use this to change the background color of the view.
● contentStretch—Use this to change how the view’s contents stretch.

One place where animations are very important is when transitioning from one set of views to another. Typically, you use a view controller to manage the animations associated with major changes between parts of your user interface. For example, for interfaces that involve navigating from higher-level to lower-level information, you typically use a navigation controller to manage the transitions between the views displaying each successive level of data. However, you can also create transitions between two sets of views using animations instead of a view controller. You might do so in places where the standard view-controller animations do not yield the results you want.

In addition to the animations you create using UIKit classes, you can also create animations using Core Animation layers. Dropping down to the layer level gives you much more control over the timing and properties of your animations. For details about how to perform view-based animations, see “Animations” (page 64). For more information about creating animations using Core Animation, see Core Animation Programming Guide and Core Animation Cookbook.
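As a brief illustration of the two steps listed above, the following sketch fades a view out while sliding it to a new position. myView and the values used are hypothetical; the block both tells UIKit an animation is in progress and changes the animatable properties.

// A sketch of animating two of the animatable properties listed above.
[UIView animateWithDuration:0.5 animations:^{
    myView.alpha = 0.0;                          // Fade the view out.
    myView.center = CGPointMake(160.0, 300.0);   // Slide it to a new position.
}];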
View Geometry and Coordinate Systems

The default coordinate system in UIKit has its origin in the top-left corner and has axes that extend down and to the right from the origin point. Coordinate values are represented using floating-point numbers, which allow for precise layout and positioning of content regardless of the underlying screen resolution. Figure 1-4 shows this coordinate system relative to the screen. In addition to the screen coordinate system, windows and views define their own local coordinate systems that allow you to specify coordinates relative to the view or window origin instead of relative to the screen.

Figure 1-4 Coordinate system orientation in UIKit

Because every view and window defines its own local coordinate system, you need to be aware of which coordinate system is in effect at any given time. Every time you draw into a view or change its geometry, you do so relative to some coordinate system. In the case of drawing, you specify coordinates relative to the view’s own coordinate system. In the case of geometry changes, you specify coordinates relative to the superview’s coordinate system. The UIWindow and UIView classes both include methods to help you convert from one coordinate system to another.

Important: Some iOS technologies define default coordinate systems whose origin point and orientation differ from those used by UIKit. For example, Core Graphics and OpenGL ES use a coordinate system whose origin lies in the lower-left corner of the view or window and whose y-axis points upward relative to the screen. Your code must take such differences into account when drawing or creating content and adjust coordinate values (or the default orientation of the coordinate system) as needed.

The Relationship of the Frame, Bounds, and Center Properties

A view object tracks its size and location using its frame, bounds, and center properties:
● The frame property contains the frame rectangle, which specifies the size and location of the view in its superview’s coordinate system.
● The bounds property contains the bounds rectangle, which specifies the size of the view (and its content origin) in the view’s own local coordinate system.
● The center property contains the known center point of the view in the superview’s coordinate system.

You use the center and frame properties primarily for manipulating the geometry of the current view. For example, you use these properties when building your view hierarchy or changing the position or size of a view at runtime. If you are changing only the position of the view (and not its size), the center property is the preferred way to do so. The value in the center property is always valid, even if scaling or rotation factors have been added to the view’s transform. The same is not true for the value in the frame property, which is considered invalid if the view’s transform is not equal to the identity transform.

You use the bounds property primarily during drawing. The bounds rectangle is expressed in the view’s own local coordinate system. The default origin of this rectangle is (0, 0) and its size matches the size of the frame rectangle. Anything you draw inside this rectangle is part of the view’s visible content. If you change the origin of the bounds rectangle, anything you draw inside the new rectangle becomes part of the view’s visible content.

Figure 1-5 shows the relationship between the frame and bounds rectangles for an image view. In the figure, the upper-left corner of the image view is located at the point (40, 40) in its superview’s coordinate system and the size of the rectangle is 240 by 380 points. For the bounds rectangle, the origin point is (0, 0) and the size of the rectangle is similarly 240 by 380 points.

Figure 1-5 Relationship between a view's frame and bounds

Although you can change the frame, bounds, and center properties independently of one another, changes to one property affect the others in the following ways:
● When you set the frame property, the size value in the bounds property changes to match the new size of the frame rectangle. The value in the center property similarly changes to match the new center point of the frame rectangle.
● When you set the center property, the origin value in the frame changes accordingly.
● When you set the size of the bounds property, the size value in the frame property changes to match the new size of the bounds rectangle.
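The following sketch uses the same values as Figure 1-5 to show how the three properties relate and how changing one affects the others. The view is hypothetical; the commented values follow directly from the definitions above.

// A sketch showing how frame, bounds, and center relate for a new view.
UIView *imageView = [[[UIView alloc]
    initWithFrame:CGRectMake(40.0, 40.0, 240.0, 380.0)] autorelease];

// frame  == {{40, 40}, {240, 380}}   (superview's coordinate system)
// bounds == {{0, 0},   {240, 380}}   (view's own coordinate system)
// center == {160, 230}               (superview's coordinate system)

// Moving the view by changing its center also updates the frame origin.
imageView.center = CGPointMake(200.0, 230.0);
// frame is now {{80, 40}, {240, 380}}; bounds is unchanged.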
By default, a view’s frame is not clipped to its superview’s frame. Thus, any subviews that lie outside of their superview’s frame are rendered in their entirety. You can change this behavior, though, by setting the superview’s clipsToBounds property to YES. Regardless of whether or not subviews are clipped visually, touch events always respect the bounds rectangle of the target view’s superview. In other words, touch events occurring in a part of a view that lies outside of its superview’s bounds rectangle are not delivered to that view.

Coordinate System Transformations

Coordinate system transformations offer a way to alter your view (or its contents) quickly and easily. An affine transform is a mathematical matrix that specifies how points in one coordinate system map to points in a different coordinate system. You can apply affine transforms to your entire view to change the size, location, or orientation of the view relative to its superview. You can also use affine transforms in your drawing code to perform the same types of manipulations to individual pieces of rendered content. How you apply the affine transform therefore depends on context:
● To modify your entire view, modify the affine transform in the transform property of your view.
● To modify specific pieces of content in your view’s drawRect: method, modify the affine transform associated with the active graphics context.

You typically modify the transform property of a view when you want to implement animations. For example, you could use this property to create an animation of your view rotating around its center point. You would not use this property to make permanent changes to your view, such as modifying its position or size within its superview’s coordinate space. For that type of change, you should modify the frame rectangle of your view instead.

Note: When modifying the transform property of your view, all transformations are performed relative to the center point of the view.

In your view’s drawRect: method, you use affine transforms to position and orient the items you plan to draw. Rather than fix the position of an object at some location in your view, it is simpler to create each object relative to a fixed point, typically (0, 0), and use a transform to position the object immediately prior to drawing. That way, if the position of the object changes in your view, all you have to do is modify the transform, which is much faster and less expensive than recreating the object at its new location. You can retrieve the affine transform associated with a graphics context using the CGContextGetCTM function and you can use the related Core Graphics functions to set or modify this transform during drawing.

The current transformation matrix (CTM) is the affine transform in use at any given time. When manipulating the geometry of your entire view, the CTM is the affine transform stored in your view’s transform property. Inside your drawRect: method, the CTM is the affine transform associated with the active graphics context.

The coordinate system of each subview builds upon the coordinate systems of its ancestors. So when you modify a view’s transform property, that change affects the view and all of its subviews. However, these changes affect only the final rendering of the views on the screen. Because each view draws its content and lays out its subviews relative to its own bounds, it can ignore its superview’s transform during drawing and layout.
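The two contexts might look like the following sketches. myView and the drawing values are hypothetical; the first fragment rotates an entire view, and the second modifies the CTM of the active graphics context inside a drawRect: implementation.

// Case 1: rotating an entire view by 45 degrees (about its center point).
myView.transform = CGAffineTransformMakeRotation(M_PI_4);

// Case 2: rotating a single shape during drawing by modifying the CTM.
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    CGContextTranslateCTM(context, 100.0, 100.0);   // Move the origin to the shape's position.
    CGContextRotateCTM(context, M_PI_4);            // Rotate subsequent drawing by 45 degrees.
    CGContextFillRect(context, CGRectMake(-20.0, -20.0, 40.0, 40.0));
    CGContextRestoreGState(context);
}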
Figure 1-6 demonstrates how two different rotation factors combine visually when rendered. Inside the view’s drawRect: method, applying a 45 degree rotation factor to a shape causes that shape to appear rotated by 45 degrees. Applying a separate 45 degree rotation factor to the view then causes the shape to appear to be rotated by 90 degrees. The shape is still rotated by only 45 degrees relative to the view that drew it, but the view rotation makes it appear to be rotated by more.

Figure 1-6 Rotating a view and its content (no rotations; shape rotated 45˚ during drawing; shape and view each rotated 45˚)

Important: If a view’s transform property is not the identity transform, the value of that view’s frame property is undefined and must be ignored. When applying transforms to a view, you must use the view’s bounds and center properties to get the size and position of the view. The frame rectangles of any subviews are still valid because they are relative to the view’s bounds.

For information about modifying your view’s transform property at runtime, see “Translating, Scaling, and Rotating Views” (page 47). For information about how to use transforms to position content during drawing, see Drawing and Printing Guide for iOS.

Points Versus Pixels

In iOS, all coordinate values and distances are specified using floating-point values in units referred to as points. The measurable size of a point varies from device to device and is largely irrelevant. The main thing to understand about points is that they provide a fixed frame of reference for drawing.

Table 1-1 lists the screen dimensions (measured in points) for different types of iOS-based devices in a portrait orientation. The width dimension is listed first, followed by the height dimension of the screen. As long as you design your interface to these screen sizes, your views will display correctly on the corresponding type of device.

Table 1-1 Screen dimensions for iOS-based devices
Device: Screen dimensions (in points)
iPhone and iPod touch: 320 x 480
iPad: 768 x 1024

The point-based measuring system used for each type of device defines what is known as the user coordinate space. This is the standard coordinate space you use for nearly all of your code. For example, you use points and the user coordinate space when manipulating the geometry of a view or calling Core Graphics functions to draw the contents of your view. Although coordinates in the user coordinate space sometimes map directly to the pixels on the device’s screen, you should never assume that this is the case. Instead, you should always remember the following: One point does not necessarily correspond to one pixel on the screen.

At the device level, all coordinates you specify in your view must be converted to pixels at some point. However, the mapping of points in the user coordinate space to pixels in the device coordinate space is normally handled by the system. Both UIKit and Core Graphics use a primarily vector-based drawing model where all coordinate values are specified using points. Thus, if you draw a curve using Core Graphics, you specify the curve using the same values, regardless of the resolution of the underlying screen.

When you need to work with images or other pixel-based technologies such as OpenGL ES, iOS provides help in managing those pixels. For static image files stored as resources in your application bundle, iOS defines conventions for specifying your images at different pixel densities and for loading the image that best matches the current screen resolution. Views also provide information about the current scale factor so that you can adjust any pixel-based drawing code manually to accommodate higher-resolution screens. The techniques for dealing with pixel-based content at different screen resolutions are described in “Supporting High-Resolution Screens” in Drawing and Printing Guide for iOS.
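The following sketch illustrates the point-to-pixel relationship. The banner view is hypothetical; the same point-based frame works on both standard and Retina displays because the system handles the mapping from points to pixels.

// A sketch showing how point-based values relate to the screen's pixel density.
CGRect screenBounds = [[UIScreen mainScreen] bounds];   // e.g. 320 x 480 points
CGFloat scale = [[UIScreen mainScreen] scale];          // 1.0, or 2.0 on Retina displays

UIView *banner = [[[UIView alloc]
    initWithFrame:CGRectMake(0.0, 0.0, screenBounds.size.width, 44.0)] autorelease];
NSLog(@"Banner is %.0f points (%.0f pixels) wide",
      banner.bounds.size.width, banner.bounds.size.width * scale);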
The Runtime Interaction Model for Views

Any time a user interacts with your user interface, or any time your own code programmatically changes something, a complex sequence of events takes place inside of UIKit to handle that interaction. At specific points during that sequence, UIKit calls out to your view classes and gives them a chance to respond on behalf of your application. Understanding these callout points is important to understanding where your views fit into the system.

Figure 1-7 shows the basic sequence of events that starts with the user touching the screen and ends with the graphics system updating the screen content in response. The same sequence of events would also occur for any programmatically initiated actions.

Figure 1-7 UIKit interactions with your view objects

The following steps break the event sequence in Figure 1-7 (page 23) down even further and explain what happens at each stage and how you might want your application to react in response.
1. The user touches the screen.
2. The hardware reports the touch event to the UIKit framework.
3. The UIKit framework packages the touch into a UIEvent object and dispatches it to the appropriate view. (For a detailed explanation of how UIKit delivers events to your views, see Event Handling Guide for iOS.)
4. The event-handling code of your view responds to the event. For example, your code might:
● Change the properties (frame, bounds, alpha, and so on) of the view or its subviews.
● Call the setNeedsLayout method to mark the view (or its subviews) as needing a layout update.
● Call the setNeedsDisplay or setNeedsDisplayInRect: method to mark the view (or its subviews) as needing to be redrawn.
● Notify a controller about changes to some piece of data.
Of course, it is up to you to decide which of these things the view should do and which methods it should call.
5. If the geometry of a view changed for any reason, UIKit updates its subviews according to the following rules:
a. If you have configured autoresizing rules for your views, UIKit adjusts each view according to those rules. For more information about how autoresizing rules work, see “Handling Layout Changes Automatically Using Autoresizing Rules” (page 52).
b. If the view implements the layoutSubviews method, UIKit calls it. You can override this method in your custom views and use it to adjust the position and size of any subviews. For example, a view that provides a large scrollable area would need to use several subviews as “tiles” rather than create one large view, which is not likely to fit in memory anyway. In its implementation of this method, the view would hide any subviews that are now offscreen or reposition them and use them to draw newly exposed content. As part of this process, the view’s layout code can also invalidate any views that need to be redrawn.
6. If any part of any view was marked as needing to be redrawn, UIKit asks the view to redraw itself. For custom views that explicitly define a drawRect: method, UIKit calls that method. Your implementation of this method should redraw the specified area of the view as quickly as possible and nothing else. Do not make additional layout changes at this point and do not make other changes to your application’s data model. The purpose of this method is to update the visual content of your view. Standard system views typically do not implement a drawRect: method but instead manage their drawing at this time.
7. Any updated views are composited with the rest of the application’s visible content and sent to the graphics hardware for display.
8. The graphics hardware transfers the rendered content to the screen.

Note: The preceding update model applies primarily to applications that use standard system views and drawing techniques. Applications that use OpenGL ES for drawing typically configure a single full-screen view and draw directly to the associated OpenGL graphics context. In such a case, the view would still handle touch events but, because it is full-screen, it would not need to lay out subviews or implement a drawRect: method. For more information about using OpenGL ES, see OpenGL ES Programming Guide for iOS.

In the preceding set of steps, the primary integration points for your own custom views are:
● The event-handling methods:
● touchesBegan:withEvent:
● touchesMoved:withEvent:
● touchesEnded:withEvent:
● touchesCancelled:withEvent:
● The layoutSubviews method
● The drawRect: method

These are the most commonly overridden methods for views but you may not need to override all of them. If you use gesture recognizers to handle events, you do not need to override any of the event-handling methods. Similarly, if your view does not contain subviews or its size does not change, there is no reason to override the layoutSubviews method. Finally, the drawRect: method is needed only when the contents of your view can change at runtime and you are using native technologies such as UIKit or Core Graphics to do your drawing.

It is also important to remember that these are the primary integration points but not the only ones. Several methods of the UIView class are designed to be override points for subclasses. You should look at the method descriptions in UIView Class Reference to see which methods might be appropriate for you to override in your custom implementations.
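The skeleton below sketches what overriding these integration points can look like in practice. BadgeView is a hypothetical custom view; each override corresponds to one of the callout points described above.

// A hypothetical custom view showing the primary integration points.
@interface BadgeView : UIView
@end

@implementation BadgeView

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // Respond to the touch; here we simply ask for a redraw.
    [self setNeedsDisplay];
}

- (void)layoutSubviews {
    [super layoutSubviews];
    // Adjust the frames of any subviews after a geometry change.
}

- (void)drawRect:(CGRect)rect {
    // Draw the view's content using UIKit or Core Graphics.
    [[UIColor redColor] setFill];
    UIRectFill(self.bounds);
}

@end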
Tips for Using Views Effectively

Custom views are useful for situations where you need to draw something the standard system views do not provide, but it is your responsibility to ensure that the performance of your views is good enough. UIKit does everything it can to optimize view-related behaviors and help you achieve good performance in your custom views. However, you can help UIKit in this aspect by considering the following tips.

Important: Before optimizing your drawing code, you should always gather data about your view’s current performance. Measuring the current performance lets you confirm whether there actually is a problem and, if there is, gives you a baseline measurement against which you can compare future optimizations.

Views Do Not Always Have a Corresponding View Controller

There is rarely a one-to-one relationship between individual views and view controllers in your application. The job of a view controller is to manage a view hierarchy, which often consists of more than one view used to implement some self-contained feature. For iPhone applications, each view hierarchy typically fills the entire screen, although for iPad applications a view hierarchy may fill only part of the screen.

As you design your application’s user interface, it is important to consider the role that view controllers will play. View controllers provide a lot of important behaviors, such as coordinating the presentation of views on the screen, coordinating the removal of those views from the screen, releasing memory in response to low-memory warnings, and rotating views in response to interface orientation changes. Circumventing these behaviors could cause your application to behave incorrectly or in unexpected ways. For more information about view controllers and their role in applications, see View Controller Programming Guide for iOS.

Minimize Custom Drawing

Although custom drawing is necessary at times, it is also something you should avoid whenever possible. The only time you should truly do any custom drawing is when the existing system view classes do not provide the appearance or capabilities that you need. Any time your content can be assembled with a combination of existing views, your best bet is to combine those view objects into a custom view hierarchy.

Take Advantage of Content Modes

Content modes minimize the amount of time spent redrawing your views. By default, views use the UIViewContentModeScaleToFill content mode, which scales the view’s existing contents to fit the view’s frame rectangle. You can change this mode as needed to adjust your content differently, but you should avoid using the UIViewContentModeRedraw content mode if you can. Regardless of which content mode is in effect, you can always force your view to redraw its contents by calling setNeedsDisplay or setNeedsDisplayInRect:.

Declare Views as Opaque Whenever Possible

UIKit uses the opaque property of each view to determine whether the view can optimize compositing operations. Setting the value of this property to YES for a custom view tells UIKit that it does not need to render any content behind your view. Less rendering can lead to increased performance for your drawing code and is generally encouraged. Of course, if you set the opaque property to YES, your view must fill its bounds rectangle completely with fully opaque content.
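A minimal sketch of this tip follows, assuming MyChartView is a hypothetical custom UIView subclass whose drawRect: method fills the entire bounds with opaque content.

// Marking a fully opaque custom view so UIKit can skip rendering behind it.
MyChartView *chartView = [[[MyChartView alloc]
    initWithFrame:CGRectMake(0.0, 0.0, 320.0, 200.0)] autorelease];
chartView.opaque = YES;   // The view promises to fill its bounds with opaque content.
chartView.backgroundColor = [UIColor whiteColor];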
Adjust Your View’s Drawing Behavior When Scrolling

Scrolling can incur numerous view updates in a short amount of time. If your view’s drawing code is not tuned appropriately, scrolling performance for your view could be sluggish. Rather than trying to ensure that your view’s content is pristine at all times, consider changing your view’s behavior when a scrolling operation begins. For example, you can reduce the quality of your rendered content temporarily or change the content mode while a scroll is in progress. When scrolling stops, you can then return your view to its previous state and update the contents as needed.

Do Not Customize Controls by Embedding Subviews

Although it is technically possible to add subviews to the standard system controls—objects that inherit from UIControl—you should never customize them in this way. Controls that support customizations do so through explicit and well-documented interfaces in the control class itself. For example, the UIButton class contains methods for setting the title and background images for the button. Using the defined customization points means that your code will always work correctly. Circumventing these methods, by embedding a custom image view or label inside the button, might cause your application to behave incorrectly now or at some point in the future if the button’s implementation changes.

Windows

Every iOS application needs at least one window—an instance of the UIWindow class—and some may include more than one window. A window object has several responsibilities:
● It contains your application’s visible content.
● It plays a key role in the delivery of touch events to your views and other application objects.
● It works with your application’s view controllers to facilitate orientation changes.

In iOS, windows do not have title bars, close boxes, or any other visual adornments. A window is always just a blank container for one or more views. Also, applications do not change their content by showing new windows. When you want to change the displayed content, you change the frontmost views of your window instead.

Most iOS applications create and use only one window during their lifetime. This window spans the entire main screen of the device and is loaded from the application’s main nib file (or created programmatically) early in the life of the application. However, if an application supports the use of an external display for video out, it can create an additional window to display content on that external display. All other windows are typically created by the system, and are usually created in response to specific events, such as an incoming phone call.

Tasks That Involve Windows

For many applications, the only time the application interacts with its window is when it creates the window at startup. However, you can use your application’s window object to perform a few application-related tasks:
● Use the window object to convert points and rectangles to or from the window’s local coordinate system. For example, if you are provided with a value in window coordinates, you might want to convert it to the coordinate system of a specific view before trying to use it (a brief sketch follows this list). For information on how to convert coordinates, see “Converting Coordinates in the View Hierarchy” (page 50).
● Use window notifications to track window-related changes. Windows generate notifications when they are shown or hidden or when they accept or resign the key status. You can use these notifications to perform actions in other parts of your application. For more information, see “Monitoring Window Changes” (page 31).
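The coordinate-conversion task might look like the following sketch. aView is a hypothetical view installed somewhere in the window’s hierarchy, and the point values are illustrative only.

// Converting a point from window coordinates into a view's local coordinates.
CGPoint windowPoint = CGPointMake(150.0, 300.0);
CGPoint viewPoint = [aView convertPoint:windowPoint fromView:nil];   // nil means "from the window"

// UIWindow offers complementary conversion methods as well.
CGRect rectInView = [aView.window convertRect:CGRectMake(0.0, 0.0, 100.0, 100.0) toView:aView];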
Creating and Configuring a Window

You can create and configure your application’s main window programmatically or using Interface Builder. In either case, you create the window at launch time and should retain it and store a reference to it in your application delegate object. If your application creates additional windows, have the application create them lazily when they are needed. For example, if your application supports displaying content on an external display, it should wait until a display is connected before creating the corresponding window.

You should always create your application’s main window at launch time regardless of whether your application is being launched into the foreground or background. Creating and configuring a window is not an expensive operation by itself. However, if your application is launched straight into the background, you should avoid making the window visible until your application enters the foreground.

Creating Windows in Interface Builder

Creating your application’s main window using Interface Builder is simple because the Xcode project templates do it for you. Every new Xcode application project includes a main nib file (usually with the name MainWindow.xib or some variant thereof) that includes the application’s main window. In addition, these templates also define an outlet for that window in the application delegate object. You use this outlet to access the window object in your code.

Important: When creating your window in Interface Builder, it is recommended that you enable the Full Screen at Launch option in the attributes inspector. If this option is not enabled and your window is smaller than the screen of the target device, touch events will not be received by some of your views. This is because windows (like all views) do not receive touch events outside of their bounds rectangle. Because views are not clipped to the window’s bounds by default, the views still appear visible but events do not reach them. Enabling the Full Screen at Launch option ensures that the window is sized appropriately for the current screen.

If you are retrofitting a project to use Interface Builder, creating a window using Interface Builder is a simple matter of dragging a window object to your nib file. Of course, you should also do the following:
● To access the window at runtime, you should connect the window to an outlet, typically one defined in your application delegate or the File’s Owner of the nib file.
● If your retrofit plans include making your new nib file the main nib file of your application, you must also set the NSMainNibFile key in your application’s Info.plist file to the name of your nib file. Changing the value of this key ensures that the nib file is loaded and available for use by the time the application:didFinishLaunchingWithOptions: method of your application delegate is called.

For more information about creating and configuring nib files, see Interface Builder User Guide. For information about how to load nib files into your application at runtime, see “Nib Files” in Resource Programming Guide.
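The window outlet mentioned above is typically declared along the following lines. The class name is hypothetical, and the exact property attributes may differ from project to project.

// A sketch of the window outlet declared in the application delegate.
@interface MyAppDelegate : NSObject <UIApplicationDelegate>
@property (nonatomic, retain) IBOutlet UIWindow *window;
@end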
Creating a Window Programmatically

If you prefer to create your application’s main window programmatically, you should include code similar to the following in the application:didFinishLaunchingWithOptions: method of your application delegate:

self.window = [[[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]] autorelease];

In the preceding example, self.window is assumed to be a declared property of your application delegate that is configured to retain the window object. If you were creating a window for an external display instead, you would assign it to a different variable and you would need to specify the bounds of the non-main UIScreen object representing that display.

When creating windows, you should always set the size of the window to the full bounds of the screen. You should not reduce the size of the window to accommodate the status bar or any other items. The status bar always floats on top of the window anyway, so the only thing you should shrink to accommodate the status bar is the view you put into your window. And if you are using view controllers, the view controller should handle the sizing of your views automatically.

Adding Content to Your Window

Each window typically has a single root view object (managed by a corresponding view controller) that contains all of the other views representing your content. Using a single root view simplifies the process of changing your interface; to display new content, all you have to do is replace the root view. To install a view in your window, use the addSubview: method. For example, to install a view that is managed by a view controller, you would use code similar to the following:

[window addSubview:viewController.view];

In place of the preceding code, you can alternatively configure the rootViewController property of the window in your nib file. This property offers a convenient way to configure the root view of the window using a nib file instead of programmatically. If this property is set when the window is loaded from its nib file, UIKit automatically installs the view from the associated view controller as the root view of the window. This property is used only to install the root view and is not used by the window to communicate with the view controller.

You can use any view you want for a window’s root view. Depending on your interface design, the root view can be a generic UIView object that acts as a container for one or more subviews, the root view can be a standard system view, or the root view can be a custom view that you define. Some standard system views that are commonly used as root views include scroll views, table views, and image views.

When configuring the root view of the window, you are responsible for setting its initial size and position within the window. For applications that do not include a status bar, or that display a translucent status bar, set the view size to match the size of the window. For applications that show an opaque status bar, position your view below the status bar and reduce its size accordingly. Subtracting the status bar height from the height of your view prevents the top portion of your view from being obscured.
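Putting these pieces together, installing and sizing a root view under an opaque status bar might look like the following sketch. It assumes portrait orientation and that viewController manages your root view.

// A sketch of installing a root view sized below an opaque status bar.
CGRect windowBounds = self.window.bounds;
CGFloat statusBarHeight = [UIApplication sharedApplication].statusBarFrame.size.height;
viewController.view.frame = CGRectMake(0.0, statusBarHeight,
                                       windowBounds.size.width,
                                       windowBounds.size.height - statusBarHeight);
[self.window addSubview:viewController.view];

// Alternatively, let the window install the controller's view itself:
// self.window.rootViewController = viewController;
[self.window makeKeyAndVisible];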
Note: If the root view of your window is provided by a container view controller (such as a tab bar controller, navigation controller, or split-view controller), you do not need to set the initial size of the view yourself. The container view controller automatically sizes its view appropriately based on whether the status bar is visible.

Changing the Window Level

Each UIWindow object has a configurable windowLevel property that determines how that window is positioned relative to other windows. For the most part, you should not need to change the level of your application’s windows. New windows are automatically assigned to the normal window level at creation time. The normal window level indicates that the window presents application-related content. Higher window levels are reserved for information that needs to float above the application content, such as the system status bar or alert messages. And although you can assign windows to these levels yourself, the system usually does this for you when you use specific interfaces. For example, when you show or hide the status bar or display an alert view, the system automatically creates the needed windows to display those items.

Monitoring Window Changes

If you want to track the appearance or disappearance of windows inside your application, you can do so using these window-related notifications:
● UIWindowDidBecomeVisibleNotification
● UIWindowDidBecomeHiddenNotification
● UIWindowDidBecomeKeyNotification
● UIWindowDidResignKeyNotification

These notifications are delivered in response to programmatic changes in your application’s windows. Thus, when your application shows or hides a window, the UIWindowDidBecomeVisibleNotification and UIWindowDidBecomeHiddenNotification notifications are delivered accordingly. These notifications are not delivered when your application moves into the background execution state. Even though your window is not displayed on the screen while your application is in the background, it is still considered visible within the context of your application.

The UIWindowDidBecomeKeyNotification and UIWindowDidResignKeyNotification notifications help your application keep track of which window is the key window—that is, which window is currently receiving keyboard events and other non touch-related events. Whereas touch events are delivered to the window in which the touch occurred, events that do not have an associated coordinate value are delivered to the key window of your application. Only one window at a time may be key.

Displaying Content on an External Display

To display content on an external display, you must create an additional window for your application and associate it with the screen object representing the external display. New windows are normally associated with the main screen by default. Changing the window’s associated screen object causes the contents of that window to be rerouted to the corresponding display. Once the window is associated with the correct screen, you can add views to it and show it just like you do for your application’s main screen.
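The association between a window and a screen might be established as in the following sketch, which checks whether an external display is already attached. _secondWindow is a hypothetical instance variable used to keep the window alive.

// A sketch of creating a window for an already attached external display.
if ([[UIScreen screens] count] > 1) {
    UIScreen *externalScreen = [[UIScreen screens] objectAtIndex:1];
    _secondWindow = [[UIWindow alloc] initWithFrame:externalScreen.bounds];
    _secondWindow.screen = externalScreen;   // Reroute the window's contents to the external display.
    _secondWindow.hidden = NO;               // Show the window on that display.
}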
The UIScreen class maintains a list of screen objects representing the available hardware displays. Normally, there is only one screen object representing the main display for any iOS-based device, but devices that support connecting to an external display can have an additional screen object available. Devices that support an external display include iPhone and iPod touch devices that have Retina displays and the iPad. Older devices, such as iPhone 3GS, do not support external displays.

Note: Because external displays are essentially a video-out connection, you should not expect touch events for views and controls in a window that is associated with an external display. In addition, it is your application’s responsibility to update the contents of the window as needed. Thus, to mirror the contents of your main window, your application would need to create a duplicate set of views for the external display’s window and update them in tandem with the views in your main window.

The process for displaying content on an external display is described in the following sections. However, the following steps summarize the basic process:
1. At application startup, register for the screen connection and disconnection notifications.
2. When it is time to display content on the external display, create and configure a window.
● Use the screens property of UIScreen to obtain the screen object for the external display.
● Create a UIWindow object and size it appropriately for the screen (or for your content).
● Assign the UIScreen object for the external display to the screen property of the window.
● Adjust the resolution of the screen object as needed to support your content.
● Add any appropriate views to the window.
3. Show the window and update it normally.

Handling Screen Connection and Disconnection Notifications

Screen connection and disconnection notifications are crucial for handling changes to external displays gracefully. When the user connects or disconnects a display, the system sends appropriate notifications to your application. You should use these notifications to update your application state and create or release the window associated with the external display.

The important thing to remember about the connection and disconnection notifications is that they can come at any time, even when your application is suspended in the background. Therefore, it is best to observe the notifications from an object that is going to exist for the duration of your application’s runtime, such as your application delegate. If your application is suspended, the notifications are queued until your application exits the suspended state and starts running in either the foreground or background.

Listing 2-1 shows the code used to register for connection and disconnection notifications. This method is called by the application delegate at initialization time but you could register for these notifications from other places in your application, too. The implementation of the handler methods is shown in Listing 2-2 (page 34).

Listing 2-1 Registering for screen connect and disconnect notifications

- (void)setupScreenConnectionNotificationHandlers {
    NSNotificationCenter* center = [NSNotificationCenter defaultCenter];
    [center addObserver:self selector:@selector(handleScreenConnectNotification:)
                   name:UIScreenDidConnectNotification object:nil];
    [center addObserver:self selector:@selector(handleScreenDisconnectNotification:)
                   name:UIScreenDidDisconnectNotification object:nil];
}

If your application is active when an external display is attached to the device, it should create a second window for that display and fill it with some content. The content does not need to be the final content you want to present.
For example, if your application is not ready to use the extra screen, it can use the second window to display some placeholder content. If you do not create a window for the screen, or if you create a window but do not show it, a black field is displayed on the external display.

Listing 2-2 shows how to create a secondary window and fill it with some content. In this example, the application creates the window in the handler methods it uses to receive screen connection notifications. (For information about registering for connection and disconnection notifications, see Listing 2-1 (page 33).) The handler method for the connection notification creates a secondary window, associates it with the newly connected screen and calls a method of the application’s main view controller to add some content to the window and show it. The handler method for the disconnection notification releases the window and notifies the main view controller so that it can adjust its presentation accordingly.

Listing 2-2 Handling connect and disconnect notifications

- (void)handleScreenConnectNotification:(NSNotification*)aNotification {
    UIScreen* newScreen = [aNotification object];
    CGRect screenBounds = newScreen.bounds;

    if (!_secondWindow) {
        _secondWindow = [[UIWindow alloc] initWithFrame:screenBounds];
        _secondWindow.screen = newScreen;

        // Set the initial UI for the window.
        [viewController displaySelectionInSecondaryWindow:_secondWindow];
    }
}

- (void)handleScreenDisconnectNotification:(NSNotification*)aNotification {
    if (_secondWindow) {
        // Hide and then delete the window.
        _secondWindow.hidden = YES;
        [_secondWindow release];
        _secondWindow = nil;

        // Update the main screen based on what is showing here.
        [viewController displaySelectionOnMainScreen];
    }
}