Friday 18 August 2017

Moving Average: A SAS Company Guide


Some people see data as facts and figures. But it is more than that. It is the lifeblood of your business. It contains your organization's history. And it is trying to tell you something. SAS helps you make sense of that message. As the leader in business analytics software and services, SAS transforms your data into insights that give you a fresh perspective on your business. You can identify what is working. Fix what isn't. And discover new opportunities. We can help you turn large amounts of data into knowledge you can use, and we do it better than anyone. It is no wonder that the vast majority of our customers continue to use SAS year after year. We believe it is because we hire great people to create great software and services. SAS is the leader in analytics. Through innovative analytics, business intelligence and data management software and services, SAS helps customers at more than 83,000 sites make better decisions faster. Since 1976, SAS has been giving customers around the world THE POWER TO KNOW.

SAS Analytics in Action: more than four decades of experience and innovation. Discover why SAS is the analytics leader. SAS delivers proven solutions that drive innovation and improve performance.

Company Facts & Financials. Countries with installations: SAS has customers in 148 countries. Total worldwide customer sites: our software is installed at more than 83,000 business, government and university sites. Fortune Global 500 customers: 94 of the top 100 companies on the 2016 Fortune Global 500 are SAS customers. Employees worldwide: 14,063 total. Breakdown by geography: United States: 7,111; world headquarters (Cary, NC): 5,600; Canada: 335; Latin America: 436; Europe, Middle East and Africa: 3,720; Asia Pacific: 2,453.

SAS (pronounced "sass") once stood for "statistical analysis system." It began at North Carolina State University as a project to analyze agricultural research. Demand for such software capabilities began to grow, and SAS was founded in 1976 to help customers in all sorts of industries, from pharmaceutical companies and banks to academic and governmental entities. SAS, both the software and the company, thrived over the next few decades. Development of the software attained new heights in the industry because it could run across all platforms, using the multivendor architecture for which it is known today. While the scope of the company has spread across the globe, the encouraging and innovative corporate culture has remained the same. Explore each era of our company's history through photos and descriptions of how SAS came to be.

Academic roots. North Carolina State University, located in the capital city of Raleigh, North Carolina, became the leader of the consortium, primarily because it had access to a more powerful mainframe computer than the other universities. The project eventually found a home in the Department of Statistics.

Early leadership. North Carolina State University faculty members Jim Goodnight and Jim Barr emerged as the project leaders: Barr created the architecture, and Goodnight implemented the features that sat on top of that architecture and extended the system's capabilities.
When NIH discontinued funding in 1972, the consortium members agreed to chip in 5,000 each annually so that NCSU could continue developing and maintaining the system and supporting their statistical analysis needs.

Expanding the team and client base. Over the following years, SAS software was licensed by pharmaceutical companies, insurance companies and banks, as well as by the academic community that had given birth to the project. Jane Helwig, another Statistics Department employee at NCSU, joined the project as documentation writer, and John Sall, a graduate student and programmer, rounded out the core team.

Changing the way software is sold. Sales efforts shifted from telemarketing to a direct sales force focused on geographic territories. The company introduced its first vertical sales group with the release of SAS/PH-Clinical software for the pharmaceutical industry. And demand for packaged solutions, designed to address specific business needs across industries, led to the formation of the Business Solutions division, responsible for solutions such as SAS Financial Management and SAS Human Capital Management (formerly named CFO Vision and HR Vision).

A new focus on education. The company moved into new territory by developing high-quality online curriculum resources for classrooms. SAS Curriculum Pathways' interactive online resources focus on material that is difficult to convey through conventional teaching methods. The tools cover subjects through courses you can do, see and hear, delivering information and encouraging insight in ways a textbook cannot. The software lets teachers keep students engaged and learning while promoting the use of technology in the classroom.

Real support for decision making. Most importantly, SAS separated itself from the pack as a provider of decision-support software, with capabilities expanded in areas such as guided data analysis and clinical trials analysis and reporting. The company introduced software for building customized executive information systems (EIS) and launched its Rapid Warehousing Program. As the internet became a more vital tool for the business world, demand for web-based software grew. Accordingly, SAS brought web-enabled capabilities to its software, allowing customers to use SAS solutions to become more competitive in a rapidly evolving business environment.

Understanding the customer. With powerful data mining capabilities, SAS was positioned to lead in an area more in demand than any other business software offering: customer relationship management. Now web-enabled with new e-intelligence solutions, SAS continued to remain on the cutting edge of the business software industry.

The recognition keeps rolling in. Recognition for quality software products continued to come from many sources around the world, including Datamation, Data Warehousing World, Software Magazine, ComputerWorld Brazil and PC Week, along with the prestigious French analyst association Yphise and Australia's Corporate Research Foundation. In addition, the US Food and Drug Administration acknowledged the integrity of SAS software by selecting SAS technology as its standard for new drug applications. SAS continued to be recognized as a great place to work, receiving awards from FORTUNE, Working Mother,
BusinessWeek and Mother Jones magazines, along with prominent print and media coverage across the United States, Europe and Australia.

Advanced Analytics Certification. Expand your analytical expertise. Make yourself more marketable. Become a more valuable asset by learning the latest cutting-edge analytical techniques to solve critical business challenges in any domain. The SAS Certified Advanced Analytics Professional program, offered by the SAS Academy for Data Science in classroom and blended learning formats, will broaden your knowledge, deepen your analytical abilities and expand your horizons.

About the Advanced Analytics Certification Program. Is this program right for me? The program is intended for those who want to advance their knowledge and skills in advanced analytics. A strong background in applied mathematics is highly desirable. A master's degree or higher in a quantitative or technical field is recommended, but not required.

Prerequisites. To enroll in the program, you need at least six months of programming experience in SAS or another programming language. If you are just getting started or need to polish your SAS programming skills, the SAS Programming for Data Science Fast Track will give you a good foundation. We also recommend that you have at least six months of experience using statistics and/or mathematics in a business setting. You can get started with the Statistics 1: Introduction to ANOVA, Regression, and Logistic Regression course, available as an instructor-led course or as free online e-learning.

"The certification I earned from the SAS Academy for Data Science gives me greater credibility when speaking with decision makers." Etienne Ndedi, SAS Academy for Data Science graduate.

Topics covered: machine learning and predictive modeling techniques; how to apply these techniques to distributed and large in-memory data; pattern detection; experimentation in business; optimization techniques; time series forecasting; essential communication skills. SAS software covered includes SAS Enterprise Miner, SAS/ETS, SAS Text Miner and SAS In-Memory Statistics (PROC IMSTAT).

Using SAS Enterprise Miner for both pattern discovery (segmentation, association and sequence analysis) and predictive modeling (decision tree, regression and neural network models). Topics covered: defining a SAS Enterprise Miner project and exploring data graphically; modifying data for better analysis results; building and understanding predictive models, including decision trees and regression models; comparing and explaining complex models; generating and using score code; applying association and sequence discovery to transaction data.
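As a hedged illustration of the "generating and using score code" step above: Enterprise Miner can export a trained model as plain SAS score code, which is then applied to new data in an ordinary DATA step. The table and file names below are hypothetical, not from the course.

    /* Apply Enterprise Miner score code (previously exported to score.sas) to new records. */
    data work.scored;
        set work.new_customers;                  /* hypothetical input table */
        %include "/project/models/score.sas";    /* hypothetical path to the exported score code */
    run;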
Communicating Technical Findings to a Nontechnical Audience. This course teaches you how to design and deliver effective presentations through self-assessment and discussion of presentation organization and the effective use of visual aids. You will receive an individual analysis of your behavioral style, including a description of your strengths and opportunities for improvement, as well as strategies for communicating with others. Topics covered: diagnosing and assessing different styles of human behavior; communicating and coping more effectively with different types of people; using your own strengths and your knowledge of others to improve communication; presenting information in a concise, well-organized format; delivering presentations, with a focus on communicating unfamiliar or technical information to nontechnical audiences; designing presentation materials with clarity and purpose.

Neural networks. This course helps you understand and apply two popular artificial neural network algorithms: multilayer perceptrons and radial basis functions. Both theoretical and practical issues of fitting neural networks are covered. Topics covered: constructing multilayer perceptron and radial basis function neural networks; building neural networks using the NEURAL procedure; selecting an appropriate network architecture and determining the relevant training method; avoiding overfitting in neural networks; performing autoregressive time series analysis using neural networks; interpreting neural network models.

Predictive Modeling Using Logistic Regression. This course covers predictive modeling using SAS/STAT software, with an emphasis on the LOGISTIC procedure. Topics covered: using logistic regression to model an individual's behavior as a function of known inputs; selecting variables and interactions; creating effect plots and odds ratio plots using ODS Statistical Graphics; handling missing data values; handling multicollinearity in your predictors; assessing model performance and comparing models; recoding categorical variables based on smoothed weight of evidence; using efficiency techniques for massive data sets.

Data Mining Techniques: Predictive Analytics on Big Data. This course introduces applications and techniques for testing and modeling big data. It presents basic and advanced modeling strategies, such as group processing for linear models, random forests, generalized linear models and mixture distribution models. You will conduct hands-on exploration and analysis using tools such as SAS Enterprise Miner, SAS Visual Statistics and SAS In-Memory Statistics. Topics covered: using applications designed for the analysis of big data; exploring data efficiently; reducing data dimensionality; building predictive models using decision trees, regression, generalized linear models, random forests and support vector machines; building models that address multiple targets; assessing model performance; deploying models and scoring new predictions.

Using SAS to Put Open Source Models into Production. This course introduces the essentials for integrating R programming and Python scripts into SAS and SAS Enterprise Miner. Topics are presented in a data mining context, covering data exploration, model prototyping, and supervised and unsupervised learning techniques. Topics covered: calling R packages in SAS; using Python scripts in SAS; integrating open source data exploration techniques in SAS; integrating open source models in SAS Enterprise Miner; creating production (score) code for R models.
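As one sketch of what "calling R from SAS" can look like (this uses the SAS/IML interface, which requires the RLANG system option and a local R installation, and is not necessarily the course's exact approach):

    /* Pass a SAS table to R, fit a model there, and pull the results back. */
    proc iml;
       call ExportDataSetToR("sashelp.class", "df");      /* SAS data set -> R data frame */
       submit / R;
          fit <- lm(weight ~ height, data = df)           # ordinary R code runs here
          est <- as.data.frame(coef(summary(fit)))
       endsubmit;
       call ImportDataSetFromR("work.estimates", "est");  /* R data frame -> SAS data set */
    quit;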
In this course, you will learn to use SAS Text Miner to uncover underlying themes or concepts contained in large document collections, automatically group documents into topical clusters, classify documents into predefined categories, and integrate text data with structured data to enrich predictive modeling endeavors. Topics covered: converting documents stored in standard formats (Microsoft Word, Adobe PDF, etc.) into general-purpose HTML or TXT formats; reading documents from various sources (web pages, flat files, data elements in relational databases, spreadsheet cells, etc.) into SAS tables; processing textual data for text mining (for example, correcting misspellings and parsing acronyms and abbreviations); converting unstructured, text-based character data into structured numeric data; exploring words and phrases in a document collection; querying a document collection using keywords (that is, identifying documents that include particular words or phrases); identifying topics or concepts that occur in a document collection; creating user-influenced topic tables from scratch, modifying machine-generated topics, or building concepts using domain knowledge; using derived topic tables or previously user-influenced topic tables (or both) to improve information retrieval and document classification; clustering documents into homogeneous subgroups; classifying documents into predefined categories.

Time Series Modeling Essentials. In this course, you will learn the essentials of modeling time series data, focusing on the three main classes of models used to analyze univariate time series: exponential smoothing, autoregressive integrated moving average with exogenous variables (ARIMAX), and unobserved components (UCM). Topics covered: creating time series data; accommodating trend, as well as seasonal and event-related variation, in time series models; diagnosing, fitting and interpreting exponential smoothing, ARIMAX and UCM models; identifying the relative strengths and weaknesses of the three model types.

Experimentation in Data Science. This course explores the essentials of experimentation in data science, why experiments are central to any data science effort, and how to design efficient and effective experiments. Topics covered: defining common terminology in designed experiments; describing the advantages of multifactor experiments; distinguishing between the impact of a model and the impact of actions taken from the model; fitting incremental response models to evaluate the unique contribution of a marketing message, action, intervention or process change on an outcome.

Optimization Concepts for Data Science. This course focuses on linear, nonlinear and efficiency optimization concepts. Participants will learn how to formulate optimization problems and how to make their formulations efficient by using index sets and arrays. Course demonstrations include examples of data envelopment analysis and portfolio optimization. The OPTMODEL procedure is used to solve optimization problems that reinforce concepts introduced in the course. Topics covered: identifying and formulating appropriate approaches to solving various linear and nonlinear optimization problems; creating optimization models commonly used in industry; formulating and solving data envelopment analysis; solving optimization problems using the OPTMODEL procedure in SAS.
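For readers unfamiliar with PROC OPTMODEL, a minimal linear program in the spirit of the course (a made-up two-product mix, not a course example) looks like this:

    proc optmodel;
       var x >= 0, y >= 0;                  /* units of two hypothetical products */
       maximize Profit = 3*x + 5*y;         /* objective: total profit */
       con Labor:    2*x + 4*y <= 100;      /* labor hours available */
       con Material:   x +   y <= 40;       /* raw material limit */
       solve;                               /* linear problems use the LP solver by default */
       print x y;
    quit;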
Popular topics in this section include the use of PROC REPORT, SAS styles, templates and ODS, as well as various techniques used to deliver SAS results in Microsoft Excel, PowerPoint, and other Office applications. Topics include graphics, data visualization, publishing, and reporting. Popular topics in this section also include the use of SAS/GRAPH, SAS styles, templates and ODS, as well as various techniques used to deliver SAS results in Microsoft Excel and other Office applications.

Data science is considered an extension of statistics, data mining, and predictive analytics. This section focuses on how "the Sexiest Job of the 21st Century" is done in SAS. Areas of interest include analysis of text and social media data.

Presenters prepare a digital display that is available for viewing by all attendees throughout the conference, rather than giving a lecture-style presentation. This section often features high-resolution graphics and/or thought-provoking concepts or ideas that allow independent study by conference attendees. Presentations center on data visualization, including PROC GPLOT, animated graphics, and other customizations.

Hands-On Workshops give attendees hands-on-the-keyboard interaction with SAS software during each presentation. Presenters guide attendees through examples of SAS software techniques and capabilities while offering the opportunity to ask questions and learn through practice. All HOW presentations are given by experienced SAS users who are invited to present.

This section features presentations on data integration, analysis, and reporting, but with industry-specific content. Examples of content-driven topics are: research methods for health and health outcomes; standards and quality control for delivering clinical trial data to the FDA; banking, credit card, insurance and risk management; and insurance modeling and analysis.

This section helps SAS users understand how to immerse themselves in a world rich with resources devoted to SAS education, training, social networking, consulting, certification, technical support, and opportunities for affiliation and professional growth.

This section lets novice SAS users and others attend a series of presentations that walk them through the basic concepts of the SAS DATA step and Base SAS PROC syntax, followed by two Hands-On Workshops. All SAS Essentials presentations are given by experienced SAS users who are invited to present.

If you have a program that runs for a long time, or that will run many times, you may want to track how long each part of the program takes to run. This can help you find the slow parts of your program and predict how long future runs will take. This paper presents a tool to help with that problem: the WriteProgramStatus macro provides a way to create a status file that is easily read by human or machine.

Beyond IF-THEN-ELSE: Techniques for Conditional Execution of SAS Code. Almost every SAS program includes logic that causes certain code to be executed only when specific conditions are met. This is commonly done using IF...THEN...ELSE syntax. In this paper, we will explore various ways to construct conditional SAS logic, including some that may offer advantages over the IF statement. Topics will include the SELECT statement, the IFC and IFN functions, the CHOOSE and WHICH families of functions, and the COALESCE function. We'll also make sure we understand the difference between a regular IF and the macro %IF statement.
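To make the comparison concrete, here is a small, hypothetical side-by-side of IF-THEN/ELSE against two of the alternatives named above (SELECT and IFN); the data set and variables are invented for illustration:

    data work.rates;
       set work.accounts;                    /* hypothetical input */
       /* Classic IF-THEN/ELSE */
       if region = 'EAST' then rate = 0.05;
       else if region = 'WEST' then rate = 0.07;
       else rate = 0.06;
       /* The same logic with SELECT */
       select (region);
          when ('EAST') rate2 = 0.05;
          when ('WEST') rate2 = 0.07;
          otherwise     rate2 = 0.06;
       end;
       /* IFN returns one of two numeric values depending on a condition */
       bonus = ifn(balance > 10000, 50, 0);
    run;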
A Waze Application for Base SAS: Automatically Routing Around Locked Data Sets, Bottleneck Processes, and Other Traffic Congestion on the Data Superhighway. The Waze application, purchased by Google in 2013, alerts millions of users about traffic congestion, collisions, construction, and other complexities of the road that can stymie motorists attempting to travel from A to B. From jackknifed rigs to roadkill, roads can be gridlocked with congestion or littered with obstructions that impede traffic flow and efficiency. Waze algorithms automatically reroute users to more efficient routes based on user-reported events as well as historical norms that demonstrate typical road conditions. Extract, transform, load (ETL) infrastructures often represent serialized process flows that can mimic highways, and that can become similarly enraged by locked data sets, slow processes, and other factors that introduce inefficiency. The LOCKITDOWN SAS macro, introduced at WUSS in 2014, detects and prevents data access collisions that occur when two or more SAS processes or users simultaneously attempt to access the same SAS data set. Furthermore, the LOCKANDTRACK macro, introduced at WUSS in 2015, provides real-time tracking of, and historical performance metrics for, locked data sets through a unified control table, enabling developers to hone processes to optimize efficiency and data throughput. This text demonstrates the application of LOCKSMART and its lock performance metrics to create data-driven, fuzzy logic algorithms that preemptively reroute program flow around inaccessible data sets. Thus, rather than waiting impatiently for a data set to become available or a process to finish, the software actually anticipates the wait time based on historical norms, performs other (independent) functions, and returns to the original process when it becomes available.
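The LOCKITDOWN macro itself is the paper's, but the underlying test is easy to sketch independently: the OPEN function returns zero when a data set cannot be opened (for example, when another session holds an exclusive lock), so a program can branch instead of aborting. A minimal, assumption-laden version with hypothetical names:

    %macro run_if_free(ds);
       %local dsid rc;
       %let dsid = %sysfunc(open(&ds));   /* 0 means the data set is locked or unavailable */
       %if &dsid %then %do;
          %let rc = %sysfunc(close(&dsid));
          proc sort data=&ds out=work.sorted; by id; run;   /* the real work; ID is hypothetical */
       %end;
       %else %put NOTE: &ds is unavailable, deferring this step.;
    %mend run_if_free;
    %run_if_free(work.transactions)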
Bringing SAS Emergency Medicine into the 21st Century: Toward Goals for Exception Handling, Actions, Outcomes, and Commitments. Emergency medicine comprises a continuum of care that often begins with first aid, basic life support (BLS), or advanced life support (ALS). First responders, including firefighters, emergency medical technicians (EMTs), and paramedics, are often the first on the scene of crime, injury, and sickness, rapidly assessing the situation, delivering curative and palliative care, and transporting patients to medical facilities. Emergency services treatment protocols and standard operating procedures (SOPs) ensure that, despite the singular nature of each patient and potential complications, trained personnel have a diverse set of tools and techniques to deliver care that is tiered, repeatable, and responsible. Just as EMS providers must assess patients to determine an effective course of action, software too must identify and assess process deviations or failures, and likewise determine commensurate actions. Exception handling describes both the identification of and response to adverse, unexpected, or untimely events that can occur during software execution, and it should be implemented in SAS software that demands reliability and robustness. The objective of exception handling is always to reroute process control back to the "happy trail" or "happy path", that is, the originally intended process path that delivers full business value. But when insurmountable events occur, exception handling routines should instruct the process, program, or session to terminate gracefully to avoid damage or other unwanted effects. Between the opposing outcomes of a fully recovered program and a graceful program termination, however, lie several other exception paths that can deliver full or partial business value, sometimes with only slight delay. To that end, this text demonstrates these paths and discusses various internal and external modalities for communicating exceptions to SAS users, developers, and other stakeholders.

Wouldn't it be nice if your long-running programs could tap you on the shoulder and say, "Okay, I'm all done now"? This quick tip will show how easy it is to have your SAS program send you (or someone else) an email during program execution. Once you have mastered the simple basics, you will find a wide variety of uses for this great feature and wonder how you ever lived without it.

Finding All Differences in Two SAS Libraries Using Proc Compare. Bharat Kumar Janapala. In the clinical industry, validating data sets by parallel programming and comparing the derived data sets with PROC COMPARE is routine practice, but constant updates to the raw data make it difficult to know the differences between two libraries. The current program shows all the differences between libraries in the most optimal way with the help of PROC COMPARE and the SAS dictionary tables. First, the program lists the data sets present in both libraries and flags the uncommon ones. Second, the program looks up the number of observations and variables present in each library by data set and lists both the uncommon variables and the data sets with differing observation counts. Third, assuming the two libraries are identical, the program compares data sets with matching names using PROC COMPARE and captures the differences, which can be monitored by setting a maximum number of differences per variable for optimization. Finally, the program reads all the differences and produces a consolidated report followed by a description by data set.

Let Environment Variables Help You: Moving Files Across Studies and Building SAS Libraries On the Go. In clinical trials, SAS data sets and programs are stored across various studies under various products on Unix. SAS programmers need to access those locations frequently, to read data for programming or to copy files for reuse in new analyses. Typing long directory paths is time-consuming and nerve-racking. This paper describes an efficient way to store various directory paths in advance through environment variables. Those predefined environment variables can be used for Unix file operations (copying, deleting, searching files, etc.). The information carried by these variables can also be passed to SAS to build libraries wherever you go.
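A minimal sketch of that idea, assuming a Unix environment variable named STUDY1 has been exported in the shell before SAS starts (the variable name and path are hypothetical):

    /* In the shell:  export STUDY1=/projects/product_a/study_001/data  */
    %let study1path = %sysget(STUDY1);     /* read the environment variable into a macro variable */
    libname study1 "&study1path";          /* build the library wherever the variable points */
    proc contents data=study1._all_ nods;  /* quick check that the location resolved */
    run;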
Check Please: An Automated Approach to Checking Logs. In the pharmaceutical industry, we find ourselves rerunning our programs again and again for each deliverable. These programs can be run individually in an interactive SAS session, which allows us to review the logs as we execute the programs. We can also run the individual programs in batch and open each individual log to review for unwanted log messages, such as ERROR, WARNING, uninitialized, has been converted to, etc. Both of these approaches are fine if there are only a handful of programs to execute. But what do you do if you have hundreds of programs that need to be rerun? Do you want to open every log and search for unwanted messages? This manual approach could take hours and is prone to accidental oversight. This paper discusses a macro that searches a specified directory and checks either all of the logs in the directory, only the logs with a specific naming convention, or only the files listed. The macro then produces a report that lists all of the files checked and indicates whether any issues were found.

Let SAS Do Your Dirty Work. Making sure you have all the necessary information to replicate a deliverable can be an unwieldy task. You want to make sure that all the raw data sets are saved, that all the derived data sets, whether SDTM or ADaM, have been saved, and you would prefer that the datetime stamps be preserved. Not only do you need the data sets, you also need to keep a copy of all the programs that were used to produce the deliverable, as well as the corresponding logs from when the programs were executed. Any other information needed to produce the required outputs also needs to be saved. All of this needs to be done for every deliverable, and it is easy to overlook a step or some key piece of information. Most people do this process manually, and it can be time-consuming, so why not let SAS do the work for you?

Comparing .LST Files with Proc Compare Results. Manvitha Yennam and Srinivas Vanam. The most widely used method for validating programs is double programming, which involves two programmers working on the same program and finally comparing their outputs using procedures such as PROC COMPARE. The PROC COMPARE results are generally produced as .LST files. Most companies perform a manual review, checking each .LST file to make sure the results match. But this manual process is both time-consuming and error-prone. The purpose of this paper is to use a SAS macro instead of following the manual review process. This SAS macro reads all the .LST files under a provided path, creates a summary list of the files, and indicates whether each file has issues as well as the type of issue.
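The macros in the log-checking and .LST-checking papers above are their authors' own, but the core file-scanning step can be sketched in a few lines of Base SAS; the path and file name below are hypothetical, and a full solution would loop over a directory listing:

    /* Flag suspect lines in one log file. */
    data work.log_issues;
       infile "/project/logs/demog.log" truncover;   /* hypothetical log location */
       input line $char256.;
       if index(line, 'ERROR')   or
          index(line, 'WARNING') or
          index(line, 'uninitialized') or
          index(line, 'has been converted to') then output;
    run;
    proc print data=work.log_issues noobs; run;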
Read any publication, from the national media to your local news site. Educational achievement, particularly in STEM fields, is a serious concern, and billions of dollars are spent to address the problem. How can SAS be applied to analyze the results of interventions and, just as importantly, communicate the results of those analyses to non-technical audiences? Using real data from the evaluation of an educational game, this presentation walks through the steps of an evaluation, from needs assessment to measurement validation to pre-post test comparisons. Techniques applied include PROC FREQ with options for correlated data, PROC FACTOR for factor analysis, and PROC TTEST and PROC GLM for repeated-measures ANOVA. Liberal use is made of ODS Statistical Graphics throughout. Because standard SAS/STAT procedures are used, the analyses can be run on any operating system with SAS, including SAS Studio on an iPad.

Constructing Confidence Intervals for the Difference of Binomial Proportions in SAS. Given two binomial proportions, we wish to construct a confidence interval for their difference. The most widely known method is the Wald method (that is, the normal approximation), but it can produce undesirable results in extreme cases (for example, when the proportions are near 0 or 1). Many other methods exist, including asymptotic methods, approximate methods, and exact methods. This paper presents nine different methods for constructing such confidence intervals, eight of which are available in SAS 9.3 procedures. The methods are compared, and thoughts are offered on which method to use.

An Animated Guide: Incremental Response Modeling in Enterprise Miner. Some people can be expected to buy a product without any marketing contact. If all potential customers are contacted, a company cannot determine the true effect of a marketing manipulation. This talk uses the INCREMENTAL RESPONSE node in SAS Enterprise Miner to solve a basic marketing problem. Marketers typically target, and spend money contacting, all likely customers. This is wasteful, since some of these people would become customers on their own. The node uses a data set to separate customers into groups: 1) those likely to buy anyway, 2) those likely to buy if they are the subject of a marketing campaign, and 3) customers expected to be resistant to marketing efforts.

Employing Latent Analysis in Longitudinal Studies: An Exploration of Independently Developed SAS Procedures. This paper discusses several ways to investigate latent variables in longitudinal surveys using three independently developed SAS procedures. Three different analyses for latent variable discovery will be reviewed and explored: latent class analysis, latent transition analysis, and latent trajectory analysis. The latent analysis procedures explored in this paper (all developed outside of SAS Institute) are PROC LCA, PROC LTA and PROC TRAJ. The specifics behind these procedures, and how to add them to one's procedure library, will be explored and then applied to exploratory case study questions. The effects of latent variables on the fit and use of regression models, compared to the same models using observed data, will also be briefly reviewed. The data used for this study were obtained through the National Longitudinal Study of Adolescent Health, a study distributed and collected by Add Health. Data were analyzed using SAS 9.4. This paper is intended for moderate to advanced SAS users. It is also written for an audience with a background in statistics and behavioral statistics.
The MIghty PROC MI to the Rescue. Missing data are a feature of many data sets: participants may withdraw from studies or fail to provide self-reported measures, and sometimes technical issues interfere with data collection. If we use only the complete observations, we are left with larger standard errors, wider confidence intervals, and larger p-values. Missing data methods such as complete case analysis or imputation can be used, but the missing data mechanism and pattern must first be understood. This paper provides an overview of the sources, patterns and mechanisms of missing data. A complete data set will be used to obtain the true regression analysis results. Two data sets with missing values will then be created, one with data missing completely at random and one with data missing not at random. The missing data methods of complete case analysis and single and multiple imputation will be applied. PROC MI and PROC MIANALYZE in SAS 9.4 will be used for the analysis. The results of the missing data methods will be compared with each other and with the true results.

John Amrhein and Fei Wang. Motivated by the need for equivalence testing in clinical trials, this paper provides insights into tests of equivalence. We summarize and compare equivalence tests for different study designs, including designs for one-sample problems, designs for two-sample problems (paired observations, and two unrelated samples), and designs with multiple treatment arms. Power and sample size estimation are discussed. We also provide examples of implementing the methods using the FREQ, TTEST, MIXED, and POWER procedures in SAS/STAT software.

Distance Correlation for Vectors: A SAS Macro. The Pearson correlation coefficient is well known and widely used. However, it suffers from certain constraints: it is a measure of linear dependence (only), it does not provide a test of statistical independence, and it is restricted to univariate random variables. Since its inception, related and alternative measures have been proposed to overcome these constraints, and several new measures to replace or complement the Pearson correlation have appeared in the statistical literature in recent years. Szekely et al. (2007) describe one such measure, distance correlation, that overcomes the shortcomings of the Pearson correlation. Distance correlation is defined for two random variables X and Y (which may be vectors) as a weighted distance function applied to the difference between the joint characteristic function for (X, Y) and the product of the individual characteristic functions for X and Y. In practice, it is estimated by computing individual distance matrices for X and Y, and the distance correlation is a measure of similarity between the two matrices. For the bivariate normal case, distance correlation is a function of the Pearson correlation. Distance correlation also supports an associated statistical test of independence. Distance correlation has performed well in simulation studies comparing it with other alternatives to the Pearson correlation. Here we present a Base SAS macro to compute distance correlation for arbitrary real vectors.
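Returning to the multiple-imputation workflow outlined in "The MIghty PROC MI to the Rescue" above, a minimal sketch of the impute-analyze-combine cycle (all variable names are hypothetical) might look like:

    /* 1) Impute: create 5 completed data sets. */
    proc mi data=work.study out=work.mi_out nimpute=5 seed=54321;
       var x1 x2 y;
    run;
    /* 2) Analyze each imputation; BY _imputation_ keeps the results separate. */
    proc reg data=work.mi_out outest=work.est covout;
       model y = x1 x2;
       by _imputation_;
    run;
    /* 3) Combine the estimates using Rubin's rules. */
    proc mianalyze data=work.est;
       modeleffects intercept x1 x2;
    run;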
Determining Water Pump Functionality in Tanzania Using SAS EM and VA. Kiran Chowdaravarpu, Vivek Manikandan Damodaran and Ram Prasad Poudel. Accessibility to clean and hygienic water is a basic luxury every human deserves. In Tanzania, 23 million people lack access to clean water and are forced to walk miles to fetch water for their daily needs. The existing problem is largely due to poor maintenance and inefficient functioning of infrastructure such as hand pumps. To address the current water crisis and ensure accessibility to clean water, there is a need to locate the non-functional and partially functional pumps so they can be repaired or replaced. It is highly cost-ineffective and impractical to inspect the functionality of 74,251 water points manually in a country like Tanzania, where resources are very limited. The objective of this study is to build a model that predicts which pumps are functional, which need some repairs, and which do not work at all, using data from the Tanzania Ministry of Water. We also identify the important variables that predict a pump's working condition. The data are maintained by the Taarifa waterpoints dashboard. After preprocessing, the final data consisted of 39 variables and 74,251 observations. We used SAS Bridge for ESRI and SAS VA to portray the spatial variation of functional water points at the regional level of Tanzania along with other socioeconomic variables. Among decision tree, neural network, logistic regression and HP random forest models, the HP random forest model was found to be the best. The misclassification rate, sensitivity and specificity of the model were 24.91, 62.7 and 91.7 percent, respectively. Classifying water pumps using the champion model will speed up maintenance operations for water points, ensuring clean and accessible water across Tanzania at low cost and in less time.

Fitting Threshold Models with SAS PROC NLIN and NLMIXED: Hierarchical Generalized Linear Models for Risk-Adjusted 30-Day and 90-Day Readmission Rates. The Achievements in Clinical Excellence (ACE) program encourages excellence at all facilities in a behavioral health network by promoting those that deliver the highest quality of care. Two key outcome-effectiveness benchmarks in the ACE program are the risk-adjusted 30-day readmission rate and the risk-adjusted 90-day readmission rate. Risk adjustment is accomplished with hierarchical generalized linear models (HGLM) to account for differences among hospitals in patient demographic and clinical characteristics. One year of admission administrative data (June 30, 2013 to July 1, 2014) for the 30-day (N=78,761; 2,233 hospitals) and 90-day (N=74,540; 2,205 hospitals) frameworks served as the data sources. HGLM simultaneously models two levels: 1) a patient-level model of the log-odds of hospital readmission using age, sex, selected clinical covariates, and a hospital-specific intercept, and 2) a hospital-level model with random hospital intercepts that account for within-hospital correlation of the observed outcomes. PROC GLIMMIX was used to implement the HGLM with hospital as the random (hierarchical) variable, separately for substance abuse and mental health (MH) admissions, and the results were pooled to obtain hospital risk-adjusted readmission rates. The HGLM methodology is derived from the Centers for Medicare & Medicaid Services (CMS) documentation for the Hospital-Wide All-Cause Risk-Standardized Readmission Measure SAS package. The methodology was performed separately on the 30-day and 90-day readmission data. The final metrics are the hospital risk-adjusted 30-day readmission rate and the hospital risk-adjusted 90-day readmission rate. The HGLM models were cross-validated on new production data that overlapped the development sample. The revised HGLM models were tested in April 2015, and the resulting statistics were very similar. In short, the revised-model test validated the original HGLM models, since the revised models were based on a different sample.
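The ACE models themselves follow the CMS specification, but the basic PROC GLIMMIX pattern for a random-intercept logistic model of readmission (with hypothetical variable names) is roughly:

    proc glimmix data=work.admissions method=laplace;
       class hospital_id sex;
       model readmit_30d(event='1') = age sex comorbidity_score
             / dist=binary link=logit solution;
       random intercept / subject=hospital_id;   /* hospital-level random effect */
    run;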
Demystifying the CONTRAST and ESTIMATE Statements. Many analysts are confused about how to use the CONTRAST and ESTIMATE statements in SAS to test various general linear hypotheses (GLH). A GLH can be used to test key comparisons and complex hypotheses parsimoniously. However, constructing even simple GLHs tends to intimidate some SAS users, and examples from various sources seem to produce the correct answers as if by magic. The key is to understand how the procedure parameterizes the model and then use that parameterization to construct the GLH. CONTRAST and/or ESTIMATE statements can be found in many of the modeling procedures in SAS. However, not all procedures use the same syntax for these statements. This presentation will demystify the use of the CONTRAST and ESTIMATE statements using examples in PROCs GLM, LOGISTIC, MIXED, GLIMMIX and GENMOD.

A Short Introduction to Reliability Engineering and PROC RELIABILITY for Non-Engineers. Reliability engineering studies how often a product or system fails under stated conditions over time. In the modern world, it is important that a product or system last a long time. Even though technology is well developed these days, some systems will eventually fail. Mathematical and statistical methods are useful for quantifying and analyzing reliability data. However, the first priority of reliability engineering is to apply engineering knowledge to prevent the likelihood of failures. This paper introduces the idea of reliability engineering to non-engineers, along with PROC RELIABILITY, and demonstrates some applications to reliability data.

Simulating Queuing Models in SAS. This paper introduces users to simulating queuing models using a set of SAS macros: MM1, MG1, and MMC. The SAS macros simulate queuing systems in which entities (such as customers, patients, cars or email messages) arrive, get served either at a single station or at several stations in turn, might have to wait in one or more queues for service, and then may leave. After the simulation, SAS gives graphical output as well as a statistical analysis of the desired queuing model.
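The MM1, MG1, and MMC macros are the paper's own; as a flavor of what queue simulation involves, here is an independent sketch of an M/M/1 queue using Lindley's recursion in a DATA step (the arrival and service rates are made up):

    /* Waiting-time recursion: W(n+1) = max(0, W(n) + S(n) - A(n+1)). */
    data work.mm1 (keep=cust wait);
       lambda = 0.8;  mu = 1.0;                   /* hypothetical arrival and service rates */
       call streaminit(2017);
       wait = 0;
       do cust = 1 to 10000;
          serv = rand('exponential') / mu;        /* service time of the current customer */
          gap  = rand('exponential') / lambda;    /* time until the next arrival */
          output;
          wait = max(0, wait + serv - gap);       /* waiting time of the next customer */
       end;
    run;
    proc means data=work.mm1 mean p95 max; var wait; run;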
Selection Bias: How Can Propensity Score Utilization Help Control for It? An important strength of observational studies is the ability to estimate a key behavior's or treatment's effect on a specific health outcome. This is a crucial strength, as most health outcomes research studies are unable to use experimental designs due to ethical and other constraints. Keeping this in mind, one drawback of observational studies (which experimental studies naturally control for) is that they lack the ability to randomize their participants into treatment groups. This can result in the unwanted inclusion of a selection bias. One way to adjust for a selection bias is through the utilization of a propensity score analysis. In this paper we explore an example of how to utilize these types of analyses. In order to demonstrate this technique, we will seek to explore whether recent substance abuse has an effect on an adolescent's identification of suicidal thoughts. In order to conduct this analysis, a selection bias was identified and adjustment was sought through three common forms of propensity scoring: stratification, matching, and regression adjustment. Each form is separately conducted, reviewed, and assessed as to its effectiveness in improving the model. Data for this study were gathered through the Youth Risk Behavior Surveillance System, an ongoing nationwide project of the Centers for Disease Control and Prevention. This presentation is designed for any level of statistician, SAS programmer, or data analyst with an interest in controlling for selection bias.

Using SAS to Analyze Countywide Survey Data: A Look at Adverse Childhood Experiences and Their Impact on Long-Term Health. The adverse childhood experiences (ACEs) scale measures childhood exposure to abuse and household dysfunction. Research suggests ACEs are associated with higher risks of engaging in risky behaviors, poor quality of life, morbidity, and mortality later in life. In Santa Clara County, a large diverse county where 88% of residents have household internet access, we conducted a county-wide Behavioral Risk Factor Survey of adults with a unique web-based follow-up. We conducted a random-digit-dial telephone survey (N=4,186) and a follow-up online survey using the CDC BRFSS ACE module. Of those eligible for the web-based survey, the response rate was 33%. The online ACE module comprised 11 questions forming 8 categories of abuse and household dysfunction. PROC SURVEYFREQ and PROC SURVEYLOGISTIC were used in SAS 9.4 to analyze the survey data and provide county-wide estimates for Santa Clara County as a whole. Most respondents (74%) reported having experienced 1 or more ACEs. Emotional abuse was the most common (44%), followed by household substance abuse (28%) and household mental illness (25%). The prevalence of emotional abuse, household substance abuse, physical abuse, and household mental illness was highest among individuals with high (3+) and low (1-2) ACE counts. Indicators of perceived poor health showed a strong association among individuals with ACEs. The odds of 1 or more poor mental health days in the past month were higher among individuals with low ACEs (OR=2.86), high ACEs (OR=6.74), and among women (OR=2.27). A web-based survey offers a reliable means to assess a population about sensitive subjects like ACEs at lower cost than a telephone survey in smaller jurisdictions. Results suggest ACEs are common among adults in the county and may be under-reported in telephone interviews. PROC SURVEYFREQ and PROC SURVEYLOGISTIC in SAS are powerful tools that can be used to analyze survey data, especially for small-area estimates of the health of county residents.
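As a sketch of the survey-procedure pattern the authors describe (the design variables here are hypothetical, not the actual Santa Clara County design):

    /* Design-based estimation: strata, clusters, and weights must match the sample design. */
    proc surveylogistic data=work.brfss_ace;
       strata region;                     /* hypothetical design stratum */
       cluster household_id;              /* hypothetical primary sampling unit */
       weight  final_wt;                  /* analysis weight from the survey file */
       class ace_level (ref='none') sex / param=ref;
       model poor_mh_days(event='1') = ace_level sex age;
    run;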
How D-I-D You Do That? Basic Difference-in-Differences Models in SAS. Long a mainstay of econometrics research, difference-in-differences (DID) models have only recently become more commonly used in health services and epidemiologic research. DID study designs are quasi-experimental, can be used with retrospective observational data, and do not require exposure randomization. This study design estimates the difference in pre-post changes in an outcome comparing an exposed group to an unexposed (reference) group. The outcome change in the unexposed group estimates the expected change in the exposed group had the group been, counterfactually, unexposed. By subtracting this change from the change in the exposed group (the "difference in differences"), the effects of background secular trends are removed. In the basic DID model, each subject serves as his or her own control, removing confounding by known and unknown individual factors associated with the outcome of interest. Thus, the DID generates a causal estimate of the change in an outcome associated with the initiation of the exposure of interest while controlling for biases due to secular trends and confounding. A basic repeated-measures generalized linear model provides estimates of population-average slopes between two time points for the exposed and unexposed groups and tests whether the slopes differ by including an interaction term between the time and exposure variables. In this paper, we illustrate the concepts behind the basic DID model and present SAS code for running these models. We include a brief discussion of more advanced DID methods and present an example of a real-world analysis, using data from a study on the impact of introducing a value-based insurance design (VBID) medication plan at Kaiser Permanente Northern California on change in medication adherence.

Using PROC PHREG to Assess Hazard Ratios in a Longitudinal Environmental Health Study. Air pollution, especially combustion products, can activate metabolic disorders through inflammatory pathways, potentially leading to obesity. The effect of air pollution on BMI growth was shown in a previous study (Jerrett, et al. 2014). Recognizing the role of air pollution in the development of obesity in children can help guide possible interventions to reduce obesity formation. The objective of this paper is to analyze the obesity incidence of children participating in the Children's Health Study (CHS) who were non-obese at baseline, identify the time interval for the onset of obesity, and identify the effects of various risk factors, especially air pollutants. The PROC PHREG procedure was used, within a macro, to create a model that included community random effects, was stratified by sex, and adjusted for baseline characteristics.

Using PROC LOGISTIC for Conditional Logistic Regression to Evaluate Vehicle Safety Performance. The LOGISTIC procedure has several capabilities beyond standard logistic regression on binary outcome variables. For a conditional logit model, PROC LOGISTIC can perform several types of matching: 1:1, 1:M, and even M:N matching. This paper shows an example of using PROC LOGISTIC for conditional logit models to evaluate vehicle safety performance in fatal accidents using the Fatality Analysis Reporting System (FARS) 2004-2011 database. Conditional logistic regression models were fit with an additional stratum parameter to model the relationship between driver fatality and the vehicle's continent of origin.
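A bare-bones version of the conditional logit setup described above, with hypothetical variable names (each crash is a stratum; the outcome is driver fatality):

    proc logistic data=work.fars;
       strata crash_id;                       /* matched set: drivers in the same crash */
       class origin (ref='NorthAmerica') / param=ref;
       model died(event='1') = origin age restraint_use;
    run;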
Identifying Duplicates Made Easy. Elizabeth Angel and Yunin Ludena. Have you ever had trouble removing or finding the exact type of duplicate you want? SAS offers several different ways to identify, extract, and/or remove duplicates, depending on exactly what you want. We will start by demonstrating perhaps the most commonly used method, PROC SORT, and the types of duplicates it can identify and how to remove, flag, or store them. Then, we will present the other, less commonly used methods, which might give information that PROC SORT cannot offer, including the DATA step (FIRST./LAST.), PROC SQL, PROC FREQ, and PROC SUMMARY. The programming is demonstrated at a beginner's level.

Don't Forget About Small Data. Beginning in the world of data analytics and eventually flowing into mainstream media, we are seeing a lot about Big Data and how it can influence our work and our lives. Through examples, this paper will explore how Small Data, which is everything Big Data is not, can and should influence our programming efforts. The ease with which we can read and manipulate data from different formats into usable tables in SAS makes using data to manage data very simple and supports healthy and efficient practices. This paper will explore how using small or summarized data can help to organize and track program development, simplify coding and optimize code.

Let the CAT Out of the Bag: String Concatenation in SAS 9. Are you still using TRIM, LEFT, and vertical bar operators to concatenate strings? It's time to modernize and streamline that clumsy code by using the string concatenation functions introduced in SAS 9. This paper is an overview of the CAT, CATS, CATT, and CATX functions introduced in SAS 9, and the new CATQ function added in SAS 9.2. In addition to making your code more compact and readable, this family of functions also offers some new tricks for accomplishing previously cumbersome tasks.

SAS Abbreviations: A Shortcut for Remembering Complicated Syntax. Yaorui Liu, Department of Preventive Medicine, University of Southern California. One of many difficulties for a SAS programmer is remembering how to accurately use SAS syntax, especially syntax that includes many parameters. Not mastering the basic syntax parameters by heart will definitely make one's coding inefficient, because one would have to check the SAS reference manual constantly to ensure that the syntax is implemented properly. One of the more useful tools in SAS, but seldom known by novice programmers, is SAS Abbreviations. It allows users to store text strings, such as the syntax of a DATA step function, a SAS procedure, or a complete DATA step, under a user-defined and easy-to-remember abbreviated term. Once this abbreviated term is typed within the Enhanced Editor, SAS automatically brings up the corresponding stored syntax. Knowing how to use SAS Abbreviations will ultimately benefit programmers with varying levels of SAS expertise. In this paper, various examples of utilizing SAS Abbreviations will be demonstrated.

Implementation of Good Programming Practices in Clinical SAS. Base SAS software provides users with many choices for accessing, manipulating, analyzing, and processing data and results. Partly due to the power offered by the SAS software and the size of data sources, many application developers and end users are in need of guidelines for more efficient use. This presentation highlights my personal top ten list of performance tuning techniques for SAS users to apply in their applications. Attendees learn DATA and PROC step language statements and options that can help conserve CPU, I/O, data storage, and memory resources while accomplishing tasks involving processing, sorting, grouping, joining (merging), and summarizing data.
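In the spirit of that top-ten list, one common tuning technique is to subset rows and trim variables as early as possible so that less data moves through each step. A small illustrative sketch with hypothetical names:

    options fullstimer;   /* report CPU, memory and I/O statistics in the log */
    /* WHERE= subsets rows as they are read; KEEP= drops unneeded columns early. */
    data work.claims_2016;
       set rawlib.claims(where=(year = 2016) keep=year member_id paid_amt);
    run;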
Sorting a Bajillion Records: Conquering Scalability in a Big Data World. "Big data" is often distinguished as encompassing high volume, velocity, or variability of data. While big data can signal big business intelligence and big business value, it also can wreak havoc on systems and software ill-prepared for its profundity. Scalability describes the ability of a system or software to adequately meet the needs of additional users or its ability to utilize additional processors or resources to fulfill those added requirements. Scalability also describes the adequate and efficient response of a system to increased data throughput. Because sorting data is one of the most common as well as resource-intensive operations in any software language, inefficiencies or failures caused by big data often are first observed during sorting routines. Much SAS literature has been dedicated to optimizing big data sorts for efficiency, including minimizing execution time and, to a lesser extent, minimizing resource usage (i.e., memory and storage consumption). Less attention has been paid, however, to implementing big data sorting that is reliable and robust even when confronted with resource limitations. To that end, this text introduces the SAFESORT macro, which facilitates a priori exception handling routines (which detect environmental and data set attributes that could cause process failure) and post hoc exception handling routines (which detect actual failed sorting routines). If exception handling is triggered, SAFESORT automatically reroutes program flow from the default sort routine to a less resource-intensive routine, thus sacrificing execution speed for reliability. However, because SAFESORT does not exhaust system resources like default SAS sorting routines, in some cases it performs more than 200 times faster than default SAS sorting methods. Macro modularity moreover allows developers to select their favorite sorting routine and, for data-driven disciples, to build fuzzy logic routines that dynamically select a sort algorithm based on environmental and data set attributes.
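SAFESORT's internals belong to the paper, but the reroute idea can be sketched independently: attempt the default sort, check the automatic macro variable SYSERR, and fall back to a slower but lighter TAGSORT pass if the first attempt fails. This is a simplified sketch; a production version would also need to reset the step error state after a failure.

    %macro fallback_sort(ds, byvars);
       proc sort data=&ds; by &byvars; run;
       %if &syserr > 0 %then %do;    /* default sort failed, e.g., out of resources */
          %put NOTE: Default sort failed - retrying with TAGSORT.;
          proc sort data=&ds tagsort; by &byvars; run;   /* trades speed for less disk use */
       %end;
    %mend fallback_sort;
    %fallback_sort(work.bigtable, id date)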
For example, SAS programmers can use PROC HTTP to extract XML or JSON files through the REST API from the NoSQL database. Finally, the paper will show how SAS programmers can convert XML and JSON files to SAS data sets for analysis. For example, SAS programmers can create XMLMap files and use the XMLV2 LIBNAME engine to convert the extracted XML files to SAS data sets.

DS2 Versus Data Step: Efficiency Considerations
There is recognition that in large, complex systems, the object-oriented concepts available in DS2 – modularity, code reuse, and ease of debugging – can provide increased efficiency. Object-oriented programming also allows multiple teams of developers to work on the same project easily. DS2 was designed for data manipulation and data modeling applications that can achieve increased efficiency by running code in threads, splitting the data across multiple processors and disks. Of course, performance also depends on hardware architecture and the amount of effort you put into tuning your architecture and code. Join our panel for a discussion of architecture, tuning, and data size considerations in determining whether DS2 is the more efficient alternative.

Using Shared Accounts in Kerberized Hadoop Clusters with SAS®: How Can I Do That?
Using shared accounts to access third-party data servers is a common architecture in SAS® environments. SAS software can support seamless user access to shared accounts in databases such as Oracle via group definitions and outbound authentication domains in Metadata. However, the configurations necessary to leverage shared accounts in Hadoop clusters with Kerberos authentication are more complicated. Not only must Kerberos tickets be generated and maintained simply to access the Hadoop environment, but those tickets must allow access as the shared account instead of the individual users' accounts. Methods for implementing this arrangement in SAS environments can be non-intuitive. This paper starts by outlining several general architectures of shared accounts in Kerberized Hadoop environments. It then presents possible methods of managing such shared-account access in SAS environments, including specific implementation details, code samples, and security implications. Finally, troubleshooting methods are presented for when issues arise. Example code and configurations for this paper were developed on a SAS 9.4 system running on Red Hat Enterprise Linux 6.

What just happened? A visual tool for highlighting differences between two data sets
Base SAS includes a great utility for comparing two data sets: PROC COMPARE. The output, though, can be hard to read, as the differences between values are listed separately for each variable. It's hard to see the differences across all variables for the same observation. This talk presents a macro to compare two SAS data sets and display the differences in Excel. The PROC COMPARE OUT= option creates an output data set with all the differences. This data set is then processed with PROC REPORT using ODS EXCEL and colour highlighting to show the differences in an Excel file, making the differences easy to see.

Tips and Tricks for Producing Time-Series Cohort Data
Developers working on a production process need to think carefully about ways to avoid future changes that require change control, so it's always important to make the code dynamic rather than hardcoding items into the code. Even if you are a seasoned programmer, the hardcoded items might not always be apparent.
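Returning to the "What just happened?" talk above, here is a stripped-down sketch of the compare-then-report pipeline it describes (data set names, the key variable, and the highlighting logic are hypothetical; the macro itself does considerably more):

proc compare base=work.old compare=work.new
             out=work.diffs outnoequal outbase outcomp noprint;
  id subjid;  /* hypothetical key variable */
run;

ods excel file="diffs.xlsx";
proc report data=work.diffs;
  /* _TYPE_ flags BASE vs COMPARE rows; colour highlighting omitted here */
  columns _type_ _obs_ subjid;
run;
ods excel close;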
This paper assists in identifying the harder-to-reach hardcoded items and addresses ways to effectively use control tables within the SAS® software tools to deal with sticky areas of coding such as formats, parameters, grouping hierarchies, and standardization. The paper presents examples of several ways to use control tables and demonstrates why this usage prevents the need for coding changes. Practical applications are used to illustrate these examples.

The Power of the Function Compiler: PROC FCMP
PROC FCMP, the user-defined function procedure, allows SAS users of all levels to get creative with SAS and expand their scope of functionality. PROC FCMP is the superhero of SAS functions in its vast capability to create and store uniquely defined functions that can later be used in DATA steps. This paper outlines the basics as well as tips and tricks for getting the most out of this procedure.

Creating Viable SAS® Data Sets From Survey Monkey® Transport Files
Survey Monkey is an application that provides a means for creating online surveys. Unfortunately, the transport (Excel) file from this application requires a complete overhaul in order to do any serious data analysis. Besides having a peculiar structure and containing extraneous data points, the column headers become very problematic when importing the file into SAS. In fact, the initial SAS data set is virtually unusable. This paper explains a systematic approach for creating a viable SAS data set for doing serious analysis.

Document and Enhance Your SAS® Code, Data Sets, and Catalogs with SAS Functions, Macros, and SAS Metadata
Roberta Glass and Louise Hadden
Discover how to document your SAS® programs, data sets, and catalogs with a few lines of code that include SAS functions, macro code, and SAS metadata. Do you start every project with the best of intentions to document all of your work, and then fall short of that aspiration when deadlines loom? Learn how your programs can automatically update your processing log. If you have ever wondered who ran a program that overwrote your data, SAS has the answer! And if you don't want to be tracing back through a year's worth of code to produce a codebook for your client at the end of a contract, SAS has the answer!

Don't Get Blindsided by PROC COMPARE
For a statistical programmer in the pharmaceutical industry, each work day is new. A project you have been working on for a few months can be changed at a moment's notice, and you need to implement changes quickly and accurately. To ensure that the desired changes are made quickly, and most especially accurately, when the task entails a find-and-replace sort of operation across all the SAS programs in a directory (or multiple directories), a macro called "Replacer" can come to the rescue. Process flow: first, it reads all the SAS programs in a directory one by one and converts every SAS program to a SAS data set using grepline. After this, it reads all the data sets one by one, replacing an existing string with the desired string using if-then conditional logic. Finally, it outputs each updated SAS data set as a new SAS program at a specified location. This macro has multiple parameters you can specify – the input directory, the output directory, and the from and to strings – which gives the programmer more control over the process.
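Stepping back to the PROC FCMP paper above, a minimal sketch of defining and using one function (the function itself is our own hypothetical example):

proc fcmp outlib=work.funcs.demo;
  /* hypothetical user-defined function: Fahrenheit to Celsius */
  function f2c(degf);
    return ((degf - 32) * 5 / 9);
  endsub;
run;

options cmplib=work.funcs;  /* tell the DATA step where to find it */

data temps;
  do degf = 32, 98.6, 212;
    degc = f2c(degf);
    output;
  end;
run;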
A quick example of the practical use of the Replacer macro: when making the transition from a Windows to a UNIX server, we needed to change the path of our init.sas and change all backslashes (\) to forward slashes (/). Let's assume we have 100 programs and we decide to do this manually. It can be a cumbersome task, and given time constraints, accuracy is not guaranteed. The programmer may end up spending a couple of hours completing the necessary changes to each program before re-running all the programs to make sure the appropriate changes have taken place. Replacer can accomplish the same task in less than 2 minutes.

Ditch the Data Memo: Using Macro Variables and Outer Union Corresponding in PROC SQL to Create Data Set Summary Tables
Data set documentation is essential to good programming and for sharing data set information with colleagues who are not SAS programmers. However, most SAS programmers dislike writing memos that must be updated each time a data set is manipulated. Utilizing two tools – macro variables and the OUTER UNION CORRESPONDING set operator in PROC SQL – we can write concise code that exports a single summary table containing important data set information, serving in lieu of data memos. These summary tables can contain the following data set information and much more: 1) the change in the number of records in a data set due to dropping records, collapsing across IDs, or removing duplicate records; 2) summary statistics of key variables; and 3) trends across time. This presentation requires some basic understanding of macros and SQL queries.

File Management Using Pipes and X Commands in SAS®
SAS for Windows can be an extremely powerful piece of software, not only for analyzing data, but also for organizing and maintaining output and permanent data sets. By employing pipes and operating system ('X') commands within a SAS session, you can easily and effectively manage files of all types stored on your local network.

Handling longitudinal data from multiple sources: experience with analyzing kidney disease patients
Elani Streja and Melissa Soohoo
Analyses in health studies using multiple data sources often come with a myriad of complex issues such as missing data, merging multiple data sources, and date matching. Combining multiple data sources is not straightforward, as often there is discordance or missing information such as dates of birth, dates of death, and even demographic information such as sex, race, ethnicity, and pre-existing comorbidities. It therefore becomes essential to document the data source from which the variable information was retrieved. Analysts often rely on one resource as the dominant variable to use in analyses and ignore information from other sources. Sometimes, even the database thought to be the "gold standard" is in fact discordant with other data sources. In order to increase sensitivity and information capture, we have created a source variable which documents the combination of sources on which the data were concordant and from which values were derived. In our example, we will show how to resolve information on date of birth, date of death, date of transplant, sex, and race combined from 3 data sources on kidney disease patients: the United States Renal Data System, the Scientific Registry of Transplant Recipients, and data from a large dialysis organization. This paper focuses on approaches to handling multiple large databases in preparation for analyses.
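As a small illustration of the OUTER UNION CORRESPONDING idea from "Ditch the Data Memo" above (table names and labels are hypothetical), the operator stacks query results while aligning like-named columns:

proc sql;
  create table summary as
  select 'Raw records'          as step length=24, count(*) as n from work.raw
  outer union corr
  select 'After de-duplication' as step length=24, count(*) as n from work.dedup;
quit;

The result is a two-row table that can be exported as the data memo's record-count section.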
In addition, we will show how to summarize and prepare longitudinal lab measurements (from multiple sources) for use in analyses.

An Array of Fun: Macro Variable Arrays
Like all skilled tradespeople, SAS® programmers have many tools at their disposal. Part of their expertise lies in knowing when to use each tool. In this paper, we use a simple example to compare several common approaches to generating a requested report: the TABULATE, TRANSPOSE, REPORT, and SQL procedures. We investigate the advantages and disadvantages of each method and consider when applying it might make sense. A variety of factors are examined, including the simplicity, reusability, and extensibility of the code, in addition to the opportunities that each method provides for customizing and styling the output. The intended audience is beginning to intermediate SAS programmers.

Something Old, Something New: Flexible Reporting with DATA Step-based Tools
The report looks simple enough – a bar chart and a table, like something created with the GCHART and REPORT procedures. But there are some twists to the reporting requirements that make those procedures not quite flexible enough. The solution was to mix "old" and "new" DATA step-based techniques to solve the problem: Annotate data sets are used to create the bar chart, and the Report Writing Interface (RWI) is used to create the table. Without a whole lot of additional code, an extreme amount of flexibility is gained. The goal of this paper is to take a specific example of a couple of generic principles of programming (at least in SAS®): 1. The tools you choose are not always the most obvious ones. So often, out of habit or comfort level, we get zeroed in on specific tools for reporting tasks. Have you ever heard anyone say, "I use TABULATE for everything" or "Isn't PROC REPORT wonderful, it can do anything"? While these tools are great (I've written papers on their use), it's very easy to get into a rut, squeezing out results that might have been produced more easily, flexibly, or effectively with something else. 2. It's often easier to make your data fit your reporting than to make your reporting fit your data. It always takes data to create a report, and it's very common to let the data drive report development. We struggle and fight to get the reporting procedures to work with our data. There are numerous examples of complicated REPORT or TABULATE code that works around the structure of the data. However, the data manipulation tools in SAS (the DATA step, SQL, procedure output) can often be used to preprocess the data to make the report code significantly simpler and easier to maintain and modify.

Proc Document, The Powerful Utility for ODS Output
The DOCUMENT procedure is a little-known procedure that can save you vast amounts of time and effort when managing the output of your SAS® programming efforts. This procedure is deeply associated with the mechanism by which SAS controls output in the Output Delivery System (ODS). Have you ever wished you didn't have to modify and rerun the report-generating program every time there was some tweak in the desired report? PROC DOCUMENT enables you to store one version of the report as an ODS Document object and then render it in many different output forms, such as PDF, HTML, listing, and RTF, without rerunning the code.
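Here is a minimal sketch of that store-once, render-many pattern (the document and file names are hypothetical, and the real paper covers much more):

/* store the output once */
ods document name=work.mydoc(write);
proc means data=sashelp.class;
run;
ods document close;

/* later: replay the stored output to PDF without rerunning PROC MEANS */
proc document name=work.mydoc;
  ods pdf file="report.pdf";
  replay;
  ods pdf close;
quit;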
Have you ever wished you could extract just those pages of the output that apply to certain BY variables such as State, StudentName, or CarModel? With PROC DOCUMENT, you have WHERE capabilities to extract these. Do you want to customize the table of contents that assorted SAS procedures produce when you make frames for the table of contents with HTML, or use the facilities available for PDF? PROC DOCUMENT enables you to get to the inner workings of ODS and manipulate them. This paper addresses PROC DOCUMENT from the viewpoint of end results, rather than providing a complete technical review of how to do the task at hand. The emphasis is on the benefits of using the procedure, not on detailed mechanics. A number of practical applications are presented for everyday, real-life challenges that arise in manipulating output in HTML, PDF, and RTF formats.

A SAS macro for quick descriptive statistics
Arguably the most required table in publications is the description-of-the-sample table, fondly referred to among statisticians as "Table 1". This table displays means and standard errors, medians and IQRs, and counts and percentages for the variables in the sample, often stratified by some variable of interest (e.g., disease status, recruitment site, sex). While this table is extremely useful, constructing it can be time consuming and, frankly, rather boring. I will present two SAS macros that facilitate the creation of Table 1. The first is a "quick and dirty" macro that will output the results for Table 1 in nearly every situation. The second is a "pretty" macro that will output a well-formatted Table 1 for a specific situation.

Controlling Colors by Name: Selecting, Ordering, and Using Colors for Your Viewing Pleasure
Within SAS®, literally millions of colors are available for use in our charts, graphs, and reports. We can name these colors using techniques which include color wheels, RGB (Red, Green, Blue) HEX codes, and HLS (Hue, Lightness, Saturation) HEX codes. But sometimes I just want to use a color by name. When I want purple, I want to be able to ask for purple, not CX703070 or H03C5066. But am I limiting myself to just one purple? What about light purple or pinkish purple? Do those colors have names, or must I use the codes? It turns out that they do have names – names that we can use, select, order, and employ to build our graphs and reports. This paper will show you how to gather color names and manipulate them so that you can take advantage of your favorite purple, be it 'purple', 'grayish purple', 'vivid purple', or 'pale purplish blue'. Much of the control will be obtained through the use of user-defined formats. Learn how to build these formats based on a data set containing a list of these colors.

Tweaking your tables: Suppressing superfluous subtotals in PROC TABULATE
PROC TABULATE is a great tool for generating cross-tab style reports. It's very flexible but has a few annoying limitations. One is suppressing superfluous subtotals. The ALL keyword creates a total or subtotal for the categories in one dimension. However, if there is only one category in the dimension, the subtotal is still shown, which really just repeats the detail line again. This can look a bit strange. This talk demonstrates a method to suppress those superfluous totals by saving the output from PROC TABULATE using the OUT= option.
That data set is then reprocessed to remove the undesirable totals, using the _TYPE_ variable to identify the total rows. PROC TABULATE is then run again against the reprocessed data set to create the final table.

Indenting with Style
Within the pharmaceutical industry, many SAS programmers rely heavily on PROC REPORT. While it is used extensively for summary tables and listings, it is more typical that all processing is done prior to the final report procedure rather than using some of its internal functionality. In many of the typical summary tables, some indenting is required. This may be required to combine information into a single column in order to gain more printable space (as is the case with many treatment group columns). It may also simply make the output more aesthetically pleasing. A standard approach is to pad a character string with spaces to give the appearance of indenting. This requires pre-processing of the data as well as the use of the ASIS=ON option in the column style. While this may be sufficient in many cases, it fails for longer text strings that require wrapping within a cell. Alternative approaches that conditionally utilize the INDENT and LEFTMARGIN options of a column style are presented. This quick-tip presentation will describe such options for indenting, and example outputs demonstrate the pros and cons of each. PROC REPORT and ODS are used in this application with SAS 9.4 in a Windows environment.

SAS® Office Analytics: An Application In Practice: Data Monitoring and Reporting Using Stored Processes
Mansi Singh, Kamal Chugh, Chaitanya Chowdagam and Smitha Krishnamurthy
Time becomes a big factor when it comes to ad-hoc reporting and real-time monitoring of data while project work is in full swing. There are always numerous urgent requests from various cross-functional groups regarding study progress. Typically a programmer has to work on these requests along with the study work, which can become stressful. To address this growing need for real-time monitoring of data and to tailor the requirements for creating portable reports, SAS® has introduced a powerful tool called SAS Office Analytics. SAS Office Analytics with the Microsoft® Add-In provides excellent real-time data monitoring and report-generating capabilities with which a SAS programmer can take ad-hoc requests and data monitoring to the next level. Using this powerful tool, a programmer can build interactive customized reports as well as give access to study data, and anyone with knowledge of Microsoft Office can then view, customize, and/or comment on these reports within Microsoft Office with the power of SAS running in the background. This paper will be a step-by-step guide demonstrating how to create these customized reports in SAS and access study data using the Microsoft Office Add-In feature.

Getting it done with PROC TABULATE
From state-of-the-art research to routine analytics, the Jupyter Notebook offers an unprecedented reporting medium. Historically, tables, graphics, and other output had to be created separately and integrated into a report piece by piece, amidst the drafting of the text. The Jupyter Notebook interface allows for the creation of code cells and markdown cells in any kind of arrangement. While the markdown cells admit all the typical sorts of formatting, the code cells can be used to run code within and throughout the document. In this way, report creation happens naturally and in a completely reproducible way.
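Returning to the PROC TABULATE subtotal tip above, a skeleton of the mechanism (the class variables and the _TYPE_ value to drop are hypothetical; the values actually present depend on your TABLE statement):

proc tabulate data=sashelp.class out=work.tab;
  class sex age;
  var height;
  table sex*age all, height*mean;
run;

data work.tab2;
  set work.tab;
  if _type_ = '00' then delete;  /* drop the superfluous total row */
run;

/* rerun PROC TABULATE (or PROC REPORT) against work.tab2 for the final table */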
Handing a colleague a Jupyter Notebook file to be re-run or revised is much easier and simpler than passing along at least two files: the code and the text. With the new SAS® kernel for Jupyter, all of this is possible and more!

Clinton vs. Trump 2016: Analyzing and Visualizing Sentiments towards Hillary Clinton and Donald Trump's Policies
Sid Grover and Jacky Arora
The United States 2016 presidential election has seen unprecedented media coverage, numerous presidential candidates, and acrimonious debate over wide-ranging topics from candidates of both the Republican and the Democratic parties. Twitter is a dominant social medium for people to understand, express, relate to, and support the policies proposed by their favorite political leaders. In this paper, we aim to analyze the overall sentiment of the public towards some of the policies proposed by Donald Trump and Hillary Clinton using Twitter feeds. We have started to extract the live streaming data from Twitter. So far, we have extracted about 200,000 Twitter feeds by accessing the live-stream API of Twitter, using mytwitterscraper, an open-source real-time Twitter scraper written in Java. We will use SAS® Enterprise Miner and SAS® Sentiment Analysis Studio to describe and assess how people are reacting to each candidate's stand on issues such as immigration and taxes. We will also track and identify patterns of sentiment shifting across time (from March to June) and geographic regions.

Donor Sentiment Analysis of Presidential Primary Candidates Using SAS
In this paper, we explore the advantages of using the DS2 procedure over DATA step programming in SAS®. DS2 is a SAS proprietary programming language appropriate for advanced data manipulation. We explore the use of PROC DS2 to execute queries in databases using FedSQL from within the DS2 program. Several DS2 language elements accept embedded FedSQL syntax, and the run-time generated queries can exchange data interactively between DS2 and the supported database. This enables SQL preprocessing of input tables, which effectively allows processing data from multiple tables in different databases within the same query, thereby drastically reducing processing times and improving performance. We explore the use of DS2 for creating tables, bulk loading tables, manipulating tables, and querying data in an efficient manner. We explore the advantages of using PROC DS2 over DATA step programming, such as support for additional data types, ANSI SQL types, programming structure elements, and the benefits of writing one's own methods or packages available in the DS2 system. We also explore the high-performance version of the DS2 procedure, PROC HPDS2, and show how one can submit DS2 language statements for execution to either a single machine running multiple threads or to a distributed computing environment, including the SAS LASR Analytic Server, thereby massively reducing processing times and improving performance. The DS2 procedure enables users to submit DS2 language statements from a Base SAS session. The procedure enables requests to be processed by the DS2 data access technology, which supports a scalable, threaded, high-performance, and standards-based way to access, manage, and share relational data. In the end, we empirically measure the performance benefits of using PROC DS2 over PROC SQL for processing queries in-database by taking advantage of threaded processing in supported databases such as Oracle.
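As a small taste of the embedded-FedSQL pattern the DS2 paper above explores, here is a minimal sketch (the input table and columns are hypothetical):

proc ds2;
  data work.highpay (overwrite=yes);
    method run();
      /* the FedSQL query runs first; DS2 then iterates over its result set */
      set {select name, salary from work.staff where salary > 50000};
    end;
  enddata;
run;
quit;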
Social Media, Anonymity, and Fraud: HP Forest Node in SAS® Enterprise Miner™
You may encounter people who used SAS® long ago (perhaps in university) or through very limited use in a job. Some of these people with limited knowledge or experience think that the SAS system is "just a statistics package" or "just a GUI", the latter usually a reference to SAS® Enterprise Guide® or, if a dated reference, to (legacy) SAS/AF® or SAS/FSP® applications. The reality is that the modern SAS system is a very large, complex ecosystem, with hundreds of software products and a diversity of tools for programmers and users. This poster provides a set of diagrams and tables that illustrate the complexity of the SAS system from the perspective of a programmer. Diagrams and illustrations include: the different environments that program code can run in; cross-environment interactions and related tools; SAS Grid and parallel processing; running SAS with files in memory (the legacy SASFILE statement and big data/Hadoop); and code that can run in-database. We end with a tabulation of the many programming languages and SQL dialects that are directly or indirectly supported within SAS. Hopefully the content of this poster will inform those who think that SAS is an old, dated statistics package or just a simple GUI.

Leadership: More than Just a Position – Laws of Programming Leadership
As someone studying statistics in the data science era, more and more emphasis is put on illustrious graphs. Data is no longer displayed with a black-and-white boxplot. Using the SAS® macro facility and the Statistical Graphics procedures, you can animate graphs to turn an outdated two-variable graph into a graph in motion that shows not only a relation between factors but also change over time. An even simpler approach for bubble graphs is to use a function in JMP to create colorful moving plots that would typically require many lines of code, with just a few clicks of the mouse.

Sentiment Analysis of Opinions about Self-driving Cars
Swapneel Deshpande and Nachiket Kawitkar
Self-driving cars are no longer a futuristic dream. In the recent past, Google launched a prototype of the self-driving car, while Apple is also developing its own self-driving car. Companies like Tesla have just introduced an Auto Pilot version in the newer models of their electric cars, which has created quite a buzz in the car market. This technology is said to enable aging or disabled people to drive around without being dependent on anyone, while possibly also affecting the accident rate due to human error. But many people are still skeptical about the idea of self-driving cars, and that's our area of interest. In this project, we plan to do sentiment analysis on thoughts voiced by people on the Internet about self-driving cars. We obtained the data from the CrowdFlower "Data for Everyone" collection, which contains reviews about self-driving cars. Our data set contains 7,156 observations and 9 variables. We plan to do descriptive analysis of the reviews to identify key topics and then use supervised sentiment analysis. We also plan to track and report how the topics and the sentiments change over time.

An Analysis of the Repetitiveness of Lyrics in Predicting a Song's Popularity
In the interest of understanding whether there is a correlation between the repetitiveness of a song's lyrics and its popularity, the top ten songs from the year-end Billboard Hot 100 Songs chart from 2002 to 2015 were collected.
These songs then had their lyrics assessed to determine the counts of the top ten words used. These word counts were then used to predict the number of weeks the song was on the chart. The prediction model was analyzed to determine the quality of the model and whether word count is a significant predictor of a song's popularity. To investigate whether song lyrics are becoming more simplistic over time, several tests were completed to see if the average word counts have been changing over the years. All analysis was completed in SAS® using various PROCs.

Regression Analysis of the Levels of Chlorine in the Public Water Supply in Orange County, FL
This conference provides a range of events that can benefit any and all SAS users. However, sometimes the extensive schedule can be overwhelming at first glance. With so many things to do and people to see, I have compiled the advice I was given as a novice WUSS attendee and the lessons I've learned since. This presentation will provide a catalog of tips to make the most out of anyone's conference experience. From volunteering, to the elementary advice of sitting at a table where you do not know anyone's name, listeners will be excited to take on all that WUSS offers.

Patients with Morbid Obesity and Congestive Heart Failure Have Longer Operative Time and Room Time in Total Hip Arthroplasty
More and more patients undergoing total hip arthroplasty are obese, and previous studies have shown a positive correlation between obesity and increased operative time in total hip arthroplasty, but those studies shared the limitation of small sample sizes. Decreasing operative time and room time is essential to meeting the increased demand for total hip arthroplasty, and factors that influence these metrics should be quantified to allow for targeted reductions in time and adjusted reimbursement models. This study used a multivariate approach to identify which factors increase operative time and room time in total hip arthroplasty. For the purposes of this study, the American College of Surgeons National Surgical Quality Improvement Program database was used to identify a cohort of over thirty thousand patients having total hip arthroplasty between 2006 and 2012. Patient demographics, comorbidities including body mass index, and anesthesia type were used to create generalized linear models identifying independent predictors of increased operative time and room time. The results showed that morbid obesity (body mass index >40) independently increased operative time by 13 minutes and room time by 18 minutes. Congestive heart failure led to the greatest increase in overall room time, resulting in a 20-minute increase. Anesthesia method further influenced room time, with general anesthesia resulting in an increased room time of 18 minutes compared with spinal or regional anesthesia. Obesity is the major driver of increased operative time in total hip arthroplasty. Congestive heart failure, general anesthesia, and morbid obesity each lead to substantial increases in overall room time, with congestive heart failure leading to the greatest increase. All analyses were conducted in SAS (version 9.4, SAS Institute, Cary, NC).

Using SAS: Monte Carlo Simulations of Manufactured Goods Should-Cost Models
Should-cost modeling, or "cleansheeting", of manufactured goods or services is a valuable tool for any procurement group. It provides category managers a foundation to negotiate, test, and drive value-added/value-engineering ideas.
However, an entire negotiation can be derailed by a supplier arguing that certain assumptions or inputs are not reflective of what they are currently seeing in their plant. The most straightforward resolution to this issue is a Monte Carlo simulation of the cleansheet. This enables the manager to prevent derailing supplier tangents by providing them with information on how each input affects the model as a whole, and the resulting costs. In this ePoster, we demonstrate a method for running a Monte Carlo simulation on manufactured goods. The simulation covers all of the direct costs associated with production – labor, machine, and material – as well as the indirect costs, i.e., overhead. Using SAS, the simulation model encompasses 60 variables from nine discrete manufacturing processes and is set to automatically output the information most relevant to the category manager.

Making Prompts Work for You: Using SAS Enterprise Guide Prompts with Categorization of Output
Edward Lan and Kai-Jen Cheng
In statistical and epidemiology units of public health departments, SAS code is often re-used across a variety of different projects for data cleaning and generation of output data sets from databases. Each SAS user copies and pastes common SAS code into their own program and uses it to generate data sets for analysis. To simplify this process, SAS Enterprise Guide (EG) prompts can be used to eliminate the need for the user to edit the SAS code or copy and paste. Instead, the user enters the desired directory, date ranges, and variables to be included in the data set. In the case of large data sets, however, it is beneficial for these variables to be grouped into categories instead of having the user individually choose the desired variables or lumping all the variables into the final data set. Using the SAS EG prompt for static lists, where the SAS user selects multiple values, variable categories can be created so that groups of variables are selected into the data set. In this paper for novice and intermediate SAS users, we discuss how macros and SAS EG prompts, using EG 7.1, can be used to automate the process of generating an output data set where the user selects a folder directory, date ranges, and categories of variables to be included in the final data set. Additionally, the paper explains how to overcome issues with integrating the categorization prompt with generating the output data set.

Application of Data Mining Techniques for Determining Factors Associated with Overweight and Obesity Among California Adults
This paper describes the application of supervised data mining methods using SAS Enterprise Miner 12.3 on data from the 2013-2014 California Health Interview Survey (CHIS), in order to better understand obesity and the indicators that may predict it. CHIS is the largest health survey ever conducted in any state, sampling California households through random-digit dialing (RDD). Enterprise Miner was used to apply logistic regression, decision tree, and neural network models to predict a binary variable, Overweight/Obese Status, which indicates whether an individual has a Body Mass Index (BMI) greater than 25. These models were compared to assess which categories of information, such as demographic factors or insurance status, and individual factors like race, best predict whether an individual is overweight/obese.
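In the spirit of the Monte Carlo ePoster above, here is a toy sketch of the simulation mechanics in a DATA step (the cost distributions and parameters are entirely hypothetical; the real model has 60 variables):

data mc_costs;
  call streaminit(20170818);          /* reproducible random draws */
  do rep = 1 to 10000;
    labor    = rand('normal', 12, 1.5);   /* $/unit, hypothetical */
    material = rand('normal', 30, 4);
    overhead = 8 + 5 * rand('uniform');
    total    = labor + material + overhead;
    output;
  end;
run;

proc means data=mc_costs mean p5 p95;
  var total;                          /* cost range to bring to negotiation */
run;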
The Orange Lifestyle
If you are like many SAS users, you have worked with the classic "old" SAS graphics procedures for some time and are very comfortable with the code syntax and workflow approach that make for reasonably simple creation of presentation graphics. Then all of a sudden, a job requires the capabilities of the procedures in SAS ODS Graphics. At first glance you may be thinking, "OK, a few more procedures and a little syntax to learn." Then you realize that moving yourself into this arena is no small task. This presentation will overview the options and approaches that you might take to get up to speed fast, including decision trees to follow in deciding upon a course of action. This paper contains many examples of very simple ways to get very simple things accomplished. Over 20 different graphs are developed using only a few lines of code each, using data from the SASHELP data sets. The usage of the SGPLOT, SGPANEL, and SGSCATTER procedures is shown. In addition, the paper addresses those situations in which the user must alternatively use a combination of the TEMPLATE and SGRENDER procedures to accomplish the task at hand. Most importantly, the use of ODS Graphics Designer as a teaching tool and a generator of sample graphs and code is covered. A single slide in the presentation overviewing the ODS Designer shows everything needed to generate a very complex graph. The emphasis in this paper is the simplicity of the learning process. Users will be able to take the included code and run it immediately on their personal machines to achieve an instant sense of gratification. The paper also addresses the "ODS Sandwich" for creating output and the use of Proc Document to manipulate it.

Exploring Multidimensional Data with Parallel Coordinate Plots
Throughout the many phases of an analysis, it may be more intuitive to review data statistics and modeling results as visual graphics rather than numerical tables. This is especially true when an objective of the analysis is to build a sense of the underlying structures within the data rather than to describe the data statistics or model results with numerical precision. Although scatterplots provide a means of evaluating relationships, their two-dimensional nature may be limiting when exploring data across multiple dimensions simultaneously. One tool for exploring multivariate data is the parallel coordinate plot. I will present a method of producing parallel coordinate plots using PROC SGPLOT and will provide examples of when parallel coordinate plots may be very informative. In particular, I will discuss its application in an analysis of longitudinal observational data and of results from unsupervised classification techniques.

Making SAS the Easy Way Out: Harnessing the Power of PROC TEMPLATE to Create Reproducible, Complex Graphs
With high-pressure deadlines and mercurial collaborators, creating graphs in the most familiar way seems like the best option. Using post-processing programs like Photoshop or Microsoft PowerPoint to modify graphs is quicker and easier for the novice SAS user or for one's collaborators to do on their own. However, reproducibility is a huge issue in the scientific community. Any changes made outside statistical software need to be repeated when collaborator preferences change, the data change, the journal requires additional elements, and for a host of other reasons. The likelihood of making errors increases along with the time spent making the figure.
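As an example of the few-lines-of-code ODS Graphics style the overview above promises (this particular plot is our own illustration using a SASHELP data set, not one of the paper's 20 graphs):

proc sgplot data=sashelp.class;
  scatter x=height y=weight / group=sex;  /* points colored by group */
  reg x=height y=weight;                  /* overlaid regression fit */
  title "Height vs. Weight (SASHELP.CLASS)";
run;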
Learning PROC TEMPLATE allows one to seamlessly create complex, automatically generated figures and eliminates the need for post-processing. This paper will demonstrate how to do complex graph manipulation in SAS 9.3 or later to solve common problems, including lattice panel plots for different variables, split plots and broken axes, weighted panel plots, using selected observations in each panel, waterfall plots, and graph annotation. The examples presented are healthcare based, but the methods are applicable to finance, business, and education. Attendees should have a basic understanding of the macro language, graphing in SAS using SGPLOT, and ODS Graphics.

Customizing plots to your heart's content using PROC GPLOT and the annotate facility
This paper introduces tips and techniques that can speed up the validation of two data sets. It begins with a brief introduction to PROC COMPARE, then introduces some techniques, without using automation, that can help to speed up the validation process. These techniques are most useful when one validates a pair of data sets for the first time. For the automation part, QCData is used to compare two data sets, and QCDir is used to compare data sets in the production directory against their corresponding data sets in the QC directory. Also introduced is &SYSINFO, a powerful and extremely useful macro variable which holds a value that summarizes the result of a comparison.

Combining Reports into a Single File Deliverable
In the daily operations of a Biostatistics and Statistical Programming department, we are often tasked with generating reports in the form of tables, listings, and figures (TLFs). A common setting in the pharmaceutical industry is to develop SAS® code in which individual programs generate one or more TLFs in some standard formatted output, such as RTF or PDF, with a common look and feel. As trends move towards electronic review and distribution, there is an increasing demand for producing a single file as the final deliverable rather than sending each output individually. Various techniques have been presented over the years, but they typically require post-processing of individual RTF or PDF files, require a knowledge base beyond SAS, and may require additional software licenses. The use of item stores has been presented more recently as an alternative. Using item stores, SAS stores the data and instructions used for the creation of each report. Individual item stores are restructured and replayed at a later time within an ODS sandwich to obtain a single-file deliverable. This single file is well structured, with either a hyperlinked Table of Contents in RTF or proper bookmarks in PDF. All hyperlinks and bookmarks are defined in a meaningful way, enabling the end user to easily navigate through the document. This Hands-on Workshop will introduce the user to creating, replaying, and restructuring item stores to obtain a single file containing a set of tables, listings, and figures. ODS is used in this application with SAS 9.4 in a Windows environment.

Getting your Hands on Contrast and Estimate Statements
Many SAS users are familiar with modeling with and without random effects through PROC GLM, PROC MIXED, PROC GLIMMIX, and PROC GENMOD.
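Picking up the &SYSINFO tip from the validation paper above, a minimal sketch (the data set names are hypothetical): after a PROC COMPARE step, &SYSINFO holds a return code whose bits encode what differed, with zero meaning the data sets matched.

proc compare base=work.prod compare=work.qc;
run;
%let rc = &sysinfo;  /* capture immediately; later steps reset it */
%put NOTE: PROC COMPARE return code is &rc (0 means the data sets match).;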
The parameter estimates are great for giving overall effects, but analysts need to use CONTRAST and ESTIMATE statements to dig deeper into the model to answer questions such as: "What is the predicted value of my outcome for a given combination of variables?", "What is the estimated difference between groups at a given time point?", or "What is the estimated difference between slopes for two of three groups?" This Hands-on Workshop will provide a step-by-step introduction so that the SAS user becomes more comfortable programming ESTIMATE and CONTRAST statements and finding answers to these types of questions. The workshop will focus on statements that can be applied to either fixed effects models or mixed models.

Advanced Programming Techniques with PROC SQL
Kirk Paul Lafler
The SQL procedure contains a number of powerful and elegant language features for SQL users. This hands-on workshop (HOW) emphasizes highly valuable and widely usable advanced programming techniques that will help users of Base SAS harness the power of the SQL procedure. Topics include using PROC SQL to identify FIRST.row, LAST.row, and BETWEEN.rows in BY-group processing; constructing and searching the contents of a value-list macro variable for a specific value; data validation operations using various integrity constraints; data summary operations to process down rows and across columns; and using the MSGLEVEL system option and METHOD SQL option to capture vital processing information and the algorithm selected and used by the optimizer when processing a query.

How to analyze correlated and longitudinal data
The United States Food and Drug Administration (FDA) requires an annotated Case Report Form (aCRF) to be submitted as part of the electronic data submission for every clinical trial. The aCRF is a PDF document that maps the data captured in a clinical trial to the corresponding variable names in the Study Data Tabulation Model (SDTM) data sets. The SDTM Metadata Submission Guidelines recommend that the aCRF be bookmarked in a specific way. A one-to-one relationship between the bookmarks and aCRF forms is not typical; one form may have two or more bookmarks. Therefore, the number of bookmarks can easily reach thousands in any study! Generating the bookmarks manually is a tedious, time-consuming job. This paper presents an approach to automating the entire bookmark generation process by using SAS® 9.2 and later releases, Ghostscript (a PDF editing tool), and the linkages between forms and their corresponding visits. This approach could potentially save tremendous amounts of time, and the eyesight of programmers, while reducing the potential for human error.

Did the Protocol Change Work? Interrupted Time Series Evaluation for Health Care Organizations
Carol Conell and Alexander Flint
Background: Analysts are increasingly asked to evaluate the impact of policy and protocol changes in healthcare, as well as in education and other industries. Often the request occurs after the change is implemented, and the objective is to provide an estimate of the effect as quickly as possible. This paper demonstrates how we used time series models to estimate the impact of a specific protocol change using data from the electronic health record (EHR). Although the approach is well established in econometrics, it remains much less common in healthcare; the paper is designed to make this technique available to intermediate-level SAS programmers.
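As a small sketch of the kinds of statements the CONTRAST/ESTIMATE workshop above covers (the model, data set, and coefficients are hypothetical), consider a two-arm trial with a treatment-by-time interaction:

proc mixed data=work.trial;
  class trt;
  model outcome = trt week trt*week / solution;
  /* group difference at week 4: (A - B) plus 4 * (slope A - slope B) */
  estimate 'trt A vs B at week 4' trt 1 -1 trt*week 4 -4;
  /* test whether the two slopes are equal */
  contrast 'equal slopes' trt*week 1 -1;
run;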
Methods: This paper introduces the time series framework, terminology, and advantages to users with no previous time series experience. It illustrates how SAS/ETS® can be used to fit an interrupted time series model to evaluate the impact of a one-time protocol change, based on a real-world example from Kaiser Northern California. Macros are provided for creating a time series database, fitting basic ARMA models using PROC ARIMA, and comparing models. Once the simple time series structure is identified for this example, heterogeneity in the effect of the intervention is examined using data from subsets of patients defined by the severity of their presentation. This shows how the aggregated approach allows exploring effect heterogeneity. Conclusions: Aggregating data and applying time series methods provide a simple way to evaluate the impact of protocol changes and similar interventions. When the timing of these interventions is well defined, this approach avoids the need to collect substantial data on individual-level confounders and the problems associated with selection bias. If the effect is immediate, the approach requires only a very moderate number of time points.

Finding Strategies for Credit Union Growth without Mergers or Acquisitions
In this era of mergers and acquisitions, community banks and credit unions often believe that bigger is better, that they can't survive if they stay small. Using 20 years of industry data, we disprove that notion for credit unions, showing that even small ones can grow slowly but strongly on their own, without merging with larger ones. We first show how we find this strategy in the data. Then we segment credit unions by size and see how the strategy changes within each segment. Finally, we track the progress of these segments over time and develop a predictive model for any credit union. In the process, we introduce the concept of "High-Performance Credit Unions," which take actions that are proven to lead to credit union growth. Code snippets will be shown for any version of SAS® but will require the SAS/STAT® package.

A Case of Retreatment – Handling Retreated Patient Data
Sriramu Kundoor and Sumida Urval
In certain clinical trials, if the study protocol allows, there are scenarios where subjects are re-enrolled into the study for retreatment. Per CDISC guidelines, these subjects need to be handled in a manner different from non-retreated subjects. The CDISC SDTM Implementation Guide versions 3.1.2 (page 29) and 3.2 (Section 4, page 8) state: "The unique subject identifier (USUBJID) is required in all datasets containing subject-level data. USUBJID values must be unique for each trial participant (subject) across all trials in the submission. This means that no two (or more) subjects, across all trials in the submission, may have the same USUBJID. Additionally, the same person who participates in multiple clinical trials (when this is known) must be assigned the same USUBJID value in all trials." Therefore, a retreated subject cannot have two USUBJIDs, in spite of being the same person undergoing the trial phase more than once. This paper describes (with suitable examples) a method of handling retreated subject data in the SDTM domains per CDISC standards, while at the same time capturing it in such a way that it is easy for the programmer or statistician to analyze the data in ADaM datasets.
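For readers new to the interrupted time series approach in the Kaiser paper above, a minimal PROC ARIMA sketch of the pattern (the series, date cutoff, and AR order are hypothetical, not the paper's actual model):

data ts;
  set work.monthly_counts;                 /* hypothetical aggregated series */
  intervention = (month >= '01JUL2014'd);  /* 0 before, 1 after the change */
run;

proc arima data=ts;
  identify var=events crosscorr=(intervention);
  estimate p=1 input=(intervention);       /* AR(1) noise + step intervention */
run;
quit;

The coefficient on the intervention input estimates the level shift attributable to the protocol change.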
This paper also discusses the conditions that need to be followed (and the logic behind them) when programming retreated patient data into the different SDTM domains.

Why and What Standards for Oncology Studies (Solid Tumor, Lymphoma and Leukemia)
Each therapeutic area has its own unique data collection and analysis, and oncology in particular has specific standards for the collection and analysis of data. Oncology studies are also separated into one of three subtypes according to response criteria guidelines. The first subtype, the solid tumor study, usually follows RECIST (Response Evaluation Criteria in Solid Tumors). The second subtype, the lymphoma study, usually follows Cheson. Lastly, leukemia studies follow study-specific guidelines (IWCLL for Chronic Lymphocytic Leukemia, IWAML for Acute Myeloid Leukemia, NCCN Guidelines for Acute Lymphoblastic Leukemia, and ESMO clinical practice guidelines for Chronic Myeloid Leukemia). This paper will demonstrate the notable level of sophistication implemented in CDISC standards, mainly driven by the differentiation across different response criteria. The paper will specifically show which SDTM domains are used to collect the different data points in each type. For example, solid tumor studies collect tumor results in TR and TU and response in RS. Lymphoma studies collect not only tumor results and response, but also bone marrow assessments in LB and FA, and spleen and liver enlargement in PE. Leukemia studies collect blood counts (i.e., lymphocytes, neutrophils, hemoglobin, and platelet count) in LB and genetic mutations, as well as everything collected in lymphoma studies. The paper will also introduce oncology terminology (e.g., CR, PR, SD, PD, NE) and oncology-specific ADaM data sets such as the Time to Event (--TTE) data set. Finally, the paper will show how standards (e.g., response criteria guidelines and CDISC) streamline clinical trial artifact development in oncology studies and how end-to-end clinical trial artifact development can be accomplished through this standards-driven process.

Efficacy Endpoint Analysis Dataset Generation with a Two-Layer ADaM Design Model
In clinical trial data processing, the efficacy endpoint dataset design and implementation are often the most challenging processes to standardize. This paper introduces a two-layer ADaM design method for generating an efficacy endpoints dataset and summarizes practices from past projects. The two-layer ADaM design method improves not only implementation and review, but validation as well. The method is illustrated with examples.

Strategic Considerations for CDISC Implementation
Amber Randall and Bill Coar
The Prescription Drug User Fee Act (PDUFA) V Guidance mandates eCTD format for all regulatory submissions by May 2017. The implementation of CDISC data standards is not a one-size-fits-all process and can present both a substantial technical challenge and a potentially high cost to study teams. There are many factors that should be considered in strategizing when and how to implement, including timeline, study team expertise, and final goals. Different approaches may be more efficient for brand-new studies as compared to existing or completed studies.
Should CDISC standards be implemented right from the beginning, or does it make sense to convert data once it is known that the study product will indeed be submitted for approval? Does a study team already have the technical expertise to implement data standards? If not, is it more cost effective to invest in in-house training or to hire contractors? How does a company identify reliable and knowledgeable contractors? Are contractors skilled in SAS programming sufficient, or will they also need in-depth CDISC expertise? How can the work of contractors be validated? Our experience as a statistical CRO has allowed us to observe and participate in many approaches to this challenging process. What has become clear is that a good, informed strategy planned from the beginning can greatly increase efficiency and cost effectiveness and reduce stress and unanticipated surprises.

SDD project management tool: real-time and hassle free – a one-stop shop for study validation and completion rate estimation
Do you sometimes feel like an octopus when working on multiple projects as a lead programmer, or find it hard to monitor what's going on? Perhaps you know Murphy's Law: anything that can go wrong will go wrong – and you will want to be the first one to know it, before anybody else. What is the impact, and what is the downstream process? After pulling the study submission package up to SDD, we developed a working process which collects status information on each program and output. A SAS program then reads in the status report of repository documents and updates the tracker with timestamps (last modified, last run) of:
- the source and validation programs;
- upstream documents (inputs to the program, such as raw data or macros);
- downstream documents.
Features include:
- Pinnacle 21 traffic lighting;
- pulling time variables from SDD and building the logic (raw < SDTM < ADaM, source < validation);
- log scanning in batch (with estimated time to completion);
- metadata-level checking;
- a workflow tying all of the above together;
- a scheduled job running the above tasks in sequence;
- a study completion report (and algorithm).

Building Better ADaM Datasets Faster With If-Less Programming
One of the major tasks in building ADaM datasets is writing the SAS code to implement the ADaM variables based on an ADaM specification. SAS programmers often find this task tedious, time-consuming, and even prone to error. The main reason the task seems daunting is that a large number of variables have to be created with IF-THEN-ELSE statements in one or more DATA steps for each ADaM dataset. To address this common issue and alleviate the process involved, this paper introduces a small set of DATA step inline macros that allow programmers to derive most ADaM variables without using IF-THEN-ELSE statements. With this if-less programming approach, a programmer can not only make a piece of ADaM implementation code easier to read and understand, but also easier to modify along with the evolving ADaM specification, and straightforward to reuse in the development of other ADaM datasets or studies. What's more, this approach can be applied to the derivation of ADaM datasets from both SDTM and non-SDTM datasets.

What's Hot – Skills for SAS® Professionals
Kirk Paul Lafler
As a new generation of SAS® users emerges, current and prior generations of users have an extensive array of procedures, programming tools, approaches, and techniques to choose from.
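The if-less paper above introduces its own inline macros, which are not reproduced here; purely as a flavor of the general idea, here is the classic format-based alternative to an IF-THEN-ELSE chain (codes, labels, and data set names are hypothetical):

proc format;
  value $sevfmt 'MILD' = '1'  'MODERATE' = '2'  'SEVERE' = '3';
run;

data adae;
  set sdtm.ae;  /* hypothetical SDTM input */
  /* derive a numeric severity code without any IF-THEN-ELSE */
  asevn = input(put(aesev, $sevfmt.), best.);
run;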
This presentation identifies and explores the areas that are hot in the world of the professional SAS user. Topics include Enterprise Guide, PROC SQL, PROC REPORT, the Output Delivery System (ODS), the macro language, DATA step programming techniques such as arrays and hash objects, SAS University Edition software, technical support at support.sas.com, wiki content on sasCommunity.org®, "white" papers published on LexJansen, and other venues.

Creating Dynamic Documents with SAS® in the Jupyter Notebook to Reinforce Soft Skills
Experience with technology and strong computing skills continue to be among the qualifications most desired by employers. Programs in statistics and other especially quantitative fields have bolstered the programming and software training they impart to graduates. But as these skills become more common, there remains an equally important desire for what are often called "soft skills": communication, telling a story, extracting meaning from data. Through the use of SAS® in the Jupyter Notebook, traditional programming assignments are easily transformed into exercises involving both analytics in SAS and the writing of a clear report. Traditional reports become dynamic documents which include both text and living SAS® code that gets run during document creation. Students should never again be just writing SAS® code.

Contributing to SAS® By Writing Your Very Own Package
One of the biggest reasons for the explosive growth of R statistical software in recent years is the massive collection of user-developed packages. Each package consists of a number of functions centered around a particular theme or task not previously addressed (well) within the software. While SAS® continues to advance on its own, SAS® users can now contribute packages to the broader SAS® community. Creating and contributing a package is simple and straightforward, empowering SAS® users immensely to grow the software themselves. There is a lot of potential to increase the general applicability of SAS® to tasks beyond statistics and data management, and it's up to you!

Collaborations in SAS® Programming, or Playing Nicely with Others
Kristi Metzger and Melissa R. Pfeiffer
SAS® programmers rarely work in isolation; rather, they are usually part of a team that includes other SAS programmers such as data managers and data analysts, as well as non-programmers like project coordinators. Some members of the team – including the SAS programmers – may work in different locations. Given these complex collaborations, it is increasingly important to adopt approaches for working effectively and easily in teams. In this presentation, we discuss strategies and methods for working with colleagues in varied roles. We first address file organization – putting things in places easily found by team members – including the importance of numbering programs that are executed sequentially. While documentation is an often-neglected activity, we next review the importance of documenting both within SAS and in other forms for the non-SAS users on your team. We also discuss strategies for sharing formats and writing friendly SAS code for seamless work with other SAS programmers. Additionally, data sets are often in flux, and we talk about approaches that add clarity to data sets and their production.
What's Hot – Skills for SAS® Professionals
Kirk Paul Lafler

As a new generation of SAS® users emerges, current and prior generations of users have an extensive array of procedures, programming tools, approaches and techniques to choose from. This presentation identifies and explores the areas that are hot in the world of the professional SAS user. Topics include Enterprise Guide, PROC SQL, PROC REPORT, the Output Delivery System (ODS), the macro language, DATA step programming techniques such as arrays and hash objects, SAS University Edition software, technical support at support.sas.com, wiki content on sasCommunity.org®, "white" papers on LexJansen, and other venues.

Creating Dynamic Documents with SAS® in the Jupyter Notebook to Reinforce Soft Skills

Experience with technology and strong computing skills continue to be among the qualifications most desired by employers. Programs in statistics and other especially quantitative fields have bolstered the programming and software training they impart to graduates. But as these skills become more common, there remains an equally important demand for what are often called "soft skills": communicating, telling a story, extracting meaning from data. Through the use of SAS® in the Jupyter Notebook, traditional programming assignments are easily transformed into exercises involving both analytics in SAS and the writing of a clear report. Traditional reports become dynamic documents that include both text and living SAS® code that is run during document creation. Students should never again be writing just SAS® code.

Contributing to SAS® By Writing Your Very Own Package

One of the biggest reasons for the explosive growth of the R statistical software in recent years is its massive collection of user-developed packages. Each package consists of a number of functions centered on a particular theme or task not previously addressed (well) within the software. While SAS® continues to advance on its own, SAS® users can now contribute packages to the broader SAS® community. Creating and contributing a package is simple and straightforward, and it empowers SAS® users to grow the software themselves. There is a lot of potential to increase the general applicability of SAS® to tasks beyond statistics and data management, and it's up to you!

Collaborations in SAS Programming, or Playing Nicely with Others
Kristi Metzger and Melissa R. Pfeiffer

SAS programmers rarely work in isolation; rather, they are usually part of a team that includes other SAS programmers such as data managers and data analysts, as well as non-programmers like project coordinators. Some members of the team -- including the SAS programmers -- may work in different locations. Given these complex collaborations, it is increasingly important to adopt approaches for working effectively and easily in teams. In this presentation, we discuss strategies and methods for working with colleagues in varied roles. We first address file organization -- putting things in places easily found by team members -- including the importance of numbering programs that are executed sequentially. While documentation is an often-neglected activity, we next review the importance of documenting both within SAS and in other forms for the non-SAS users on your team. We also discuss strategies for sharing formats and writing friendly SAS code for seamless work with other SAS programmers. Additionally, data sets are often in flux, and we talk about approaches that add clarity to data sets and their production. Finally, we suggest tips for double-checking another programmer's code and/or output, including the importance of confirming the logic behind variable construction and the use of PROC COMPARE in the confirmation process (see the sketch following this abstract). Ultimately, adopting strategies that ease working jointly helps when you have to review work you did in the past, and makes for a better playground experience with your teammates.
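As a concrete illustration of that last point, a double-programming check typically ends with PROC COMPARE run against the production and QC versions of a dataset. The library and dataset names below are illustrative, not taken from the presentation.

/* Confirm that an independently re-derived QC dataset matches   */
/* the production dataset, record by record and value by value.  */
proc compare base=prod.adsl compare=qc.adsl
             listall           /* also list vars/obs found in only one dataset */
             criterion=1e-10;  /* tolerance for numeric rounding  */
    id usubjid;                /* match observations by subject   */
run;

A clean run reports no unequal values; anything else is a cue to revisit the derivation logic on one side or the other.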
A Brief Introduction to WordPress for SAS Programmers

WordPress is a free, open-source platform based on PHP and MySQL that is used to build websites. It is easy to use, with a point-and-click user interface. You can write custom HTML and CSS if you want, but you can also build beautiful webpages without knowing anything at all about HTML or CSS. Features include a plugin architecture and a template system. WordPress was used by more than 26.4% of the top 10 million websites as of April 2016. In fact, SAS® blogs (hosted at blogs.sas.com) use the WordPress platform. If you are considering starting a blog to share your love of SAS or to raise the profile of your business, and you are considering using WordPress, join us for a brief introduction to WordPress for SAS programmers.

How to Be a Successful and Healthy Home-Based SAS Programmer in the Pharma/Biotech Industry
Daniel Tsui, Parexel International Inc. (10-minute Quick Tip Talk, WUSS 2016 Educational Forum and Conference, September 7-9, 2016, Grand Hyatt San Francisco on Union Square, San Francisco, California)

With the advancement of technology, the tech industry accepts more and more flexible schedules and telecommuting opportunities. In recent years, more statistical SAS programming jobs in the pharma/biotech industry have shifted from office-based to home-based. There has been ongoing debate about how beneficial the shift is, and much room remains for discussion of the pros and cons of this home-based model. This presentation investigates those pros and cons from the perspective of a home-based SAS programmer within the pharma/biotech industry. The overall benefits were laid out in a Microsoft whitepaper based on a survey, Work without Walls, which listed the top 10 benefits of working from home from the employee viewpoint, such as work/home balance, avoiding traffic, greater productivity and fewer distractions. However, to be a successful home-based SAS programmer in the pharma/biotech industry, some enemies have to be defeated, such as being on call 24 hours, performance issues, solitude, advancement opportunities and dealing with family. This presentation will discuss some key highlights.

Lora Delwiche and Susan Slaughter

SAS Studio is an important new interface for SAS, designed for both traditional SAS programmers and point-and-click users. For SAS programmers, SAS Studio offers many useful features not found in the traditional Display Manager. SAS Studio runs in a web browser: you write programs in SAS Studio, submit them to a SAS server, and the results are returned to your SAS Studio session. SAS Studio is included in the license for Base SAS, is the interface for SAS University Edition, and is the default interface for SAS OnDemand for Academics. Both SAS University Edition and SAS OnDemand for Academics are free of charge for non-commercial use. With SAS Studio becoming so widely available, this is a good time to learn about it.

An Animated Guide: An Introduction to SAS Macro Quoting

This cartoon-like presentation expands on material in a previous paper (which explained how SAS processes macros) to show how SAS processes macro quoting. It suggests that the "map of the SAS Supervisor" in this cartoon is a very useful paradigm for understanding SAS macro quoting. Boxes on the map are either subroutines or storage areas, and the cartoon lets you see "quoted" tokens flow through the components of the SAS Supervisor as code executes. The basic concepts of the paper are: 1) the map of the SAS Supervisor; 2) the idea that certain parts of the map monitor tokens as they pass through; 3) the idea of SAS tokens as rule triggers for actions to be taken by parts of the map; 4) macro masking, which prevents recognition of tokens and the triggering of rules; and 5) the places in the SAS system where unquoting happens.
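For readers who have not seen macro masking in action, a tiny example is sketched below. The quoting functions shown are standard SAS macro functions, though the presentation's cartoon covers far more of the underlying machinery.

/* %STR masks tokens at macro compile time: the semicolons below */
/* are quoted, so they do not terminate the %LET statement.      */
%let stmt = %str(proc print data=sashelp.class; run;);

/* The masked semicolons survive resolution inside %PUT, too.    */
%put NOTE: the macro variable holds: &stmt;

/* %BQUOTE masks at macro execution time; handy for values with  */
/* unmatched quotation marks that would otherwise derail the     */
/* tokenizer.                                                    */
%let name = %bquote(O'Brien);
%put NOTE: masked value: &name;

/* %UNQUOTE removes the masking so normal tokenization resumes:  */
/* the stored PROC step is submitted and actually runs.          */
%unquote(&stmt)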
