Feb 9 09:54:03.079013 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 9 09:54:03.079032 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024 Feb 9 09:54:03.079040 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Feb 9 09:54:03.079048 kernel: printk: bootconsole [pl11] enabled Feb 9 09:54:03.079052 kernel: efi: EFI v2.70 by EDK II Feb 9 09:54:03.079058 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef2e698 RNG=0x3fd89998 MEMRESERVE=0x37e73f98 Feb 9 09:54:03.079064 kernel: random: crng init done Feb 9 09:54:03.079069 kernel: ACPI: Early table checksum verification disabled Feb 9 09:54:03.079075 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL) Feb 9 09:54:03.079080 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 09:54:03.079086 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 09:54:03.079092 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Feb 9 09:54:03.079098 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 09:54:03.079103 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 09:54:03.079110 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 09:54:03.079116 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 09:54:03.079122 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 09:54:03.079128 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 09:54:03.079134 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Feb 9 09:54:03.079140 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 09:54:03.079145 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Feb 9 09:54:03.079151 kernel: NUMA: Failed to initialise from firmware Feb 9 09:54:03.079157 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff] Feb 9 09:54:03.079162 kernel: NUMA: NODE_DATA [mem 0x1bf7f2900-0x1bf7f7fff] Feb 9 09:54:03.079168 kernel: Zone ranges: Feb 9 09:54:03.079174 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Feb 9 09:54:03.079179 kernel: DMA32 empty Feb 9 09:54:03.079186 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Feb 9 09:54:03.079191 kernel: Movable zone start for each node Feb 9 09:54:03.079197 kernel: Early memory node ranges Feb 9 09:54:03.079203 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Feb 9 09:54:03.079208 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff] Feb 9 09:54:03.079234 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff] Feb 9 09:54:03.079240 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff] Feb 9 09:54:03.079245 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff] Feb 9 09:54:03.079251 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff] Feb 9 09:54:03.079257 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff] Feb 9 09:54:03.079262 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff] Feb 9 09:54:03.079268 kernel: node 0: [mem 
0x0000000100000000-0x00000001bfffffff] Feb 9 09:54:03.079276 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Feb 9 09:54:03.079284 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Feb 9 09:54:03.079290 kernel: psci: probing for conduit method from ACPI. Feb 9 09:54:03.079296 kernel: psci: PSCIv1.1 detected in firmware. Feb 9 09:54:03.079302 kernel: psci: Using standard PSCI v0.2 function IDs Feb 9 09:54:03.079309 kernel: psci: MIGRATE_INFO_TYPE not supported. Feb 9 09:54:03.079315 kernel: psci: SMC Calling Convention v1.4 Feb 9 09:54:03.079321 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1 Feb 9 09:54:03.079327 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1 Feb 9 09:54:03.079333 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Feb 9 09:54:03.079339 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Feb 9 09:54:03.079346 kernel: pcpu-alloc: [0] 0 [0] 1 Feb 9 09:54:03.079352 kernel: Detected PIPT I-cache on CPU0 Feb 9 09:54:03.079358 kernel: CPU features: detected: GIC system register CPU interface Feb 9 09:54:03.079364 kernel: CPU features: detected: Hardware dirty bit management Feb 9 09:54:03.079370 kernel: CPU features: detected: Spectre-BHB Feb 9 09:54:03.079376 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 9 09:54:03.079383 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 9 09:54:03.079389 kernel: CPU features: detected: ARM erratum 1418040 Feb 9 09:54:03.079395 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Feb 9 09:54:03.079401 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Feb 9 09:54:03.079407 kernel: Policy zone: Normal Feb 9 09:54:03.079415 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d Feb 9 09:54:03.079421 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 09:54:03.079427 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 9 09:54:03.079433 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 09:54:03.079439 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 09:54:03.079447 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB) Feb 9 09:54:03.079453 kernel: Memory: 3991936K/4194160K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 202224K reserved, 0K cma-reserved) Feb 9 09:54:03.079459 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 9 09:54:03.079465 kernel: trace event string verifier disabled Feb 9 09:54:03.079471 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 9 09:54:03.079478 kernel: rcu: RCU event tracing is enabled. Feb 9 09:54:03.079484 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 9 09:54:03.079490 kernel: Trampoline variant of Tasks RCU enabled. Feb 9 09:54:03.079496 kernel: Tracing variant of Tasks RCU enabled. Feb 9 09:54:03.079502 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 9 09:54:03.079508 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 9 09:54:03.079515 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 9 09:54:03.079521 kernel: GICv3: 960 SPIs implemented Feb 9 09:54:03.079527 kernel: GICv3: 0 Extended SPIs implemented Feb 9 09:54:03.079533 kernel: GICv3: Distributor has no Range Selector support Feb 9 09:54:03.079539 kernel: Root IRQ handler: gic_handle_irq Feb 9 09:54:03.079545 kernel: GICv3: 16 PPIs implemented Feb 9 09:54:03.079551 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Feb 9 09:54:03.079557 kernel: ITS: No ITS available, not enabling LPIs Feb 9 09:54:03.079563 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 09:54:03.079569 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 9 09:54:03.079575 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 9 09:54:03.079581 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 9 09:54:03.079589 kernel: Console: colour dummy device 80x25 Feb 9 09:54:03.079595 kernel: printk: console [tty1] enabled Feb 9 09:54:03.079602 kernel: ACPI: Core revision 20210730 Feb 9 09:54:03.079608 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 9 09:54:03.079614 kernel: pid_max: default: 32768 minimum: 301 Feb 9 09:54:03.079620 kernel: LSM: Security Framework initializing Feb 9 09:54:03.079627 kernel: SELinux: Initializing. Feb 9 09:54:03.079633 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 09:54:03.079639 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 09:54:03.079646 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Feb 9 09:54:03.079652 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0 Feb 9 09:54:03.079658 kernel: rcu: Hierarchical SRCU implementation. Feb 9 09:54:03.079664 kernel: Remapping and enabling EFI services. Feb 9 09:54:03.079670 kernel: smp: Bringing up secondary CPUs ... Feb 9 09:54:03.079677 kernel: Detected PIPT I-cache on CPU1 Feb 9 09:54:03.079683 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Feb 9 09:54:03.079689 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 09:54:03.079695 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 9 09:54:03.079702 kernel: smp: Brought up 1 node, 2 CPUs Feb 9 09:54:03.079709 kernel: SMP: Total of 2 processors activated. 
Feb 9 09:54:03.079715 kernel: CPU features: detected: 32-bit EL0 Support Feb 9 09:54:03.079721 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Feb 9 09:54:03.079728 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 9 09:54:03.079734 kernel: CPU features: detected: CRC32 instructions Feb 9 09:54:03.079740 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 9 09:54:03.079746 kernel: CPU features: detected: LSE atomic instructions Feb 9 09:54:03.079753 kernel: CPU features: detected: Privileged Access Never Feb 9 09:54:03.079760 kernel: CPU: All CPU(s) started at EL1 Feb 9 09:54:03.079766 kernel: alternatives: patching kernel code Feb 9 09:54:03.079777 kernel: devtmpfs: initialized Feb 9 09:54:03.079784 kernel: KASLR enabled Feb 9 09:54:03.079791 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 09:54:03.079797 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 9 09:54:03.079804 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 09:54:03.079810 kernel: SMBIOS 3.1.0 present. Feb 9 09:54:03.079817 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023 Feb 9 09:54:03.079824 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 09:54:03.079831 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 9 09:54:03.079838 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 9 09:54:03.079845 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 9 09:54:03.079851 kernel: audit: initializing netlink subsys (disabled) Feb 9 09:54:03.079858 kernel: audit: type=2000 audit(0.096:1): state=initialized audit_enabled=0 res=1 Feb 9 09:54:03.079864 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 09:54:03.079871 kernel: cpuidle: using governor menu Feb 9 09:54:03.079878 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Feb 9 09:54:03.079885 kernel: ASID allocator initialised with 32768 entries Feb 9 09:54:03.079891 kernel: ACPI: bus type PCI registered Feb 9 09:54:03.079898 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 09:54:03.079904 kernel: Serial: AMBA PL011 UART driver Feb 9 09:54:03.079911 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 09:54:03.079917 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Feb 9 09:54:03.079924 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 09:54:03.079930 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Feb 9 09:54:03.079938 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 09:54:03.079944 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 9 09:54:03.079951 kernel: ACPI: Added _OSI(Module Device) Feb 9 09:54:03.079957 kernel: ACPI: Added _OSI(Processor Device) Feb 9 09:54:03.079964 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 09:54:03.079970 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 09:54:03.079977 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 09:54:03.079983 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 09:54:03.079989 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 09:54:03.079997 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 9 09:54:03.080004 kernel: ACPI: Interpreter enabled Feb 9 09:54:03.080010 kernel: ACPI: Using GIC for interrupt routing Feb 9 09:54:03.080017 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Feb 9 09:54:03.080023 kernel: printk: console [ttyAMA0] enabled Feb 9 09:54:03.080030 kernel: printk: bootconsole [pl11] disabled Feb 9 09:54:03.080036 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Feb 9 09:54:03.080043 kernel: iommu: Default domain type: Translated Feb 9 09:54:03.080049 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 9 09:54:03.080057 kernel: vgaarb: loaded Feb 9 09:54:03.080063 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 09:54:03.080070 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 09:54:03.080076 kernel: PTP clock support registered Feb 9 09:54:03.080083 kernel: Registered efivars operations Feb 9 09:54:03.080089 kernel: No ACPI PMU IRQ for CPU0 Feb 9 09:54:03.080096 kernel: No ACPI PMU IRQ for CPU1 Feb 9 09:54:03.080102 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 9 09:54:03.080109 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 09:54:03.080116 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 09:54:03.080123 kernel: pnp: PnP ACPI init Feb 9 09:54:03.080129 kernel: pnp: PnP ACPI: found 0 devices Feb 9 09:54:03.080136 kernel: NET: Registered PF_INET protocol family Feb 9 09:54:03.080142 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 09:54:03.080149 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 9 09:54:03.080155 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 09:54:03.080162 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 9 09:54:03.080168 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Feb 9 09:54:03.080176 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 9 09:54:03.080183 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 09:54:03.080190 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 09:54:03.080196 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 09:54:03.080202 kernel: PCI: CLS 0 bytes, default 64 Feb 9 09:54:03.090414 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Feb 9 09:54:03.090452 kernel: kvm [1]: HYP mode not available Feb 9 09:54:03.090460 kernel: Initialise system trusted keyrings Feb 9 09:54:03.090468 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 9 09:54:03.090482 kernel: Key type asymmetric registered Feb 9 09:54:03.090489 kernel: Asymmetric key parser 'x509' registered Feb 9 09:54:03.090496 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 09:54:03.090502 kernel: io scheduler mq-deadline registered Feb 9 09:54:03.090509 kernel: io scheduler kyber registered Feb 9 09:54:03.090516 kernel: io scheduler bfq registered Feb 9 09:54:03.090523 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 09:54:03.090529 kernel: thunder_xcv, ver 1.0 Feb 9 09:54:03.090536 kernel: thunder_bgx, ver 1.0 Feb 9 09:54:03.090544 kernel: nicpf, ver 1.0 Feb 9 09:54:03.090550 kernel: nicvf, ver 1.0 Feb 9 09:54:03.090701 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 9 09:54:03.090765 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T09:54:02 UTC (1707472442) Feb 9 09:54:03.090775 kernel: efifb: probing for efifb Feb 9 09:54:03.090781 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Feb 9 09:54:03.090788 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Feb 9 09:54:03.090795 kernel: efifb: scrolling: redraw Feb 9 09:54:03.090804 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 9 09:54:03.090811 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 09:54:03.090818 kernel: fb0: EFI VGA frame buffer device Feb 9 09:54:03.090825 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... 
Feb 9 09:54:03.090832 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 09:54:03.090839 kernel: NET: Registered PF_INET6 protocol family Feb 9 09:54:03.090845 kernel: Segment Routing with IPv6 Feb 9 09:54:03.090852 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 09:54:03.090858 kernel: NET: Registered PF_PACKET protocol family Feb 9 09:54:03.090866 kernel: Key type dns_resolver registered Feb 9 09:54:03.090873 kernel: registered taskstats version 1 Feb 9 09:54:03.090879 kernel: Loading compiled-in X.509 certificates Feb 9 09:54:03.090886 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d' Feb 9 09:54:03.090893 kernel: Key type .fscrypt registered Feb 9 09:54:03.090900 kernel: Key type fscrypt-provisioning registered Feb 9 09:54:03.090907 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 9 09:54:03.090913 kernel: ima: Allocated hash algorithm: sha1 Feb 9 09:54:03.090920 kernel: ima: No architecture policies found Feb 9 09:54:03.090928 kernel: Freeing unused kernel memory: 34688K Feb 9 09:54:03.090935 kernel: Run /init as init process Feb 9 09:54:03.090941 kernel: with arguments: Feb 9 09:54:03.090948 kernel: /init Feb 9 09:54:03.090954 kernel: with environment: Feb 9 09:54:03.090960 kernel: HOME=/ Feb 9 09:54:03.090967 kernel: TERM=linux Feb 9 09:54:03.090974 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 09:54:03.090983 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:54:03.090993 systemd[1]: Detected virtualization microsoft. Feb 9 09:54:03.091000 systemd[1]: Detected architecture arm64. Feb 9 09:54:03.091007 systemd[1]: Running in initrd. Feb 9 09:54:03.091014 systemd[1]: No hostname configured, using default hostname. Feb 9 09:54:03.091021 systemd[1]: Hostname set to . Feb 9 09:54:03.091029 systemd[1]: Initializing machine ID from random generator. Feb 9 09:54:03.091036 systemd[1]: Queued start job for default target initrd.target. Feb 9 09:54:03.091044 systemd[1]: Started systemd-ask-password-console.path. Feb 9 09:54:03.091051 systemd[1]: Reached target cryptsetup.target. Feb 9 09:54:03.091058 systemd[1]: Reached target paths.target. Feb 9 09:54:03.091065 systemd[1]: Reached target slices.target. Feb 9 09:54:03.091072 systemd[1]: Reached target swap.target. Feb 9 09:54:03.091079 systemd[1]: Reached target timers.target. Feb 9 09:54:03.091086 systemd[1]: Listening on iscsid.socket. Feb 9 09:54:03.091093 systemd[1]: Listening on iscsiuio.socket. Feb 9 09:54:03.091101 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 09:54:03.091109 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 09:54:03.091116 systemd[1]: Listening on systemd-journald.socket. Feb 9 09:54:03.091124 systemd[1]: Listening on systemd-networkd.socket. Feb 9 09:54:03.091131 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 09:54:03.091138 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 09:54:03.091145 systemd[1]: Reached target sockets.target. Feb 9 09:54:03.091152 systemd[1]: Starting kmod-static-nodes.service... Feb 9 09:54:03.091159 systemd[1]: Finished network-cleanup.service. Feb 9 09:54:03.091167 systemd[1]: Starting systemd-fsck-usr.service... 
Feb 9 09:54:03.091174 systemd[1]: Starting systemd-journald.service... Feb 9 09:54:03.091181 systemd[1]: Starting systemd-modules-load.service... Feb 9 09:54:03.091189 systemd[1]: Starting systemd-resolved.service... Feb 9 09:54:03.091196 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 09:54:03.091207 systemd-journald[276]: Journal started Feb 9 09:54:03.091299 systemd-journald[276]: Runtime Journal (/run/log/journal/f5ce1801227d4878afde79993b5d4c1b) is 8.0M, max 78.6M, 70.6M free. Feb 9 09:54:03.074285 systemd-modules-load[277]: Inserted module 'overlay' Feb 9 09:54:03.132316 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 09:54:03.132340 systemd[1]: Started systemd-journald.service. Feb 9 09:54:03.132358 kernel: Bridge firewalling registered Feb 9 09:54:03.126019 systemd-modules-load[277]: Inserted module 'br_netfilter' Feb 9 09:54:03.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:03.140607 systemd-resolved[278]: Positive Trust Anchors: Feb 9 09:54:03.172986 kernel: audit: type=1130 audit(1707472443.139:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:03.173008 kernel: SCSI subsystem initialized Feb 9 09:54:03.140614 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:54:03.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:03.140643 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:54:03.277920 kernel: audit: type=1130 audit(1707472443.177:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:03.277941 kernel: audit: type=1130 audit(1707472443.209:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:03.277951 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 09:54:03.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:03.142668 systemd-resolved[278]: Defaulting to hostname 'linux'. Feb 9 09:54:03.313936 kernel: audit: type=1130 audit(1707472443.283:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:03.313955 kernel: device-mapper: uevent: version 1.0.3 Feb 9 09:54:03.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:03.167690 systemd[1]: Finished kmod-static-nodes.service. Feb 9 09:54:03.350887 kernel: audit: type=1130 audit(1707472443.319:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:03.350911 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 09:54:03.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:03.178179 systemd[1]: Started systemd-resolved.service. Feb 9 09:54:03.209637 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 09:54:03.284243 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 09:54:03.319811 systemd[1]: Reached target nss-lookup.target. Feb 9 09:54:03.421734 kernel: audit: type=1130 audit(1707472443.389:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:03.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:03.366799 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 09:54:03.371980 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 09:54:03.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:03.376989 systemd-modules-load[277]: Inserted module 'dm_multipath' Feb 9 09:54:03.491090 kernel: audit: type=1130 audit(1707472443.421:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:03.491113 kernel: audit: type=1130 audit(1707472443.454:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:03.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:03.383854 systemd[1]: Finished systemd-modules-load.service. Feb 9 09:54:03.389660 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 09:54:03.422030 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 09:54:03.517113 dracut-cmdline[296]: dracut-dracut-053 Feb 9 09:54:03.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:03.455324 systemd[1]: Starting dracut-cmdline.service... 
Feb 9 09:54:03.553417 kernel: audit: type=1130 audit(1707472443.522:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:03.553446 dracut-cmdline[296]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d Feb 9 09:54:03.486244 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:54:03.511532 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:54:03.659241 kernel: Loading iSCSI transport class v2.0-870. Feb 9 09:54:03.671234 kernel: iscsi: registered transport (tcp) Feb 9 09:54:03.693270 kernel: iscsi: registered transport (qla4xxx) Feb 9 09:54:03.693325 kernel: QLogic iSCSI HBA Driver Feb 9 09:54:03.731092 systemd[1]: Finished dracut-cmdline.service. Feb 9 09:54:03.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:03.737622 systemd[1]: Starting dracut-pre-udev.service... Feb 9 09:54:03.790237 kernel: raid6: neonx8 gen() 13816 MB/s Feb 9 09:54:03.811226 kernel: raid6: neonx8 xor() 10828 MB/s Feb 9 09:54:03.833223 kernel: raid6: neonx4 gen() 13532 MB/s Feb 9 09:54:03.854224 kernel: raid6: neonx4 xor() 11312 MB/s Feb 9 09:54:03.875228 kernel: raid6: neonx2 gen() 13048 MB/s Feb 9 09:54:03.898225 kernel: raid6: neonx2 xor() 10294 MB/s Feb 9 09:54:03.919222 kernel: raid6: neonx1 gen() 10532 MB/s Feb 9 09:54:03.940222 kernel: raid6: neonx1 xor() 8770 MB/s Feb 9 09:54:03.962247 kernel: raid6: int64x8 gen() 6295 MB/s Feb 9 09:54:03.983226 kernel: raid6: int64x8 xor() 3549 MB/s Feb 9 09:54:04.005240 kernel: raid6: int64x4 gen() 7237 MB/s Feb 9 09:54:04.026221 kernel: raid6: int64x4 xor() 3857 MB/s Feb 9 09:54:04.047220 kernel: raid6: int64x2 gen() 6156 MB/s Feb 9 09:54:04.069222 kernel: raid6: int64x2 xor() 3321 MB/s Feb 9 09:54:04.090219 kernel: raid6: int64x1 gen() 5046 MB/s Feb 9 09:54:04.116894 kernel: raid6: int64x1 xor() 2649 MB/s Feb 9 09:54:04.116904 kernel: raid6: using algorithm neonx8 gen() 13816 MB/s Feb 9 09:54:04.116912 kernel: raid6: .... xor() 10828 MB/s, rmw enabled Feb 9 09:54:04.121925 kernel: raid6: using neon recovery algorithm Feb 9 09:54:04.140223 kernel: xor: measuring software checksum speed Feb 9 09:54:04.149396 kernel: 8regs : 17286 MB/sec Feb 9 09:54:04.149406 kernel: 32regs : 20749 MB/sec Feb 9 09:54:04.153893 kernel: arm64_neon : 27939 MB/sec Feb 9 09:54:04.153902 kernel: xor: using function: arm64_neon (27939 MB/sec) Feb 9 09:54:04.215227 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Feb 9 09:54:04.223823 systemd[1]: Finished dracut-pre-udev.service. Feb 9 09:54:04.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:04.233000 audit: BPF prog-id=7 op=LOAD Feb 9 09:54:04.233000 audit: BPF prog-id=8 op=LOAD Feb 9 09:54:04.233851 systemd[1]: Starting systemd-udevd.service... 
Feb 9 09:54:04.249176 systemd-udevd[475]: Using default interface naming scheme 'v252'. Feb 9 09:54:04.255492 systemd[1]: Started systemd-udevd.service. Feb 9 09:54:04.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:04.268088 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 09:54:04.283157 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation Feb 9 09:54:04.309907 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 09:54:04.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:04.315857 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 09:54:04.349068 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 09:54:04.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:04.410240 kernel: hv_vmbus: Vmbus version:5.3 Feb 9 09:54:04.425242 kernel: hv_vmbus: registering driver hid_hyperv Feb 9 09:54:04.425297 kernel: hv_vmbus: registering driver hv_storvsc Feb 9 09:54:04.425309 kernel: scsi host1: storvsc_host_t Feb 9 09:54:04.432770 kernel: scsi host0: storvsc_host_t Feb 9 09:54:04.453595 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 9 09:54:04.453677 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 9 09:54:04.453694 kernel: hv_vmbus: registering driver hv_netvsc Feb 9 09:54:04.458231 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 9 09:54:04.473126 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Feb 9 09:54:04.483160 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 9 09:54:04.484256 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Feb 9 09:54:04.514472 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 9 09:54:04.514692 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 09:54:04.516254 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 9 09:54:04.537646 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 9 09:54:04.537857 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 9 09:54:04.543200 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 9 09:54:04.551559 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 9 09:54:04.551716 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 9 09:54:04.564235 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 09:54:04.571260 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 9 09:54:04.596131 kernel: hv_netvsc 0022487b-35c7-0022-487b-35c70022487b eth0: VF slot 1 added Feb 9 09:54:04.612819 kernel: hv_vmbus: registering driver hv_pci Feb 9 09:54:04.612867 kernel: hv_pci 24d93840-8946-4b30-9bf2-2e83f5b79af2: PCI VMBus probing: Using version 0x10004 Feb 9 09:54:04.633165 kernel: hv_pci 24d93840-8946-4b30-9bf2-2e83f5b79af2: PCI host bridge to bus 8946:00 Feb 9 09:54:04.633341 kernel: pci_bus 8946:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Feb 9 09:54:04.640850 kernel: pci_bus 
8946:00: No busn resource found for root bus, will use [bus 00-ff] Feb 9 09:54:04.648846 kernel: pci 8946:00:02.0: [15b3:1018] type 00 class 0x020000 Feb 9 09:54:04.662220 kernel: pci 8946:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 9 09:54:04.685639 kernel: pci 8946:00:02.0: enabling Extended Tags Feb 9 09:54:04.711747 kernel: pci 8946:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8946:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Feb 9 09:54:04.711938 kernel: pci_bus 8946:00: busn_res: [bus 00-ff] end is updated to 00 Feb 9 09:54:04.712027 kernel: pci 8946:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 9 09:54:04.761253 kernel: mlx5_core 8946:00:02.0: firmware version: 16.30.1284 Feb 9 09:54:05.006232 kernel: mlx5_core 8946:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Feb 9 09:54:05.071582 kernel: hv_netvsc 0022487b-35c7-0022-487b-35c70022487b eth0: VF registering: eth1 Feb 9 09:54:05.071774 kernel: mlx5_core 8946:00:02.0 eth1: joined to eth0 Feb 9 09:54:05.089264 kernel: mlx5_core 8946:00:02.0 enP35142s1: renamed from eth1 Feb 9 09:54:05.209607 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 09:54:05.244301 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (542) Feb 9 09:54:05.256902 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 09:54:05.466918 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 09:54:05.475137 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 09:54:05.505886 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 09:54:05.516323 systemd[1]: Starting disk-uuid.service... Feb 9 09:54:05.550706 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 09:54:05.581254 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 09:54:06.567110 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 09:54:06.567165 disk-uuid[602]: The operation has completed successfully. Feb 9 09:54:06.630353 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 09:54:06.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:06.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:06.630444 systemd[1]: Finished disk-uuid.service. Feb 9 09:54:06.640600 systemd[1]: Starting verity-setup.service... Feb 9 09:54:06.689305 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 9 09:54:06.989581 systemd[1]: Found device dev-mapper-usr.device. Feb 9 09:54:06.999994 systemd[1]: Finished verity-setup.service. Feb 9 09:54:07.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:07.005863 systemd[1]: Mounting sysusr-usr.mount... Feb 9 09:54:07.038924 kernel: kauditd_printk_skb: 9 callbacks suppressed Feb 9 09:54:07.038950 kernel: audit: type=1130 audit(1707472447.004:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:07.089234 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 09:54:07.089902 systemd[1]: Mounted sysusr-usr.mount. Feb 9 09:54:07.094398 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 09:54:07.095126 systemd[1]: Starting ignition-setup.service... Feb 9 09:54:07.103593 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 09:54:07.147304 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 09:54:07.147357 kernel: BTRFS info (device sda6): using free space tree Feb 9 09:54:07.152925 kernel: BTRFS info (device sda6): has skinny extents Feb 9 09:54:07.199608 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 09:54:07.249283 kernel: audit: type=1130 audit(1707472447.204:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:07.249317 kernel: audit: type=1334 audit(1707472447.210:22): prog-id=9 op=LOAD Feb 9 09:54:07.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:07.210000 audit: BPF prog-id=9 op=LOAD Feb 9 09:54:07.211285 systemd[1]: Starting systemd-networkd.service... Feb 9 09:54:07.260666 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 09:54:07.303182 kernel: audit: type=1130 audit(1707472447.272:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:07.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:07.262254 systemd-networkd[870]: lo: Link UP Feb 9 09:54:07.262258 systemd-networkd[870]: lo: Gained carrier Feb 9 09:54:07.262628 systemd-networkd[870]: Enumeration completed Feb 9 09:54:07.266550 systemd[1]: Started systemd-networkd.service. Feb 9 09:54:07.357702 kernel: audit: type=1130 audit(1707472447.330:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:07.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:07.266654 systemd-networkd[870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:54:07.399800 kernel: audit: type=1130 audit(1707472447.370:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:07.399824 kernel: mlx5_core 8946:00:02.0 enP35142s1: Link up Feb 9 09:54:07.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:07.400014 iscsid[879]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 09:54:07.400014 iscsid[879]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 09:54:07.400014 iscsid[879]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 09:54:07.400014 iscsid[879]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 09:54:07.400014 iscsid[879]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 09:54:07.400014 iscsid[879]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 09:54:07.400014 iscsid[879]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 09:54:07.552440 kernel: audit: type=1130 audit(1707472447.427:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:07.552466 kernel: hv_netvsc 0022487b-35c7-0022-487b-35c70022487b eth0: Data path switched to VF: enP35142s1 Feb 9 09:54:07.552606 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:54:07.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:07.272558 systemd[1]: Reached target network.target. Feb 9 09:54:07.304188 systemd[1]: Starting iscsiuio.service... Feb 9 09:54:07.321092 systemd[1]: Started iscsiuio.service. Feb 9 09:54:07.339110 systemd[1]: Starting iscsid.service... Feb 9 09:54:07.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:07.366435 systemd[1]: Started iscsid.service. Feb 9 09:54:07.619084 kernel: audit: type=1130 audit(1707472447.584:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:07.388717 systemd[1]: Starting dracut-initqueue.service... Feb 9 09:54:07.648973 kernel: audit: type=1130 audit(1707472447.614:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:07.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:07.421791 systemd[1]: Finished dracut-initqueue.service. Feb 9 09:54:07.427355 systemd[1]: Reached target remote-fs-pre.target. Feb 9 09:54:07.488346 systemd-networkd[870]: enP35142s1: Link UP Feb 9 09:54:07.488420 systemd-networkd[870]: eth0: Link UP Feb 9 09:54:07.488541 systemd-networkd[870]: eth0: Gained carrier Feb 9 09:54:07.512034 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 09:54:07.518168 systemd-networkd[870]: enP35142s1: Gained carrier Feb 9 09:54:07.534905 systemd[1]: Reached target remote-fs.target.
Feb 9 09:54:07.543314 systemd-networkd[870]: eth0: DHCPv4 address 10.200.20.13/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 09:54:07.559321 systemd[1]: Starting dracut-pre-mount.service... Feb 9 09:54:07.574856 systemd[1]: Finished ignition-setup.service. Feb 9 09:54:07.584719 systemd[1]: Finished dracut-pre-mount.service. Feb 9 09:54:07.615990 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 09:54:09.396310 systemd-networkd[870]: eth0: Gained IPv6LL Feb 9 09:54:10.783304 ignition[894]: Ignition 2.14.0 Feb 9 09:54:10.787098 ignition[894]: Stage: fetch-offline Feb 9 09:54:10.787183 ignition[894]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:54:10.787230 ignition[894]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:54:10.919783 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:54:10.919926 ignition[894]: parsed url from cmdline: "" Feb 9 09:54:10.919930 ignition[894]: no config URL provided Feb 9 09:54:10.919935 ignition[894]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 09:54:10.975534 kernel: audit: type=1130 audit(1707472450.947:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:10.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:10.937653 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 09:54:10.919943 ignition[894]: no config at "/usr/lib/ignition/user.ign" Feb 9 09:54:10.948750 systemd[1]: Starting ignition-fetch.service... 
Feb 9 09:54:10.919949 ignition[894]: failed to fetch config: resource requires networking Feb 9 09:54:10.920059 ignition[894]: Ignition finished successfully Feb 9 09:54:10.979593 ignition[900]: Ignition 2.14.0 Feb 9 09:54:10.979600 ignition[900]: Stage: fetch Feb 9 09:54:10.979698 ignition[900]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:54:10.979719 ignition[900]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:54:10.995653 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:54:10.995793 ignition[900]: parsed url from cmdline: "" Feb 9 09:54:10.995797 ignition[900]: no config URL provided Feb 9 09:54:10.995802 ignition[900]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 09:54:10.995809 ignition[900]: no config at "/usr/lib/ignition/user.ign" Feb 9 09:54:10.995838 ignition[900]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 9 09:54:11.138983 ignition[900]: GET result: OK Feb 9 09:54:11.139092 ignition[900]: config has been read from IMDS userdata Feb 9 09:54:11.139157 ignition[900]: parsing config with SHA512: fab22abfc25926859718bb289ea9398a60676b9a361838c7773b2f6c5c2367eaf17c09519f6332562218b3ae200294c99ba19cda9af71c2d1b0a3b856bb9ea89 Feb 9 09:54:11.170789 unknown[900]: fetched base config from "system" Feb 9 09:54:11.176017 unknown[900]: fetched base config from "system" Feb 9 09:54:11.176025 unknown[900]: fetched user config from "azure" Feb 9 09:54:11.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:11.176697 ignition[900]: fetch: fetch complete Feb 9 09:54:11.182549 systemd[1]: Finished ignition-fetch.service. Feb 9 09:54:11.176705 ignition[900]: fetch: fetch passed Feb 9 09:54:11.189095 systemd[1]: Starting ignition-kargs.service... Feb 9 09:54:11.176760 ignition[900]: Ignition finished successfully Feb 9 09:54:11.204680 ignition[906]: Ignition 2.14.0 Feb 9 09:54:11.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:11.224796 systemd[1]: Finished ignition-kargs.service. Feb 9 09:54:11.204686 ignition[906]: Stage: kargs Feb 9 09:54:11.231493 systemd[1]: Starting ignition-disks.service... Feb 9 09:54:11.204784 ignition[906]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:54:11.204802 ignition[906]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:54:11.213538 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:54:11.215150 ignition[906]: kargs: kargs passed Feb 9 09:54:11.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:11.273717 systemd[1]: Finished ignition-disks.service. Feb 9 09:54:11.215199 ignition[906]: Ignition finished successfully Feb 9 09:54:11.280076 systemd[1]: Reached target initrd-root-device.target. 
Feb 9 09:54:11.257010 ignition[912]: Ignition 2.14.0 Feb 9 09:54:11.290232 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:54:11.257016 ignition[912]: Stage: disks Feb 9 09:54:11.301224 systemd[1]: Reached target local-fs.target. Feb 9 09:54:11.257129 ignition[912]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:54:11.311558 systemd[1]: Reached target sysinit.target. Feb 9 09:54:11.257151 ignition[912]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:54:11.321907 systemd[1]: Reached target basic.target. Feb 9 09:54:11.267894 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:54:11.331799 systemd[1]: Starting systemd-fsck-root.service... Feb 9 09:54:11.270162 ignition[912]: disks: disks passed Feb 9 09:54:11.270229 ignition[912]: Ignition finished successfully Feb 9 09:54:11.451117 systemd-fsck[920]: ROOT: clean, 602/7326000 files, 481069/7359488 blocks Feb 9 09:54:11.467819 systemd[1]: Finished systemd-fsck-root.service. Feb 9 09:54:11.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:11.475121 systemd[1]: Mounting sysroot.mount... Feb 9 09:54:11.507233 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 09:54:11.508094 systemd[1]: Mounted sysroot.mount. Feb 9 09:54:11.513136 systemd[1]: Reached target initrd-root-fs.target. Feb 9 09:54:11.563015 systemd[1]: Mounting sysroot-usr.mount... Feb 9 09:54:11.568499 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 9 09:54:11.577609 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 09:54:11.577641 systemd[1]: Reached target ignition-diskful.target. Feb 9 09:54:11.584545 systemd[1]: Mounted sysroot-usr.mount. Feb 9 09:54:11.629677 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 09:54:11.635616 systemd[1]: Starting initrd-setup-root.service... Feb 9 09:54:11.667260 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (931) Feb 9 09:54:11.675240 initrd-setup-root[936]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 09:54:11.687695 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 09:54:11.687727 kernel: BTRFS info (device sda6): using free space tree Feb 9 09:54:11.693461 kernel: BTRFS info (device sda6): has skinny extents Feb 9 09:54:11.697647 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 09:54:11.724224 initrd-setup-root[962]: cut: /sysroot/etc/group: No such file or directory Feb 9 09:54:11.734616 initrd-setup-root[970]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 09:54:11.744969 initrd-setup-root[978]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 09:54:12.254991 systemd[1]: Finished initrd-setup-root.service. Feb 9 09:54:12.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:12.275219 kernel: kauditd_printk_skb: 4 callbacks suppressed Feb 9 09:54:12.275268 kernel: audit: type=1130 audit(1707472452.260:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:12.270916 systemd[1]: Starting ignition-mount.service... Feb 9 09:54:12.302868 systemd[1]: Starting sysroot-boot.service... Feb 9 09:54:12.310850 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 09:54:12.314117 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 09:54:12.336778 ignition[1000]: INFO : Ignition 2.14.0 Feb 9 09:54:12.338352 systemd[1]: Finished sysroot-boot.service. Feb 9 09:54:12.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:12.373571 ignition[1000]: INFO : Stage: mount Feb 9 09:54:12.373571 ignition[1000]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:54:12.373571 ignition[1000]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:54:12.373571 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:54:12.431775 kernel: audit: type=1130 audit(1707472452.349:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:12.431798 kernel: audit: type=1130 audit(1707472452.390:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:12.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:12.378264 systemd[1]: Finished ignition-mount.service. Feb 9 09:54:12.436342 ignition[1000]: INFO : mount: mount passed Feb 9 09:54:12.436342 ignition[1000]: INFO : Ignition finished successfully Feb 9 09:54:12.993803 coreos-metadata[930]: Feb 09 09:54:12.993 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 9 09:54:13.004747 coreos-metadata[930]: Feb 09 09:54:13.004 INFO Fetch successful Feb 9 09:54:13.037440 coreos-metadata[930]: Feb 09 09:54:13.037 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 9 09:54:13.065664 coreos-metadata[930]: Feb 09 09:54:13.065 INFO Fetch successful Feb 9 09:54:13.081790 coreos-metadata[930]: Feb 09 09:54:13.081 INFO wrote hostname ci-3510.3.2-a-f1c369a1bc to /sysroot/etc/hostname Feb 9 09:54:13.092170 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 9 09:54:13.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:13.121423 systemd[1]: Starting ignition-files.service... 
Feb 9 09:54:13.132544 kernel: audit: type=1130 audit(1707472453.097:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:13.133325 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 09:54:13.161854 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1010) Feb 9 09:54:13.161896 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 09:54:13.161906 kernel: BTRFS info (device sda6): using free space tree Feb 9 09:54:13.174441 kernel: BTRFS info (device sda6): has skinny extents Feb 9 09:54:13.179452 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 09:54:13.198172 ignition[1029]: INFO : Ignition 2.14.0 Feb 9 09:54:13.198172 ignition[1029]: INFO : Stage: files Feb 9 09:54:13.211487 ignition[1029]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:54:13.211487 ignition[1029]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:54:13.211487 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:54:13.211487 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping Feb 9 09:54:13.251685 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 09:54:13.251685 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 09:54:13.330492 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 09:54:13.340597 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 09:54:13.350616 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 09:54:13.349721 unknown[1029]: wrote ssh authorized keys file for user: core Feb 9 09:54:13.366968 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Feb 9 09:54:13.366968 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1 Feb 9 09:54:13.815005 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 09:54:13.980185 ignition[1029]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251 Feb 9 09:54:14.000788 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Feb 9 09:54:14.000788 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 09:54:14.000788 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 9 09:54:14.259669 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 09:54:14.526644 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 09:54:14.539680 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Feb 9 09:54:14.539680 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1 Feb 9 09:54:14.934369 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 09:54:15.191158 ignition[1029]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a Feb 9 09:54:15.209932 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Feb 9 09:54:15.209932 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 09:54:15.209932 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubectl: attempt #1 Feb 9 09:54:15.402567 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 09:54:15.730498 ignition[1029]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 14be61ec35669a27acf2df0380afb85b9b42311d50ca1165718421c5f605df1119ec9ae314696a674051712e80deeaa65e62d2d62ed4d107fe99d0aaf419dafc Feb 9 09:54:15.749813 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 09:54:15.749813 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 09:54:15.749813 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubeadm: attempt #1 Feb 9 09:54:15.807332 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 09:54:16.103231 ignition[1029]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 45b3100984c979ba0f1c0df8f4211474c2d75ebe916e677dff5fc8e3b3697cf7a953da94e356f39684cc860dff6878b772b7514c55651c2f866d9efeef23f970 Feb 9 09:54:16.122039 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 09:54:16.122039 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 09:54:16.122039 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubelet: attempt #1 Feb 9 09:54:16.162960 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 09:54:16.822163 ignition[1029]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 71857ff499ae135fa478e1827a0ed8865e578a8d2b1e25876e914fd0beba03733801c0654bcd4c0567bafeb16887dafb2dbbe8d1116e6ea28dcd8366c142d348 Feb 9 09:54:16.843579 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 09:54:16.843579 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 09:54:16.843579 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 09:54:16.843579 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/home/core/install.sh" Feb 9 09:54:16.843579 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 09:54:16.843579 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 09:54:16.843579 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 09:54:16.843579 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 09:54:16.843579 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 09:54:16.843579 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 09:54:16.843579 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 09:54:16.843579 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:54:16.843579 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:54:16.843579 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 09:54:16.843579 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 09:54:17.116378 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1029) Feb 9 09:54:17.116405 kernel: audit: type=1130 audit(1707472456.949:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.116421 kernel: audit: type=1130 audit(1707472457.028:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.116432 kernel: audit: type=1131 audit(1707472457.056:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:16.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 09:54:17.116532 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1849788075" Feb 9 09:54:17.116532 ignition[1029]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1849788075": device or resource busy Feb 9 09:54:17.116532 ignition[1029]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1849788075", trying btrfs: device or resource busy Feb 9 09:54:17.116532 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1849788075" Feb 9 09:54:17.116532 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1849788075" Feb 9 09:54:17.116532 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem1849788075" Feb 9 09:54:17.116532 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem1849788075" Feb 9 09:54:17.116532 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 09:54:17.116532 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 09:54:17.116532 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 09:54:17.116532 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3337873509" Feb 9 09:54:17.116532 ignition[1029]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3337873509": device or resource busy Feb 9 09:54:17.116532 ignition[1029]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3337873509", trying btrfs: device or resource busy Feb 9 09:54:17.116532 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3337873509" Feb 9 09:54:17.402351 kernel: audit: type=1130 audit(1707472457.179:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.402380 kernel: audit: type=1130 audit(1707472457.290:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.402390 kernel: audit: type=1131 audit(1707472457.323:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:17.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:16.916231 systemd[1]: mnt-oem1849788075.mount: Deactivated successfully. Feb 9 09:54:17.409438 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3337873509" Feb 9 09:54:17.409438 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] unmounting "/mnt/oem3337873509" Feb 9 09:54:17.409438 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem3337873509" Feb 9 09:54:17.409438 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 09:54:17.409438 ignition[1029]: INFO : files: op(17): [started] processing unit "waagent.service" Feb 9 09:54:17.409438 ignition[1029]: INFO : files: op(17): [finished] processing unit "waagent.service" Feb 9 09:54:17.409438 ignition[1029]: INFO : files: op(18): [started] processing unit "nvidia.service" Feb 9 09:54:17.409438 ignition[1029]: INFO : files: op(18): [finished] processing unit "nvidia.service" Feb 9 09:54:17.409438 ignition[1029]: INFO : files: op(19): [started] processing unit "prepare-cni-plugins.service" Feb 9 09:54:17.409438 ignition[1029]: INFO : files: op(19): op(1a): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:54:17.409438 ignition[1029]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:54:17.409438 ignition[1029]: INFO : files: op(19): [finished] processing unit "prepare-cni-plugins.service" Feb 9 09:54:17.409438 ignition[1029]: INFO : files: op(1b): [started] processing unit "prepare-critools.service" Feb 9 09:54:17.409438 ignition[1029]: INFO : files: op(1b): op(1c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:54:17.409438 ignition[1029]: INFO : files: op(1b): op(1c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:54:17.409438 ignition[1029]: INFO : files: op(1b): [finished] processing unit "prepare-critools.service" Feb 9 09:54:17.409438 ignition[1029]: INFO : files: op(1d): [started] processing unit "prepare-helm.service" Feb 9 09:54:17.409438 ignition[1029]: INFO : files: op(1d): op(1e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 09:54:17.409438 ignition[1029]: INFO : files: op(1d): op(1e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 09:54:17.721454 kernel: audit: type=1130 audit(1707472457.483:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:17.721483 kernel: audit: type=1131 audit(1707472457.594:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:16.933652 systemd[1]: Finished ignition-files.service. Feb 9 09:54:17.731285 ignition[1029]: INFO : files: op(1d): [finished] processing unit "prepare-helm.service" Feb 9 09:54:17.731285 ignition[1029]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-critools.service" Feb 9 09:54:17.731285 ignition[1029]: INFO : files: op(1f): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 09:54:17.731285 ignition[1029]: INFO : files: op(20): [started] setting preset to enabled for "prepare-helm.service" Feb 9 09:54:17.731285 ignition[1029]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 09:54:17.731285 ignition[1029]: INFO : files: op(21): [started] setting preset to enabled for "waagent.service" Feb 9 09:54:17.731285 ignition[1029]: INFO : files: op(21): [finished] setting preset to enabled for "waagent.service" Feb 9 09:54:17.731285 ignition[1029]: INFO : files: op(22): [started] setting preset to enabled for "nvidia.service" Feb 9 09:54:17.731285 ignition[1029]: INFO : files: op(22): [finished] setting preset to enabled for "nvidia.service" Feb 9 09:54:17.731285 ignition[1029]: INFO : files: op(23): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:54:17.731285 ignition[1029]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:54:17.731285 ignition[1029]: INFO : files: createResultFile: createFiles: op(24): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:54:17.731285 ignition[1029]: INFO : files: createResultFile: createFiles: op(24): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:54:17.731285 ignition[1029]: INFO : files: files passed Feb 9 09:54:17.731285 ignition[1029]: INFO : Ignition finished successfully Feb 9 09:54:18.023498 kernel: audit: type=1131 audit(1707472457.790:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:18.023526 kernel: audit: type=1131 audit(1707472457.838:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:18.023537 kernel: audit: type=1131 audit(1707472457.870:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:18.023547 kernel: audit: type=1131 audit(1707472457.903:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:18.023556 kernel: audit: type=1131 audit(1707472457.941:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:16.952613 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 09:54:18.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:18.052116 initrd-setup-root-after-ignition[1054]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 09:54:18.065254 kernel: audit: type=1131 audit(1707472458.028:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:18.065372 iscsid[879]: iscsid shutting down. Feb 9 09:54:16.984652 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 09:54:18.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:18.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:18.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:16.985603 systemd[1]: Starting ignition-quench.service... Feb 9 09:54:17.012593 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 09:54:18.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:18.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:17.012690 systemd[1]: Finished ignition-quench.service. Feb 9 09:54:18.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.173497 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 09:54:18.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.180323 systemd[1]: Reached target ignition-complete.target. Feb 9 09:54:18.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.230345 systemd[1]: Starting initrd-parse-etc.service... Feb 9 09:54:18.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:18.165060 ignition[1067]: INFO : Ignition 2.14.0 Feb 9 09:54:18.165060 ignition[1067]: INFO : Stage: umount Feb 9 09:54:18.165060 ignition[1067]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:54:18.165060 ignition[1067]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:54:18.165060 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:54:18.165060 ignition[1067]: INFO : umount: umount passed Feb 9 09:54:18.165060 ignition[1067]: INFO : Ignition finished successfully Feb 9 09:54:18.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.274505 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 09:54:18.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.274635 systemd[1]: Finished initrd-parse-etc.service. Feb 9 09:54:17.323887 systemd[1]: Reached target initrd-fs.target. Feb 9 09:54:18.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.344656 systemd[1]: Reached target initrd.target. Feb 9 09:54:18.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.385177 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 09:54:17.386120 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 09:54:17.470174 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 09:54:18.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.485089 systemd[1]: Starting initrd-cleanup.service... 
Feb 9 09:54:17.528399 systemd[1]: Stopped target nss-lookup.target. Feb 9 09:54:17.541921 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 09:54:18.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.561229 systemd[1]: Stopped target timers.target. Feb 9 09:54:17.579956 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 09:54:17.580017 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 09:54:17.594573 systemd[1]: Stopped target initrd.target. Feb 9 09:54:18.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.627235 systemd[1]: Stopped target basic.target. Feb 9 09:54:18.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.644848 systemd[1]: Stopped target ignition-complete.target. Feb 9 09:54:18.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.663861 systemd[1]: Stopped target ignition-diskful.target. Feb 9 09:54:17.678314 systemd[1]: Stopped target initrd-root-device.target. Feb 9 09:54:18.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.691794 systemd[1]: Stopped target remote-fs.target. Feb 9 09:54:18.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.709056 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 09:54:18.432000 audit: BPF prog-id=6 op=UNLOAD Feb 9 09:54:17.726512 systemd[1]: Stopped target sysinit.target. Feb 9 09:54:18.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.735911 systemd[1]: Stopped target local-fs.target. Feb 9 09:54:18.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.748676 systemd[1]: Stopped target local-fs-pre.target. Feb 9 09:54:18.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.763071 systemd[1]: Stopped target swap.target. Feb 9 09:54:18.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.776936 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Feb 9 09:54:18.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:18.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:17.777000 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 09:54:18.513722 kernel: hv_netvsc 0022487b-35c7-0022-487b-35c70022487b eth0: Data path switched from VF: enP35142s1 Feb 9 09:54:17.791013 systemd[1]: Stopped target cryptsetup.target. Feb 9 09:54:17.825021 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 09:54:17.825074 systemd[1]: Stopped dracut-initqueue.service. Feb 9 09:54:17.838591 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 09:54:17.838629 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 09:54:17.870588 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 09:54:17.870633 systemd[1]: Stopped ignition-files.service. Feb 9 09:54:17.903371 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 9 09:54:17.903419 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 9 09:54:18.004534 systemd[1]: Stopping ignition-mount.service... Feb 9 09:54:18.009292 systemd[1]: Stopping iscsid.service... Feb 9 09:54:18.019589 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 09:54:18.019663 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 09:54:18.064631 systemd[1]: Stopping sysroot-boot.service... Feb 9 09:54:18.075611 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 09:54:18.075680 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 09:54:18.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:18.081444 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 09:54:18.081485 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 09:54:18.100918 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 09:54:18.101013 systemd[1]: Stopped iscsid.service. Feb 9 09:54:18.108509 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 09:54:18.108600 systemd[1]: Finished initrd-cleanup.service. Feb 9 09:54:18.115935 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 09:54:18.116011 systemd[1]: Stopped ignition-mount.service. Feb 9 09:54:18.128809 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 09:54:18.129208 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 09:54:18.678747 systemd-journald[276]: Received SIGTERM from PID 1 (systemd). Feb 9 09:54:18.129261 systemd[1]: Stopped ignition-disks.service. Feb 9 09:54:18.137550 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 09:54:18.137591 systemd[1]: Stopped ignition-kargs.service. Feb 9 09:54:18.151621 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 09:54:18.151664 systemd[1]: Stopped ignition-fetch.service. Feb 9 09:54:18.160374 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 09:54:18.160416 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 09:54:18.170611 systemd[1]: Stopped target paths.target. 
Feb 9 09:54:18.179505 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 09:54:18.189233 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 09:54:18.197716 systemd[1]: Stopped target slices.target. Feb 9 09:54:18.216986 systemd[1]: Stopped target sockets.target. Feb 9 09:54:18.229360 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 09:54:18.229408 systemd[1]: Closed iscsid.socket. Feb 9 09:54:18.238372 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 09:54:18.238412 systemd[1]: Stopped ignition-setup.service. Feb 9 09:54:18.249011 systemd[1]: Stopping iscsiuio.service... Feb 9 09:54:18.260079 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 09:54:18.260176 systemd[1]: Stopped iscsiuio.service. Feb 9 09:54:18.269106 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 09:54:18.269186 systemd[1]: Stopped sysroot-boot.service. Feb 9 09:54:18.278152 systemd[1]: Stopped target network.target. Feb 9 09:54:18.286947 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 09:54:18.286978 systemd[1]: Closed iscsiuio.socket. Feb 9 09:54:18.299547 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 09:54:18.299591 systemd[1]: Stopped initrd-setup-root.service. Feb 9 09:54:18.309282 systemd[1]: Stopping systemd-networkd.service... Feb 9 09:54:18.317832 systemd-networkd[870]: eth0: DHCPv6 lease lost Feb 9 09:54:18.679000 audit: BPF prog-id=9 op=UNLOAD Feb 9 09:54:18.318992 systemd[1]: Stopping systemd-resolved.service... Feb 9 09:54:18.328088 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 09:54:18.328190 systemd[1]: Stopped systemd-networkd.service. Feb 9 09:54:18.337991 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 09:54:18.338026 systemd[1]: Closed systemd-networkd.socket. Feb 9 09:54:18.350184 systemd[1]: Stopping network-cleanup.service... Feb 9 09:54:18.364543 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 09:54:18.364615 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 09:54:18.374748 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:54:18.374792 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:54:18.387433 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 09:54:18.387479 systemd[1]: Stopped systemd-modules-load.service. Feb 9 09:54:18.393268 systemd[1]: Stopping systemd-udevd.service... Feb 9 09:54:18.403867 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 09:54:18.404349 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 09:54:18.404440 systemd[1]: Stopped systemd-resolved.service. Feb 9 09:54:18.413365 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 09:54:18.413476 systemd[1]: Stopped systemd-udevd.service. Feb 9 09:54:18.424089 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 09:54:18.424130 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 09:54:18.433118 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 09:54:18.433150 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 09:54:18.438486 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 09:54:18.438537 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 09:54:18.448160 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 09:54:18.448198 systemd[1]: Stopped dracut-cmdline.service. 
Feb 9 09:54:18.457484 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 09:54:18.457516 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 09:54:18.467580 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 09:54:18.477114 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 09:54:18.477170 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 09:54:18.483269 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 09:54:18.483370 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 09:54:18.596474 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 09:54:18.596584 systemd[1]: Stopped network-cleanup.service. Feb 9 09:54:18.602179 systemd[1]: Reached target initrd-switch-root.target. Feb 9 09:54:18.613351 systemd[1]: Starting initrd-switch-root.service... Feb 9 09:54:18.631397 systemd[1]: Switching root. Feb 9 09:54:18.680453 systemd-journald[276]: Journal stopped Feb 9 09:54:30.860582 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 09:54:30.860602 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 09:54:30.860612 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 09:54:30.860621 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 09:54:30.860629 kernel: SELinux: policy capability open_perms=1 Feb 9 09:54:30.860637 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 09:54:30.860646 kernel: SELinux: policy capability always_check_network=0 Feb 9 09:54:30.860654 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 09:54:30.860662 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 09:54:30.860669 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 09:54:30.860678 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 09:54:30.860687 systemd[1]: Successfully loaded SELinux policy in 300.360ms. Feb 9 09:54:30.860697 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.148ms. Feb 9 09:54:30.860707 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:54:30.860719 systemd[1]: Detected virtualization microsoft. Feb 9 09:54:30.860728 systemd[1]: Detected architecture arm64. Feb 9 09:54:30.860736 systemd[1]: Detected first boot. Feb 9 09:54:30.860746 systemd[1]: Hostname set to . Feb 9 09:54:30.860754 systemd[1]: Initializing machine ID from random generator. Feb 9 09:54:30.860763 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Feb 9 09:54:30.860771 kernel: kauditd_printk_skb: 36 callbacks suppressed Feb 9 09:54:30.860782 kernel: audit: type=1400 audit(1707472463.021:88): avc: denied { associate } for pid=1100 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 09:54:30.860793 kernel: audit: type=1300 audit(1707472463.021:88): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458a2 a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1083 pid=1100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:30.860803 kernel: audit: type=1327 audit(1707472463.021:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:54:30.860812 kernel: audit: type=1400 audit(1707472463.032:89): avc: denied { associate } for pid=1100 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 09:54:30.860821 kernel: audit: type=1300 audit(1707472463.032:89): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145979 a2=1ed a3=0 items=2 ppid=1083 pid=1100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:30.860830 kernel: audit: type=1307 audit(1707472463.032:89): cwd="/" Feb 9 09:54:30.860841 kernel: audit: type=1302 audit(1707472463.032:89): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:30.860849 kernel: audit: type=1302 audit(1707472463.032:89): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:30.860859 kernel: audit: type=1327 audit(1707472463.032:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:54:30.860867 systemd[1]: Populated /etc with preset unit settings. Feb 9 09:54:30.860877 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:54:30.860886 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:54:30.860896 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 9 09:54:30.860906 kernel: audit: type=1334 audit(1707472470.078:90): prog-id=12 op=LOAD Feb 9 09:54:30.860914 kernel: audit: type=1334 audit(1707472470.078:91): prog-id=3 op=UNLOAD Feb 9 09:54:30.860923 kernel: audit: type=1334 audit(1707472470.086:92): prog-id=13 op=LOAD Feb 9 09:54:30.860931 kernel: audit: type=1334 audit(1707472470.093:93): prog-id=14 op=LOAD Feb 9 09:54:30.860940 kernel: audit: type=1334 audit(1707472470.093:94): prog-id=4 op=UNLOAD Feb 9 09:54:30.860948 kernel: audit: type=1334 audit(1707472470.093:95): prog-id=5 op=UNLOAD Feb 9 09:54:30.860958 kernel: audit: type=1334 audit(1707472470.100:96): prog-id=15 op=LOAD Feb 9 09:54:30.860967 kernel: audit: type=1334 audit(1707472470.100:97): prog-id=12 op=UNLOAD Feb 9 09:54:30.860977 kernel: audit: type=1334 audit(1707472470.107:98): prog-id=16 op=LOAD Feb 9 09:54:30.860986 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 09:54:30.860995 kernel: audit: type=1334 audit(1707472470.114:99): prog-id=17 op=LOAD Feb 9 09:54:30.861004 systemd[1]: Stopped initrd-switch-root.service. Feb 9 09:54:30.861013 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 09:54:30.861023 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 09:54:30.861032 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 09:54:30.861043 systemd[1]: Created slice system-getty.slice. Feb 9 09:54:30.861052 systemd[1]: Created slice system-modprobe.slice. Feb 9 09:54:30.861061 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 09:54:30.861070 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 09:54:30.861080 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 09:54:30.861089 systemd[1]: Created slice user.slice. Feb 9 09:54:30.861098 systemd[1]: Started systemd-ask-password-console.path. Feb 9 09:54:30.861108 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 09:54:30.861117 systemd[1]: Set up automount boot.automount. Feb 9 09:54:30.861127 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 09:54:30.861137 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 09:54:30.861146 systemd[1]: Stopped target initrd-fs.target. Feb 9 09:54:30.861155 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 09:54:30.861164 systemd[1]: Reached target integritysetup.target. Feb 9 09:54:30.861174 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 09:54:30.861184 systemd[1]: Reached target remote-fs.target. Feb 9 09:54:30.861193 systemd[1]: Reached target slices.target. Feb 9 09:54:30.861204 systemd[1]: Reached target swap.target. Feb 9 09:54:30.861228 systemd[1]: Reached target torcx.target. Feb 9 09:54:30.861238 systemd[1]: Reached target veritysetup.target. Feb 9 09:54:30.861248 systemd[1]: Listening on systemd-coredump.socket. Feb 9 09:54:30.861257 systemd[1]: Listening on systemd-initctl.socket. Feb 9 09:54:30.861266 systemd[1]: Listening on systemd-networkd.socket. Feb 9 09:54:30.861277 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 09:54:30.861286 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 09:54:30.861295 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 09:54:30.861305 systemd[1]: Mounting dev-hugepages.mount... Feb 9 09:54:30.861314 systemd[1]: Mounting dev-mqueue.mount... Feb 9 09:54:30.861324 systemd[1]: Mounting media.mount... Feb 9 09:54:30.861333 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 09:54:30.861343 systemd[1]: Mounting sys-kernel-tracing.mount... 
Feb 9 09:54:30.861353 systemd[1]: Mounting tmp.mount... Feb 9 09:54:30.861363 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 09:54:30.861372 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 09:54:30.861382 systemd[1]: Starting kmod-static-nodes.service... Feb 9 09:54:30.861393 systemd[1]: Starting modprobe@configfs.service... Feb 9 09:54:30.861402 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 09:54:30.861412 systemd[1]: Starting modprobe@drm.service... Feb 9 09:54:30.861421 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 09:54:30.861430 systemd[1]: Starting modprobe@fuse.service... Feb 9 09:54:30.861441 systemd[1]: Starting modprobe@loop.service... Feb 9 09:54:30.861450 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 09:54:30.861460 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 09:54:30.861469 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 09:54:30.861478 kernel: loop: module loaded Feb 9 09:54:30.861488 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 09:54:30.861497 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 09:54:30.861506 systemd[1]: Stopped systemd-journald.service. Feb 9 09:54:30.861517 systemd[1]: systemd-journald.service: Consumed 3.660s CPU time. Feb 9 09:54:30.861526 systemd[1]: Starting systemd-journald.service... Feb 9 09:54:30.861536 systemd[1]: Starting systemd-modules-load.service... Feb 9 09:54:30.861545 kernel: fuse: init (API version 7.34) Feb 9 09:54:30.861553 systemd[1]: Starting systemd-network-generator.service... Feb 9 09:54:30.861563 systemd[1]: Starting systemd-remount-fs.service... Feb 9 09:54:30.861572 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 09:54:30.861581 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 09:54:30.861591 systemd[1]: Stopped verity-setup.service. Feb 9 09:54:30.861601 systemd[1]: Mounted dev-hugepages.mount. Feb 9 09:54:30.861611 systemd[1]: Mounted dev-mqueue.mount. Feb 9 09:54:30.861620 systemd[1]: Mounted media.mount. Feb 9 09:54:30.861629 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 09:54:30.861641 systemd-journald[1206]: Journal started Feb 9 09:54:30.861678 systemd-journald[1206]: Runtime Journal (/run/log/journal/d3c9fb08dfaf45ceb005e7cc8503ac43) is 8.0M, max 78.6M, 70.6M free. 
Feb 9 09:54:20.983000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 09:54:21.824000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:54:21.824000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:54:21.824000 audit: BPF prog-id=10 op=LOAD Feb 9 09:54:21.825000 audit: BPF prog-id=10 op=UNLOAD Feb 9 09:54:21.825000 audit: BPF prog-id=11 op=LOAD Feb 9 09:54:21.825000 audit: BPF prog-id=11 op=UNLOAD Feb 9 09:54:23.021000 audit[1100]: AVC avc: denied { associate } for pid=1100 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 09:54:23.021000 audit[1100]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458a2 a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1083 pid=1100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:23.021000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:54:23.032000 audit[1100]: AVC avc: denied { associate } for pid=1100 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 09:54:23.032000 audit[1100]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145979 a2=1ed a3=0 items=2 ppid=1083 pid=1100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:23.032000 audit: CWD cwd="/" Feb 9 09:54:23.032000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:23.032000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:23.032000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:54:30.078000 audit: BPF prog-id=12 op=LOAD Feb 9 09:54:30.078000 audit: BPF prog-id=3 op=UNLOAD Feb 9 09:54:30.086000 audit: BPF prog-id=13 op=LOAD Feb 9 09:54:30.093000 audit: BPF prog-id=14 op=LOAD Feb 9 09:54:30.093000 audit: BPF prog-id=4 op=UNLOAD Feb 9 09:54:30.093000 audit: BPF prog-id=5 op=UNLOAD Feb 9 09:54:30.100000 audit: BPF prog-id=15 op=LOAD Feb 9 09:54:30.100000 audit: BPF prog-id=12 op=UNLOAD Feb 9 09:54:30.107000 audit: BPF 
prog-id=16 op=LOAD Feb 9 09:54:30.114000 audit: BPF prog-id=17 op=LOAD Feb 9 09:54:30.114000 audit: BPF prog-id=13 op=UNLOAD Feb 9 09:54:30.114000 audit: BPF prog-id=14 op=UNLOAD Feb 9 09:54:30.120000 audit: BPF prog-id=18 op=LOAD Feb 9 09:54:30.120000 audit: BPF prog-id=15 op=UNLOAD Feb 9 09:54:30.127000 audit: BPF prog-id=19 op=LOAD Feb 9 09:54:30.134000 audit: BPF prog-id=20 op=LOAD Feb 9 09:54:30.134000 audit: BPF prog-id=16 op=UNLOAD Feb 9 09:54:30.134000 audit: BPF prog-id=17 op=UNLOAD Feb 9 09:54:30.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.160000 audit: BPF prog-id=18 op=UNLOAD Feb 9 09:54:30.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.751000 audit: BPF prog-id=21 op=LOAD Feb 9 09:54:30.751000 audit: BPF prog-id=22 op=LOAD Feb 9 09:54:30.751000 audit: BPF prog-id=23 op=LOAD Feb 9 09:54:30.751000 audit: BPF prog-id=19 op=UNLOAD Feb 9 09:54:30.751000 audit: BPF prog-id=20 op=UNLOAD Feb 9 09:54:30.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.857000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 09:54:30.857000 audit[1206]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=fffffd5bf610 a2=4000 a3=1 items=0 ppid=1 pid=1206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:30.857000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 09:54:30.077284 systemd[1]: Queued start job for default target multi-user.target. 
Feb 9 09:54:22.959667 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-09T09:54:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:54:30.135505 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 09:54:22.994072 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-09T09:54:22Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 09:54:30.135880 systemd[1]: systemd-journald.service: Consumed 3.660s CPU time. Feb 9 09:54:22.994092 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-09T09:54:22Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 09:54:22.994131 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-09T09:54:22Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 09:54:22.994141 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-09T09:54:22Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 09:54:22.994173 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-09T09:54:22Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 09:54:22.994185 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-09T09:54:22Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 09:54:22.994414 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-09T09:54:22Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 09:54:22.994448 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-09T09:54:22Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 09:54:22.994460 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-09T09:54:22Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 09:54:22.994815 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-09T09:54:22Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 09:54:22.994850 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-09T09:54:22Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 09:54:22.994867 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-09T09:54:22Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 09:54:22.994881 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-09T09:54:22Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 09:54:22.994897 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-09T09:54:22Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" 
path=/var/lib/torcx/store/3510.3.2 Feb 9 09:54:22.994910 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-09T09:54:22Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 09:54:28.962973 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-09T09:54:28Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:54:28.963241 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-09T09:54:28Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:54:28.963339 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-09T09:54:28Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:54:28.963495 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-09T09:54:28Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:54:28.963543 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-09T09:54:28Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 09:54:28.963595 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-09T09:54:28Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 09:54:30.881604 systemd[1]: Started systemd-journald.service. Feb 9 09:54:30.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.882441 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 09:54:30.888605 systemd[1]: Mounted tmp.mount. Feb 9 09:54:30.892883 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 09:54:30.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.898300 systemd[1]: Finished kmod-static-nodes.service. Feb 9 09:54:30.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:30.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.904185 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 09:54:30.904507 systemd[1]: Finished modprobe@configfs.service. Feb 9 09:54:30.910235 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 09:54:30.910358 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 09:54:30.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.915997 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 09:54:30.916163 systemd[1]: Finished modprobe@drm.service. Feb 9 09:54:30.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.921623 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 09:54:30.921753 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 09:54:30.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.927631 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 09:54:30.927770 systemd[1]: Finished modprobe@fuse.service. Feb 9 09:54:30.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.933028 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 09:54:30.933173 systemd[1]: Finished modprobe@loop.service. Feb 9 09:54:30.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:30.939023 systemd[1]: Finished systemd-modules-load.service. Feb 9 09:54:30.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.945037 systemd[1]: Finished systemd-network-generator.service. Feb 9 09:54:30.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.951275 systemd[1]: Finished systemd-remount-fs.service. Feb 9 09:54:30.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.956941 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 09:54:30.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:30.963522 systemd[1]: Reached target network-pre.target. Feb 9 09:54:30.970230 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 09:54:30.976559 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 09:54:30.981699 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 09:54:30.999734 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 09:54:31.005568 systemd[1]: Starting systemd-journal-flush.service... Feb 9 09:54:31.010983 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 09:54:31.012068 systemd[1]: Starting systemd-random-seed.service... Feb 9 09:54:31.017540 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 09:54:31.018763 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:54:31.024384 systemd[1]: Starting systemd-sysusers.service... Feb 9 09:54:31.030128 systemd[1]: Starting systemd-udev-settle.service... Feb 9 09:54:31.036948 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 09:54:31.042563 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 09:54:31.050500 udevadm[1220]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 09:54:31.070550 systemd-journald[1206]: Time spent on flushing to /var/log/journal/d3c9fb08dfaf45ceb005e7cc8503ac43 is 15.639ms for 1145 entries. Feb 9 09:54:31.070550 systemd-journald[1206]: System Journal (/var/log/journal/d3c9fb08dfaf45ceb005e7cc8503ac43) is 8.0M, max 2.6G, 2.6G free. Feb 9 09:54:31.167926 systemd-journald[1206]: Received client request to flush runtime journal. Feb 9 09:54:31.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:31.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:31.082222 systemd[1]: Finished systemd-random-seed.service. Feb 9 09:54:31.087738 systemd[1]: Reached target first-boot-complete.target. Feb 9 09:54:31.122408 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:54:31.168894 systemd[1]: Finished systemd-journal-flush.service. Feb 9 09:54:31.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:31.566274 systemd[1]: Finished systemd-sysusers.service. Feb 9 09:54:31.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:32.128890 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 09:54:32.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:32.135000 audit: BPF prog-id=24 op=LOAD Feb 9 09:54:32.135000 audit: BPF prog-id=25 op=LOAD Feb 9 09:54:32.135000 audit: BPF prog-id=7 op=UNLOAD Feb 9 09:54:32.135000 audit: BPF prog-id=8 op=UNLOAD Feb 9 09:54:32.136153 systemd[1]: Starting systemd-udevd.service... Feb 9 09:54:32.154276 systemd-udevd[1223]: Using default interface naming scheme 'v252'. Feb 9 09:54:32.321986 systemd[1]: Started systemd-udevd.service. Feb 9 09:54:32.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:32.333000 audit: BPF prog-id=26 op=LOAD Feb 9 09:54:32.334654 systemd[1]: Starting systemd-networkd.service... Feb 9 09:54:32.372900 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Feb 9 09:54:32.425246 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 09:54:32.426198 systemd[1]: Starting systemd-userdbd.service... Feb 9 09:54:32.425000 audit: BPF prog-id=27 op=LOAD Feb 9 09:54:32.425000 audit: BPF prog-id=28 op=LOAD Feb 9 09:54:32.425000 audit: BPF prog-id=29 op=LOAD Feb 9 09:54:32.447772 kernel: hv_utils: Registering HyperV Utility Driver Feb 9 09:54:32.448193 kernel: hv_vmbus: registering driver hv_utils Feb 9 09:54:32.448259 kernel: hv_utils: Heartbeat IC version 3.0 Feb 9 09:54:32.456587 kernel: hv_utils: Shutdown IC version 3.2 Feb 9 09:54:32.456665 kernel: hv_utils: TimeSync IC version 4.0 Feb 9 09:54:32.071557 kernel: hv_vmbus: registering driver hyperv_fb Feb 9 09:54:32.196263 systemd-journald[1206]: Time jumped backwards, rotating. 
Feb 9 09:54:32.196342 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 9 09:54:32.196355 kernel: hv_vmbus: registering driver hv_balloon Feb 9 09:54:32.196370 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 9 09:54:32.196383 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 9 09:54:32.196396 kernel: Console: switching to colour dummy device 80x25 Feb 9 09:54:32.196408 kernel: hv_balloon: Memory hot add disabled on ARM64 Feb 9 09:54:32.196420 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 09:54:32.453000 audit[1226]: AVC avc: denied { confidentiality } for pid=1226 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 09:54:32.453000 audit[1226]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaac2d92690 a1=aa2c a2=ffffbacd24b0 a3=aaaac2cf2010 items=12 ppid=1223 pid=1226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:32.453000 audit: CWD cwd="/" Feb 9 09:54:32.453000 audit: PATH item=0 name=(null) inode=5634 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:32.453000 audit: PATH item=1 name=(null) inode=10913 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:32.453000 audit: PATH item=2 name=(null) inode=10913 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:32.453000 audit: PATH item=3 name=(null) inode=10914 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:32.453000 audit: PATH item=4 name=(null) inode=10913 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:32.453000 audit: PATH item=5 name=(null) inode=10915 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:32.453000 audit: PATH item=6 name=(null) inode=10913 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:32.453000 audit: PATH item=7 name=(null) inode=10916 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:32.453000 audit: PATH item=8 name=(null) inode=10913 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:32.453000 audit: PATH item=9 name=(null) inode=10917 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:32.453000 audit: PATH item=10 name=(null) inode=10913 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:32.453000 audit: PATH item=11 name=(null) inode=10918 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:32.453000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 09:54:32.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:32.139648 systemd[1]: Started systemd-userdbd.service. Feb 9 09:54:32.375379 systemd-networkd[1244]: lo: Link UP Feb 9 09:54:32.375701 systemd-networkd[1244]: lo: Gained carrier Feb 9 09:54:32.376242 systemd-networkd[1244]: Enumeration completed Feb 9 09:54:32.376440 systemd[1]: Started systemd-networkd.service. Feb 9 09:54:32.392498 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1236) Feb 9 09:54:32.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:32.396032 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:54:32.426230 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 09:54:32.432910 systemd-networkd[1244]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:54:32.435951 systemd[1]: Finished systemd-udev-settle.service. Feb 9 09:54:32.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:32.442964 systemd[1]: Starting lvm2-activation-early.service... Feb 9 09:54:32.491501 kernel: mlx5_core 8946:00:02.0 enP35142s1: Link up Feb 9 09:54:32.520501 kernel: hv_netvsc 0022487b-35c7-0022-487b-35c70022487b eth0: Data path switched to VF: enP35142s1 Feb 9 09:54:32.521304 systemd-networkd[1244]: enP35142s1: Link UP Feb 9 09:54:32.521685 systemd-networkd[1244]: eth0: Link UP Feb 9 09:54:32.521698 systemd-networkd[1244]: eth0: Gained carrier Feb 9 09:54:32.525976 systemd-networkd[1244]: enP35142s1: Gained carrier Feb 9 09:54:32.572588 systemd-networkd[1244]: eth0: DHCPv4 address 10.200.20.13/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 09:54:32.712585 lvm[1302]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:54:32.765504 systemd[1]: Finished lvm2-activation-early.service. Feb 9 09:54:32.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:32.772235 systemd[1]: Reached target cryptsetup.target. Feb 9 09:54:32.779259 systemd[1]: Starting lvm2-activation.service... Feb 9 09:54:32.784013 lvm[1303]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:54:32.802411 systemd[1]: Finished lvm2-activation.service. Feb 9 09:54:32.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:32.807959 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:54:32.813492 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 09:54:32.813522 systemd[1]: Reached target local-fs.target. Feb 9 09:54:32.818935 systemd[1]: Reached target machines.target. Feb 9 09:54:32.825555 systemd[1]: Starting ldconfig.service... Feb 9 09:54:32.830186 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 09:54:32.830253 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:54:32.831417 systemd[1]: Starting systemd-boot-update.service... Feb 9 09:54:32.837858 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 09:54:32.846187 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 09:54:32.852271 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:54:32.852332 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:54:32.853449 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 09:54:32.891195 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1305 (bootctl) Feb 9 09:54:32.892600 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 09:54:33.290757 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 09:54:33.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:33.586834 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 09:54:33.654796 systemd-fsck[1313]: fsck.fat 4.2 (2021-01-31) Feb 9 09:54:33.654796 systemd-fsck[1313]: /dev/sda1: 236 files, 113719/258078 clusters Feb 9 09:54:33.656393 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 09:54:33.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:33.665805 systemd[1]: Mounting boot.mount... Feb 9 09:54:33.720122 systemd[1]: Mounted boot.mount. Feb 9 09:54:33.731680 systemd[1]: Finished systemd-boot-update.service. Feb 9 09:54:33.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:33.760654 systemd-networkd[1244]: eth0: Gained IPv6LL Feb 9 09:54:33.765509 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 09:54:33.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.125604 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Feb 9 09:54:34.225430 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 09:54:34.299932 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 09:54:34.300540 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 09:54:34.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.947238 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 09:54:34.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.957649 kernel: kauditd_printk_skb: 84 callbacks suppressed Feb 9 09:54:34.957708 kernel: audit: type=1130 audit(1707472474.951:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.978922 systemd[1]: Starting audit-rules.service... Feb 9 09:54:34.984171 systemd[1]: Starting clean-ca-certificates.service... Feb 9 09:54:34.989963 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 09:54:34.995000 audit: BPF prog-id=30 op=LOAD Feb 9 09:54:34.998227 systemd[1]: Starting systemd-resolved.service... Feb 9 09:54:35.008248 kernel: audit: type=1334 audit(1707472474.995:168): prog-id=30 op=LOAD Feb 9 09:54:35.008000 audit: BPF prog-id=31 op=LOAD Feb 9 09:54:35.016506 systemd[1]: Starting systemd-timesyncd.service... Feb 9 09:54:35.021238 kernel: audit: type=1334 audit(1707472475.008:169): prog-id=31 op=LOAD Feb 9 09:54:35.023085 systemd[1]: Starting systemd-update-utmp.service... Feb 9 09:54:35.079205 systemd[1]: Finished clean-ca-certificates.service. Feb 9 09:54:35.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.085218 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 09:54:35.106490 kernel: audit: type=1130 audit(1707472475.083:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.108000 audit[1330]: SYSTEM_BOOT pid=1330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.130850 systemd[1]: Started systemd-timesyncd.service. Feb 9 09:54:35.132485 kernel: audit: type=1127 audit(1707472475.108:171): pid=1330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:35.137120 systemd[1]: Reached target time-set.target. Feb 9 09:54:35.161487 kernel: audit: type=1130 audit(1707472475.135:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.161800 systemd[1]: Finished systemd-update-utmp.service. Feb 9 09:54:35.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.188533 kernel: audit: type=1130 audit(1707472475.166:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.323307 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 09:54:35.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.351532 kernel: audit: type=1130 audit(1707472475.329:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.446000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:54:35.450103 systemd-resolved[1323]: Positive Trust Anchors: Feb 9 09:54:35.450113 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:54:35.450140 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:54:35.446000 audit[1340]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff0db6d30 a2=420 a3=0 items=0 ppid=1319 pid=1340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:35.489060 kernel: audit: type=1305 audit(1707472475.446:175): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:54:35.489229 kernel: audit: type=1300 audit(1707472475.446:175): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff0db6d30 a2=420 a3=0 items=0 ppid=1319 pid=1340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:35.446000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 09:54:35.489310 augenrules[1340]: No rules Feb 9 09:54:35.490067 systemd[1]: Finished audit-rules.service. 
Feb 9 09:54:35.525368 systemd-resolved[1323]: Using system hostname 'ci-3510.3.2-a-f1c369a1bc'. Feb 9 09:54:35.526865 systemd[1]: Started systemd-resolved.service. Feb 9 09:54:35.532050 systemd[1]: Reached target network.target. Feb 9 09:54:35.536641 systemd[1]: Reached target network-online.target. Feb 9 09:54:35.542234 systemd[1]: Reached target nss-lookup.target. Feb 9 09:54:42.092122 ldconfig[1304]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 09:54:42.125878 systemd[1]: Finished ldconfig.service. Feb 9 09:54:42.132459 systemd[1]: Starting systemd-update-done.service... Feb 9 09:54:42.168371 systemd[1]: Finished systemd-update-done.service. Feb 9 09:54:42.174132 systemd[1]: Reached target sysinit.target. Feb 9 09:54:42.179159 systemd[1]: Started motdgen.path. Feb 9 09:54:42.183782 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 09:54:42.191012 systemd[1]: Started logrotate.timer. Feb 9 09:54:42.195513 systemd[1]: Started mdadm.timer. Feb 9 09:54:42.199702 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 09:54:42.205018 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 09:54:42.205047 systemd[1]: Reached target paths.target. Feb 9 09:54:42.209782 systemd[1]: Reached target timers.target. Feb 9 09:54:42.215327 systemd[1]: Listening on dbus.socket. Feb 9 09:54:42.221339 systemd[1]: Starting docker.socket... Feb 9 09:54:42.227595 systemd[1]: Listening on sshd.socket. Feb 9 09:54:42.232316 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:54:42.232826 systemd[1]: Listening on docker.socket. Feb 9 09:54:42.237580 systemd[1]: Reached target sockets.target. Feb 9 09:54:42.242353 systemd[1]: Reached target basic.target. Feb 9 09:54:42.247062 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:54:42.247089 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:54:42.248187 systemd[1]: Starting containerd.service... Feb 9 09:54:42.253355 systemd[1]: Starting dbus.service... Feb 9 09:54:42.257968 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 09:54:42.263873 systemd[1]: Starting extend-filesystems.service... Feb 9 09:54:42.268787 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 09:54:42.269804 systemd[1]: Starting motdgen.service... Feb 9 09:54:42.274686 systemd[1]: Started nvidia.service. Feb 9 09:54:42.280504 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 09:54:42.286286 systemd[1]: Starting prepare-critools.service... Feb 9 09:54:42.291934 systemd[1]: Starting prepare-helm.service... Feb 9 09:54:42.297373 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 09:54:42.303263 systemd[1]: Starting sshd-keygen.service... Feb 9 09:54:42.311604 systemd[1]: Starting systemd-logind.service... Feb 9 09:54:42.316294 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Feb 9 09:54:42.316357 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 09:54:42.316765 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 09:54:42.317405 systemd[1]: Starting update-engine.service... Feb 9 09:54:42.322834 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 09:54:42.324307 jq[1350]: false Feb 9 09:54:42.327168 jq[1369]: true Feb 9 09:54:42.333421 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 09:54:42.333955 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 09:54:42.347811 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 09:54:42.347979 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 09:54:42.378225 extend-filesystems[1351]: Found sda Feb 9 09:54:42.383595 extend-filesystems[1351]: Found sda1 Feb 9 09:54:42.383595 extend-filesystems[1351]: Found sda2 Feb 9 09:54:42.383595 extend-filesystems[1351]: Found sda3 Feb 9 09:54:42.383595 extend-filesystems[1351]: Found usr Feb 9 09:54:42.383595 extend-filesystems[1351]: Found sda4 Feb 9 09:54:42.383595 extend-filesystems[1351]: Found sda6 Feb 9 09:54:42.383595 extend-filesystems[1351]: Found sda7 Feb 9 09:54:42.383595 extend-filesystems[1351]: Found sda9 Feb 9 09:54:42.383595 extend-filesystems[1351]: Checking size of /dev/sda9 Feb 9 09:54:42.446681 jq[1374]: true Feb 9 09:54:42.395043 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 09:54:42.395222 systemd[1]: Finished motdgen.service. Feb 9 09:54:42.457136 systemd-logind[1364]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Feb 9 09:54:42.457552 systemd-logind[1364]: New seat seat0. Feb 9 09:54:42.467055 env[1381]: time="2024-02-09T09:54:42.467008820Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 09:54:42.499176 env[1381]: time="2024-02-09T09:54:42.499132260Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 09:54:42.499429 env[1381]: time="2024-02-09T09:54:42.499412660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:54:42.500519 env[1381]: time="2024-02-09T09:54:42.500488540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:54:42.500611 env[1381]: time="2024-02-09T09:54:42.500596380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:54:42.500882 env[1381]: time="2024-02-09T09:54:42.500861220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:54:42.500955 env[1381]: time="2024-02-09T09:54:42.500940860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 9 09:54:42.501010 env[1381]: time="2024-02-09T09:54:42.500996180Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 09:54:42.501058 env[1381]: time="2024-02-09T09:54:42.501046220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 09:54:42.501184 env[1381]: time="2024-02-09T09:54:42.501169820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:54:42.501455 env[1381]: time="2024-02-09T09:54:42.501437900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:54:42.502093 env[1381]: time="2024-02-09T09:54:42.502068460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:54:42.502177 env[1381]: time="2024-02-09T09:54:42.502163060Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 09:54:42.502286 env[1381]: time="2024-02-09T09:54:42.502270740Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 09:54:42.502350 env[1381]: time="2024-02-09T09:54:42.502337020Z" level=info msg="metadata content store policy set" policy=shared Feb 9 09:54:42.508250 tar[1371]: ./ Feb 9 09:54:42.508250 tar[1371]: ./loopback Feb 9 09:54:42.510109 tar[1373]: linux-arm64/helm Feb 9 09:54:42.510300 tar[1372]: crictl Feb 9 09:54:42.526099 extend-filesystems[1351]: Old size kept for /dev/sda9 Feb 9 09:54:42.532806 extend-filesystems[1351]: Found sr0 Feb 9 09:54:42.526728 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 09:54:42.568133 env[1381]: time="2024-02-09T09:54:42.550554300Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 09:54:42.568133 env[1381]: time="2024-02-09T09:54:42.550595740Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 09:54:42.568133 env[1381]: time="2024-02-09T09:54:42.550609900Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 09:54:42.568133 env[1381]: time="2024-02-09T09:54:42.550642620Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 09:54:42.568133 env[1381]: time="2024-02-09T09:54:42.550658980Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 09:54:42.568133 env[1381]: time="2024-02-09T09:54:42.550675540Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 09:54:42.568133 env[1381]: time="2024-02-09T09:54:42.550688980Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 09:54:42.568133 env[1381]: time="2024-02-09T09:54:42.551039940Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 09:54:42.568133 env[1381]: time="2024-02-09T09:54:42.551061140Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Feb 9 09:54:42.568133 env[1381]: time="2024-02-09T09:54:42.551074980Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 09:54:42.568133 env[1381]: time="2024-02-09T09:54:42.551086940Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 09:54:42.568133 env[1381]: time="2024-02-09T09:54:42.551100060Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 09:54:42.568133 env[1381]: time="2024-02-09T09:54:42.551231660Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 09:54:42.568133 env[1381]: time="2024-02-09T09:54:42.551330540Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 09:54:42.526894 systemd[1]: Finished extend-filesystems.service. Feb 9 09:54:42.568580 env[1381]: time="2024-02-09T09:54:42.551559620Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 09:54:42.568580 env[1381]: time="2024-02-09T09:54:42.551584260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 09:54:42.568580 env[1381]: time="2024-02-09T09:54:42.551597780Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 09:54:42.568580 env[1381]: time="2024-02-09T09:54:42.551639980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 09:54:42.568580 env[1381]: time="2024-02-09T09:54:42.551652300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 09:54:42.568580 env[1381]: time="2024-02-09T09:54:42.551663620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 09:54:42.568580 env[1381]: time="2024-02-09T09:54:42.551675420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 09:54:42.568580 env[1381]: time="2024-02-09T09:54:42.551687540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 09:54:42.568580 env[1381]: time="2024-02-09T09:54:42.551699140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 09:54:42.568580 env[1381]: time="2024-02-09T09:54:42.551711300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 09:54:42.568580 env[1381]: time="2024-02-09T09:54:42.551736780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 09:54:42.568580 env[1381]: time="2024-02-09T09:54:42.551750820Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 09:54:42.568580 env[1381]: time="2024-02-09T09:54:42.551854940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 09:54:42.568580 env[1381]: time="2024-02-09T09:54:42.551872140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 09:54:42.568580 env[1381]: time="2024-02-09T09:54:42.551885620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 9 09:54:42.558758 systemd[1]: Started containerd.service. Feb 9 09:54:42.568977 env[1381]: time="2024-02-09T09:54:42.551898300Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 09:54:42.568977 env[1381]: time="2024-02-09T09:54:42.551913140Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 09:54:42.568977 env[1381]: time="2024-02-09T09:54:42.551923860Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 09:54:42.568977 env[1381]: time="2024-02-09T09:54:42.551940300Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 09:54:42.568977 env[1381]: time="2024-02-09T09:54:42.551974060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 09:54:42.569132 env[1381]: time="2024-02-09T09:54:42.552164300Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 09:54:42.569132 env[1381]: time="2024-02-09T09:54:42.552218300Z" level=info msg="Connect containerd service" Feb 9 09:54:42.569132 env[1381]: time="2024-02-09T09:54:42.552252260Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 09:54:42.569132 env[1381]: time="2024-02-09T09:54:42.558200740Z" level=error msg="failed to load cni during 
init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:54:42.569132 env[1381]: time="2024-02-09T09:54:42.558349700Z" level=info msg="Start subscribing containerd event" Feb 9 09:54:42.569132 env[1381]: time="2024-02-09T09:54:42.558406380Z" level=info msg="Start recovering state" Feb 9 09:54:42.569132 env[1381]: time="2024-02-09T09:54:42.558499940Z" level=info msg="Start event monitor" Feb 9 09:54:42.569132 env[1381]: time="2024-02-09T09:54:42.558500340Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 09:54:42.569132 env[1381]: time="2024-02-09T09:54:42.558519540Z" level=info msg="Start snapshots syncer" Feb 9 09:54:42.569132 env[1381]: time="2024-02-09T09:54:42.558530340Z" level=info msg="Start cni network conf syncer for default" Feb 9 09:54:42.569132 env[1381]: time="2024-02-09T09:54:42.558538740Z" level=info msg="Start streaming server" Feb 9 09:54:42.569132 env[1381]: time="2024-02-09T09:54:42.558542420Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 09:54:42.569132 env[1381]: time="2024-02-09T09:54:42.558863260Z" level=info msg="containerd successfully booted in 0.104195s" Feb 9 09:54:42.589253 bash[1399]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:54:42.578580 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 09:54:42.590028 dbus-daemon[1349]: [system] SELinux support is enabled Feb 9 09:54:42.590170 systemd[1]: Started dbus.service. Feb 9 09:54:42.597407 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 09:54:42.597443 systemd[1]: Reached target system-config.target. Feb 9 09:54:42.606637 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 09:54:42.606662 systemd[1]: Reached target user-config.target. Feb 9 09:54:42.615328 dbus-daemon[1349]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 09:54:42.615655 systemd[1]: Started systemd-logind.service. Feb 9 09:54:42.657300 tar[1371]: ./bandwidth Feb 9 09:54:42.753648 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 09:54:42.782640 tar[1371]: ./ptp Feb 9 09:54:42.887890 tar[1371]: ./vlan Feb 9 09:54:42.983776 tar[1371]: ./host-device Feb 9 09:54:43.049447 update_engine[1368]: I0209 09:54:43.031687 1368 main.cc:92] Flatcar Update Engine starting Feb 9 09:54:43.063749 tar[1371]: ./tuning Feb 9 09:54:43.108280 systemd[1]: Started update-engine.service. Feb 9 09:54:43.108585 update_engine[1368]: I0209 09:54:43.108317 1368 update_check_scheduler.cc:74] Next update check in 5m8s Feb 9 09:54:43.117558 systemd[1]: Started locksmithd.service. Feb 9 09:54:43.131491 tar[1371]: ./vrf Feb 9 09:54:43.159191 tar[1373]: linux-arm64/LICENSE Feb 9 09:54:43.159300 tar[1373]: linux-arm64/README.md Feb 9 09:54:43.165428 systemd[1]: Finished prepare-helm.service. Feb 9 09:54:43.188535 tar[1371]: ./sbr Feb 9 09:54:43.241057 tar[1371]: ./tap Feb 9 09:54:43.308084 tar[1371]: ./dhcp Feb 9 09:54:43.419190 tar[1371]: ./static Feb 9 09:54:43.443777 tar[1371]: ./firewall Feb 9 09:54:43.479371 systemd[1]: Finished prepare-critools.service. 
Feb 9 09:54:43.493666 tar[1371]: ./macvlan Feb 9 09:54:43.527544 tar[1371]: ./dummy Feb 9 09:54:43.561385 tar[1371]: ./bridge Feb 9 09:54:43.597821 tar[1371]: ./ipvlan Feb 9 09:54:43.630969 tar[1371]: ./portmap Feb 9 09:54:43.662671 tar[1371]: ./host-local Feb 9 09:54:43.738995 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 09:54:44.527255 sshd_keygen[1367]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 09:54:44.544186 systemd[1]: Finished sshd-keygen.service. Feb 9 09:54:44.550924 systemd[1]: Starting issuegen.service... Feb 9 09:54:44.556435 systemd[1]: Started waagent.service. Feb 9 09:54:44.561825 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 09:54:44.561991 systemd[1]: Finished issuegen.service. Feb 9 09:54:44.568589 systemd[1]: Starting systemd-user-sessions.service... Feb 9 09:54:44.579485 locksmithd[1460]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 09:54:44.592271 systemd[1]: Finished systemd-user-sessions.service. Feb 9 09:54:44.600061 systemd[1]: Started getty@tty1.service. Feb 9 09:54:44.606847 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 09:54:44.613004 systemd[1]: Reached target getty.target. Feb 9 09:54:44.618419 systemd[1]: Reached target multi-user.target. Feb 9 09:54:44.625179 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 09:54:44.637996 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 09:54:44.638150 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 09:54:44.645156 systemd[1]: Startup finished in 779ms (kernel) + 17.849s (initrd) + 24.542s (userspace) = 43.171s. Feb 9 09:54:45.280844 login[1482]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 9 09:54:45.282309 login[1483]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 09:54:45.344896 systemd[1]: Created slice user-500.slice. Feb 9 09:54:45.345953 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 09:54:45.349670 systemd-logind[1364]: New session 2 of user core. Feb 9 09:54:45.382101 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 09:54:45.383568 systemd[1]: Starting user@500.service... Feb 9 09:54:45.415823 (systemd)[1486]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:54:45.624489 systemd[1486]: Queued start job for default target default.target. Feb 9 09:54:45.625267 systemd[1486]: Reached target paths.target. Feb 9 09:54:45.625374 systemd[1486]: Reached target sockets.target. Feb 9 09:54:45.625531 systemd[1486]: Reached target timers.target. Feb 9 09:54:45.625623 systemd[1486]: Reached target basic.target. Feb 9 09:54:45.625733 systemd[1486]: Reached target default.target. Feb 9 09:54:45.625797 systemd[1]: Started user@500.service. Feb 9 09:54:45.625991 systemd[1486]: Startup finished in 204ms. Feb 9 09:54:45.626758 systemd[1]: Started session-2.scope. Feb 9 09:54:46.281161 login[1482]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 09:54:46.284876 systemd-logind[1364]: New session 1 of user core. Feb 9 09:54:46.285293 systemd[1]: Started session-1.scope. 
Feb 9 09:54:51.058429 waagent[1480]: 2024-02-09T09:54:51.058325Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 9 09:54:51.066480 waagent[1480]: 2024-02-09T09:54:51.066371Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 9 09:54:51.071945 waagent[1480]: 2024-02-09T09:54:51.071864Z INFO Daemon Daemon Python: 3.9.16 Feb 9 09:54:51.077587 waagent[1480]: 2024-02-09T09:54:51.077479Z INFO Daemon Daemon Run daemon Feb 9 09:54:51.082515 waagent[1480]: 2024-02-09T09:54:51.082434Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 9 09:54:51.101438 waagent[1480]: 2024-02-09T09:54:51.101317Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 9 09:54:51.119203 waagent[1480]: 2024-02-09T09:54:51.119074Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 09:54:51.130744 waagent[1480]: 2024-02-09T09:54:51.130656Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 09:54:51.136788 waagent[1480]: 2024-02-09T09:54:51.136708Z INFO Daemon Daemon Using waagent for provisioning Feb 9 09:54:51.143612 waagent[1480]: 2024-02-09T09:54:51.143545Z INFO Daemon Daemon Activate resource disk Feb 9 09:54:51.149141 waagent[1480]: 2024-02-09T09:54:51.149076Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 9 09:54:51.164886 waagent[1480]: 2024-02-09T09:54:51.164809Z INFO Daemon Daemon Found device: None Feb 9 09:54:51.170399 waagent[1480]: 2024-02-09T09:54:51.170330Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 9 09:54:51.181162 waagent[1480]: 2024-02-09T09:54:51.181078Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 9 09:54:51.195070 waagent[1480]: 2024-02-09T09:54:51.194998Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 09:54:51.202048 waagent[1480]: 2024-02-09T09:54:51.201972Z INFO Daemon Daemon Running default provisioning handler Feb 9 09:54:51.216065 waagent[1480]: 2024-02-09T09:54:51.215934Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 9 09:54:51.233429 waagent[1480]: 2024-02-09T09:54:51.233298Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 09:54:51.245524 waagent[1480]: 2024-02-09T09:54:51.245418Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 09:54:51.252104 waagent[1480]: 2024-02-09T09:54:51.252016Z INFO Daemon Daemon Copying ovf-env.xml Feb 9 09:54:51.361532 waagent[1480]: 2024-02-09T09:54:51.361329Z INFO Daemon Daemon Successfully mounted dvd Feb 9 09:54:51.488752 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 9 09:54:51.539211 waagent[1480]: 2024-02-09T09:54:51.539054Z INFO Daemon Daemon Detect protocol endpoint Feb 9 09:54:51.545179 waagent[1480]: 2024-02-09T09:54:51.545101Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 09:54:51.552182 waagent[1480]: 2024-02-09T09:54:51.552107Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 9 09:54:51.560127 waagent[1480]: 2024-02-09T09:54:51.560050Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 9 09:54:51.566761 waagent[1480]: 2024-02-09T09:54:51.566689Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 9 09:54:51.573237 waagent[1480]: 2024-02-09T09:54:51.573163Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 9 09:54:51.677027 waagent[1480]: 2024-02-09T09:54:51.676951Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 9 09:54:51.685276 waagent[1480]: 2024-02-09T09:54:51.685226Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 9 09:54:51.691473 waagent[1480]: 2024-02-09T09:54:51.691400Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 9 09:54:52.593261 waagent[1480]: 2024-02-09T09:54:52.593100Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 9 09:54:52.610405 waagent[1480]: 2024-02-09T09:54:52.610329Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 9 09:54:52.621426 waagent[1480]: 2024-02-09T09:54:52.621350Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 9 09:54:52.698051 waagent[1480]: 2024-02-09T09:54:52.697914Z INFO Daemon Daemon Found private key matching thumbprint A597E6CE8DA8C3D35D5BBF8A79D0B72458C68D38 Feb 9 09:54:52.708013 waagent[1480]: 2024-02-09T09:54:52.707933Z INFO Daemon Daemon Certificate with thumbprint B8910F831702841F93B7EF734F4F2193722C3D6E has no matching private key. Feb 9 09:54:52.719060 waagent[1480]: 2024-02-09T09:54:52.718982Z INFO Daemon Daemon Fetch goal state completed Feb 9 09:54:52.751358 waagent[1480]: 2024-02-09T09:54:52.751298Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: c88cbaf1-6245-4045-89f1-1b53bb66be5a New eTag: 4622149913800798293] Feb 9 09:54:52.763687 waagent[1480]: 2024-02-09T09:54:52.763607Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 09:54:52.782359 waagent[1480]: 2024-02-09T09:54:52.782279Z INFO Daemon Daemon Starting provisioning Feb 9 09:54:52.788414 waagent[1480]: 2024-02-09T09:54:52.788334Z INFO Daemon Daemon Handle ovf-env.xml. Feb 9 09:54:52.793867 waagent[1480]: 2024-02-09T09:54:52.793800Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-f1c369a1bc] Feb 9 09:54:52.849939 waagent[1480]: 2024-02-09T09:54:52.849809Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-f1c369a1bc] Feb 9 09:54:52.857610 waagent[1480]: 2024-02-09T09:54:52.857527Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 9 09:54:52.865033 waagent[1480]: 2024-02-09T09:54:52.864963Z INFO Daemon Daemon Primary interface is [eth0] Feb 9 09:54:52.882047 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 9 09:54:52.882223 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 9 09:54:52.882281 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 9 09:54:52.882543 systemd[1]: Stopping systemd-networkd.service... Feb 9 09:54:52.887511 systemd-networkd[1244]: eth0: DHCPv6 lease lost Feb 9 09:54:52.887845 systemd-timesyncd[1324]: Network configuration changed, trying to establish connection. Feb 9 09:54:52.888899 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 09:54:52.889060 systemd[1]: Stopped systemd-networkd.service. Feb 9 09:54:52.891192 systemd[1]: Starting systemd-networkd.service... 
Feb 9 09:54:52.919384 systemd-networkd[1531]: enP35142s1: Link UP Feb 9 09:54:52.919396 systemd-networkd[1531]: enP35142s1: Gained carrier Feb 9 09:54:52.920294 systemd-networkd[1531]: eth0: Link UP Feb 9 09:54:52.920306 systemd-networkd[1531]: eth0: Gained carrier Feb 9 09:54:52.920856 systemd-networkd[1531]: lo: Link UP Feb 9 09:54:52.920866 systemd-networkd[1531]: lo: Gained carrier Feb 9 09:54:52.921101 systemd-networkd[1531]: eth0: Gained IPv6LL Feb 9 09:54:52.921369 systemd-timesyncd[1324]: Network configuration changed, trying to establish connection. Feb 9 09:54:52.921618 systemd-networkd[1531]: Enumeration completed Feb 9 09:54:52.921692 systemd[1]: Started systemd-networkd.service. Feb 9 09:54:52.922540 systemd-timesyncd[1324]: Network configuration changed, trying to establish connection. Feb 9 09:54:52.923362 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:54:52.924650 systemd-timesyncd[1324]: Network configuration changed, trying to establish connection. Feb 9 09:54:52.924965 systemd-networkd[1531]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:54:52.925121 systemd-timesyncd[1324]: Network configuration changed, trying to establish connection. Feb 9 09:54:52.930151 waagent[1480]: 2024-02-09T09:54:52.928136Z INFO Daemon Daemon Create user account if not exists Feb 9 09:54:52.935041 systemd-timesyncd[1324]: Network configuration changed, trying to establish connection. Feb 9 09:54:52.935688 waagent[1480]: 2024-02-09T09:54:52.935578Z INFO Daemon Daemon User core already exists, skip useradd Feb 9 09:54:52.942795 systemd-timesyncd[1324]: Network configuration changed, trying to establish connection. Feb 9 09:54:52.944119 waagent[1480]: 2024-02-09T09:54:52.944021Z INFO Daemon Daemon Configure sudoer Feb 9 09:54:52.950138 waagent[1480]: 2024-02-09T09:54:52.950057Z INFO Daemon Daemon Configure sshd Feb 9 09:54:52.955479 waagent[1480]: 2024-02-09T09:54:52.955399Z INFO Daemon Daemon Deploy ssh public key. Feb 9 09:54:52.967572 systemd-networkd[1531]: eth0: DHCPv4 address 10.200.20.13/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 09:54:52.968561 systemd-timesyncd[1324]: Network configuration changed, trying to establish connection. Feb 9 09:54:52.970214 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 09:54:54.221438 waagent[1480]: 2024-02-09T09:54:54.221354Z INFO Daemon Daemon Provisioning complete Feb 9 09:54:54.249703 waagent[1480]: 2024-02-09T09:54:54.249635Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 9 09:54:54.257700 waagent[1480]: 2024-02-09T09:54:54.257613Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 9 09:54:54.271216 waagent[1480]: 2024-02-09T09:54:54.271133Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 9 09:54:54.567644 waagent[1540]: 2024-02-09T09:54:54.567497Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 9 09:54:54.568691 waagent[1540]: 2024-02-09T09:54:54.568636Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:54:54.568916 waagent[1540]: 2024-02-09T09:54:54.568871Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:54:54.589662 waagent[1540]: 2024-02-09T09:54:54.589581Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
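eth0 above is matched by the stock /usr/lib/systemd/network/zz-default.network and picks up 10.200.20.13/24 over DHCP. Should a host-specific override ever be needed, it would live under the writable /etc/systemd/network; the sketch below is illustrative only (file name and addresses are assumptions), since this node keeps the default DHCP behaviour.

    # Illustrative override only.
    cat <<'EOF' > /etc/systemd/network/00-eth0.network
    [Match]
    Name=eth0

    [Network]
    DHCP=yes
    # a static variant would instead use, e.g.:
    # Address=10.200.20.13/24
    # Gateway=10.200.20.1
    EOF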
Feb 9 09:54:54.589984 waagent[1540]: 2024-02-09T09:54:54.589936Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 9 09:54:54.659396 waagent[1540]: 2024-02-09T09:54:54.659271Z INFO ExtHandler ExtHandler Found private key matching thumbprint A597E6CE8DA8C3D35D5BBF8A79D0B72458C68D38 Feb 9 09:54:54.659798 waagent[1540]: 2024-02-09T09:54:54.659746Z INFO ExtHandler ExtHandler Certificate with thumbprint B8910F831702841F93B7EF734F4F2193722C3D6E has no matching private key. Feb 9 09:54:54.660127 waagent[1540]: 2024-02-09T09:54:54.660078Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 9 09:54:54.676931 waagent[1540]: 2024-02-09T09:54:54.676875Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: fb18eeda-0f97-4810-a49b-30ca531a491d New eTag: 4622149913800798293] Feb 9 09:54:54.677744 waagent[1540]: 2024-02-09T09:54:54.677686Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 09:54:54.767253 waagent[1540]: 2024-02-09T09:54:54.767121Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 09:54:54.777430 waagent[1540]: 2024-02-09T09:54:54.777353Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1540 Feb 9 09:54:54.781273 waagent[1540]: 2024-02-09T09:54:54.781212Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 09:54:54.782726 waagent[1540]: 2024-02-09T09:54:54.782669Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 09:54:54.884129 waagent[1540]: 2024-02-09T09:54:54.884024Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 09:54:54.884681 waagent[1540]: 2024-02-09T09:54:54.884626Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 09:54:54.892559 waagent[1540]: 2024-02-09T09:54:54.892505Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 09:54:54.893193 waagent[1540]: 2024-02-09T09:54:54.893139Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 09:54:54.894496 waagent[1540]: 2024-02-09T09:54:54.894421Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 9 09:54:54.895996 waagent[1540]: 2024-02-09T09:54:54.895930Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 09:54:54.896264 waagent[1540]: 2024-02-09T09:54:54.896196Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:54:54.896850 waagent[1540]: 2024-02-09T09:54:54.896776Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:54:54.897482 waagent[1540]: 2024-02-09T09:54:54.897400Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
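The "WireServer endpoint 168.63.129.16" that both the daemon and the ExtHandler read is Azure's fixed host endpoint for goal-state traffic. A rough manual equivalent of the checks the agent logs ("Test for route", protocol detection) is sketched below; the ?comp=versions query is the version probe WALinuxAgent uses, quoted from memory rather than from this log.

    # Confirm a route to the wire server exists (the agent logs "Route ... exists").
    ip route get 168.63.129.16
    # Ask the wire server which protocol versions it speaks (plain HTTP, no TLS).
    curl -s 'http://168.63.129.16/?comp=versions'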
Feb 9 09:54:54.897834 waagent[1540]: 2024-02-09T09:54:54.897772Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 09:54:54.897834 waagent[1540]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 09:54:54.897834 waagent[1540]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 09:54:54.897834 waagent[1540]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 09:54:54.897834 waagent[1540]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:54:54.897834 waagent[1540]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:54:54.897834 waagent[1540]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:54:54.900182 waagent[1540]: 2024-02-09T09:54:54.900012Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 09:54:54.901061 waagent[1540]: 2024-02-09T09:54:54.900985Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:54:54.901251 waagent[1540]: 2024-02-09T09:54:54.901195Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:54:54.901895 waagent[1540]: 2024-02-09T09:54:54.901821Z INFO EnvHandler ExtHandler Configure routes Feb 9 09:54:54.902106 waagent[1540]: 2024-02-09T09:54:54.902034Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 09:54:54.902345 waagent[1540]: 2024-02-09T09:54:54.902278Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 09:54:54.902432 waagent[1540]: 2024-02-09T09:54:54.902378Z INFO EnvHandler ExtHandler Gateway:None Feb 9 09:54:54.902660 waagent[1540]: 2024-02-09T09:54:54.902602Z INFO EnvHandler ExtHandler Routes:None Feb 9 09:54:54.903661 waagent[1540]: 2024-02-09T09:54:54.903386Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 09:54:54.904263 waagent[1540]: 2024-02-09T09:54:54.904187Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 9 09:54:54.905209 waagent[1540]: 2024-02-09T09:54:54.905140Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 09:54:54.915783 waagent[1540]: 2024-02-09T09:54:54.915717Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 9 09:54:54.917167 waagent[1540]: 2024-02-09T09:54:54.917113Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 09:54:54.918206 waagent[1540]: 2024-02-09T09:54:54.918153Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 9 09:54:54.945002 waagent[1540]: 2024-02-09T09:54:54.944875Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1531' Feb 9 09:54:54.964324 waagent[1540]: 2024-02-09T09:54:54.964258Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
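The routing table the MonitorHandler prints is read straight from /proc/net/route, where the destination and gateway columns are little-endian hex. Decoding a few of the values above with a throwaway bash loop (byte order reversed by hand):

    for h in 0114C80A 0014C80A 10813FA8 FEA9FEA9; do
      printf '%s -> %d.%d.%d.%d\n' "$h" "0x${h:6:2}" "0x${h:4:2}" "0x${h:2:2}" "0x${h:0:2}"
    done
    # 0114C80A -> 10.200.20.1     (default gateway)
    # 0014C80A -> 10.200.20.0     (local subnet)
    # 10813FA8 -> 168.63.129.16   (wire server host route)
    # FEA9FEA9 -> 169.254.169.254 (IMDS host route)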
Feb 9 09:54:55.031905 waagent[1540]: 2024-02-09T09:54:55.031763Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 09:54:55.031905 waagent[1540]: Executing ['ip', '-a', '-o', 'link']: Feb 9 09:54:55.031905 waagent[1540]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 09:54:55.031905 waagent[1540]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:35:c7 brd ff:ff:ff:ff:ff:ff Feb 9 09:54:55.031905 waagent[1540]: 3: enP35142s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:35:c7 brd ff:ff:ff:ff:ff:ff\ altname enP35142p0s2 Feb 9 09:54:55.031905 waagent[1540]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 09:54:55.031905 waagent[1540]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 09:54:55.031905 waagent[1540]: 2: eth0 inet 10.200.20.13/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 09:54:55.031905 waagent[1540]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 09:54:55.031905 waagent[1540]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 09:54:55.031905 waagent[1540]: 2: eth0 inet6 fe80::222:48ff:fe7b:35c7/64 scope link \ valid_lft forever preferred_lft forever Feb 9 09:54:55.187897 waagent[1540]: 2024-02-09T09:54:55.187839Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 9 09:54:55.274748 waagent[1480]: 2024-02-09T09:54:55.274625Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 9 09:54:55.278373 waagent[1480]: 2024-02-09T09:54:55.278320Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 9 09:54:56.406332 waagent[1569]: 2024-02-09T09:54:56.406232Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 9 09:54:56.407382 waagent[1569]: 2024-02-09T09:54:56.407327Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 9 09:54:56.407631 waagent[1569]: 2024-02-09T09:54:56.407583Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 9 09:54:56.415838 waagent[1569]: 2024-02-09T09:54:56.415724Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 09:54:56.416384 waagent[1569]: 2024-02-09T09:54:56.416332Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:54:56.416734 waagent[1569]: 2024-02-09T09:54:56.416682Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:54:56.429893 waagent[1569]: 2024-02-09T09:54:56.429819Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 9 09:54:56.438562 waagent[1569]: 2024-02-09T09:54:56.438505Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 9 09:54:56.439683 waagent[1569]: 2024-02-09T09:54:56.439627Z INFO ExtHandler Feb 9 09:54:56.439919 waagent[1569]: 2024-02-09T09:54:56.439872Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 2ec29f22-6be6-48c2-afd8-5059c55a3890 eTag: 4622149913800798293 source: Fabric] Feb 9 09:54:56.440738 waagent[1569]: 2024-02-09T09:54:56.440683Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Feb 9 09:54:56.442014 waagent[1569]: 2024-02-09T09:54:56.441959Z INFO ExtHandler Feb 9 09:54:56.442231 waagent[1569]: 2024-02-09T09:54:56.442185Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 9 09:54:56.448454 waagent[1569]: 2024-02-09T09:54:56.448410Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 9 09:54:56.449023 waagent[1569]: 2024-02-09T09:54:56.448979Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 09:54:56.482282 waagent[1569]: 2024-02-09T09:54:56.482220Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 9 09:54:56.558951 waagent[1569]: 2024-02-09T09:54:56.558810Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B8910F831702841F93B7EF734F4F2193722C3D6E', 'hasPrivateKey': False} Feb 9 09:54:56.560161 waagent[1569]: 2024-02-09T09:54:56.560103Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A597E6CE8DA8C3D35D5BBF8A79D0B72458C68D38', 'hasPrivateKey': True} Feb 9 09:54:56.561353 waagent[1569]: 2024-02-09T09:54:56.561292Z INFO ExtHandler Fetch goal state completed Feb 9 09:54:56.588725 waagent[1569]: 2024-02-09T09:54:56.588647Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1569 Feb 9 09:54:56.592335 waagent[1569]: 2024-02-09T09:54:56.592270Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 09:54:56.593951 waagent[1569]: 2024-02-09T09:54:56.593894Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 09:54:56.599101 waagent[1569]: 2024-02-09T09:54:56.599049Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 09:54:56.599647 waagent[1569]: 2024-02-09T09:54:56.599591Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 09:54:56.607820 waagent[1569]: 2024-02-09T09:54:56.607763Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 09:54:56.608511 waagent[1569]: 2024-02-09T09:54:56.608437Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 09:54:56.614883 waagent[1569]: 2024-02-09T09:54:56.614776Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 9 09:54:56.618690 waagent[1569]: 2024-02-09T09:54:56.618631Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 9 09:54:56.620476 waagent[1569]: 2024-02-09T09:54:56.620395Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 09:54:56.621029 waagent[1569]: 2024-02-09T09:54:56.620964Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:54:56.621209 waagent[1569]: 2024-02-09T09:54:56.621155Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:54:56.621807 waagent[1569]: 2024-02-09T09:54:56.621739Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 9 09:54:56.622349 waagent[1569]: 2024-02-09T09:54:56.622281Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
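The "[Errno 30] Read-only file system" error when the agent tries to write /lib/systemd/system/waagent-network-setup.service follows from Flatcar's immutable /usr (to which /lib resolves); locally added units belong under the writable /etc/systemd/system instead. Two quick ways to confirm that on a node like this:

    findmnt -no OPTIONS /usr                   # shows "ro,..." for the /usr mount
    systemd-analyze unit-paths | grep '^/etc'  # the writable unit directories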
Feb 9 09:54:56.623069 waagent[1569]: 2024-02-09T09:54:56.622925Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 09:54:56.623214 waagent[1569]: 2024-02-09T09:54:56.623152Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 09:54:56.623424 waagent[1569]: 2024-02-09T09:54:56.623361Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 09:54:56.623424 waagent[1569]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 09:54:56.623424 waagent[1569]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 09:54:56.623424 waagent[1569]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 09:54:56.623424 waagent[1569]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:54:56.623424 waagent[1569]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:54:56.623424 waagent[1569]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:54:56.623955 waagent[1569]: 2024-02-09T09:54:56.623881Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:54:56.625900 waagent[1569]: 2024-02-09T09:54:56.625754Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:54:56.627129 waagent[1569]: 2024-02-09T09:54:56.627043Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 09:54:56.627661 waagent[1569]: 2024-02-09T09:54:56.627581Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 9 09:54:56.627813 waagent[1569]: 2024-02-09T09:54:56.627748Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 09:54:56.628666 waagent[1569]: 2024-02-09T09:54:56.628590Z INFO EnvHandler ExtHandler Configure routes Feb 9 09:54:56.631212 waagent[1569]: 2024-02-09T09:54:56.631136Z INFO EnvHandler ExtHandler Gateway:None Feb 9 09:54:56.636212 waagent[1569]: 2024-02-09T09:54:56.636140Z INFO EnvHandler ExtHandler Routes:None Feb 9 09:54:56.640371 waagent[1569]: 2024-02-09T09:54:56.640302Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 09:54:56.640371 waagent[1569]: Executing ['ip', '-a', '-o', 'link']: Feb 9 09:54:56.640371 waagent[1569]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 09:54:56.640371 waagent[1569]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:35:c7 brd ff:ff:ff:ff:ff:ff Feb 9 09:54:56.640371 waagent[1569]: 3: enP35142s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:35:c7 brd ff:ff:ff:ff:ff:ff\ altname enP35142p0s2 Feb 9 09:54:56.640371 waagent[1569]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 09:54:56.640371 waagent[1569]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 09:54:56.640371 waagent[1569]: 2: eth0 inet 10.200.20.13/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 09:54:56.640371 waagent[1569]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 09:54:56.640371 waagent[1569]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 09:54:56.640371 waagent[1569]: 2: eth0 inet6 fe80::222:48ff:fe7b:35c7/64 scope link \ valid_lft forever preferred_lft forever Feb 9 09:54:56.652033 waagent[1569]: 2024-02-09T09:54:56.651933Z INFO ExtHandler 
ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 9 09:54:56.653782 waagent[1569]: 2024-02-09T09:54:56.653704Z INFO ExtHandler ExtHandler Downloading manifest Feb 9 09:54:56.673880 waagent[1569]: 2024-02-09T09:54:56.673808Z INFO ExtHandler ExtHandler Feb 9 09:54:56.674051 waagent[1569]: 2024-02-09T09:54:56.673990Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 19abbce2-4892-42bc-9a32-a384fb3582af correlation 25d7f255-14ca-4d8b-943b-e1908319b8cb created: 2024-02-09T09:53:11.151566Z] Feb 9 09:54:56.675057 waagent[1569]: 2024-02-09T09:54:56.674977Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 9 09:54:56.676958 waagent[1569]: 2024-02-09T09:54:56.676885Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Feb 9 09:54:56.707671 waagent[1569]: 2024-02-09T09:54:56.707588Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 9 09:54:56.736417 waagent[1569]: 2024-02-09T09:54:56.736338Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: B55485BE-98A5-4FED-98E2-112E4B7436CF;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 9 09:54:56.899244 waagent[1569]: 2024-02-09T09:54:56.899107Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Feb 9 09:54:56.899244 waagent[1569]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:54:56.899244 waagent[1569]: pkts bytes target prot opt in out source destination Feb 9 09:54:56.899244 waagent[1569]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:54:56.899244 waagent[1569]: pkts bytes target prot opt in out source destination Feb 9 09:54:56.899244 waagent[1569]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:54:56.899244 waagent[1569]: pkts bytes target prot opt in out source destination Feb 9 09:54:56.899244 waagent[1569]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 09:54:56.899244 waagent[1569]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 09:54:56.899244 waagent[1569]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 09:54:56.908215 waagent[1569]: 2024-02-09T09:54:56.908086Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 9 09:54:56.908215 waagent[1569]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:54:56.908215 waagent[1569]: pkts bytes target prot opt in out source destination Feb 9 09:54:56.908215 waagent[1569]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:54:56.908215 waagent[1569]: pkts bytes target prot opt in out source destination Feb 9 09:54:56.908215 waagent[1569]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:54:56.908215 waagent[1569]: pkts bytes target prot opt in out source destination Feb 9 09:54:56.908215 waagent[1569]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 09:54:56.908215 waagent[1569]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 09:54:56.908215 waagent[1569]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 09:54:56.909138 waagent[1569]: 2024-02-09T09:54:56.909084Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 9 09:55:17.815608 systemd[1]: Created slice system-sshd.slice. 
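The listing above shows the three rules waagent programs for wire-server traffic: allow DNS to 168.63.129.16, allow anything owned by UID 0, and drop new or invalid connections from everything else. Expressed as plain iptables commands they are roughly the following (the agent manages the actual table and chain placement itself, so treat this as a semantic sketch):

    iptables -A OUTPUT -d 168.63.129.16/32 -p tcp --dport 53 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16/32 -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16/32 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP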
Feb 9 09:55:17.816742 systemd[1]: Started sshd@0-10.200.20.13:22-10.200.12.6:60754.service. Feb 9 09:55:18.415432 sshd[1620]: Accepted publickey for core from 10.200.12.6 port 60754 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:55:18.449778 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:55:18.453581 systemd-logind[1364]: New session 3 of user core. Feb 9 09:55:18.454352 systemd[1]: Started session-3.scope. Feb 9 09:55:18.763550 systemd[1]: Started sshd@1-10.200.20.13:22-10.200.12.6:60756.service. Feb 9 09:55:19.183517 sshd[1625]: Accepted publickey for core from 10.200.12.6 port 60756 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:55:19.184787 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:55:19.188550 systemd-logind[1364]: New session 4 of user core. Feb 9 09:55:19.188970 systemd[1]: Started session-4.scope. Feb 9 09:55:19.490320 sshd[1625]: pam_unix(sshd:session): session closed for user core Feb 9 09:55:19.493124 systemd-logind[1364]: Session 4 logged out. Waiting for processes to exit. Feb 9 09:55:19.493782 systemd[1]: sshd@1-10.200.20.13:22-10.200.12.6:60756.service: Deactivated successfully. Feb 9 09:55:19.494458 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 09:55:19.495100 systemd-logind[1364]: Removed session 4. Feb 9 09:55:19.556613 systemd[1]: Started sshd@2-10.200.20.13:22-10.200.12.6:60766.service. Feb 9 09:55:19.937264 sshd[1631]: Accepted publickey for core from 10.200.12.6 port 60766 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:55:19.938516 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:55:19.942209 systemd-logind[1364]: New session 5 of user core. Feb 9 09:55:19.942671 systemd[1]: Started session-5.scope. Feb 9 09:55:20.205071 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Feb 9 09:55:20.213250 sshd[1631]: pam_unix(sshd:session): session closed for user core Feb 9 09:55:20.215729 systemd[1]: sshd@2-10.200.20.13:22-10.200.12.6:60766.service: Deactivated successfully. Feb 9 09:55:20.216407 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 09:55:20.216965 systemd-logind[1364]: Session 5 logged out. Waiting for processes to exit. Feb 9 09:55:20.217753 systemd-logind[1364]: Removed session 5. Feb 9 09:55:20.283080 systemd[1]: Started sshd@3-10.200.20.13:22-10.200.12.6:60774.service. Feb 9 09:55:20.669415 sshd[1637]: Accepted publickey for core from 10.200.12.6 port 60774 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:55:20.670655 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:55:20.674451 systemd-logind[1364]: New session 6 of user core. Feb 9 09:55:20.674942 systemd[1]: Started session-6.scope. Feb 9 09:55:20.953066 sshd[1637]: pam_unix(sshd:session): session closed for user core Feb 9 09:55:20.955620 systemd[1]: sshd@3-10.200.20.13:22-10.200.12.6:60774.service: Deactivated successfully. Feb 9 09:55:20.956282 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 09:55:20.956813 systemd-logind[1364]: Session 6 logged out. Waiting for processes to exit. Feb 9 09:55:20.957622 systemd-logind[1364]: Removed session 6. Feb 9 09:55:21.016038 systemd[1]: Started sshd@4-10.200.20.13:22-10.200.12.6:60790.service. 
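Unit names of the form sshd@0-10.200.20.13:22-10.200.12.6:60754.service indicate per-connection socket activation: systemd owns the port 22 listener and spawns one sshd instance per accepted connection. The shape of that pairing is sketched in the comments below (contents abbreviated and from memory, not copied from this node); systemctl cat shows the real units.

    # sshd.socket (listener owned by systemd):
    #   [Socket]
    #   ListenStream=22
    #   Accept=yes            <- one sshd@<n>-<local>-<remote>.service per connection
    # sshd@.service (template instantiated per connection):
    #   [Service]
    #   ExecStart=-/usr/sbin/sshd -i
    #   StandardInput=socket
    systemctl cat sshd.socket sshd@.service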
Feb 9 09:55:21.390662 sshd[1643]: Accepted publickey for core from 10.200.12.6 port 60790 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:55:21.391559 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:55:21.395296 systemd-logind[1364]: New session 7 of user core. Feb 9 09:55:21.395749 systemd[1]: Started session-7.scope. Feb 9 09:55:21.873395 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 09:55:21.873631 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:55:22.763753 systemd[1]: Starting docker.service... Feb 9 09:55:22.795123 env[1661]: time="2024-02-09T09:55:22.795076820Z" level=info msg="Starting up" Feb 9 09:55:22.796635 env[1661]: time="2024-02-09T09:55:22.796604940Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:55:22.796635 env[1661]: time="2024-02-09T09:55:22.796629180Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:55:22.796745 env[1661]: time="2024-02-09T09:55:22.796653700Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:55:22.796745 env[1661]: time="2024-02-09T09:55:22.796663780Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:55:22.798288 env[1661]: time="2024-02-09T09:55:22.798261500Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:55:22.798288 env[1661]: time="2024-02-09T09:55:22.798286660Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:55:22.798361 env[1661]: time="2024-02-09T09:55:22.798302580Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:55:22.798361 env[1661]: time="2024-02-09T09:55:22.798310700Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:55:22.874051 env[1661]: time="2024-02-09T09:55:22.874006260Z" level=info msg="Loading containers: start." Feb 9 09:55:23.057491 kernel: Initializing XFRM netlink socket Feb 9 09:55:23.081198 env[1661]: time="2024-02-09T09:55:23.081156100Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 09:55:23.084276 systemd-timesyncd[1324]: Network configuration changed, trying to establish connection. Feb 9 09:55:23.189214 systemd-networkd[1531]: docker0: Link UP Feb 9 09:55:23.227147 env[1661]: time="2024-02-09T09:55:23.227116500Z" level=info msg="Loading containers: done." Feb 9 09:55:23.238023 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3506557516-merged.mount: Deactivated successfully. Feb 9 09:55:23.289310 env[1661]: time="2024-02-09T09:55:23.289272660Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 09:55:23.289685 env[1661]: time="2024-02-09T09:55:23.289667300Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 09:55:23.289872 env[1661]: time="2024-02-09T09:55:23.289857940Z" level=info msg="Daemon has completed initialization" Feb 9 09:55:23.332869 systemd[1]: Started docker.service. 
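The daemon noted above that docker0 takes the default 172.17.0.0/16 and that --bip can move it. Had that range collided with the VNet, the persistent equivalent would be a bip entry in /etc/docker/daemon.json; the address below is an arbitrary example, and this node keeps the default.

    # Illustrative only.
    mkdir -p /etc/docker
    cat <<'EOF' > /etc/docker/daemon.json
    {
      "bip": "192.168.200.1/24"
    }
    EOF
    systemctl restart docker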
Feb 9 09:55:23.342242 env[1661]: time="2024-02-09T09:55:23.342189380Z" level=info msg="API listen on /run/docker.sock" Feb 9 09:55:23.350533 systemd-timesyncd[1324]: Contacted time server 72.30.35.89:123 (0.flatcar.pool.ntp.org). Feb 9 09:55:23.350767 systemd-timesyncd[1324]: Initial clock synchronization to Fri 2024-02-09 09:55:23.345947 UTC. Feb 9 09:55:23.357303 systemd[1]: Reloading. Feb 9 09:55:23.417014 /usr/lib/systemd/system-generators/torcx-generator[1790]: time="2024-02-09T09:55:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:55:23.417381 /usr/lib/systemd/system-generators/torcx-generator[1790]: time="2024-02-09T09:55:23Z" level=info msg="torcx already run" Feb 9 09:55:23.495057 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:55:23.495076 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:55:23.511654 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:55:23.598351 systemd[1]: Started kubelet.service. Feb 9 09:55:23.669528 kubelet[1850]: E0209 09:55:23.669456 1850 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 09:55:23.671443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:55:23.671584 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:55:28.480135 update_engine[1368]: I0209 09:55:28.480088 1368 update_attempter.cc:509] Updating boot flags... Feb 9 09:55:29.385315 env[1381]: time="2024-02-09T09:55:29.385261455Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\"" Feb 9 09:55:30.283360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount584002393.mount: Deactivated successfully. 
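The kubelet failure above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml") is expected on a node that has not yet been joined to a cluster: that file is normally generated during kubeadm init/join, so the unit keeps crash-looping until then. Purely to illustrate what eventually lands there, a stripped-down KubeletConfiguration is sketched below; every value is an assumption, not something read from this node.

    # Hypothetical sketch; kubeadm writes the real file during init/join.
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10
    staticPodPath: /etc/kubernetes/manifests
    EOF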
Feb 9 09:55:32.956251 env[1381]: time="2024-02-09T09:55:32.956205403Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:32.970264 env[1381]: time="2024-02-09T09:55:32.970225774Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d19178cf7413f0942a116deaaea447983d297afb5dc7f62456c43839e7aaecfa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:32.977980 env[1381]: time="2024-02-09T09:55:32.977944023Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:32.982449 env[1381]: time="2024-02-09T09:55:32.982416366Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cfcebda74d6e665b68931d3589ee69fde81cd503ff3169888e4502af65579d98,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:32.983085 env[1381]: time="2024-02-09T09:55:32.983057421Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\" returns image reference \"sha256:d19178cf7413f0942a116deaaea447983d297afb5dc7f62456c43839e7aaecfa\"" Feb 9 09:55:32.992235 env[1381]: time="2024-02-09T09:55:32.992202755Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\"" Feb 9 09:55:33.914918 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 09:55:33.915105 systemd[1]: Stopped kubelet.service. Feb 9 09:55:33.916507 systemd[1]: Started kubelet.service. Feb 9 09:55:33.962892 kubelet[1915]: E0209 09:55:33.962851 1915 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 09:55:33.965715 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:55:33.965837 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
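The PullImage/ImageCreate entries in this stretch fetch the v1.27.10 control-plane images one at a time (kube-apiserver above; controller-manager, scheduler, proxy, pause, etcd and CoreDNS follow). For reference, the same set can be pre-pulled in one step; the kubeadm command below is standard, though treating this cluster as kubeadm-bootstrapped is an assumption.

    # Pre-pull all control-plane images for a release (assumes kubeadm is in use).
    kubeadm config images pull --kubernetes-version v1.27.10
    # Or pull a single image directly through the CRI.
    crictl pull registry.k8s.io/kube-apiserver:v1.27.10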
Feb 9 09:55:35.658913 env[1381]: time="2024-02-09T09:55:35.658862430Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:35.672779 env[1381]: time="2024-02-09T09:55:35.672737827Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6b9759f115be4c68b4a500b8c1d7bbeaf16e8e887b01eaf79c135b7b267baf95,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:35.678380 env[1381]: time="2024-02-09T09:55:35.678342867Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:35.689223 env[1381]: time="2024-02-09T09:55:35.689185075Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fa168ebca1f6dbfe86ef0a690e007531c1f53569274fc7dc2774fe228b6ce8c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:35.689701 env[1381]: time="2024-02-09T09:55:35.689676969Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\" returns image reference \"sha256:6b9759f115be4c68b4a500b8c1d7bbeaf16e8e887b01eaf79c135b7b267baf95\"" Feb 9 09:55:35.698234 env[1381]: time="2024-02-09T09:55:35.698196413Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\"" Feb 9 09:55:37.623217 env[1381]: time="2024-02-09T09:55:37.623166734Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:37.642762 env[1381]: time="2024-02-09T09:55:37.642720122Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:745369ed75bfc0dd1319e4c64383b4ef2cb163cec6630fa288ad3fb6bf6624eb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:37.651378 env[1381]: time="2024-02-09T09:55:37.651336295Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:37.660890 env[1381]: time="2024-02-09T09:55:37.660849520Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09294de61e63987f181077cbc2f5c82463878af9cd8ecc6110c54150c9ae3143,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:37.661589 env[1381]: time="2024-02-09T09:55:37.661562835Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\" returns image reference \"sha256:745369ed75bfc0dd1319e4c64383b4ef2cb163cec6630fa288ad3fb6bf6624eb\"" Feb 9 09:55:37.670349 env[1381]: time="2024-02-09T09:55:37.670322750Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 9 09:55:39.233323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4045789363.mount: Deactivated successfully. 
Feb 9 09:55:40.062800 env[1381]: time="2024-02-09T09:55:40.062737567Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:40.075186 env[1381]: time="2024-02-09T09:55:40.075146027Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:40.084010 env[1381]: time="2024-02-09T09:55:40.083941363Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:40.093243 env[1381]: time="2024-02-09T09:55:40.093201733Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:40.093585 env[1381]: time="2024-02-09T09:55:40.093555138Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef\"" Feb 9 09:55:40.102118 env[1381]: time="2024-02-09T09:55:40.102087500Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 09:55:40.875725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3918977667.mount: Deactivated successfully. Feb 9 09:55:40.919339 env[1381]: time="2024-02-09T09:55:40.919282114Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:40.934210 env[1381]: time="2024-02-09T09:55:40.934171891Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:40.941957 env[1381]: time="2024-02-09T09:55:40.941922769Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:40.955327 env[1381]: time="2024-02-09T09:55:40.955276857Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:40.955748 env[1381]: time="2024-02-09T09:55:40.955709934Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 09:55:40.964442 env[1381]: time="2024-02-09T09:55:40.964401400Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\"" Feb 9 09:55:42.196131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3925829898.mount: Deactivated successfully. Feb 9 09:55:44.164858 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 09:55:44.165039 systemd[1]: Stopped kubelet.service. Feb 9 09:55:44.166529 systemd[1]: Started kubelet.service. 
Feb 9 09:55:44.206323 kubelet[1942]: E0209 09:55:44.206271 1942 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 09:55:44.208778 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:55:44.208904 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:55:46.148328 env[1381]: time="2024-02-09T09:55:46.148256478Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:46.167690 env[1381]: time="2024-02-09T09:55:46.167650544Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:46.178131 env[1381]: time="2024-02-09T09:55:46.178090688Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:46.185100 env[1381]: time="2024-02-09T09:55:46.185063503Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:46.185967 env[1381]: time="2024-02-09T09:55:46.185942244Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737\"" Feb 9 09:55:46.194999 env[1381]: time="2024-02-09T09:55:46.194962842Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 9 09:55:47.022084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount660474287.mount: Deactivated successfully. Feb 9 09:55:47.625461 env[1381]: time="2024-02-09T09:55:47.625405602Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:47.646855 env[1381]: time="2024-02-09T09:55:47.646808624Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:47.660491 env[1381]: time="2024-02-09T09:55:47.660436011Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:47.692775 env[1381]: time="2024-02-09T09:55:47.692725872Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:47.693186 env[1381]: time="2024-02-09T09:55:47.693157725Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Feb 9 09:55:53.444340 systemd[1]: Stopped kubelet.service. Feb 9 09:55:53.459631 systemd[1]: Reloading. 
Feb 9 09:55:53.551176 /usr/lib/systemd/system-generators/torcx-generator[2034]: time="2024-02-09T09:55:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:55:53.551555 /usr/lib/systemd/system-generators/torcx-generator[2034]: time="2024-02-09T09:55:53Z" level=info msg="torcx already run" Feb 9 09:55:53.616901 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:55:53.616920 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:55:53.633792 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:55:53.741824 systemd[1]: Started kubelet.service. Feb 9 09:55:53.785205 kubelet[2093]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:55:53.785547 kubelet[2093]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 09:55:53.785606 kubelet[2093]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:55:53.785734 kubelet[2093]: I0209 09:55:53.785698 2093 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:55:54.730490 kubelet[2093]: I0209 09:55:54.730436 2093 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 9 09:55:54.730490 kubelet[2093]: I0209 09:55:54.730470 2093 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:55:54.730700 kubelet[2093]: I0209 09:55:54.730680 2093 server.go:837] "Client rotation is on, will bootstrap in background" Feb 9 09:55:54.737328 kubelet[2093]: I0209 09:55:54.737304 2093 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:55:54.737732 kubelet[2093]: E0209 09:55:54.737704 2093 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.13:6443: connect: connection refused Feb 9 09:55:54.738810 kubelet[2093]: W0209 09:55:54.738794 2093 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:55:54.739423 kubelet[2093]: I0209 09:55:54.739404 2093 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 09:55:54.739657 kubelet[2093]: I0209 09:55:54.739643 2093 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:55:54.739736 kubelet[2093]: I0209 09:55:54.739721 2093 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:55:54.739816 kubelet[2093]: I0209 09:55:54.739739 2093 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:55:54.739816 kubelet[2093]: I0209 09:55:54.739750 2093 container_manager_linux.go:302] "Creating device plugin manager" Feb 9 09:55:54.739865 kubelet[2093]: I0209 09:55:54.739840 2093 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:55:54.742267 kubelet[2093]: I0209 09:55:54.742241 2093 kubelet.go:405] "Attempting to sync node with API server" Feb 9 09:55:54.742267 kubelet[2093]: I0209 09:55:54.742267 2093 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:55:54.742382 kubelet[2093]: I0209 09:55:54.742289 2093 kubelet.go:309] "Adding apiserver pod source" Feb 9 09:55:54.742382 kubelet[2093]: I0209 09:55:54.742301 2093 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:55:54.743001 kubelet[2093]: W0209 09:55:54.742954 2093 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-f1c369a1bc&limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Feb 9 09:55:54.743085 kubelet[2093]: E0209 09:55:54.743010 2093 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-f1c369a1bc&limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Feb 9 09:55:54.743085 kubelet[2093]: W0209 09:55:54.743057 2093 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Feb 9 09:55:54.743085 kubelet[2093]: E0209 09:55:54.743079 2093 
reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Feb 9 09:55:54.743171 kubelet[2093]: I0209 09:55:54.743152 2093 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:55:54.743394 kubelet[2093]: W0209 09:55:54.743371 2093 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 09:55:54.743804 kubelet[2093]: I0209 09:55:54.743774 2093 server.go:1168] "Started kubelet" Feb 9 09:55:54.747686 kubelet[2093]: E0209 09:55:54.747669 2093 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:55:54.747797 kubelet[2093]: E0209 09:55:54.747786 2093 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:55:54.748970 kubelet[2093]: I0209 09:55:54.748956 2093 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:55:54.749602 kubelet[2093]: I0209 09:55:54.749587 2093 server.go:461] "Adding debug handlers to kubelet server" Feb 9 09:55:54.750930 kubelet[2093]: I0209 09:55:54.750915 2093 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 09:55:54.751486 kubelet[2093]: E0209 09:55:54.751371 2093 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-f1c369a1bc.17b2293df13adf2b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-f1c369a1bc", UID:"ci-3510.3.2-a-f1c369a1bc", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-f1c369a1bc"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 55, 54, 743750443, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 55, 54, 743750443, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.20.13:6443/api/v1/namespaces/default/events": dial tcp 10.200.20.13:6443: connect: connection refused'(may retry after sleeping) Feb 9 09:55:54.752886 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
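The locksmithd.service and docker.socket messages above are the directly actionable warnings in this stretch of the log: systemd now expects CPUWeight= in place of CPUShares=, MemoryMax= in place of MemoryLimit=, and a socket path under /run rather than the legacy /var/run. A minimal sketch of the unit overrides follows; the drop-in file names and the resource numbers are illustrative assumptions, since the original values never appear in the log.

  # /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf  (hypothetical drop-in)
  [Service]
  # CPUWeight= (range 1..10000, default 100) replaces the deprecated CPUShares=.
  CPUWeight=100
  # MemoryMax= accepts the same size suffixes as the deprecated MemoryLimit=.
  MemoryMax=128M

  # /etc/systemd/system/docker.socket.d/10-run-path.conf  (hypothetical drop-in)
  [Socket]
  # An empty assignment clears the inherited list before adding the /run path.
  ListenStream=
  ListenStream=/run/docker.sock

A systemctl daemon-reload would be needed for such overrides to take effect; until the unit files are updated, systemd keeps rewriting the legacy /var/run path at load time, exactly as logged.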
Feb 9 09:55:54.753021 kubelet[2093]: I0209 09:55:54.753003 2093 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:55:54.753529 kubelet[2093]: I0209 09:55:54.753510 2093 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 09:55:54.754802 kubelet[2093]: I0209 09:55:54.754782 2093 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 09:55:54.756316 kubelet[2093]: W0209 09:55:54.756283 2093 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Feb 9 09:55:54.756424 kubelet[2093]: E0209 09:55:54.756414 2093 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Feb 9 09:55:54.757211 kubelet[2093]: E0209 09:55:54.757178 2093 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-f1c369a1bc\" not found" Feb 9 09:55:54.757563 kubelet[2093]: E0209 09:55:54.757547 2093 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-f1c369a1bc?timeout=10s\": dial tcp 10.200.20.13:6443: connect: connection refused" interval="200ms" Feb 9 09:55:54.809222 kubelet[2093]: I0209 09:55:54.809177 2093 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:55:54.810602 kubelet[2093]: I0209 09:55:54.810579 2093 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 09:55:54.810602 kubelet[2093]: I0209 09:55:54.810601 2093 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 09:55:54.810723 kubelet[2093]: I0209 09:55:54.810619 2093 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 09:55:54.810723 kubelet[2093]: E0209 09:55:54.810662 2093 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 09:55:54.811779 kubelet[2093]: W0209 09:55:54.811724 2093 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Feb 9 09:55:54.811779 kubelet[2093]: E0209 09:55:54.811775 2093 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Feb 9 09:55:54.911178 kubelet[2093]: E0209 09:55:54.911144 2093 kubelet.go:2281] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 9 09:55:54.958830 kubelet[2093]: E0209 09:55:54.958806 2093 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-f1c369a1bc?timeout=10s\": dial tcp 10.200.20.13:6443: connect: connection refused" interval="400ms" Feb 9 09:55:55.066304 kubelet[2093]: I0209 09:55:55.065169 2093 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-f1c369a1bc" Feb 9 09:55:55.066304 kubelet[2093]: E0209 09:55:55.065516 2093 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.13:6443/api/v1/nodes\": dial tcp 10.200.20.13:6443: connect: connection refused" node="ci-3510.3.2-a-f1c369a1bc" Feb 9 09:55:55.066444 kubelet[2093]: I0209 09:55:55.066413 2093 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:55:55.066444 kubelet[2093]: I0209 09:55:55.066434 2093 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:55:55.066529 kubelet[2093]: I0209 09:55:55.066453 2093 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:55:55.077726 kubelet[2093]: I0209 09:55:55.077690 2093 policy_none.go:49] "None policy: Start" Feb 9 09:55:55.078449 kubelet[2093]: I0209 09:55:55.078428 2093 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:55:55.078541 kubelet[2093]: I0209 09:55:55.078456 2093 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:55:55.092315 systemd[1]: Created slice kubepods.slice. Feb 9 09:55:55.096554 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 09:55:55.099276 systemd[1]: Created slice kubepods-besteffort.slice. 
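The kubelet flag warnings near the start of this boot (--container-runtime-endpoint, --volume-plugin-dir) point at the file passed via --config; --pod-infra-container-image, by contrast, stays a flag because the image garbage collector now obtains the sandbox image from the CRI, as the warning itself says. A sketch of the matching KubeletConfiguration fragment follows. The volume plugin directory, cgroup driver, static pod path and eviction thresholds are taken from values printed above in this log; the containerd socket path is an assumption.

  # Fragment of the file referenced by the kubelet's --config flag (kubelet v1.27+)
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock   # assumed; not shown in the log
  volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
  cgroupDriver: systemd
  staticPodPath: /etc/kubernetes/manifests
  evictionHard:                      # mirrors the HardEvictionThresholds in the node config dump above
    memory.available: "100Mi"
    nodefs.available: "10%"
    nodefs.inodesFree: "5%"
    imagefs.available: "15%"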
Feb 9 09:55:55.108069 kubelet[2093]: I0209 09:55:55.107858 2093 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:55:55.108619 kubelet[2093]: I0209 09:55:55.108583 2093 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:55:55.109282 kubelet[2093]: E0209 09:55:55.109267 2093 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-f1c369a1bc\" not found" Feb 9 09:55:55.111890 kubelet[2093]: I0209 09:55:55.111872 2093 topology_manager.go:212] "Topology Admit Handler" Feb 9 09:55:55.113353 kubelet[2093]: I0209 09:55:55.113332 2093 topology_manager.go:212] "Topology Admit Handler" Feb 9 09:55:55.114456 kubelet[2093]: I0209 09:55:55.114431 2093 topology_manager.go:212] "Topology Admit Handler" Feb 9 09:55:55.119860 systemd[1]: Created slice kubepods-burstable-pod144bdbe13ac066c2eca73e0e6b58274b.slice. Feb 9 09:55:55.129802 systemd[1]: Created slice kubepods-burstable-podf5f6f1c9e1df9aae683b42a6dfbbed75.slice. Feb 9 09:55:55.132798 systemd[1]: Created slice kubepods-burstable-pod33e1ed44fac717261fbc0c4ba188bcfc.slice. Feb 9 09:55:55.157776 kubelet[2093]: I0209 09:55:55.157747 2093 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5f6f1c9e1df9aae683b42a6dfbbed75-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-f1c369a1bc\" (UID: \"f5f6f1c9e1df9aae683b42a6dfbbed75\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f1c369a1bc" Feb 9 09:55:55.157964 kubelet[2093]: I0209 09:55:55.157949 2093 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5f6f1c9e1df9aae683b42a6dfbbed75-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-f1c369a1bc\" (UID: \"f5f6f1c9e1df9aae683b42a6dfbbed75\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f1c369a1bc" Feb 9 09:55:55.158040 kubelet[2093]: I0209 09:55:55.158031 2093 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33e1ed44fac717261fbc0c4ba188bcfc-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-f1c369a1bc\" (UID: \"33e1ed44fac717261fbc0c4ba188bcfc\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-f1c369a1bc" Feb 9 09:55:55.158113 kubelet[2093]: I0209 09:55:55.158104 2093 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/144bdbe13ac066c2eca73e0e6b58274b-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-f1c369a1bc\" (UID: \"144bdbe13ac066c2eca73e0e6b58274b\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-f1c369a1bc" Feb 9 09:55:55.158194 kubelet[2093]: I0209 09:55:55.158185 2093 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5f6f1c9e1df9aae683b42a6dfbbed75-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-f1c369a1bc\" (UID: \"f5f6f1c9e1df9aae683b42a6dfbbed75\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f1c369a1bc" Feb 9 09:55:55.158263 kubelet[2093]: I0209 09:55:55.158255 2093 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/f5f6f1c9e1df9aae683b42a6dfbbed75-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-f1c369a1bc\" (UID: \"f5f6f1c9e1df9aae683b42a6dfbbed75\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f1c369a1bc" Feb 9 09:55:55.158340 kubelet[2093]: I0209 09:55:55.158332 2093 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5f6f1c9e1df9aae683b42a6dfbbed75-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-f1c369a1bc\" (UID: \"f5f6f1c9e1df9aae683b42a6dfbbed75\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f1c369a1bc" Feb 9 09:55:55.158406 kubelet[2093]: I0209 09:55:55.158398 2093 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/144bdbe13ac066c2eca73e0e6b58274b-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-f1c369a1bc\" (UID: \"144bdbe13ac066c2eca73e0e6b58274b\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-f1c369a1bc" Feb 9 09:55:55.158493 kubelet[2093]: I0209 09:55:55.158484 2093 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/144bdbe13ac066c2eca73e0e6b58274b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-f1c369a1bc\" (UID: \"144bdbe13ac066c2eca73e0e6b58274b\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-f1c369a1bc" Feb 9 09:55:55.267545 kubelet[2093]: I0209 09:55:55.267520 2093 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-f1c369a1bc" Feb 9 09:55:55.267991 kubelet[2093]: E0209 09:55:55.267975 2093 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.13:6443/api/v1/nodes\": dial tcp 10.200.20.13:6443: connect: connection refused" node="ci-3510.3.2-a-f1c369a1bc" Feb 9 09:55:55.360586 kubelet[2093]: E0209 09:55:55.359720 2093 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-f1c369a1bc?timeout=10s\": dial tcp 10.200.20.13:6443: connect: connection refused" interval="800ms" Feb 9 09:55:55.429594 env[1381]: time="2024-02-09T09:55:55.429553829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-f1c369a1bc,Uid:144bdbe13ac066c2eca73e0e6b58274b,Namespace:kube-system,Attempt:0,}" Feb 9 09:55:55.434624 env[1381]: time="2024-02-09T09:55:55.434589561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-f1c369a1bc,Uid:f5f6f1c9e1df9aae683b42a6dfbbed75,Namespace:kube-system,Attempt:0,}" Feb 9 09:55:55.435182 env[1381]: time="2024-02-09T09:55:55.435141621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-f1c369a1bc,Uid:33e1ed44fac717261fbc0c4ba188bcfc,Namespace:kube-system,Attempt:0,}" Feb 9 09:55:55.560288 kubelet[2093]: W0209 09:55:55.560231 2093 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-f1c369a1bc&limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Feb 9 09:55:55.560288 kubelet[2093]: E0209 09:55:55.560292 2093 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch 
*v1.Node: failed to list *v1.Node: Get "https://10.200.20.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-f1c369a1bc&limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Feb 9 09:55:55.670294 kubelet[2093]: I0209 09:55:55.670268 2093 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-f1c369a1bc" Feb 9 09:55:55.670623 kubelet[2093]: E0209 09:55:55.670601 2093 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.13:6443/api/v1/nodes\": dial tcp 10.200.20.13:6443: connect: connection refused" node="ci-3510.3.2-a-f1c369a1bc" Feb 9 09:55:55.734396 kubelet[2093]: W0209 09:55:55.734339 2093 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Feb 9 09:55:55.734594 kubelet[2093]: E0209 09:55:55.734583 2093 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Feb 9 09:55:55.743782 kubelet[2093]: W0209 09:55:55.743739 2093 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Feb 9 09:55:55.743921 kubelet[2093]: E0209 09:55:55.743911 2093 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Feb 9 09:55:56.162123 kubelet[2093]: E0209 09:55:56.161695 2093 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-f1c369a1bc?timeout=10s\": dial tcp 10.200.20.13:6443: connect: connection refused" interval="1.6s" Feb 9 09:55:56.162123 kubelet[2093]: W0209 09:55:56.161684 2093 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Feb 9 09:55:56.162123 kubelet[2093]: E0209 09:55:56.161738 2093 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Feb 9 09:55:56.225964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount276916777.mount: Deactivated successfully. 
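The three RunPodSandbox calls above are for the control-plane static pods: with the API server still refusing connections, the kubelet creates kube-apiserver, kube-controller-manager and kube-scheduler from manifests found under the static pod path /etc/kubernetes/manifests it logged at startup, and only mirrors them to the API once it is reachable. As a rough illustration of what one of those files looks like, here is a minimal sketch in the shape of the kube-scheduler pod; the file name, image tag and host paths are kubeadm-style assumptions, and only the pod name, namespace and the single kubeconfig hostPath volume are visible in the log.

  # /etc/kubernetes/manifests/kube-scheduler.yaml  (illustrative sketch; paths and image are assumptions)
  apiVersion: v1
  kind: Pod
  metadata:
    name: kube-scheduler
    namespace: kube-system
  spec:
    hostNetwork: true
    containers:
    - name: kube-scheduler
      image: registry.k8s.io/kube-scheduler:v1.27.2   # kubelet reports v1.27.2; the image tag itself is assumed
      command:
      - kube-scheduler
      - --kubeconfig=/etc/kubernetes/scheduler.conf
      volumeMounts:
      - name: kubeconfig
        mountPath: /etc/kubernetes/scheduler.conf
        readOnly: true
    volumes:
    - name: kubeconfig                # matches the kubeconfig volume in the reconciler entries above
      hostPath:
        path: /etc/kubernetes/scheduler.conf
        type: FileOrCreate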
Feb 9 09:55:56.284845 env[1381]: time="2024-02-09T09:55:56.284801653Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:56.294926 env[1381]: time="2024-02-09T09:55:56.294888820Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:56.318258 env[1381]: time="2024-02-09T09:55:56.318210484Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:56.325258 env[1381]: time="2024-02-09T09:55:56.325224319Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:56.347677 env[1381]: time="2024-02-09T09:55:56.347629055Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:56.356262 env[1381]: time="2024-02-09T09:55:56.356228314Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:56.364705 env[1381]: time="2024-02-09T09:55:56.364578622Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:56.369449 env[1381]: time="2024-02-09T09:55:56.369404733Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:56.378441 env[1381]: time="2024-02-09T09:55:56.378409218Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:56.389832 env[1381]: time="2024-02-09T09:55:56.389788700Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:56.394682 env[1381]: time="2024-02-09T09:55:56.394650250Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:56.420281 env[1381]: time="2024-02-09T09:55:56.420232714Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:56.472631 kubelet[2093]: I0209 09:55:56.472600 2093 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-f1c369a1bc" Feb 9 09:55:56.472941 kubelet[2093]: E0209 09:55:56.472917 2093 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.13:6443/api/v1/nodes\": dial tcp 10.200.20.13:6443: connect: connection refused" node="ci-3510.3.2-a-f1c369a1bc" 
Feb 9 09:55:56.509817 env[1381]: time="2024-02-09T09:55:56.509669865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:55:56.510264 env[1381]: time="2024-02-09T09:55:56.509730623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:55:56.510409 env[1381]: time="2024-02-09T09:55:56.510376120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:55:56.510682 env[1381]: time="2024-02-09T09:55:56.510642071Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/92bd452938bc5d4e939585fff7c2c7887da87fc4f3b921396d1c25c15607b869 pid=2134 runtime=io.containerd.runc.v2 Feb 9 09:55:56.527687 systemd[1]: Started cri-containerd-92bd452938bc5d4e939585fff7c2c7887da87fc4f3b921396d1c25c15607b869.scope. Feb 9 09:55:56.562068 env[1381]: time="2024-02-09T09:55:56.562009554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-f1c369a1bc,Uid:144bdbe13ac066c2eca73e0e6b58274b,Namespace:kube-system,Attempt:0,} returns sandbox id \"92bd452938bc5d4e939585fff7c2c7887da87fc4f3b921396d1c25c15607b869\"" Feb 9 09:55:56.565935 env[1381]: time="2024-02-09T09:55:56.565887138Z" level=info msg="CreateContainer within sandbox \"92bd452938bc5d4e939585fff7c2c7887da87fc4f3b921396d1c25c15607b869\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 09:55:56.569889 env[1381]: time="2024-02-09T09:55:56.569697685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:55:56.569889 env[1381]: time="2024-02-09T09:55:56.569740163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:55:56.569889 env[1381]: time="2024-02-09T09:55:56.569750083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:55:56.570070 env[1381]: time="2024-02-09T09:55:56.569918637Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd6965ce0106a66c2665d674a3bb2bb6cee8cf2ef2dcd6444155fcb11711724f pid=2174 runtime=io.containerd.runc.v2 Feb 9 09:55:56.573791 env[1381]: time="2024-02-09T09:55:56.573724824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:55:56.573893 env[1381]: time="2024-02-09T09:55:56.573807101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:55:56.573893 env[1381]: time="2024-02-09T09:55:56.573833020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:55:56.574163 env[1381]: time="2024-02-09T09:55:56.574113210Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4f29aa8e9e39697b9d7bd2a5b07417a0225cfb86f9ffa9c9518210e62eee8d9 pid=2191 runtime=io.containerd.runc.v2 Feb 9 09:55:56.588042 systemd[1]: Started cri-containerd-fd6965ce0106a66c2665d674a3bb2bb6cee8cf2ef2dcd6444155fcb11711724f.scope. 
Feb 9 09:55:56.601239 systemd[1]: Started cri-containerd-f4f29aa8e9e39697b9d7bd2a5b07417a0225cfb86f9ffa9c9518210e62eee8d9.scope. Feb 9 09:55:56.633208 env[1381]: time="2024-02-09T09:55:56.633162544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-f1c369a1bc,Uid:33e1ed44fac717261fbc0c4ba188bcfc,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd6965ce0106a66c2665d674a3bb2bb6cee8cf2ef2dcd6444155fcb11711724f\"" Feb 9 09:55:56.635883 env[1381]: time="2024-02-09T09:55:56.635842450Z" level=info msg="CreateContainer within sandbox \"fd6965ce0106a66c2665d674a3bb2bb6cee8cf2ef2dcd6444155fcb11711724f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 09:55:56.651138 env[1381]: time="2024-02-09T09:55:56.651086077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-f1c369a1bc,Uid:f5f6f1c9e1df9aae683b42a6dfbbed75,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4f29aa8e9e39697b9d7bd2a5b07417a0225cfb86f9ffa9c9518210e62eee8d9\"" Feb 9 09:55:56.654014 env[1381]: time="2024-02-09T09:55:56.653979016Z" level=info msg="CreateContainer within sandbox \"f4f29aa8e9e39697b9d7bd2a5b07417a0225cfb86f9ffa9c9518210e62eee8d9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 09:55:56.674335 env[1381]: time="2024-02-09T09:55:56.673537892Z" level=info msg="CreateContainer within sandbox \"92bd452938bc5d4e939585fff7c2c7887da87fc4f3b921396d1c25c15607b869\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f0764d678460f16f63e366beaeb3f3bbd0b2e66e71394d5ba3b4930180f2c2c9\"" Feb 9 09:55:56.674777 env[1381]: time="2024-02-09T09:55:56.674748809Z" level=info msg="StartContainer for \"f0764d678460f16f63e366beaeb3f3bbd0b2e66e71394d5ba3b4930180f2c2c9\"" Feb 9 09:55:56.691262 systemd[1]: Started cri-containerd-f0764d678460f16f63e366beaeb3f3bbd0b2e66e71394d5ba3b4930180f2c2c9.scope. 
Feb 9 09:55:56.741682 env[1381]: time="2024-02-09T09:55:56.741635469Z" level=info msg="StartContainer for \"f0764d678460f16f63e366beaeb3f3bbd0b2e66e71394d5ba3b4930180f2c2c9\" returns successfully" Feb 9 09:55:56.752996 env[1381]: time="2024-02-09T09:55:56.752941353Z" level=info msg="CreateContainer within sandbox \"fd6965ce0106a66c2665d674a3bb2bb6cee8cf2ef2dcd6444155fcb11711724f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3a4f6ff7332863fd63c7a9f74c1f9cd21f01e6c3a98788928c7d1b9d6b424db5\"" Feb 9 09:55:56.753535 env[1381]: time="2024-02-09T09:55:56.753505494Z" level=info msg="StartContainer for \"3a4f6ff7332863fd63c7a9f74c1f9cd21f01e6c3a98788928c7d1b9d6b424db5\"" Feb 9 09:55:56.763296 env[1381]: time="2024-02-09T09:55:56.763253432Z" level=info msg="CreateContainer within sandbox \"f4f29aa8e9e39697b9d7bd2a5b07417a0225cfb86f9ffa9c9518210e62eee8d9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"996a1f63cfbaea530a5f4c15f74c4debd52b8281b21295cae817872d1c574f7d\"" Feb 9 09:55:56.764846 env[1381]: time="2024-02-09T09:55:56.764817698Z" level=info msg="StartContainer for \"996a1f63cfbaea530a5f4c15f74c4debd52b8281b21295cae817872d1c574f7d\"" Feb 9 09:55:56.770666 kubelet[2093]: E0209 09:55:56.770625 2093 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.13:6443: connect: connection refused Feb 9 09:55:56.773855 systemd[1]: Started cri-containerd-3a4f6ff7332863fd63c7a9f74c1f9cd21f01e6c3a98788928c7d1b9d6b424db5.scope. Feb 9 09:55:56.823728 systemd[1]: Started cri-containerd-996a1f63cfbaea530a5f4c15f74c4debd52b8281b21295cae817872d1c574f7d.scope. 
Feb 9 09:55:56.827367 env[1381]: time="2024-02-09T09:55:56.827318871Z" level=info msg="StartContainer for \"3a4f6ff7332863fd63c7a9f74c1f9cd21f01e6c3a98788928c7d1b9d6b424db5\" returns successfully" Feb 9 09:55:56.886022 env[1381]: time="2024-02-09T09:55:56.885972499Z" level=info msg="StartContainer for \"996a1f63cfbaea530a5f4c15f74c4debd52b8281b21295cae817872d1c574f7d\" returns successfully" Feb 9 09:55:58.074848 kubelet[2093]: I0209 09:55:58.074822 2093 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-f1c369a1bc" Feb 9 09:55:59.435512 kubelet[2093]: E0209 09:55:59.435483 2093 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-f1c369a1bc\" not found" node="ci-3510.3.2-a-f1c369a1bc" Feb 9 09:55:59.489650 kubelet[2093]: I0209 09:55:59.489614 2093 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-f1c369a1bc" Feb 9 09:55:59.615185 kubelet[2093]: E0209 09:55:59.615073 2093 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-f1c369a1bc.17b2293df13adf2b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-f1c369a1bc", UID:"ci-3510.3.2-a-f1c369a1bc", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-f1c369a1bc"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 55, 54, 743750443, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 55, 54, 743750443, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 09:55:59.745678 kubelet[2093]: I0209 09:55:59.745586 2093 apiserver.go:52] "Watching apiserver" Feb 9 09:55:59.755942 kubelet[2093]: I0209 09:55:59.755914 2093 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 9 09:55:59.763205 kubelet[2093]: E0209 09:55:59.763092 2093 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-f1c369a1bc.17b2293df1784aeb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-f1c369a1bc", UID:"ci-3510.3.2-a-f1c369a1bc", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-f1c369a1bc"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 55, 54, 747775723, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 55, 54, 747775723, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 09:55:59.796632 kubelet[2093]: I0209 09:55:59.796598 2093 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:56:02.328640 systemd[1]: Reloading. Feb 9 09:56:02.387709 /usr/lib/systemd/system-generators/torcx-generator[2383]: time="2024-02-09T09:56:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:56:02.388050 /usr/lib/systemd/system-generators/torcx-generator[2383]: time="2024-02-09T09:56:02Z" level=info msg="torcx already run" Feb 9 09:56:02.476620 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:56:02.476752 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:56:02.493911 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:56:02.611450 systemd[1]: Stopping kubelet.service... Feb 9 09:56:02.627948 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 09:56:02.628286 systemd[1]: Stopped kubelet.service. Feb 9 09:56:02.628399 systemd[1]: kubelet.service: Consumed 1.292s CPU time. Feb 9 09:56:02.630726 systemd[1]: Started kubelet.service. Feb 9 09:56:02.683384 kubelet[2442]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 09:56:02.683384 kubelet[2442]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 09:56:02.683384 kubelet[2442]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:56:02.683732 kubelet[2442]: I0209 09:56:02.683439 2442 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:56:02.688347 kubelet[2442]: I0209 09:56:02.688320 2442 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 9 09:56:02.688347 kubelet[2442]: I0209 09:56:02.688343 2442 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:56:02.688556 kubelet[2442]: I0209 09:56:02.688537 2442 server.go:837] "Client rotation is on, will bootstrap in background" Feb 9 09:56:02.689999 kubelet[2442]: I0209 09:56:02.689981 2442 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 09:56:02.690895 kubelet[2442]: I0209 09:56:02.690880 2442 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:56:02.693181 kubelet[2442]: W0209 09:56:02.693161 2442 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:56:02.693828 kubelet[2442]: I0209 09:56:02.693810 2442 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 09:56:02.694038 kubelet[2442]: I0209 09:56:02.694023 2442 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:56:02.694105 kubelet[2442]: I0209 09:56:02.694088 2442 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:56:02.694174 kubelet[2442]: I0209 09:56:02.694108 2442 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:56:02.694174 kubelet[2442]: I0209 09:56:02.694122 2442 container_manager_linux.go:302] "Creating device plugin manager" Feb 9 
09:56:02.694174 kubelet[2442]: I0209 09:56:02.694150 2442 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:56:02.697228 kubelet[2442]: I0209 09:56:02.697214 2442 kubelet.go:405] "Attempting to sync node with API server" Feb 9 09:56:02.707132 kubelet[2442]: I0209 09:56:02.699570 2442 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:56:02.707132 kubelet[2442]: I0209 09:56:02.699617 2442 kubelet.go:309] "Adding apiserver pod source" Feb 9 09:56:02.707132 kubelet[2442]: I0209 09:56:02.699631 2442 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:56:02.707132 kubelet[2442]: I0209 09:56:02.701243 2442 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:56:02.707132 kubelet[2442]: I0209 09:56:02.701656 2442 server.go:1168] "Started kubelet" Feb 9 09:56:02.707132 kubelet[2442]: I0209 09:56:02.705810 2442 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:56:02.707769 kubelet[2442]: E0209 09:56:02.707743 2442 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:56:02.707769 kubelet[2442]: E0209 09:56:02.707771 2442 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:56:02.724630 kubelet[2442]: I0209 09:56:02.724598 2442 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:56:02.725227 kubelet[2442]: I0209 09:56:02.725198 2442 server.go:461] "Adding debug handlers to kubelet server" Feb 9 09:56:02.726338 kubelet[2442]: I0209 09:56:02.726309 2442 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 09:56:02.727757 kubelet[2442]: I0209 09:56:02.727734 2442 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 09:56:02.733711 kubelet[2442]: I0209 09:56:02.733690 2442 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 09:56:02.741755 kubelet[2442]: I0209 09:56:02.741724 2442 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:56:02.742566 kubelet[2442]: I0209 09:56:02.742543 2442 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 09:56:02.742611 kubelet[2442]: I0209 09:56:02.742583 2442 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 09:56:02.742611 kubelet[2442]: I0209 09:56:02.742602 2442 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 09:56:02.742655 kubelet[2442]: E0209 09:56:02.742645 2442 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 09:56:02.803159 kubelet[2442]: I0209 09:56:02.803125 2442 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:56:02.803159 kubelet[2442]: I0209 09:56:02.803153 2442 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:56:02.803308 kubelet[2442]: I0209 09:56:02.803171 2442 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:56:02.803333 kubelet[2442]: I0209 09:56:02.803321 2442 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 09:56:02.803363 kubelet[2442]: I0209 09:56:02.803336 2442 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 09:56:02.803363 kubelet[2442]: I0209 09:56:02.803343 2442 policy_none.go:49] "None policy: Start" Feb 9 09:56:02.804112 kubelet[2442]: I0209 09:56:02.804076 2442 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:56:02.804171 kubelet[2442]: I0209 09:56:02.804129 2442 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:56:02.804271 kubelet[2442]: I0209 09:56:02.804251 2442 state_mem.go:75] "Updated machine memory state" Feb 9 09:56:02.812902 kubelet[2442]: I0209 09:56:02.812886 2442 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:56:02.814611 kubelet[2442]: I0209 09:56:02.814597 2442 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:56:02.830705 kubelet[2442]: I0209 09:56:02.830669 2442 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-f1c369a1bc" Feb 9 09:56:02.844196 kubelet[2442]: I0209 09:56:02.844161 2442 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-f1c369a1bc" Feb 9 09:56:02.844315 kubelet[2442]: I0209 09:56:02.844256 2442 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-f1c369a1bc" Feb 9 09:56:02.845435 kubelet[2442]: I0209 09:56:02.844463 2442 topology_manager.go:212] "Topology Admit Handler" Feb 9 09:56:02.845435 kubelet[2442]: I0209 09:56:02.844581 2442 topology_manager.go:212] "Topology Admit Handler" Feb 9 09:56:02.846557 kubelet[2442]: I0209 09:56:02.846542 2442 topology_manager.go:212] "Topology Admit Handler" Feb 9 09:56:02.851331 kubelet[2442]: W0209 09:56:02.851316 2442 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 09:56:02.856687 kubelet[2442]: W0209 09:56:02.856657 2442 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 09:56:02.856762 kubelet[2442]: W0209 09:56:02.856711 2442 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 09:56:03.034880 kubelet[2442]: I0209 09:56:03.034848 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/f5f6f1c9e1df9aae683b42a6dfbbed75-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-f1c369a1bc\" (UID: \"f5f6f1c9e1df9aae683b42a6dfbbed75\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f1c369a1bc" Feb 9 09:56:03.035085 kubelet[2442]: I0209 09:56:03.035073 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5f6f1c9e1df9aae683b42a6dfbbed75-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-f1c369a1bc\" (UID: \"f5f6f1c9e1df9aae683b42a6dfbbed75\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f1c369a1bc" Feb 9 09:56:03.035179 kubelet[2442]: I0209 09:56:03.035170 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5f6f1c9e1df9aae683b42a6dfbbed75-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-f1c369a1bc\" (UID: \"f5f6f1c9e1df9aae683b42a6dfbbed75\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f1c369a1bc" Feb 9 09:56:03.035276 kubelet[2442]: I0209 09:56:03.035265 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/144bdbe13ac066c2eca73e0e6b58274b-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-f1c369a1bc\" (UID: \"144bdbe13ac066c2eca73e0e6b58274b\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-f1c369a1bc" Feb 9 09:56:03.035379 kubelet[2442]: I0209 09:56:03.035369 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/144bdbe13ac066c2eca73e0e6b58274b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-f1c369a1bc\" (UID: \"144bdbe13ac066c2eca73e0e6b58274b\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-f1c369a1bc" Feb 9 09:56:03.035503 kubelet[2442]: I0209 09:56:03.035494 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5f6f1c9e1df9aae683b42a6dfbbed75-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-f1c369a1bc\" (UID: \"f5f6f1c9e1df9aae683b42a6dfbbed75\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f1c369a1bc" Feb 9 09:56:03.035606 kubelet[2442]: I0209 09:56:03.035597 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33e1ed44fac717261fbc0c4ba188bcfc-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-f1c369a1bc\" (UID: \"33e1ed44fac717261fbc0c4ba188bcfc\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-f1c369a1bc" Feb 9 09:56:03.035700 kubelet[2442]: I0209 09:56:03.035690 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/144bdbe13ac066c2eca73e0e6b58274b-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-f1c369a1bc\" (UID: \"144bdbe13ac066c2eca73e0e6b58274b\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-f1c369a1bc" Feb 9 09:56:03.035801 kubelet[2442]: I0209 09:56:03.035778 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f5f6f1c9e1df9aae683b42a6dfbbed75-flexvolume-dir\") pod 
\"kube-controller-manager-ci-3510.3.2-a-f1c369a1bc\" (UID: \"f5f6f1c9e1df9aae683b42a6dfbbed75\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f1c369a1bc" Feb 9 09:56:03.700806 kubelet[2442]: I0209 09:56:03.700773 2442 apiserver.go:52] "Watching apiserver" Feb 9 09:56:03.734875 kubelet[2442]: I0209 09:56:03.734841 2442 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 9 09:56:03.738262 kubelet[2442]: I0209 09:56:03.738233 2442 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:56:03.793359 kubelet[2442]: W0209 09:56:03.793325 2442 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 09:56:03.793557 kubelet[2442]: E0209 09:56:03.793543 2442 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-f1c369a1bc\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-f1c369a1bc" Feb 9 09:56:03.836345 kubelet[2442]: I0209 09:56:03.836317 2442 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f1c369a1bc" podStartSLOduration=1.836271792 podCreationTimestamp="2024-02-09 09:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:03.827878739 +0000 UTC m=+1.193716469" watchObservedRunningTime="2024-02-09 09:56:03.836271792 +0000 UTC m=+1.202109482" Feb 9 09:56:03.848990 kubelet[2442]: I0209 09:56:03.848963 2442 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-f1c369a1bc" podStartSLOduration=1.8489188300000001 podCreationTimestamp="2024-02-09 09:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:03.836833139 +0000 UTC m=+1.202670869" watchObservedRunningTime="2024-02-09 09:56:03.84891883 +0000 UTC m=+1.214756560" Feb 9 09:56:03.849306 kubelet[2442]: I0209 09:56:03.849289 2442 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-f1c369a1bc" podStartSLOduration=1.8492685020000001 podCreationTimestamp="2024-02-09 09:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:03.849252383 +0000 UTC m=+1.215090113" watchObservedRunningTime="2024-02-09 09:56:03.849268502 +0000 UTC m=+1.215106232" Feb 9 09:56:04.199955 sudo[1646]: pam_unix(sudo:session): session closed for user root Feb 9 09:56:04.294675 sshd[1643]: pam_unix(sshd:session): session closed for user core Feb 9 09:56:04.297120 systemd-logind[1364]: Session 7 logged out. Waiting for processes to exit. Feb 9 09:56:04.297291 systemd[1]: sshd@4-10.200.20.13:22-10.200.12.6:60790.service: Deactivated successfully. Feb 9 09:56:04.298038 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 09:56:04.298202 systemd[1]: session-7.scope: Consumed 6.088s CPU time. Feb 9 09:56:04.299428 systemd-logind[1364]: Removed session 7. 
Feb 9 09:56:17.083972 kubelet[2442]: I0209 09:56:17.083929 2442 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 09:56:17.084614 env[1381]: time="2024-02-09T09:56:17.084582146Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 09:56:17.085042 kubelet[2442]: I0209 09:56:17.085026 2442 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 09:56:17.127749 kubelet[2442]: I0209 09:56:17.127702 2442 topology_manager.go:212] "Topology Admit Handler" Feb 9 09:56:17.132626 systemd[1]: Created slice kubepods-burstable-pod01c94fc0_e809_40e3_bcf6_bbc097c7caee.slice. Feb 9 09:56:17.141092 kubelet[2442]: W0209 09:56:17.141052 2442 reflector.go:533] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.2-a-f1c369a1bc" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-3510.3.2-a-f1c369a1bc' and this object Feb 9 09:56:17.141238 kubelet[2442]: E0209 09:56:17.141096 2442 reflector.go:148] object-"kube-flannel"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.2-a-f1c369a1bc" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-3510.3.2-a-f1c369a1bc' and this object Feb 9 09:56:17.141238 kubelet[2442]: W0209 09:56:17.141052 2442 reflector.go:533] object-"kube-flannel"/"kube-flannel-cfg": failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ci-3510.3.2-a-f1c369a1bc" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-3510.3.2-a-f1c369a1bc' and this object Feb 9 09:56:17.141238 kubelet[2442]: E0209 09:56:17.141129 2442 reflector.go:148] object-"kube-flannel"/"kube-flannel-cfg": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ci-3510.3.2-a-f1c369a1bc" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-3510.3.2-a-f1c369a1bc' and this object Feb 9 09:56:17.141898 kubelet[2442]: I0209 09:56:17.141875 2442 topology_manager.go:212] "Topology Admit Handler" Feb 9 09:56:17.146364 systemd[1]: Created slice kubepods-besteffort-pode1ee0101_81f7_4a4b_974e_251c3dbea76c.slice. 
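The containerd message "No cni config template is specified, wait for other system components to drop the config" refers to the flannel pod admitted just above: once running, the kube-flannel container writes a CNI conflist into /etc/cni/net.d via its cni hostPath volume, and the per-node subnet (the 192.168.0.0/24 podCIDR logged here) ends up in /run/flannel/subnet.env, which the flannel CNI plugin reads when delegating to the bridge plugin. The stock kube-flannel manifest ships a conflist along the following lines; the contents are assumed from that upstream manifest, not read from this host.

  # /etc/cni/net.d/10-flannel.conflist as typically written by the kube-flannel container
  {
    "name": "cbr0",
    "cniVersion": "0.3.1",
    "plugins": [
      {
        "type": "flannel",
        "delegate": {
          "hairpinMode": true,
          "isDefaultGateway": true
        }
      },
      {
        "type": "portmap",
        "capabilities": {
          "portMappings": true
        }
      }
    ]
  }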
Feb 9 09:56:17.154219 kubelet[2442]: W0209 09:56:17.154185 2442 reflector.go:533] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.2-a-f1c369a1bc" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-f1c369a1bc' and this object Feb 9 09:56:17.154219 kubelet[2442]: E0209 09:56:17.154222 2442 reflector.go:148] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.2-a-f1c369a1bc" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-f1c369a1bc' and this object Feb 9 09:56:17.302649 kubelet[2442]: I0209 09:56:17.302622 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/01c94fc0-e809-40e3-bcf6-bbc097c7caee-run\") pod \"kube-flannel-ds-jgcl8\" (UID: \"01c94fc0-e809-40e3-bcf6-bbc097c7caee\") " pod="kube-flannel/kube-flannel-ds-jgcl8" Feb 9 09:56:17.302876 kubelet[2442]: I0209 09:56:17.302862 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/01c94fc0-e809-40e3-bcf6-bbc097c7caee-flannel-cfg\") pod \"kube-flannel-ds-jgcl8\" (UID: \"01c94fc0-e809-40e3-bcf6-bbc097c7caee\") " pod="kube-flannel/kube-flannel-ds-jgcl8" Feb 9 09:56:17.302998 kubelet[2442]: I0209 09:56:17.302989 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01c94fc0-e809-40e3-bcf6-bbc097c7caee-xtables-lock\") pod \"kube-flannel-ds-jgcl8\" (UID: \"01c94fc0-e809-40e3-bcf6-bbc097c7caee\") " pod="kube-flannel/kube-flannel-ds-jgcl8" Feb 9 09:56:17.303087 kubelet[2442]: I0209 09:56:17.303079 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27kzf\" (UniqueName: \"kubernetes.io/projected/01c94fc0-e809-40e3-bcf6-bbc097c7caee-kube-api-access-27kzf\") pod \"kube-flannel-ds-jgcl8\" (UID: \"01c94fc0-e809-40e3-bcf6-bbc097c7caee\") " pod="kube-flannel/kube-flannel-ds-jgcl8" Feb 9 09:56:17.303180 kubelet[2442]: I0209 09:56:17.303171 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1ee0101-81f7-4a4b-974e-251c3dbea76c-xtables-lock\") pod \"kube-proxy-pnlqh\" (UID: \"e1ee0101-81f7-4a4b-974e-251c3dbea76c\") " pod="kube-system/kube-proxy-pnlqh" Feb 9 09:56:17.303278 kubelet[2442]: I0209 09:56:17.303269 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e1ee0101-81f7-4a4b-974e-251c3dbea76c-kube-proxy\") pod \"kube-proxy-pnlqh\" (UID: \"e1ee0101-81f7-4a4b-974e-251c3dbea76c\") " pod="kube-system/kube-proxy-pnlqh" Feb 9 09:56:17.303365 kubelet[2442]: I0209 09:56:17.303357 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4882\" (UniqueName: \"kubernetes.io/projected/e1ee0101-81f7-4a4b-974e-251c3dbea76c-kube-api-access-m4882\") pod \"kube-proxy-pnlqh\" (UID: \"e1ee0101-81f7-4a4b-974e-251c3dbea76c\") " pod="kube-system/kube-proxy-pnlqh" Feb 9 09:56:17.303459 kubelet[2442]: I0209 
09:56:17.303451 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/01c94fc0-e809-40e3-bcf6-bbc097c7caee-cni\") pod \"kube-flannel-ds-jgcl8\" (UID: \"01c94fc0-e809-40e3-bcf6-bbc097c7caee\") " pod="kube-flannel/kube-flannel-ds-jgcl8" Feb 9 09:56:17.303662 kubelet[2442]: I0209 09:56:17.303651 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/01c94fc0-e809-40e3-bcf6-bbc097c7caee-cni-plugin\") pod \"kube-flannel-ds-jgcl8\" (UID: \"01c94fc0-e809-40e3-bcf6-bbc097c7caee\") " pod="kube-flannel/kube-flannel-ds-jgcl8" Feb 9 09:56:17.303749 kubelet[2442]: I0209 09:56:17.303740 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1ee0101-81f7-4a4b-974e-251c3dbea76c-lib-modules\") pod \"kube-proxy-pnlqh\" (UID: \"e1ee0101-81f7-4a4b-974e-251c3dbea76c\") " pod="kube-system/kube-proxy-pnlqh" Feb 9 09:56:18.404454 kubelet[2442]: E0209 09:56:18.404416 2442 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 9 09:56:18.406512 kubelet[2442]: E0209 09:56:18.404524 2442 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1ee0101-81f7-4a4b-974e-251c3dbea76c-kube-proxy podName:e1ee0101-81f7-4a4b-974e-251c3dbea76c nodeName:}" failed. No retries permitted until 2024-02-09 09:56:18.904504183 +0000 UTC m=+16.270341913 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/e1ee0101-81f7-4a4b-974e-251c3dbea76c-kube-proxy") pod "kube-proxy-pnlqh" (UID: "e1ee0101-81f7-4a4b-974e-251c3dbea76c") : failed to sync configmap cache: timed out waiting for the condition Feb 9 09:56:18.419498 kubelet[2442]: E0209 09:56:18.418167 2442 projected.go:292] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 9 09:56:18.419498 kubelet[2442]: E0209 09:56:18.418196 2442 projected.go:198] Error preparing data for projected volume kube-api-access-27kzf for pod kube-flannel/kube-flannel-ds-jgcl8: failed to sync configmap cache: timed out waiting for the condition Feb 9 09:56:18.419498 kubelet[2442]: E0209 09:56:18.418262 2442 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01c94fc0-e809-40e3-bcf6-bbc097c7caee-kube-api-access-27kzf podName:01c94fc0-e809-40e3-bcf6-bbc097c7caee nodeName:}" failed. No retries permitted until 2024-02-09 09:56:18.918245667 +0000 UTC m=+16.284083357 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-27kzf" (UniqueName: "kubernetes.io/projected/01c94fc0-e809-40e3-bcf6-bbc097c7caee-kube-api-access-27kzf") pod "kube-flannel-ds-jgcl8" (UID: "01c94fc0-e809-40e3-bcf6-bbc097c7caee") : failed to sync configmap cache: timed out waiting for the condition Feb 9 09:56:18.954501 env[1381]: time="2024-02-09T09:56:18.954273013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pnlqh,Uid:e1ee0101-81f7-4a4b-974e-251c3dbea76c,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:19.011498 env[1381]: time="2024-02-09T09:56:19.011401576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:19.011763 env[1381]: time="2024-02-09T09:56:19.011451455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:19.011763 env[1381]: time="2024-02-09T09:56:19.011462015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:19.011763 env[1381]: time="2024-02-09T09:56:19.011702373Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c5ab45d861dda4ac196e6fbe40f9d36c2af8fdfe103051cbc1838dc887e19ab pid=2508 runtime=io.containerd.runc.v2 Feb 9 09:56:19.032834 systemd[1]: Started cri-containerd-7c5ab45d861dda4ac196e6fbe40f9d36c2af8fdfe103051cbc1838dc887e19ab.scope. Feb 9 09:56:19.053348 env[1381]: time="2024-02-09T09:56:19.053297083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pnlqh,Uid:e1ee0101-81f7-4a4b-974e-251c3dbea76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c5ab45d861dda4ac196e6fbe40f9d36c2af8fdfe103051cbc1838dc887e19ab\"" Feb 9 09:56:19.058120 env[1381]: time="2024-02-09T09:56:19.058073166Z" level=info msg="CreateContainer within sandbox \"7c5ab45d861dda4ac196e6fbe40f9d36c2af8fdfe103051cbc1838dc887e19ab\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 09:56:19.130716 env[1381]: time="2024-02-09T09:56:19.130664350Z" level=info msg="CreateContainer within sandbox \"7c5ab45d861dda4ac196e6fbe40f9d36c2af8fdfe103051cbc1838dc887e19ab\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e1191f17d049913087888a41643e71249e9c72a53f1bcda658f849ac39c95eca\"" Feb 9 09:56:19.132516 env[1381]: time="2024-02-09T09:56:19.131870420Z" level=info msg="StartContainer for \"e1191f17d049913087888a41643e71249e9c72a53f1bcda658f849ac39c95eca\"" Feb 9 09:56:19.147993 systemd[1]: Started cri-containerd-e1191f17d049913087888a41643e71249e9c72a53f1bcda658f849ac39c95eca.scope. Feb 9 09:56:19.185928 env[1381]: time="2024-02-09T09:56:19.185871632Z" level=info msg="StartContainer for \"e1191f17d049913087888a41643e71249e9c72a53f1bcda658f849ac39c95eca\" returns successfully" Feb 9 09:56:19.236362 env[1381]: time="2024-02-09T09:56:19.236266473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-jgcl8,Uid:01c94fc0-e809-40e3-bcf6-bbc097c7caee,Namespace:kube-flannel,Attempt:0,}" Feb 9 09:56:19.288892 env[1381]: time="2024-02-09T09:56:19.288823816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:19.289026 env[1381]: time="2024-02-09T09:56:19.288896375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:19.289026 env[1381]: time="2024-02-09T09:56:19.288922015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:19.289189 env[1381]: time="2024-02-09T09:56:19.289139493Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/89291b98e5618841352347c15e9b455ec4a9ac95cc5da1a1d0e9d5767077acee pid=2613 runtime=io.containerd.runc.v2 Feb 9 09:56:19.299368 systemd[1]: Started cri-containerd-89291b98e5618841352347c15e9b455ec4a9ac95cc5da1a1d0e9d5767077acee.scope. 
Feb 9 09:56:19.339869 env[1381]: time="2024-02-09T09:56:19.339825251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-jgcl8,Uid:01c94fc0-e809-40e3-bcf6-bbc097c7caee,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"89291b98e5618841352347c15e9b455ec4a9ac95cc5da1a1d0e9d5767077acee\"" Feb 9 09:56:19.343489 env[1381]: time="2024-02-09T09:56:19.341598557Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 9 09:56:20.006917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4162284207.mount: Deactivated successfully. Feb 9 09:56:21.437656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1970561995.mount: Deactivated successfully. Feb 9 09:56:21.713957 env[1381]: time="2024-02-09T09:56:21.713849084Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:21.731307 env[1381]: time="2024-02-09T09:56:21.731264483Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:21.739456 env[1381]: time="2024-02-09T09:56:21.739413786Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:21.750512 env[1381]: time="2024-02-09T09:56:21.750456149Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:21.751181 env[1381]: time="2024-02-09T09:56:21.751148184Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 9 09:56:21.753322 env[1381]: time="2024-02-09T09:56:21.753286690Z" level=info msg="CreateContainer within sandbox \"89291b98e5618841352347c15e9b455ec4a9ac95cc5da1a1d0e9d5767077acee\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 9 09:56:21.803862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2894675209.mount: Deactivated successfully. Feb 9 09:56:21.837114 env[1381]: time="2024-02-09T09:56:21.837042346Z" level=info msg="CreateContainer within sandbox \"89291b98e5618841352347c15e9b455ec4a9ac95cc5da1a1d0e9d5767077acee\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"86fd9b4a374c9f9e81850e415829776e8b31f1c18b42b77186b44002f5a52c3e\"" Feb 9 09:56:21.837837 env[1381]: time="2024-02-09T09:56:21.837804660Z" level=info msg="StartContainer for \"86fd9b4a374c9f9e81850e415829776e8b31f1c18b42b77186b44002f5a52c3e\"" Feb 9 09:56:21.856264 systemd[1]: Started cri-containerd-86fd9b4a374c9f9e81850e415829776e8b31f1c18b42b77186b44002f5a52c3e.scope. Feb 9 09:56:21.887281 systemd[1]: cri-containerd-86fd9b4a374c9f9e81850e415829776e8b31f1c18b42b77186b44002f5a52c3e.scope: Deactivated successfully. 
Feb 9 09:56:21.888702 env[1381]: time="2024-02-09T09:56:21.888650546Z" level=info msg="StartContainer for \"86fd9b4a374c9f9e81850e415829776e8b31f1c18b42b77186b44002f5a52c3e\" returns successfully" Feb 9 09:56:21.957130 env[1381]: time="2024-02-09T09:56:21.957084909Z" level=info msg="shim disconnected" id=86fd9b4a374c9f9e81850e415829776e8b31f1c18b42b77186b44002f5a52c3e Feb 9 09:56:21.957359 env[1381]: time="2024-02-09T09:56:21.957340907Z" level=warning msg="cleaning up after shim disconnected" id=86fd9b4a374c9f9e81850e415829776e8b31f1c18b42b77186b44002f5a52c3e namespace=k8s.io Feb 9 09:56:21.957419 env[1381]: time="2024-02-09T09:56:21.957406627Z" level=info msg="cleaning up dead shim" Feb 9 09:56:21.965632 env[1381]: time="2024-02-09T09:56:21.964883575Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:56:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2780 runtime=io.containerd.runc.v2\n" Feb 9 09:56:22.360654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1159600387.mount: Deactivated successfully. Feb 9 09:56:22.754128 kubelet[2442]: I0209 09:56:22.753853 2442 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pnlqh" podStartSLOduration=5.753817404 podCreationTimestamp="2024-02-09 09:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:19.81418797 +0000 UTC m=+17.180025700" watchObservedRunningTime="2024-02-09 09:56:22.753817404 +0000 UTC m=+20.119655134" Feb 9 09:56:22.812359 env[1381]: time="2024-02-09T09:56:22.812313902Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 9 09:56:24.944721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1782734357.mount: Deactivated successfully. Feb 9 09:56:26.772223 env[1381]: time="2024-02-09T09:56:26.772177083Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:26.796269 env[1381]: time="2024-02-09T09:56:26.796228762Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:26.812815 env[1381]: time="2024-02-09T09:56:26.812775998Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:26.830484 env[1381]: time="2024-02-09T09:56:26.830406989Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:26.831415 env[1381]: time="2024-02-09T09:56:26.831386144Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Feb 9 09:56:26.835343 env[1381]: time="2024-02-09T09:56:26.835302884Z" level=info msg="CreateContainer within sandbox \"89291b98e5618841352347c15e9b455ec4a9ac95cc5da1a1d0e9d5767077acee\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 09:56:26.890592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3320816220.mount: Deactivated successfully. 
Feb 9 09:56:26.896691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1223803925.mount: Deactivated successfully. Feb 9 09:56:26.931179 env[1381]: time="2024-02-09T09:56:26.931130521Z" level=info msg="CreateContainer within sandbox \"89291b98e5618841352347c15e9b455ec4a9ac95cc5da1a1d0e9d5767077acee\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9818993eec5aa4aea00f00ef2e1b2b2810396e71413130d49e4e99beb8651d8b\"" Feb 9 09:56:26.931951 env[1381]: time="2024-02-09T09:56:26.931924957Z" level=info msg="StartContainer for \"9818993eec5aa4aea00f00ef2e1b2b2810396e71413130d49e4e99beb8651d8b\"" Feb 9 09:56:26.949361 systemd[1]: Started cri-containerd-9818993eec5aa4aea00f00ef2e1b2b2810396e71413130d49e4e99beb8651d8b.scope. Feb 9 09:56:26.979858 systemd[1]: cri-containerd-9818993eec5aa4aea00f00ef2e1b2b2810396e71413130d49e4e99beb8651d8b.scope: Deactivated successfully. Feb 9 09:56:26.990214 env[1381]: time="2024-02-09T09:56:26.990168743Z" level=info msg="StartContainer for \"9818993eec5aa4aea00f00ef2e1b2b2810396e71413130d49e4e99beb8651d8b\" returns successfully" Feb 9 09:56:27.024921 kubelet[2442]: I0209 09:56:27.024286 2442 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 09:56:27.044801 kubelet[2442]: I0209 09:56:27.044755 2442 topology_manager.go:212] "Topology Admit Handler" Feb 9 09:56:27.050796 systemd[1]: Created slice kubepods-burstable-pod1ffffceb_0de5_4ff9_b7fc_36bae9ac757d.slice. Feb 9 09:56:27.054140 kubelet[2442]: I0209 09:56:27.054113 2442 topology_manager.go:212] "Topology Admit Handler" Feb 9 09:56:27.058548 systemd[1]: Created slice kubepods-burstable-podf65797e9_cb32_45d6_af41_6784d230c83e.slice. Feb 9 09:56:27.065646 kubelet[2442]: I0209 09:56:27.065619 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9m2c\" (UniqueName: \"kubernetes.io/projected/1ffffceb-0de5-4ff9-b7fc-36bae9ac757d-kube-api-access-f9m2c\") pod \"coredns-5d78c9869d-qrzsq\" (UID: \"1ffffceb-0de5-4ff9-b7fc-36bae9ac757d\") " pod="kube-system/coredns-5d78c9869d-qrzsq" Feb 9 09:56:27.065922 kubelet[2442]: I0209 09:56:27.065909 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f65797e9-cb32-45d6-af41-6784d230c83e-config-volume\") pod \"coredns-5d78c9869d-kzmsp\" (UID: \"f65797e9-cb32-45d6-af41-6784d230c83e\") " pod="kube-system/coredns-5d78c9869d-kzmsp" Feb 9 09:56:27.066021 kubelet[2442]: I0209 09:56:27.066011 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ffffceb-0de5-4ff9-b7fc-36bae9ac757d-config-volume\") pod \"coredns-5d78c9869d-qrzsq\" (UID: \"1ffffceb-0de5-4ff9-b7fc-36bae9ac757d\") " pod="kube-system/coredns-5d78c9869d-qrzsq" Feb 9 09:56:27.066097 kubelet[2442]: I0209 09:56:27.066087 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5xvs\" (UniqueName: \"kubernetes.io/projected/f65797e9-cb32-45d6-af41-6784d230c83e-kube-api-access-s5xvs\") pod \"coredns-5d78c9869d-kzmsp\" (UID: \"f65797e9-cb32-45d6-af41-6784d230c83e\") " pod="kube-system/coredns-5d78c9869d-kzmsp" Feb 9 09:56:27.354028 env[1381]: time="2024-02-09T09:56:27.353375340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-qrzsq,Uid:1ffffceb-0de5-4ff9-b7fc-36bae9ac757d,Namespace:kube-system,Attempt:0,}" 
Feb 9 09:56:27.363213 env[1381]: time="2024-02-09T09:56:27.362937415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-kzmsp,Uid:f65797e9-cb32-45d6-af41-6784d230c83e,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:27.505215 env[1381]: time="2024-02-09T09:56:27.505165600Z" level=info msg="shim disconnected" id=9818993eec5aa4aea00f00ef2e1b2b2810396e71413130d49e4e99beb8651d8b Feb 9 09:56:27.505215 env[1381]: time="2024-02-09T09:56:27.505210279Z" level=warning msg="cleaning up after shim disconnected" id=9818993eec5aa4aea00f00ef2e1b2b2810396e71413130d49e4e99beb8651d8b namespace=k8s.io Feb 9 09:56:27.505215 env[1381]: time="2024-02-09T09:56:27.505220599Z" level=info msg="cleaning up dead shim" Feb 9 09:56:27.513022 env[1381]: time="2024-02-09T09:56:27.512975225Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:56:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2839 runtime=io.containerd.runc.v2\n" Feb 9 09:56:27.737963 env[1381]: time="2024-02-09T09:56:27.737883359Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-qrzsq,Uid:1ffffceb-0de5-4ff9-b7fc-36bae9ac757d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a5c3afe02681452a8c150800da49bb7f6b9f001709faf10055d3aedd4dc08833\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 9 09:56:27.738497 kubelet[2442]: E0209 09:56:27.738308 2442 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5c3afe02681452a8c150800da49bb7f6b9f001709faf10055d3aedd4dc08833\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 9 09:56:27.738497 kubelet[2442]: E0209 09:56:27.738371 2442 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5c3afe02681452a8c150800da49bb7f6b9f001709faf10055d3aedd4dc08833\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5d78c9869d-qrzsq" Feb 9 09:56:27.738497 kubelet[2442]: E0209 09:56:27.738391 2442 kuberuntime_manager.go:1122] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5c3afe02681452a8c150800da49bb7f6b9f001709faf10055d3aedd4dc08833\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5d78c9869d-qrzsq" Feb 9 09:56:27.738497 kubelet[2442]: E0209 09:56:27.738453 2442 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5d78c9869d-qrzsq_kube-system(1ffffceb-0de5-4ff9-b7fc-36bae9ac757d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5d78c9869d-qrzsq_kube-system(1ffffceb-0de5-4ff9-b7fc-36bae9ac757d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5c3afe02681452a8c150800da49bb7f6b9f001709faf10055d3aedd4dc08833\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-5d78c9869d-qrzsq" podUID=1ffffceb-0de5-4ff9-b7fc-36bae9ac757d Feb 9 09:56:27.768553 env[1381]: time="2024-02-09T09:56:27.768486383Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-kzmsp,Uid:f65797e9-cb32-45d6-af41-6784d230c83e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a17731edcd1ac237fe7e7fa0fa1de8237e559cafb9617d8b06b221bb1be7ceb8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 9 09:56:27.768768 kubelet[2442]: E0209 09:56:27.768744 2442 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a17731edcd1ac237fe7e7fa0fa1de8237e559cafb9617d8b06b221bb1be7ceb8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 9 09:56:27.768830 kubelet[2442]: E0209 09:56:27.768796 2442 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a17731edcd1ac237fe7e7fa0fa1de8237e559cafb9617d8b06b221bb1be7ceb8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5d78c9869d-kzmsp" Feb 9 09:56:27.768830 kubelet[2442]: E0209 09:56:27.768815 2442 kuberuntime_manager.go:1122] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a17731edcd1ac237fe7e7fa0fa1de8237e559cafb9617d8b06b221bb1be7ceb8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5d78c9869d-kzmsp" Feb 9 09:56:27.768888 kubelet[2442]: E0209 09:56:27.768861 2442 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5d78c9869d-kzmsp_kube-system(f65797e9-cb32-45d6-af41-6784d230c83e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5d78c9869d-kzmsp_kube-system(f65797e9-cb32-45d6-af41-6784d230c83e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a17731edcd1ac237fe7e7fa0fa1de8237e559cafb9617d8b06b221bb1be7ceb8\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-5d78c9869d-kzmsp" podUID=f65797e9-cb32-45d6-af41-6784d230c83e Feb 9 09:56:27.826503 env[1381]: time="2024-02-09T09:56:27.826450255Z" level=info msg="CreateContainer within sandbox \"89291b98e5618841352347c15e9b455ec4a9ac95cc5da1a1d0e9d5767077acee\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 9 09:56:27.889882 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9818993eec5aa4aea00f00ef2e1b2b2810396e71413130d49e4e99beb8651d8b-rootfs.mount: Deactivated successfully. Feb 9 09:56:27.907530 env[1381]: time="2024-02-09T09:56:27.907457684Z" level=info msg="CreateContainer within sandbox \"89291b98e5618841352347c15e9b455ec4a9ac95cc5da1a1d0e9d5767077acee\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"bf92fe2ea75b44376bf25d04c3eb63b371a9e289bfbbe222563e952847d85880\"" Feb 9 09:56:27.908028 env[1381]: time="2024-02-09T09:56:27.908003440Z" level=info msg="StartContainer for \"bf92fe2ea75b44376bf25d04c3eb63b371a9e289bfbbe222563e952847d85880\"" Feb 9 09:56:27.930290 systemd[1]: run-containerd-runc-k8s.io-bf92fe2ea75b44376bf25d04c3eb63b371a9e289bfbbe222563e952847d85880-runc.GmPFF6.mount: Deactivated successfully. 
Feb 9 09:56:27.931590 systemd[1]: Started cri-containerd-bf92fe2ea75b44376bf25d04c3eb63b371a9e289bfbbe222563e952847d85880.scope. Feb 9 09:56:27.969072 env[1381]: time="2024-02-09T09:56:27.969009970Z" level=info msg="StartContainer for \"bf92fe2ea75b44376bf25d04c3eb63b371a9e289bfbbe222563e952847d85880\" returns successfully" Feb 9 09:56:29.111782 systemd-networkd[1531]: flannel.1: Link UP Feb 9 09:56:29.111796 systemd-networkd[1531]: flannel.1: Gained carrier Feb 9 09:56:30.496594 systemd-networkd[1531]: flannel.1: Gained IPv6LL Feb 9 09:56:38.744109 env[1381]: time="2024-02-09T09:56:38.744049547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-kzmsp,Uid:f65797e9-cb32-45d6-af41-6784d230c83e,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:38.830609 systemd-networkd[1531]: cni0: Link UP Feb 9 09:56:38.830616 systemd-networkd[1531]: cni0: Gained carrier Feb 9 09:56:38.832368 systemd-networkd[1531]: cni0: Lost carrier Feb 9 09:56:38.858880 systemd-networkd[1531]: vethd0b41fa2: Link UP Feb 9 09:56:38.870898 kernel: cni0: port 1(vethd0b41fa2) entered blocking state Feb 9 09:56:38.871012 kernel: cni0: port 1(vethd0b41fa2) entered disabled state Feb 9 09:56:38.877405 kernel: device vethd0b41fa2 entered promiscuous mode Feb 9 09:56:38.883404 kernel: cni0: port 1(vethd0b41fa2) entered blocking state Feb 9 09:56:38.883520 kernel: cni0: port 1(vethd0b41fa2) entered forwarding state Feb 9 09:56:38.895504 kernel: cni0: port 1(vethd0b41fa2) entered disabled state Feb 9 09:56:38.910552 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethd0b41fa2: link becomes ready Feb 9 09:56:38.910666 kernel: cni0: port 1(vethd0b41fa2) entered blocking state Feb 9 09:56:38.910690 kernel: cni0: port 1(vethd0b41fa2) entered forwarding state Feb 9 09:56:38.916404 systemd-networkd[1531]: vethd0b41fa2: Gained carrier Feb 9 09:56:38.918351 systemd-networkd[1531]: cni0: Gained carrier Feb 9 09:56:38.919494 env[1381]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000014928), "name":"cbr0", "type":"bridge"} Feb 9 09:56:38.919494 env[1381]: delegateAdd: netconf sent to delegate plugin: Feb 9 09:56:38.940359 env[1381]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-09T09:56:38.940176325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:38.940359 env[1381]: time="2024-02-09T09:56:38.940218284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:38.940359 env[1381]: time="2024-02-09T09:56:38.940228084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:38.940555 env[1381]: time="2024-02-09T09:56:38.940394601Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1812fa456f62857680a0c8b3a501649a22ba02ae318d74bbe7168ca77cd2e50a pid=3082 runtime=io.containerd.runc.v2 Feb 9 09:56:38.954636 systemd[1]: Started cri-containerd-1812fa456f62857680a0c8b3a501649a22ba02ae318d74bbe7168ca77cd2e50a.scope. Feb 9 09:56:38.988672 env[1381]: time="2024-02-09T09:56:38.988630478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-kzmsp,Uid:f65797e9-cb32-45d6-af41-6784d230c83e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1812fa456f62857680a0c8b3a501649a22ba02ae318d74bbe7168ca77cd2e50a\"" Feb 9 09:56:38.991654 env[1381]: time="2024-02-09T09:56:38.991620191Z" level=info msg="CreateContainer within sandbox \"1812fa456f62857680a0c8b3a501649a22ba02ae318d74bbe7168ca77cd2e50a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:56:39.062038 env[1381]: time="2024-02-09T09:56:39.061928705Z" level=info msg="CreateContainer within sandbox \"1812fa456f62857680a0c8b3a501649a22ba02ae318d74bbe7168ca77cd2e50a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"11f5da1c90417327584e5d3da299bc125c5534656e2bd96b78b560a25ecf9581\"" Feb 9 09:56:39.064667 env[1381]: time="2024-02-09T09:56:39.064632983Z" level=info msg="StartContainer for \"11f5da1c90417327584e5d3da299bc125c5534656e2bd96b78b560a25ecf9581\"" Feb 9 09:56:39.079247 systemd[1]: Started cri-containerd-11f5da1c90417327584e5d3da299bc125c5534656e2bd96b78b560a25ecf9581.scope. Feb 9 09:56:39.119153 env[1381]: time="2024-02-09T09:56:39.119091745Z" level=info msg="StartContainer for \"11f5da1c90417327584e5d3da299bc125c5534656e2bd96b78b560a25ecf9581\" returns successfully" Feb 9 09:56:39.811011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1437686346.mount: Deactivated successfully. 
Feb 9 09:56:39.854882 kubelet[2442]: I0209 09:56:39.854853 2442 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-kzmsp" podStartSLOduration=22.854816777 podCreationTimestamp="2024-02-09 09:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:39.854155547 +0000 UTC m=+37.219993397" watchObservedRunningTime="2024-02-09 09:56:39.854816777 +0000 UTC m=+37.220654507" Feb 9 09:56:39.855453 kubelet[2442]: I0209 09:56:39.855430 2442 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-jgcl8" podStartSLOduration=15.364709867 podCreationTimestamp="2024-02-09 09:56:17 +0000 UTC" firstStartedPulling="2024-02-09 09:56:19.340993402 +0000 UTC m=+16.706831132" lastFinishedPulling="2024-02-09 09:56:26.831691303 +0000 UTC m=+24.197529033" observedRunningTime="2024-02-09 09:56:28.835400147 +0000 UTC m=+26.201237877" watchObservedRunningTime="2024-02-09 09:56:39.855407768 +0000 UTC m=+37.221245458" Feb 9 09:56:40.096625 systemd-networkd[1531]: cni0: Gained IPv6LL Feb 9 09:56:40.352619 systemd-networkd[1531]: vethd0b41fa2: Gained IPv6LL Feb 9 09:56:40.744294 env[1381]: time="2024-02-09T09:56:40.744191187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-qrzsq,Uid:1ffffceb-0de5-4ff9-b7fc-36bae9ac757d,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:40.845414 systemd-networkd[1531]: veth3cae83fd: Link UP Feb 9 09:56:40.859056 kernel: cni0: port 2(veth3cae83fd) entered blocking state Feb 9 09:56:40.859150 kernel: cni0: port 2(veth3cae83fd) entered disabled state Feb 9 09:56:40.864523 kernel: device veth3cae83fd entered promiscuous mode Feb 9 09:56:40.879293 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:56:40.879396 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth3cae83fd: link becomes ready Feb 9 09:56:40.884654 kernel: cni0: port 2(veth3cae83fd) entered blocking state Feb 9 09:56:40.884739 kernel: cni0: port 2(veth3cae83fd) entered forwarding state Feb 9 09:56:40.890050 systemd-networkd[1531]: veth3cae83fd: Gained carrier Feb 9 09:56:40.893044 env[1381]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a88e8), "name":"cbr0", "type":"bridge"} Feb 9 09:56:40.893044 env[1381]: delegateAdd: netconf sent to delegate plugin: Feb 9 09:56:40.917690 env[1381]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-09T09:56:40.917614228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:40.917842 env[1381]: time="2024-02-09T09:56:40.917699906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:40.917842 env[1381]: time="2024-02-09T09:56:40.917727746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:40.917943 env[1381]: time="2024-02-09T09:56:40.917910183Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d10efab6ad82d9ffd667b614029f06e1371dbe033a1d981ada6d314cd2092d74 pid=3215 runtime=io.containerd.runc.v2 Feb 9 09:56:40.936080 systemd[1]: Started cri-containerd-d10efab6ad82d9ffd667b614029f06e1371dbe033a1d981ada6d314cd2092d74.scope. Feb 9 09:56:40.968616 env[1381]: time="2024-02-09T09:56:40.968576744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-qrzsq,Uid:1ffffceb-0de5-4ff9-b7fc-36bae9ac757d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d10efab6ad82d9ffd667b614029f06e1371dbe033a1d981ada6d314cd2092d74\"" Feb 9 09:56:40.973340 env[1381]: time="2024-02-09T09:56:40.973307313Z" level=info msg="CreateContainer within sandbox \"d10efab6ad82d9ffd667b614029f06e1371dbe033a1d981ada6d314cd2092d74\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:56:41.056860 env[1381]: time="2024-02-09T09:56:41.056084334Z" level=info msg="CreateContainer within sandbox \"d10efab6ad82d9ffd667b614029f06e1371dbe033a1d981ada6d314cd2092d74\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0366d592e37b070ddefc98f97113548271a7c39bdd00863e72bc85821e0bf997\"" Feb 9 09:56:41.059102 env[1381]: time="2024-02-09T09:56:41.059072051Z" level=info msg="StartContainer for \"0366d592e37b070ddefc98f97113548271a7c39bdd00863e72bc85821e0bf997\"" Feb 9 09:56:41.073715 systemd[1]: Started cri-containerd-0366d592e37b070ddefc98f97113548271a7c39bdd00863e72bc85821e0bf997.scope. Feb 9 09:56:41.121329 env[1381]: time="2024-02-09T09:56:41.121281063Z" level=info msg="StartContainer for \"0366d592e37b070ddefc98f97113548271a7c39bdd00863e72bc85821e0bf997\" returns successfully" Feb 9 09:56:41.819178 systemd[1]: run-containerd-runc-k8s.io-d10efab6ad82d9ffd667b614029f06e1371dbe033a1d981ada6d314cd2092d74-runc.ScPOZt.mount: Deactivated successfully. Feb 9 09:56:41.867329 kubelet[2442]: I0209 09:56:41.867290 2442 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-qrzsq" podStartSLOduration=24.867246018 podCreationTimestamp="2024-02-09 09:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:41.866695586 +0000 UTC m=+39.232533276" watchObservedRunningTime="2024-02-09 09:56:41.867246018 +0000 UTC m=+39.233083748" Feb 9 09:56:42.080574 systemd-networkd[1531]: veth3cae83fd: Gained IPv6LL Feb 9 09:57:53.718733 systemd[1]: Started sshd@5-10.200.20.13:22-10.200.12.6:46870.service. Feb 9 09:57:54.105304 sshd[3600]: Accepted publickey for core from 10.200.12.6 port 46870 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:57:54.106808 sshd[3600]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:57:54.111340 systemd[1]: Started session-8.scope. Feb 9 09:57:54.111776 systemd-logind[1364]: New session 8 of user core. Feb 9 09:57:54.562678 sshd[3600]: pam_unix(sshd:session): session closed for user core Feb 9 09:57:54.565081 systemd[1]: sshd@5-10.200.20.13:22-10.200.12.6:46870.service: Deactivated successfully. 
Feb 9 09:57:54.565846 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 09:57:54.566416 systemd-logind[1364]: Session 8 logged out. Waiting for processes to exit. Feb 9 09:57:54.567199 systemd-logind[1364]: Removed session 8. Feb 9 09:57:59.626716 systemd[1]: Started sshd@6-10.200.20.13:22-10.200.12.6:58196.service. Feb 9 09:58:00.004188 sshd[3654]: Accepted publickey for core from 10.200.12.6 port 58196 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:00.005785 sshd[3654]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:00.010074 systemd[1]: Started session-9.scope. Feb 9 09:58:00.011332 systemd-logind[1364]: New session 9 of user core. Feb 9 09:58:00.339426 sshd[3654]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:00.342303 systemd[1]: sshd@6-10.200.20.13:22-10.200.12.6:58196.service: Deactivated successfully. Feb 9 09:58:00.343079 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 09:58:00.343620 systemd-logind[1364]: Session 9 logged out. Waiting for processes to exit. Feb 9 09:58:00.344268 systemd-logind[1364]: Removed session 9. Feb 9 09:58:05.409096 systemd[1]: Started sshd@7-10.200.20.13:22-10.200.12.6:58198.service. Feb 9 09:58:05.786100 sshd[3690]: Accepted publickey for core from 10.200.12.6 port 58198 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:05.787345 sshd[3690]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:05.791669 systemd[1]: Started session-10.scope. Feb 9 09:58:05.792143 systemd-logind[1364]: New session 10 of user core. Feb 9 09:58:06.122385 sshd[3690]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:06.125138 systemd[1]: sshd@7-10.200.20.13:22-10.200.12.6:58198.service: Deactivated successfully. Feb 9 09:58:06.125895 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 09:58:06.126954 systemd-logind[1364]: Session 10 logged out. Waiting for processes to exit. Feb 9 09:58:06.127741 systemd-logind[1364]: Removed session 10. Feb 9 09:58:06.186831 systemd[1]: Started sshd@8-10.200.20.13:22-10.200.12.6:58200.service. Feb 9 09:58:06.562873 sshd[3703]: Accepted publickey for core from 10.200.12.6 port 58200 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:06.564408 sshd[3703]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:06.568625 systemd-logind[1364]: New session 11 of user core. Feb 9 09:58:06.569205 systemd[1]: Started session-11.scope. Feb 9 09:58:07.016907 sshd[3703]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:07.019831 systemd[1]: sshd@8-10.200.20.13:22-10.200.12.6:58200.service: Deactivated successfully. Feb 9 09:58:07.020580 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 09:58:07.021230 systemd-logind[1364]: Session 11 logged out. Waiting for processes to exit. Feb 9 09:58:07.022136 systemd-logind[1364]: Removed session 11. Feb 9 09:58:07.082260 systemd[1]: Started sshd@9-10.200.20.13:22-10.200.12.6:39960.service. Feb 9 09:58:07.468256 sshd[3713]: Accepted publickey for core from 10.200.12.6 port 39960 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:07.469526 sshd[3713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:07.473935 systemd[1]: Started session-12.scope. Feb 9 09:58:07.474710 systemd-logind[1364]: New session 12 of user core. 
Feb 9 09:58:07.817108 sshd[3713]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:07.819822 systemd-logind[1364]: Session 12 logged out. Waiting for processes to exit. Feb 9 09:58:07.820004 systemd[1]: sshd@9-10.200.20.13:22-10.200.12.6:39960.service: Deactivated successfully. Feb 9 09:58:07.820745 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 09:58:07.821807 systemd-logind[1364]: Removed session 12. Feb 9 09:58:12.882385 systemd[1]: Started sshd@10-10.200.20.13:22-10.200.12.6:39966.service. Feb 9 09:58:13.267491 sshd[3746]: Accepted publickey for core from 10.200.12.6 port 39966 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:13.268757 sshd[3746]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:13.273115 systemd[1]: Started session-13.scope. Feb 9 09:58:13.273525 systemd-logind[1364]: New session 13 of user core. Feb 9 09:58:13.606350 sshd[3746]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:13.609253 systemd[1]: sshd@10-10.200.20.13:22-10.200.12.6:39966.service: Deactivated successfully. Feb 9 09:58:13.609998 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 09:58:13.610452 systemd-logind[1364]: Session 13 logged out. Waiting for processes to exit. Feb 9 09:58:13.611164 systemd-logind[1364]: Removed session 13. Feb 9 09:58:13.682104 systemd[1]: Started sshd@11-10.200.20.13:22-10.200.12.6:39978.service. Feb 9 09:58:14.059482 sshd[3758]: Accepted publickey for core from 10.200.12.6 port 39978 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:14.059943 sshd[3758]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:14.064224 systemd[1]: Started session-14.scope. Feb 9 09:58:14.064591 systemd-logind[1364]: New session 14 of user core. Feb 9 09:58:14.501011 sshd[3758]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:14.503707 systemd[1]: sshd@11-10.200.20.13:22-10.200.12.6:39978.service: Deactivated successfully. Feb 9 09:58:14.504438 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 09:58:14.505085 systemd-logind[1364]: Session 14 logged out. Waiting for processes to exit. Feb 9 09:58:14.506067 systemd-logind[1364]: Removed session 14. Feb 9 09:58:14.568440 systemd[1]: Started sshd@12-10.200.20.13:22-10.200.12.6:39990.service. Feb 9 09:58:14.950805 sshd[3788]: Accepted publickey for core from 10.200.12.6 port 39990 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:14.952108 sshd[3788]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:14.956730 systemd[1]: Started session-15.scope. Feb 9 09:58:14.957072 systemd-logind[1364]: New session 15 of user core. Feb 9 09:58:16.061016 sshd[3788]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:16.063984 systemd[1]: sshd@12-10.200.20.13:22-10.200.12.6:39990.service: Deactivated successfully. Feb 9 09:58:16.064779 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 09:58:16.065405 systemd-logind[1364]: Session 15 logged out. Waiting for processes to exit. Feb 9 09:58:16.066193 systemd-logind[1364]: Removed session 15. Feb 9 09:58:16.130948 systemd[1]: Started sshd@13-10.200.20.13:22-10.200.12.6:40006.service. 
Feb 9 09:58:16.517011 sshd[3805]: Accepted publickey for core from 10.200.12.6 port 40006 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:16.518578 sshd[3805]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:16.522347 systemd-logind[1364]: New session 16 of user core. Feb 9 09:58:16.522845 systemd[1]: Started session-16.scope. Feb 9 09:58:17.028403 sshd[3805]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:17.031233 systemd[1]: sshd@13-10.200.20.13:22-10.200.12.6:40006.service: Deactivated successfully. Feb 9 09:58:17.032270 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 09:58:17.033015 systemd-logind[1364]: Session 16 logged out. Waiting for processes to exit. Feb 9 09:58:17.033776 systemd-logind[1364]: Removed session 16. Feb 9 09:58:17.093995 systemd[1]: Started sshd@14-10.200.20.13:22-10.200.12.6:41616.service. Feb 9 09:58:17.479528 sshd[3814]: Accepted publickey for core from 10.200.12.6 port 41616 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:17.481064 sshd[3814]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:17.485424 systemd[1]: Started session-17.scope. Feb 9 09:58:17.486394 systemd-logind[1364]: New session 17 of user core. Feb 9 09:58:17.821124 sshd[3814]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:17.824182 systemd-logind[1364]: Session 17 logged out. Waiting for processes to exit. Feb 9 09:58:17.824341 systemd[1]: sshd@14-10.200.20.13:22-10.200.12.6:41616.service: Deactivated successfully. Feb 9 09:58:17.825090 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 09:58:17.825928 systemd-logind[1364]: Removed session 17. Feb 9 09:58:22.885898 systemd[1]: Started sshd@15-10.200.20.13:22-10.200.12.6:41626.service. Feb 9 09:58:23.271673 sshd[3850]: Accepted publickey for core from 10.200.12.6 port 41626 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:23.273262 sshd[3850]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:23.277095 systemd-logind[1364]: New session 18 of user core. Feb 9 09:58:23.277632 systemd[1]: Started session-18.scope. Feb 9 09:58:23.609275 sshd[3850]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:23.611985 systemd-logind[1364]: Session 18 logged out. Waiting for processes to exit. Feb 9 09:58:23.612147 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 09:58:23.612983 systemd[1]: sshd@15-10.200.20.13:22-10.200.12.6:41626.service: Deactivated successfully. Feb 9 09:58:23.614196 systemd-logind[1364]: Removed session 18. Feb 9 09:58:28.674182 systemd[1]: Started sshd@16-10.200.20.13:22-10.200.12.6:36726.service. Feb 9 09:58:29.051640 sshd[3885]: Accepted publickey for core from 10.200.12.6 port 36726 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:29.053239 sshd[3885]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:29.057415 systemd[1]: Started session-19.scope. Feb 9 09:58:29.058536 systemd-logind[1364]: New session 19 of user core. Feb 9 09:58:29.390757 sshd[3885]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:29.393871 systemd-logind[1364]: Session 19 logged out. Waiting for processes to exit. Feb 9 09:58:29.394035 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 09:58:29.394839 systemd[1]: sshd@16-10.200.20.13:22-10.200.12.6:36726.service: Deactivated successfully. 
Feb 9 09:58:29.396020 systemd-logind[1364]: Removed session 19. Feb 9 09:58:34.461043 systemd[1]: Started sshd@17-10.200.20.13:22-10.200.12.6:36736.service. Feb 9 09:58:34.872891 sshd[3924]: Accepted publickey for core from 10.200.12.6 port 36736 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:34.874508 sshd[3924]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:34.878194 systemd-logind[1364]: New session 20 of user core. Feb 9 09:58:34.878719 systemd[1]: Started session-20.scope. Feb 9 09:58:35.242458 sshd[3924]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:35.245032 systemd-logind[1364]: Session 20 logged out. Waiting for processes to exit. Feb 9 09:58:35.245293 systemd[1]: sshd@17-10.200.20.13:22-10.200.12.6:36736.service: Deactivated successfully. Feb 9 09:58:35.246020 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 09:58:35.246879 systemd-logind[1364]: Removed session 20. Feb 9 09:58:40.309320 systemd[1]: Started sshd@18-10.200.20.13:22-10.200.12.6:50618.service. Feb 9 09:58:40.686531 sshd[3972]: Accepted publickey for core from 10.200.12.6 port 50618 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:40.688099 sshd[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:40.692396 systemd[1]: Started session-21.scope. Feb 9 09:58:40.692715 systemd-logind[1364]: New session 21 of user core. Feb 9 09:58:41.030233 sshd[3972]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:41.032735 systemd[1]: sshd@18-10.200.20.13:22-10.200.12.6:50618.service: Deactivated successfully. Feb 9 09:58:41.033483 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 09:58:41.034007 systemd-logind[1364]: Session 21 logged out. Waiting for processes to exit. Feb 9 09:58:41.034685 systemd-logind[1364]: Removed session 21. Feb 9 09:58:56.093700 systemd[1]: cri-containerd-996a1f63cfbaea530a5f4c15f74c4debd52b8281b21295cae817872d1c574f7d.scope: Deactivated successfully. Feb 9 09:58:56.094018 systemd[1]: cri-containerd-996a1f63cfbaea530a5f4c15f74c4debd52b8281b21295cae817872d1c574f7d.scope: Consumed 3.169s CPU time. Feb 9 09:58:56.113783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-996a1f63cfbaea530a5f4c15f74c4debd52b8281b21295cae817872d1c574f7d-rootfs.mount: Deactivated successfully. Feb 9 09:58:56.146970 env[1381]: time="2024-02-09T09:58:56.146922126Z" level=info msg="shim disconnected" id=996a1f63cfbaea530a5f4c15f74c4debd52b8281b21295cae817872d1c574f7d Feb 9 09:58:56.147541 env[1381]: time="2024-02-09T09:58:56.147516419Z" level=warning msg="cleaning up after shim disconnected" id=996a1f63cfbaea530a5f4c15f74c4debd52b8281b21295cae817872d1c574f7d namespace=k8s.io Feb 9 09:58:56.147649 env[1381]: time="2024-02-09T09:58:56.147621741Z" level=info msg="cleaning up dead shim" Feb 9 09:58:56.155080 env[1381]: time="2024-02-09T09:58:56.155042825Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:58:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4059 runtime=io.containerd.runc.v2\n" Feb 9 09:58:56.355527 kubelet[2442]: E0209 09:58:56.355401 2442 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.13:46310->10.200.20.24:2379: read: connection timed out" Feb 9 09:58:56.357593 systemd[1]: cri-containerd-3a4f6ff7332863fd63c7a9f74c1f9cd21f01e6c3a98788928c7d1b9d6b424db5.scope: Deactivated successfully. 
Feb 9 09:58:56.357880 systemd[1]: cri-containerd-3a4f6ff7332863fd63c7a9f74c1f9cd21f01e6c3a98788928c7d1b9d6b424db5.scope: Consumed 1.846s CPU time. Feb 9 09:58:56.378020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a4f6ff7332863fd63c7a9f74c1f9cd21f01e6c3a98788928c7d1b9d6b424db5-rootfs.mount: Deactivated successfully. Feb 9 09:58:56.416617 env[1381]: time="2024-02-09T09:58:56.416525743Z" level=info msg="shim disconnected" id=3a4f6ff7332863fd63c7a9f74c1f9cd21f01e6c3a98788928c7d1b9d6b424db5 Feb 9 09:58:56.416617 env[1381]: time="2024-02-09T09:58:56.416615985Z" level=warning msg="cleaning up after shim disconnected" id=3a4f6ff7332863fd63c7a9f74c1f9cd21f01e6c3a98788928c7d1b9d6b424db5 namespace=k8s.io Feb 9 09:58:56.416833 env[1381]: time="2024-02-09T09:58:56.416629346Z" level=info msg="cleaning up dead shim" Feb 9 09:58:56.424186 env[1381]: time="2024-02-09T09:58:56.424138631Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:58:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4083 runtime=io.containerd.runc.v2\n" Feb 9 09:58:57.084495 kubelet[2442]: I0209 09:58:57.084090 2442 scope.go:115] "RemoveContainer" containerID="996a1f63cfbaea530a5f4c15f74c4debd52b8281b21295cae817872d1c574f7d" Feb 9 09:58:57.086831 env[1381]: time="2024-02-09T09:58:57.086787357Z" level=info msg="CreateContainer within sandbox \"f4f29aa8e9e39697b9d7bd2a5b07417a0225cfb86f9ffa9c9518210e62eee8d9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 9 09:58:57.087526 kubelet[2442]: I0209 09:58:57.087493 2442 scope.go:115] "RemoveContainer" containerID="3a4f6ff7332863fd63c7a9f74c1f9cd21f01e6c3a98788928c7d1b9d6b424db5" Feb 9 09:58:57.089037 env[1381]: time="2024-02-09T09:58:57.089009726Z" level=info msg="CreateContainer within sandbox \"fd6965ce0106a66c2665d674a3bb2bb6cee8cf2ef2dcd6444155fcb11711724f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 9 09:58:57.173570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3144090731.mount: Deactivated successfully. Feb 9 09:58:57.181195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount762332924.mount: Deactivated successfully. Feb 9 09:58:57.235787 env[1381]: time="2024-02-09T09:58:57.235739989Z" level=info msg="CreateContainer within sandbox \"f4f29aa8e9e39697b9d7bd2a5b07417a0225cfb86f9ffa9c9518210e62eee8d9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c76b43bc5b5959e76eff566fb765a37b4de9ab6af73e668eede40d1d9c5664d0\"" Feb 9 09:58:57.236620 env[1381]: time="2024-02-09T09:58:57.236587128Z" level=info msg="StartContainer for \"c76b43bc5b5959e76eff566fb765a37b4de9ab6af73e668eede40d1d9c5664d0\"" Feb 9 09:58:57.248703 env[1381]: time="2024-02-09T09:58:57.248526627Z" level=info msg="CreateContainer within sandbox \"fd6965ce0106a66c2665d674a3bb2bb6cee8cf2ef2dcd6444155fcb11711724f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"46d6c8646244a41ac0dbfbab2535aba4e98d49d4f6ac88a0682b3830a6b5f395\"" Feb 9 09:58:57.249403 env[1381]: time="2024-02-09T09:58:57.249376165Z" level=info msg="StartContainer for \"46d6c8646244a41ac0dbfbab2535aba4e98d49d4f6ac88a0682b3830a6b5f395\"" Feb 9 09:58:57.251375 systemd[1]: Started cri-containerd-c76b43bc5b5959e76eff566fb765a37b4de9ab6af73e668eede40d1d9c5664d0.scope. Feb 9 09:58:57.272422 systemd[1]: Started cri-containerd-46d6c8646244a41ac0dbfbab2535aba4e98d49d4f6ac88a0682b3830a6b5f395.scope. 
Feb 9 09:58:57.309552 env[1381]: time="2024-02-09T09:58:57.309493630Z" level=info msg="StartContainer for \"c76b43bc5b5959e76eff566fb765a37b4de9ab6af73e668eede40d1d9c5664d0\" returns successfully" Feb 9 09:58:57.324840 env[1381]: time="2024-02-09T09:58:57.324781241Z" level=info msg="StartContainer for \"46d6c8646244a41ac0dbfbab2535aba4e98d49d4f6ac88a0682b3830a6b5f395\" returns successfully" Feb 9 09:59:00.897516 kubelet[2442]: E0209 09:59:00.897351 2442 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.2-a-f1c369a1bc.17b22966d8be097a", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.2-a-f1c369a1bc", UID:"144bdbe13ac066c2eca73e0e6b58274b", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-f1c369a1bc"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 50, 426575226, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 50, 426575226, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.13:46068->10.200.20.24:2379: read: connection timed out' (will not retry!) 
Feb 9 09:59:06.356162 kubelet[2442]: E0209 09:59:06.356122 2442 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-f1c369a1bc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 09:59:07.553412 kubelet[2442]: I0209 09:59:07.553384 2442 status_manager.go:809] "Failed to get status for pod" podUID=f5f6f1c9e1df9aae683b42a6dfbbed75 pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f1c369a1bc" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.13:46194->10.200.20.24:2379: read: connection timed out" Feb 9 09:59:16.356949 kubelet[2442]: E0209 09:59:16.356917 2442 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-f1c369a1bc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 09:59:26.358336 kubelet[2442]: E0209 09:59:26.358294 2442 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-f1c369a1bc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 09:59:27.094217 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.094551 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.103268 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.112384 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.121906 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.131394 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#13 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.140216 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#10 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.149203 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.158188 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#14 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.167559 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#15 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.176630 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#16 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.185759 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.195419 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#17 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.204861 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#18 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.214898 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#19 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.225029 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#20 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.247101 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#20 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.247309 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#19 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.256356 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#18 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.266113 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#17 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.275610 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.284921 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#16 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.294616 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#15 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.303814 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#14 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.314009 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.323200 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#10 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.332362 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#13 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.341570 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.350943 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.360120 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.369538 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.379386 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.388411 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#21 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.417616 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#253 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.418020 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#254 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.418212 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#255 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.427711 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#192 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.437843 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#193 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.447824 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#194 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.457891 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#195 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.467970 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.478167 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#197 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.487896 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#198 cmd 0x2a status: 
scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.498041 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#199 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.508403 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#200 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.520080 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#201 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.530360 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#202 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.550131 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#21 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.550341 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.560423 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.570493 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.580452 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.590369 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#13 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.600502 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#10 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.610535 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.621907 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#14 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.630285 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#16 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.640907 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#15 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.653642 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.663854 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.674563 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#17 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.684614 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#18 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.694289 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#19 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.707619 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#20 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.714116 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#253 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.725106 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#254 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.735337 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#255 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.745095 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#192 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.755358 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#194 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.765672 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#193 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.775599 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#195 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.785731 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#197 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.795855 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.806334 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#198 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.817394 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#199 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.827367 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#200 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.838396 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#201 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.848789 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.860312 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#202 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.870024 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#204 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.880379 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#206 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.890200 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#205 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.900442 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#207 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.910653 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#208 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.920249 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#209 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.930915 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#210 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.941412 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.951847 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#211 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.961902 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#213 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.972601 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#214 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.982661 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#215 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:27.993010 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#216 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.008324 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#218 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.013218 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#217 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.023330 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#219 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.033590 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#220 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.043815 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#221 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.069216 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#222 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.069477 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#223 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.079330 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#224 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.089452 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#225 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.100474 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#227 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.111063 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#226 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.121179 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#228 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.131572 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#230 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.141985 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#229 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.152407 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.163260 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#232 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.173986 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#20 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.193881 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#19 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.194105 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#18 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.203373 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#17 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.213389 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.224113 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.234105 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#15 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.244063 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#16 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.254102 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#14 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.263937 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.274403 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#10 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.284602 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#13 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.294403 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.304730 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a 
status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.314719 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.325733 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.335736 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#21 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.346021 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#253 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.356654 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#254 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.367110 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#255 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.377076 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#192 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.387659 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#194 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.397667 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#193 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.407876 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#195 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.418492 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#197 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.438831 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#198 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.439055 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#199 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.449406 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#201 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.459946 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.470137 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#200 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.480653 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.490624 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#202 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.500576 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#206 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.510491 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#204 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.520548 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#205 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.530822 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#207 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.540999 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#208 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.551031 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#209 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.561280 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#210 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.572162 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 
09:59:28.583751 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#211 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.598099 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#213 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.608954 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#214 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.619695 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#215 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.629691 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#216 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.639838 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#218 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.650025 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#217 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.659977 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#219 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.670675 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#220 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.680721 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#221 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.690927 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#223 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.701043 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#222 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.710844 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#224 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.721036 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#227 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.731372 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#225 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.741571 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#226 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.751778 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#230 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.762087 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#228 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.772260 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#229 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.782572 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.792338 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#232 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.802151 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#233 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.812261 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#236 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.822599 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#235 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.833167 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#237 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.843434 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#238 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.853848 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#239 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.864207 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.874895 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.885591 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#241 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.895776 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#242 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.905977 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#244 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.916285 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#243 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.926431 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#245 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 09:59:28.936671 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#246 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001