Dec 13 01:26:08.302646 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 01:26:08.302669 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024
Dec 13 01:26:08.302677 kernel: KASLR enabled
Dec 13 01:26:08.302683 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Dec 13 01:26:08.302690 kernel: printk: bootconsole [pl11] enabled
Dec 13 01:26:08.302695 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:26:08.302703 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Dec 13 01:26:08.302709 kernel: random: crng init done
Dec 13 01:26:08.302714 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:26:08.302720 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Dec 13 01:26:08.302726 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:26:08.302732 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:26:08.302739 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Dec 13 01:26:08.302745 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:26:08.302752 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:26:08.302759 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:26:08.302765 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:26:08.302773 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:26:08.302779 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:26:08.302786 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Dec 13 01:26:08.302792 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:26:08.302808 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Dec 13 01:26:08.302816 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Dec 13 01:26:08.302822 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Dec 13 01:26:08.302829 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Dec 13 01:26:08.302835 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Dec 13 01:26:08.302841 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Dec 13 01:26:08.302847 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Dec 13 01:26:08.302856 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Dec 13 01:26:08.302862 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Dec 13 01:26:08.302869 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Dec 13 01:26:08.302875 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Dec 13 01:26:08.302881 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Dec 13 01:26:08.302887 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Dec 13 01:26:08.302894 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Dec 13 01:26:08.302900 kernel: Zone ranges:
Dec 13 01:26:08.302906 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Dec 13 01:26:08.302912 kernel: DMA32 empty
Dec 13 01:26:08.302918 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Dec 13 01:26:08.302924 kernel: Movable zone start for each node
Dec 13 01:26:08.302934 kernel: Early memory node ranges
Dec 13 01:26:08.302941 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Dec 13 01:26:08.302948 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Dec 13 01:26:08.302954 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Dec 13 01:26:08.302961 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Dec 13 01:26:08.302969 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Dec 13 01:26:08.302975 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Dec 13 01:26:08.302982 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Dec 13 01:26:08.302989 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Dec 13 01:26:08.302996 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Dec 13 01:26:08.303002 kernel: psci: probing for conduit method from ACPI.
Dec 13 01:26:08.303009 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 01:26:08.303015 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 01:26:08.303022 kernel: psci: MIGRATE_INFO_TYPE not supported.
Dec 13 01:26:08.303028 kernel: psci: SMC Calling Convention v1.4
Dec 13 01:26:08.303035 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Dec 13 01:26:08.303042 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Dec 13 01:26:08.303050 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 01:26:08.303056 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 01:26:08.303063 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 01:26:08.303069 kernel: Detected PIPT I-cache on CPU0
Dec 13 01:26:08.303076 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 01:26:08.303082 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 01:26:08.303089 kernel: CPU features: detected: Spectre-BHB
Dec 13 01:26:08.303096 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 01:26:08.303102 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 01:26:08.303109 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 01:26:08.303115 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Dec 13 01:26:08.303123 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 01:26:08.303130 kernel: alternatives: applying boot alternatives
Dec 13 01:26:08.303138 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:26:08.303145 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:26:08.303152 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:26:08.303159 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:26:08.303165 kernel: Fallback order for Node 0: 0
Dec 13 01:26:08.303172 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Dec 13 01:26:08.303179 kernel: Policy zone: Normal
Dec 13 01:26:08.303185 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:26:08.303192 kernel: software IO TLB: area num 2.
Dec 13 01:26:08.303200 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Dec 13 01:26:08.303206 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved)
Dec 13 01:26:08.303213 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:26:08.303220 kernel: trace event string verifier disabled
Dec 13 01:26:08.303227 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:26:08.303234 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:26:08.303241 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:26:08.303248 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:26:08.303255 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:26:08.303261 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:26:08.303268 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:26:08.303276 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 01:26:08.303283 kernel: GICv3: 960 SPIs implemented
Dec 13 01:26:08.303289 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 01:26:08.303296 kernel: Root IRQ handler: gic_handle_irq
Dec 13 01:26:08.303302 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 13 01:26:08.303309 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Dec 13 01:26:08.303315 kernel: ITS: No ITS available, not enabling LPIs
Dec 13 01:26:08.303322 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:26:08.303329 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:26:08.303335 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 01:26:08.303342 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 01:26:08.303349 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 01:26:08.303357 kernel: Console: colour dummy device 80x25
Dec 13 01:26:08.303364 kernel: printk: console [tty1] enabled
Dec 13 01:26:08.303371 kernel: ACPI: Core revision 20230628
Dec 13 01:26:08.303378 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 01:26:08.303385 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:26:08.303392 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:26:08.303399 kernel: landlock: Up and running.
Dec 13 01:26:08.303405 kernel: SELinux: Initializing.
Dec 13 01:26:08.303412 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:26:08.303421 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:26:08.303428 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:26:08.303435 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:26:08.303442 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Dec 13 01:26:08.303448 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0
Dec 13 01:26:08.303455 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Dec 13 01:26:08.303462 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:26:08.303475 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:26:08.303482 kernel: Remapping and enabling EFI services.
Dec 13 01:26:08.303490 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:26:08.303497 kernel: Detected PIPT I-cache on CPU1
Dec 13 01:26:08.303505 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Dec 13 01:26:08.303513 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:26:08.303520 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 01:26:08.303527 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:26:08.303534 kernel: SMP: Total of 2 processors activated.
Dec 13 01:26:08.303541 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 01:26:08.303550 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Dec 13 01:26:08.303557 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 01:26:08.303565 kernel: CPU features: detected: CRC32 instructions
Dec 13 01:26:08.303572 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 01:26:08.303579 kernel: CPU features: detected: LSE atomic instructions
Dec 13 01:26:08.303586 kernel: CPU features: detected: Privileged Access Never
Dec 13 01:26:08.303594 kernel: CPU: All CPU(s) started at EL1
Dec 13 01:26:08.303601 kernel: alternatives: applying system-wide alternatives
Dec 13 01:26:08.303608 kernel: devtmpfs: initialized
Dec 13 01:26:08.303617 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:26:08.303624 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:26:08.303631 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:26:08.303638 kernel: SMBIOS 3.1.0 present.
Dec 13 01:26:08.303645 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Dec 13 01:26:08.303652 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:26:08.303660 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 01:26:08.303667 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 01:26:08.303676 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 01:26:08.303683 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:26:08.303690 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Dec 13 01:26:08.303697 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:26:08.303704 kernel: cpuidle: using governor menu
Dec 13 01:26:08.303711 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 01:26:08.303718 kernel: ASID allocator initialised with 32768 entries
Dec 13 01:26:08.303725 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:26:08.303733 kernel: Serial: AMBA PL011 UART driver
Dec 13 01:26:08.303741 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 01:26:08.303748 kernel: Modules: 0 pages in range for non-PLT usage
Dec 13 01:26:08.303755 kernel: Modules: 509040 pages in range for PLT usage
Dec 13 01:26:08.303763 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:26:08.303770 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:26:08.303777 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 01:26:08.303784 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 01:26:08.303791 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:26:08.303805 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:26:08.303816 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 01:26:08.303824 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 01:26:08.303831 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:26:08.303838 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:26:08.303845 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:26:08.303852 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:26:08.303859 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:26:08.303866 kernel: ACPI: Interpreter enabled
Dec 13 01:26:08.303873 kernel: ACPI: Using GIC for interrupt routing
Dec 13 01:26:08.303880 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 01:26:08.303889 kernel: printk: console [ttyAMA0] enabled
Dec 13 01:26:08.303896 kernel: printk: bootconsole [pl11] disabled
Dec 13 01:26:08.303904 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Dec 13 01:26:08.303911 kernel: iommu: Default domain type: Translated
Dec 13 01:26:08.303918 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 01:26:08.303925 kernel: efivars: Registered efivars operations
Dec 13 01:26:08.303932 kernel: vgaarb: loaded
Dec 13 01:26:08.303939 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 01:26:08.303946 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:26:08.303955 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:26:08.303963 kernel: pnp: PnP ACPI init
Dec 13 01:26:08.303970 kernel: pnp: PnP ACPI: found 0 devices
Dec 13 01:26:08.303977 kernel: NET: Registered PF_INET protocol family
Dec 13 01:26:08.303984 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:26:08.303991 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:26:08.303999 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:26:08.304006 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:26:08.304015 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:26:08.304022 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:26:08.304029 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:26:08.304037 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:26:08.304044 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:26:08.304051 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:26:08.304058 kernel: kvm [1]: HYP mode not available
Dec 13 01:26:08.304065 kernel: Initialise system trusted keyrings
Dec 13 01:26:08.304072 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:26:08.304081 kernel: Key type asymmetric registered
Dec 13 01:26:08.304088 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:26:08.304095 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 01:26:08.304102 kernel: io scheduler mq-deadline registered
Dec 13 01:26:08.304109 kernel: io scheduler kyber registered
Dec 13 01:26:08.304116 kernel: io scheduler bfq registered
Dec 13 01:26:08.304123 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:26:08.304130 kernel: thunder_xcv, ver 1.0
Dec 13 01:26:08.304137 kernel: thunder_bgx, ver 1.0
Dec 13 01:26:08.304144 kernel: nicpf, ver 1.0
Dec 13 01:26:08.304153 kernel: nicvf, ver 1.0
Dec 13 01:26:08.304287 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 01:26:08.304359 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:26:07 UTC (1734053167)
Dec 13 01:26:08.304369 kernel: efifb: probing for efifb
Dec 13 01:26:08.304377 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 13 01:26:08.304385 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 13 01:26:08.304392 kernel: efifb: scrolling: redraw
Dec 13 01:26:08.304401 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 13 01:26:08.304409 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 01:26:08.304416 kernel: fb0: EFI VGA frame buffer device
Dec 13 01:26:08.304423 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Dec 13 01:26:08.304430 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:26:08.304437 kernel: No ACPI PMU IRQ for CPU0
Dec 13 01:26:08.304444 kernel: No ACPI PMU IRQ for CPU1
Dec 13 01:26:08.304451 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Dec 13 01:26:08.304459 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 01:26:08.304468 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 01:26:08.304475 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:26:08.304482 kernel: Segment Routing with IPv6
Dec 13 01:26:08.304490 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:26:08.304497 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:26:08.304504 kernel: Key type dns_resolver registered
Dec 13 01:26:08.304511 kernel: registered taskstats version 1
Dec 13 01:26:08.304518 kernel: Loading compiled-in X.509 certificates
Dec 13 01:26:08.304525 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb'
Dec 13 01:26:08.304532 kernel: Key type .fscrypt registered
Dec 13 01:26:08.304541 kernel: Key type fscrypt-provisioning registered
Dec 13 01:26:08.304548 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:26:08.304556 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:26:08.304563 kernel: ima: No architecture policies found
Dec 13 01:26:08.304570 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 01:26:08.304577 kernel: clk: Disabling unused clocks
Dec 13 01:26:08.304585 kernel: Freeing unused kernel memory: 39360K
Dec 13 01:26:08.304592 kernel: Run /init as init process
Dec 13 01:26:08.304600 kernel: with arguments:
Dec 13 01:26:08.304607 kernel: /init
Dec 13 01:26:08.304614 kernel: with environment:
Dec 13 01:26:08.304621 kernel: HOME=/
Dec 13 01:26:08.304628 kernel: TERM=linux
Dec 13 01:26:08.304635 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:26:08.304645 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:26:08.304654 systemd[1]: Detected virtualization microsoft.
Dec 13 01:26:08.304664 systemd[1]: Detected architecture arm64.
Dec 13 01:26:08.304671 systemd[1]: Running in initrd.
Dec 13 01:26:08.304679 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:26:08.304686 systemd[1]: Hostname set to .
Dec 13 01:26:08.304695 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:26:08.304702 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:26:08.304710 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:26:08.304718 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:26:08.304727 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:26:08.304736 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:26:08.304743 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:26:08.304752 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:26:08.304761 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:26:08.304769 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:26:08.304777 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:26:08.304786 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:26:08.304794 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:26:08.304812 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:26:08.304821 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:26:08.304828 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:26:08.304836 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:26:08.304844 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:26:08.304852 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:26:08.304862 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:26:08.304869 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:26:08.304877 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:26:08.304885 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:26:08.304893 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:26:08.304901 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:26:08.304908 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:26:08.304916 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:26:08.304924 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:26:08.304933 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:26:08.304941 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:26:08.304966 systemd-journald[217]: Collecting audit messages is disabled.
Dec 13 01:26:08.304986 systemd-journald[217]: Journal started
Dec 13 01:26:08.305006 systemd-journald[217]: Runtime Journal (/run/log/journal/4e6ef018db86447a9b67e7513d02b6ca) is 8.0M, max 78.5M, 70.5M free.
Dec 13 01:26:08.322103 systemd-modules-load[218]: Inserted module 'overlay'
Dec 13 01:26:08.332190 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:26:08.347814 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:26:08.347859 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:26:08.357044 kernel: Bridge firewalling registered
Dec 13 01:26:08.357157 systemd-modules-load[218]: Inserted module 'br_netfilter'
Dec 13 01:26:08.364827 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:26:08.380828 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:26:08.387589 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:26:08.399820 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:26:08.407873 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:26:08.428182 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:26:08.435988 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:26:08.459350 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:26:08.484010 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:26:08.491041 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:26:08.506130 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:26:08.520756 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:26:08.532913 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:26:08.560138 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:26:08.568978 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:26:08.592338 dracut-cmdline[249]: dracut-dracut-053
Dec 13 01:26:08.596578 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:26:08.611540 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:26:08.647483 systemd-resolved[251]: Positive Trust Anchors:
Dec 13 01:26:08.647500 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:26:08.647532 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:26:08.650732 systemd-resolved[251]: Defaulting to hostname 'linux'.
Dec 13 01:26:08.653913 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:26:08.661209 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:26:08.716827 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:26:08.796837 kernel: SCSI subsystem initialized
Dec 13 01:26:08.804829 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:26:08.814827 kernel: iscsi: registered transport (tcp)
Dec 13 01:26:08.833062 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:26:08.833137 kernel: QLogic iSCSI HBA Driver
Dec 13 01:26:08.875770 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:26:08.890114 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:26:08.924839 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:26:08.924895 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:26:08.931069 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:26:08.981825 kernel: raid6: neonx8 gen() 15761 MB/s
Dec 13 01:26:09.000814 kernel: raid6: neonx4 gen() 15496 MB/s
Dec 13 01:26:09.020811 kernel: raid6: neonx2 gen() 13258 MB/s
Dec 13 01:26:09.041816 kernel: raid6: neonx1 gen() 10513 MB/s
Dec 13 01:26:09.061810 kernel: raid6: int64x8 gen() 6979 MB/s
Dec 13 01:26:09.081809 kernel: raid6: int64x4 gen() 7338 MB/s
Dec 13 01:26:09.102815 kernel: raid6: int64x2 gen() 6131 MB/s
Dec 13 01:26:09.125821 kernel: raid6: int64x1 gen() 5055 MB/s
Dec 13 01:26:09.125856 kernel: raid6: using algorithm neonx8 gen() 15761 MB/s
Dec 13 01:26:09.150896 kernel: raid6: .... xor() 11938 MB/s, rmw enabled
Dec 13 01:26:09.150921 kernel: raid6: using neon recovery algorithm
Dec 13 01:26:09.162054 kernel: xor: measuring software checksum speed
Dec 13 01:26:09.162068 kernel: 8regs : 19731 MB/sec
Dec 13 01:26:09.165400 kernel: 32regs : 19622 MB/sec
Dec 13 01:26:09.168770 kernel: arm64_neon : 27052 MB/sec
Dec 13 01:26:09.172961 kernel: xor: using function: arm64_neon (27052 MB/sec)
Dec 13 01:26:09.223820 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:26:09.234019 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:26:09.250009 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:26:09.274092 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Dec 13 01:26:09.280021 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:26:09.299928 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:26:09.329989 dracut-pre-trigger[454]: rd.md=0: removing MD RAID activation
Dec 13 01:26:09.362080 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:26:09.376100 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:26:09.416823 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:26:09.444943 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:26:09.469370 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:26:09.483096 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:26:09.497775 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:26:09.511461 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:26:09.532833 kernel: hv_vmbus: Vmbus version:5.3
Dec 13 01:26:09.534042 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:26:09.558282 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:26:09.574782 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:26:09.598576 kernel: hv_vmbus: registering driver hyperv_keyboard
Dec 13 01:26:09.598601 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 01:26:09.598611 kernel: hv_vmbus: registering driver hv_netvsc
Dec 13 01:26:09.598644 kernel: hv_vmbus: registering driver hv_storvsc
Dec 13 01:26:09.598655 kernel: hv_vmbus: registering driver hid_hyperv
Dec 13 01:26:09.574954 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:26:09.635840 kernel: scsi host0: storvsc_host_t
Dec 13 01:26:09.636005 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 01:26:09.636017 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Dec 13 01:26:09.636027 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Dec 13 01:26:09.636135 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Dec 13 01:26:09.636235 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Dec 13 01:26:09.653110 kernel: scsi host1: storvsc_host_t
Dec 13 01:26:09.653173 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Dec 13 01:26:09.661514 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:26:09.676026 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:26:09.682057 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:26:09.696033 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:26:09.716853 kernel: PTP clock support registered
Dec 13 01:26:09.717205 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:26:09.732944 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Dec 13 01:26:09.747952 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:26:09.747971 kernel: hv_netvsc 0022487d-d013-0022-487d-d0130022487d eth0: VF slot 1 added
Dec 13 01:26:09.748091 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Dec 13 01:26:09.748186 kernel: hv_utils: Registering HyperV Utility Driver
Dec 13 01:26:09.754073 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:26:09.773208 kernel: hv_vmbus: registering driver hv_utils
Dec 13 01:26:09.776193 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:26:09.878161 kernel: hv_utils: Heartbeat IC version 3.0 Dec 13 01:26:09.878188 kernel: hv_utils: Shutdown IC version 3.2 Dec 13 01:26:09.878198 kernel: hv_vmbus: registering driver hv_pci Dec 13 01:26:09.878208 kernel: hv_utils: TimeSync IC version 4.0 Dec 13 01:26:09.878224 kernel: hv_pci 4e5a3359-050c-4f6d-af46-bdf11d0cbde5: PCI VMBus probing: Using version 0x10004 Dec 13 01:26:10.031637 kernel: hv_pci 4e5a3359-050c-4f6d-af46-bdf11d0cbde5: PCI host bridge to bus 050c:00 Dec 13 01:26:10.031782 kernel: pci_bus 050c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Dec 13 01:26:10.031884 kernel: pci_bus 050c:00: No busn resource found for root bus, will use [bus 00-ff] Dec 13 01:26:10.031961 kernel: pci 050c:00:02.0: [15b3:1018] type 00 class 0x020000 Dec 13 01:26:10.032057 kernel: pci 050c:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 01:26:10.032140 kernel: pci 050c:00:02.0: enabling Extended Tags Dec 13 01:26:10.032222 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 01:26:10.041443 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 01:26:10.041642 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 01:26:10.041725 kernel: pci 050c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 050c:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Dec 13 01:26:10.041820 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 01:26:10.041903 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 01:26:10.041989 kernel: pci_bus 050c:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 01:26:10.042074 kernel: pci 050c:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 01:26:10.042158 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:26:10.042168 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 01:26:09.864877 systemd-resolved[251]: Clock change detected. Flushing caches. 
Dec 13 01:26:09.947381 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:26:10.097549 kernel: mlx5_core 050c:00:02.0: enabling device (0000 -> 0002) Dec 13 01:26:10.338306 kernel: mlx5_core 050c:00:02.0: firmware version: 16.30.1284 Dec 13 01:26:10.338475 kernel: hv_netvsc 0022487d-d013-0022-487d-d0130022487d eth0: VF registering: eth1 Dec 13 01:26:10.338571 kernel: mlx5_core 050c:00:02.0 eth1: joined to eth0 Dec 13 01:26:10.338665 kernel: mlx5_core 050c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Dec 13 01:26:10.347415 kernel: mlx5_core 050c:00:02.0 enP1292s1: renamed from eth1 Dec 13 01:26:10.507240 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 13 01:26:10.593368 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (499) Dec 13 01:26:10.608048 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 13 01:26:10.642366 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/sda3 scanned by (udev-worker) (502) Dec 13 01:26:10.650527 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Dec 13 01:26:10.668228 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 13 01:26:10.676795 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Dec 13 01:26:10.708643 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:26:10.736411 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:26:10.745370 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:26:11.753097 disk-uuid[603]: The operation has completed successfully. Dec 13 01:26:11.759593 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:26:11.826558 systemd[1]: disk-uuid.service: Deactivated successfully. 
Dec 13 01:26:11.828363 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:26:11.854802 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:26:11.867561 sh[689]: Success Dec 13 01:26:11.897362 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 01:26:12.071388 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:26:12.091408 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:26:12.100811 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:26:12.130404 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881 Dec 13 01:26:12.130455 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:26:12.137206 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:26:12.141985 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:26:12.146307 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:26:12.451050 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:26:12.456486 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:26:12.473613 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:26:12.511804 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:26:12.511863 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:26:12.505158 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Dec 13 01:26:12.524388 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:26:12.545204 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:26:12.553409 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:26:12.567417 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:26:12.575976 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:26:12.592906 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:26:12.616801 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:26:12.636500 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:26:12.663988 systemd-networkd[873]: lo: Link UP Dec 13 01:26:12.667211 systemd-networkd[873]: lo: Gained carrier Dec 13 01:26:12.668850 systemd-networkd[873]: Enumeration completed Dec 13 01:26:12.669158 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:26:12.675911 systemd[1]: Reached target network.target - Network. Dec 13 01:26:12.679520 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:26:12.679523 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:26:12.765352 kernel: mlx5_core 050c:00:02.0 enP1292s1: Link up Dec 13 01:26:12.804418 kernel: hv_netvsc 0022487d-d013-0022-487d-d0130022487d eth0: Data path switched to VF: enP1292s1 Dec 13 01:26:12.804995 systemd-networkd[873]: enP1292s1: Link UP Dec 13 01:26:12.808721 systemd-networkd[873]: eth0: Link UP Dec 13 01:26:12.808826 systemd-networkd[873]: eth0: Gained carrier Dec 13 01:26:12.808835 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 01:26:12.827565 systemd-networkd[873]: enP1292s1: Gained carrier Dec 13 01:26:12.840397 systemd-networkd[873]: eth0: DHCPv4 address 10.200.20.20/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 01:26:13.589387 ignition[853]: Ignition 2.19.0 Dec 13 01:26:13.589397 ignition[853]: Stage: fetch-offline Dec 13 01:26:13.591285 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:26:13.589438 ignition[853]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:13.609589 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 01:26:13.589446 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:26:13.589537 ignition[853]: parsed url from cmdline: "" Dec 13 01:26:13.589540 ignition[853]: no config URL provided Dec 13 01:26:13.589544 ignition[853]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:26:13.589551 ignition[853]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:26:13.589556 ignition[853]: failed to fetch config: resource requires networking Dec 13 01:26:13.589741 ignition[853]: Ignition finished successfully Dec 13 01:26:13.638041 ignition[884]: Ignition 2.19.0 Dec 13 01:26:13.638047 ignition[884]: Stage: fetch Dec 13 01:26:13.638204 ignition[884]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:13.638213 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:26:13.638322 ignition[884]: parsed url from cmdline: "" Dec 13 01:26:13.638325 ignition[884]: no config URL provided Dec 13 01:26:13.638330 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:26:13.638374 ignition[884]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:26:13.638396 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 01:26:13.722107 ignition[884]: GET result: OK Dec 13 01:26:13.722205 ignition[884]: config has been read from IMDS 
userdata Dec 13 01:26:13.722269 ignition[884]: parsing config with SHA512: b6cb456c8b09b7e0783994dff3319c0fcca6157c6bd34306e44ab06c1658bb55201c350d6b654d034d463caea3d689e5f8c3cc226f092e6a4b95a77c5b931247 Dec 13 01:26:13.726771 unknown[884]: fetched base config from "system" Dec 13 01:26:13.727207 ignition[884]: fetch: fetch complete Dec 13 01:26:13.726779 unknown[884]: fetched base config from "system" Dec 13 01:26:13.727211 ignition[884]: fetch: fetch passed Dec 13 01:26:13.726786 unknown[884]: fetched user config from "azure" Dec 13 01:26:13.727255 ignition[884]: Ignition finished successfully Dec 13 01:26:13.731133 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:26:13.755757 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:26:13.779727 ignition[890]: Ignition 2.19.0 Dec 13 01:26:13.779734 ignition[890]: Stage: kargs Dec 13 01:26:13.784397 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:26:13.779945 ignition[890]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:13.779955 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:26:13.780980 ignition[890]: kargs: kargs passed Dec 13 01:26:13.781031 ignition[890]: Ignition finished successfully Dec 13 01:26:13.812488 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:26:13.832859 ignition[896]: Ignition 2.19.0 Dec 13 01:26:13.832870 ignition[896]: Stage: disks Dec 13 01:26:13.837082 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:26:13.833034 ignition[896]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:13.842921 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:26:13.833043 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:26:13.851466 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Dec 13 01:26:13.833961 ignition[896]: disks: disks passed Dec 13 01:26:13.863116 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:26:13.834005 ignition[896]: Ignition finished successfully Dec 13 01:26:13.873193 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:26:13.884348 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:26:13.914638 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:26:13.982385 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Dec 13 01:26:13.995479 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:26:14.012599 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:26:14.068371 kernel: EXT4-fs (sda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none. Dec 13 01:26:14.068738 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:26:14.073607 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:26:14.114431 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:26:14.121479 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:26:14.132550 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 01:26:14.139609 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:26:14.178964 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (915) Dec 13 01:26:14.178987 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:26:14.139644 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Dec 13 01:26:14.201107 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:26:14.201131 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:26:14.147546 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:26:14.205731 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:26:14.226365 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:26:14.228818 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:26:14.263538 systemd-networkd[873]: enP1292s1: Gained IPv6LL Dec 13 01:26:14.519480 systemd-networkd[873]: eth0: Gained IPv6LL Dec 13 01:26:14.720853 coreos-metadata[917]: Dec 13 01:26:14.720 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 01:26:14.731132 coreos-metadata[917]: Dec 13 01:26:14.731 INFO Fetch successful Dec 13 01:26:14.736400 coreos-metadata[917]: Dec 13 01:26:14.731 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 01:26:14.749580 coreos-metadata[917]: Dec 13 01:26:14.749 INFO Fetch successful Dec 13 01:26:14.763038 coreos-metadata[917]: Dec 13 01:26:14.763 INFO wrote hostname ci-4081.2.1-a-dd942dbb76 to /sysroot/etc/hostname Dec 13 01:26:14.772057 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 01:26:14.917493 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:26:14.927195 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:26:14.937360 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:26:14.960102 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:26:15.765707 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:26:15.782557 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Dec 13 01:26:15.793803 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:26:15.812353 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:26:15.808736 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:26:15.835588 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:26:15.849177 ignition[1034]: INFO : Ignition 2.19.0 Dec 13 01:26:15.849177 ignition[1034]: INFO : Stage: mount Dec 13 01:26:15.857934 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:15.857934 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:26:15.857934 ignition[1034]: INFO : mount: mount passed Dec 13 01:26:15.857934 ignition[1034]: INFO : Ignition finished successfully Dec 13 01:26:15.857681 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:26:15.882573 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:26:15.901561 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:26:15.930481 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1044) Dec 13 01:26:15.946321 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:26:15.946356 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:26:15.950924 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:26:15.958362 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:26:15.959261 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:26:15.989369 ignition[1061]: INFO : Ignition 2.19.0 Dec 13 01:26:15.989369 ignition[1061]: INFO : Stage: files Dec 13 01:26:15.989369 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:15.989369 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:26:16.011142 ignition[1061]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:26:16.025014 ignition[1061]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:26:16.025014 ignition[1061]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:26:16.088651 ignition[1061]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:26:16.096568 ignition[1061]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:26:16.096568 ignition[1061]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:26:16.089066 unknown[1061]: wrote ssh authorized keys file for user: core Dec 13 01:26:16.116660 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:26:16.116660 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 01:26:16.198020 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:26:16.377275 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:26:16.377275 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:26:16.398525 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Dec 13 01:26:16.851321 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:26:16.930995 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:26:16.930995 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Dec 13 01:26:17.204371 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 01:26:17.458221 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:26:17.458221 ignition[1061]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 01:26:17.477349 ignition[1061]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:26:17.487675 ignition[1061]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:26:17.487675 ignition[1061]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 13 01:26:17.487675 ignition[1061]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:26:17.487675 ignition[1061]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:26:17.487675 ignition[1061]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" 
Dec 13 01:26:17.487675 ignition[1061]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:26:17.487675 ignition[1061]: INFO : files: files passed Dec 13 01:26:17.487675 ignition[1061]: INFO : Ignition finished successfully Dec 13 01:26:17.489238 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:26:17.527239 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:26:17.542539 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:26:17.562458 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:26:17.601228 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:26:17.601228 initrd-setup-root-after-ignition[1090]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:26:17.562554 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:26:17.637103 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:26:17.602530 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:26:17.616898 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:26:17.652611 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:26:17.689066 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:26:17.690547 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:26:17.701221 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:26:17.713105 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Dec 13 01:26:17.724261 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:26:17.739852 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:26:17.761932 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:26:17.778686 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:26:17.795202 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:26:17.801866 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:26:17.814087 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:26:17.824882 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:26:17.825003 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:26:17.842136 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:26:17.854172 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:26:17.864093 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:26:17.874308 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:26:17.886146 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:26:17.898078 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:26:17.909193 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:26:17.920736 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:26:17.932425 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:26:17.943224 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:26:17.952539 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Dec 13 01:26:17.952707 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:26:17.967604 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:26:17.978862 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:26:17.990856 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:26:17.990969 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:26:18.003464 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:26:18.003643 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:26:18.021111 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:26:18.021298 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:26:18.033317 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:26:18.033494 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:26:18.043862 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 01:26:18.044018 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 01:26:18.077474 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:26:18.104882 ignition[1114]: INFO : Ignition 2.19.0 Dec 13 01:26:18.104882 ignition[1114]: INFO : Stage: umount Dec 13 01:26:18.104882 ignition[1114]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:18.104882 ignition[1114]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:26:18.104882 ignition[1114]: INFO : umount: umount passed Dec 13 01:26:18.104882 ignition[1114]: INFO : Ignition finished successfully Dec 13 01:26:18.099941 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Dec 13 01:26:18.112634 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:26:18.112805 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:26:18.131003 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:26:18.131123 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:26:18.143030 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:26:18.143122 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:26:18.150772 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:26:18.150878 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:26:18.160932 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:26:18.160988 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:26:18.178123 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:26:18.178185 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:26:18.195785 systemd[1]: Stopped target network.target - Network. Dec 13 01:26:18.206325 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:26:18.206401 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:26:18.218511 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:26:18.228811 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:26:18.234539 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:26:18.242292 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:26:18.253977 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:26:18.265165 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:26:18.265221 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Dec 13 01:26:18.275966 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:26:18.276082 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:26:18.286972 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:26:18.287029 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:26:18.302542 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:26:18.302607 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:26:18.312969 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:26:18.324416 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:26:18.342133 systemd-networkd[873]: eth0: DHCPv6 lease lost Dec 13 01:26:18.342637 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:26:18.343285 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:26:18.345368 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:26:18.356580 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:26:18.356714 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:26:18.375853 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:26:18.377377 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:26:18.390137 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:26:18.390209 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:26:18.583138 kernel: hv_netvsc 0022487d-d013-0022-487d-d0130022487d eth0: Data path switched from VF: enP1292s1 Dec 13 01:26:18.413825 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:26:18.424397 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Dec 13 01:26:18.424486 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:26:18.436145 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:26:18.436216 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:26:18.446753 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:26:18.446812 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:26:18.458006 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:26:18.458063 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:26:18.471695 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:26:18.506937 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:26:18.507145 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:26:18.515863 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:26:18.515915 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:26:18.526184 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:26:18.526224 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:26:18.539781 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:26:18.539837 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:26:18.566072 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:26:18.566148 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:26:18.583172 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:26:18.583229 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 01:26:18.620629 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:26:18.635449 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:26:18.635540 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:26:18.650301 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:26:18.650378 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:26:18.662704 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:26:18.662761 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:26:18.675979 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:26:18.676057 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:18.689442 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:26:18.689556 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:26:18.699181 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:26:18.699263 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:26:18.960109 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:26:18.960229 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:26:18.970519 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:26:18.980851 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:26:18.980925 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:26:19.003674 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:26:19.016587 systemd[1]: Switching root. 
Dec 13 01:26:19.063571 systemd-journald[217]: Journal stopped Dec 13 01:26:08.302646 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 13 01:26:08.302669 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024 Dec 13 01:26:08.302677 kernel: KASLR enabled Dec 13 01:26:08.302683 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Dec 13 01:26:08.302690 kernel: printk: bootconsole [pl11] enabled Dec 13 01:26:08.302695 kernel: efi: EFI v2.7 by EDK II Dec 13 01:26:08.302703 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Dec 13 01:26:08.302709 kernel: random: crng init done Dec 13 01:26:08.302714 kernel: ACPI: Early table checksum verification disabled Dec 13 01:26:08.302720 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Dec 13 01:26:08.302726 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:26:08.302732 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:26:08.302739 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Dec 13 01:26:08.302745 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:26:08.302752 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:26:08.302759 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:26:08.302765 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:26:08.302773 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:26:08.302779 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 
VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:26:08.302786 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Dec 13 01:26:08.302792 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:26:08.302808 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Dec 13 01:26:08.302816 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Dec 13 01:26:08.302822 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Dec 13 01:26:08.302829 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Dec 13 01:26:08.302835 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Dec 13 01:26:08.302841 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Dec 13 01:26:08.302847 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Dec 13 01:26:08.302856 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Dec 13 01:26:08.302862 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Dec 13 01:26:08.302869 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Dec 13 01:26:08.302875 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Dec 13 01:26:08.302881 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Dec 13 01:26:08.302887 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Dec 13 01:26:08.302894 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Dec 13 01:26:08.302900 kernel: Zone ranges: Dec 13 01:26:08.302906 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Dec 13 01:26:08.302912 kernel: DMA32 empty Dec 13 01:26:08.302918 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Dec 13 01:26:08.302924 kernel: Movable zone start for each node Dec 13 01:26:08.302934 kernel: Early memory node ranges Dec 13 01:26:08.302941 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Dec 13 01:26:08.302948 kernel: node 0: [mem 
0x0000000000824000-0x000000003e54ffff] Dec 13 01:26:08.302954 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Dec 13 01:26:08.302961 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Dec 13 01:26:08.302969 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Dec 13 01:26:08.302975 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Dec 13 01:26:08.302982 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Dec 13 01:26:08.302989 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Dec 13 01:26:08.302996 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Dec 13 01:26:08.303002 kernel: psci: probing for conduit method from ACPI. Dec 13 01:26:08.303009 kernel: psci: PSCIv1.1 detected in firmware. Dec 13 01:26:08.303015 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 01:26:08.303022 kernel: psci: MIGRATE_INFO_TYPE not supported. Dec 13 01:26:08.303028 kernel: psci: SMC Calling Convention v1.4 Dec 13 01:26:08.303035 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Dec 13 01:26:08.303042 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Dec 13 01:26:08.303050 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 01:26:08.303056 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 01:26:08.303063 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 13 01:26:08.303069 kernel: Detected PIPT I-cache on CPU0 Dec 13 01:26:08.303076 kernel: CPU features: detected: GIC system register CPU interface Dec 13 01:26:08.303082 kernel: CPU features: detected: Hardware dirty bit management Dec 13 01:26:08.303089 kernel: CPU features: detected: Spectre-BHB Dec 13 01:26:08.303096 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 13 01:26:08.303102 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 13 01:26:08.303109 kernel: CPU features: detected: ARM erratum 1418040 Dec 13 01:26:08.303115 kernel: CPU features: detected: ARM erratum 
1542419 (kernel portion) Dec 13 01:26:08.303123 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 13 01:26:08.303130 kernel: alternatives: applying boot alternatives Dec 13 01:26:08.303138 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:26:08.303145 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:26:08.303152 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:26:08.303159 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:26:08.303165 kernel: Fallback order for Node 0: 0 Dec 13 01:26:08.303172 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Dec 13 01:26:08.303179 kernel: Policy zone: Normal Dec 13 01:26:08.303185 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:26:08.303192 kernel: software IO TLB: area num 2. Dec 13 01:26:08.303200 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Dec 13 01:26:08.303206 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved) Dec 13 01:26:08.303213 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:26:08.303220 kernel: trace event string verifier disabled Dec 13 01:26:08.303227 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:26:08.303234 kernel: rcu: RCU event tracing is enabled. 
Dec 13 01:26:08.303241 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:26:08.303248 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:26:08.303255 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:26:08.303261 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:26:08.303268 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:26:08.303276 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 01:26:08.303283 kernel: GICv3: 960 SPIs implemented Dec 13 01:26:08.303289 kernel: GICv3: 0 Extended SPIs implemented Dec 13 01:26:08.303296 kernel: Root IRQ handler: gic_handle_irq Dec 13 01:26:08.303302 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 13 01:26:08.303309 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Dec 13 01:26:08.303315 kernel: ITS: No ITS available, not enabling LPIs Dec 13 01:26:08.303322 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:26:08.303329 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:26:08.303335 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 13 01:26:08.303342 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 13 01:26:08.303349 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 13 01:26:08.303357 kernel: Console: colour dummy device 80x25 Dec 13 01:26:08.303364 kernel: printk: console [tty1] enabled Dec 13 01:26:08.303371 kernel: ACPI: Core revision 20230628 Dec 13 01:26:08.303378 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 13 01:26:08.303385 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:26:08.303392 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:26:08.303399 kernel: landlock: Up and running. 
Dec 13 01:26:08.303405 kernel: SELinux: Initializing. Dec 13 01:26:08.303412 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:26:08.303421 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:26:08.303428 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:26:08.303435 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:26:08.303442 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Dec 13 01:26:08.303448 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0 Dec 13 01:26:08.303455 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 13 01:26:08.303462 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:26:08.303475 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:26:08.303482 kernel: Remapping and enabling EFI services. Dec 13 01:26:08.303490 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:26:08.303497 kernel: Detected PIPT I-cache on CPU1 Dec 13 01:26:08.303505 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Dec 13 01:26:08.303513 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:26:08.303520 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 13 01:26:08.303527 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:26:08.303534 kernel: SMP: Total of 2 processors activated. 
Dec 13 01:26:08.303541 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 01:26:08.303550 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Dec 13 01:26:08.303557 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 13 01:26:08.303565 kernel: CPU features: detected: CRC32 instructions Dec 13 01:26:08.303572 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 13 01:26:08.303579 kernel: CPU features: detected: LSE atomic instructions Dec 13 01:26:08.303586 kernel: CPU features: detected: Privileged Access Never Dec 13 01:26:08.303594 kernel: CPU: All CPU(s) started at EL1 Dec 13 01:26:08.303601 kernel: alternatives: applying system-wide alternatives Dec 13 01:26:08.303608 kernel: devtmpfs: initialized Dec 13 01:26:08.303617 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:26:08.303624 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:26:08.303631 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:26:08.303638 kernel: SMBIOS 3.1.0 present. 
Dec 13 01:26:08.303645 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Dec 13 01:26:08.303652 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:26:08.303660 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 01:26:08.303667 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 01:26:08.303676 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 01:26:08.303683 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:26:08.303690 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Dec 13 01:26:08.303697 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:26:08.303704 kernel: cpuidle: using governor menu Dec 13 01:26:08.303711 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 13 01:26:08.303718 kernel: ASID allocator initialised with 32768 entries Dec 13 01:26:08.303725 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:26:08.303733 kernel: Serial: AMBA PL011 UART driver Dec 13 01:26:08.303741 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 13 01:26:08.303748 kernel: Modules: 0 pages in range for non-PLT usage Dec 13 01:26:08.303755 kernel: Modules: 509040 pages in range for PLT usage Dec 13 01:26:08.303763 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:26:08.303770 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:26:08.303777 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 01:26:08.303784 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 01:26:08.303791 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:26:08.303805 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:26:08.303816 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 01:26:08.303824 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 01:26:08.303831 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:26:08.303838 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:26:08.303845 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:26:08.303852 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:26:08.303859 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:26:08.303866 kernel: ACPI: Interpreter enabled Dec 13 01:26:08.303873 kernel: ACPI: Using GIC for interrupt routing Dec 13 01:26:08.303880 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Dec 13 01:26:08.303889 kernel: printk: console [ttyAMA0] enabled Dec 13 01:26:08.303896 kernel: printk: bootconsole [pl11] disabled Dec 13 01:26:08.303904 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Dec 13 01:26:08.303911 kernel: iommu: Default domain type: Translated Dec 13 01:26:08.303918 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 01:26:08.303925 kernel: efivars: Registered efivars operations Dec 13 01:26:08.303932 kernel: vgaarb: loaded Dec 13 01:26:08.303939 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 01:26:08.303946 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:26:08.303955 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:26:08.303963 kernel: pnp: PnP ACPI init Dec 13 01:26:08.303970 kernel: pnp: PnP ACPI: found 0 devices Dec 13 01:26:08.303977 kernel: NET: Registered PF_INET protocol family Dec 13 01:26:08.303984 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:26:08.303991 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:26:08.303999 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 
01:26:08.304006 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:26:08.304015 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 01:26:08.304022 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:26:08.304029 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:26:08.304037 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:26:08.304044 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:26:08.304051 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:26:08.304058 kernel: kvm [1]: HYP mode not available Dec 13 01:26:08.304065 kernel: Initialise system trusted keyrings Dec 13 01:26:08.304072 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:26:08.304081 kernel: Key type asymmetric registered Dec 13 01:26:08.304088 kernel: Asymmetric key parser 'x509' registered Dec 13 01:26:08.304095 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 01:26:08.304102 kernel: io scheduler mq-deadline registered Dec 13 01:26:08.304109 kernel: io scheduler kyber registered Dec 13 01:26:08.304116 kernel: io scheduler bfq registered Dec 13 01:26:08.304123 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:26:08.304130 kernel: thunder_xcv, ver 1.0 Dec 13 01:26:08.304137 kernel: thunder_bgx, ver 1.0 Dec 13 01:26:08.304144 kernel: nicpf, ver 1.0 Dec 13 01:26:08.304153 kernel: nicvf, ver 1.0 Dec 13 01:26:08.304287 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 01:26:08.304359 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:26:07 UTC (1734053167) Dec 13 01:26:08.304369 kernel: efifb: probing for efifb Dec 13 01:26:08.304377 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 13 01:26:08.304385 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 13 01:26:08.304392 kernel: efifb: scrolling: 
redraw Dec 13 01:26:08.304401 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 01:26:08.304409 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 01:26:08.304416 kernel: fb0: EFI VGA frame buffer device Dec 13 01:26:08.304423 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Dec 13 01:26:08.304430 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:26:08.304437 kernel: No ACPI PMU IRQ for CPU0 Dec 13 01:26:08.304444 kernel: No ACPI PMU IRQ for CPU1 Dec 13 01:26:08.304451 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Dec 13 01:26:08.304459 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 01:26:08.304468 kernel: watchdog: Hard watchdog permanently disabled Dec 13 01:26:08.304475 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:26:08.304482 kernel: Segment Routing with IPv6 Dec 13 01:26:08.304490 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:26:08.304497 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:26:08.304504 kernel: Key type dns_resolver registered Dec 13 01:26:08.304511 kernel: registered taskstats version 1 Dec 13 01:26:08.304518 kernel: Loading compiled-in X.509 certificates Dec 13 01:26:08.304525 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb' Dec 13 01:26:08.304532 kernel: Key type .fscrypt registered Dec 13 01:26:08.304541 kernel: Key type fscrypt-provisioning registered Dec 13 01:26:08.304548 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 01:26:08.304556 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:26:08.304563 kernel: ima: No architecture policies found Dec 13 01:26:08.304570 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 01:26:08.304577 kernel: clk: Disabling unused clocks Dec 13 01:26:08.304585 kernel: Freeing unused kernel memory: 39360K Dec 13 01:26:08.304592 kernel: Run /init as init process Dec 13 01:26:08.304600 kernel: with arguments: Dec 13 01:26:08.304607 kernel: /init Dec 13 01:26:08.304614 kernel: with environment: Dec 13 01:26:08.304621 kernel: HOME=/ Dec 13 01:26:08.304628 kernel: TERM=linux Dec 13 01:26:08.304635 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:26:08.304645 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:26:08.304654 systemd[1]: Detected virtualization microsoft. Dec 13 01:26:08.304664 systemd[1]: Detected architecture arm64. Dec 13 01:26:08.304671 systemd[1]: Running in initrd. Dec 13 01:26:08.304679 systemd[1]: No hostname configured, using default hostname. Dec 13 01:26:08.304686 systemd[1]: Hostname set to . Dec 13 01:26:08.304695 systemd[1]: Initializing machine ID from random generator. Dec 13 01:26:08.304702 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:26:08.304710 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:26:08.304718 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:26:08.304727 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Dec 13 01:26:08.304736 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:26:08.304743 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:26:08.304752 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:26:08.304761 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:26:08.304769 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:26:08.304777 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:26:08.304786 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:26:08.304794 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:26:08.304812 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:26:08.304821 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:26:08.304828 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:26:08.304836 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:26:08.304844 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:26:08.304852 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:26:08.304862 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:26:08.304869 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:26:08.304877 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:26:08.304885 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:26:08.304893 systemd[1]: Reached target sockets.target - Socket Units. 
Dec 13 01:26:08.304901 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:26:08.304908 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:26:08.304916 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:26:08.304924 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:26:08.304933 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:26:08.304941 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:26:08.304966 systemd-journald[217]: Collecting audit messages is disabled. Dec 13 01:26:08.304986 systemd-journald[217]: Journal started Dec 13 01:26:08.305006 systemd-journald[217]: Runtime Journal (/run/log/journal/4e6ef018db86447a9b67e7513d02b6ca) is 8.0M, max 78.5M, 70.5M free. Dec 13 01:26:08.322103 systemd-modules-load[218]: Inserted module 'overlay' Dec 13 01:26:08.332190 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:26:08.347814 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:26:08.347859 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:26:08.357044 kernel: Bridge firewalling registered Dec 13 01:26:08.357157 systemd-modules-load[218]: Inserted module 'br_netfilter' Dec 13 01:26:08.364827 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:26:08.380828 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:26:08.387589 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:26:08.399820 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:26:08.407873 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 13 01:26:08.428182 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:26:08.435988 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:26:08.459350 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:26:08.484010 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:26:08.491041 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:26:08.506130 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:26:08.520756 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:26:08.532913 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:26:08.560138 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:26:08.568978 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:26:08.592338 dracut-cmdline[249]: dracut-dracut-053
Dec 13 01:26:08.596578 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:26:08.611540 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:26:08.647483 systemd-resolved[251]: Positive Trust Anchors:
Dec 13 01:26:08.647500 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:26:08.647532 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:26:08.650732 systemd-resolved[251]: Defaulting to hostname 'linux'.
Dec 13 01:26:08.653913 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:26:08.661209 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:26:08.716827 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:26:08.796837 kernel: SCSI subsystem initialized
Dec 13 01:26:08.804829 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:26:08.814827 kernel: iscsi: registered transport (tcp)
Dec 13 01:26:08.833062 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:26:08.833137 kernel: QLogic iSCSI HBA Driver
Dec 13 01:26:08.875770 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:26:08.890114 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:26:08.924839 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:26:08.924895 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:26:08.931069 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:26:08.981825 kernel: raid6: neonx8 gen() 15761 MB/s
Dec 13 01:26:09.000814 kernel: raid6: neonx4 gen() 15496 MB/s
Dec 13 01:26:09.020811 kernel: raid6: neonx2 gen() 13258 MB/s
Dec 13 01:26:09.041816 kernel: raid6: neonx1 gen() 10513 MB/s
Dec 13 01:26:09.061810 kernel: raid6: int64x8 gen() 6979 MB/s
Dec 13 01:26:09.081809 kernel: raid6: int64x4 gen() 7338 MB/s
Dec 13 01:26:09.102815 kernel: raid6: int64x2 gen() 6131 MB/s
Dec 13 01:26:09.125821 kernel: raid6: int64x1 gen() 5055 MB/s
Dec 13 01:26:09.125856 kernel: raid6: using algorithm neonx8 gen() 15761 MB/s
Dec 13 01:26:09.150896 kernel: raid6: .... xor() 11938 MB/s, rmw enabled
Dec 13 01:26:09.150921 kernel: raid6: using neon recovery algorithm
Dec 13 01:26:09.162054 kernel: xor: measuring software checksum speed
Dec 13 01:26:09.162068 kernel: 8regs : 19731 MB/sec
Dec 13 01:26:09.165400 kernel: 32regs : 19622 MB/sec
Dec 13 01:26:09.168770 kernel: arm64_neon : 27052 MB/sec
Dec 13 01:26:09.172961 kernel: xor: using function: arm64_neon (27052 MB/sec)
Dec 13 01:26:09.223820 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:26:09.234019 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:26:09.250009 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:26:09.274092 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Dec 13 01:26:09.280021 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:26:09.299928 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:26:09.329989 dracut-pre-trigger[454]: rd.md=0: removing MD RAID activation
Dec 13 01:26:09.362080 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:26:09.376100 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:26:09.416823 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:26:09.444943 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:26:09.469370 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:26:09.483096 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:26:09.497775 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:26:09.511461 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:26:09.532833 kernel: hv_vmbus: Vmbus version:5.3
Dec 13 01:26:09.534042 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:26:09.558282 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:26:09.574782 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:26:09.598576 kernel: hv_vmbus: registering driver hyperv_keyboard
Dec 13 01:26:09.598601 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 01:26:09.598611 kernel: hv_vmbus: registering driver hv_netvsc
Dec 13 01:26:09.598644 kernel: hv_vmbus: registering driver hv_storvsc
Dec 13 01:26:09.598655 kernel: hv_vmbus: registering driver hid_hyperv
Dec 13 01:26:09.574954 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:26:09.635840 kernel: scsi host0: storvsc_host_t
Dec 13 01:26:09.636005 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 01:26:09.636017 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Dec 13 01:26:09.636027 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Dec 13 01:26:09.636135 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Dec 13 01:26:09.636235 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Dec 13 01:26:09.653110 kernel: scsi host1: storvsc_host_t
Dec 13 01:26:09.653173 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Dec 13 01:26:09.661514 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:26:09.676026 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:26:09.682057 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:26:09.696033 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:26:09.716853 kernel: PTP clock support registered
Dec 13 01:26:09.717205 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:26:09.732944 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Dec 13 01:26:09.747952 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:26:09.747971 kernel: hv_netvsc 0022487d-d013-0022-487d-d0130022487d eth0: VF slot 1 added
Dec 13 01:26:09.748091 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Dec 13 01:26:09.748186 kernel: hv_utils: Registering HyperV Utility Driver
Dec 13 01:26:09.754073 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:26:09.773208 kernel: hv_vmbus: registering driver hv_utils
Dec 13 01:26:09.776193 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:26:09.878161 kernel: hv_utils: Heartbeat IC version 3.0
Dec 13 01:26:09.878188 kernel: hv_utils: Shutdown IC version 3.2
Dec 13 01:26:09.878198 kernel: hv_vmbus: registering driver hv_pci
Dec 13 01:26:09.878208 kernel: hv_utils: TimeSync IC version 4.0
Dec 13 01:26:09.878224 kernel: hv_pci 4e5a3359-050c-4f6d-af46-bdf11d0cbde5: PCI VMBus probing: Using version 0x10004
Dec 13 01:26:10.031637 kernel: hv_pci 4e5a3359-050c-4f6d-af46-bdf11d0cbde5: PCI host bridge to bus 050c:00
Dec 13 01:26:10.031782 kernel: pci_bus 050c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Dec 13 01:26:10.031884 kernel: pci_bus 050c:00: No busn resource found for root bus, will use [bus 00-ff]
Dec 13 01:26:10.031961 kernel: pci 050c:00:02.0: [15b3:1018] type 00 class 0x020000
Dec 13 01:26:10.032057 kernel: pci 050c:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Dec 13 01:26:10.032140 kernel: pci 050c:00:02.0: enabling Extended Tags
Dec 13 01:26:10.032222 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Dec 13 01:26:10.041443 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Dec 13 01:26:10.041642 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 13 01:26:10.041725 kernel: pci 050c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 050c:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Dec 13 01:26:10.041820 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Dec 13 01:26:10.041903 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Dec 13 01:26:10.041989 kernel: pci_bus 050c:00: busn_res: [bus 00-ff] end is updated to 00
Dec 13 01:26:10.042074 kernel: pci 050c:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Dec 13 01:26:10.042158 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:26:10.042168 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 13 01:26:09.864877 systemd-resolved[251]: Clock change detected. Flushing caches.
Dec 13 01:26:09.947381 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:26:10.097549 kernel: mlx5_core 050c:00:02.0: enabling device (0000 -> 0002)
Dec 13 01:26:10.338306 kernel: mlx5_core 050c:00:02.0: firmware version: 16.30.1284
Dec 13 01:26:10.338475 kernel: hv_netvsc 0022487d-d013-0022-487d-d0130022487d eth0: VF registering: eth1
Dec 13 01:26:10.338571 kernel: mlx5_core 050c:00:02.0 eth1: joined to eth0
Dec 13 01:26:10.338665 kernel: mlx5_core 050c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Dec 13 01:26:10.347415 kernel: mlx5_core 050c:00:02.0 enP1292s1: renamed from eth1
Dec 13 01:26:10.507240 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Dec 13 01:26:10.593368 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (499)
Dec 13 01:26:10.608048 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Dec 13 01:26:10.642366 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/sda3 scanned by (udev-worker) (502)
Dec 13 01:26:10.650527 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Dec 13 01:26:10.668228 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Dec 13 01:26:10.676795 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Dec 13 01:26:10.708643 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:26:10.736411 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:26:10.745370 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:26:11.753097 disk-uuid[603]: The operation has completed successfully.
Dec 13 01:26:11.759593 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:26:11.826558 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:26:11.828363 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:26:11.854802 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:26:11.867561 sh[689]: Success
Dec 13 01:26:11.897362 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 01:26:12.071388 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:26:12.091408 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:26:12.100811 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:26:12.130404 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881
Dec 13 01:26:12.130455 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:26:12.137206 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:26:12.141985 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:26:12.146307 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:26:12.451050 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:26:12.456486 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:26:12.473613 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:26:12.511804 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:26:12.511863 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:26:12.505158 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:26:12.524388 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:26:12.545204 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:26:12.553409 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:26:12.567417 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:26:12.575976 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:26:12.592906 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:26:12.616801 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:26:12.636500 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:26:12.663988 systemd-networkd[873]: lo: Link UP
Dec 13 01:26:12.667211 systemd-networkd[873]: lo: Gained carrier
Dec 13 01:26:12.668850 systemd-networkd[873]: Enumeration completed
Dec 13 01:26:12.669158 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:26:12.675911 systemd[1]: Reached target network.target - Network.
Dec 13 01:26:12.679520 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:26:12.679523 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:26:12.765352 kernel: mlx5_core 050c:00:02.0 enP1292s1: Link up
Dec 13 01:26:12.804418 kernel: hv_netvsc 0022487d-d013-0022-487d-d0130022487d eth0: Data path switched to VF: enP1292s1
Dec 13 01:26:12.804995 systemd-networkd[873]: enP1292s1: Link UP
Dec 13 01:26:12.808721 systemd-networkd[873]: eth0: Link UP
Dec 13 01:26:12.808826 systemd-networkd[873]: eth0: Gained carrier
Dec 13 01:26:12.808835 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:26:12.827565 systemd-networkd[873]: enP1292s1: Gained carrier
Dec 13 01:26:12.840397 systemd-networkd[873]: eth0: DHCPv4 address 10.200.20.20/24, gateway 10.200.20.1 acquired from 168.63.129.16
Dec 13 01:26:13.589387 ignition[853]: Ignition 2.19.0
Dec 13 01:26:13.589397 ignition[853]: Stage: fetch-offline
Dec 13 01:26:13.591285 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:26:13.589438 ignition[853]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:26:13.609589 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 01:26:13.589446 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:26:13.589537 ignition[853]: parsed url from cmdline: ""
Dec 13 01:26:13.589540 ignition[853]: no config URL provided
Dec 13 01:26:13.589544 ignition[853]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:26:13.589551 ignition[853]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:26:13.589556 ignition[853]: failed to fetch config: resource requires networking
Dec 13 01:26:13.589741 ignition[853]: Ignition finished successfully
Dec 13 01:26:13.638041 ignition[884]: Ignition 2.19.0
Dec 13 01:26:13.638047 ignition[884]: Stage: fetch
Dec 13 01:26:13.638204 ignition[884]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:26:13.638213 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:26:13.638322 ignition[884]: parsed url from cmdline: ""
Dec 13 01:26:13.638325 ignition[884]: no config URL provided
Dec 13 01:26:13.638330 ignition[884]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:26:13.638374 ignition[884]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:26:13.638396 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Dec 13 01:26:13.722107 ignition[884]: GET result: OK
Dec 13 01:26:13.722205 ignition[884]: config has been read from IMDS userdata
Dec 13 01:26:13.722269 ignition[884]: parsing config with SHA512: b6cb456c8b09b7e0783994dff3319c0fcca6157c6bd34306e44ab06c1658bb55201c350d6b654d034d463caea3d689e5f8c3cc226f092e6a4b95a77c5b931247
Dec 13 01:26:13.726771 unknown[884]: fetched base config from "system"
Dec 13 01:26:13.727207 ignition[884]: fetch: fetch complete
Dec 13 01:26:13.726779 unknown[884]: fetched base config from "system"
Dec 13 01:26:13.727211 ignition[884]: fetch: fetch passed
Dec 13 01:26:13.726786 unknown[884]: fetched user config from "azure"
Dec 13 01:26:13.727255 ignition[884]: Ignition finished successfully
Dec 13 01:26:13.731133 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:26:13.755757 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:26:13.779727 ignition[890]: Ignition 2.19.0
Dec 13 01:26:13.779734 ignition[890]: Stage: kargs
Dec 13 01:26:13.784397 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:26:13.779945 ignition[890]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:26:13.779955 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:26:13.780980 ignition[890]: kargs: kargs passed
Dec 13 01:26:13.781031 ignition[890]: Ignition finished successfully
Dec 13 01:26:13.812488 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:26:13.832859 ignition[896]: Ignition 2.19.0
Dec 13 01:26:13.832870 ignition[896]: Stage: disks
Dec 13 01:26:13.837082 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:26:13.833034 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:26:13.842921 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:26:13.833043 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:26:13.851466 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:26:13.833961 ignition[896]: disks: disks passed
Dec 13 01:26:13.863116 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:26:13.834005 ignition[896]: Ignition finished successfully
Dec 13 01:26:13.873193 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:26:13.884348 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:26:13.914638 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:26:13.982385 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Dec 13 01:26:13.995479 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:26:14.012599 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:26:14.068371 kernel: EXT4-fs (sda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none.
Dec 13 01:26:14.068738 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:26:14.073607 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:26:14.114431 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:26:14.121479 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:26:14.132550 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 01:26:14.139609 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:26:14.178964 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (915)
Dec 13 01:26:14.178987 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:26:14.139644 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:26:14.201107 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:26:14.201131 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:26:14.147546 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:26:14.205731 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:26:14.226365 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:26:14.228818 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:26:14.263538 systemd-networkd[873]: enP1292s1: Gained IPv6LL
Dec 13 01:26:14.519480 systemd-networkd[873]: eth0: Gained IPv6LL
Dec 13 01:26:14.720853 coreos-metadata[917]: Dec 13 01:26:14.720 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 13 01:26:14.731132 coreos-metadata[917]: Dec 13 01:26:14.731 INFO Fetch successful
Dec 13 01:26:14.736400 coreos-metadata[917]: Dec 13 01:26:14.731 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Dec 13 01:26:14.749580 coreos-metadata[917]: Dec 13 01:26:14.749 INFO Fetch successful
Dec 13 01:26:14.763038 coreos-metadata[917]: Dec 13 01:26:14.763 INFO wrote hostname ci-4081.2.1-a-dd942dbb76 to /sysroot/etc/hostname
Dec 13 01:26:14.772057 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 01:26:14.917493 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:26:14.927195 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:26:14.937360 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:26:14.960102 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:26:15.765707 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:26:15.782557 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:26:15.793803 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:26:15.812353 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:26:15.808736 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:26:15.835588 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:26:15.849177 ignition[1034]: INFO : Ignition 2.19.0
Dec 13 01:26:15.849177 ignition[1034]: INFO : Stage: mount
Dec 13 01:26:15.857934 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:26:15.857934 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:26:15.857934 ignition[1034]: INFO : mount: mount passed
Dec 13 01:26:15.857934 ignition[1034]: INFO : Ignition finished successfully
Dec 13 01:26:15.857681 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:26:15.882573 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:26:15.901561 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:26:15.930481 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1044)
Dec 13 01:26:15.946321 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:26:15.946356 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:26:15.950924 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:26:15.958362 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:26:15.959261 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:26:15.989369 ignition[1061]: INFO : Ignition 2.19.0
Dec 13 01:26:15.989369 ignition[1061]: INFO : Stage: files
Dec 13 01:26:15.989369 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:26:15.989369 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:26:16.011142 ignition[1061]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:26:16.025014 ignition[1061]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:26:16.025014 ignition[1061]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:26:16.088651 ignition[1061]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:26:16.096568 ignition[1061]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:26:16.096568 ignition[1061]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:26:16.089066 unknown[1061]: wrote ssh authorized keys file for user: core
Dec 13 01:26:16.116660 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:26:16.116660 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 01:26:16.198020 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:26:16.377275 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:26:16.377275 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:26:16.398525 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Dec 13 01:26:16.851321 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 01:26:16.930995 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:26:16.930995 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:26:16.950574 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Dec 13 01:26:17.204371 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 01:26:17.458221 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:26:17.458221 ignition[1061]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 01:26:17.477349 ignition[1061]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:26:17.487675 ignition[1061]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:26:17.487675 ignition[1061]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 01:26:17.487675 ignition[1061]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:26:17.487675 ignition[1061]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:26:17.487675 ignition[1061]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:26:17.487675 ignition[1061]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:26:17.487675 ignition[1061]: INFO : files: files passed
Dec 13 01:26:17.487675 ignition[1061]: INFO : Ignition finished successfully
Dec 13 01:26:17.489238 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:26:17.527239 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:26:17.542539 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:26:17.562458 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:26:17.601228 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:26:17.601228 initrd-setup-root-after-ignition[1090]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:26:17.562554 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:26:17.637103 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:26:17.602530 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:26:17.616898 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:26:17.652611 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:26:17.689066 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:26:17.690547 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:26:17.701221 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:26:17.713105 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:26:17.724261 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:26:17.739852 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:26:17.761932 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:26:17.778686 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:26:17.795202 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:26:17.801866 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:26:17.814087 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:26:17.824882 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:26:17.825003 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:26:17.842136 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:26:17.854172 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:26:17.864093 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:26:17.874308 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:26:17.886146 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:26:17.898078 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:26:17.909193 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:26:17.920736 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:26:17.932425 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:26:17.943224 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:26:17.952539 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Dec 13 01:26:17.952707 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:26:17.967604 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:26:17.978862 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:26:17.990856 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:26:17.990969 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:26:18.003464 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:26:18.003643 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:26:18.021111 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:26:18.021298 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:26:18.033317 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:26:18.033494 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:26:18.043862 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 01:26:18.044018 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 01:26:18.077474 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:26:18.104882 ignition[1114]: INFO : Ignition 2.19.0 Dec 13 01:26:18.104882 ignition[1114]: INFO : Stage: umount Dec 13 01:26:18.104882 ignition[1114]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:18.104882 ignition[1114]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:26:18.104882 ignition[1114]: INFO : umount: umount passed Dec 13 01:26:18.104882 ignition[1114]: INFO : Ignition finished successfully Dec 13 01:26:18.099941 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Dec 13 01:26:18.112634 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:26:18.112805 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:26:18.131003 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:26:18.131123 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:26:18.143030 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:26:18.143122 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:26:18.150772 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:26:18.150878 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:26:18.160932 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:26:18.160988 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:26:18.178123 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:26:18.178185 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:26:18.195785 systemd[1]: Stopped target network.target - Network. Dec 13 01:26:18.206325 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:26:18.206401 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:26:18.218511 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:26:18.228811 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:26:18.234539 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:26:18.242292 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:26:18.253977 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:26:18.265165 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:26:18.265221 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Dec 13 01:26:18.275966 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:26:18.276082 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:26:18.286972 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:26:18.287029 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:26:18.302542 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:26:18.302607 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:26:18.312969 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:26:18.324416 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:26:18.342133 systemd-networkd[873]: eth0: DHCPv6 lease lost Dec 13 01:26:18.342637 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:26:18.343285 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:26:18.345368 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:26:18.356580 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:26:18.356714 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:26:18.375853 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:26:18.377377 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:26:18.390137 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:26:18.390209 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:26:18.583138 kernel: hv_netvsc 0022487d-d013-0022-487d-d0130022487d eth0: Data path switched from VF: enP1292s1 Dec 13 01:26:18.413825 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:26:18.424397 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Dec 13 01:26:18.424486 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:26:18.436145 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:26:18.436216 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:26:18.446753 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:26:18.446812 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:26:18.458006 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:26:18.458063 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:26:18.471695 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:26:18.506937 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:26:18.507145 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:26:18.515863 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:26:18.515915 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:26:18.526184 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:26:18.526224 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:26:18.539781 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:26:18.539837 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:26:18.566072 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:26:18.566148 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:26:18.583172 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:26:18.583229 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 01:26:18.620629 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:26:18.635449 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:26:18.635540 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:26:18.650301 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:26:18.650378 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:26:18.662704 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:26:18.662761 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:26:18.675979 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:26:18.676057 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:18.689442 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:26:18.689556 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:26:18.699181 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:26:18.699263 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:26:18.960109 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:26:18.960229 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:26:18.970519 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:26:18.980851 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:26:18.980925 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:26:19.003674 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:26:19.016587 systemd[1]: Switching root. 
Dec 13 01:26:19.063571 systemd-journald[217]: Journal stopped Dec 13 01:26:23.873506 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Dec 13 01:26:23.873530 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:26:23.873543 kernel: SELinux: policy capability open_perms=1 Dec 13 01:26:23.873554 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:26:23.873561 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:26:23.873569 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:26:23.873578 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:26:23.873586 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:26:23.873594 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:26:23.873602 kernel: audit: type=1403 audit(1734053179.919:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:26:23.873612 systemd[1]: Successfully loaded SELinux policy in 118.378ms. Dec 13 01:26:23.873621 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.011ms. Dec 13 01:26:23.873631 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:26:23.873641 systemd[1]: Detected virtualization microsoft. Dec 13 01:26:23.873651 systemd[1]: Detected architecture arm64. Dec 13 01:26:23.873661 systemd[1]: Detected first boot. Dec 13 01:26:23.873671 systemd[1]: Hostname set to . Dec 13 01:26:23.873680 systemd[1]: Initializing machine ID from random generator. Dec 13 01:26:23.873689 zram_generator::config[1157]: No configuration found. Dec 13 01:26:23.873699 systemd[1]: Populated /etc with preset unit settings. 
Dec 13 01:26:23.873708 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:26:23.873718 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:26:23.873728 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:26:23.873738 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:26:23.873748 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:26:23.873757 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:26:23.873766 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:26:23.873776 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:26:23.873786 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:26:23.873796 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:26:23.873805 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:26:23.873814 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:26:23.873824 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:26:23.873833 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:26:23.873842 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:26:23.873852 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:26:23.873861 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:26:23.873872 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Dec 13 01:26:23.873881 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:26:23.873891 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:26:23.873902 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:26:23.873912 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:26:23.873921 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:26:23.873931 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:26:23.873942 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:26:23.873952 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:26:23.873961 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:26:23.873970 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:26:23.873980 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:26:23.873990 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:26:23.873999 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:26:23.874010 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:26:23.874020 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:26:23.874030 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:26:23.874040 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:26:23.874050 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:26:23.874059 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:26:23.874070 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Dec 13 01:26:23.874080 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:26:23.874090 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:26:23.874100 systemd[1]: Reached target machines.target - Containers. Dec 13 01:26:23.874109 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:26:23.874119 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:26:23.874129 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:26:23.874138 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:26:23.874150 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:26:23.874160 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:26:23.874169 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:26:23.874179 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:26:23.874189 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:26:23.874199 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:26:23.874208 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:26:23.874218 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:26:23.874227 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:26:23.874239 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:26:23.874248 systemd[1]: Starting systemd-journald.service - Journal Service... 
Dec 13 01:26:23.874257 kernel: loop: module loaded Dec 13 01:26:23.874266 kernel: ACPI: bus type drm_connector registered Dec 13 01:26:23.874275 kernel: fuse: init (API version 7.39) Dec 13 01:26:23.874284 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:26:23.874293 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:26:23.874319 systemd-journald[1260]: Collecting audit messages is disabled. Dec 13 01:26:23.874349 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:26:23.874360 systemd-journald[1260]: Journal started Dec 13 01:26:23.874380 systemd-journald[1260]: Runtime Journal (/run/log/journal/a1697a658f5c4c3ea98443713ddf7c50) is 8.0M, max 78.5M, 70.5M free. Dec 13 01:26:22.874835 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:26:23.011213 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 13 01:26:23.011572 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:26:23.011875 systemd[1]: systemd-journald.service: Consumed 3.144s CPU time. Dec 13 01:26:23.895110 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:26:23.910016 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:26:23.910087 systemd[1]: Stopped verity-setup.service. Dec 13 01:26:23.928402 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:26:23.929201 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:26:23.934960 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:26:23.941057 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:26:23.946296 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:26:23.952522 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Dec 13 01:26:23.959485 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:26:23.966414 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:26:23.973594 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:26:23.980819 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:26:23.980950 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:26:23.987912 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:26:23.988057 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:26:23.994424 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:26:23.994566 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:26:24.000652 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:26:24.000785 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:26:24.008138 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:26:24.008277 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:26:24.014531 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:26:24.014667 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:26:24.021134 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:26:24.027711 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:26:24.034992 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:26:24.042367 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:26:24.059449 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Dec 13 01:26:24.074456 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:26:24.081560 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:26:24.087875 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:26:24.087915 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:26:24.094473 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:26:24.102481 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:26:24.110160 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:26:24.116377 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:26:24.117980 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:26:24.125559 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:26:24.133123 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:26:24.134399 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:26:24.140312 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:26:24.143534 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:26:24.152536 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:26:24.168711 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Dec 13 01:26:24.180664 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:26:24.201443 systemd-journald[1260]: Time spent on flushing to /var/log/journal/a1697a658f5c4c3ea98443713ddf7c50 is 15.322ms for 900 entries. Dec 13 01:26:24.201443 systemd-journald[1260]: System Journal (/var/log/journal/a1697a658f5c4c3ea98443713ddf7c50) is 8.0M, max 2.6G, 2.6G free. Dec 13 01:26:24.232697 systemd-journald[1260]: Received client request to flush runtime journal. Dec 13 01:26:24.232733 kernel: loop0: detected capacity change from 0 to 31320 Dec 13 01:26:24.195567 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:26:24.208733 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:26:24.217656 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:26:24.225004 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:26:24.240011 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:26:24.256417 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:26:24.267612 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:26:24.274435 udevadm[1294]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:26:24.304790 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:26:24.318781 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. Dec 13 01:26:24.319149 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. Dec 13 01:26:24.323936 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:26:24.339534 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Dec 13 01:26:24.369187 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:26:24.369878 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:26:24.437687 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:26:24.448581 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:26:24.464228 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Dec 13 01:26:24.464249 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Dec 13 01:26:24.468454 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:26:24.566365 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:26:24.619387 kernel: loop1: detected capacity change from 0 to 114328 Dec 13 01:26:24.886363 kernel: loop2: detected capacity change from 0 to 194512 Dec 13 01:26:24.922498 kernel: loop3: detected capacity change from 0 to 114432 Dec 13 01:26:25.241690 kernel: loop4: detected capacity change from 0 to 31320 Dec 13 01:26:25.249457 kernel: loop5: detected capacity change from 0 to 114328 Dec 13 01:26:25.257380 kernel: loop6: detected capacity change from 0 to 194512 Dec 13 01:26:25.266528 kernel: loop7: detected capacity change from 0 to 114432 Dec 13 01:26:25.269043 (sd-merge)[1319]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Dec 13 01:26:25.269784 (sd-merge)[1319]: Merged extensions into '/usr'. Dec 13 01:26:25.273159 systemd[1]: Reloading requested from client PID 1291 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:26:25.273174 systemd[1]: Reloading... Dec 13 01:26:25.338362 zram_generator::config[1348]: No configuration found. 
Dec 13 01:26:25.463133 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:26:25.519215 systemd[1]: Reloading finished in 245 ms. Dec 13 01:26:25.547889 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:26:25.555273 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:26:25.572557 systemd[1]: Starting ensure-sysext.service... Dec 13 01:26:25.577733 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:26:25.595506 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:26:25.615687 systemd-udevd[1403]: Using default interface naming scheme 'v255'. Dec 13 01:26:25.624025 systemd[1]: Reloading requested from client PID 1401 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:26:25.624052 systemd[1]: Reloading... Dec 13 01:26:25.633003 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:26:25.633272 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:26:25.634697 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:26:25.634927 systemd-tmpfiles[1402]: ACLs are not supported, ignoring. Dec 13 01:26:25.634982 systemd-tmpfiles[1402]: ACLs are not supported, ignoring. Dec 13 01:26:25.679745 systemd-tmpfiles[1402]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:26:25.679756 systemd-tmpfiles[1402]: Skipping /boot Dec 13 01:26:25.687373 systemd-tmpfiles[1402]: Detected autofs mount point /boot during canonicalization of boot. 
Dec 13 01:26:25.687384 systemd-tmpfiles[1402]: Skipping /boot Dec 13 01:26:25.710361 zram_generator::config[1432]: No configuration found. Dec 13 01:26:25.836396 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1476) Dec 13 01:26:25.868604 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1476) Dec 13 01:26:25.871965 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:26:25.961139 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 13 01:26:25.961249 systemd[1]: Reloading finished in 336 ms. Dec 13 01:26:25.964468 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:26:25.964556 kernel: hv_vmbus: registering driver hv_balloon Dec 13 01:26:25.975352 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 13 01:26:25.975431 kernel: hv_balloon: Memory hot add disabled on ARM64 Dec 13 01:26:25.981067 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:26:25.997807 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:26:26.012359 kernel: hv_vmbus: registering driver hyperv_fb Dec 13 01:26:26.012455 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 13 01:26:26.021659 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 13 01:26:26.029215 kernel: Console: switching to colour dummy device 80x25 Dec 13 01:26:26.031459 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 01:26:26.036270 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped. Dec 13 01:26:26.045137 systemd[1]: Finished ensure-sysext.service. 
Dec 13 01:26:26.064448 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:26:26.100757 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1456)
Dec 13 01:26:26.102771 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:26:26.113884 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:26:26.122119 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:26:26.140086 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:26:26.154670 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:26:26.162588 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:26:26.174001 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:26:26.176445 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:26:26.187013 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:26:26.198630 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:26:26.209553 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:26:26.224656 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:26:26.244795 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:26:26.252032 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:26:26.252210 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:26:26.258892 augenrules[1588]: No rules
Dec 13 01:26:26.259851 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:26:26.260004 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:26:26.267399 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:26:26.276023 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:26:26.277387 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:26:26.284879 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:26:26.285019 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:26:26.292680 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:26:26.318746 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:26:26.329003 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:26:26.342363 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Dec 13 01:26:26.357983 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:26:26.366465 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:26:26.372879 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:26:26.372959 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:26:26.376471 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:26:26.402243 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:26:26.403504 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:26:26.416446 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:26:26.423846 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:26:26.440011 lvm[1604]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:26:26.441613 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:26:26.465565 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:26:26.476939 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:26:26.490583 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:26:26.505004 lvm[1620]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:26:26.536053 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:26:26.552421 systemd-resolved[1584]: Positive Trust Anchors:
Dec 13 01:26:26.552439 systemd-resolved[1584]: .
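The two lvm WARNING entries above mean the LVM tools could not reach the lvmetad metadata caching daemon and fell back to scanning block devices directly; activation still succeeds, just without the cache. On images that intentionally do not run lvmetad, the usual way to silence the warning (a sketch, assuming an LVM2 version that still has the lvmetad client, as the message implies) is to disable the client side in lvm.conf:

```ini
# /etc/lvm/lvm.conf (fragment)
global {
    # Tell the LVM tools not to attempt a lvmetad connection;
    # direct device scanning then becomes the normal path, not a fallback.
    use_lvmetad = 0
}
```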
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:26:26.552470 systemd-resolved[1584]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:26:26.563050 systemd-networkd[1581]: lo: Link UP
Dec 13 01:26:26.563059 systemd-networkd[1581]: lo: Gained carrier
Dec 13 01:26:26.565588 systemd-networkd[1581]: Enumeration completed
Dec 13 01:26:26.566180 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:26:26.566181 systemd-networkd[1581]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:26:26.566495 systemd-networkd[1581]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:26:26.579529 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:26:26.590709 systemd-resolved[1584]: Using system hostname 'ci-4081.2.1-a-dd942dbb76'.
Dec 13 01:26:26.623271 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:26:26.634371 kernel: mlx5_core 050c:00:02.0 enP1292s1: Link up
Dec 13 01:26:26.665428 kernel: hv_netvsc 0022487d-d013-0022-487d-d0130022487d eth0: Data path switched to VF: enP1292s1
Dec 13 01:26:26.666063 systemd-networkd[1581]: enP1292s1: Link UP
Dec 13 01:26:26.666166 systemd-networkd[1581]: eth0: Link UP
Dec 13 01:26:26.666169 systemd-networkd[1581]: eth0: Gained carrier
Dec 13 01:26:26.666185 systemd-networkd[1581]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:26:26.666985 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:26:26.673962 systemd[1]: Reached target network.target - Network.
Dec 13 01:26:26.679218 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:26:26.686760 systemd-networkd[1581]: enP1292s1: Gained carrier
Dec 13 01:26:26.698405 systemd-networkd[1581]: eth0: DHCPv4 address 10.200.20.20/24, gateway 10.200.20.1 acquired from 168.63.129.16
Dec 13 01:26:26.777199 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:26:26.785372 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:26:28.343569 systemd-networkd[1581]: eth0: Gained IPv6LL
Dec 13 01:26:28.345279 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 01:26:28.353251 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 01:26:28.663502 systemd-networkd[1581]: enP1292s1: Gained IPv6LL
Dec 13 01:26:28.699219 ldconfig[1286]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:26:28.710250 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:26:28.721592 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:26:28.735671 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 01:26:28.741902 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:26:28.747459 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:26:28.754332 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:26:28.761140 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:26:28.767065 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:26:28.773896 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:26:28.780627 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:26:28.780665 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:26:28.785505 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:26:28.804825 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 01:26:28.812140 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 01:26:28.824082 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 01:26:28.830126 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 01:26:28.835794 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:26:28.840689 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:26:28.845714 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:26:28.845742 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:26:28.848011 systemd[1]: Starting chronyd.service - NTP client/server...
Dec 13 01:26:28.855502 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 01:26:28.867531 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 01:26:28.876567 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 01:26:28.893482 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 01:26:28.898573 jq[1640]: false
Dec 13 01:26:28.900634 (chronyd)[1634]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Dec 13 01:26:28.901013 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 01:26:28.906931 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 01:26:28.906973 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Dec 13 01:26:28.909643 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Dec 13 01:26:28.915370 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Dec 13 01:26:28.918634 KVP[1642]: KVP starting; pid is:1642
Dec 13 01:26:28.921114 chronyd[1646]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Dec 13 01:26:28.921483 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:26:28.930518 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 01:26:28.944528 KVP[1642]: KVP LIC Version: 3.1
Dec 13 01:26:28.945385 kernel: hv_utils: KVP IC version 4.0
Dec 13 01:26:28.945526 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 01:26:28.950421 chronyd[1646]: Timezone right/UTC failed leap second check, ignoring
Dec 13 01:26:28.950692 chronyd[1646]: Loaded seccomp filter (level 2)
Dec 13 01:26:28.952606 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 01:26:28.967286 extend-filesystems[1641]: Found loop4
Dec 13 01:26:28.975895 extend-filesystems[1641]: Found loop5
Dec 13 01:26:28.975895 extend-filesystems[1641]: Found loop6
Dec 13 01:26:28.975895 extend-filesystems[1641]: Found loop7
Dec 13 01:26:28.975895 extend-filesystems[1641]: Found sda
Dec 13 01:26:28.975895 extend-filesystems[1641]: Found sda1
Dec 13 01:26:28.975895 extend-filesystems[1641]: Found sda2
Dec 13 01:26:28.975895 extend-filesystems[1641]: Found sda3
Dec 13 01:26:28.975895 extend-filesystems[1641]: Found usr
Dec 13 01:26:28.975895 extend-filesystems[1641]: Found sda4
Dec 13 01:26:28.975895 extend-filesystems[1641]: Found sda6
Dec 13 01:26:28.975895 extend-filesystems[1641]: Found sda7
Dec 13 01:26:28.975895 extend-filesystems[1641]: Found sda9
Dec 13 01:26:28.975895 extend-filesystems[1641]: Checking size of /dev/sda9
Dec 13 01:26:28.971627 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 01:26:29.187248 coreos-metadata[1636]: Dec 13 01:26:29.076 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 13 01:26:29.187248 coreos-metadata[1636]: Dec 13 01:26:29.085 INFO Fetch successful
Dec 13 01:26:29.187248 coreos-metadata[1636]: Dec 13 01:26:29.085 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Dec 13 01:26:29.187248 coreos-metadata[1636]: Dec 13 01:26:29.089 INFO Fetch successful
Dec 13 01:26:29.187248 coreos-metadata[1636]: Dec 13 01:26:29.090 INFO Fetching http://168.63.129.16/machine/87825d10-631f-4fd9-8662-32dbaaa4124a/e57c3ad2%2D66fd%2D480f%2Daa32%2Dc67f8b8f97aa.%5Fci%2D4081.2.1%2Da%2Ddd942dbb76?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Dec 13 01:26:29.187248 coreos-metadata[1636]: Dec 13 01:26:29.094 INFO Fetch successful
Dec 13 01:26:29.187248 coreos-metadata[1636]: Dec 13 01:26:29.094 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Dec 13 01:26:29.187248 coreos-metadata[1636]: Dec 13 01:26:29.108 INFO Fetch successful
Dec 13 01:26:29.187549 extend-filesystems[1641]: Old size kept for /dev/sda9
Dec 13 01:26:29.187549 extend-filesystems[1641]: Found sr0
Dec 13 01:26:29.227739 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1484)
Dec 13 01:26:29.016701 dbus-daemon[1639]: [system] SELinux support is enabled
Dec 13 01:26:28.992574 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 01:26:29.021505 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 01:26:29.034000 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 01:26:29.228488 update_engine[1671]: I20241213 01:26:29.122603 1671 main.cc:92] Flatcar Update Engine starting
Dec 13 01:26:29.228488 update_engine[1671]: I20241213 01:26:29.132529 1671 update_check_scheduler.cc:74] Next update check in 9m52s
Dec 13 01:26:29.036806 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:26:29.228749 jq[1674]: true
Dec 13 01:26:29.045558 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 01:26:29.063206 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 01:26:29.087750 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 01:26:29.102655 systemd[1]: Started chronyd.service - NTP client/server.
Dec 13 01:26:29.128744 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 01:26:29.128898 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 01:26:29.129144 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 01:26:29.129276 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 01:26:29.143179 systemd-logind[1668]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 01:26:29.145510 systemd-logind[1668]: New seat seat0.
Dec 13 01:26:29.194851 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 01:26:29.221762 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 01:26:29.222087 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 01:26:29.230602 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 01:26:29.240816 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 01:26:29.242475 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 01:26:29.260102 jq[1706]: true
Dec 13 01:26:29.285280 (ntainerd)[1707]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 01:26:29.288786 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 01:26:29.297026 dbus-daemon[1639]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 01:26:29.309588 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 01:26:29.320010 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 01:26:29.320216 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:26:29.320582 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 01:26:29.328991 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:26:29.329106 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 01:26:29.345680 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 01:26:29.356213 tar[1705]: linux-arm64/helm
Dec 13 01:26:29.423070 bash[1740]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:26:29.425409 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 01:26:29.438532 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 13 01:26:29.550374 locksmithd[1736]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 01:26:29.645681 sshd_keygen[1667]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 01:26:29.677055 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 01:26:29.690845 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 01:26:29.699762 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Dec 13 01:26:29.715280 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 01:26:29.715514 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 01:26:29.724069 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 01:26:29.753550 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Dec 13 01:26:29.774395 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 01:26:29.788764 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 01:26:29.796642 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Dec 13 01:26:29.807620 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 01:26:29.937881 tar[1705]: linux-arm64/LICENSE
Dec 13 01:26:29.938105 tar[1705]: linux-arm64/README.md
Dec 13 01:26:29.953429 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 01:26:29.984885 containerd[1707]: time="2024-12-13T01:26:29.984773140Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 01:26:30.022086 containerd[1707]: time="2024-12-13T01:26:30.022028700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:26:30.023290 containerd[1707]: time="2024-12-13T01:26:30.023237980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..."
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:26:30.023290 containerd[1707]: time="2024-12-13T01:26:30.023277740Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 01:26:30.023409 containerd[1707]: time="2024-12-13T01:26:30.023294740Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 01:26:30.023894 containerd[1707]: time="2024-12-13T01:26:30.023464340Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 01:26:30.023894 containerd[1707]: time="2024-12-13T01:26:30.023487740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 01:26:30.023894 containerd[1707]: time="2024-12-13T01:26:30.023545020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:26:30.023894 containerd[1707]: time="2024-12-13T01:26:30.023557140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:26:30.023894 containerd[1707]: time="2024-12-13T01:26:30.023703580Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:26:30.023894 containerd[1707]: time="2024-12-13T01:26:30.023718980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..."
type=io.containerd.snapshotter.v1
Dec 13 01:26:30.023894 containerd[1707]: time="2024-12-13T01:26:30.023732500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:26:30.023894 containerd[1707]: time="2024-12-13T01:26:30.023742500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 01:26:30.023894 containerd[1707]: time="2024-12-13T01:26:30.023809020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:26:30.024092 containerd[1707]: time="2024-12-13T01:26:30.024009140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:26:30.025768 containerd[1707]: time="2024-12-13T01:26:30.024117180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:26:30.025768 containerd[1707]: time="2024-12-13T01:26:30.024141300Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 01:26:30.025768 containerd[1707]: time="2024-12-13T01:26:30.024219700Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 01:26:30.025768 containerd[1707]: time="2024-12-13T01:26:30.024261700Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 01:26:30.024528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:26:30.033627 (kubelet)[1787]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:26:30.053629 containerd[1707]: time="2024-12-13T01:26:30.053579580Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 01:26:30.053732 containerd[1707]: time="2024-12-13T01:26:30.053652260Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 01:26:30.053732 containerd[1707]: time="2024-12-13T01:26:30.053670100Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 01:26:30.053732 containerd[1707]: time="2024-12-13T01:26:30.053685980Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 01:26:30.053732 containerd[1707]: time="2024-12-13T01:26:30.053700620Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 01:26:30.053897 containerd[1707]: time="2024-12-13T01:26:30.053872380Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 01:26:30.054167 containerd[1707]: time="2024-12-13T01:26:30.054145900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 01:26:30.054283 containerd[1707]: time="2024-12-13T01:26:30.054258580Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 01:26:30.054309 containerd[1707]: time="2024-12-13T01:26:30.054281180Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 01:26:30.054309 containerd[1707]: time="2024-12-13T01:26:30.054295220Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..."
type=io.containerd.sandbox.controller.v1
Dec 13 01:26:30.054409 containerd[1707]: time="2024-12-13T01:26:30.054309060Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 01:26:30.054409 containerd[1707]: time="2024-12-13T01:26:30.054330460Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 01:26:30.054969 containerd[1707]: time="2024-12-13T01:26:30.054946580Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 01:26:30.055012 containerd[1707]: time="2024-12-13T01:26:30.054992580Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 01:26:30.055039 containerd[1707]: time="2024-12-13T01:26:30.055014940Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 01:26:30.055039 containerd[1707]: time="2024-12-13T01:26:30.055031860Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 01:26:30.055079 containerd[1707]: time="2024-12-13T01:26:30.055044420Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 01:26:30.055079 containerd[1707]: time="2024-12-13T01:26:30.055057260Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 01:26:30.055111 containerd[1707]: time="2024-12-13T01:26:30.055077740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 01:26:30.055111 containerd[1707]: time="2024-12-13T01:26:30.055092460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..."
type=io.containerd.grpc.v1
Dec 13 01:26:30.055111 containerd[1707]: time="2024-12-13T01:26:30.055104580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 01:26:30.055163 containerd[1707]: time="2024-12-13T01:26:30.055117860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 01:26:30.055163 containerd[1707]: time="2024-12-13T01:26:30.055138620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 01:26:30.055163 containerd[1707]: time="2024-12-13T01:26:30.055151460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 01:26:30.055214 containerd[1707]: time="2024-12-13T01:26:30.055163500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 01:26:30.055214 containerd[1707]: time="2024-12-13T01:26:30.055176100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 01:26:30.055214 containerd[1707]: time="2024-12-13T01:26:30.055188940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 01:26:30.055214 containerd[1707]: time="2024-12-13T01:26:30.055203620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 01:26:30.055283 containerd[1707]: time="2024-12-13T01:26:30.055215460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 01:26:30.055283 containerd[1707]: time="2024-12-13T01:26:30.055227700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 01:26:30.055283 containerd[1707]: time="2024-12-13T01:26:30.055239900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..."
type=io.containerd.grpc.v1
Dec 13 01:26:30.055283 containerd[1707]: time="2024-12-13T01:26:30.055257260Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 01:26:30.055283 containerd[1707]: time="2024-12-13T01:26:30.055280540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 01:26:30.055506 containerd[1707]: time="2024-12-13T01:26:30.055294340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 01:26:30.055506 containerd[1707]: time="2024-12-13T01:26:30.055306100Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 01:26:30.055506 containerd[1707]: time="2024-12-13T01:26:30.055371260Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 01:26:30.055506 containerd[1707]: time="2024-12-13T01:26:30.055391300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 01:26:30.055506 containerd[1707]: time="2024-12-13T01:26:30.055401660Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 01:26:30.055506 containerd[1707]: time="2024-12-13T01:26:30.055413100Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 01:26:30.055506 containerd[1707]: time="2024-12-13T01:26:30.055423300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 01:26:30.055506 containerd[1707]: time="2024-12-13T01:26:30.055435060Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..."
type=io.containerd.nri.v1 Dec 13 01:26:30.055506 containerd[1707]: time="2024-12-13T01:26:30.055445420Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:26:30.055506 containerd[1707]: time="2024-12-13T01:26:30.055455900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:26:30.055795 containerd[1707]: time="2024-12-13T01:26:30.055729340Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:26:30.055795 containerd[1707]: time="2024-12-13T01:26:30.055794780Z" level=info msg="Connect containerd service" Dec 13 01:26:30.055957 containerd[1707]: time="2024-12-13T01:26:30.055820260Z" level=info msg="using legacy CRI server" Dec 13 01:26:30.055957 containerd[1707]: time="2024-12-13T01:26:30.055833100Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:26:30.055957 containerd[1707]: time="2024-12-13T01:26:30.055919180Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:26:30.058282 containerd[1707]: time="2024-12-13T01:26:30.058251220Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:26:30.058512 containerd[1707]: time="2024-12-13T01:26:30.058469820Z" level=info msg="Start subscribing containerd event" Dec 13 
01:26:30.058661 containerd[1707]: time="2024-12-13T01:26:30.058532500Z" level=info msg="Start recovering state" Dec 13 01:26:30.058661 containerd[1707]: time="2024-12-13T01:26:30.058608500Z" level=info msg="Start event monitor" Dec 13 01:26:30.058661 containerd[1707]: time="2024-12-13T01:26:30.058620260Z" level=info msg="Start snapshots syncer" Dec 13 01:26:30.058661 containerd[1707]: time="2024-12-13T01:26:30.058637020Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:26:30.058661 containerd[1707]: time="2024-12-13T01:26:30.058645420Z" level=info msg="Start streaming server" Dec 13 01:26:30.059572 containerd[1707]: time="2024-12-13T01:26:30.059538140Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:26:30.059646 containerd[1707]: time="2024-12-13T01:26:30.059612420Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:26:30.067025 containerd[1707]: time="2024-12-13T01:26:30.060486860Z" level=info msg="containerd successfully booted in 0.077879s" Dec 13 01:26:30.059772 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:26:30.068850 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:26:30.076789 systemd[1]: Startup finished in 677ms (kernel) + 11.996s (initrd) + 10.273s (userspace) = 22.948s. Dec 13 01:26:30.467953 kubelet[1787]: E1213 01:26:30.467872 1787 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:30.470225 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:30.470666 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 01:26:30.560328 login[1775]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Dec 13 01:26:30.573772 login[1774]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:30.585305 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:26:30.586222 systemd-logind[1668]: New session 1 of user core. Dec 13 01:26:30.590579 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:26:30.601770 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:26:30.609616 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:26:30.612107 (systemd)[1801]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:26:30.741141 systemd[1801]: Queued start job for default target default.target. Dec 13 01:26:30.746254 systemd[1801]: Created slice app.slice - User Application Slice. Dec 13 01:26:30.746278 systemd[1801]: Reached target paths.target - Paths. Dec 13 01:26:30.746289 systemd[1801]: Reached target timers.target - Timers. Dec 13 01:26:30.748502 systemd[1801]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:26:30.758445 systemd[1801]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:26:30.759068 systemd[1801]: Reached target sockets.target - Sockets. Dec 13 01:26:30.759093 systemd[1801]: Reached target basic.target - Basic System. Dec 13 01:26:30.759142 systemd[1801]: Reached target default.target - Main User Target. Dec 13 01:26:30.759174 systemd[1801]: Startup finished in 140ms. Dec 13 01:26:30.759599 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:26:30.768082 systemd[1]: Started session-1.scope - Session 1 of User core. 
Dec 13 01:26:31.398359 waagent[1771]: 2024-12-13T01:26:31.397904Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Dec 13 01:26:31.404026 waagent[1771]: 2024-12-13T01:26:31.403933Z INFO Daemon Daemon OS: flatcar 4081.2.1 Dec 13 01:26:31.408572 waagent[1771]: 2024-12-13T01:26:31.408509Z INFO Daemon Daemon Python: 3.11.9 Dec 13 01:26:31.412737 waagent[1771]: 2024-12-13T01:26:31.412678Z INFO Daemon Daemon Run daemon Dec 13 01:26:31.416565 waagent[1771]: 2024-12-13T01:26:31.416511Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.2.1' Dec 13 01:26:31.425465 waagent[1771]: 2024-12-13T01:26:31.425391Z INFO Daemon Daemon Using waagent for provisioning Dec 13 01:26:31.430910 waagent[1771]: 2024-12-13T01:26:31.430860Z INFO Daemon Daemon Activate resource disk Dec 13 01:26:31.435667 waagent[1771]: 2024-12-13T01:26:31.435610Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 13 01:26:31.446660 waagent[1771]: 2024-12-13T01:26:31.446600Z INFO Daemon Daemon Found device: None Dec 13 01:26:31.451528 waagent[1771]: 2024-12-13T01:26:31.451474Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 13 01:26:31.459965 waagent[1771]: 2024-12-13T01:26:31.459913Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 13 01:26:31.473433 waagent[1771]: 2024-12-13T01:26:31.473369Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 01:26:31.479676 waagent[1771]: 2024-12-13T01:26:31.479620Z INFO Daemon Daemon Running default provisioning handler Dec 13 01:26:31.491784 waagent[1771]: 2024-12-13T01:26:31.491238Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Dec 13 01:26:31.504529 waagent[1771]: 2024-12-13T01:26:31.504463Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 01:26:31.513544 waagent[1771]: 2024-12-13T01:26:31.513484Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 01:26:31.518363 waagent[1771]: 2024-12-13T01:26:31.518300Z INFO Daemon Daemon Copying ovf-env.xml Dec 13 01:26:31.561862 login[1775]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:31.566876 systemd-logind[1668]: New session 2 of user core. Dec 13 01:26:31.572572 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:26:31.610382 waagent[1771]: 2024-12-13T01:26:31.608115Z INFO Daemon Daemon Successfully mounted dvd Dec 13 01:26:31.625181 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 13 01:26:31.627571 waagent[1771]: 2024-12-13T01:26:31.627477Z INFO Daemon Daemon Detect protocol endpoint Dec 13 01:26:31.632289 waagent[1771]: 2024-12-13T01:26:31.632219Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 01:26:31.637702 waagent[1771]: 2024-12-13T01:26:31.637637Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Dec 13 01:26:31.643946 waagent[1771]: 2024-12-13T01:26:31.643889Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 13 01:26:31.649067 waagent[1771]: 2024-12-13T01:26:31.648976Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 13 01:26:31.654275 waagent[1771]: 2024-12-13T01:26:31.654224Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 13 01:26:31.715102 waagent[1771]: 2024-12-13T01:26:31.715049Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 13 01:26:31.721849 waagent[1771]: 2024-12-13T01:26:31.721808Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 13 01:26:31.726948 waagent[1771]: 2024-12-13T01:26:31.726891Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 13 01:26:32.147236 waagent[1771]: 2024-12-13T01:26:32.147126Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 13 01:26:32.154005 waagent[1771]: 2024-12-13T01:26:32.153932Z INFO Daemon Daemon Forcing an update of the goal state. Dec 13 01:26:32.163083 waagent[1771]: 2024-12-13T01:26:32.163029Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 01:26:32.184918 waagent[1771]: 2024-12-13T01:26:32.184870Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Dec 13 01:26:32.190714 waagent[1771]: 2024-12-13T01:26:32.190665Z INFO Daemon Dec 13 01:26:32.194011 waagent[1771]: 2024-12-13T01:26:32.193960Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: f00c7f4f-a8e0-4caa-8160-b765b6d02b56 eTag: 17967612072110573154 source: Fabric] Dec 13 01:26:32.205646 waagent[1771]: 2024-12-13T01:26:32.205597Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Dec 13 01:26:32.212966 waagent[1771]: 2024-12-13T01:26:32.212919Z INFO Daemon Dec 13 01:26:32.215750 waagent[1771]: 2024-12-13T01:26:32.215703Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 13 01:26:32.226466 waagent[1771]: 2024-12-13T01:26:32.226429Z INFO Daemon Daemon Downloading artifacts profile blob Dec 13 01:26:32.308544 waagent[1771]: 2024-12-13T01:26:32.308457Z INFO Daemon Downloaded certificate {'thumbprint': '77A11CB9BB6F264BF67FB03F47A0E889AF4C0841', 'hasPrivateKey': False} Dec 13 01:26:32.318495 waagent[1771]: 2024-12-13T01:26:32.318445Z INFO Daemon Downloaded certificate {'thumbprint': '00C4A336FEC49BCA1ED23ABE03E5FB248D6546F8', 'hasPrivateKey': True} Dec 13 01:26:32.328153 waagent[1771]: 2024-12-13T01:26:32.328100Z INFO Daemon Fetch goal state completed Dec 13 01:26:32.339807 waagent[1771]: 2024-12-13T01:26:32.339760Z INFO Daemon Daemon Starting provisioning Dec 13 01:26:32.344973 waagent[1771]: 2024-12-13T01:26:32.344904Z INFO Daemon Daemon Handle ovf-env.xml. Dec 13 01:26:32.349877 waagent[1771]: 2024-12-13T01:26:32.349825Z INFO Daemon Daemon Set hostname [ci-4081.2.1-a-dd942dbb76] Dec 13 01:26:32.370628 waagent[1771]: 2024-12-13T01:26:32.370554Z INFO Daemon Daemon Publish hostname [ci-4081.2.1-a-dd942dbb76] Dec 13 01:26:32.376755 waagent[1771]: 2024-12-13T01:26:32.376690Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 13 01:26:32.382875 waagent[1771]: 2024-12-13T01:26:32.382821Z INFO Daemon Daemon Primary interface is [eth0] Dec 13 01:26:32.409556 systemd-networkd[1581]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:26:32.409564 systemd-networkd[1581]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 01:26:32.409591 systemd-networkd[1581]: eth0: DHCP lease lost Dec 13 01:26:32.410747 waagent[1771]: 2024-12-13T01:26:32.410652Z INFO Daemon Daemon Create user account if not exists Dec 13 01:26:32.416547 waagent[1771]: 2024-12-13T01:26:32.416309Z INFO Daemon Daemon User core already exists, skip useradd Dec 13 01:26:32.417423 systemd-networkd[1581]: eth0: DHCPv6 lease lost Dec 13 01:26:32.422279 waagent[1771]: 2024-12-13T01:26:32.422198Z INFO Daemon Daemon Configure sudoer Dec 13 01:26:32.426881 waagent[1771]: 2024-12-13T01:26:32.426809Z INFO Daemon Daemon Configure sshd Dec 13 01:26:32.431561 waagent[1771]: 2024-12-13T01:26:32.431493Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 13 01:26:32.445133 waagent[1771]: 2024-12-13T01:26:32.445059Z INFO Daemon Daemon Deploy ssh public key. Dec 13 01:26:32.465410 systemd-networkd[1581]: eth0: DHCPv4 address 10.200.20.20/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 01:26:33.576759 waagent[1771]: 2024-12-13T01:26:33.576694Z INFO Daemon Daemon Provisioning complete Dec 13 01:26:33.595268 waagent[1771]: 2024-12-13T01:26:33.595218Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 13 01:26:33.601440 waagent[1771]: 2024-12-13T01:26:33.601382Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Dec 13 01:26:33.610917 waagent[1771]: 2024-12-13T01:26:33.610859Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Dec 13 01:26:33.742245 waagent[1855]: 2024-12-13T01:26:33.742164Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Dec 13 01:26:33.743216 waagent[1855]: 2024-12-13T01:26:33.742680Z INFO ExtHandler ExtHandler OS: flatcar 4081.2.1 Dec 13 01:26:33.743216 waagent[1855]: 2024-12-13T01:26:33.742754Z INFO ExtHandler ExtHandler Python: 3.11.9 Dec 13 01:26:33.779384 waagent[1855]: 2024-12-13T01:26:33.779177Z INFO ExtHandler ExtHandler Distro: flatcar-4081.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 01:26:33.779484 waagent[1855]: 2024-12-13T01:26:33.779446Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:26:33.779548 waagent[1855]: 2024-12-13T01:26:33.779515Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:26:33.787984 waagent[1855]: 2024-12-13T01:26:33.787896Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 01:26:33.793884 waagent[1855]: 2024-12-13T01:26:33.793838Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Dec 13 01:26:33.794428 waagent[1855]: 2024-12-13T01:26:33.794381Z INFO ExtHandler Dec 13 01:26:33.794503 waagent[1855]: 2024-12-13T01:26:33.794472Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: ff19fd9d-64b5-49b6-a673-dd08489d02bd eTag: 17967612072110573154 source: Fabric] Dec 13 01:26:33.794814 waagent[1855]: 2024-12-13T01:26:33.794772Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 13 01:26:33.795391 waagent[1855]: 2024-12-13T01:26:33.795329Z INFO ExtHandler Dec 13 01:26:33.795461 waagent[1855]: 2024-12-13T01:26:33.795432Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 13 01:26:33.799277 waagent[1855]: 2024-12-13T01:26:33.799244Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 13 01:26:33.875969 waagent[1855]: 2024-12-13T01:26:33.875825Z INFO ExtHandler Downloaded certificate {'thumbprint': '77A11CB9BB6F264BF67FB03F47A0E889AF4C0841', 'hasPrivateKey': False} Dec 13 01:26:33.876361 waagent[1855]: 2024-12-13T01:26:33.876297Z INFO ExtHandler Downloaded certificate {'thumbprint': '00C4A336FEC49BCA1ED23ABE03E5FB248D6546F8', 'hasPrivateKey': True} Dec 13 01:26:33.876876 waagent[1855]: 2024-12-13T01:26:33.876826Z INFO ExtHandler Fetch goal state completed Dec 13 01:26:33.893556 waagent[1855]: 2024-12-13T01:26:33.893495Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1855 Dec 13 01:26:33.893711 waagent[1855]: 2024-12-13T01:26:33.893675Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 13 01:26:33.895387 waagent[1855]: 2024-12-13T01:26:33.895323Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.2.1', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 01:26:33.895772 waagent[1855]: 2024-12-13T01:26:33.895734Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 01:26:33.929459 waagent[1855]: 2024-12-13T01:26:33.929413Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 01:26:33.929665 waagent[1855]: 2024-12-13T01:26:33.929626Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 01:26:33.935677 waagent[1855]: 2024-12-13T01:26:33.935633Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Dec 13 01:26:33.942449 systemd[1]: Reloading requested from client PID 1870 ('systemctl') (unit waagent.service)... Dec 13 01:26:33.942711 systemd[1]: Reloading... Dec 13 01:26:34.020381 zram_generator::config[1904]: No configuration found. Dec 13 01:26:34.127320 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:26:34.205031 systemd[1]: Reloading finished in 261 ms. Dec 13 01:26:34.228156 waagent[1855]: 2024-12-13T01:26:34.227783Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Dec 13 01:26:34.234282 systemd[1]: Reloading requested from client PID 1958 ('systemctl') (unit waagent.service)... Dec 13 01:26:34.234303 systemd[1]: Reloading... Dec 13 01:26:34.326369 zram_generator::config[1993]: No configuration found. Dec 13 01:26:34.434432 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:26:34.512409 systemd[1]: Reloading finished in 277 ms. Dec 13 01:26:34.536390 waagent[1855]: 2024-12-13T01:26:34.535669Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 13 01:26:34.536390 waagent[1855]: 2024-12-13T01:26:34.535853Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 13 01:26:35.132090 waagent[1855]: 2024-12-13T01:26:35.132010Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 13 01:26:35.136583 waagent[1855]: 2024-12-13T01:26:35.135897Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Dec 13 01:26:35.136871 waagent[1855]: 2024-12-13T01:26:35.136814Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 01:26:35.136976 waagent[1855]: 2024-12-13T01:26:35.136928Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:26:35.137162 waagent[1855]: 2024-12-13T01:26:35.137113Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:26:35.137587 waagent[1855]: 2024-12-13T01:26:35.137529Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 01:26:35.137927 waagent[1855]: 2024-12-13T01:26:35.137867Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 13 01:26:35.138392 waagent[1855]: 2024-12-13T01:26:35.138313Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 01:26:35.138598 waagent[1855]: 2024-12-13T01:26:35.138555Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:26:35.138681 waagent[1855]: 2024-12-13T01:26:35.138648Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:26:35.138833 waagent[1855]: 2024-12-13T01:26:35.138793Z INFO EnvHandler ExtHandler Configure routes Dec 13 01:26:35.138902 waagent[1855]: 2024-12-13T01:26:35.138870Z INFO EnvHandler ExtHandler Gateway:None Dec 13 01:26:35.138997 waagent[1855]: 2024-12-13T01:26:35.138924Z INFO EnvHandler ExtHandler Routes:None Dec 13 01:26:35.139127 waagent[1855]: 2024-12-13T01:26:35.139053Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Dec 13 01:26:35.139912 waagent[1855]: 2024-12-13T01:26:35.139846Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 01:26:35.140055 waagent[1855]: 2024-12-13T01:26:35.139931Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 01:26:35.140055 waagent[1855]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 01:26:35.140055 waagent[1855]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 01:26:35.140055 waagent[1855]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 01:26:35.140055 waagent[1855]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:26:35.140055 waagent[1855]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:26:35.140055 waagent[1855]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:26:35.140620 waagent[1855]: 2024-12-13T01:26:35.140426Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 01:26:35.140957 waagent[1855]: 2024-12-13T01:26:35.140900Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 13 01:26:35.147433 waagent[1855]: 2024-12-13T01:26:35.147375Z INFO ExtHandler ExtHandler Dec 13 01:26:35.147808 waagent[1855]: 2024-12-13T01:26:35.147760Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: b964df1f-2581-4cc9-9393-3d0a254e3c02 correlation ebea807d-799c-40c3-9c9f-88c8142959cb created: 2024-12-13T01:25:27.529472Z] Dec 13 01:26:35.148981 waagent[1855]: 2024-12-13T01:26:35.148938Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Dec 13 01:26:35.150382 waagent[1855]: 2024-12-13T01:26:35.149656Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Dec 13 01:26:35.187000 waagent[1855]: 2024-12-13T01:26:35.186781Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: A81A2FF8-5368-4054-BC1E-3BA6A60D00E4;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Dec 13 01:26:35.195225 waagent[1855]: 2024-12-13T01:26:35.195140Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 01:26:35.195225 waagent[1855]: Executing ['ip', '-a', '-o', 'link']: Dec 13 01:26:35.195225 waagent[1855]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 01:26:35.195225 waagent[1855]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7d:d0:13 brd ff:ff:ff:ff:ff:ff Dec 13 01:26:35.195225 waagent[1855]: 3: enP1292s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7d:d0:13 brd ff:ff:ff:ff:ff:ff\ altname enP1292p0s2 Dec 13 01:26:35.195225 waagent[1855]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 01:26:35.195225 waagent[1855]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 01:26:35.195225 waagent[1855]: 2: eth0 inet 10.200.20.20/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 01:26:35.195225 waagent[1855]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 01:26:35.195225 waagent[1855]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 13 01:26:35.195225 waagent[1855]: 2: eth0 inet6 fe80::222:48ff:fe7d:d013/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 13 01:26:35.195225 waagent[1855]: 3: enP1292s1 inet6 fe80::222:48ff:fe7d:d013/64 scope link proto 
kernel_ll \ valid_lft forever preferred_lft forever Dec 13 01:26:35.243377 waagent[1855]: 2024-12-13T01:26:35.243030Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Dec 13 01:26:35.243377 waagent[1855]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:35.243377 waagent[1855]: pkts bytes target prot opt in out source destination Dec 13 01:26:35.243377 waagent[1855]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:35.243377 waagent[1855]: pkts bytes target prot opt in out source destination Dec 13 01:26:35.243377 waagent[1855]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:35.243377 waagent[1855]: pkts bytes target prot opt in out source destination Dec 13 01:26:35.243377 waagent[1855]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 01:26:35.243377 waagent[1855]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 01:26:35.243377 waagent[1855]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 01:26:35.246219 waagent[1855]: 2024-12-13T01:26:35.246147Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 13 01:26:35.246219 waagent[1855]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:35.246219 waagent[1855]: pkts bytes target prot opt in out source destination Dec 13 01:26:35.246219 waagent[1855]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:35.246219 waagent[1855]: pkts bytes target prot opt in out source destination Dec 13 01:26:35.246219 waagent[1855]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:35.246219 waagent[1855]: pkts bytes target prot opt in out source destination Dec 13 01:26:35.246219 waagent[1855]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 01:26:35.246219 waagent[1855]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 01:26:35.246219 waagent[1855]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 
13 01:26:35.246566 waagent[1855]: 2024-12-13T01:26:35.246473Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 13 01:26:40.699773 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:26:40.708521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:40.822316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:40.827360 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:40.869666 kubelet[2088]: E1213 01:26:40.869617 2088 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:40.872816 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:40.872953 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:50.949946 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:26:50.957534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:51.049184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:26:51.053753 (kubelet)[2104]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:51.149102 kubelet[2104]: E1213 01:26:51.149053 2104 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:51.151935 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:51.152267 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:52.754935 chronyd[1646]: Selected source PHC0 Dec 13 01:27:01.200001 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 01:27:01.208530 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:01.304481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:01.305003 (kubelet)[2120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:27:01.352621 kubelet[2120]: E1213 01:27:01.352524 2120 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:27:01.355143 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:27:01.355294 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:27:04.727444 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Dec 13 01:27:04.734654 systemd[1]: Started sshd@0-10.200.20.20:22-10.200.16.10:54144.service - OpenSSH per-connection server daemon (10.200.16.10:54144). Dec 13 01:27:05.217546 sshd[2129]: Accepted publickey for core from 10.200.16.10 port 54144 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:27:05.218878 sshd[2129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:05.223774 systemd-logind[1668]: New session 3 of user core. Dec 13 01:27:05.229612 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:27:05.613985 systemd[1]: Started sshd@1-10.200.20.20:22-10.200.16.10:54156.service - OpenSSH per-connection server daemon (10.200.16.10:54156). Dec 13 01:27:06.027103 sshd[2134]: Accepted publickey for core from 10.200.16.10 port 54156 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:27:06.028513 sshd[2134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:06.032410 systemd-logind[1668]: New session 4 of user core. Dec 13 01:27:06.040503 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:27:06.344531 sshd[2134]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:06.347950 systemd[1]: sshd@1-10.200.20.20:22-10.200.16.10:54156.service: Deactivated successfully. Dec 13 01:27:06.349486 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:27:06.350748 systemd-logind[1668]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:27:06.351933 systemd-logind[1668]: Removed session 4. Dec 13 01:27:06.438885 systemd[1]: Started sshd@2-10.200.20.20:22-10.200.16.10:54164.service - OpenSSH per-connection server daemon (10.200.16.10:54164). 
Dec 13 01:27:06.863786 sshd[2141]: Accepted publickey for core from 10.200.16.10 port 54164 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:27:06.865156 sshd[2141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:06.868974 systemd-logind[1668]: New session 5 of user core. Dec 13 01:27:06.877518 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:27:07.183977 sshd[2141]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:07.187712 systemd[1]: sshd@2-10.200.20.20:22-10.200.16.10:54164.service: Deactivated successfully. Dec 13 01:27:07.189186 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:27:07.190009 systemd-logind[1668]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:27:07.190894 systemd-logind[1668]: Removed session 5. Dec 13 01:27:07.271562 systemd[1]: Started sshd@3-10.200.20.20:22-10.200.16.10:54174.service - OpenSSH per-connection server daemon (10.200.16.10:54174). Dec 13 01:27:07.684245 sshd[2148]: Accepted publickey for core from 10.200.16.10 port 54174 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:27:07.685561 sshd[2148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:07.689802 systemd-logind[1668]: New session 6 of user core. Dec 13 01:27:07.692520 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:27:08.003497 sshd[2148]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:08.006849 systemd-logind[1668]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:27:08.007576 systemd[1]: sshd@3-10.200.20.20:22-10.200.16.10:54174.service: Deactivated successfully. Dec 13 01:27:08.009414 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:27:08.010895 systemd-logind[1668]: Removed session 6. 
Dec 13 01:27:08.078953 systemd[1]: Started sshd@4-10.200.20.20:22-10.200.16.10:54180.service - OpenSSH per-connection server daemon (10.200.16.10:54180). Dec 13 01:27:08.497296 sshd[2155]: Accepted publickey for core from 10.200.16.10 port 54180 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:27:08.498648 sshd[2155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:08.502519 systemd-logind[1668]: New session 7 of user core. Dec 13 01:27:08.510574 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:27:08.830734 sudo[2158]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:27:08.831023 sudo[2158]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:27:08.860175 sudo[2158]: pam_unix(sudo:session): session closed for user root Dec 13 01:27:08.941442 sshd[2155]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:08.944439 systemd[1]: sshd@4-10.200.20.20:22-10.200.16.10:54180.service: Deactivated successfully. Dec 13 01:27:08.946144 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:27:08.947781 systemd-logind[1668]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:27:08.949170 systemd-logind[1668]: Removed session 7. Dec 13 01:27:09.015416 systemd[1]: Started sshd@5-10.200.20.20:22-10.200.16.10:53986.service - OpenSSH per-connection server daemon (10.200.16.10:53986). Dec 13 01:27:09.425907 sshd[2163]: Accepted publickey for core from 10.200.16.10 port 53986 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:27:09.427316 sshd[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:09.430947 systemd-logind[1668]: New session 8 of user core. Dec 13 01:27:09.441536 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 13 01:27:09.663256 sudo[2167]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:27:09.663577 sudo[2167]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:27:09.667005 sudo[2167]: pam_unix(sudo:session): session closed for user root Dec 13 01:27:09.672512 sudo[2166]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:27:09.672802 sudo[2166]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:27:09.693690 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:27:09.695912 auditctl[2170]: No rules Dec 13 01:27:09.696508 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:27:09.696708 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:27:09.699191 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:27:09.732413 augenrules[2188]: No rules Dec 13 01:27:09.734153 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:27:09.735849 sudo[2166]: pam_unix(sudo:session): session closed for user root Dec 13 01:27:09.801566 sshd[2163]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:09.805606 systemd[1]: sshd@5-10.200.20.20:22-10.200.16.10:53986.service: Deactivated successfully. Dec 13 01:27:09.805739 systemd-logind[1668]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:27:09.808669 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:27:09.809648 systemd-logind[1668]: Removed session 8. Dec 13 01:27:09.884662 systemd[1]: Started sshd@6-10.200.20.20:22-10.200.16.10:53990.service - OpenSSH per-connection server daemon (10.200.16.10:53990). 
Dec 13 01:27:10.309449 sshd[2196]: Accepted publickey for core from 10.200.16.10 port 53990 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:27:10.310765 sshd[2196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:10.315563 systemd-logind[1668]: New session 9 of user core. Dec 13 01:27:10.321607 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:27:10.556107 sudo[2199]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:27:10.556421 sudo[2199]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:27:11.449654 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 01:27:11.458901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:11.532762 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:27:11.533265 (dockerd)[2219]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:27:11.584054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:11.596616 (kubelet)[2225]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:27:11.639866 kubelet[2225]: E1213 01:27:11.639780 2225 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:27:11.643060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:27:11.643210 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 01:27:12.553725 dockerd[2219]: time="2024-12-13T01:27:12.553663235Z" level=info msg="Starting up" Dec 13 01:27:12.966955 systemd[1]: var-lib-docker-metacopy\x2dcheck1564200338-merged.mount: Deactivated successfully. Dec 13 01:27:12.979807 dockerd[2219]: time="2024-12-13T01:27:12.979772219Z" level=info msg="Loading containers: start." Dec 13 01:27:13.123371 kernel: Initializing XFRM netlink socket Dec 13 01:27:13.232582 systemd-networkd[1581]: docker0: Link UP Dec 13 01:27:13.257101 dockerd[2219]: time="2024-12-13T01:27:13.256547078Z" level=info msg="Loading containers: done." Dec 13 01:27:13.266954 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3634118075-merged.mount: Deactivated successfully. Dec 13 01:27:13.279896 dockerd[2219]: time="2024-12-13T01:27:13.279805110Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:27:13.280097 dockerd[2219]: time="2024-12-13T01:27:13.279918270Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:27:13.280097 dockerd[2219]: time="2024-12-13T01:27:13.280034591Z" level=info msg="Daemon has completed initialization" Dec 13 01:27:13.335033 dockerd[2219]: time="2024-12-13T01:27:13.334288905Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:27:13.334942 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:27:14.098092 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Dec 13 01:27:14.130354 update_engine[1671]: I20241213 01:27:14.130276 1671 update_attempter.cc:509] Updating boot flags... 
Dec 13 01:27:14.193572 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (2378) Dec 13 01:27:14.295358 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (2298) Dec 13 01:27:14.829051 containerd[1707]: time="2024-12-13T01:27:14.828707072Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:27:15.711494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1831289866.mount: Deactivated successfully. Dec 13 01:27:17.143359 containerd[1707]: time="2024-12-13T01:27:17.143301272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:17.147638 containerd[1707]: time="2024-12-13T01:27:17.147586957Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201250" Dec 13 01:27:17.151583 containerd[1707]: time="2024-12-13T01:27:17.151508881Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:17.155991 containerd[1707]: time="2024-12-13T01:27:17.155937326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:17.157159 containerd[1707]: time="2024-12-13T01:27:17.156967767Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 2.328212855s" Dec 13 01:27:17.157159 containerd[1707]: 
time="2024-12-13T01:27:17.157010847Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 01:27:17.175851 containerd[1707]: time="2024-12-13T01:27:17.175652348Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:27:18.675776 containerd[1707]: time="2024-12-13T01:27:18.675718818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:18.678968 containerd[1707]: time="2024-12-13T01:27:18.678930061Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381297" Dec 13 01:27:18.684076 containerd[1707]: time="2024-12-13T01:27:18.682711105Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:18.687968 containerd[1707]: time="2024-12-13T01:27:18.687933671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:18.688813 containerd[1707]: time="2024-12-13T01:27:18.688783392Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 1.513092604s" Dec 13 01:27:18.688914 containerd[1707]: time="2024-12-13T01:27:18.688898272Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns 
image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 01:27:18.711641 containerd[1707]: time="2024-12-13T01:27:18.711608098Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:27:19.778377 containerd[1707]: time="2024-12-13T01:27:19.777964324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:19.780277 containerd[1707]: time="2024-12-13T01:27:19.780227047Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765640" Dec 13 01:27:19.784106 containerd[1707]: time="2024-12-13T01:27:19.784050491Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:19.790258 containerd[1707]: time="2024-12-13T01:27:19.790203498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:19.791387 containerd[1707]: time="2024-12-13T01:27:19.791228339Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.079432641s" Dec 13 01:27:19.791387 containerd[1707]: time="2024-12-13T01:27:19.791261579Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 01:27:19.812157 containerd[1707]: time="2024-12-13T01:27:19.812109282Z" level=info 
msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:27:21.519167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1155748722.mount: Deactivated successfully. Dec 13 01:27:21.701071 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 01:27:21.706784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:21.811560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:21.817293 (kubelet)[2528]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:27:21.863969 kubelet[2528]: E1213 01:27:21.863921 2528 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:27:21.867283 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:27:21.867563 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 01:27:22.188457 containerd[1707]: time="2024-12-13T01:27:22.187839007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:22.191031 containerd[1707]: time="2024-12-13T01:27:22.190879170Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273977" Dec 13 01:27:22.197790 containerd[1707]: time="2024-12-13T01:27:22.197737418Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:22.202912 containerd[1707]: time="2024-12-13T01:27:22.202859743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:22.203817 containerd[1707]: time="2024-12-13T01:27:22.203453064Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 2.391300262s" Dec 13 01:27:22.203817 containerd[1707]: time="2024-12-13T01:27:22.203491464Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 01:27:22.222547 containerd[1707]: time="2024-12-13T01:27:22.222406445Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:27:23.009875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1392897354.mount: Deactivated successfully. 
Dec 13 01:27:23.942386 containerd[1707]: time="2024-12-13T01:27:23.942096119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:23.945150 containerd[1707]: time="2024-12-13T01:27:23.945100563Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Dec 13 01:27:23.947859 containerd[1707]: time="2024-12-13T01:27:23.947805366Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:23.953083 containerd[1707]: time="2024-12-13T01:27:23.953014811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:23.954510 containerd[1707]: time="2024-12-13T01:27:23.954068773Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.731365407s" Dec 13 01:27:23.954510 containerd[1707]: time="2024-12-13T01:27:23.954110773Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 01:27:23.976081 containerd[1707]: time="2024-12-13T01:27:23.976035997Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:27:24.543048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1578971525.mount: Deactivated successfully. 
Dec 13 01:27:24.570392 containerd[1707]: time="2024-12-13T01:27:24.569708738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:24.572948 containerd[1707]: time="2024-12-13T01:27:24.572773941Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Dec 13 01:27:24.577667 containerd[1707]: time="2024-12-13T01:27:24.577621267Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:24.584158 containerd[1707]: time="2024-12-13T01:27:24.584108154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:24.584879 containerd[1707]: time="2024-12-13T01:27:24.584756195Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 608.674918ms" Dec 13 01:27:24.584879 containerd[1707]: time="2024-12-13T01:27:24.584791035Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 01:27:24.605432 containerd[1707]: time="2024-12-13T01:27:24.605044697Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:27:25.343691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount615575276.mount: Deactivated successfully. 
Dec 13 01:27:28.378880 containerd[1707]: time="2024-12-13T01:27:28.378819292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:28.382171 containerd[1707]: time="2024-12-13T01:27:28.382132374Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Dec 13 01:27:28.388365 containerd[1707]: time="2024-12-13T01:27:28.386873696Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:28.392685 containerd[1707]: time="2024-12-13T01:27:28.392645378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:28.393910 containerd[1707]: time="2024-12-13T01:27:28.393868178Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.788783241s" Dec 13 01:27:28.393910 containerd[1707]: time="2024-12-13T01:27:28.393908738Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 01:27:31.949754 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 01:27:31.959644 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:32.057489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:27:32.066665 (kubelet)[2710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:27:32.112567 kubelet[2710]: E1213 01:27:32.112513 2710 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:27:32.115621 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:27:32.115758 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:27:33.111187 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:33.120595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:33.143034 systemd[1]: Reloading requested from client PID 2725 ('systemctl') (unit session-9.scope)... Dec 13 01:27:33.143188 systemd[1]: Reloading... Dec 13 01:27:33.251392 zram_generator::config[2765]: No configuration found. Dec 13 01:27:33.354484 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:27:33.430800 systemd[1]: Reloading finished in 287 ms. Dec 13 01:27:33.576328 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:27:33.576746 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:27:33.577216 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:33.583617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:33.682742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:27:33.693640 (kubelet)[2832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:27:33.744686 kubelet[2832]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:27:33.744686 kubelet[2832]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:27:33.744686 kubelet[2832]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:27:33.745080 kubelet[2832]: I1213 01:27:33.744742 2832 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:27:34.336305 kubelet[2832]: I1213 01:27:34.336268 2832 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:27:34.336305 kubelet[2832]: I1213 01:27:34.336300 2832 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:27:34.336577 kubelet[2832]: I1213 01:27:34.336542 2832 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:27:34.349644 kubelet[2832]: I1213 01:27:34.349463 2832 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:27:34.349938 kubelet[2832]: E1213 01:27:34.349922 2832 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://10.200.20.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:34.360722 kubelet[2832]: I1213 01:27:34.360454 2832 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:27:34.360722 kubelet[2832]: I1213 01:27:34.360690 2832 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:27:34.360893 kubelet[2832]: I1213 01:27:34.360865 2832 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:27:34.360978 
kubelet[2832]: I1213 01:27:34.360896 2832 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:27:34.360978 kubelet[2832]: I1213 01:27:34.360906 2832 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:27:34.361027 kubelet[2832]: I1213 01:27:34.361014 2832 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:34.363229 kubelet[2832]: I1213 01:27:34.363207 2832 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:27:34.363275 kubelet[2832]: I1213 01:27:34.363238 2832 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:27:34.363474 kubelet[2832]: I1213 01:27:34.363456 2832 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:27:34.363507 kubelet[2832]: I1213 01:27:34.363482 2832 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:27:34.366407 kubelet[2832]: W1213 01:27:34.365335 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-dd942dbb76&limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:34.366448 kubelet[2832]: E1213 01:27:34.366429 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-dd942dbb76&limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:34.367635 kubelet[2832]: W1213 01:27:34.366820 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:34.367635 kubelet[2832]: E1213 01:27:34.366862 2832 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:34.367635 kubelet[2832]: I1213 01:27:34.367230 2832 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:27:34.367635 kubelet[2832]: I1213 01:27:34.367511 2832 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:27:34.369529 kubelet[2832]: W1213 01:27:34.368578 2832 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:27:34.369529 kubelet[2832]: I1213 01:27:34.369379 2832 server.go:1256] "Started kubelet" Dec 13 01:27:34.371593 kubelet[2832]: I1213 01:27:34.371572 2832 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:27:34.372175 kubelet[2832]: I1213 01:27:34.372150 2832 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:27:34.372324 kubelet[2832]: I1213 01:27:34.372291 2832 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:27:34.372627 kubelet[2832]: I1213 01:27:34.372608 2832 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:27:34.374517 kubelet[2832]: E1213 01:27:34.374491 2832 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.20:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.20:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.1-a-dd942dbb76.1810983f584b2367 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-a-dd942dbb76,UID:ci-4081.2.1-a-dd942dbb76,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-a-dd942dbb76,},FirstTimestamp:2024-12-13 01:27:34.369321831 +0000 UTC m=+0.672083349,LastTimestamp:2024-12-13 01:27:34.369321831 +0000 UTC m=+0.672083349,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-a-dd942dbb76,}" Dec 13 01:27:34.374904 kubelet[2832]: I1213 01:27:34.374867 2832 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:27:34.378365 kubelet[2832]: E1213 01:27:34.378323 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:34.378926 kubelet[2832]: I1213 01:27:34.378902 2832 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:27:34.379020 kubelet[2832]: I1213 01:27:34.379003 2832 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:27:34.379084 kubelet[2832]: I1213 01:27:34.379069 2832 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:27:34.379440 kubelet[2832]: W1213 01:27:34.379398 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:34.379516 kubelet[2832]: E1213 01:27:34.379446 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:34.380147 kubelet[2832]: E1213 01:27:34.379950 2832 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-dd942dbb76?timeout=10s\": dial tcp 10.200.20.20:6443: connect: connection refused" interval="200ms" Dec 13 01:27:34.380751 kubelet[2832]: I1213 01:27:34.380283 2832 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:27:34.381991 kubelet[2832]: E1213 01:27:34.381966 2832 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:27:34.382266 kubelet[2832]: I1213 01:27:34.382237 2832 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:27:34.382266 kubelet[2832]: I1213 01:27:34.382257 2832 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:27:34.391307 kubelet[2832]: I1213 01:27:34.391282 2832 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:27:34.392275 kubelet[2832]: I1213 01:27:34.392261 2832 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:27:34.392396 kubelet[2832]: I1213 01:27:34.392385 2832 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:27:34.392472 kubelet[2832]: I1213 01:27:34.392463 2832 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:27:34.392562 kubelet[2832]: E1213 01:27:34.392553 2832 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:27:34.397992 kubelet[2832]: W1213 01:27:34.397955 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:34.398119 kubelet[2832]: E1213 01:27:34.398108 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:34.473873 kubelet[2832]: I1213 01:27:34.473814 2832 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:27:34.474186 kubelet[2832]: I1213 01:27:34.474058 2832 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:27:34.474186 kubelet[2832]: I1213 01:27:34.474080 2832 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:34.480868 kubelet[2832]: I1213 01:27:34.480842 2832 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:34.481244 kubelet[2832]: E1213 01:27:34.481224 2832 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.20:6443/api/v1/nodes\": dial tcp 10.200.20.20:6443: connect: connection refused" node="ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:34.481309 kubelet[2832]: 
I1213 01:27:34.481250 2832 policy_none.go:49] "None policy: Start" Dec 13 01:27:34.482032 kubelet[2832]: I1213 01:27:34.481998 2832 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:27:34.482126 kubelet[2832]: I1213 01:27:34.482040 2832 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:27:34.490310 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:27:34.492954 kubelet[2832]: E1213 01:27:34.492923 2832 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:27:34.498078 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:27:34.501062 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 01:27:34.509241 kubelet[2832]: I1213 01:27:34.509204 2832 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:27:34.509856 kubelet[2832]: I1213 01:27:34.509540 2832 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:27:34.513570 kubelet[2832]: E1213 01:27:34.513451 2832 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:34.580526 kubelet[2832]: E1213 01:27:34.580488 2832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-dd942dbb76?timeout=10s\": dial tcp 10.200.20.20:6443: connect: connection refused" interval="400ms" Dec 13 01:27:34.684384 kubelet[2832]: I1213 01:27:34.683426 2832 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:34.684384 kubelet[2832]: E1213 01:27:34.683739 2832 kubelet_node_status.go:96] "Unable to register node with API 
server" err="Post \"https://10.200.20.20:6443/api/v1/nodes\": dial tcp 10.200.20.20:6443: connect: connection refused" node="ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:34.694063 kubelet[2832]: I1213 01:27:34.694032 2832 topology_manager.go:215] "Topology Admit Handler" podUID="5c484bcea414edb497ad719efe52812c" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:34.695746 kubelet[2832]: I1213 01:27:34.695660 2832 topology_manager.go:215] "Topology Admit Handler" podUID="05a9d321efed6703ddc58fe5c23b1edc" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:34.697029 kubelet[2832]: I1213 01:27:34.696991 2832 topology_manager.go:215] "Topology Admit Handler" podUID="76c7f245089f26be3e38828d9b9434d8" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:34.703283 systemd[1]: Created slice kubepods-burstable-pod5c484bcea414edb497ad719efe52812c.slice - libcontainer container kubepods-burstable-pod5c484bcea414edb497ad719efe52812c.slice. Dec 13 01:27:34.729722 systemd[1]: Created slice kubepods-burstable-pod05a9d321efed6703ddc58fe5c23b1edc.slice - libcontainer container kubepods-burstable-pod05a9d321efed6703ddc58fe5c23b1edc.slice. Dec 13 01:27:34.744394 systemd[1]: Created slice kubepods-burstable-pod76c7f245089f26be3e38828d9b9434d8.slice - libcontainer container kubepods-burstable-pod76c7f245089f26be3e38828d9b9434d8.slice. 
Dec 13 01:27:34.781445 kubelet[2832]: I1213 01:27:34.781403 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/76c7f245089f26be3e38828d9b9434d8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-a-dd942dbb76\" (UID: \"76c7f245089f26be3e38828d9b9434d8\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:34.781445 kubelet[2832]: I1213 01:27:34.781451 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/76c7f245089f26be3e38828d9b9434d8-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-a-dd942dbb76\" (UID: \"76c7f245089f26be3e38828d9b9434d8\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:34.781826 kubelet[2832]: I1213 01:27:34.781497 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76c7f245089f26be3e38828d9b9434d8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-a-dd942dbb76\" (UID: \"76c7f245089f26be3e38828d9b9434d8\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:34.781826 kubelet[2832]: I1213 01:27:34.781518 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5c484bcea414edb497ad719efe52812c-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-a-dd942dbb76\" (UID: \"5c484bcea414edb497ad719efe52812c\") " pod="kube-system/kube-scheduler-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:34.781826 kubelet[2832]: I1213 01:27:34.781541 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/05a9d321efed6703ddc58fe5c23b1edc-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-a-dd942dbb76\" (UID: \"05a9d321efed6703ddc58fe5c23b1edc\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:34.781826 kubelet[2832]: I1213 01:27:34.781565 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05a9d321efed6703ddc58fe5c23b1edc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-a-dd942dbb76\" (UID: \"05a9d321efed6703ddc58fe5c23b1edc\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:34.781826 kubelet[2832]: I1213 01:27:34.781582 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05a9d321efed6703ddc58fe5c23b1edc-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-a-dd942dbb76\" (UID: \"05a9d321efed6703ddc58fe5c23b1edc\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:34.781944 kubelet[2832]: I1213 01:27:34.781601 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76c7f245089f26be3e38828d9b9434d8-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-dd942dbb76\" (UID: \"76c7f245089f26be3e38828d9b9434d8\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:34.781944 kubelet[2832]: I1213 01:27:34.781619 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76c7f245089f26be3e38828d9b9434d8-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-dd942dbb76\" (UID: \"76c7f245089f26be3e38828d9b9434d8\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:34.981938 kubelet[2832]: E1213 01:27:34.981874 2832 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-dd942dbb76?timeout=10s\": dial tcp 10.200.20.20:6443: connect: connection refused" interval="800ms" Dec 13 01:27:35.027755 containerd[1707]: time="2024-12-13T01:27:35.027712922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-a-dd942dbb76,Uid:5c484bcea414edb497ad719efe52812c,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:35.042879 containerd[1707]: time="2024-12-13T01:27:35.042823141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-a-dd942dbb76,Uid:05a9d321efed6703ddc58fe5c23b1edc,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:35.050506 containerd[1707]: time="2024-12-13T01:27:35.050289311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-a-dd942dbb76,Uid:76c7f245089f26be3e38828d9b9434d8,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:35.086150 kubelet[2832]: I1213 01:27:35.085880 2832 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:35.086292 kubelet[2832]: E1213 01:27:35.086213 2832 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.20:6443/api/v1/nodes\": dial tcp 10.200.20.20:6443: connect: connection refused" node="ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:35.311011 kubelet[2832]: W1213 01:27:35.310854 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:35.311011 kubelet[2832]: E1213 01:27:35.310917 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://10.200.20.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:35.408469 kubelet[2832]: W1213 01:27:35.408404 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:35.408469 kubelet[2832]: E1213 01:27:35.408472 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:35.423841 kubelet[2832]: W1213 01:27:35.423794 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-dd942dbb76&limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:35.423841 kubelet[2832]: E1213 01:27:35.423841 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-dd942dbb76&limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:35.705681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount117551581.mount: Deactivated successfully. 
Dec 13 01:27:35.747819 containerd[1707]: time="2024-12-13T01:27:35.747749492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:35.750934 containerd[1707]: time="2024-12-13T01:27:35.750891416Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:35.754547 containerd[1707]: time="2024-12-13T01:27:35.754500061Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Dec 13 01:27:35.757640 containerd[1707]: time="2024-12-13T01:27:35.757591225Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:27:35.761403 containerd[1707]: time="2024-12-13T01:27:35.761362070Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:35.766373 containerd[1707]: time="2024-12-13T01:27:35.766291716Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:35.771006 containerd[1707]: time="2024-12-13T01:27:35.770939282Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:27:35.775901 containerd[1707]: time="2024-12-13T01:27:35.775848448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:35.776880 
containerd[1707]: time="2024-12-13T01:27:35.776636289Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 748.845967ms" Dec 13 01:27:35.777714 containerd[1707]: time="2024-12-13T01:27:35.777672331Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 726.923099ms" Dec 13 01:27:35.783124 kubelet[2832]: E1213 01:27:35.783089 2832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-dd942dbb76?timeout=10s\": dial tcp 10.200.20.20:6443: connect: connection refused" interval="1.6s" Dec 13 01:27:35.789286 containerd[1707]: time="2024-12-13T01:27:35.789239946Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 746.342205ms" Dec 13 01:27:35.875971 kubelet[2832]: W1213 01:27:35.875885 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:35.875971 kubelet[2832]: E1213 01:27:35.875948 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch 
*v1.Service: failed to list *v1.Service: Get "https://10.200.20.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:35.888929 kubelet[2832]: I1213 01:27:35.888860 2832 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:35.889231 kubelet[2832]: E1213 01:27:35.889198 2832 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.20:6443/api/v1/nodes\": dial tcp 10.200.20.20:6443: connect: connection refused" node="ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:36.387447 kubelet[2832]: E1213 01:27:36.387402 2832 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:36.391957 containerd[1707]: time="2024-12-13T01:27:36.391726884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:36.391957 containerd[1707]: time="2024-12-13T01:27:36.391790924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:36.391957 containerd[1707]: time="2024-12-13T01:27:36.391819844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:36.391957 containerd[1707]: time="2024-12-13T01:27:36.391904924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:36.397335 containerd[1707]: time="2024-12-13T01:27:36.397224891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:36.397608 containerd[1707]: time="2024-12-13T01:27:36.397297091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:36.397798 containerd[1707]: time="2024-12-13T01:27:36.397691892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:36.398991 containerd[1707]: time="2024-12-13T01:27:36.398925733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:36.404418 containerd[1707]: time="2024-12-13T01:27:36.401066016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:36.404418 containerd[1707]: time="2024-12-13T01:27:36.401124856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:36.404418 containerd[1707]: time="2024-12-13T01:27:36.401141016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:36.404418 containerd[1707]: time="2024-12-13T01:27:36.401221136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:36.426576 systemd[1]: Started cri-containerd-1737996d4795fdbc766c204eda1bb1bb64b16d98fe4c5a40ed56b11c6e492841.scope - libcontainer container 1737996d4795fdbc766c204eda1bb1bb64b16d98fe4c5a40ed56b11c6e492841. Dec 13 01:27:36.427751 systemd[1]: Started cri-containerd-d212556042728d9a2f602d7d26b9e49caac3431e8cf4bf473a2be7b921e0fc1a.scope - libcontainer container d212556042728d9a2f602d7d26b9e49caac3431e8cf4bf473a2be7b921e0fc1a. 
Dec 13 01:27:36.434695 systemd[1]: Started cri-containerd-4be426f61353e92f8c1fbf4079d531f5a062cbcbab34c33b6c67a65abe3c10cd.scope - libcontainer container 4be426f61353e92f8c1fbf4079d531f5a062cbcbab34c33b6c67a65abe3c10cd. Dec 13 01:27:36.484571 containerd[1707]: time="2024-12-13T01:27:36.484325764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-a-dd942dbb76,Uid:76c7f245089f26be3e38828d9b9434d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"4be426f61353e92f8c1fbf4079d531f5a062cbcbab34c33b6c67a65abe3c10cd\"" Dec 13 01:27:36.491740 containerd[1707]: time="2024-12-13T01:27:36.491690413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-a-dd942dbb76,Uid:5c484bcea414edb497ad719efe52812c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d212556042728d9a2f602d7d26b9e49caac3431e8cf4bf473a2be7b921e0fc1a\"" Dec 13 01:27:36.491740 containerd[1707]: time="2024-12-13T01:27:36.491743773Z" level=info msg="CreateContainer within sandbox \"4be426f61353e92f8c1fbf4079d531f5a062cbcbab34c33b6c67a65abe3c10cd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:27:36.494697 containerd[1707]: time="2024-12-13T01:27:36.493669576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-a-dd942dbb76,Uid:05a9d321efed6703ddc58fe5c23b1edc,Namespace:kube-system,Attempt:0,} returns sandbox id \"1737996d4795fdbc766c204eda1bb1bb64b16d98fe4c5a40ed56b11c6e492841\"" Dec 13 01:27:36.497315 containerd[1707]: time="2024-12-13T01:27:36.497258420Z" level=info msg="CreateContainer within sandbox \"1737996d4795fdbc766c204eda1bb1bb64b16d98fe4c5a40ed56b11c6e492841\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:27:36.498020 containerd[1707]: time="2024-12-13T01:27:36.497992381Z" level=info msg="CreateContainer within sandbox \"d212556042728d9a2f602d7d26b9e49caac3431e8cf4bf473a2be7b921e0fc1a\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:27:37.207090 kubelet[2832]: W1213 01:27:37.207050 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-dd942dbb76&limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:37.207090 kubelet[2832]: E1213 01:27:37.207094 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-dd942dbb76&limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:37.523367 kubelet[2832]: W1213 01:27:37.374361 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:37.523367 kubelet[2832]: E1213 01:27:37.374402 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:37.523367 kubelet[2832]: E1213 01:27:37.383720 2832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-dd942dbb76?timeout=10s\": dial tcp 10.200.20.20:6443: connect: connection refused" interval="3.2s" Dec 13 01:27:37.523367 kubelet[2832]: I1213 01:27:37.491120 2832 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:37.523367 kubelet[2832]: E1213 01:27:37.491436 2832 
kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.20:6443/api/v1/nodes\": dial tcp 10.200.20.20:6443: connect: connection refused" node="ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:37.557351 containerd[1707]: time="2024-12-13T01:27:37.557267550Z" level=info msg="CreateContainer within sandbox \"4be426f61353e92f8c1fbf4079d531f5a062cbcbab34c33b6c67a65abe3c10cd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"57da0887dba2344dc30e5708cef4b8ef5e925029c703e676b02a677739355af6\"" Dec 13 01:27:37.558213 containerd[1707]: time="2024-12-13T01:27:37.558024831Z" level=info msg="StartContainer for \"57da0887dba2344dc30e5708cef4b8ef5e925029c703e676b02a677739355af6\"" Dec 13 01:27:37.561081 containerd[1707]: time="2024-12-13T01:27:37.560753434Z" level=info msg="CreateContainer within sandbox \"1737996d4795fdbc766c204eda1bb1bb64b16d98fe4c5a40ed56b11c6e492841\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"77cca324acd27d1720cb9278244174ea9d0fe0c355b33bb246397fdee993d64a\"" Dec 13 01:27:37.562935 containerd[1707]: time="2024-12-13T01:27:37.561778676Z" level=info msg="StartContainer for \"77cca324acd27d1720cb9278244174ea9d0fe0c355b33bb246397fdee993d64a\"" Dec 13 01:27:37.566741 kubelet[2832]: W1213 01:27:37.566677 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:37.566741 kubelet[2832]: E1213 01:27:37.566714 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused Dec 13 01:27:37.567257 containerd[1707]: 
time="2024-12-13T01:27:37.567221443Z" level=info msg="CreateContainer within sandbox \"d212556042728d9a2f602d7d26b9e49caac3431e8cf4bf473a2be7b921e0fc1a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f3ef0f4a8cb88fa06c3e2af9ed0b7cde1a4e9d67c02ef65e2a562d8195a18fba\"" Dec 13 01:27:37.568251 containerd[1707]: time="2024-12-13T01:27:37.568212284Z" level=info msg="StartContainer for \"f3ef0f4a8cb88fa06c3e2af9ed0b7cde1a4e9d67c02ef65e2a562d8195a18fba\"" Dec 13 01:27:37.591527 systemd[1]: Started cri-containerd-77cca324acd27d1720cb9278244174ea9d0fe0c355b33bb246397fdee993d64a.scope - libcontainer container 77cca324acd27d1720cb9278244174ea9d0fe0c355b33bb246397fdee993d64a. Dec 13 01:27:37.599512 systemd[1]: Started cri-containerd-57da0887dba2344dc30e5708cef4b8ef5e925029c703e676b02a677739355af6.scope - libcontainer container 57da0887dba2344dc30e5708cef4b8ef5e925029c703e676b02a677739355af6. Dec 13 01:27:37.617697 systemd[1]: Started cri-containerd-f3ef0f4a8cb88fa06c3e2af9ed0b7cde1a4e9d67c02ef65e2a562d8195a18fba.scope - libcontainer container f3ef0f4a8cb88fa06c3e2af9ed0b7cde1a4e9d67c02ef65e2a562d8195a18fba. Dec 13 01:27:37.649263 containerd[1707]: time="2024-12-13T01:27:37.647781387Z" level=info msg="StartContainer for \"77cca324acd27d1720cb9278244174ea9d0fe0c355b33bb246397fdee993d64a\" returns successfully" Dec 13 01:27:37.666844 containerd[1707]: time="2024-12-13T01:27:37.666772251Z" level=info msg="StartContainer for \"57da0887dba2344dc30e5708cef4b8ef5e925029c703e676b02a677739355af6\" returns successfully" Dec 13 01:27:37.674751 containerd[1707]: time="2024-12-13T01:27:37.674630621Z" level=info msg="StartContainer for \"f3ef0f4a8cb88fa06c3e2af9ed0b7cde1a4e9d67c02ef65e2a562d8195a18fba\" returns successfully" Dec 13 01:27:37.706445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3043854434.mount: Deactivated successfully. 
Dec 13 01:27:40.236262 kubelet[2832]: E1213 01:27:40.236216 2832 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.2.1-a-dd942dbb76" not found Dec 13 01:27:40.588367 kubelet[2832]: E1213 01:27:40.588025 2832 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.2.1-a-dd942dbb76\" not found" node="ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:40.602371 kubelet[2832]: E1213 01:27:40.602326 2832 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.2.1-a-dd942dbb76" not found Dec 13 01:27:40.693707 kubelet[2832]: I1213 01:27:40.693675 2832 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:40.703963 kubelet[2832]: I1213 01:27:40.703927 2832 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:40.716853 kubelet[2832]: E1213 01:27:40.716813 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:40.818412 kubelet[2832]: E1213 01:27:40.818366 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:40.919857 kubelet[2832]: E1213 01:27:40.919295 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:41.019956 kubelet[2832]: E1213 01:27:41.019901 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:41.120513 kubelet[2832]: E1213 01:27:41.120470 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:41.221291 kubelet[2832]: 
E1213 01:27:41.221239 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:41.321796 kubelet[2832]: E1213 01:27:41.321757 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:41.421976 kubelet[2832]: E1213 01:27:41.421915 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:41.522664 kubelet[2832]: E1213 01:27:41.522518 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:41.623365 kubelet[2832]: E1213 01:27:41.623314 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:41.724251 kubelet[2832]: E1213 01:27:41.724121 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:41.824726 kubelet[2832]: E1213 01:27:41.824610 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:41.925282 kubelet[2832]: E1213 01:27:41.925239 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:42.025763 kubelet[2832]: E1213 01:27:42.025723 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:42.126488 kubelet[2832]: E1213 01:27:42.126228 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:42.226710 kubelet[2832]: E1213 01:27:42.226671 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node 
\"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:42.327180 kubelet[2832]: E1213 01:27:42.327141 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:42.428189 kubelet[2832]: E1213 01:27:42.428075 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:42.529201 kubelet[2832]: E1213 01:27:42.529154 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:42.629676 kubelet[2832]: E1213 01:27:42.629634 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:42.730156 kubelet[2832]: E1213 01:27:42.730114 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:42.829901 systemd[1]: Reloading requested from client PID 3111 ('systemctl') (unit session-9.scope)... Dec 13 01:27:42.830203 systemd[1]: Reloading... Dec 13 01:27:42.830977 kubelet[2832]: E1213 01:27:42.830948 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:42.918474 zram_generator::config[3151]: No configuration found. Dec 13 01:27:42.931835 kubelet[2832]: E1213 01:27:42.931786 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:43.026612 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 01:27:43.032292 kubelet[2832]: E1213 01:27:43.032248 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:43.120232 systemd[1]: Reloading finished in 289 ms. Dec 13 01:27:43.133296 kubelet[2832]: E1213 01:27:43.133250 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-dd942dbb76\" not found" Dec 13 01:27:43.161004 kubelet[2832]: I1213 01:27:43.160948 2832 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:27:43.161380 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:43.174405 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:27:43.174633 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:43.181863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:43.278098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:43.289702 (kubelet)[3215]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:27:43.337427 kubelet[3215]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:27:43.337427 kubelet[3215]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:27:43.337427 kubelet[3215]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:27:43.338322 kubelet[3215]: I1213 01:27:43.337935 3215 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:27:43.343536 kubelet[3215]: I1213 01:27:43.343503 3215 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:27:43.343536 kubelet[3215]: I1213 01:27:43.343531 3215 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:27:43.343739 kubelet[3215]: I1213 01:27:43.343721 3215 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:27:43.345961 kubelet[3215]: I1213 01:27:43.345929 3215 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:27:43.348006 kubelet[3215]: I1213 01:27:43.347893 3215 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:27:43.355286 kubelet[3215]: I1213 01:27:43.355262 3215 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:27:43.355936 kubelet[3215]: I1213 01:27:43.355679 3215 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:27:43.355936 kubelet[3215]: I1213 01:27:43.355843 3215 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:27:43.355936 kubelet[3215]: I1213 01:27:43.355863 3215 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:27:43.355936 kubelet[3215]: I1213 01:27:43.355872 3215 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:27:43.355936 kubelet[3215]: I1213 
01:27:43.355901 3215 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:43.356197 kubelet[3215]: I1213 01:27:43.356013 3215 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:27:43.356197 kubelet[3215]: I1213 01:27:43.356026 3215 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:27:43.356197 kubelet[3215]: I1213 01:27:43.356048 3215 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:27:43.356197 kubelet[3215]: I1213 01:27:43.356062 3215 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:27:43.360409 kubelet[3215]: I1213 01:27:43.357331 3215 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:27:43.360409 kubelet[3215]: I1213 01:27:43.357529 3215 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:27:43.360409 kubelet[3215]: I1213 01:27:43.357886 3215 server.go:1256] "Started kubelet" Dec 13 01:27:43.360409 kubelet[3215]: I1213 01:27:43.359575 3215 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:27:43.362549 kubelet[3215]: I1213 01:27:43.362522 3215 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:27:43.371936 kubelet[3215]: I1213 01:27:43.371575 3215 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:27:43.371936 kubelet[3215]: I1213 01:27:43.371878 3215 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:27:43.375530 kubelet[3215]: I1213 01:27:43.375283 3215 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:27:43.379963 kubelet[3215]: I1213 01:27:43.379611 3215 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:27:43.379963 kubelet[3215]: I1213 01:27:43.379809 3215 reconciler_new.go:29] 
"Reconciler: start to sync state" Dec 13 01:27:43.392160 kubelet[3215]: I1213 01:27:43.392120 3215 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:27:43.400371 kubelet[3215]: I1213 01:27:43.396936 3215 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:27:43.400647 kubelet[3215]: I1213 01:27:43.400624 3215 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:27:43.408362 kubelet[3215]: I1213 01:27:43.406130 3215 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:27:43.414725 kubelet[3215]: I1213 01:27:43.414683 3215 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:27:43.415755 kubelet[3215]: I1213 01:27:43.415724 3215 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:27:43.415755 kubelet[3215]: I1213 01:27:43.415747 3215 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:27:43.415855 kubelet[3215]: I1213 01:27:43.415765 3215 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:27:43.415855 kubelet[3215]: E1213 01:27:43.415811 3215 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:27:43.470653 kubelet[3215]: I1213 01:27:43.470372 3215 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:27:43.470653 kubelet[3215]: I1213 01:27:43.470393 3215 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:27:43.470653 kubelet[3215]: I1213 01:27:43.470412 3215 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:43.470653 kubelet[3215]: I1213 01:27:43.470561 3215 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:27:43.470653 kubelet[3215]: I1213 01:27:43.470580 3215 
state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:27:43.470653 kubelet[3215]: I1213 01:27:43.470588 3215 policy_none.go:49] "None policy: Start" Dec 13 01:27:43.471406 kubelet[3215]: I1213 01:27:43.471392 3215 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:27:43.471489 kubelet[3215]: I1213 01:27:43.471481 3215 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:27:43.471721 kubelet[3215]: I1213 01:27:43.471710 3215 state_mem.go:75] "Updated machine memory state" Dec 13 01:27:43.475709 kubelet[3215]: I1213 01:27:43.475691 3215 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:27:43.476073 kubelet[3215]: I1213 01:27:43.476054 3215 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:27:43.480827 kubelet[3215]: I1213 01:27:43.478969 3215 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:43.496475 kubelet[3215]: I1213 01:27:43.496443 3215 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:43.496684 kubelet[3215]: I1213 01:27:43.496670 3215 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:43.516516 kubelet[3215]: I1213 01:27:43.516487 3215 topology_manager.go:215] "Topology Admit Handler" podUID="05a9d321efed6703ddc58fe5c23b1edc" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:43.516756 kubelet[3215]: I1213 01:27:43.516742 3215 topology_manager.go:215] "Topology Admit Handler" podUID="76c7f245089f26be3e38828d9b9434d8" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:43.517324 kubelet[3215]: I1213 01:27:43.517304 3215 topology_manager.go:215] "Topology Admit Handler" podUID="5c484bcea414edb497ad719efe52812c" podNamespace="kube-system" 
podName="kube-scheduler-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:43.528657 kubelet[3215]: W1213 01:27:43.528348 3215 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:27:43.531738 kubelet[3215]: W1213 01:27:43.531620 3215 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:27:43.533661 kubelet[3215]: W1213 01:27:43.532424 3215 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:27:43.582432 kubelet[3215]: I1213 01:27:43.582388 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/05a9d321efed6703ddc58fe5c23b1edc-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-a-dd942dbb76\" (UID: \"05a9d321efed6703ddc58fe5c23b1edc\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:43.582564 kubelet[3215]: I1213 01:27:43.582474 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76c7f245089f26be3e38828d9b9434d8-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-dd942dbb76\" (UID: \"76c7f245089f26be3e38828d9b9434d8\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:43.582564 kubelet[3215]: I1213 01:27:43.582529 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76c7f245089f26be3e38828d9b9434d8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-a-dd942dbb76\" (UID: \"76c7f245089f26be3e38828d9b9434d8\") " 
pod="kube-system/kube-controller-manager-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:43.582618 kubelet[3215]: I1213 01:27:43.582564 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/76c7f245089f26be3e38828d9b9434d8-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-a-dd942dbb76\" (UID: \"76c7f245089f26be3e38828d9b9434d8\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:43.582618 kubelet[3215]: I1213 01:27:43.582588 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5c484bcea414edb497ad719efe52812c-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-a-dd942dbb76\" (UID: \"5c484bcea414edb497ad719efe52812c\") " pod="kube-system/kube-scheduler-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:43.582618 kubelet[3215]: I1213 01:27:43.582608 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05a9d321efed6703ddc58fe5c23b1edc-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-a-dd942dbb76\" (UID: \"05a9d321efed6703ddc58fe5c23b1edc\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:43.582681 kubelet[3215]: I1213 01:27:43.582637 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05a9d321efed6703ddc58fe5c23b1edc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-a-dd942dbb76\" (UID: \"05a9d321efed6703ddc58fe5c23b1edc\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:43.582681 kubelet[3215]: I1213 01:27:43.582656 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/76c7f245089f26be3e38828d9b9434d8-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-dd942dbb76\" (UID: \"76c7f245089f26be3e38828d9b9434d8\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:43.582725 kubelet[3215]: I1213 01:27:43.582682 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/76c7f245089f26be3e38828d9b9434d8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-a-dd942dbb76\" (UID: \"76c7f245089f26be3e38828d9b9434d8\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:43.642279 sudo[3248]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 01:27:43.642594 sudo[3248]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 01:27:44.102926 sudo[3248]: pam_unix(sudo:session): session closed for user root Dec 13 01:27:44.365795 kubelet[3215]: I1213 01:27:44.365452 3215 apiserver.go:52] "Watching apiserver" Dec 13 01:27:44.380587 kubelet[3215]: I1213 01:27:44.380530 3215 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:27:44.467758 kubelet[3215]: W1213 01:27:44.467209 3215 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:27:44.467758 kubelet[3215]: E1213 01:27:44.467707 3215 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.1-a-dd942dbb76\" already exists" pod="kube-system/kube-apiserver-ci-4081.2.1-a-dd942dbb76" Dec 13 01:27:44.479202 kubelet[3215]: I1213 01:27:44.478182 3215 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.1-a-dd942dbb76" podStartSLOduration=1.478140111 podStartE2EDuration="1.478140111s" 
podCreationTimestamp="2024-12-13 01:27:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:44.476826269 +0000 UTC m=+1.183549049" watchObservedRunningTime="2024-12-13 01:27:44.478140111 +0000 UTC m=+1.184862891" Dec 13 01:27:44.490734 kubelet[3215]: I1213 01:27:44.489722 3215 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.2.1-a-dd942dbb76" podStartSLOduration=1.489683248 podStartE2EDuration="1.489683248s" podCreationTimestamp="2024-12-13 01:27:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:44.488783967 +0000 UTC m=+1.195506707" watchObservedRunningTime="2024-12-13 01:27:44.489683248 +0000 UTC m=+1.196406028" Dec 13 01:27:44.517964 kubelet[3215]: I1213 01:27:44.517622 3215 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.1-a-dd942dbb76" podStartSLOduration=1.517431331 podStartE2EDuration="1.517431331s" podCreationTimestamp="2024-12-13 01:27:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:44.502172027 +0000 UTC m=+1.208894767" watchObservedRunningTime="2024-12-13 01:27:44.517431331 +0000 UTC m=+1.224154111" Dec 13 01:27:45.762975 sudo[2199]: pam_unix(sudo:session): session closed for user root Dec 13 01:27:45.844597 sshd[2196]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:45.847304 systemd[1]: sshd@6-10.200.20.20:22-10.200.16.10:53990.service: Deactivated successfully. Dec 13 01:27:45.849231 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:27:45.849481 systemd[1]: session-9.scope: Consumed 6.525s CPU time, 185.7M memory peak, 0B memory swap peak. 
Dec 13 01:27:45.850897 systemd-logind[1668]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:27:45.851838 systemd-logind[1668]: Removed session 9. Dec 13 01:27:56.881154 kubelet[3215]: I1213 01:27:56.881047 3215 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:27:56.881575 containerd[1707]: time="2024-12-13T01:27:56.881515745Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:27:56.881772 kubelet[3215]: I1213 01:27:56.881723 3215 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:27:57.930186 kubelet[3215]: I1213 01:27:57.930136 3215 topology_manager.go:215] "Topology Admit Handler" podUID="241d6c53-cdc2-4a7a-8b22-fe1e5c840268" podNamespace="kube-system" podName="kube-proxy-5xtbz" Dec 13 01:27:57.941700 systemd[1]: Created slice kubepods-besteffort-pod241d6c53_cdc2_4a7a_8b22_fe1e5c840268.slice - libcontainer container kubepods-besteffort-pod241d6c53_cdc2_4a7a_8b22_fe1e5c840268.slice. Dec 13 01:27:57.956109 kubelet[3215]: I1213 01:27:57.956071 3215 topology_manager.go:215] "Topology Admit Handler" podUID="8f1a73c5-a080-483b-99e3-e4c53bd16003" podNamespace="kube-system" podName="cilium-9jt72" Dec 13 01:27:57.966660 systemd[1]: Created slice kubepods-burstable-pod8f1a73c5_a080_483b_99e3_e4c53bd16003.slice - libcontainer container kubepods-burstable-pod8f1a73c5_a080_483b_99e3_e4c53bd16003.slice. 
Dec 13 01:27:57.967286 kubelet[3215]: W1213 01:27:57.967029 3215 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4081.2.1-a-dd942dbb76" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.2.1-a-dd942dbb76' and this object Dec 13 01:27:57.967286 kubelet[3215]: E1213 01:27:57.967061 3215 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4081.2.1-a-dd942dbb76" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.2.1-a-dd942dbb76' and this object Dec 13 01:27:57.978043 kubelet[3215]: I1213 01:27:57.975189 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxkwf\" (UniqueName: \"kubernetes.io/projected/241d6c53-cdc2-4a7a-8b22-fe1e5c840268-kube-api-access-gxkwf\") pod \"kube-proxy-5xtbz\" (UID: \"241d6c53-cdc2-4a7a-8b22-fe1e5c840268\") " pod="kube-system/kube-proxy-5xtbz" Dec 13 01:27:57.978043 kubelet[3215]: I1213 01:27:57.975262 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-cni-path\") pod \"cilium-9jt72\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " pod="kube-system/cilium-9jt72" Dec 13 01:27:57.978043 kubelet[3215]: I1213 01:27:57.975297 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-cilium-cgroup\") pod \"cilium-9jt72\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " pod="kube-system/cilium-9jt72" Dec 13 01:27:57.978043 kubelet[3215]: I1213 
01:27:57.975331 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-xtables-lock\") pod \"cilium-9jt72\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " pod="kube-system/cilium-9jt72" Dec 13 01:27:57.978043 kubelet[3215]: I1213 01:27:57.975383 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/241d6c53-cdc2-4a7a-8b22-fe1e5c840268-lib-modules\") pod \"kube-proxy-5xtbz\" (UID: \"241d6c53-cdc2-4a7a-8b22-fe1e5c840268\") " pod="kube-system/kube-proxy-5xtbz" Dec 13 01:27:57.978043 kubelet[3215]: I1213 01:27:57.975423 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f1a73c5-a080-483b-99e3-e4c53bd16003-cilium-config-path\") pod \"cilium-9jt72\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " pod="kube-system/cilium-9jt72" Dec 13 01:27:57.978293 kubelet[3215]: I1213 01:27:57.975444 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-host-proc-sys-kernel\") pod \"cilium-9jt72\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " pod="kube-system/cilium-9jt72" Dec 13 01:27:57.978293 kubelet[3215]: I1213 01:27:57.975470 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ss6w\" (UniqueName: \"kubernetes.io/projected/8f1a73c5-a080-483b-99e3-e4c53bd16003-kube-api-access-8ss6w\") pod \"cilium-9jt72\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " pod="kube-system/cilium-9jt72" Dec 13 01:27:57.978293 kubelet[3215]: I1213 01:27:57.975502 3215 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-etc-cni-netd\") pod \"cilium-9jt72\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " pod="kube-system/cilium-9jt72" Dec 13 01:27:57.978293 kubelet[3215]: I1213 01:27:57.975527 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-host-proc-sys-net\") pod \"cilium-9jt72\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " pod="kube-system/cilium-9jt72" Dec 13 01:27:57.978293 kubelet[3215]: I1213 01:27:57.975560 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/241d6c53-cdc2-4a7a-8b22-fe1e5c840268-kube-proxy\") pod \"kube-proxy-5xtbz\" (UID: \"241d6c53-cdc2-4a7a-8b22-fe1e5c840268\") " pod="kube-system/kube-proxy-5xtbz" Dec 13 01:27:57.978467 kubelet[3215]: I1213 01:27:57.975585 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-cilium-run\") pod \"cilium-9jt72\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " pod="kube-system/cilium-9jt72" Dec 13 01:27:57.978467 kubelet[3215]: I1213 01:27:57.975608 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/241d6c53-cdc2-4a7a-8b22-fe1e5c840268-xtables-lock\") pod \"kube-proxy-5xtbz\" (UID: \"241d6c53-cdc2-4a7a-8b22-fe1e5c840268\") " pod="kube-system/kube-proxy-5xtbz" Dec 13 01:27:57.978467 kubelet[3215]: I1213 01:27:57.975642 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f1a73c5-a080-483b-99e3-e4c53bd16003-clustermesh-secrets\") pod \"cilium-9jt72\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " pod="kube-system/cilium-9jt72" Dec 13 01:27:57.978467 kubelet[3215]: I1213 01:27:57.975666 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f1a73c5-a080-483b-99e3-e4c53bd16003-hubble-tls\") pod \"cilium-9jt72\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " pod="kube-system/cilium-9jt72" Dec 13 01:27:57.978467 kubelet[3215]: I1213 01:27:57.975687 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-bpf-maps\") pod \"cilium-9jt72\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " pod="kube-system/cilium-9jt72" Dec 13 01:27:57.978467 kubelet[3215]: I1213 01:27:57.975734 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-hostproc\") pod \"cilium-9jt72\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " pod="kube-system/cilium-9jt72" Dec 13 01:27:57.978594 kubelet[3215]: I1213 01:27:57.975763 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-lib-modules\") pod \"cilium-9jt72\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " pod="kube-system/cilium-9jt72" Dec 13 01:27:57.997383 kubelet[3215]: I1213 01:27:57.997350 3215 topology_manager.go:215] "Topology Admit Handler" podUID="93eb33a2-0879-4a9f-bb76-b89f8c4b9c75" podNamespace="kube-system" podName="cilium-operator-5cc964979-gmmtd" Dec 13 01:27:58.005601 systemd[1]: Created slice kubepods-besteffort-pod93eb33a2_0879_4a9f_bb76_b89f8c4b9c75.slice - 
libcontainer container kubepods-besteffort-pod93eb33a2_0879_4a9f_bb76_b89f8c4b9c75.slice. Dec 13 01:27:58.076998 kubelet[3215]: I1213 01:27:58.076894 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93eb33a2-0879-4a9f-bb76-b89f8c4b9c75-cilium-config-path\") pod \"cilium-operator-5cc964979-gmmtd\" (UID: \"93eb33a2-0879-4a9f-bb76-b89f8c4b9c75\") " pod="kube-system/cilium-operator-5cc964979-gmmtd" Dec 13 01:27:58.077143 kubelet[3215]: I1213 01:27:58.077066 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zfgt\" (UniqueName: \"kubernetes.io/projected/93eb33a2-0879-4a9f-bb76-b89f8c4b9c75-kube-api-access-7zfgt\") pod \"cilium-operator-5cc964979-gmmtd\" (UID: \"93eb33a2-0879-4a9f-bb76-b89f8c4b9c75\") " pod="kube-system/cilium-operator-5cc964979-gmmtd" Dec 13 01:27:58.251269 containerd[1707]: time="2024-12-13T01:27:58.251224299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5xtbz,Uid:241d6c53-cdc2-4a7a-8b22-fe1e5c840268,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:58.290043 containerd[1707]: time="2024-12-13T01:27:58.289922825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:58.290043 containerd[1707]: time="2024-12-13T01:27:58.289977865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:58.290043 containerd[1707]: time="2024-12-13T01:27:58.289997385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:58.290971 containerd[1707]: time="2024-12-13T01:27:58.290081585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:58.305533 systemd[1]: Started cri-containerd-0e5384699058c9bc3f0a1f40db566caabd4057e56d9ac713274d9b3a456747d4.scope - libcontainer container 0e5384699058c9bc3f0a1f40db566caabd4057e56d9ac713274d9b3a456747d4. Dec 13 01:27:58.309664 containerd[1707]: time="2024-12-13T01:27:58.309621128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-gmmtd,Uid:93eb33a2-0879-4a9f-bb76-b89f8c4b9c75,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:58.332099 containerd[1707]: time="2024-12-13T01:27:58.332056715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5xtbz,Uid:241d6c53-cdc2-4a7a-8b22-fe1e5c840268,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e5384699058c9bc3f0a1f40db566caabd4057e56d9ac713274d9b3a456747d4\"" Dec 13 01:27:58.335864 containerd[1707]: time="2024-12-13T01:27:58.335815760Z" level=info msg="CreateContainer within sandbox \"0e5384699058c9bc3f0a1f40db566caabd4057e56d9ac713274d9b3a456747d4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:27:58.366616 containerd[1707]: time="2024-12-13T01:27:58.366519156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:58.366616 containerd[1707]: time="2024-12-13T01:27:58.366585236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:58.366977 containerd[1707]: time="2024-12-13T01:27:58.366652476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:58.367311 containerd[1707]: time="2024-12-13T01:27:58.367254437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:58.382515 systemd[1]: Started cri-containerd-8f0df17098521a13283369dbdea6d868b2c22385a33f03ff1dd102011a8a97a0.scope - libcontainer container 8f0df17098521a13283369dbdea6d868b2c22385a33f03ff1dd102011a8a97a0. Dec 13 01:27:58.389685 containerd[1707]: time="2024-12-13T01:27:58.389557664Z" level=info msg="CreateContainer within sandbox \"0e5384699058c9bc3f0a1f40db566caabd4057e56d9ac713274d9b3a456747d4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3337b3d5a0624bc1e3a812ff322d503cd287ffe39e56ba78872930e258ed5c51\"" Dec 13 01:27:58.390594 containerd[1707]: time="2024-12-13T01:27:58.390561745Z" level=info msg="StartContainer for \"3337b3d5a0624bc1e3a812ff322d503cd287ffe39e56ba78872930e258ed5c51\"" Dec 13 01:27:58.419654 systemd[1]: Started cri-containerd-3337b3d5a0624bc1e3a812ff322d503cd287ffe39e56ba78872930e258ed5c51.scope - libcontainer container 3337b3d5a0624bc1e3a812ff322d503cd287ffe39e56ba78872930e258ed5c51. 
Dec 13 01:27:58.425415 containerd[1707]: time="2024-12-13T01:27:58.425309906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-gmmtd,Uid:93eb33a2-0879-4a9f-bb76-b89f8c4b9c75,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f0df17098521a13283369dbdea6d868b2c22385a33f03ff1dd102011a8a97a0\"" Dec 13 01:27:58.430409 containerd[1707]: time="2024-12-13T01:27:58.430359472Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:27:58.453393 containerd[1707]: time="2024-12-13T01:27:58.453316340Z" level=info msg="StartContainer for \"3337b3d5a0624bc1e3a812ff322d503cd287ffe39e56ba78872930e258ed5c51\" returns successfully" Dec 13 01:27:58.873668 containerd[1707]: time="2024-12-13T01:27:58.873569321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9jt72,Uid:8f1a73c5-a080-483b-99e3-e4c53bd16003,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:58.915857 containerd[1707]: time="2024-12-13T01:27:58.915622091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:58.915857 containerd[1707]: time="2024-12-13T01:27:58.915687331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:58.915857 containerd[1707]: time="2024-12-13T01:27:58.915697651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:58.915857 containerd[1707]: time="2024-12-13T01:27:58.915780811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:58.934568 systemd[1]: Started cri-containerd-41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68.scope - libcontainer container 41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68. Dec 13 01:27:58.956476 containerd[1707]: time="2024-12-13T01:27:58.956415500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9jt72,Uid:8f1a73c5-a080-483b-99e3-e4c53bd16003,Namespace:kube-system,Attempt:0,} returns sandbox id \"41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68\"" Dec 13 01:28:00.193647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount63160933.mount: Deactivated successfully. Dec 13 01:28:00.683291 containerd[1707]: time="2024-12-13T01:28:00.682511359Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:00.684783 containerd[1707]: time="2024-12-13T01:28:00.684750122Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138286" Dec 13 01:28:00.689770 containerd[1707]: time="2024-12-13T01:28:00.688757927Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:00.691884 containerd[1707]: time="2024-12-13T01:28:00.691757090Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 
2.261349698s" Dec 13 01:28:00.691884 containerd[1707]: time="2024-12-13T01:28:00.691791730Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 01:28:00.693837 containerd[1707]: time="2024-12-13T01:28:00.693538772Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:28:00.695127 containerd[1707]: time="2024-12-13T01:28:00.695083094Z" level=info msg="CreateContainer within sandbox \"8f0df17098521a13283369dbdea6d868b2c22385a33f03ff1dd102011a8a97a0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 01:28:00.732569 containerd[1707]: time="2024-12-13T01:28:00.732519659Z" level=info msg="CreateContainer within sandbox \"8f0df17098521a13283369dbdea6d868b2c22385a33f03ff1dd102011a8a97a0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe\"" Dec 13 01:28:00.733369 containerd[1707]: time="2024-12-13T01:28:00.733233100Z" level=info msg="StartContainer for \"a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe\"" Dec 13 01:28:00.762554 systemd[1]: Started cri-containerd-a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe.scope - libcontainer container a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe. 
Dec 13 01:28:00.788588 containerd[1707]: time="2024-12-13T01:28:00.788457445Z" level=info msg="StartContainer for \"a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe\" returns successfully" Dec 13 01:28:01.497719 kubelet[3215]: I1213 01:28:01.497671 3215 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5xtbz" podStartSLOduration=4.497630651 podStartE2EDuration="4.497630651s" podCreationTimestamp="2024-12-13 01:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:58.48732086 +0000 UTC m=+15.194043640" watchObservedRunningTime="2024-12-13 01:28:01.497630651 +0000 UTC m=+18.204353431" Dec 13 01:28:03.430068 kubelet[3215]: I1213 01:28:03.429265 3215 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-gmmtd" podStartSLOduration=4.164744054 podStartE2EDuration="6.429223356s" podCreationTimestamp="2024-12-13 01:27:57 +0000 UTC" firstStartedPulling="2024-12-13 01:27:58.427907469 +0000 UTC m=+15.134630249" lastFinishedPulling="2024-12-13 01:28:00.692386731 +0000 UTC m=+17.399109551" observedRunningTime="2024-12-13 01:28:01.497478531 +0000 UTC m=+18.204201311" watchObservedRunningTime="2024-12-13 01:28:03.429223356 +0000 UTC m=+20.135946136" Dec 13 01:28:09.304523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2325810215.mount: Deactivated successfully. 
Dec 13 01:28:11.017084 containerd[1707]: time="2024-12-13T01:28:11.017028531Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:11.020377 containerd[1707]: time="2024-12-13T01:28:11.020289295Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651554" Dec 13 01:28:11.024972 containerd[1707]: time="2024-12-13T01:28:11.024927461Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:11.026418 containerd[1707]: time="2024-12-13T01:28:11.026384102Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.33260353s" Dec 13 01:28:11.026457 containerd[1707]: time="2024-12-13T01:28:11.026422982Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 13 01:28:11.029745 containerd[1707]: time="2024-12-13T01:28:11.029585146Z" level=info msg="CreateContainer within sandbox \"41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:28:11.063175 containerd[1707]: time="2024-12-13T01:28:11.063052105Z" level=info msg="CreateContainer within sandbox 
\"41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"47b35e1fdb83a2f386c9a0ad001fbc3f053a58d7134b96c8f5ee7cd5e4e1a61f\"" Dec 13 01:28:11.064075 containerd[1707]: time="2024-12-13T01:28:11.063935266Z" level=info msg="StartContainer for \"47b35e1fdb83a2f386c9a0ad001fbc3f053a58d7134b96c8f5ee7cd5e4e1a61f\"" Dec 13 01:28:11.088469 systemd[1]: run-containerd-runc-k8s.io-47b35e1fdb83a2f386c9a0ad001fbc3f053a58d7134b96c8f5ee7cd5e4e1a61f-runc.GpOUob.mount: Deactivated successfully. Dec 13 01:28:11.094582 systemd[1]: Started cri-containerd-47b35e1fdb83a2f386c9a0ad001fbc3f053a58d7134b96c8f5ee7cd5e4e1a61f.scope - libcontainer container 47b35e1fdb83a2f386c9a0ad001fbc3f053a58d7134b96c8f5ee7cd5e4e1a61f. Dec 13 01:28:11.124841 containerd[1707]: time="2024-12-13T01:28:11.124315297Z" level=info msg="StartContainer for \"47b35e1fdb83a2f386c9a0ad001fbc3f053a58d7134b96c8f5ee7cd5e4e1a61f\" returns successfully" Dec 13 01:28:11.127214 systemd[1]: cri-containerd-47b35e1fdb83a2f386c9a0ad001fbc3f053a58d7134b96c8f5ee7cd5e4e1a61f.scope: Deactivated successfully. Dec 13 01:28:12.049128 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47b35e1fdb83a2f386c9a0ad001fbc3f053a58d7134b96c8f5ee7cd5e4e1a61f-rootfs.mount: Deactivated successfully. 
Dec 13 01:28:12.786942 containerd[1707]: time="2024-12-13T01:28:12.786868409Z" level=info msg="shim disconnected" id=47b35e1fdb83a2f386c9a0ad001fbc3f053a58d7134b96c8f5ee7cd5e4e1a61f namespace=k8s.io Dec 13 01:28:12.786942 containerd[1707]: time="2024-12-13T01:28:12.786934889Z" level=warning msg="cleaning up after shim disconnected" id=47b35e1fdb83a2f386c9a0ad001fbc3f053a58d7134b96c8f5ee7cd5e4e1a61f namespace=k8s.io Dec 13 01:28:12.786942 containerd[1707]: time="2024-12-13T01:28:12.786946089Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:28:13.517629 containerd[1707]: time="2024-12-13T01:28:13.517213719Z" level=info msg="CreateContainer within sandbox \"41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:28:13.562975 containerd[1707]: time="2024-12-13T01:28:13.562918048Z" level=info msg="CreateContainer within sandbox \"41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f97c47b103cd08b4c086872e855c13b94b7768de2beac01565209bc12ada9dc1\"" Dec 13 01:28:13.563823 containerd[1707]: time="2024-12-13T01:28:13.563782209Z" level=info msg="StartContainer for \"f97c47b103cd08b4c086872e855c13b94b7768de2beac01565209bc12ada9dc1\"" Dec 13 01:28:13.589503 systemd[1]: Started cri-containerd-f97c47b103cd08b4c086872e855c13b94b7768de2beac01565209bc12ada9dc1.scope - libcontainer container f97c47b103cd08b4c086872e855c13b94b7768de2beac01565209bc12ada9dc1. Dec 13 01:28:13.616677 containerd[1707]: time="2024-12-13T01:28:13.616625065Z" level=info msg="StartContainer for \"f97c47b103cd08b4c086872e855c13b94b7768de2beac01565209bc12ada9dc1\" returns successfully" Dec 13 01:28:13.625170 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:28:13.625651 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Dec 13 01:28:13.625822 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:28:13.634703 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:28:13.636637 systemd[1]: cri-containerd-f97c47b103cd08b4c086872e855c13b94b7768de2beac01565209bc12ada9dc1.scope: Deactivated successfully. Dec 13 01:28:13.650525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f97c47b103cd08b4c086872e855c13b94b7768de2beac01565209bc12ada9dc1-rootfs.mount: Deactivated successfully. Dec 13 01:28:13.654843 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:28:13.668092 containerd[1707]: time="2024-12-13T01:28:13.668011640Z" level=info msg="shim disconnected" id=f97c47b103cd08b4c086872e855c13b94b7768de2beac01565209bc12ada9dc1 namespace=k8s.io Dec 13 01:28:13.668092 containerd[1707]: time="2024-12-13T01:28:13.668066400Z" level=warning msg="cleaning up after shim disconnected" id=f97c47b103cd08b4c086872e855c13b94b7768de2beac01565209bc12ada9dc1 namespace=k8s.io Dec 13 01:28:13.668431 containerd[1707]: time="2024-12-13T01:28:13.668076040Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:28:14.516475 containerd[1707]: time="2024-12-13T01:28:14.516327788Z" level=info msg="CreateContainer within sandbox \"41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:28:14.554859 containerd[1707]: time="2024-12-13T01:28:14.554810549Z" level=info msg="CreateContainer within sandbox \"41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d4ac6cb434447b6ef0d4dcb8ff71255603c7888b4a0c06a1c81a0b6e105595b1\"" Dec 13 01:28:14.555656 containerd[1707]: time="2024-12-13T01:28:14.555592230Z" level=info msg="StartContainer for \"d4ac6cb434447b6ef0d4dcb8ff71255603c7888b4a0c06a1c81a0b6e105595b1\"" Dec 13 01:28:14.583527 systemd[1]: 
Started cri-containerd-d4ac6cb434447b6ef0d4dcb8ff71255603c7888b4a0c06a1c81a0b6e105595b1.scope - libcontainer container d4ac6cb434447b6ef0d4dcb8ff71255603c7888b4a0c06a1c81a0b6e105595b1. Dec 13 01:28:14.611100 systemd[1]: cri-containerd-d4ac6cb434447b6ef0d4dcb8ff71255603c7888b4a0c06a1c81a0b6e105595b1.scope: Deactivated successfully. Dec 13 01:28:14.616848 containerd[1707]: time="2024-12-13T01:28:14.616811015Z" level=info msg="StartContainer for \"d4ac6cb434447b6ef0d4dcb8ff71255603c7888b4a0c06a1c81a0b6e105595b1\" returns successfully" Dec 13 01:28:14.634567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4ac6cb434447b6ef0d4dcb8ff71255603c7888b4a0c06a1c81a0b6e105595b1-rootfs.mount: Deactivated successfully. Dec 13 01:28:14.650348 containerd[1707]: time="2024-12-13T01:28:14.650262731Z" level=info msg="shim disconnected" id=d4ac6cb434447b6ef0d4dcb8ff71255603c7888b4a0c06a1c81a0b6e105595b1 namespace=k8s.io Dec 13 01:28:14.650348 containerd[1707]: time="2024-12-13T01:28:14.650329731Z" level=warning msg="cleaning up after shim disconnected" id=d4ac6cb434447b6ef0d4dcb8ff71255603c7888b4a0c06a1c81a0b6e105595b1 namespace=k8s.io Dec 13 01:28:14.650555 containerd[1707]: time="2024-12-13T01:28:14.650364811Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:28:15.521729 containerd[1707]: time="2024-12-13T01:28:15.521642543Z" level=info msg="CreateContainer within sandbox \"41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:28:15.560863 containerd[1707]: time="2024-12-13T01:28:15.560810865Z" level=info msg="CreateContainer within sandbox \"41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fb73b26e9ceb4ebbf3623304b42501173e44f3404a13493695277d08dd91c730\"" Dec 13 01:28:15.561443 containerd[1707]: time="2024-12-13T01:28:15.561417305Z" level=info 
msg="StartContainer for \"fb73b26e9ceb4ebbf3623304b42501173e44f3404a13493695277d08dd91c730\"" Dec 13 01:28:15.590589 systemd[1]: Started cri-containerd-fb73b26e9ceb4ebbf3623304b42501173e44f3404a13493695277d08dd91c730.scope - libcontainer container fb73b26e9ceb4ebbf3623304b42501173e44f3404a13493695277d08dd91c730. Dec 13 01:28:15.611584 systemd[1]: cri-containerd-fb73b26e9ceb4ebbf3623304b42501173e44f3404a13493695277d08dd91c730.scope: Deactivated successfully. Dec 13 01:28:15.616800 containerd[1707]: time="2024-12-13T01:28:15.616755845Z" level=info msg="StartContainer for \"fb73b26e9ceb4ebbf3623304b42501173e44f3404a13493695277d08dd91c730\" returns successfully" Dec 13 01:28:15.636594 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb73b26e9ceb4ebbf3623304b42501173e44f3404a13493695277d08dd91c730-rootfs.mount: Deactivated successfully. Dec 13 01:28:15.651365 containerd[1707]: time="2024-12-13T01:28:15.651292002Z" level=info msg="shim disconnected" id=fb73b26e9ceb4ebbf3623304b42501173e44f3404a13493695277d08dd91c730 namespace=k8s.io Dec 13 01:28:15.651670 containerd[1707]: time="2024-12-13T01:28:15.651520682Z" level=warning msg="cleaning up after shim disconnected" id=fb73b26e9ceb4ebbf3623304b42501173e44f3404a13493695277d08dd91c730 namespace=k8s.io Dec 13 01:28:15.651670 containerd[1707]: time="2024-12-13T01:28:15.651537242Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:28:16.524355 containerd[1707]: time="2024-12-13T01:28:16.523596615Z" level=info msg="CreateContainer within sandbox \"41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:28:16.566707 containerd[1707]: time="2024-12-13T01:28:16.566453180Z" level=info msg="CreateContainer within sandbox \"41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id 
\"45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2\"" Dec 13 01:28:16.567298 containerd[1707]: time="2024-12-13T01:28:16.567267781Z" level=info msg="StartContainer for \"45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2\"" Dec 13 01:28:16.625552 systemd[1]: Started cri-containerd-45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2.scope - libcontainer container 45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2. Dec 13 01:28:16.651844 containerd[1707]: time="2024-12-13T01:28:16.651792032Z" level=info msg="StartContainer for \"45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2\" returns successfully" Dec 13 01:28:16.805874 kubelet[3215]: I1213 01:28:16.805755 3215 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:28:16.844443 kubelet[3215]: I1213 01:28:16.842648 3215 topology_manager.go:215] "Topology Admit Handler" podUID="971bd107-7277-4818-8e94-84489814d010" podNamespace="kube-system" podName="coredns-76f75df574-7pl8z" Dec 13 01:28:16.854061 systemd[1]: Created slice kubepods-burstable-pod971bd107_7277_4818_8e94_84489814d010.slice - libcontainer container kubepods-burstable-pod971bd107_7277_4818_8e94_84489814d010.slice. Dec 13 01:28:16.863745 kubelet[3215]: I1213 01:28:16.863591 3215 topology_manager.go:215] "Topology Admit Handler" podUID="614f13f8-a7f4-405c-83c1-40fd9bd102f4" podNamespace="kube-system" podName="coredns-76f75df574-4kjwv" Dec 13 01:28:16.875571 systemd[1]: Created slice kubepods-burstable-pod614f13f8_a7f4_405c_83c1_40fd9bd102f4.slice - libcontainer container kubepods-burstable-pod614f13f8_a7f4_405c_83c1_40fd9bd102f4.slice. 
Dec 13 01:28:16.897273 kubelet[3215]: I1213 01:28:16.897222 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/614f13f8-a7f4-405c-83c1-40fd9bd102f4-config-volume\") pod \"coredns-76f75df574-4kjwv\" (UID: \"614f13f8-a7f4-405c-83c1-40fd9bd102f4\") " pod="kube-system/coredns-76f75df574-4kjwv" Dec 13 01:28:16.897273 kubelet[3215]: I1213 01:28:16.897272 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvc7v\" (UniqueName: \"kubernetes.io/projected/971bd107-7277-4818-8e94-84489814d010-kube-api-access-kvc7v\") pod \"coredns-76f75df574-7pl8z\" (UID: \"971bd107-7277-4818-8e94-84489814d010\") " pod="kube-system/coredns-76f75df574-7pl8z" Dec 13 01:28:16.898506 kubelet[3215]: I1213 01:28:16.897295 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58lvp\" (UniqueName: \"kubernetes.io/projected/614f13f8-a7f4-405c-83c1-40fd9bd102f4-kube-api-access-58lvp\") pod \"coredns-76f75df574-4kjwv\" (UID: \"614f13f8-a7f4-405c-83c1-40fd9bd102f4\") " pod="kube-system/coredns-76f75df574-4kjwv" Dec 13 01:28:16.898506 kubelet[3215]: I1213 01:28:16.897316 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/971bd107-7277-4818-8e94-84489814d010-config-volume\") pod \"coredns-76f75df574-7pl8z\" (UID: \"971bd107-7277-4818-8e94-84489814d010\") " pod="kube-system/coredns-76f75df574-7pl8z" Dec 13 01:28:17.161456 containerd[1707]: time="2024-12-13T01:28:17.160699416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7pl8z,Uid:971bd107-7277-4818-8e94-84489814d010,Namespace:kube-system,Attempt:0,}" Dec 13 01:28:17.179997 containerd[1707]: time="2024-12-13T01:28:17.179632356Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-4kjwv,Uid:614f13f8-a7f4-405c-83c1-40fd9bd102f4,Namespace:kube-system,Attempt:0,}" Dec 13 01:28:18.794561 systemd-networkd[1581]: cilium_host: Link UP Dec 13 01:28:18.795535 systemd-networkd[1581]: cilium_net: Link UP Dec 13 01:28:18.796474 systemd-networkd[1581]: cilium_net: Gained carrier Dec 13 01:28:18.797315 systemd-networkd[1581]: cilium_host: Gained carrier Dec 13 01:28:18.929230 systemd-networkd[1581]: cilium_vxlan: Link UP Dec 13 01:28:18.929239 systemd-networkd[1581]: cilium_vxlan: Gained carrier Dec 13 01:28:19.286432 kernel: NET: Registered PF_ALG protocol family Dec 13 01:28:19.384449 systemd-networkd[1581]: cilium_host: Gained IPv6LL Dec 13 01:28:19.768519 systemd-networkd[1581]: cilium_net: Gained IPv6LL Dec 13 01:28:20.009759 systemd-networkd[1581]: lxc_health: Link UP Dec 13 01:28:20.016750 systemd-networkd[1581]: lxc_health: Gained carrier Dec 13 01:28:20.232212 systemd-networkd[1581]: lxc05afb4468571: Link UP Dec 13 01:28:20.248881 kernel: eth0: renamed from tmp685f6 Dec 13 01:28:20.249908 systemd-networkd[1581]: lxc05afb4468571: Gained carrier Dec 13 01:28:20.260289 systemd-networkd[1581]: lxc687976b1a24d: Link UP Dec 13 01:28:20.268423 kernel: eth0: renamed from tmpd99d2 Dec 13 01:28:20.282827 systemd-networkd[1581]: lxc687976b1a24d: Gained carrier Dec 13 01:28:20.663598 systemd-networkd[1581]: cilium_vxlan: Gained IPv6LL Dec 13 01:28:20.899971 kubelet[3215]: I1213 01:28:20.899871 3215 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9jt72" podStartSLOduration=11.830951015 podStartE2EDuration="23.899828577s" podCreationTimestamp="2024-12-13 01:27:57 +0000 UTC" firstStartedPulling="2024-12-13 01:27:58.957731701 +0000 UTC m=+15.664454481" lastFinishedPulling="2024-12-13 01:28:11.026609263 +0000 UTC m=+27.733332043" observedRunningTime="2024-12-13 01:28:17.544741467 +0000 UTC m=+34.251464247" watchObservedRunningTime="2024-12-13 01:28:20.899828577 +0000 UTC 
m=+37.606551357" Dec 13 01:28:21.751614 systemd-networkd[1581]: lxc_health: Gained IPv6LL Dec 13 01:28:21.879478 systemd-networkd[1581]: lxc687976b1a24d: Gained IPv6LL Dec 13 01:28:22.071517 systemd-networkd[1581]: lxc05afb4468571: Gained IPv6LL Dec 13 01:28:24.039376 containerd[1707]: time="2024-12-13T01:28:24.037488289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:24.039376 containerd[1707]: time="2024-12-13T01:28:24.037544369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:24.039376 containerd[1707]: time="2024-12-13T01:28:24.037555489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:24.039376 containerd[1707]: time="2024-12-13T01:28:24.037646769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:24.052057 containerd[1707]: time="2024-12-13T01:28:24.051922225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:24.054534 containerd[1707]: time="2024-12-13T01:28:24.052066025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:24.054534 containerd[1707]: time="2024-12-13T01:28:24.052296626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:24.054679 containerd[1707]: time="2024-12-13T01:28:24.054466588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:24.089531 systemd[1]: Started cri-containerd-685f697cd8360ad8f0e3fadf9d4ec0239199df221b39c88ba06b96046c7f4498.scope - libcontainer container 685f697cd8360ad8f0e3fadf9d4ec0239199df221b39c88ba06b96046c7f4498. Dec 13 01:28:24.091803 systemd[1]: Started cri-containerd-d99d217c16a6c81f2204f14c5b8cfac76b61aae964b784f5507b2b3b1416c066.scope - libcontainer container d99d217c16a6c81f2204f14c5b8cfac76b61aae964b784f5507b2b3b1416c066. Dec 13 01:28:24.151479 containerd[1707]: time="2024-12-13T01:28:24.151435219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4kjwv,Uid:614f13f8-a7f4-405c-83c1-40fd9bd102f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"d99d217c16a6c81f2204f14c5b8cfac76b61aae964b784f5507b2b3b1416c066\"" Dec 13 01:28:24.159434 containerd[1707]: time="2024-12-13T01:28:24.159301708Z" level=info msg="CreateContainer within sandbox \"d99d217c16a6c81f2204f14c5b8cfac76b61aae964b784f5507b2b3b1416c066\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:28:24.164677 containerd[1707]: time="2024-12-13T01:28:24.163213913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7pl8z,Uid:971bd107-7277-4818-8e94-84489814d010,Namespace:kube-system,Attempt:0,} returns sandbox id \"685f697cd8360ad8f0e3fadf9d4ec0239199df221b39c88ba06b96046c7f4498\"" Dec 13 01:28:24.170042 containerd[1707]: time="2024-12-13T01:28:24.169882280Z" level=info msg="CreateContainer within sandbox \"685f697cd8360ad8f0e3fadf9d4ec0239199df221b39c88ba06b96046c7f4498\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:28:24.220127 containerd[1707]: time="2024-12-13T01:28:24.219994858Z" level=info msg="CreateContainer within sandbox \"d99d217c16a6c81f2204f14c5b8cfac76b61aae964b784f5507b2b3b1416c066\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"619e6f005555c07796cf0e05ef419f3af4a86c7259a181a25866b77738590364\"" Dec 
13 01:28:24.220537 containerd[1707]: time="2024-12-13T01:28:24.220505378Z" level=info msg="StartContainer for \"619e6f005555c07796cf0e05ef419f3af4a86c7259a181a25866b77738590364\"" Dec 13 01:28:24.225162 containerd[1707]: time="2024-12-13T01:28:24.225116664Z" level=info msg="CreateContainer within sandbox \"685f697cd8360ad8f0e3fadf9d4ec0239199df221b39c88ba06b96046c7f4498\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3e8eea98835312157643123a32fade8b09ebfd3b3f47b24718a1c281f3d9c2e4\"" Dec 13 01:28:24.226118 containerd[1707]: time="2024-12-13T01:28:24.226081665Z" level=info msg="StartContainer for \"3e8eea98835312157643123a32fade8b09ebfd3b3f47b24718a1c281f3d9c2e4\"" Dec 13 01:28:24.251563 systemd[1]: Started cri-containerd-619e6f005555c07796cf0e05ef419f3af4a86c7259a181a25866b77738590364.scope - libcontainer container 619e6f005555c07796cf0e05ef419f3af4a86c7259a181a25866b77738590364. Dec 13 01:28:24.257694 systemd[1]: Started cri-containerd-3e8eea98835312157643123a32fade8b09ebfd3b3f47b24718a1c281f3d9c2e4.scope - libcontainer container 3e8eea98835312157643123a32fade8b09ebfd3b3f47b24718a1c281f3d9c2e4. 
Dec 13 01:28:24.306191 containerd[1707]: time="2024-12-13T01:28:24.305528716Z" level=info msg="StartContainer for \"619e6f005555c07796cf0e05ef419f3af4a86c7259a181a25866b77738590364\" returns successfully" Dec 13 01:28:24.306519 containerd[1707]: time="2024-12-13T01:28:24.306453757Z" level=info msg="StartContainer for \"3e8eea98835312157643123a32fade8b09ebfd3b3f47b24718a1c281f3d9c2e4\" returns successfully" Dec 13 01:28:24.562404 kubelet[3215]: I1213 01:28:24.561630 3215 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-4kjwv" podStartSLOduration=27.561585089 podStartE2EDuration="27.561585089s" podCreationTimestamp="2024-12-13 01:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:28:24.560740288 +0000 UTC m=+41.267463068" watchObservedRunningTime="2024-12-13 01:28:24.561585089 +0000 UTC m=+41.268307949" Dec 13 01:28:24.576742 kubelet[3215]: I1213 01:28:24.576673 3215 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-7pl8z" podStartSLOduration=27.576628066 podStartE2EDuration="27.576628066s" podCreationTimestamp="2024-12-13 01:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:28:24.574782424 +0000 UTC m=+41.281505204" watchObservedRunningTime="2024-12-13 01:28:24.576628066 +0000 UTC m=+41.283350846" Dec 13 01:30:35.406299 systemd[1]: Started sshd@7-10.200.20.20:22-10.200.16.10:58404.service - OpenSSH per-connection server daemon (10.200.16.10:58404). 
Dec 13 01:30:35.813512 sshd[4606]: Accepted publickey for core from 10.200.16.10 port 58404 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:30:35.814887 sshd[4606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:35.818910 systemd-logind[1668]: New session 10 of user core. Dec 13 01:30:35.829668 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:30:36.211587 sshd[4606]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:36.215319 systemd[1]: sshd@7-10.200.20.20:22-10.200.16.10:58404.service: Deactivated successfully. Dec 13 01:30:36.217564 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:30:36.219212 systemd-logind[1668]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:30:36.220253 systemd-logind[1668]: Removed session 10. Dec 13 01:30:41.295676 systemd[1]: Started sshd@8-10.200.20.20:22-10.200.16.10:38048.service - OpenSSH per-connection server daemon (10.200.16.10:38048). Dec 13 01:30:41.723466 sshd[4620]: Accepted publickey for core from 10.200.16.10 port 38048 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:30:41.724911 sshd[4620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:41.728930 systemd-logind[1668]: New session 11 of user core. Dec 13 01:30:41.735500 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:30:42.098434 sshd[4620]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:42.102812 systemd[1]: sshd@8-10.200.20.20:22-10.200.16.10:38048.service: Deactivated successfully. Dec 13 01:30:42.105461 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:30:42.107132 systemd-logind[1668]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:30:42.108384 systemd-logind[1668]: Removed session 11. 
Dec 13 01:30:47.173622 systemd[1]: Started sshd@9-10.200.20.20:22-10.200.16.10:38062.service - OpenSSH per-connection server daemon (10.200.16.10:38062). Dec 13 01:30:47.602866 sshd[4636]: Accepted publickey for core from 10.200.16.10 port 38062 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:30:47.604202 sshd[4636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:47.608022 systemd-logind[1668]: New session 12 of user core. Dec 13 01:30:47.611579 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:30:47.974535 sshd[4636]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:47.977787 systemd[1]: sshd@9-10.200.20.20:22-10.200.16.10:38062.service: Deactivated successfully. Dec 13 01:30:47.979517 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:30:47.980249 systemd-logind[1668]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:30:47.981151 systemd-logind[1668]: Removed session 12. Dec 13 01:30:53.052430 systemd[1]: Started sshd@10-10.200.20.20:22-10.200.16.10:35686.service - OpenSSH per-connection server daemon (10.200.16.10:35686). Dec 13 01:30:53.463241 sshd[4650]: Accepted publickey for core from 10.200.16.10 port 35686 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:30:53.464647 sshd[4650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:53.473242 systemd-logind[1668]: New session 13 of user core. Dec 13 01:30:53.476252 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:30:53.840612 sshd[4650]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:53.844332 systemd[1]: sshd@10-10.200.20.20:22-10.200.16.10:35686.service: Deactivated successfully. Dec 13 01:30:53.847597 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:30:53.848736 systemd-logind[1668]: Session 13 logged out. Waiting for processes to exit. 
Dec 13 01:30:53.850120 systemd-logind[1668]: Removed session 13. Dec 13 01:30:53.915173 systemd[1]: Started sshd@11-10.200.20.20:22-10.200.16.10:35692.service - OpenSSH per-connection server daemon (10.200.16.10:35692). Dec 13 01:30:54.321540 sshd[4664]: Accepted publickey for core from 10.200.16.10 port 35692 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:30:54.322965 sshd[4664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:54.327815 systemd-logind[1668]: New session 14 of user core. Dec 13 01:30:54.332522 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:30:54.737001 sshd[4664]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:54.740636 systemd[1]: sshd@11-10.200.20.20:22-10.200.16.10:35692.service: Deactivated successfully. Dec 13 01:30:54.742262 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:30:54.742945 systemd-logind[1668]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:30:54.744000 systemd-logind[1668]: Removed session 14. Dec 13 01:30:54.817930 systemd[1]: Started sshd@12-10.200.20.20:22-10.200.16.10:35696.service - OpenSSH per-connection server daemon (10.200.16.10:35696). Dec 13 01:30:55.232626 sshd[4675]: Accepted publickey for core from 10.200.16.10 port 35696 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:30:55.234068 sshd[4675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:55.238532 systemd-logind[1668]: New session 15 of user core. Dec 13 01:30:55.244522 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:30:55.598438 sshd[4675]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:55.602473 systemd[1]: sshd@12-10.200.20.20:22-10.200.16.10:35696.service: Deactivated successfully. Dec 13 01:30:55.605149 systemd[1]: session-15.scope: Deactivated successfully. 
Dec 13 01:30:55.606254 systemd-logind[1668]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:30:55.607728 systemd-logind[1668]: Removed session 15. Dec 13 01:31:00.673622 systemd[1]: Started sshd@13-10.200.20.20:22-10.200.16.10:46212.service - OpenSSH per-connection server daemon (10.200.16.10:46212). Dec 13 01:31:01.079663 sshd[4689]: Accepted publickey for core from 10.200.16.10 port 46212 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:01.081072 sshd[4689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:01.085014 systemd-logind[1668]: New session 16 of user core. Dec 13 01:31:01.089508 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:31:01.462175 sshd[4689]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:01.466148 systemd[1]: sshd@13-10.200.20.20:22-10.200.16.10:46212.service: Deactivated successfully. Dec 13 01:31:01.468259 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:31:01.470211 systemd-logind[1668]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:31:01.471171 systemd-logind[1668]: Removed session 16. Dec 13 01:31:06.544538 systemd[1]: Started sshd@14-10.200.20.20:22-10.200.16.10:46224.service - OpenSSH per-connection server daemon (10.200.16.10:46224). Dec 13 01:31:06.962050 sshd[4702]: Accepted publickey for core from 10.200.16.10 port 46224 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:06.963499 sshd[4702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:06.967450 systemd-logind[1668]: New session 17 of user core. Dec 13 01:31:06.977539 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:31:07.334586 sshd[4702]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:07.338412 systemd[1]: sshd@14-10.200.20.20:22-10.200.16.10:46224.service: Deactivated successfully. 
Dec 13 01:31:07.341033 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:31:07.342582 systemd-logind[1668]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:31:07.343960 systemd-logind[1668]: Removed session 17. Dec 13 01:31:07.419825 systemd[1]: Started sshd@15-10.200.20.20:22-10.200.16.10:46232.service - OpenSSH per-connection server daemon (10.200.16.10:46232). Dec 13 01:31:07.828967 sshd[4714]: Accepted publickey for core from 10.200.16.10 port 46232 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:07.830358 sshd[4714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:07.834587 systemd-logind[1668]: New session 18 of user core. Dec 13 01:31:07.840583 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:31:08.247557 sshd[4714]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:08.251163 systemd-logind[1668]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:31:08.252134 systemd[1]: sshd@15-10.200.20.20:22-10.200.16.10:46232.service: Deactivated successfully. Dec 13 01:31:08.254752 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:31:08.255826 systemd-logind[1668]: Removed session 18. Dec 13 01:31:08.323623 systemd[1]: Started sshd@16-10.200.20.20:22-10.200.16.10:46240.service - OpenSSH per-connection server daemon (10.200.16.10:46240). Dec 13 01:31:08.728634 sshd[4724]: Accepted publickey for core from 10.200.16.10 port 46240 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:08.730079 sshd[4724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:08.733970 systemd-logind[1668]: New session 19 of user core. Dec 13 01:31:08.742564 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 13 01:31:10.300789 sshd[4724]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:10.304127 systemd[1]: sshd@16-10.200.20.20:22-10.200.16.10:46240.service: Deactivated successfully. Dec 13 01:31:10.305743 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:31:10.307882 systemd-logind[1668]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:31:10.309293 systemd-logind[1668]: Removed session 19. Dec 13 01:31:10.383932 systemd[1]: Started sshd@17-10.200.20.20:22-10.200.16.10:55866.service - OpenSSH per-connection server daemon (10.200.16.10:55866). Dec 13 01:31:10.811542 sshd[4742]: Accepted publickey for core from 10.200.16.10 port 55866 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:10.812992 sshd[4742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:10.816851 systemd-logind[1668]: New session 20 of user core. Dec 13 01:31:10.824570 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:31:11.289185 sshd[4742]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:11.291942 systemd[1]: sshd@17-10.200.20.20:22-10.200.16.10:55866.service: Deactivated successfully. Dec 13 01:31:11.294739 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:31:11.296857 systemd-logind[1668]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:31:11.297914 systemd-logind[1668]: Removed session 20. Dec 13 01:31:11.363293 systemd[1]: Started sshd@18-10.200.20.20:22-10.200.16.10:55868.service - OpenSSH per-connection server daemon (10.200.16.10:55868). Dec 13 01:31:11.778181 sshd[4753]: Accepted publickey for core from 10.200.16.10 port 55868 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:11.779594 sshd[4753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:11.784309 systemd-logind[1668]: New session 21 of user core. 
Dec 13 01:31:11.788581 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:31:12.136590 sshd[4753]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:12.140440 systemd-logind[1668]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:31:12.141143 systemd[1]: sshd@18-10.200.20.20:22-10.200.16.10:55868.service: Deactivated successfully. Dec 13 01:31:12.143675 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:31:12.144933 systemd-logind[1668]: Removed session 21. Dec 13 01:31:17.217626 systemd[1]: Started sshd@19-10.200.20.20:22-10.200.16.10:55884.service - OpenSSH per-connection server daemon (10.200.16.10:55884). Dec 13 01:31:17.635644 sshd[4769]: Accepted publickey for core from 10.200.16.10 port 55884 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:17.637051 sshd[4769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:17.641434 systemd-logind[1668]: New session 22 of user core. Dec 13 01:31:17.648596 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 01:31:18.000135 sshd[4769]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:18.003865 systemd-logind[1668]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:31:18.004131 systemd[1]: sshd@19-10.200.20.20:22-10.200.16.10:55884.service: Deactivated successfully. Dec 13 01:31:18.005851 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:31:18.007789 systemd-logind[1668]: Removed session 22. Dec 13 01:31:23.076293 systemd[1]: Started sshd@20-10.200.20.20:22-10.200.16.10:50632.service - OpenSSH per-connection server daemon (10.200.16.10:50632). 
Dec 13 01:31:23.482515 sshd[4782]: Accepted publickey for core from 10.200.16.10 port 50632 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:23.483912 sshd[4782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:23.487958 systemd-logind[1668]: New session 23 of user core. Dec 13 01:31:23.495504 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 01:31:23.859478 sshd[4782]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:23.862811 systemd[1]: sshd@20-10.200.20.20:22-10.200.16.10:50632.service: Deactivated successfully. Dec 13 01:31:23.865165 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:31:23.867423 systemd-logind[1668]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:31:23.868865 systemd-logind[1668]: Removed session 23. Dec 13 01:31:28.939244 systemd[1]: Started sshd@21-10.200.20.20:22-10.200.16.10:35984.service - OpenSSH per-connection server daemon (10.200.16.10:35984). Dec 13 01:31:29.370057 sshd[4798]: Accepted publickey for core from 10.200.16.10 port 35984 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:29.371442 sshd[4798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:29.377095 systemd-logind[1668]: New session 24 of user core. Dec 13 01:31:29.380504 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 01:31:29.741923 sshd[4798]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:29.744591 systemd[1]: sshd@21-10.200.20.20:22-10.200.16.10:35984.service: Deactivated successfully. Dec 13 01:31:29.746598 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:31:29.748666 systemd-logind[1668]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:31:29.750083 systemd-logind[1668]: Removed session 24. 
Dec 13 01:31:29.817255 systemd[1]: Started sshd@22-10.200.20.20:22-10.200.16.10:35992.service - OpenSSH per-connection server daemon (10.200.16.10:35992). Dec 13 01:31:30.232111 sshd[4814]: Accepted publickey for core from 10.200.16.10 port 35992 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:30.233578 sshd[4814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:30.238195 systemd-logind[1668]: New session 25 of user core. Dec 13 01:31:30.246557 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 01:31:31.973769 systemd[1]: run-containerd-runc-k8s.io-45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2-runc.WmUcEF.mount: Deactivated successfully. Dec 13 01:31:31.982146 containerd[1707]: time="2024-12-13T01:31:31.982019398Z" level=info msg="StopContainer for \"a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe\" with timeout 30 (s)" Dec 13 01:31:31.983404 containerd[1707]: time="2024-12-13T01:31:31.983227039Z" level=info msg="Stop container \"a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe\" with signal terminated" Dec 13 01:31:31.992308 containerd[1707]: time="2024-12-13T01:31:31.992262488Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:31:31.997853 systemd[1]: cri-containerd-a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe.scope: Deactivated successfully. 
Dec 13 01:31:32.006319 containerd[1707]: time="2024-12-13T01:31:32.004685780Z" level=info msg="StopContainer for \"45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2\" with timeout 2 (s)" Dec 13 01:31:32.006891 containerd[1707]: time="2024-12-13T01:31:32.006763982Z" level=info msg="Stop container \"45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2\" with signal terminated" Dec 13 01:31:32.014794 systemd-networkd[1581]: lxc_health: Link DOWN Dec 13 01:31:32.014801 systemd-networkd[1581]: lxc_health: Lost carrier Dec 13 01:31:32.029777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe-rootfs.mount: Deactivated successfully. Dec 13 01:31:32.039475 systemd[1]: cri-containerd-45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2.scope: Deactivated successfully. Dec 13 01:31:32.039797 systemd[1]: cri-containerd-45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2.scope: Consumed 6.536s CPU time. Dec 13 01:31:32.059304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2-rootfs.mount: Deactivated successfully. 
Dec 13 01:31:32.082196 containerd[1707]: time="2024-12-13T01:31:32.082132255Z" level=info msg="shim disconnected" id=a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe namespace=k8s.io Dec 13 01:31:32.082196 containerd[1707]: time="2024-12-13T01:31:32.082187495Z" level=warning msg="cleaning up after shim disconnected" id=a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe namespace=k8s.io Dec 13 01:31:32.082196 containerd[1707]: time="2024-12-13T01:31:32.082196375Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:31:32.085552 containerd[1707]: time="2024-12-13T01:31:32.085411978Z" level=info msg="shim disconnected" id=45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2 namespace=k8s.io Dec 13 01:31:32.085552 containerd[1707]: time="2024-12-13T01:31:32.085485098Z" level=warning msg="cleaning up after shim disconnected" id=45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2 namespace=k8s.io Dec 13 01:31:32.085552 containerd[1707]: time="2024-12-13T01:31:32.085499298Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:31:32.101811 containerd[1707]: time="2024-12-13T01:31:32.101636154Z" level=info msg="StopContainer for \"a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe\" returns successfully" Dec 13 01:31:32.103220 containerd[1707]: time="2024-12-13T01:31:32.103080835Z" level=info msg="StopPodSandbox for \"8f0df17098521a13283369dbdea6d868b2c22385a33f03ff1dd102011a8a97a0\"" Dec 13 01:31:32.103220 containerd[1707]: time="2024-12-13T01:31:32.103124595Z" level=info msg="Container to stop \"a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:31:32.105183 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f0df17098521a13283369dbdea6d868b2c22385a33f03ff1dd102011a8a97a0-shm.mount: Deactivated successfully. 
Dec 13 01:31:32.114378 containerd[1707]: time="2024-12-13T01:31:32.114207286Z" level=info msg="StopContainer for \"45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2\" returns successfully" Dec 13 01:31:32.114766 systemd[1]: cri-containerd-8f0df17098521a13283369dbdea6d868b2c22385a33f03ff1dd102011a8a97a0.scope: Deactivated successfully. Dec 13 01:31:32.116774 containerd[1707]: time="2024-12-13T01:31:32.116745529Z" level=info msg="StopPodSandbox for \"41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68\"" Dec 13 01:31:32.116914 containerd[1707]: time="2024-12-13T01:31:32.116896689Z" level=info msg="Container to stop \"45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:31:32.117082 containerd[1707]: time="2024-12-13T01:31:32.116963929Z" level=info msg="Container to stop \"47b35e1fdb83a2f386c9a0ad001fbc3f053a58d7134b96c8f5ee7cd5e4e1a61f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:31:32.117082 containerd[1707]: time="2024-12-13T01:31:32.116979689Z" level=info msg="Container to stop \"fb73b26e9ceb4ebbf3623304b42501173e44f3404a13493695277d08dd91c730\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:31:32.117082 containerd[1707]: time="2024-12-13T01:31:32.116989969Z" level=info msg="Container to stop \"d4ac6cb434447b6ef0d4dcb8ff71255603c7888b4a0c06a1c81a0b6e105595b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:31:32.117082 containerd[1707]: time="2024-12-13T01:31:32.116999769Z" level=info msg="Container to stop \"f97c47b103cd08b4c086872e855c13b94b7768de2beac01565209bc12ada9dc1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:31:32.126441 systemd[1]: cri-containerd-41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68.scope: Deactivated successfully. 
Dec 13 01:31:32.158330 containerd[1707]: time="2024-12-13T01:31:32.158206929Z" level=info msg="shim disconnected" id=41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68 namespace=k8s.io Dec 13 01:31:32.158330 containerd[1707]: time="2024-12-13T01:31:32.158260969Z" level=warning msg="cleaning up after shim disconnected" id=41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68 namespace=k8s.io Dec 13 01:31:32.158330 containerd[1707]: time="2024-12-13T01:31:32.158272969Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:31:32.158644 containerd[1707]: time="2024-12-13T01:31:32.158561969Z" level=info msg="shim disconnected" id=8f0df17098521a13283369dbdea6d868b2c22385a33f03ff1dd102011a8a97a0 namespace=k8s.io Dec 13 01:31:32.158644 containerd[1707]: time="2024-12-13T01:31:32.158591009Z" level=warning msg="cleaning up after shim disconnected" id=8f0df17098521a13283369dbdea6d868b2c22385a33f03ff1dd102011a8a97a0 namespace=k8s.io Dec 13 01:31:32.158644 containerd[1707]: time="2024-12-13T01:31:32.158617289Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:31:32.172438 containerd[1707]: time="2024-12-13T01:31:32.172242583Z" level=info msg="TearDown network for sandbox \"41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68\" successfully" Dec 13 01:31:32.172438 containerd[1707]: time="2024-12-13T01:31:32.172277543Z" level=info msg="StopPodSandbox for \"41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68\" returns successfully" Dec 13 01:31:32.172632 containerd[1707]: time="2024-12-13T01:31:32.172556783Z" level=info msg="TearDown network for sandbox \"8f0df17098521a13283369dbdea6d868b2c22385a33f03ff1dd102011a8a97a0\" successfully" Dec 13 01:31:32.172632 containerd[1707]: time="2024-12-13T01:31:32.172576743Z" level=info msg="StopPodSandbox for \"8f0df17098521a13283369dbdea6d868b2c22385a33f03ff1dd102011a8a97a0\" returns successfully" Dec 13 01:31:32.305377 kubelet[3215]: I1213 01:31:32.304451 3215 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f1a73c5-a080-483b-99e3-e4c53bd16003-hubble-tls\") pod \"8f1a73c5-a080-483b-99e3-e4c53bd16003\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " Dec 13 01:31:32.305377 kubelet[3215]: I1213 01:31:32.304500 3215 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93eb33a2-0879-4a9f-bb76-b89f8c4b9c75-cilium-config-path\") pod \"93eb33a2-0879-4a9f-bb76-b89f8c4b9c75\" (UID: \"93eb33a2-0879-4a9f-bb76-b89f8c4b9c75\") " Dec 13 01:31:32.305377 kubelet[3215]: I1213 01:31:32.304522 3215 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-cilium-run\") pod \"8f1a73c5-a080-483b-99e3-e4c53bd16003\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " Dec 13 01:31:32.305377 kubelet[3215]: I1213 01:31:32.304544 3215 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f1a73c5-a080-483b-99e3-e4c53bd16003-clustermesh-secrets\") pod \"8f1a73c5-a080-483b-99e3-e4c53bd16003\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " Dec 13 01:31:32.305377 kubelet[3215]: I1213 01:31:32.304562 3215 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-xtables-lock\") pod \"8f1a73c5-a080-483b-99e3-e4c53bd16003\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " Dec 13 01:31:32.305377 kubelet[3215]: I1213 01:31:32.304585 3215 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f1a73c5-a080-483b-99e3-e4c53bd16003-cilium-config-path\") pod \"8f1a73c5-a080-483b-99e3-e4c53bd16003\" (UID: 
\"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " Dec 13 01:31:32.305817 kubelet[3215]: I1213 01:31:32.304604 3215 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-host-proc-sys-kernel\") pod \"8f1a73c5-a080-483b-99e3-e4c53bd16003\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " Dec 13 01:31:32.305817 kubelet[3215]: I1213 01:31:32.304623 3215 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zfgt\" (UniqueName: \"kubernetes.io/projected/93eb33a2-0879-4a9f-bb76-b89f8c4b9c75-kube-api-access-7zfgt\") pod \"93eb33a2-0879-4a9f-bb76-b89f8c4b9c75\" (UID: \"93eb33a2-0879-4a9f-bb76-b89f8c4b9c75\") " Dec 13 01:31:32.305817 kubelet[3215]: I1213 01:31:32.304647 3215 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-cilium-cgroup\") pod \"8f1a73c5-a080-483b-99e3-e4c53bd16003\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " Dec 13 01:31:32.305817 kubelet[3215]: I1213 01:31:32.304667 3215 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ss6w\" (UniqueName: \"kubernetes.io/projected/8f1a73c5-a080-483b-99e3-e4c53bd16003-kube-api-access-8ss6w\") pod \"8f1a73c5-a080-483b-99e3-e4c53bd16003\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " Dec 13 01:31:32.305817 kubelet[3215]: I1213 01:31:32.304684 3215 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-host-proc-sys-net\") pod \"8f1a73c5-a080-483b-99e3-e4c53bd16003\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " Dec 13 01:31:32.305817 kubelet[3215]: I1213 01:31:32.304702 3215 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-bpf-maps\") pod \"8f1a73c5-a080-483b-99e3-e4c53bd16003\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " Dec 13 01:31:32.305967 kubelet[3215]: I1213 01:31:32.304720 3215 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-cni-path\") pod \"8f1a73c5-a080-483b-99e3-e4c53bd16003\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " Dec 13 01:31:32.305967 kubelet[3215]: I1213 01:31:32.304742 3215 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-lib-modules\") pod \"8f1a73c5-a080-483b-99e3-e4c53bd16003\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " Dec 13 01:31:32.305967 kubelet[3215]: I1213 01:31:32.304759 3215 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-etc-cni-netd\") pod \"8f1a73c5-a080-483b-99e3-e4c53bd16003\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " Dec 13 01:31:32.305967 kubelet[3215]: I1213 01:31:32.304776 3215 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-hostproc\") pod \"8f1a73c5-a080-483b-99e3-e4c53bd16003\" (UID: \"8f1a73c5-a080-483b-99e3-e4c53bd16003\") " Dec 13 01:31:32.305967 kubelet[3215]: I1213 01:31:32.304849 3215 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-hostproc" (OuterVolumeSpecName: "hostproc") pod "8f1a73c5-a080-483b-99e3-e4c53bd16003" (UID: "8f1a73c5-a080-483b-99e3-e4c53bd16003"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:31:32.308350 kubelet[3215]: I1213 01:31:32.307572 3215 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8f1a73c5-a080-483b-99e3-e4c53bd16003" (UID: "8f1a73c5-a080-483b-99e3-e4c53bd16003"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:31:32.308350 kubelet[3215]: I1213 01:31:32.307962 3215 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8f1a73c5-a080-483b-99e3-e4c53bd16003" (UID: "8f1a73c5-a080-483b-99e3-e4c53bd16003"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:31:32.308350 kubelet[3215]: I1213 01:31:32.307997 3215 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8f1a73c5-a080-483b-99e3-e4c53bd16003" (UID: "8f1a73c5-a080-483b-99e3-e4c53bd16003"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:31:32.308350 kubelet[3215]: I1213 01:31:32.308014 3215 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-cni-path" (OuterVolumeSpecName: "cni-path") pod "8f1a73c5-a080-483b-99e3-e4c53bd16003" (UID: "8f1a73c5-a080-483b-99e3-e4c53bd16003"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:31:32.308350 kubelet[3215]: I1213 01:31:32.308031 3215 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8f1a73c5-a080-483b-99e3-e4c53bd16003" (UID: "8f1a73c5-a080-483b-99e3-e4c53bd16003"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:31:32.308527 kubelet[3215]: I1213 01:31:32.308039 3215 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8f1a73c5-a080-483b-99e3-e4c53bd16003" (UID: "8f1a73c5-a080-483b-99e3-e4c53bd16003"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:31:32.308527 kubelet[3215]: I1213 01:31:32.308048 3215 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8f1a73c5-a080-483b-99e3-e4c53bd16003" (UID: "8f1a73c5-a080-483b-99e3-e4c53bd16003"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:31:32.308527 kubelet[3215]: I1213 01:31:32.308098 3215 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93eb33a2-0879-4a9f-bb76-b89f8c4b9c75-kube-api-access-7zfgt" (OuterVolumeSpecName: "kube-api-access-7zfgt") pod "93eb33a2-0879-4a9f-bb76-b89f8c4b9c75" (UID: "93eb33a2-0879-4a9f-bb76-b89f8c4b9c75"). InnerVolumeSpecName "kube-api-access-7zfgt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:31:32.310381 kubelet[3215]: I1213 01:31:32.310177 3215 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8f1a73c5-a080-483b-99e3-e4c53bd16003" (UID: "8f1a73c5-a080-483b-99e3-e4c53bd16003"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:31:32.310381 kubelet[3215]: I1213 01:31:32.310321 3215 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8f1a73c5-a080-483b-99e3-e4c53bd16003" (UID: "8f1a73c5-a080-483b-99e3-e4c53bd16003"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:31:32.312400 kubelet[3215]: I1213 01:31:32.312239 3215 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f1a73c5-a080-483b-99e3-e4c53bd16003-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8f1a73c5-a080-483b-99e3-e4c53bd16003" (UID: "8f1a73c5-a080-483b-99e3-e4c53bd16003"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:31:32.313024 kubelet[3215]: I1213 01:31:32.312914 3215 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f1a73c5-a080-483b-99e3-e4c53bd16003-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8f1a73c5-a080-483b-99e3-e4c53bd16003" (UID: "8f1a73c5-a080-483b-99e3-e4c53bd16003"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:31:32.313024 kubelet[3215]: I1213 01:31:32.312978 3215 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f1a73c5-a080-483b-99e3-e4c53bd16003-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8f1a73c5-a080-483b-99e3-e4c53bd16003" (UID: "8f1a73c5-a080-483b-99e3-e4c53bd16003"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:31:32.313435 kubelet[3215]: I1213 01:31:32.313404 3215 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f1a73c5-a080-483b-99e3-e4c53bd16003-kube-api-access-8ss6w" (OuterVolumeSpecName: "kube-api-access-8ss6w") pod "8f1a73c5-a080-483b-99e3-e4c53bd16003" (UID: "8f1a73c5-a080-483b-99e3-e4c53bd16003"). InnerVolumeSpecName "kube-api-access-8ss6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:31:32.313869 kubelet[3215]: I1213 01:31:32.313847 3215 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93eb33a2-0879-4a9f-bb76-b89f8c4b9c75-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "93eb33a2-0879-4a9f-bb76-b89f8c4b9c75" (UID: "93eb33a2-0879-4a9f-bb76-b89f8c4b9c75"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:31:32.405811 kubelet[3215]: I1213 01:31:32.405742 3215 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-cilium-cgroup\") on node \"ci-4081.2.1-a-dd942dbb76\" DevicePath \"\"" Dec 13 01:31:32.405811 kubelet[3215]: I1213 01:31:32.405802 3215 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-bpf-maps\") on node \"ci-4081.2.1-a-dd942dbb76\" DevicePath \"\"" Dec 13 01:31:32.405811 kubelet[3215]: I1213 01:31:32.405815 3215 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8ss6w\" (UniqueName: \"kubernetes.io/projected/8f1a73c5-a080-483b-99e3-e4c53bd16003-kube-api-access-8ss6w\") on node \"ci-4081.2.1-a-dd942dbb76\" DevicePath \"\"" Dec 13 01:31:32.406080 kubelet[3215]: I1213 01:31:32.405827 3215 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-host-proc-sys-net\") on node \"ci-4081.2.1-a-dd942dbb76\" DevicePath \"\"" Dec 13 01:31:32.406080 kubelet[3215]: I1213 01:31:32.405837 3215 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-cni-path\") on node \"ci-4081.2.1-a-dd942dbb76\" DevicePath \"\"" Dec 13 01:31:32.406080 kubelet[3215]: I1213 01:31:32.405848 3215 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-lib-modules\") on node \"ci-4081.2.1-a-dd942dbb76\" DevicePath \"\"" Dec 13 01:31:32.406080 kubelet[3215]: I1213 01:31:32.405859 3215 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-etc-cni-netd\") on node 
\"ci-4081.2.1-a-dd942dbb76\" DevicePath \"\"" Dec 13 01:31:32.406080 kubelet[3215]: I1213 01:31:32.405879 3215 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-hostproc\") on node \"ci-4081.2.1-a-dd942dbb76\" DevicePath \"\"" Dec 13 01:31:32.406080 kubelet[3215]: I1213 01:31:32.405889 3215 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f1a73c5-a080-483b-99e3-e4c53bd16003-hubble-tls\") on node \"ci-4081.2.1-a-dd942dbb76\" DevicePath \"\"" Dec 13 01:31:32.406080 kubelet[3215]: I1213 01:31:32.405898 3215 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93eb33a2-0879-4a9f-bb76-b89f8c4b9c75-cilium-config-path\") on node \"ci-4081.2.1-a-dd942dbb76\" DevicePath \"\"" Dec 13 01:31:32.406080 kubelet[3215]: I1213 01:31:32.405909 3215 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-cilium-run\") on node \"ci-4081.2.1-a-dd942dbb76\" DevicePath \"\"" Dec 13 01:31:32.406397 kubelet[3215]: I1213 01:31:32.405920 3215 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f1a73c5-a080-483b-99e3-e4c53bd16003-clustermesh-secrets\") on node \"ci-4081.2.1-a-dd942dbb76\" DevicePath \"\"" Dec 13 01:31:32.406397 kubelet[3215]: I1213 01:31:32.405930 3215 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7zfgt\" (UniqueName: \"kubernetes.io/projected/93eb33a2-0879-4a9f-bb76-b89f8c4b9c75-kube-api-access-7zfgt\") on node \"ci-4081.2.1-a-dd942dbb76\" DevicePath \"\"" Dec 13 01:31:32.406397 kubelet[3215]: I1213 01:31:32.405940 3215 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-xtables-lock\") on node 
\"ci-4081.2.1-a-dd942dbb76\" DevicePath \"\"" Dec 13 01:31:32.406397 kubelet[3215]: I1213 01:31:32.405950 3215 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f1a73c5-a080-483b-99e3-e4c53bd16003-cilium-config-path\") on node \"ci-4081.2.1-a-dd942dbb76\" DevicePath \"\"" Dec 13 01:31:32.406397 kubelet[3215]: I1213 01:31:32.405959 3215 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f1a73c5-a080-483b-99e3-e4c53bd16003-host-proc-sys-kernel\") on node \"ci-4081.2.1-a-dd942dbb76\" DevicePath \"\"" Dec 13 01:31:32.874801 kubelet[3215]: I1213 01:31:32.874379 3215 scope.go:117] "RemoveContainer" containerID="45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2" Dec 13 01:31:32.876225 containerd[1707]: time="2024-12-13T01:31:32.876119507Z" level=info msg="RemoveContainer for \"45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2\"" Dec 13 01:31:32.884392 systemd[1]: Removed slice kubepods-burstable-pod8f1a73c5_a080_483b_99e3_e4c53bd16003.slice - libcontainer container kubepods-burstable-pod8f1a73c5_a080_483b_99e3_e4c53bd16003.slice. Dec 13 01:31:32.884513 systemd[1]: kubepods-burstable-pod8f1a73c5_a080_483b_99e3_e4c53bd16003.slice: Consumed 6.604s CPU time. Dec 13 01:31:32.888381 systemd[1]: Removed slice kubepods-besteffort-pod93eb33a2_0879_4a9f_bb76_b89f8c4b9c75.slice - libcontainer container kubepods-besteffort-pod93eb33a2_0879_4a9f_bb76_b89f8c4b9c75.slice. 
Dec 13 01:31:32.904612 containerd[1707]: time="2024-12-13T01:31:32.904570895Z" level=info msg="RemoveContainer for \"45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2\" returns successfully" Dec 13 01:31:32.908194 kubelet[3215]: I1213 01:31:32.907976 3215 scope.go:117] "RemoveContainer" containerID="fb73b26e9ceb4ebbf3623304b42501173e44f3404a13493695277d08dd91c730" Dec 13 01:31:32.910931 containerd[1707]: time="2024-12-13T01:31:32.910879981Z" level=info msg="RemoveContainer for \"fb73b26e9ceb4ebbf3623304b42501173e44f3404a13493695277d08dd91c730\"" Dec 13 01:31:32.923526 containerd[1707]: time="2024-12-13T01:31:32.923472873Z" level=info msg="RemoveContainer for \"fb73b26e9ceb4ebbf3623304b42501173e44f3404a13493695277d08dd91c730\" returns successfully" Dec 13 01:31:32.923883 kubelet[3215]: I1213 01:31:32.923843 3215 scope.go:117] "RemoveContainer" containerID="d4ac6cb434447b6ef0d4dcb8ff71255603c7888b4a0c06a1c81a0b6e105595b1" Dec 13 01:31:32.925500 containerd[1707]: time="2024-12-13T01:31:32.925460395Z" level=info msg="RemoveContainer for \"d4ac6cb434447b6ef0d4dcb8ff71255603c7888b4a0c06a1c81a0b6e105595b1\"" Dec 13 01:31:32.934285 containerd[1707]: time="2024-12-13T01:31:32.934239364Z" level=info msg="RemoveContainer for \"d4ac6cb434447b6ef0d4dcb8ff71255603c7888b4a0c06a1c81a0b6e105595b1\" returns successfully" Dec 13 01:31:32.934561 kubelet[3215]: I1213 01:31:32.934541 3215 scope.go:117] "RemoveContainer" containerID="f97c47b103cd08b4c086872e855c13b94b7768de2beac01565209bc12ada9dc1" Dec 13 01:31:32.936192 containerd[1707]: time="2024-12-13T01:31:32.935948565Z" level=info msg="RemoveContainer for \"f97c47b103cd08b4c086872e855c13b94b7768de2beac01565209bc12ada9dc1\"" Dec 13 01:31:32.949662 containerd[1707]: time="2024-12-13T01:31:32.949585899Z" level=info msg="RemoveContainer for \"f97c47b103cd08b4c086872e855c13b94b7768de2beac01565209bc12ada9dc1\" returns successfully" Dec 13 01:31:32.950043 kubelet[3215]: I1213 01:31:32.949847 3215 scope.go:117] 
"RemoveContainer" containerID="47b35e1fdb83a2f386c9a0ad001fbc3f053a58d7134b96c8f5ee7cd5e4e1a61f" Dec 13 01:31:32.951630 containerd[1707]: time="2024-12-13T01:31:32.951468701Z" level=info msg="RemoveContainer for \"47b35e1fdb83a2f386c9a0ad001fbc3f053a58d7134b96c8f5ee7cd5e4e1a61f\"" Dec 13 01:31:32.958702 containerd[1707]: time="2024-12-13T01:31:32.958643188Z" level=info msg="RemoveContainer for \"47b35e1fdb83a2f386c9a0ad001fbc3f053a58d7134b96c8f5ee7cd5e4e1a61f\" returns successfully" Dec 13 01:31:32.959088 kubelet[3215]: I1213 01:31:32.959052 3215 scope.go:117] "RemoveContainer" containerID="45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2" Dec 13 01:31:32.959430 containerd[1707]: time="2024-12-13T01:31:32.959309668Z" level=error msg="ContainerStatus for \"45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2\": not found" Dec 13 01:31:32.959790 kubelet[3215]: E1213 01:31:32.959598 3215 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2\": not found" containerID="45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2" Dec 13 01:31:32.959790 kubelet[3215]: I1213 01:31:32.959699 3215 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2"} err="failed to get container status \"45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2\": rpc error: code = NotFound desc = an error occurred when try to find container \"45f19e4f66d97e4cf944a72d711a6a87cbfb06327106bf30e6c1ec0c4009abc2\": not found" Dec 13 01:31:32.959790 kubelet[3215]: I1213 01:31:32.959711 3215 scope.go:117] "RemoveContainer" 
containerID="fb73b26e9ceb4ebbf3623304b42501173e44f3404a13493695277d08dd91c730" Dec 13 01:31:32.959935 containerd[1707]: time="2024-12-13T01:31:32.959894189Z" level=error msg="ContainerStatus for \"fb73b26e9ceb4ebbf3623304b42501173e44f3404a13493695277d08dd91c730\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb73b26e9ceb4ebbf3623304b42501173e44f3404a13493695277d08dd91c730\": not found" Dec 13 01:31:32.960066 kubelet[3215]: E1213 01:31:32.960024 3215 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb73b26e9ceb4ebbf3623304b42501173e44f3404a13493695277d08dd91c730\": not found" containerID="fb73b26e9ceb4ebbf3623304b42501173e44f3404a13493695277d08dd91c730" Dec 13 01:31:32.960112 kubelet[3215]: I1213 01:31:32.960073 3215 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb73b26e9ceb4ebbf3623304b42501173e44f3404a13493695277d08dd91c730"} err="failed to get container status \"fb73b26e9ceb4ebbf3623304b42501173e44f3404a13493695277d08dd91c730\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb73b26e9ceb4ebbf3623304b42501173e44f3404a13493695277d08dd91c730\": not found" Dec 13 01:31:32.960112 kubelet[3215]: I1213 01:31:32.960085 3215 scope.go:117] "RemoveContainer" containerID="d4ac6cb434447b6ef0d4dcb8ff71255603c7888b4a0c06a1c81a0b6e105595b1" Dec 13 01:31:32.960318 containerd[1707]: time="2024-12-13T01:31:32.960281869Z" level=error msg="ContainerStatus for \"d4ac6cb434447b6ef0d4dcb8ff71255603c7888b4a0c06a1c81a0b6e105595b1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d4ac6cb434447b6ef0d4dcb8ff71255603c7888b4a0c06a1c81a0b6e105595b1\": not found" Dec 13 01:31:32.960591 kubelet[3215]: E1213 01:31:32.960482 3215 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"d4ac6cb434447b6ef0d4dcb8ff71255603c7888b4a0c06a1c81a0b6e105595b1\": not found" containerID="d4ac6cb434447b6ef0d4dcb8ff71255603c7888b4a0c06a1c81a0b6e105595b1" Dec 13 01:31:32.960591 kubelet[3215]: I1213 01:31:32.960518 3215 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d4ac6cb434447b6ef0d4dcb8ff71255603c7888b4a0c06a1c81a0b6e105595b1"} err="failed to get container status \"d4ac6cb434447b6ef0d4dcb8ff71255603c7888b4a0c06a1c81a0b6e105595b1\": rpc error: code = NotFound desc = an error occurred when try to find container \"d4ac6cb434447b6ef0d4dcb8ff71255603c7888b4a0c06a1c81a0b6e105595b1\": not found" Dec 13 01:31:32.960591 kubelet[3215]: I1213 01:31:32.960528 3215 scope.go:117] "RemoveContainer" containerID="f97c47b103cd08b4c086872e855c13b94b7768de2beac01565209bc12ada9dc1" Dec 13 01:31:32.960902 containerd[1707]: time="2024-12-13T01:31:32.960822110Z" level=error msg="ContainerStatus for \"f97c47b103cd08b4c086872e855c13b94b7768de2beac01565209bc12ada9dc1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f97c47b103cd08b4c086872e855c13b94b7768de2beac01565209bc12ada9dc1\": not found" Dec 13 01:31:32.960989 kubelet[3215]: E1213 01:31:32.960957 3215 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f97c47b103cd08b4c086872e855c13b94b7768de2beac01565209bc12ada9dc1\": not found" containerID="f97c47b103cd08b4c086872e855c13b94b7768de2beac01565209bc12ada9dc1" Dec 13 01:31:32.961029 kubelet[3215]: I1213 01:31:32.961007 3215 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f97c47b103cd08b4c086872e855c13b94b7768de2beac01565209bc12ada9dc1"} err="failed to get container status \"f97c47b103cd08b4c086872e855c13b94b7768de2beac01565209bc12ada9dc1\": rpc error: 
code = NotFound desc = an error occurred when try to find container \"f97c47b103cd08b4c086872e855c13b94b7768de2beac01565209bc12ada9dc1\": not found" Dec 13 01:31:32.961029 kubelet[3215]: I1213 01:31:32.961019 3215 scope.go:117] "RemoveContainer" containerID="47b35e1fdb83a2f386c9a0ad001fbc3f053a58d7134b96c8f5ee7cd5e4e1a61f" Dec 13 01:31:32.961206 containerd[1707]: time="2024-12-13T01:31:32.961172510Z" level=error msg="ContainerStatus for \"47b35e1fdb83a2f386c9a0ad001fbc3f053a58d7134b96c8f5ee7cd5e4e1a61f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47b35e1fdb83a2f386c9a0ad001fbc3f053a58d7134b96c8f5ee7cd5e4e1a61f\": not found" Dec 13 01:31:32.961320 kubelet[3215]: E1213 01:31:32.961300 3215 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47b35e1fdb83a2f386c9a0ad001fbc3f053a58d7134b96c8f5ee7cd5e4e1a61f\": not found" containerID="47b35e1fdb83a2f386c9a0ad001fbc3f053a58d7134b96c8f5ee7cd5e4e1a61f" Dec 13 01:31:32.961394 kubelet[3215]: I1213 01:31:32.961328 3215 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47b35e1fdb83a2f386c9a0ad001fbc3f053a58d7134b96c8f5ee7cd5e4e1a61f"} err="failed to get container status \"47b35e1fdb83a2f386c9a0ad001fbc3f053a58d7134b96c8f5ee7cd5e4e1a61f\": rpc error: code = NotFound desc = an error occurred when try to find container \"47b35e1fdb83a2f386c9a0ad001fbc3f053a58d7134b96c8f5ee7cd5e4e1a61f\": not found" Dec 13 01:31:32.961394 kubelet[3215]: I1213 01:31:32.961356 3215 scope.go:117] "RemoveContainer" containerID="a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe" Dec 13 01:31:32.962597 containerd[1707]: time="2024-12-13T01:31:32.962513911Z" level=info msg="RemoveContainer for \"a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe\"" Dec 13 01:31:32.969461 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68-rootfs.mount: Deactivated successfully. Dec 13 01:31:32.969688 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68-shm.mount: Deactivated successfully. Dec 13 01:31:32.969829 systemd[1]: var-lib-kubelet-pods-8f1a73c5\x2da080\x2d483b\x2d99e3\x2de4c53bd16003-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 01:31:32.969966 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f0df17098521a13283369dbdea6d868b2c22385a33f03ff1dd102011a8a97a0-rootfs.mount: Deactivated successfully. Dec 13 01:31:32.970090 systemd[1]: var-lib-kubelet-pods-93eb33a2\x2d0879\x2d4a9f\x2dbb76\x2db89f8c4b9c75-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7zfgt.mount: Deactivated successfully. Dec 13 01:31:32.970208 systemd[1]: var-lib-kubelet-pods-8f1a73c5\x2da080\x2d483b\x2d99e3\x2de4c53bd16003-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8ss6w.mount: Deactivated successfully. Dec 13 01:31:32.970356 systemd[1]: var-lib-kubelet-pods-8f1a73c5\x2da080\x2d483b\x2d99e3\x2de4c53bd16003-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Dec 13 01:31:32.971519 containerd[1707]: time="2024-12-13T01:31:32.971476880Z" level=info msg="RemoveContainer for \"a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe\" returns successfully" Dec 13 01:31:32.972283 containerd[1707]: time="2024-12-13T01:31:32.972120761Z" level=error msg="ContainerStatus for \"a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe\": not found" Dec 13 01:31:32.972320 kubelet[3215]: I1213 01:31:32.971733 3215 scope.go:117] "RemoveContainer" containerID="a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe" Dec 13 01:31:32.972320 kubelet[3215]: E1213 01:31:32.972264 3215 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe\": not found" containerID="a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe" Dec 13 01:31:32.972320 kubelet[3215]: I1213 01:31:32.972300 3215 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe"} err="failed to get container status \"a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe\": rpc error: code = NotFound desc = an error occurred when try to find container \"a215f87d4c3f4fa41ff4a99b00548d86184e835ce789d385bc23507cff942ebe\": not found" Dec 13 01:31:33.420305 kubelet[3215]: I1213 01:31:33.420269 3215 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8f1a73c5-a080-483b-99e3-e4c53bd16003" path="/var/lib/kubelet/pods/8f1a73c5-a080-483b-99e3-e4c53bd16003/volumes" Dec 13 01:31:33.420923 kubelet[3215]: I1213 01:31:33.420896 3215 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" 
podUID="93eb33a2-0879-4a9f-bb76-b89f8c4b9c75" path="/var/lib/kubelet/pods/93eb33a2-0879-4a9f-bb76-b89f8c4b9c75/volumes" Dec 13 01:31:33.528414 kubelet[3215]: E1213 01:31:33.528385 3215 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 01:31:33.991622 sshd[4814]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:33.995635 systemd[1]: sshd@22-10.200.20.20:22-10.200.16.10:35992.service: Deactivated successfully. Dec 13 01:31:33.997648 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 01:31:33.999142 systemd-logind[1668]: Session 25 logged out. Waiting for processes to exit. Dec 13 01:31:34.000836 systemd-logind[1668]: Removed session 25. Dec 13 01:31:34.073667 systemd[1]: Started sshd@23-10.200.20.20:22-10.200.16.10:36006.service - OpenSSH per-connection server daemon (10.200.16.10:36006). Dec 13 01:31:34.500422 sshd[4975]: Accepted publickey for core from 10.200.16.10 port 36006 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:34.501821 sshd[4975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:34.505792 systemd-logind[1668]: New session 26 of user core. Dec 13 01:31:34.514507 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 13 01:31:35.998987 kubelet[3215]: I1213 01:31:35.997501 3215 topology_manager.go:215] "Topology Admit Handler" podUID="79c13f3b-dac0-4241-8eee-b7d1877c8efc" podNamespace="kube-system" podName="cilium-q69lq"
Dec 13 01:31:35.998987 kubelet[3215]: E1213 01:31:35.997561 3215 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="93eb33a2-0879-4a9f-bb76-b89f8c4b9c75" containerName="cilium-operator"
Dec 13 01:31:35.998987 kubelet[3215]: E1213 01:31:35.997571 3215 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8f1a73c5-a080-483b-99e3-e4c53bd16003" containerName="mount-bpf-fs"
Dec 13 01:31:35.998987 kubelet[3215]: E1213 01:31:35.997577 3215 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8f1a73c5-a080-483b-99e3-e4c53bd16003" containerName="clean-cilium-state"
Dec 13 01:31:35.998987 kubelet[3215]: E1213 01:31:35.997584 3215 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8f1a73c5-a080-483b-99e3-e4c53bd16003" containerName="mount-cgroup"
Dec 13 01:31:35.998987 kubelet[3215]: E1213 01:31:35.997591 3215 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8f1a73c5-a080-483b-99e3-e4c53bd16003" containerName="apply-sysctl-overwrites"
Dec 13 01:31:35.998987 kubelet[3215]: E1213 01:31:35.997598 3215 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8f1a73c5-a080-483b-99e3-e4c53bd16003" containerName="cilium-agent"
Dec 13 01:31:35.998987 kubelet[3215]: I1213 01:31:35.997620 3215 memory_manager.go:354] "RemoveStaleState removing state" podUID="93eb33a2-0879-4a9f-bb76-b89f8c4b9c75" containerName="cilium-operator"
Dec 13 01:31:35.998987 kubelet[3215]: I1213 01:31:35.997627 3215 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f1a73c5-a080-483b-99e3-e4c53bd16003" containerName="cilium-agent"
Dec 13 01:31:36.009505 systemd[1]: Created slice kubepods-burstable-pod79c13f3b_dac0_4241_8eee_b7d1877c8efc.slice - libcontainer container kubepods-burstable-pod79c13f3b_dac0_4241_8eee_b7d1877c8efc.slice.
Dec 13 01:31:36.024161 kubelet[3215]: I1213 01:31:36.024075 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79c13f3b-dac0-4241-8eee-b7d1877c8efc-host-proc-sys-kernel\") pod \"cilium-q69lq\" (UID: \"79c13f3b-dac0-4241-8eee-b7d1877c8efc\") " pod="kube-system/cilium-q69lq"
Dec 13 01:31:36.024161 kubelet[3215]: I1213 01:31:36.024117 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/79c13f3b-dac0-4241-8eee-b7d1877c8efc-cilium-ipsec-secrets\") pod \"cilium-q69lq\" (UID: \"79c13f3b-dac0-4241-8eee-b7d1877c8efc\") " pod="kube-system/cilium-q69lq"
Dec 13 01:31:36.024161 kubelet[3215]: I1213 01:31:36.024137 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79c13f3b-dac0-4241-8eee-b7d1877c8efc-cilium-run\") pod \"cilium-q69lq\" (UID: \"79c13f3b-dac0-4241-8eee-b7d1877c8efc\") " pod="kube-system/cilium-q69lq"
Dec 13 01:31:36.024161 kubelet[3215]: I1213 01:31:36.024210 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79c13f3b-dac0-4241-8eee-b7d1877c8efc-bpf-maps\") pod \"cilium-q69lq\" (UID: \"79c13f3b-dac0-4241-8eee-b7d1877c8efc\") " pod="kube-system/cilium-q69lq"
Dec 13 01:31:36.024884 kubelet[3215]: I1213 01:31:36.024258 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79c13f3b-dac0-4241-8eee-b7d1877c8efc-host-proc-sys-net\") pod \"cilium-q69lq\" (UID: \"79c13f3b-dac0-4241-8eee-b7d1877c8efc\") " pod="kube-system/cilium-q69lq"
Dec 13 01:31:36.024884 kubelet[3215]: I1213 01:31:36.024282 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79c13f3b-dac0-4241-8eee-b7d1877c8efc-hostproc\") pod \"cilium-q69lq\" (UID: \"79c13f3b-dac0-4241-8eee-b7d1877c8efc\") " pod="kube-system/cilium-q69lq"
Dec 13 01:31:36.024884 kubelet[3215]: I1213 01:31:36.024302 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79c13f3b-dac0-4241-8eee-b7d1877c8efc-hubble-tls\") pod \"cilium-q69lq\" (UID: \"79c13f3b-dac0-4241-8eee-b7d1877c8efc\") " pod="kube-system/cilium-q69lq"
Dec 13 01:31:36.024884 kubelet[3215]: I1213 01:31:36.024367 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79c13f3b-dac0-4241-8eee-b7d1877c8efc-cilium-config-path\") pod \"cilium-q69lq\" (UID: \"79c13f3b-dac0-4241-8eee-b7d1877c8efc\") " pod="kube-system/cilium-q69lq"
Dec 13 01:31:36.024884 kubelet[3215]: I1213 01:31:36.024409 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79c13f3b-dac0-4241-8eee-b7d1877c8efc-cni-path\") pod \"cilium-q69lq\" (UID: \"79c13f3b-dac0-4241-8eee-b7d1877c8efc\") " pod="kube-system/cilium-q69lq"
Dec 13 01:31:36.024884 kubelet[3215]: I1213 01:31:36.024433 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79c13f3b-dac0-4241-8eee-b7d1877c8efc-etc-cni-netd\") pod \"cilium-q69lq\" (UID: \"79c13f3b-dac0-4241-8eee-b7d1877c8efc\") " pod="kube-system/cilium-q69lq"
Dec 13 01:31:36.025030 kubelet[3215]: I1213 01:31:36.024452 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79c13f3b-dac0-4241-8eee-b7d1877c8efc-xtables-lock\") pod \"cilium-q69lq\" (UID: \"79c13f3b-dac0-4241-8eee-b7d1877c8efc\") " pod="kube-system/cilium-q69lq"
Dec 13 01:31:36.025030 kubelet[3215]: I1213 01:31:36.024479 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79c13f3b-dac0-4241-8eee-b7d1877c8efc-clustermesh-secrets\") pod \"cilium-q69lq\" (UID: \"79c13f3b-dac0-4241-8eee-b7d1877c8efc\") " pod="kube-system/cilium-q69lq"
Dec 13 01:31:36.025030 kubelet[3215]: I1213 01:31:36.024503 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m92xh\" (UniqueName: \"kubernetes.io/projected/79c13f3b-dac0-4241-8eee-b7d1877c8efc-kube-api-access-m92xh\") pod \"cilium-q69lq\" (UID: \"79c13f3b-dac0-4241-8eee-b7d1877c8efc\") " pod="kube-system/cilium-q69lq"
Dec 13 01:31:36.025030 kubelet[3215]: I1213 01:31:36.024523 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79c13f3b-dac0-4241-8eee-b7d1877c8efc-cilium-cgroup\") pod \"cilium-q69lq\" (UID: \"79c13f3b-dac0-4241-8eee-b7d1877c8efc\") " pod="kube-system/cilium-q69lq"
Dec 13 01:31:36.025030 kubelet[3215]: I1213 01:31:36.024550 3215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79c13f3b-dac0-4241-8eee-b7d1877c8efc-lib-modules\") pod \"cilium-q69lq\" (UID: \"79c13f3b-dac0-4241-8eee-b7d1877c8efc\") " pod="kube-system/cilium-q69lq"
Dec 13 01:31:36.041593 sshd[4975]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:36.046612 systemd[1]: sshd@23-10.200.20.20:22-10.200.16.10:36006.service: Deactivated successfully.
Dec 13 01:31:36.051738 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 01:31:36.052103 systemd[1]: session-26.scope: Consumed 1.150s CPU time.
Dec 13 01:31:36.052917 systemd-logind[1668]: Session 26 logged out. Waiting for processes to exit.
Dec 13 01:31:36.054450 systemd-logind[1668]: Removed session 26.
Dec 13 01:31:36.121613 systemd[1]: Started sshd@24-10.200.20.20:22-10.200.16.10:36010.service - OpenSSH per-connection server daemon (10.200.16.10:36010).
Dec 13 01:31:36.317188 containerd[1707]: time="2024-12-13T01:31:36.317067716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q69lq,Uid:79c13f3b-dac0-4241-8eee-b7d1877c8efc,Namespace:kube-system,Attempt:0,}"
Dec 13 01:31:36.353941 containerd[1707]: time="2024-12-13T01:31:36.353584712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:31:36.353941 containerd[1707]: time="2024-12-13T01:31:36.353643392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:31:36.353941 containerd[1707]: time="2024-12-13T01:31:36.353663672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:31:36.353941 containerd[1707]: time="2024-12-13T01:31:36.353757392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:31:36.373545 systemd[1]: Started cri-containerd-12b775e95e9481879e500fdded51409bec05d1e873292b3074c2dac29b9c738b.scope - libcontainer container 12b775e95e9481879e500fdded51409bec05d1e873292b3074c2dac29b9c738b.
Dec 13 01:31:36.394953 containerd[1707]: time="2024-12-13T01:31:36.394900193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q69lq,Uid:79c13f3b-dac0-4241-8eee-b7d1877c8efc,Namespace:kube-system,Attempt:0,} returns sandbox id \"12b775e95e9481879e500fdded51409bec05d1e873292b3074c2dac29b9c738b\""
Dec 13 01:31:36.399619 containerd[1707]: time="2024-12-13T01:31:36.399570118Z" level=info msg="CreateContainer within sandbox \"12b775e95e9481879e500fdded51409bec05d1e873292b3074c2dac29b9c738b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:31:36.441295 containerd[1707]: time="2024-12-13T01:31:36.441239279Z" level=info msg="CreateContainer within sandbox \"12b775e95e9481879e500fdded51409bec05d1e873292b3074c2dac29b9c738b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6ecf0298cc4cd741f65e888eac6bef66209dc701ad530e8a5646ce9a684f767b\""
Dec 13 01:31:36.442761 containerd[1707]: time="2024-12-13T01:31:36.442204800Z" level=info msg="StartContainer for \"6ecf0298cc4cd741f65e888eac6bef66209dc701ad530e8a5646ce9a684f767b\""
Dec 13 01:31:36.464512 systemd[1]: Started cri-containerd-6ecf0298cc4cd741f65e888eac6bef66209dc701ad530e8a5646ce9a684f767b.scope - libcontainer container 6ecf0298cc4cd741f65e888eac6bef66209dc701ad530e8a5646ce9a684f767b.
Dec 13 01:31:36.494032 containerd[1707]: time="2024-12-13T01:31:36.493976811Z" level=info msg="StartContainer for \"6ecf0298cc4cd741f65e888eac6bef66209dc701ad530e8a5646ce9a684f767b\" returns successfully"
Dec 13 01:31:36.495391 systemd[1]: cri-containerd-6ecf0298cc4cd741f65e888eac6bef66209dc701ad530e8a5646ce9a684f767b.scope: Deactivated successfully.
Dec 13 01:31:36.561399 sshd[4990]: Accepted publickey for core from 10.200.16.10 port 36010 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:36.562803 sshd[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:36.567292 systemd-logind[1668]: New session 27 of user core.
Dec 13 01:31:36.574677 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 13 01:31:36.582573 containerd[1707]: time="2024-12-13T01:31:36.582485819Z" level=info msg="shim disconnected" id=6ecf0298cc4cd741f65e888eac6bef66209dc701ad530e8a5646ce9a684f767b namespace=k8s.io
Dec 13 01:31:36.582573 containerd[1707]: time="2024-12-13T01:31:36.582564699Z" level=warning msg="cleaning up after shim disconnected" id=6ecf0298cc4cd741f65e888eac6bef66209dc701ad530e8a5646ce9a684f767b namespace=k8s.io
Dec 13 01:31:36.582573 containerd[1707]: time="2024-12-13T01:31:36.582574459Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:31:36.870176 sshd[4990]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:36.874070 systemd[1]: sshd@24-10.200.20.20:22-10.200.16.10:36010.service: Deactivated successfully.
Dec 13 01:31:36.876821 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 01:31:36.877650 systemd-logind[1668]: Session 27 logged out. Waiting for processes to exit.
Dec 13 01:31:36.879544 systemd-logind[1668]: Removed session 27.
Dec 13 01:31:36.893976 containerd[1707]: time="2024-12-13T01:31:36.893935128Z" level=info msg="CreateContainer within sandbox \"12b775e95e9481879e500fdded51409bec05d1e873292b3074c2dac29b9c738b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:31:36.927652 containerd[1707]: time="2024-12-13T01:31:36.927564162Z" level=info msg="CreateContainer within sandbox \"12b775e95e9481879e500fdded51409bec05d1e873292b3074c2dac29b9c738b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"130f22b581721d7873b109fd3fffb79c3aaf83ef90a946241ddd4326fa47ec9b\""
Dec 13 01:31:36.929305 containerd[1707]: time="2024-12-13T01:31:36.928646243Z" level=info msg="StartContainer for \"130f22b581721d7873b109fd3fffb79c3aaf83ef90a946241ddd4326fa47ec9b\""
Dec 13 01:31:36.947614 systemd[1]: Started sshd@25-10.200.20.20:22-10.200.16.10:36012.service - OpenSSH per-connection server daemon (10.200.16.10:36012).
Dec 13 01:31:36.959988 systemd[1]: Started cri-containerd-130f22b581721d7873b109fd3fffb79c3aaf83ef90a946241ddd4326fa47ec9b.scope - libcontainer container 130f22b581721d7873b109fd3fffb79c3aaf83ef90a946241ddd4326fa47ec9b.
Dec 13 01:31:36.990002 containerd[1707]: time="2024-12-13T01:31:36.989955304Z" level=info msg="StartContainer for \"130f22b581721d7873b109fd3fffb79c3aaf83ef90a946241ddd4326fa47ec9b\" returns successfully"
Dec 13 01:31:36.995312 systemd[1]: cri-containerd-130f22b581721d7873b109fd3fffb79c3aaf83ef90a946241ddd4326fa47ec9b.scope: Deactivated successfully.
Dec 13 01:31:37.027912 containerd[1707]: time="2024-12-13T01:31:37.027844941Z" level=info msg="shim disconnected" id=130f22b581721d7873b109fd3fffb79c3aaf83ef90a946241ddd4326fa47ec9b namespace=k8s.io
Dec 13 01:31:37.027912 containerd[1707]: time="2024-12-13T01:31:37.027906021Z" level=warning msg="cleaning up after shim disconnected" id=130f22b581721d7873b109fd3fffb79c3aaf83ef90a946241ddd4326fa47ec9b namespace=k8s.io
Dec 13 01:31:37.027912 containerd[1707]: time="2024-12-13T01:31:37.027919541Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:31:37.367926 sshd[5118]: Accepted publickey for core from 10.200.16.10 port 36012 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:37.369682 sshd[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:37.373556 systemd-logind[1668]: New session 28 of user core.
Dec 13 01:31:37.380534 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 13 01:31:37.900090 containerd[1707]: time="2024-12-13T01:31:37.899822846Z" level=info msg="CreateContainer within sandbox \"12b775e95e9481879e500fdded51409bec05d1e873292b3074c2dac29b9c738b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:31:37.928607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1950171142.mount: Deactivated successfully.
Dec 13 01:31:37.947332 containerd[1707]: time="2024-12-13T01:31:37.947264693Z" level=info msg="CreateContainer within sandbox \"12b775e95e9481879e500fdded51409bec05d1e873292b3074c2dac29b9c738b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a74236f9076f2e012d0298604357b9797f919498046e64108d09eba4d9dfd698\""
Dec 13 01:31:37.948200 containerd[1707]: time="2024-12-13T01:31:37.948163094Z" level=info msg="StartContainer for \"a74236f9076f2e012d0298604357b9797f919498046e64108d09eba4d9dfd698\""
Dec 13 01:31:37.974536 systemd[1]: Started cri-containerd-a74236f9076f2e012d0298604357b9797f919498046e64108d09eba4d9dfd698.scope - libcontainer container a74236f9076f2e012d0298604357b9797f919498046e64108d09eba4d9dfd698.
Dec 13 01:31:38.005852 systemd[1]: cri-containerd-a74236f9076f2e012d0298604357b9797f919498046e64108d09eba4d9dfd698.scope: Deactivated successfully.
Dec 13 01:31:38.008944 containerd[1707]: time="2024-12-13T01:31:38.008815674Z" level=info msg="StartContainer for \"a74236f9076f2e012d0298604357b9797f919498046e64108d09eba4d9dfd698\" returns successfully"
Dec 13 01:31:38.042523 kubelet[3215]: I1213 01:31:38.042481 3215 setters.go:568] "Node became not ready" node="ci-4081.2.1-a-dd942dbb76" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:31:38Z","lastTransitionTime":"2024-12-13T01:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 01:31:38.047092 containerd[1707]: time="2024-12-13T01:31:38.046952392Z" level=info msg="shim disconnected" id=a74236f9076f2e012d0298604357b9797f919498046e64108d09eba4d9dfd698 namespace=k8s.io
Dec 13 01:31:38.047092 containerd[1707]: time="2024-12-13T01:31:38.047124152Z" level=warning msg="cleaning up after shim disconnected" id=a74236f9076f2e012d0298604357b9797f919498046e64108d09eba4d9dfd698 namespace=k8s.io
Dec 13 01:31:38.047092 containerd[1707]: time="2024-12-13T01:31:38.047135352Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:31:38.130852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a74236f9076f2e012d0298604357b9797f919498046e64108d09eba4d9dfd698-rootfs.mount: Deactivated successfully.
Dec 13 01:31:38.529484 kubelet[3215]: E1213 01:31:38.529431 3215 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:31:38.903384 containerd[1707]: time="2024-12-13T01:31:38.900802039Z" level=info msg="CreateContainer within sandbox \"12b775e95e9481879e500fdded51409bec05d1e873292b3074c2dac29b9c738b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:31:38.934222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1507512952.mount: Deactivated successfully.
Dec 13 01:31:38.952448 containerd[1707]: time="2024-12-13T01:31:38.952317531Z" level=info msg="CreateContainer within sandbox \"12b775e95e9481879e500fdded51409bec05d1e873292b3074c2dac29b9c738b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"68eee3c09fd4e5ffbd01c2c7fb29789820ac92307abc2681a78f02b877c2a1e8\""
Dec 13 01:31:38.952860 containerd[1707]: time="2024-12-13T01:31:38.952836131Z" level=info msg="StartContainer for \"68eee3c09fd4e5ffbd01c2c7fb29789820ac92307abc2681a78f02b877c2a1e8\""
Dec 13 01:31:38.978788 systemd[1]: Started cri-containerd-68eee3c09fd4e5ffbd01c2c7fb29789820ac92307abc2681a78f02b877c2a1e8.scope - libcontainer container 68eee3c09fd4e5ffbd01c2c7fb29789820ac92307abc2681a78f02b877c2a1e8.
Dec 13 01:31:39.003105 systemd[1]: cri-containerd-68eee3c09fd4e5ffbd01c2c7fb29789820ac92307abc2681a78f02b877c2a1e8.scope: Deactivated successfully.
Dec 13 01:31:39.010216 containerd[1707]: time="2024-12-13T01:31:39.010101228Z" level=info msg="StartContainer for \"68eee3c09fd4e5ffbd01c2c7fb29789820ac92307abc2681a78f02b877c2a1e8\" returns successfully"
Dec 13 01:31:39.051306 containerd[1707]: time="2024-12-13T01:31:39.051223629Z" level=info msg="shim disconnected" id=68eee3c09fd4e5ffbd01c2c7fb29789820ac92307abc2681a78f02b877c2a1e8 namespace=k8s.io
Dec 13 01:31:39.051306 containerd[1707]: time="2024-12-13T01:31:39.051295629Z" level=warning msg="cleaning up after shim disconnected" id=68eee3c09fd4e5ffbd01c2c7fb29789820ac92307abc2681a78f02b877c2a1e8 namespace=k8s.io
Dec 13 01:31:39.051306 containerd[1707]: time="2024-12-13T01:31:39.051305629Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:31:39.131588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68eee3c09fd4e5ffbd01c2c7fb29789820ac92307abc2681a78f02b877c2a1e8-rootfs.mount: Deactivated successfully.
Dec 13 01:31:39.905867 containerd[1707]: time="2024-12-13T01:31:39.904978716Z" level=info msg="CreateContainer within sandbox \"12b775e95e9481879e500fdded51409bec05d1e873292b3074c2dac29b9c738b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:31:39.948946 containerd[1707]: time="2024-12-13T01:31:39.948841799Z" level=info msg="CreateContainer within sandbox \"12b775e95e9481879e500fdded51409bec05d1e873292b3074c2dac29b9c738b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"841a9a91767ac87337c201c3fc929975a61a0da7837fe831d1c04e8e9bfd266e\""
Dec 13 01:31:39.950061 containerd[1707]: time="2024-12-13T01:31:39.949314400Z" level=info msg="StartContainer for \"841a9a91767ac87337c201c3fc929975a61a0da7837fe831d1c04e8e9bfd266e\""
Dec 13 01:31:39.976562 systemd[1]: Started cri-containerd-841a9a91767ac87337c201c3fc929975a61a0da7837fe831d1c04e8e9bfd266e.scope - libcontainer container 841a9a91767ac87337c201c3fc929975a61a0da7837fe831d1c04e8e9bfd266e.
Dec 13 01:31:40.004889 containerd[1707]: time="2024-12-13T01:31:40.004830375Z" level=info msg="StartContainer for \"841a9a91767ac87337c201c3fc929975a61a0da7837fe831d1c04e8e9bfd266e\" returns successfully"
Dec 13 01:31:40.441378 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Dec 13 01:31:40.924361 kubelet[3215]: I1213 01:31:40.923868 3215 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-q69lq" podStartSLOduration=5.923823247 podStartE2EDuration="5.923823247s" podCreationTimestamp="2024-12-13 01:31:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:31:40.923498806 +0000 UTC m=+237.630221586" watchObservedRunningTime="2024-12-13 01:31:40.923823247 +0000 UTC m=+237.630546027"
Dec 13 01:31:41.869456 kubelet[3215]: E1213 01:31:41.869418 3215 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:36818->127.0.0.1:39925: write tcp 127.0.0.1:36818->127.0.0.1:39925: write: broken pipe
Dec 13 01:31:43.283730 systemd-networkd[1581]: lxc_health: Link UP
Dec 13 01:31:43.303550 systemd-networkd[1581]: lxc_health: Gained carrier
Dec 13 01:31:43.414854 containerd[1707]: time="2024-12-13T01:31:43.414813500Z" level=info msg="StopPodSandbox for \"8f0df17098521a13283369dbdea6d868b2c22385a33f03ff1dd102011a8a97a0\""
Dec 13 01:31:43.415695 containerd[1707]: time="2024-12-13T01:31:43.415463140Z" level=info msg="TearDown network for sandbox \"8f0df17098521a13283369dbdea6d868b2c22385a33f03ff1dd102011a8a97a0\" successfully"
Dec 13 01:31:43.415695 containerd[1707]: time="2024-12-13T01:31:43.415486100Z" level=info msg="StopPodSandbox for \"8f0df17098521a13283369dbdea6d868b2c22385a33f03ff1dd102011a8a97a0\" returns successfully"
Dec 13 01:31:43.416811 containerd[1707]: time="2024-12-13T01:31:43.416543341Z" level=info msg="RemovePodSandbox for \"8f0df17098521a13283369dbdea6d868b2c22385a33f03ff1dd102011a8a97a0\""
Dec 13 01:31:43.416811 containerd[1707]: time="2024-12-13T01:31:43.416779341Z" level=info msg="Forcibly stopping sandbox \"8f0df17098521a13283369dbdea6d868b2c22385a33f03ff1dd102011a8a97a0\""
Dec 13 01:31:43.417640 containerd[1707]: time="2024-12-13T01:31:43.417536062Z" level=info msg="TearDown network for sandbox \"8f0df17098521a13283369dbdea6d868b2c22385a33f03ff1dd102011a8a97a0\" successfully"
Dec 13 01:31:43.432563 containerd[1707]: time="2024-12-13T01:31:43.432465515Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f0df17098521a13283369dbdea6d868b2c22385a33f03ff1dd102011a8a97a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:31:43.432563 containerd[1707]: time="2024-12-13T01:31:43.432556395Z" level=info msg="RemovePodSandbox \"8f0df17098521a13283369dbdea6d868b2c22385a33f03ff1dd102011a8a97a0\" returns successfully"
Dec 13 01:31:43.434798 containerd[1707]: time="2024-12-13T01:31:43.434761197Z" level=info msg="StopPodSandbox for \"41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68\""
Dec 13 01:31:43.435217 containerd[1707]: time="2024-12-13T01:31:43.435050117Z" level=info msg="TearDown network for sandbox \"41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68\" successfully"
Dec 13 01:31:43.435217 containerd[1707]: time="2024-12-13T01:31:43.435068117Z" level=info msg="StopPodSandbox for \"41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68\" returns successfully"
Dec 13 01:31:43.436854 containerd[1707]: time="2024-12-13T01:31:43.435557718Z" level=info msg="RemovePodSandbox for \"41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68\""
Dec 13 01:31:43.436854 containerd[1707]: time="2024-12-13T01:31:43.435584758Z" level=info msg="Forcibly stopping sandbox \"41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68\""
Dec 13 01:31:43.436854 containerd[1707]: time="2024-12-13T01:31:43.435636678Z" level=info msg="TearDown network for sandbox \"41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68\" successfully"
Dec 13 01:31:43.446990 containerd[1707]: time="2024-12-13T01:31:43.446942368Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:31:43.447196 containerd[1707]: time="2024-12-13T01:31:43.447177048Z" level=info msg="RemovePodSandbox \"41dc77fecf4c68bcdba16facf0496ddafbda1a9b6643fd07e17e31023095ec68\" returns successfully"
Dec 13 01:31:43.971012 systemd[1]: run-containerd-runc-k8s.io-841a9a91767ac87337c201c3fc929975a61a0da7837fe831d1c04e8e9bfd266e-runc.kyPfiF.mount: Deactivated successfully.
Dec 13 01:31:44.439608 systemd-networkd[1581]: lxc_health: Gained IPv6LL
Dec 13 01:31:50.528056 sshd[5118]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:50.531839 systemd[1]: sshd@25-10.200.20.20:22-10.200.16.10:36012.service: Deactivated successfully.
Dec 13 01:31:50.535007 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 01:31:50.535799 systemd-logind[1668]: Session 28 logged out. Waiting for processes to exit.
Dec 13 01:31:50.536939 systemd-logind[1668]: Removed session 28.