May 8 23:52:48.327256 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 8 23:52:48.327279 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu May 8 22:24:27 -00 2025
May 8 23:52:48.327287 kernel: KASLR enabled
May 8 23:52:48.327293 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
May 8 23:52:48.327300 kernel: printk: bootconsole [pl11] enabled
May 8 23:52:48.327305 kernel: efi: EFI v2.7 by EDK II
May 8 23:52:48.327313 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e423d98
May 8 23:52:48.327319 kernel: random: crng init done
May 8 23:52:48.327324 kernel: secureboot: Secure boot disabled
May 8 23:52:48.327330 kernel: ACPI: Early table checksum verification disabled
May 8 23:52:48.327336 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
May 8 23:52:48.327342 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 8 23:52:48.327348 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 8 23:52:48.327355 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
May 8 23:52:48.327363 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 8 23:52:48.327369 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 8 23:52:48.327375 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 8 23:52:48.327383 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 8 23:52:48.327389 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 8 23:52:48.327395 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 8 23:52:48.327402 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
May 8 23:52:48.327408 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 8 23:52:48.327415 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
May 8 23:52:48.327421 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
May 8 23:52:48.327427 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
May 8 23:52:48.327433 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
May 8 23:52:48.327439 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
May 8 23:52:48.327445 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
May 8 23:52:48.327453 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
May 8 23:52:48.327459 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
May 8 23:52:48.327466 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
May 8 23:52:48.327472 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
May 8 23:52:48.327478 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
May 8 23:52:48.327484 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
May 8 23:52:48.327490 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
May 8 23:52:48.327496 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
May 8 23:52:48.327503 kernel: Zone ranges:
May 8 23:52:48.329616 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
May 8 23:52:48.329631 kernel: DMA32 empty
May 8 23:52:48.329638 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
May 8 23:52:48.329654 kernel: Movable zone start for each node
May 8 23:52:48.329661 kernel: Early memory node ranges
May 8 23:52:48.329668 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
May 8 23:52:48.329674 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
May 8 23:52:48.329681 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
May 8 23:52:48.329689 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
May 8 23:52:48.329696 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
May 8 23:52:48.329703 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
May 8 23:52:48.329709 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
May 8 23:52:48.329717 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
May 8 23:52:48.329723 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
May 8 23:52:48.329730 kernel: psci: probing for conduit method from ACPI.
May 8 23:52:48.329747 kernel: psci: PSCIv1.1 detected in firmware.
May 8 23:52:48.329753 kernel: psci: Using standard PSCI v0.2 function IDs
May 8 23:52:48.329760 kernel: psci: MIGRATE_INFO_TYPE not supported.
May 8 23:52:48.329767 kernel: psci: SMC Calling Convention v1.4
May 8 23:52:48.329773 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
May 8 23:52:48.329781 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
May 8 23:52:48.329788 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 8 23:52:48.329795 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 8 23:52:48.329802 kernel: pcpu-alloc: [0] 0 [0] 1
May 8 23:52:48.329808 kernel: Detected PIPT I-cache on CPU0
May 8 23:52:48.329815 kernel: CPU features: detected: GIC system register CPU interface
May 8 23:52:48.329821 kernel: CPU features: detected: Hardware dirty bit management
May 8 23:52:48.329828 kernel: CPU features: detected: Spectre-BHB
May 8 23:52:48.329834 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 8 23:52:48.329841 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 8 23:52:48.329848 kernel: CPU features: detected: ARM erratum 1418040
May 8 23:52:48.329856 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
May 8 23:52:48.329863 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 8 23:52:48.329870 kernel: alternatives: applying boot alternatives
May 8 23:52:48.329878 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c64a0b436b1966f9e1b9e71c914f0e311fc31b586ad91dbeab7146e426399a98
May 8 23:52:48.329885 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 23:52:48.329892 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 23:52:48.329899 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 23:52:48.329906 kernel: Fallback order for Node 0: 0
May 8 23:52:48.329913 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
May 8 23:52:48.329919 kernel: Policy zone: Normal
May 8 23:52:48.329926 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 23:52:48.329935 kernel: software IO TLB: area num 2.
May 8 23:52:48.329941 kernel: software IO TLB: mapped [mem 0x0000000036620000-0x000000003a620000] (64MB)
May 8 23:52:48.329948 kernel: Memory: 3982376K/4194160K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39744K init, 897K bss, 211784K reserved, 0K cma-reserved)
May 8 23:52:48.329955 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 8 23:52:48.329962 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 23:52:48.329969 kernel: rcu: RCU event tracing is enabled.
May 8 23:52:48.329976 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 8 23:52:48.329983 kernel: Trampoline variant of Tasks RCU enabled.
May 8 23:52:48.329989 kernel: Tracing variant of Tasks RCU enabled.
May 8 23:52:48.329996 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 23:52:48.330003 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 8 23:52:48.330011 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 8 23:52:48.330018 kernel: GICv3: 960 SPIs implemented
May 8 23:52:48.330024 kernel: GICv3: 0 Extended SPIs implemented
May 8 23:52:48.330031 kernel: Root IRQ handler: gic_handle_irq
May 8 23:52:48.330037 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 8 23:52:48.330044 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
May 8 23:52:48.330050 kernel: ITS: No ITS available, not enabling LPIs
May 8 23:52:48.330057 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 23:52:48.330064 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 23:52:48.330070 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 8 23:52:48.330077 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 8 23:52:48.330084 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 8 23:52:48.330092 kernel: Console: colour dummy device 80x25
May 8 23:52:48.330099 kernel: printk: console [tty1] enabled
May 8 23:52:48.330106 kernel: ACPI: Core revision 20230628
May 8 23:52:48.330113 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 8 23:52:48.330120 kernel: pid_max: default: 32768 minimum: 301
May 8 23:52:48.330127 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 23:52:48.330134 kernel: landlock: Up and running.
May 8 23:52:48.330141 kernel: SELinux: Initializing.
May 8 23:52:48.330148 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 23:52:48.330157 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 23:52:48.330164 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 8 23:52:48.330171 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 8 23:52:48.330178 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
May 8 23:52:48.330185 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
May 8 23:52:48.330191 kernel: Hyper-V: enabling crash_kexec_post_notifiers
May 8 23:52:48.330199 kernel: rcu: Hierarchical SRCU implementation.
May 8 23:52:48.330213 kernel: rcu: Max phase no-delay instances is 400. May 8 23:52:48.330220 kernel: Remapping and enabling EFI services. May 8 23:52:48.330227 kernel: smp: Bringing up secondary CPUs ... May 8 23:52:48.330234 kernel: Detected PIPT I-cache on CPU1 May 8 23:52:48.330241 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 May 8 23:52:48.330250 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 23:52:48.330257 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 8 23:52:48.330265 kernel: smp: Brought up 1 node, 2 CPUs May 8 23:52:48.330272 kernel: SMP: Total of 2 processors activated. May 8 23:52:48.330279 kernel: CPU features: detected: 32-bit EL0 Support May 8 23:52:48.330288 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence May 8 23:52:48.330295 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 8 23:52:48.330302 kernel: CPU features: detected: CRC32 instructions May 8 23:52:48.330309 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 8 23:52:48.330316 kernel: CPU features: detected: LSE atomic instructions May 8 23:52:48.330323 kernel: CPU features: detected: Privileged Access Never May 8 23:52:48.330331 kernel: CPU: All CPU(s) started at EL1 May 8 23:52:48.330338 kernel: alternatives: applying system-wide alternatives May 8 23:52:48.330345 kernel: devtmpfs: initialized May 8 23:52:48.330354 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 23:52:48.330361 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 8 23:52:48.330369 kernel: pinctrl core: initialized pinctrl subsystem May 8 23:52:48.330376 kernel: SMBIOS 3.1.0 present. May 8 23:52:48.330383 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 May 8 23:52:48.330390 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 8 23:52:48.330398 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 8 23:52:48.330405 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 8 23:52:48.330414 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 8 23:52:48.330422 kernel: audit: initializing netlink subsys (disabled) May 8 23:52:48.330429 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 May 8 23:52:48.330436 kernel: thermal_sys: Registered thermal governor 'step_wise' May 8 23:52:48.330443 kernel: cpuidle: using governor menu May 8 23:52:48.330451 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
May 8 23:52:48.330458 kernel: ASID allocator initialised with 32768 entries May 8 23:52:48.330465 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 8 23:52:48.330472 kernel: Serial: AMBA PL011 UART driver May 8 23:52:48.330480 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 8 23:52:48.330488 kernel: Modules: 0 pages in range for non-PLT usage May 8 23:52:48.330495 kernel: Modules: 508944 pages in range for PLT usage May 8 23:52:48.330502 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 8 23:52:48.332163 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 8 23:52:48.332182 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 8 23:52:48.332190 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 8 23:52:48.332198 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 8 23:52:48.332205 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 8 23:52:48.332219 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 8 23:52:48.332227 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 8 23:52:48.332234 kernel: ACPI: Added _OSI(Module Device) May 8 23:52:48.332242 kernel: ACPI: Added _OSI(Processor Device) May 8 23:52:48.332250 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 8 23:52:48.332257 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 8 23:52:48.332264 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 8 23:52:48.332272 kernel: ACPI: Interpreter enabled May 8 23:52:48.332279 kernel: ACPI: Using GIC for interrupt routing May 8 23:52:48.332286 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA May 8 23:52:48.332295 kernel: printk: console [ttyAMA0] enabled May 8 23:52:48.332303 kernel: printk: bootconsole [pl11] disabled May 8 23:52:48.332310 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA May 8 23:52:48.332318 kernel: iommu: Default domain type: Translated May 8 23:52:48.332325 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 8 23:52:48.332333 kernel: efivars: Registered efivars operations May 8 23:52:48.332340 kernel: vgaarb: loaded May 8 23:52:48.332347 kernel: clocksource: Switched to clocksource arch_sys_counter May 8 23:52:48.332354 kernel: VFS: Disk quotas dquot_6.6.0 May 8 23:52:48.332364 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 8 23:52:48.332371 kernel: pnp: PnP ACPI init May 8 23:52:48.332378 kernel: pnp: PnP ACPI: found 0 devices May 8 23:52:48.332385 kernel: NET: Registered PF_INET protocol family May 8 23:52:48.332404 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 8 23:52:48.332421 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 8 23:52:48.332429 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 8 23:52:48.332436 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 8 23:52:48.332445 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 8 23:52:48.332453 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 8 23:52:48.332460 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 23:52:48.332467 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 23:52:48.332474 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol 
family May 8 23:52:48.332482 kernel: PCI: CLS 0 bytes, default 64 May 8 23:52:48.332489 kernel: kvm [1]: HYP mode not available May 8 23:52:48.332496 kernel: Initialise system trusted keyrings May 8 23:52:48.332504 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 8 23:52:48.332523 kernel: Key type asymmetric registered May 8 23:52:48.332531 kernel: Asymmetric key parser 'x509' registered May 8 23:52:48.332538 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 8 23:52:48.332545 kernel: io scheduler mq-deadline registered May 8 23:52:48.332552 kernel: io scheduler kyber registered May 8 23:52:48.332559 kernel: io scheduler bfq registered May 8 23:52:48.332566 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 8 23:52:48.332574 kernel: thunder_xcv, ver 1.0 May 8 23:52:48.332581 kernel: thunder_bgx, ver 1.0 May 8 23:52:48.332588 kernel: nicpf, ver 1.0 May 8 23:52:48.332597 kernel: nicvf, ver 1.0 May 8 23:52:48.332747 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 8 23:52:48.332822 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-08T23:52:47 UTC (1746748367) May 8 23:52:48.332832 kernel: efifb: probing for efifb May 8 23:52:48.332840 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k May 8 23:52:48.332848 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 May 8 23:52:48.332855 kernel: efifb: scrolling: redraw May 8 23:52:48.332865 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 8 23:52:48.332872 kernel: Console: switching to colour frame buffer device 128x48 May 8 23:52:48.332879 kernel: fb0: EFI VGA frame buffer device May 8 23:52:48.332886 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... May 8 23:52:48.332894 kernel: hid: raw HID events driver (C) Jiri Kosina May 8 23:52:48.332901 kernel: No ACPI PMU IRQ for CPU0 May 8 23:52:48.332908 kernel: No ACPI PMU IRQ for CPU1 May 8 23:52:48.332915 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available May 8 23:52:48.332922 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 8 23:52:48.332931 kernel: watchdog: Hard watchdog permanently disabled May 8 23:52:48.332938 kernel: NET: Registered PF_INET6 protocol family May 8 23:52:48.332945 kernel: Segment Routing with IPv6 May 8 23:52:48.332953 kernel: In-situ OAM (IOAM) with IPv6 May 8 23:52:48.332960 kernel: NET: Registered PF_PACKET protocol family May 8 23:52:48.332967 kernel: Key type dns_resolver registered May 8 23:52:48.332974 kernel: registered taskstats version 1 May 8 23:52:48.332981 kernel: Loading compiled-in X.509 certificates May 8 23:52:48.332989 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: c12e278d643ef0ddd9117a97de150d7afa727d1b' May 8 23:52:48.332996 kernel: Key type .fscrypt registered May 8 23:52:48.333005 kernel: Key type fscrypt-provisioning registered May 8 23:52:48.333012 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 8 23:52:48.333020 kernel: ima: Allocated hash algorithm: sha1 May 8 23:52:48.333027 kernel: ima: No architecture policies found May 8 23:52:48.333034 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 8 23:52:48.333041 kernel: clk: Disabling unused clocks May 8 23:52:48.333048 kernel: Freeing unused kernel memory: 39744K May 8 23:52:48.333056 kernel: Run /init as init process May 8 23:52:48.333064 kernel: with arguments: May 8 23:52:48.333072 kernel: /init May 8 23:52:48.333079 kernel: with environment: May 8 23:52:48.333086 kernel: HOME=/ May 8 23:52:48.333093 kernel: TERM=linux May 8 23:52:48.333100 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 8 23:52:48.333109 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 23:52:48.333119 systemd[1]: Detected virtualization microsoft. May 8 23:52:48.333128 systemd[1]: Detected architecture arm64. May 8 23:52:48.333136 systemd[1]: Running in initrd. May 8 23:52:48.333144 systemd[1]: No hostname configured, using default hostname. May 8 23:52:48.333151 systemd[1]: Hostname set to . May 8 23:52:48.333159 systemd[1]: Initializing machine ID from random generator. May 8 23:52:48.333167 systemd[1]: Queued start job for default target initrd.target. May 8 23:52:48.333175 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 23:52:48.333182 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 23:52:48.333193 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 8 23:52:48.333200 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 23:52:48.333208 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 8 23:52:48.333216 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 8 23:52:48.333225 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 8 23:52:48.333260 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 8 23:52:48.333272 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 23:52:48.333282 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 23:52:48.333290 systemd[1]: Reached target paths.target - Path Units. May 8 23:52:48.333298 systemd[1]: Reached target slices.target - Slice Units. May 8 23:52:48.333306 systemd[1]: Reached target swap.target - Swaps. May 8 23:52:48.333314 systemd[1]: Reached target timers.target - Timer Units. May 8 23:52:48.333322 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 8 23:52:48.333329 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 23:52:48.333338 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 8 23:52:48.333347 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 8 23:52:48.333355 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
May 8 23:52:48.333363 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 23:52:48.333370 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 23:52:48.333378 systemd[1]: Reached target sockets.target - Socket Units. May 8 23:52:48.333386 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 8 23:52:48.333394 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 23:52:48.333401 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 8 23:52:48.333409 systemd[1]: Starting systemd-fsck-usr.service... May 8 23:52:48.333418 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 23:52:48.333426 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 23:52:48.333458 systemd-journald[218]: Collecting audit messages is disabled. May 8 23:52:48.333478 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 23:52:48.333489 systemd-journald[218]: Journal started May 8 23:52:48.339817 systemd-journald[218]: Runtime Journal (/run/log/journal/cff97c64ccc640759aa0cd20cd7bc12a) is 8.0M, max 78.5M, 70.5M free. May 8 23:52:48.353559 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 8 23:52:48.353599 kernel: Bridge firewalling registered May 8 23:52:48.331372 systemd-modules-load[219]: Inserted module 'overlay' May 8 23:52:48.367843 systemd[1]: Started systemd-journald.service - Journal Service. May 8 23:52:48.356271 systemd-modules-load[219]: Inserted module 'br_netfilter' May 8 23:52:48.379202 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 8 23:52:48.385342 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 23:52:48.392584 systemd[1]: Finished systemd-fsck-usr.service. May 8 23:52:48.402546 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 23:52:48.415947 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:52:48.440379 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 23:52:48.457487 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 23:52:48.475788 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 23:52:48.495268 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 23:52:48.513056 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 23:52:48.529593 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 23:52:48.542807 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 23:52:48.550300 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 23:52:48.579674 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
May 8 23:52:48.598885 dracut-cmdline[252]: dracut-dracut-053 May 8 23:52:48.606624 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c64a0b436b1966f9e1b9e71c914f0e311fc31b586ad91dbeab7146e426399a98 May 8 23:52:48.601757 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 23:52:48.617765 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 23:52:48.677189 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 23:52:48.681249 systemd-resolved[258]: Positive Trust Anchors: May 8 23:52:48.681261 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 23:52:48.681293 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 23:52:48.683474 systemd-resolved[258]: Defaulting to hostname 'linux'. May 8 23:52:48.686325 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 23:52:48.701791 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 23:52:48.882541 kernel: SCSI subsystem initialized May 8 23:52:48.890552 kernel: Loading iSCSI transport class v2.0-870. May 8 23:52:48.900532 kernel: iscsi: registered transport (tcp) May 8 23:52:48.918873 kernel: iscsi: registered transport (qla4xxx) May 8 23:52:48.918922 kernel: QLogic iSCSI HBA Driver May 8 23:52:48.960962 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 8 23:52:48.979806 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 8 23:52:49.011687 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 8 23:52:49.011758 kernel: device-mapper: uevent: version 1.0.3 May 8 23:52:49.018088 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 8 23:52:49.070534 kernel: raid6: neonx8 gen() 15774 MB/s May 8 23:52:49.087522 kernel: raid6: neonx4 gen() 15657 MB/s May 8 23:52:49.107525 kernel: raid6: neonx2 gen() 13236 MB/s May 8 23:52:49.128522 kernel: raid6: neonx1 gen() 10486 MB/s May 8 23:52:49.148524 kernel: raid6: int64x8 gen() 6958 MB/s May 8 23:52:49.168524 kernel: raid6: int64x4 gen() 7350 MB/s May 8 23:52:49.189526 kernel: raid6: int64x2 gen() 6124 MB/s May 8 23:52:49.212773 kernel: raid6: int64x1 gen() 5061 MB/s May 8 23:52:49.212803 kernel: raid6: using algorithm neonx8 gen() 15774 MB/s May 8 23:52:49.236624 kernel: raid6: .... 
xor() 11937 MB/s, rmw enabled May 8 23:52:49.236634 kernel: raid6: using neon recovery algorithm May 8 23:52:49.248749 kernel: xor: measuring software checksum speed May 8 23:52:49.248762 kernel: 8regs : 19802 MB/sec May 8 23:52:49.252083 kernel: 32regs : 19641 MB/sec May 8 23:52:49.255410 kernel: arm64_neon : 27043 MB/sec May 8 23:52:49.259694 kernel: xor: using function: arm64_neon (27043 MB/sec) May 8 23:52:49.310535 kernel: Btrfs loaded, zoned=no, fsverity=no May 8 23:52:49.320905 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 8 23:52:49.337648 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 23:52:49.360375 systemd-udevd[439]: Using default interface naming scheme 'v255'. May 8 23:52:49.365588 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 23:52:49.384640 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 8 23:52:49.407361 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation May 8 23:52:49.433549 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 8 23:52:49.452832 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 23:52:49.493504 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 23:52:49.515771 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 8 23:52:49.537663 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 8 23:52:49.545835 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 8 23:52:49.553627 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 23:52:49.567886 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 23:52:49.598801 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 8 23:52:49.616627 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 8 23:52:49.643323 kernel: hv_vmbus: Vmbus version:5.3 May 8 23:52:49.636423 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 23:52:49.636569 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 23:52:49.655136 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 23:52:49.671483 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 23:52:49.768353 kernel: hv_vmbus: registering driver hid_hyperv May 8 23:52:49.768403 kernel: pps_core: LinuxPPS API ver. 1 registered May 8 23:52:49.768415 kernel: hv_vmbus: registering driver hv_storvsc May 8 23:52:49.768425 kernel: scsi host0: storvsc_host_t May 8 23:52:49.768753 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 May 8 23:52:49.768766 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 May 8 23:52:49.768869 kernel: hv_vmbus: registering driver hyperv_keyboard May 8 23:52:49.768879 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 8 23:52:49.768927 kernel: hv_vmbus: registering driver hv_netvsc May 8 23:52:49.768938 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on May 8 23:52:49.769070 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 May 8 23:52:49.769179 kernel: scsi host1: storvsc_host_t May 8 23:52:49.769361 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 May 8 23:52:49.671669 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:52:49.678245 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 8 23:52:49.781162 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 23:52:49.810672 kernel: PTP clock support registered May 8 23:52:49.804614 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:52:49.828657 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 23:52:49.858794 kernel: hv_utils: Registering HyperV Utility Driver May 8 23:52:49.858815 kernel: hv_vmbus: registering driver hv_utils May 8 23:52:49.858824 kernel: hv_netvsc 002248b5-c6ce-0022-48b5-c6ce002248b5 eth0: VF slot 1 added May 8 23:52:49.859104 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 23:52:50.052224 kernel: hv_utils: Heartbeat IC version 3.0 May 8 23:52:50.052245 kernel: hv_utils: Shutdown IC version 3.2 May 8 23:52:50.052254 kernel: hv_utils: TimeSync IC version 4.0 May 8 23:52:49.859216 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 23:52:50.051704 systemd-resolved[258]: Clock change detected. Flushing caches. May 8 23:52:50.100918 kernel: hv_vmbus: registering driver hv_pci May 8 23:52:50.100948 kernel: sr 0:0:0:2: [sr0] scsi-1 drive May 8 23:52:50.101104 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 8 23:52:50.101114 kernel: hv_pci a545b615-ff41-4dd6-8675-f4d5ed9fcfe6: PCI VMBus probing: Using version 0x10004 May 8 23:52:50.101210 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 May 8 23:52:50.101293 kernel: hv_pci a545b615-ff41-4dd6-8675-f4d5ed9fcfe6: PCI host bridge to bus ff41:00 May 8 23:52:50.062636 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 23:52:50.128810 kernel: pci_bus ff41:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] May 8 23:52:50.128992 kernel: pci_bus ff41:00: No busn resource found for root bus, will use [bus 00-ff] May 8 23:52:50.062691 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:52:50.143089 kernel: pci ff41:00:02.0: [15b3:1018] type 00 class 0x020000 May 8 23:52:50.088566 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
May 8 23:52:50.184644 kernel: pci ff41:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] May 8 23:52:50.184694 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) May 8 23:52:50.184866 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks May 8 23:52:50.184952 kernel: pci ff41:00:02.0: enabling Extended Tags May 8 23:52:50.184969 kernel: sd 0:0:0:0: [sda] Write Protect is off May 8 23:52:50.185054 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 May 8 23:52:50.185137 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA May 8 23:52:50.129712 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 23:52:50.251974 kernel: pci ff41:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ff41:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) May 8 23:52:50.252702 kernel: pci_bus ff41:00: busn_res: [bus 00-ff] end is updated to 00 May 8 23:52:50.252797 kernel: pci ff41:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] May 8 23:52:50.252886 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 23:52:50.252897 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 8 23:52:50.173171 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:52:50.244718 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 23:52:50.305468 kernel: mlx5_core ff41:00:02.0: enabling device (0000 -> 0002) May 8 23:52:50.305706 kernel: mlx5_core ff41:00:02.0: firmware version: 16.30.1284 May 8 23:52:50.297921 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 23:52:50.511834 kernel: hv_netvsc 002248b5-c6ce-0022-48b5-c6ce002248b5 eth0: VF registering: eth1 May 8 23:52:50.512034 kernel: mlx5_core ff41:00:02.0 eth1: joined to eth0 May 8 23:52:50.520565 kernel: mlx5_core ff41:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) May 8 23:52:50.531468 kernel: mlx5_core ff41:00:02.0 enP65345s1: renamed from eth1 May 8 23:52:50.683986 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. May 8 23:52:50.717585 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. May 8 23:52:50.746017 kernel: BTRFS: device fsid 3ce8b70c-40bf-43bf-a983-bb6fd2e43017 devid 1 transid 43 /dev/sda3 scanned by (udev-worker) (485) May 8 23:52:50.746040 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (501) May 8 23:52:50.749105 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. May 8 23:52:50.756551 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. May 8 23:52:50.788503 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. May 8 23:52:50.803652 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 8 23:52:50.832158 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 23:52:51.844476 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 23:52:51.844896 disk-uuid[607]: The operation has completed successfully. May 8 23:52:51.895000 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 23:52:51.895092 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 8 23:52:51.929577 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
May 8 23:52:51.942333 sh[693]: Success May 8 23:52:51.970483 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 8 23:52:52.125995 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 8 23:52:52.135577 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 8 23:52:52.145476 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 8 23:52:52.175960 kernel: BTRFS info (device dm-0): first mount of filesystem 3ce8b70c-40bf-43bf-a983-bb6fd2e43017 May 8 23:52:52.176002 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 8 23:52:52.183497 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 8 23:52:52.188966 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 8 23:52:52.193738 kernel: BTRFS info (device dm-0): using free space tree May 8 23:52:52.415477 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 8 23:52:52.420962 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 8 23:52:52.438712 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 8 23:52:52.446340 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 8 23:52:52.485800 kernel: BTRFS info (device sda6): first mount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 8 23:52:52.485859 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 8 23:52:52.490633 kernel: BTRFS info (device sda6): using free space tree May 8 23:52:52.510203 kernel: BTRFS info (device sda6): auto enabling async discard May 8 23:52:52.517942 systemd[1]: mnt-oem.mount: Deactivated successfully. May 8 23:52:52.531449 kernel: BTRFS info (device sda6): last unmount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 8 23:52:52.538653 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 8 23:52:52.554736 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 8 23:52:52.582212 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 23:52:52.598619 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 23:52:52.631894 systemd-networkd[877]: lo: Link UP May 8 23:52:52.631903 systemd-networkd[877]: lo: Gained carrier May 8 23:52:52.636661 systemd-networkd[877]: Enumeration completed May 8 23:52:52.636864 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 23:52:52.647278 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 23:52:52.647282 systemd-networkd[877]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 23:52:52.648020 systemd[1]: Reached target network.target - Network. May 8 23:52:52.734455 kernel: mlx5_core ff41:00:02.0 enP65345s1: Link up May 8 23:52:52.772698 kernel: hv_netvsc 002248b5-c6ce-0022-48b5-c6ce002248b5 eth0: Data path switched to VF: enP65345s1 May 8 23:52:52.775018 systemd-networkd[877]: enP65345s1: Link UP May 8 23:52:52.775100 systemd-networkd[877]: eth0: Link UP May 8 23:52:52.775200 systemd-networkd[877]: eth0: Gained carrier May 8 23:52:52.775209 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 8 23:52:52.787729 systemd-networkd[877]: enP65345s1: Gained carrier May 8 23:52:52.814477 systemd-networkd[877]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 8 23:52:53.319867 ignition[856]: Ignition 2.20.0 May 8 23:52:53.319879 ignition[856]: Stage: fetch-offline May 8 23:52:53.321713 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 8 23:52:53.319914 ignition[856]: no configs at "/usr/lib/ignition/base.d" May 8 23:52:53.345579 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 8 23:52:53.319922 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 8 23:52:53.320033 ignition[856]: parsed url from cmdline: "" May 8 23:52:53.320037 ignition[856]: no config URL provided May 8 23:52:53.320042 ignition[856]: reading system config file "/usr/lib/ignition/user.ign" May 8 23:52:53.320051 ignition[856]: no config at "/usr/lib/ignition/user.ign" May 8 23:52:53.320056 ignition[856]: failed to fetch config: resource requires networking May 8 23:52:53.320249 ignition[856]: Ignition finished successfully May 8 23:52:53.362014 ignition[886]: Ignition 2.20.0 May 8 23:52:53.362022 ignition[886]: Stage: fetch May 8 23:52:53.362233 ignition[886]: no configs at "/usr/lib/ignition/base.d" May 8 23:52:53.362244 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 8 23:52:53.362346 ignition[886]: parsed url from cmdline: "" May 8 23:52:53.362350 ignition[886]: no config URL provided May 8 23:52:53.362354 ignition[886]: reading system config file "/usr/lib/ignition/user.ign" May 8 23:52:53.362362 ignition[886]: no config at "/usr/lib/ignition/user.ign" May 8 23:52:53.362389 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 May 8 23:52:53.490597 ignition[886]: GET result: OK May 8 23:52:53.490680 ignition[886]: config has been read from IMDS userdata May 8 23:52:53.490719 ignition[886]: parsing config with SHA512: ddbdeebdd143268af7bb392b41d0790d17c330d1c87c82efc5ea26bd9465d6df6613c99516b067f795a003136e71385d26d69fd0669893e099560af7c3b23c29 May 8 23:52:53.495218 unknown[886]: fetched base config from "system" May 8 23:52:53.495650 ignition[886]: fetch: fetch complete May 8 23:52:53.495226 unknown[886]: fetched base config from "system" May 8 23:52:53.495654 ignition[886]: fetch: fetch passed May 8 23:52:53.495230 unknown[886]: fetched user config from "azure" May 8 23:52:53.495696 ignition[886]: Ignition finished successfully May 8 23:52:53.501298 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 8 23:52:53.524591 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 8 23:52:53.552145 ignition[893]: Ignition 2.20.0 May 8 23:52:53.552166 ignition[893]: Stage: kargs May 8 23:52:53.552339 ignition[893]: no configs at "/usr/lib/ignition/base.d" May 8 23:52:53.562836 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 8 23:52:53.552348 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 8 23:52:53.553377 ignition[893]: kargs: kargs passed May 8 23:52:53.553426 ignition[893]: Ignition finished successfully May 8 23:52:53.582591 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 8 23:52:53.602022 ignition[899]: Ignition 2.20.0 May 8 23:52:53.602028 ignition[899]: Stage: disks May 8 23:52:53.607047 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
May 8 23:52:53.602207 ignition[899]: no configs at "/usr/lib/ignition/base.d" May 8 23:52:53.615907 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 8 23:52:53.602217 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 8 23:52:53.628338 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 23:52:53.603312 ignition[899]: disks: disks passed May 8 23:52:53.640310 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 23:52:53.603362 ignition[899]: Ignition finished successfully May 8 23:52:53.653520 systemd[1]: Reached target sysinit.target - System Initialization. May 8 23:52:53.665988 systemd[1]: Reached target basic.target - Basic System. May 8 23:52:53.692637 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 8 23:52:53.761610 systemd-fsck[908]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks May 8 23:52:53.771753 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 8 23:52:53.791683 systemd[1]: Mounting sysroot.mount - /sysroot... May 8 23:52:53.849493 kernel: EXT4-fs (sda9): mounted filesystem ad4e3afa-b242-4ca7-a808-1f37a4d41793 r/w with ordered data mode. Quota mode: none. May 8 23:52:53.849805 systemd[1]: Mounted sysroot.mount - /sysroot. May 8 23:52:53.854908 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 8 23:52:53.892514 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 23:52:53.902736 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 8 23:52:53.910605 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 8 23:52:53.930057 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 23:52:53.955674 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (919) May 8 23:52:53.955700 kernel: BTRFS info (device sda6): first mount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 8 23:52:53.955710 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 8 23:52:53.930094 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 8 23:52:53.989933 kernel: BTRFS info (device sda6): using free space tree May 8 23:52:53.989956 kernel: BTRFS info (device sda6): auto enabling async discard May 8 23:52:53.974015 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 8 23:52:53.985559 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 23:52:54.006688 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
May 8 23:52:54.065588 systemd-networkd[877]: enP65345s1: Gained IPv6LL May 8 23:52:54.384117 coreos-metadata[921]: May 08 23:52:54.384 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 8 23:52:54.391919 systemd-networkd[877]: eth0: Gained IPv6LL May 8 23:52:54.397101 coreos-metadata[921]: May 08 23:52:54.396 INFO Fetch successful May 8 23:52:54.397101 coreos-metadata[921]: May 08 23:52:54.396 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 May 8 23:52:54.414098 coreos-metadata[921]: May 08 23:52:54.413 INFO Fetch successful May 8 23:52:54.424757 coreos-metadata[921]: May 08 23:52:54.424 INFO wrote hostname ci-4152.2.3-n-71d56f534c to /sysroot/etc/hostname May 8 23:52:54.433847 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 8 23:52:54.637963 initrd-setup-root[951]: cut: /sysroot/etc/passwd: No such file or directory May 8 23:52:54.658579 initrd-setup-root[958]: cut: /sysroot/etc/group: No such file or directory May 8 23:52:54.668203 initrd-setup-root[965]: cut: /sysroot/etc/shadow: No such file or directory May 8 23:52:54.676525 initrd-setup-root[972]: cut: /sysroot/etc/gshadow: No such file or directory May 8 23:52:55.343528 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 8 23:52:55.357615 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 8 23:52:55.365626 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 8 23:52:55.387566 kernel: BTRFS info (device sda6): last unmount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 8 23:52:55.387179 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 8 23:52:55.414481 ignition[1039]: INFO : Ignition 2.20.0 May 8 23:52:55.414481 ignition[1039]: INFO : Stage: mount May 8 23:52:55.414481 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 23:52:55.414481 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 8 23:52:55.416623 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 8 23:52:55.460530 ignition[1039]: INFO : mount: mount passed May 8 23:52:55.460530 ignition[1039]: INFO : Ignition finished successfully May 8 23:52:55.424246 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 8 23:52:55.449669 systemd[1]: Starting ignition-files.service - Ignition (files)... May 8 23:52:55.468677 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 23:52:55.503459 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1052) May 8 23:52:55.516156 kernel: BTRFS info (device sda6): first mount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 8 23:52:55.516195 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 8 23:52:55.520439 kernel: BTRFS info (device sda6): using free space tree May 8 23:52:55.527469 kernel: BTRFS info (device sda6): auto enabling async discard May 8 23:52:55.528519 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 8 23:52:55.557089 ignition[1069]: INFO : Ignition 2.20.0 May 8 23:52:55.557089 ignition[1069]: INFO : Stage: files May 8 23:52:55.564815 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 23:52:55.564815 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 8 23:52:55.564815 ignition[1069]: DEBUG : files: compiled without relabeling support, skipping May 8 23:52:55.582657 ignition[1069]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 23:52:55.582657 ignition[1069]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 23:52:55.598759 ignition[1069]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 23:52:55.606557 ignition[1069]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 23:52:55.613978 ignition[1069]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 23:52:55.611865 unknown[1069]: wrote ssh authorized keys file for user: core May 8 23:52:55.627412 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 8 23:52:55.627412 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 8 23:52:55.736884 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 8 23:52:55.950883 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 8 23:52:55.962180 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 8 23:52:55.962180 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 8 23:52:56.480737 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 8 23:52:56.684761 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 8 23:52:56.684761 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 23:52:56.714983 
ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 8 23:52:57.148026 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 8 23:52:57.896545 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 8 23:52:57.896545 ignition[1069]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 8 23:52:57.915781 ignition[1069]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 23:52:57.915781 ignition[1069]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 23:52:57.915781 ignition[1069]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 8 23:52:57.915781 ignition[1069]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" May 8 23:52:57.915781 ignition[1069]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" May 8 23:52:57.915781 ignition[1069]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 23:52:57.915781 ignition[1069]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 23:52:57.915781 ignition[1069]: INFO : files: files passed May 8 23:52:57.915781 ignition[1069]: INFO : Ignition finished successfully May 8 23:52:57.921753 systemd[1]: Finished ignition-files.service - Ignition (files). May 8 23:52:57.960754 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 8 23:52:57.976618 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 8 23:52:58.055052 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 23:52:58.055052 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 8 23:52:58.001346 systemd[1]: ignition-quench.service: Deactivated successfully. 
May 8 23:52:58.086673 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 23:52:58.001499 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 8 23:52:58.026477 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 23:52:58.034952 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 8 23:52:58.064720 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 8 23:52:58.104316 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 23:52:58.104472 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 23:52:58.116248 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 23:52:58.127453 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 23:52:58.139883 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 23:52:58.152198 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 23:52:58.176473 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 23:52:58.201811 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 23:52:58.220339 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 23:52:58.227387 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 23:52:58.239979 systemd[1]: Stopped target timers.target - Timer Units. May 8 23:52:58.250953 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 23:52:58.251077 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 23:52:58.267419 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 23:52:58.273302 systemd[1]: Stopped target basic.target - Basic System. May 8 23:52:58.284762 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 23:52:58.295902 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 23:52:58.306800 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 23:52:58.318395 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 23:52:58.329886 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 23:52:58.342258 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 23:52:58.353770 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 23:52:58.367264 systemd[1]: Stopped target swap.target - Swaps. May 8 23:52:58.377940 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 23:52:58.378078 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 23:52:58.394516 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 23:52:58.401277 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 23:52:58.413890 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 23:52:58.413961 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 23:52:58.427280 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 23:52:58.427408 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
May 8 23:52:58.446631 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 23:52:58.446754 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 23:52:58.520349 ignition[1121]: INFO : Ignition 2.20.0 May 8 23:52:58.520349 ignition[1121]: INFO : Stage: umount May 8 23:52:58.520349 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 23:52:58.520349 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 8 23:52:58.520349 ignition[1121]: INFO : umount: umount passed May 8 23:52:58.520349 ignition[1121]: INFO : Ignition finished successfully May 8 23:52:58.454626 systemd[1]: ignition-files.service: Deactivated successfully. May 8 23:52:58.454722 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 23:52:58.465612 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 8 23:52:58.465710 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 8 23:52:58.486769 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 23:52:58.504342 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 23:52:58.504549 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 23:52:58.527598 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 23:52:58.536649 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 23:52:58.536871 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 23:52:58.555848 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 23:52:58.556011 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 23:52:58.581247 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 23:52:58.582036 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 23:52:58.582140 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 23:52:58.591166 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 23:52:58.591267 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 23:52:58.600159 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 23:52:58.600221 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 23:52:58.610654 systemd[1]: ignition-fetch.service: Deactivated successfully. May 8 23:52:58.610701 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 8 23:52:58.621855 systemd[1]: Stopped target network.target - Network. May 8 23:52:58.632532 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 23:52:58.632603 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 23:52:58.645806 systemd[1]: Stopped target paths.target - Path Units. May 8 23:52:58.656629 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 23:52:58.661479 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 23:52:58.671131 systemd[1]: Stopped target slices.target - Slice Units. May 8 23:52:58.682387 systemd[1]: Stopped target sockets.target - Socket Units. May 8 23:52:58.692705 systemd[1]: iscsid.socket: Deactivated successfully. May 8 23:52:58.692759 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 23:52:58.705652 systemd[1]: iscsiuio.socket: Deactivated successfully. 
May 8 23:52:58.705690 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 23:52:58.718619 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 23:52:58.718678 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 23:52:58.729700 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 23:52:58.729750 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 23:52:59.007643 kernel: hv_netvsc 002248b5-c6ce-0022-48b5-c6ce002248b5 eth0: Data path switched from VF: enP65345s1 May 8 23:52:58.742251 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 23:52:58.752739 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 23:52:58.764465 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 23:52:58.764562 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 23:52:58.776494 systemd-networkd[877]: eth0: DHCPv6 lease lost May 8 23:52:58.783852 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 23:52:58.784036 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 23:52:58.794999 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 23:52:58.795142 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 23:52:58.809184 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 23:52:58.809238 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 23:52:58.836690 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 23:52:58.847466 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 23:52:58.847547 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 23:52:58.861268 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 23:52:58.861331 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 23:52:58.872889 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 23:52:58.872958 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 23:52:58.887743 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 23:52:58.887807 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 23:52:58.904606 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 23:52:58.938917 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 23:52:58.940275 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 23:52:58.952023 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 23:52:58.952111 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 23:52:58.963660 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 23:52:58.963699 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 23:52:58.975189 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 23:52:58.975245 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 8 23:52:59.001857 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 23:52:59.001926 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 23:52:59.020345 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 8 23:52:59.020413 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 23:52:59.308962 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). May 8 23:52:59.065800 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 23:52:59.085806 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 23:52:59.085893 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 23:52:59.097921 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 23:52:59.097979 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:52:59.112256 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 23:52:59.112378 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 23:52:59.123103 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 23:52:59.124464 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 23:52:59.146060 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 23:52:59.146175 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 23:52:59.158104 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 23:52:59.158244 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 23:52:59.169351 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 23:52:59.204734 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 23:52:59.228688 systemd[1]: Switching root. May 8 23:52:59.407655 systemd-journald[218]: Journal stopped May 8 23:52:48.327256 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 8 23:52:48.327279 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu May 8 22:24:27 -00 2025 May 8 23:52:48.327287 kernel: KASLR enabled May 8 23:52:48.327293 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') May 8 23:52:48.327300 kernel: printk: bootconsole [pl11] enabled May 8 23:52:48.327305 kernel: efi: EFI v2.7 by EDK II May 8 23:52:48.327313 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e423d98 May 8 23:52:48.327319 kernel: random: crng init done May 8 23:52:48.327324 kernel: secureboot: Secure boot disabled May 8 23:52:48.327330 kernel: ACPI: Early table checksum verification disabled May 8 23:52:48.327336 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) May 8 23:52:48.327342 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 8 23:52:48.327348 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) May 8 23:52:48.327355 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) May 8 23:52:48.327363 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) May 8 23:52:48.327369 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 8 23:52:48.327375 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 8 23:52:48.327383 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 8 23:52:48.327389 kernel: ACPI: APIC 
0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) May 8 23:52:48.327395 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) May 8 23:52:48.327402 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) May 8 23:52:48.327408 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 8 23:52:48.327415 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 May 8 23:52:48.327421 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] May 8 23:52:48.327427 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] May 8 23:52:48.327433 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] May 8 23:52:48.327439 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] May 8 23:52:48.327445 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] May 8 23:52:48.327453 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] May 8 23:52:48.327459 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] May 8 23:52:48.327466 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] May 8 23:52:48.327472 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] May 8 23:52:48.327478 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] May 8 23:52:48.327484 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] May 8 23:52:48.327490 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] May 8 23:52:48.327496 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] May 8 23:52:48.327503 kernel: Zone ranges: May 8 23:52:48.329616 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] May 8 23:52:48.329631 kernel: DMA32 empty May 8 23:52:48.329638 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] May 8 23:52:48.329654 kernel: Movable zone start for each node May 8 23:52:48.329661 kernel: Early memory node ranges May 8 23:52:48.329668 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] May 8 23:52:48.329674 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] May 8 23:52:48.329681 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] May 8 23:52:48.329689 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] May 8 23:52:48.329696 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] May 8 23:52:48.329703 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] May 8 23:52:48.329709 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] May 8 23:52:48.329717 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] May 8 23:52:48.329723 kernel: On node 0, zone DMA: 36 pages in unavailable ranges May 8 23:52:48.329730 kernel: psci: probing for conduit method from ACPI. May 8 23:52:48.329747 kernel: psci: PSCIv1.1 detected in firmware. May 8 23:52:48.329753 kernel: psci: Using standard PSCI v0.2 function IDs May 8 23:52:48.329760 kernel: psci: MIGRATE_INFO_TYPE not supported. 
May 8 23:52:48.329767 kernel: psci: SMC Calling Convention v1.4 May 8 23:52:48.329773 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 May 8 23:52:48.329781 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 May 8 23:52:48.329788 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 8 23:52:48.329795 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 8 23:52:48.329802 kernel: pcpu-alloc: [0] 0 [0] 1 May 8 23:52:48.329808 kernel: Detected PIPT I-cache on CPU0 May 8 23:52:48.329815 kernel: CPU features: detected: GIC system register CPU interface May 8 23:52:48.329821 kernel: CPU features: detected: Hardware dirty bit management May 8 23:52:48.329828 kernel: CPU features: detected: Spectre-BHB May 8 23:52:48.329834 kernel: CPU features: kernel page table isolation forced ON by KASLR May 8 23:52:48.329841 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 8 23:52:48.329848 kernel: CPU features: detected: ARM erratum 1418040 May 8 23:52:48.329856 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) May 8 23:52:48.329863 kernel: CPU features: detected: SSBS not fully self-synchronizing May 8 23:52:48.329870 kernel: alternatives: applying boot alternatives May 8 23:52:48.329878 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c64a0b436b1966f9e1b9e71c914f0e311fc31b586ad91dbeab7146e426399a98 May 8 23:52:48.329885 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 8 23:52:48.329892 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 8 23:52:48.329899 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 23:52:48.329906 kernel: Fallback order for Node 0: 0 May 8 23:52:48.329913 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 May 8 23:52:48.329919 kernel: Policy zone: Normal May 8 23:52:48.329926 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 8 23:52:48.329935 kernel: software IO TLB: area num 2. May 8 23:52:48.329941 kernel: software IO TLB: mapped [mem 0x0000000036620000-0x000000003a620000] (64MB) May 8 23:52:48.329948 kernel: Memory: 3982376K/4194160K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39744K init, 897K bss, 211784K reserved, 0K cma-reserved) May 8 23:52:48.329955 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 8 23:52:48.329962 kernel: rcu: Preemptible hierarchical RCU implementation. May 8 23:52:48.329969 kernel: rcu: RCU event tracing is enabled. May 8 23:52:48.329976 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 8 23:52:48.329983 kernel: Trampoline variant of Tasks RCU enabled. May 8 23:52:48.329989 kernel: Tracing variant of Tasks RCU enabled. May 8 23:52:48.329996 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 8 23:52:48.330003 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 8 23:52:48.330011 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 8 23:52:48.330018 kernel: GICv3: 960 SPIs implemented May 8 23:52:48.330024 kernel: GICv3: 0 Extended SPIs implemented May 8 23:52:48.330031 kernel: Root IRQ handler: gic_handle_irq May 8 23:52:48.330037 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 8 23:52:48.330044 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 May 8 23:52:48.330050 kernel: ITS: No ITS available, not enabling LPIs May 8 23:52:48.330057 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 8 23:52:48.330064 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 23:52:48.330070 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 8 23:52:48.330077 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 8 23:52:48.330084 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 8 23:52:48.330092 kernel: Console: colour dummy device 80x25 May 8 23:52:48.330099 kernel: printk: console [tty1] enabled May 8 23:52:48.330106 kernel: ACPI: Core revision 20230628 May 8 23:52:48.330113 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 8 23:52:48.330120 kernel: pid_max: default: 32768 minimum: 301 May 8 23:52:48.330127 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 8 23:52:48.330134 kernel: landlock: Up and running. May 8 23:52:48.330141 kernel: SELinux: Initializing. May 8 23:52:48.330148 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 23:52:48.330157 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 23:52:48.330164 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 8 23:52:48.330171 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 8 23:52:48.330178 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 May 8 23:52:48.330185 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 May 8 23:52:48.330191 kernel: Hyper-V: enabling crash_kexec_post_notifiers May 8 23:52:48.330199 kernel: rcu: Hierarchical SRCU implementation. May 8 23:52:48.330213 kernel: rcu: Max phase no-delay instances is 400. May 8 23:52:48.330220 kernel: Remapping and enabling EFI services. May 8 23:52:48.330227 kernel: smp: Bringing up secondary CPUs ... May 8 23:52:48.330234 kernel: Detected PIPT I-cache on CPU1 May 8 23:52:48.330241 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 May 8 23:52:48.330250 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 23:52:48.330257 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 8 23:52:48.330265 kernel: smp: Brought up 1 node, 2 CPUs May 8 23:52:48.330272 kernel: SMP: Total of 2 processors activated. 
May 8 23:52:48.330279 kernel: CPU features: detected: 32-bit EL0 Support May 8 23:52:48.330288 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence May 8 23:52:48.330295 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 8 23:52:48.330302 kernel: CPU features: detected: CRC32 instructions May 8 23:52:48.330309 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 8 23:52:48.330316 kernel: CPU features: detected: LSE atomic instructions May 8 23:52:48.330323 kernel: CPU features: detected: Privileged Access Never May 8 23:52:48.330331 kernel: CPU: All CPU(s) started at EL1 May 8 23:52:48.330338 kernel: alternatives: applying system-wide alternatives May 8 23:52:48.330345 kernel: devtmpfs: initialized May 8 23:52:48.330354 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 23:52:48.330361 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 8 23:52:48.330369 kernel: pinctrl core: initialized pinctrl subsystem May 8 23:52:48.330376 kernel: SMBIOS 3.1.0 present. May 8 23:52:48.330383 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 May 8 23:52:48.330390 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 8 23:52:48.330398 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 8 23:52:48.330405 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 8 23:52:48.330414 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 8 23:52:48.330422 kernel: audit: initializing netlink subsys (disabled) May 8 23:52:48.330429 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 May 8 23:52:48.330436 kernel: thermal_sys: Registered thermal governor 'step_wise' May 8 23:52:48.330443 kernel: cpuidle: using governor menu May 8 23:52:48.330451 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
May 8 23:52:48.330458 kernel: ASID allocator initialised with 32768 entries May 8 23:52:48.330465 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 8 23:52:48.330472 kernel: Serial: AMBA PL011 UART driver May 8 23:52:48.330480 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 8 23:52:48.330488 kernel: Modules: 0 pages in range for non-PLT usage May 8 23:52:48.330495 kernel: Modules: 508944 pages in range for PLT usage May 8 23:52:48.330502 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 8 23:52:48.332163 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 8 23:52:48.332182 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 8 23:52:48.332190 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 8 23:52:48.332198 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 8 23:52:48.332205 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 8 23:52:48.332219 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 8 23:52:48.332227 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 8 23:52:48.332234 kernel: ACPI: Added _OSI(Module Device) May 8 23:52:48.332242 kernel: ACPI: Added _OSI(Processor Device) May 8 23:52:48.332250 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 8 23:52:48.332257 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 8 23:52:48.332264 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 8 23:52:48.332272 kernel: ACPI: Interpreter enabled May 8 23:52:48.332279 kernel: ACPI: Using GIC for interrupt routing May 8 23:52:48.332286 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA May 8 23:52:48.332295 kernel: printk: console [ttyAMA0] enabled May 8 23:52:48.332303 kernel: printk: bootconsole [pl11] disabled May 8 23:52:48.332310 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA May 8 23:52:48.332318 kernel: iommu: Default domain type: Translated May 8 23:52:48.332325 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 8 23:52:48.332333 kernel: efivars: Registered efivars operations May 8 23:52:48.332340 kernel: vgaarb: loaded May 8 23:52:48.332347 kernel: clocksource: Switched to clocksource arch_sys_counter May 8 23:52:48.332354 kernel: VFS: Disk quotas dquot_6.6.0 May 8 23:52:48.332364 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 8 23:52:48.332371 kernel: pnp: PnP ACPI init May 8 23:52:48.332378 kernel: pnp: PnP ACPI: found 0 devices May 8 23:52:48.332385 kernel: NET: Registered PF_INET protocol family May 8 23:52:48.332404 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 8 23:52:48.332421 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 8 23:52:48.332429 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 8 23:52:48.332436 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 8 23:52:48.332445 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 8 23:52:48.332453 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 8 23:52:48.332460 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 23:52:48.332467 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 23:52:48.332474 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol 
family May 8 23:52:48.332482 kernel: PCI: CLS 0 bytes, default 64 May 8 23:52:48.332489 kernel: kvm [1]: HYP mode not available May 8 23:52:48.332496 kernel: Initialise system trusted keyrings May 8 23:52:48.332504 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 8 23:52:48.332523 kernel: Key type asymmetric registered May 8 23:52:48.332531 kernel: Asymmetric key parser 'x509' registered May 8 23:52:48.332538 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 8 23:52:48.332545 kernel: io scheduler mq-deadline registered May 8 23:52:48.332552 kernel: io scheduler kyber registered May 8 23:52:48.332559 kernel: io scheduler bfq registered May 8 23:52:48.332566 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 8 23:52:48.332574 kernel: thunder_xcv, ver 1.0 May 8 23:52:48.332581 kernel: thunder_bgx, ver 1.0 May 8 23:52:48.332588 kernel: nicpf, ver 1.0 May 8 23:52:48.332597 kernel: nicvf, ver 1.0 May 8 23:52:48.332747 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 8 23:52:48.332822 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-08T23:52:47 UTC (1746748367) May 8 23:52:48.332832 kernel: efifb: probing for efifb May 8 23:52:48.332840 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k May 8 23:52:48.332848 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 May 8 23:52:48.332855 kernel: efifb: scrolling: redraw May 8 23:52:48.332865 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 8 23:52:48.332872 kernel: Console: switching to colour frame buffer device 128x48 May 8 23:52:48.332879 kernel: fb0: EFI VGA frame buffer device May 8 23:52:48.332886 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... May 8 23:52:48.332894 kernel: hid: raw HID events driver (C) Jiri Kosina May 8 23:52:48.332901 kernel: No ACPI PMU IRQ for CPU0 May 8 23:52:48.332908 kernel: No ACPI PMU IRQ for CPU1 May 8 23:52:48.332915 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available May 8 23:52:48.332922 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 8 23:52:48.332931 kernel: watchdog: Hard watchdog permanently disabled May 8 23:52:48.332938 kernel: NET: Registered PF_INET6 protocol family May 8 23:52:48.332945 kernel: Segment Routing with IPv6 May 8 23:52:48.332953 kernel: In-situ OAM (IOAM) with IPv6 May 8 23:52:48.332960 kernel: NET: Registered PF_PACKET protocol family May 8 23:52:48.332967 kernel: Key type dns_resolver registered May 8 23:52:48.332974 kernel: registered taskstats version 1 May 8 23:52:48.332981 kernel: Loading compiled-in X.509 certificates May 8 23:52:48.332989 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: c12e278d643ef0ddd9117a97de150d7afa727d1b' May 8 23:52:48.332996 kernel: Key type .fscrypt registered May 8 23:52:48.333005 kernel: Key type fscrypt-provisioning registered May 8 23:52:48.333012 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 8 23:52:48.333020 kernel: ima: Allocated hash algorithm: sha1 May 8 23:52:48.333027 kernel: ima: No architecture policies found May 8 23:52:48.333034 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 8 23:52:48.333041 kernel: clk: Disabling unused clocks May 8 23:52:48.333048 kernel: Freeing unused kernel memory: 39744K May 8 23:52:48.333056 kernel: Run /init as init process May 8 23:52:48.333064 kernel: with arguments: May 8 23:52:48.333072 kernel: /init May 8 23:52:48.333079 kernel: with environment: May 8 23:52:48.333086 kernel: HOME=/ May 8 23:52:48.333093 kernel: TERM=linux May 8 23:52:48.333100 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 8 23:52:48.333109 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 23:52:48.333119 systemd[1]: Detected virtualization microsoft. May 8 23:52:48.333128 systemd[1]: Detected architecture arm64. May 8 23:52:48.333136 systemd[1]: Running in initrd. May 8 23:52:48.333144 systemd[1]: No hostname configured, using default hostname. May 8 23:52:48.333151 systemd[1]: Hostname set to . May 8 23:52:48.333159 systemd[1]: Initializing machine ID from random generator. May 8 23:52:48.333167 systemd[1]: Queued start job for default target initrd.target. May 8 23:52:48.333175 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 23:52:48.333182 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 23:52:48.333193 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 8 23:52:48.333200 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 23:52:48.333208 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 8 23:52:48.333216 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 8 23:52:48.333225 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 8 23:52:48.333260 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 8 23:52:48.333272 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 23:52:48.333282 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 23:52:48.333290 systemd[1]: Reached target paths.target - Path Units. May 8 23:52:48.333298 systemd[1]: Reached target slices.target - Slice Units. May 8 23:52:48.333306 systemd[1]: Reached target swap.target - Swaps. May 8 23:52:48.333314 systemd[1]: Reached target timers.target - Timer Units. May 8 23:52:48.333322 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 8 23:52:48.333329 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 23:52:48.333338 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 8 23:52:48.333347 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 8 23:52:48.333355 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
May 8 23:52:48.333363 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 23:52:48.333370 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 23:52:48.333378 systemd[1]: Reached target sockets.target - Socket Units. May 8 23:52:48.333386 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 8 23:52:48.333394 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 23:52:48.333401 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 8 23:52:48.333409 systemd[1]: Starting systemd-fsck-usr.service... May 8 23:52:48.333418 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 23:52:48.333426 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 23:52:48.333458 systemd-journald[218]: Collecting audit messages is disabled. May 8 23:52:48.333478 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 23:52:48.333489 systemd-journald[218]: Journal started May 8 23:52:48.339817 systemd-journald[218]: Runtime Journal (/run/log/journal/cff97c64ccc640759aa0cd20cd7bc12a) is 8.0M, max 78.5M, 70.5M free. May 8 23:52:48.353559 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 8 23:52:48.353599 kernel: Bridge firewalling registered May 8 23:52:48.331372 systemd-modules-load[219]: Inserted module 'overlay' May 8 23:52:48.367843 systemd[1]: Started systemd-journald.service - Journal Service. May 8 23:52:48.356271 systemd-modules-load[219]: Inserted module 'br_netfilter' May 8 23:52:48.379202 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 8 23:52:48.385342 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 23:52:48.392584 systemd[1]: Finished systemd-fsck-usr.service. May 8 23:52:48.402546 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 23:52:48.415947 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:52:48.440379 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 23:52:48.457487 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 23:52:48.475788 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 23:52:48.495268 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 23:52:48.513056 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 23:52:48.529593 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 23:52:48.542807 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 23:52:48.550300 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 23:52:48.579674 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
May 8 23:52:48.598885 dracut-cmdline[252]: dracut-dracut-053 May 8 23:52:48.606624 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c64a0b436b1966f9e1b9e71c914f0e311fc31b586ad91dbeab7146e426399a98 May 8 23:52:48.601757 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 23:52:48.617765 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 23:52:48.677189 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 23:52:48.681249 systemd-resolved[258]: Positive Trust Anchors: May 8 23:52:48.681261 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 23:52:48.681293 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 23:52:48.683474 systemd-resolved[258]: Defaulting to hostname 'linux'. May 8 23:52:48.686325 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 23:52:48.701791 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 23:52:48.882541 kernel: SCSI subsystem initialized May 8 23:52:48.890552 kernel: Loading iSCSI transport class v2.0-870. May 8 23:52:48.900532 kernel: iscsi: registered transport (tcp) May 8 23:52:48.918873 kernel: iscsi: registered transport (qla4xxx) May 8 23:52:48.918922 kernel: QLogic iSCSI HBA Driver May 8 23:52:48.960962 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 8 23:52:48.979806 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 8 23:52:49.011687 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 8 23:52:49.011758 kernel: device-mapper: uevent: version 1.0.3 May 8 23:52:49.018088 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 8 23:52:49.070534 kernel: raid6: neonx8 gen() 15774 MB/s May 8 23:52:49.087522 kernel: raid6: neonx4 gen() 15657 MB/s May 8 23:52:49.107525 kernel: raid6: neonx2 gen() 13236 MB/s May 8 23:52:49.128522 kernel: raid6: neonx1 gen() 10486 MB/s May 8 23:52:49.148524 kernel: raid6: int64x8 gen() 6958 MB/s May 8 23:52:49.168524 kernel: raid6: int64x4 gen() 7350 MB/s May 8 23:52:49.189526 kernel: raid6: int64x2 gen() 6124 MB/s May 8 23:52:49.212773 kernel: raid6: int64x1 gen() 5061 MB/s May 8 23:52:49.212803 kernel: raid6: using algorithm neonx8 gen() 15774 MB/s May 8 23:52:49.236624 kernel: raid6: .... 
xor() 11937 MB/s, rmw enabled May 8 23:52:49.236634 kernel: raid6: using neon recovery algorithm May 8 23:52:49.248749 kernel: xor: measuring software checksum speed May 8 23:52:49.248762 kernel: 8regs : 19802 MB/sec May 8 23:52:49.252083 kernel: 32regs : 19641 MB/sec May 8 23:52:49.255410 kernel: arm64_neon : 27043 MB/sec May 8 23:52:49.259694 kernel: xor: using function: arm64_neon (27043 MB/sec) May 8 23:52:49.310535 kernel: Btrfs loaded, zoned=no, fsverity=no May 8 23:52:49.320905 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 8 23:52:49.337648 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 23:52:49.360375 systemd-udevd[439]: Using default interface naming scheme 'v255'. May 8 23:52:49.365588 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 23:52:49.384640 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 8 23:52:49.407361 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation May 8 23:52:49.433549 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 8 23:52:49.452832 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 23:52:49.493504 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 23:52:49.515771 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 8 23:52:49.537663 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 8 23:52:49.545835 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 8 23:52:49.553627 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 23:52:49.567886 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 23:52:49.598801 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 8 23:52:49.616627 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 8 23:52:49.643323 kernel: hv_vmbus: Vmbus version:5.3 May 8 23:52:49.636423 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 23:52:49.636569 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 23:52:49.655136 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 23:52:49.671483 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 23:52:49.768353 kernel: hv_vmbus: registering driver hid_hyperv May 8 23:52:49.768403 kernel: pps_core: LinuxPPS API ver. 1 registered May 8 23:52:49.768415 kernel: hv_vmbus: registering driver hv_storvsc May 8 23:52:49.768425 kernel: scsi host0: storvsc_host_t May 8 23:52:49.768753 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 May 8 23:52:49.768766 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 May 8 23:52:49.768869 kernel: hv_vmbus: registering driver hyperv_keyboard May 8 23:52:49.768879 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 8 23:52:49.768927 kernel: hv_vmbus: registering driver hv_netvsc May 8 23:52:49.768938 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on May 8 23:52:49.769070 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 May 8 23:52:49.769179 kernel: scsi host1: storvsc_host_t May 8 23:52:49.769361 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 May 8 23:52:49.671669 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:52:49.678245 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 8 23:52:49.781162 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 23:52:49.810672 kernel: PTP clock support registered May 8 23:52:49.804614 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:52:49.828657 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 23:52:49.858794 kernel: hv_utils: Registering HyperV Utility Driver May 8 23:52:49.858815 kernel: hv_vmbus: registering driver hv_utils May 8 23:52:49.858824 kernel: hv_netvsc 002248b5-c6ce-0022-48b5-c6ce002248b5 eth0: VF slot 1 added May 8 23:52:49.859104 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 23:52:50.052224 kernel: hv_utils: Heartbeat IC version 3.0 May 8 23:52:50.052245 kernel: hv_utils: Shutdown IC version 3.2 May 8 23:52:50.052254 kernel: hv_utils: TimeSync IC version 4.0 May 8 23:52:49.859216 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 23:52:50.051704 systemd-resolved[258]: Clock change detected. Flushing caches. May 8 23:52:50.100918 kernel: hv_vmbus: registering driver hv_pci May 8 23:52:50.100948 kernel: sr 0:0:0:2: [sr0] scsi-1 drive May 8 23:52:50.101104 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 8 23:52:50.101114 kernel: hv_pci a545b615-ff41-4dd6-8675-f4d5ed9fcfe6: PCI VMBus probing: Using version 0x10004 May 8 23:52:50.101210 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 May 8 23:52:50.101293 kernel: hv_pci a545b615-ff41-4dd6-8675-f4d5ed9fcfe6: PCI host bridge to bus ff41:00 May 8 23:52:50.062636 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 23:52:50.128810 kernel: pci_bus ff41:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] May 8 23:52:50.128992 kernel: pci_bus ff41:00: No busn resource found for root bus, will use [bus 00-ff] May 8 23:52:50.062691 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:52:50.143089 kernel: pci ff41:00:02.0: [15b3:1018] type 00 class 0x020000 May 8 23:52:50.088566 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
May 8 23:52:50.184644 kernel: pci ff41:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] May 8 23:52:50.184694 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) May 8 23:52:50.184866 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks May 8 23:52:50.184952 kernel: pci ff41:00:02.0: enabling Extended Tags May 8 23:52:50.184969 kernel: sd 0:0:0:0: [sda] Write Protect is off May 8 23:52:50.185054 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 May 8 23:52:50.185137 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA May 8 23:52:50.129712 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 23:52:50.251974 kernel: pci ff41:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ff41:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) May 8 23:52:50.252702 kernel: pci_bus ff41:00: busn_res: [bus 00-ff] end is updated to 00 May 8 23:52:50.252797 kernel: pci ff41:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] May 8 23:52:50.252886 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 23:52:50.252897 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 8 23:52:50.173171 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:52:50.244718 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 23:52:50.305468 kernel: mlx5_core ff41:00:02.0: enabling device (0000 -> 0002) May 8 23:52:50.305706 kernel: mlx5_core ff41:00:02.0: firmware version: 16.30.1284 May 8 23:52:50.297921 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 23:52:50.511834 kernel: hv_netvsc 002248b5-c6ce-0022-48b5-c6ce002248b5 eth0: VF registering: eth1 May 8 23:52:50.512034 kernel: mlx5_core ff41:00:02.0 eth1: joined to eth0 May 8 23:52:50.520565 kernel: mlx5_core ff41:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) May 8 23:52:50.531468 kernel: mlx5_core ff41:00:02.0 enP65345s1: renamed from eth1 May 8 23:52:50.683986 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. May 8 23:52:50.717585 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. May 8 23:52:50.746017 kernel: BTRFS: device fsid 3ce8b70c-40bf-43bf-a983-bb6fd2e43017 devid 1 transid 43 /dev/sda3 scanned by (udev-worker) (485) May 8 23:52:50.746040 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (501) May 8 23:52:50.749105 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. May 8 23:52:50.756551 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. May 8 23:52:50.788503 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. May 8 23:52:50.803652 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 8 23:52:50.832158 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 23:52:51.844476 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 23:52:51.844896 disk-uuid[607]: The operation has completed successfully. May 8 23:52:51.895000 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 23:52:51.895092 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 8 23:52:51.929577 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
May 8 23:52:51.942333 sh[693]: Success May 8 23:52:51.970483 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 8 23:52:52.125995 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 8 23:52:52.135577 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 8 23:52:52.145476 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 8 23:52:52.175960 kernel: BTRFS info (device dm-0): first mount of filesystem 3ce8b70c-40bf-43bf-a983-bb6fd2e43017 May 8 23:52:52.176002 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 8 23:52:52.183497 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 8 23:52:52.188966 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 8 23:52:52.193738 kernel: BTRFS info (device dm-0): using free space tree May 8 23:52:52.415477 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 8 23:52:52.420962 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 8 23:52:52.438712 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 8 23:52:52.446340 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 8 23:52:52.485800 kernel: BTRFS info (device sda6): first mount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 8 23:52:52.485859 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 8 23:52:52.490633 kernel: BTRFS info (device sda6): using free space tree May 8 23:52:52.510203 kernel: BTRFS info (device sda6): auto enabling async discard May 8 23:52:52.517942 systemd[1]: mnt-oem.mount: Deactivated successfully. May 8 23:52:52.531449 kernel: BTRFS info (device sda6): last unmount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 8 23:52:52.538653 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 8 23:52:52.554736 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 8 23:52:52.582212 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 23:52:52.598619 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 23:52:52.631894 systemd-networkd[877]: lo: Link UP May 8 23:52:52.631903 systemd-networkd[877]: lo: Gained carrier May 8 23:52:52.636661 systemd-networkd[877]: Enumeration completed May 8 23:52:52.636864 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 23:52:52.647278 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 23:52:52.647282 systemd-networkd[877]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 23:52:52.648020 systemd[1]: Reached target network.target - Network. May 8 23:52:52.734455 kernel: mlx5_core ff41:00:02.0 enP65345s1: Link up May 8 23:52:52.772698 kernel: hv_netvsc 002248b5-c6ce-0022-48b5-c6ce002248b5 eth0: Data path switched to VF: enP65345s1 May 8 23:52:52.775018 systemd-networkd[877]: enP65345s1: Link UP May 8 23:52:52.775100 systemd-networkd[877]: eth0: Link UP May 8 23:52:52.775200 systemd-networkd[877]: eth0: Gained carrier May 8 23:52:52.775209 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 8 23:52:52.787729 systemd-networkd[877]: enP65345s1: Gained carrier May 8 23:52:52.814477 systemd-networkd[877]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 8 23:52:53.319867 ignition[856]: Ignition 2.20.0 May 8 23:52:53.319879 ignition[856]: Stage: fetch-offline May 8 23:52:53.321713 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 8 23:52:53.319914 ignition[856]: no configs at "/usr/lib/ignition/base.d" May 8 23:52:53.345579 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 8 23:52:53.319922 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 8 23:52:53.320033 ignition[856]: parsed url from cmdline: "" May 8 23:52:53.320037 ignition[856]: no config URL provided May 8 23:52:53.320042 ignition[856]: reading system config file "/usr/lib/ignition/user.ign" May 8 23:52:53.320051 ignition[856]: no config at "/usr/lib/ignition/user.ign" May 8 23:52:53.320056 ignition[856]: failed to fetch config: resource requires networking May 8 23:52:53.320249 ignition[856]: Ignition finished successfully May 8 23:52:53.362014 ignition[886]: Ignition 2.20.0 May 8 23:52:53.362022 ignition[886]: Stage: fetch May 8 23:52:53.362233 ignition[886]: no configs at "/usr/lib/ignition/base.d" May 8 23:52:53.362244 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 8 23:52:53.362346 ignition[886]: parsed url from cmdline: "" May 8 23:52:53.362350 ignition[886]: no config URL provided May 8 23:52:53.362354 ignition[886]: reading system config file "/usr/lib/ignition/user.ign" May 8 23:52:53.362362 ignition[886]: no config at "/usr/lib/ignition/user.ign" May 8 23:52:53.362389 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 May 8 23:52:53.490597 ignition[886]: GET result: OK May 8 23:52:53.490680 ignition[886]: config has been read from IMDS userdata May 8 23:52:53.490719 ignition[886]: parsing config with SHA512: ddbdeebdd143268af7bb392b41d0790d17c330d1c87c82efc5ea26bd9465d6df6613c99516b067f795a003136e71385d26d69fd0669893e099560af7c3b23c29 May 8 23:52:53.495218 unknown[886]: fetched base config from "system" May 8 23:52:53.495650 ignition[886]: fetch: fetch complete May 8 23:52:53.495226 unknown[886]: fetched base config from "system" May 8 23:52:53.495654 ignition[886]: fetch: fetch passed May 8 23:52:53.495230 unknown[886]: fetched user config from "azure" May 8 23:52:53.495696 ignition[886]: Ignition finished successfully May 8 23:52:53.501298 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 8 23:52:53.524591 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 8 23:52:53.552145 ignition[893]: Ignition 2.20.0 May 8 23:52:53.552166 ignition[893]: Stage: kargs May 8 23:52:53.552339 ignition[893]: no configs at "/usr/lib/ignition/base.d" May 8 23:52:53.562836 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 8 23:52:53.552348 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 8 23:52:53.553377 ignition[893]: kargs: kargs passed May 8 23:52:53.553426 ignition[893]: Ignition finished successfully May 8 23:52:53.582591 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 8 23:52:53.602022 ignition[899]: Ignition 2.20.0 May 8 23:52:53.602028 ignition[899]: Stage: disks May 8 23:52:53.607047 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
May 8 23:52:53.602207 ignition[899]: no configs at "/usr/lib/ignition/base.d" May 8 23:52:53.615907 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 8 23:52:53.602217 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 8 23:52:53.628338 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 23:52:53.603312 ignition[899]: disks: disks passed May 8 23:52:53.640310 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 23:52:53.603362 ignition[899]: Ignition finished successfully May 8 23:52:53.653520 systemd[1]: Reached target sysinit.target - System Initialization. May 8 23:52:53.665988 systemd[1]: Reached target basic.target - Basic System. May 8 23:52:53.692637 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 8 23:52:53.761610 systemd-fsck[908]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks May 8 23:52:53.771753 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 8 23:52:53.791683 systemd[1]: Mounting sysroot.mount - /sysroot... May 8 23:52:53.849493 kernel: EXT4-fs (sda9): mounted filesystem ad4e3afa-b242-4ca7-a808-1f37a4d41793 r/w with ordered data mode. Quota mode: none. May 8 23:52:53.849805 systemd[1]: Mounted sysroot.mount - /sysroot. May 8 23:52:53.854908 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 8 23:52:53.892514 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 23:52:53.902736 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 8 23:52:53.910605 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 8 23:52:53.930057 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 23:52:53.955674 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (919) May 8 23:52:53.955700 kernel: BTRFS info (device sda6): first mount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 8 23:52:53.955710 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 8 23:52:53.930094 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 8 23:52:53.989933 kernel: BTRFS info (device sda6): using free space tree May 8 23:52:53.989956 kernel: BTRFS info (device sda6): auto enabling async discard May 8 23:52:53.974015 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 8 23:52:53.985559 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 23:52:54.006688 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
May 8 23:52:54.065588 systemd-networkd[877]: enP65345s1: Gained IPv6LL May 8 23:52:54.384117 coreos-metadata[921]: May 08 23:52:54.384 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 8 23:52:54.391919 systemd-networkd[877]: eth0: Gained IPv6LL May 8 23:52:54.397101 coreos-metadata[921]: May 08 23:52:54.396 INFO Fetch successful May 8 23:52:54.397101 coreos-metadata[921]: May 08 23:52:54.396 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 May 8 23:52:54.414098 coreos-metadata[921]: May 08 23:52:54.413 INFO Fetch successful May 8 23:52:54.424757 coreos-metadata[921]: May 08 23:52:54.424 INFO wrote hostname ci-4152.2.3-n-71d56f534c to /sysroot/etc/hostname May 8 23:52:54.433847 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 8 23:52:54.637963 initrd-setup-root[951]: cut: /sysroot/etc/passwd: No such file or directory May 8 23:52:54.658579 initrd-setup-root[958]: cut: /sysroot/etc/group: No such file or directory May 8 23:52:54.668203 initrd-setup-root[965]: cut: /sysroot/etc/shadow: No such file or directory May 8 23:52:54.676525 initrd-setup-root[972]: cut: /sysroot/etc/gshadow: No such file or directory May 8 23:52:55.343528 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 8 23:52:55.357615 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 8 23:52:55.365626 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 8 23:52:55.387566 kernel: BTRFS info (device sda6): last unmount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 8 23:52:55.387179 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 8 23:52:55.414481 ignition[1039]: INFO : Ignition 2.20.0 May 8 23:52:55.414481 ignition[1039]: INFO : Stage: mount May 8 23:52:55.414481 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 23:52:55.414481 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 8 23:52:55.416623 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 8 23:52:55.460530 ignition[1039]: INFO : mount: mount passed May 8 23:52:55.460530 ignition[1039]: INFO : Ignition finished successfully May 8 23:52:55.424246 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 8 23:52:55.449669 systemd[1]: Starting ignition-files.service - Ignition (files)... May 8 23:52:55.468677 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 23:52:55.503459 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1052) May 8 23:52:55.516156 kernel: BTRFS info (device sda6): first mount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 8 23:52:55.516195 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 8 23:52:55.520439 kernel: BTRFS info (device sda6): using free space tree May 8 23:52:55.527469 kernel: BTRFS info (device sda6): auto enabling async discard May 8 23:52:55.528519 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
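The coreos-metadata entries above show flatcar-metadata-hostname.service resolving the instance name and writing it into the new root. A small sketch of that flow, under the same header assumption as the earlier sketch; the URL and destination path are taken from the log, but this is not the agent's actual implementation.

```python
import urllib.request

# Endpoint and destination copied from the coreos-metadata[921] entries above.
NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
            "?api-version=2017-08-01&format=text")
HOSTNAME_FILE = "/sysroot/etc/hostname"

def write_hostname() -> str:
    # Assumption: the same "Metadata: true" header applies; the log only shows the URL.
    request = urllib.request.Request(NAME_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(request, timeout=10) as response:
        name = response.read().decode().strip()
    with open(HOSTNAME_FILE, "w") as f:
        f.write(name + "\n")  # e.g. ci-4152.2.3-n-71d56f534c, as logged above
    return name
```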
May 8 23:52:55.557089 ignition[1069]: INFO : Ignition 2.20.0 May 8 23:52:55.557089 ignition[1069]: INFO : Stage: files May 8 23:52:55.564815 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 23:52:55.564815 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 8 23:52:55.564815 ignition[1069]: DEBUG : files: compiled without relabeling support, skipping May 8 23:52:55.582657 ignition[1069]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 23:52:55.582657 ignition[1069]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 23:52:55.598759 ignition[1069]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 23:52:55.606557 ignition[1069]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 23:52:55.613978 ignition[1069]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 23:52:55.611865 unknown[1069]: wrote ssh authorized keys file for user: core May 8 23:52:55.627412 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 8 23:52:55.627412 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 8 23:52:55.736884 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 8 23:52:55.950883 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 8 23:52:55.962180 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 8 23:52:55.962180 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 8 23:52:56.480737 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 8 23:52:56.684761 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 8 23:52:56.684761 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 23:52:56.714983 
ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 8 23:52:56.714983 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 8 23:52:57.148026 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 8 23:52:57.896545 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 8 23:52:57.896545 ignition[1069]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 8 23:52:57.915781 ignition[1069]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 23:52:57.915781 ignition[1069]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 23:52:57.915781 ignition[1069]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 8 23:52:57.915781 ignition[1069]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" May 8 23:52:57.915781 ignition[1069]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" May 8 23:52:57.915781 ignition[1069]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 23:52:57.915781 ignition[1069]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 23:52:57.915781 ignition[1069]: INFO : files: files passed May 8 23:52:57.915781 ignition[1069]: INFO : Ignition finished successfully May 8 23:52:57.921753 systemd[1]: Finished ignition-files.service - Ignition (files). May 8 23:52:57.960754 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 8 23:52:57.976618 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 8 23:52:58.055052 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 23:52:58.055052 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 8 23:52:58.001346 systemd[1]: ignition-quench.service: Deactivated successfully. 
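The files stage logged above downloads the kubernetes sysext image from the sysext-bakery release URL (logging each GET attempt) and writes a link for it under /etc/extensions in the new root. A rough Python sketch of those two operations, using the URL, paths, and link target exactly as logged; the retry policy and directory creation are illustrative assumptions, not Ignition's actual behavior.

```python
import os
import time
import urllib.request

# URL and paths copied from the ignition[1069] entries above.
SYSEXT_URL = ("https://github.com/flatcar/sysext-bakery/releases/download/latest/"
              "kubernetes-v1.30.1-arm64.raw")
IMAGE_PATH = "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
LINK_PATH = "/sysroot/etc/extensions/kubernetes.raw"

def fetch_with_retry(url: str, dest: str, attempts: int = 3) -> None:
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    for attempt in range(1, attempts + 1):
        try:
            print(f"GET {url}: attempt #{attempt}")
            urllib.request.urlretrieve(url, dest)
            return
        except OSError:
            if attempt == attempts:
                raise
            time.sleep(2 ** attempt)  # simple backoff; the real retry policy may differ

def link_extension() -> None:
    os.makedirs(os.path.dirname(LINK_PATH), exist_ok=True)
    # The link target is the path inside the final root, exactly as written in the log.
    os.symlink("/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw", LINK_PATH)

if __name__ == "__main__":
    fetch_with_retry(SYSEXT_URL, IMAGE_PATH)
    link_extension()
```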
May 8 23:52:58.086673 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 23:52:58.001499 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 8 23:52:58.026477 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 23:52:58.034952 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 8 23:52:58.064720 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 8 23:52:58.104316 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 23:52:58.104472 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 23:52:58.116248 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 23:52:58.127453 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 23:52:58.139883 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 23:52:58.152198 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 23:52:58.176473 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 23:52:58.201811 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 23:52:58.220339 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 23:52:58.227387 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 23:52:58.239979 systemd[1]: Stopped target timers.target - Timer Units. May 8 23:52:58.250953 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 23:52:58.251077 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 23:52:58.267419 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 23:52:58.273302 systemd[1]: Stopped target basic.target - Basic System. May 8 23:52:58.284762 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 23:52:58.295902 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 23:52:58.306800 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 23:52:58.318395 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 23:52:58.329886 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 23:52:58.342258 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 23:52:58.353770 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 23:52:58.367264 systemd[1]: Stopped target swap.target - Swaps. May 8 23:52:58.377940 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 23:52:58.378078 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 23:52:58.394516 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 23:52:58.401277 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 23:52:58.413890 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 23:52:58.413961 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 23:52:58.427280 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 23:52:58.427408 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
May 8 23:52:58.446631 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 23:52:58.446754 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 23:52:58.520349 ignition[1121]: INFO : Ignition 2.20.0 May 8 23:52:58.520349 ignition[1121]: INFO : Stage: umount May 8 23:52:58.520349 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 23:52:58.520349 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 8 23:52:58.520349 ignition[1121]: INFO : umount: umount passed May 8 23:52:58.520349 ignition[1121]: INFO : Ignition finished successfully May 8 23:52:58.454626 systemd[1]: ignition-files.service: Deactivated successfully. May 8 23:52:58.454722 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 23:52:58.465612 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 8 23:52:58.465710 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 8 23:52:58.486769 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 23:52:58.504342 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 23:52:58.504549 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 23:52:58.527598 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 23:52:58.536649 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 23:52:58.536871 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 23:52:58.555848 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 23:52:58.556011 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 23:52:58.581247 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 23:52:58.582036 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 23:52:58.582140 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 23:52:58.591166 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 23:52:58.591267 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 23:52:58.600159 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 23:52:58.600221 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 23:52:58.610654 systemd[1]: ignition-fetch.service: Deactivated successfully. May 8 23:52:58.610701 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 8 23:52:58.621855 systemd[1]: Stopped target network.target - Network. May 8 23:52:58.632532 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 23:52:58.632603 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 23:52:58.645806 systemd[1]: Stopped target paths.target - Path Units. May 8 23:52:58.656629 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 23:52:58.661479 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 23:52:58.671131 systemd[1]: Stopped target slices.target - Slice Units. May 8 23:52:58.682387 systemd[1]: Stopped target sockets.target - Socket Units. May 8 23:52:58.692705 systemd[1]: iscsid.socket: Deactivated successfully. May 8 23:52:58.692759 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 23:52:58.705652 systemd[1]: iscsiuio.socket: Deactivated successfully. 
May 8 23:52:58.705690 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 23:52:58.718619 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 23:52:58.718678 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 23:52:58.729700 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 23:52:58.729750 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 23:52:59.007643 kernel: hv_netvsc 002248b5-c6ce-0022-48b5-c6ce002248b5 eth0: Data path switched from VF: enP65345s1 May 8 23:52:58.742251 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 23:52:58.752739 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 23:52:58.764465 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 23:52:58.764562 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 23:52:58.776494 systemd-networkd[877]: eth0: DHCPv6 lease lost May 8 23:52:58.783852 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 23:52:58.784036 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 23:52:58.794999 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 23:52:58.795142 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 23:52:58.809184 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 23:52:58.809238 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 23:52:58.836690 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 23:52:58.847466 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 23:52:58.847547 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 23:52:58.861268 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 23:52:58.861331 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 23:52:58.872889 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 23:52:58.872958 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 23:52:58.887743 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 23:52:58.887807 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 23:52:58.904606 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 23:52:58.938917 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 23:52:58.940275 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 23:52:58.952023 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 23:52:58.952111 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 23:52:58.963660 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 23:52:58.963699 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 23:52:58.975189 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 23:52:58.975245 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 8 23:52:59.001857 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 23:52:59.001926 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 23:52:59.020345 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 8 23:52:59.020413 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 23:52:59.308962 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). May 8 23:52:59.065800 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 23:52:59.085806 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 23:52:59.085893 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 23:52:59.097921 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 23:52:59.097979 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:52:59.112256 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 23:52:59.112378 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 23:52:59.123103 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 23:52:59.124464 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 23:52:59.146060 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 23:52:59.146175 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 23:52:59.158104 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 23:52:59.158244 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 23:52:59.169351 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 23:52:59.204734 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 23:52:59.228688 systemd[1]: Switching root. May 8 23:52:59.407655 systemd-journald[218]: Journal stopped May 8 23:53:03.178667 kernel: SELinux: policy capability network_peer_controls=1 May 8 23:53:03.178693 kernel: SELinux: policy capability open_perms=1 May 8 23:53:03.178703 kernel: SELinux: policy capability extended_socket_class=1 May 8 23:53:03.178711 kernel: SELinux: policy capability always_check_network=0 May 8 23:53:03.178721 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 23:53:03.178729 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 23:53:03.178738 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 23:53:03.178746 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 23:53:03.178754 kernel: audit: type=1403 audit(1746748380.400:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 23:53:03.178764 systemd[1]: Successfully loaded SELinux policy in 134.184ms. May 8 23:53:03.178776 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.467ms. May 8 23:53:03.178786 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 23:53:03.178796 systemd[1]: Detected virtualization microsoft. May 8 23:53:03.178805 systemd[1]: Detected architecture arm64. May 8 23:53:03.178814 systemd[1]: Detected first boot. May 8 23:53:03.178825 systemd[1]: Hostname set to <ci-4152.2.3-n-71d56f534c>. May 8 23:53:03.178834 systemd[1]: Initializing machine ID from random generator. May 8 23:53:03.178843 zram_generator::config[1164]: No configuration found. May 8 23:53:03.178852 systemd[1]: Populated /etc with preset unit settings.
May 8 23:53:03.178861 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 8 23:53:03.178870 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 8 23:53:03.178878 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 23:53:03.178890 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 8 23:53:03.178899 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 8 23:53:03.178908 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 8 23:53:03.178917 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 8 23:53:03.178926 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 8 23:53:03.178935 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 8 23:53:03.178944 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 8 23:53:03.178955 systemd[1]: Created slice user.slice - User and Session Slice. May 8 23:53:03.178964 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 23:53:03.178973 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 23:53:03.178982 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 8 23:53:03.178992 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 8 23:53:03.179001 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 8 23:53:03.179011 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 23:53:03.179020 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 8 23:53:03.179030 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 23:53:03.179039 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 8 23:53:03.179048 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 8 23:53:03.179060 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 8 23:53:03.179070 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 8 23:53:03.179079 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 23:53:03.179088 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 23:53:03.179097 systemd[1]: Reached target slices.target - Slice Units. May 8 23:53:03.179108 systemd[1]: Reached target swap.target - Swaps. May 8 23:53:03.179117 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 8 23:53:03.179127 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 8 23:53:03.179136 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 23:53:03.179145 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 23:53:03.179155 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 23:53:03.179166 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 8 23:53:03.179175 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 8 23:53:03.179185 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
May 8 23:53:03.179195 systemd[1]: Mounting media.mount - External Media Directory... May 8 23:53:03.179204 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 8 23:53:03.179214 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 8 23:53:03.179223 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 8 23:53:03.179234 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 23:53:03.179244 systemd[1]: Reached target machines.target - Containers. May 8 23:53:03.179253 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 8 23:53:03.179263 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 23:53:03.179272 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 23:53:03.179282 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 8 23:53:03.179291 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 23:53:03.179301 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 23:53:03.179312 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 23:53:03.179321 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 8 23:53:03.179331 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 23:53:03.179340 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 23:53:03.179350 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 23:53:03.179359 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 8 23:53:03.179368 kernel: fuse: init (API version 7.39) May 8 23:53:03.179377 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 8 23:53:03.179388 systemd[1]: Stopped systemd-fsck-usr.service. May 8 23:53:03.179398 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 23:53:03.179408 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 23:53:03.179417 kernel: loop: module loaded May 8 23:53:03.179426 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 8 23:53:03.179454 kernel: ACPI: bus type drm_connector registered May 8 23:53:03.179465 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 8 23:53:03.179491 systemd-journald[1267]: Collecting audit messages is disabled. May 8 23:53:03.179513 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 23:53:03.179524 systemd-journald[1267]: Journal started May 8 23:53:03.179543 systemd-journald[1267]: Runtime Journal (/run/log/journal/3e705e7df2614cd085b93f306818ba2e) is 8.0M, max 78.5M, 70.5M free. May 8 23:53:02.097175 systemd[1]: Queued start job for default target multi-user.target. May 8 23:53:02.187476 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 8 23:53:02.187853 systemd[1]: systemd-journald.service: Deactivated successfully. May 8 23:53:02.188160 systemd[1]: systemd-journald.service: Consumed 3.494s CPU time. 
May 8 23:53:03.205076 systemd[1]: verity-setup.service: Deactivated successfully. May 8 23:53:03.205126 systemd[1]: Stopped verity-setup.service. May 8 23:53:03.225315 systemd[1]: Started systemd-journald.service - Journal Service. May 8 23:53:03.226181 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 8 23:53:03.232411 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 8 23:53:03.239211 systemd[1]: Mounted media.mount - External Media Directory. May 8 23:53:03.254801 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 8 23:53:03.273954 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 8 23:53:03.280990 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 8 23:53:03.287270 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 8 23:53:03.297103 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 23:53:03.305738 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 23:53:03.305875 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 8 23:53:03.314134 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 23:53:03.314273 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 23:53:03.322641 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 23:53:03.322772 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 23:53:03.330486 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 23:53:03.330622 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 23:53:03.339748 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 23:53:03.339882 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 8 23:53:03.347571 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 23:53:03.348607 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 23:53:03.356758 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 23:53:03.363944 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 8 23:53:03.372391 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 8 23:53:03.381297 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 23:53:03.398855 systemd[1]: Reached target network-pre.target - Preparation for Network. May 8 23:53:03.417534 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 8 23:53:03.427423 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 8 23:53:03.435492 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 23:53:03.435599 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 23:53:03.443732 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 8 23:53:03.461605 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 8 23:53:03.470533 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 8 23:53:03.477322 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 8 23:53:03.479018 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 8 23:53:03.487955 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 8 23:53:03.497363 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 23:53:03.498706 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 8 23:53:03.505846 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 23:53:03.508017 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 23:53:03.518798 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 8 23:53:03.529802 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 8 23:53:03.545658 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 8 23:53:03.561395 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 8 23:53:03.563332 systemd-journald[1267]: Time spent on flushing to /var/log/journal/3e705e7df2614cd085b93f306818ba2e is 13.705ms for 906 entries. May 8 23:53:03.563332 systemd-journald[1267]: System Journal (/var/log/journal/3e705e7df2614cd085b93f306818ba2e) is 8.0M, max 2.6G, 2.6G free. May 8 23:53:03.609582 systemd-journald[1267]: Received client request to flush runtime journal. May 8 23:53:03.609617 kernel: loop0: detected capacity change from 0 to 194096 May 8 23:53:03.583967 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 8 23:53:03.597199 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 8 23:53:03.622933 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 8 23:53:03.632481 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 8 23:53:03.647021 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 8 23:53:03.660868 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 8 23:53:03.670367 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 23:53:03.682833 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 23:53:03.685234 udevadm[1302]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 8 23:53:03.717238 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 23:53:03.719324 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 8 23:53:03.734171 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 8 23:53:03.736459 kernel: loop1: detected capacity change from 0 to 28720 May 8 23:53:03.749719 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 23:53:03.825112 systemd-tmpfiles[1317]: ACLs are not supported, ignoring. May 8 23:53:03.825132 systemd-tmpfiles[1317]: ACLs are not supported, ignoring. May 8 23:53:03.829430 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
May 8 23:53:04.031477 kernel: loop2: detected capacity change from 0 to 113536 May 8 23:53:04.314473 kernel: loop3: detected capacity change from 0 to 116808 May 8 23:53:04.550525 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 8 23:53:04.560969 kernel: loop4: detected capacity change from 0 to 194096 May 8 23:53:04.572072 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 23:53:04.577460 kernel: loop5: detected capacity change from 0 to 28720 May 8 23:53:04.590476 kernel: loop6: detected capacity change from 0 to 113536 May 8 23:53:04.593686 systemd-udevd[1325]: Using default interface naming scheme 'v255'. May 8 23:53:04.602471 kernel: loop7: detected capacity change from 0 to 116808 May 8 23:53:04.606774 (sd-merge)[1323]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. May 8 23:53:04.607229 (sd-merge)[1323]: Merged extensions into '/usr'. May 8 23:53:04.610257 systemd[1]: Reloading requested from client PID 1298 ('systemd-sysext') (unit systemd-sysext.service)... May 8 23:53:04.610411 systemd[1]: Reloading... May 8 23:53:04.736383 zram_generator::config[1360]: No configuration found. May 8 23:53:04.877504 kernel: mousedev: PS/2 mouse device common for all mice May 8 23:53:04.924300 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 23:53:04.937240 kernel: hv_vmbus: registering driver hv_balloon May 8 23:53:04.937360 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 May 8 23:53:04.944753 kernel: hv_balloon: Memory hot add disabled on ARM64 May 8 23:53:04.982514 kernel: hv_vmbus: registering driver hyperv_fb May 8 23:53:04.997095 kernel: hyperv_fb: Synthvid Version major 3, minor 5 May 8 23:53:04.997446 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 May 8 23:53:05.002760 kernel: Console: switching to colour dummy device 80x25 May 8 23:53:05.014468 kernel: Console: switching to colour frame buffer device 128x48 May 8 23:53:05.028486 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1366) May 8 23:53:05.036131 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 8 23:53:05.036916 systemd[1]: Reloading finished in 426 ms. May 8 23:53:05.061164 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 23:53:05.078206 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 8 23:53:05.139215 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. May 8 23:53:05.163069 systemd[1]: Starting ensure-sysext.service... May 8 23:53:05.169919 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 8 23:53:05.189271 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 23:53:05.197076 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 23:53:05.207762 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 23:53:05.218091 systemd[1]: Reloading requested from client PID 1503 ('systemctl') (unit ensure-sysext.service)... May 8 23:53:05.218104 systemd[1]: Reloading... 
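The (sd-merge) entries above show systemd-sysext discovering the containerd-flatcar, docker-flatcar, kubernetes, and oem-azure images and merging them into /usr. The sketch below only reproduces the discovery step and mounts nothing; the search directories are an assumption based on the conventional sysext locations, since the log names the extensions but not where they were found.

```python
import os

# Assumption: the usual systemd-sysext search directories; not stated in the log itself.
SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

def discovered_extensions() -> list:
    names = []
    for directory in SEARCH_DIRS:
        if not os.path.isdir(directory):
            continue
        for entry in sorted(os.listdir(directory)):
            # Extension images are *.raw files or plain directories; strip the suffix
            # to get the name sd-merge reports (e.g. 'kubernetes' for kubernetes.raw).
            names.append(entry[:-4] if entry.endswith(".raw") else entry)
    return names

if __name__ == "__main__":
    exts = discovered_extensions()
    print("Using extensions", ", ".join(f"'{e}'" for e in exts) or "(none)")
```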
May 8 23:53:05.255913 systemd-tmpfiles[1507]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 23:53:05.256186 systemd-tmpfiles[1507]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 8 23:53:05.256862 systemd-tmpfiles[1507]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 23:53:05.257075 systemd-tmpfiles[1507]: ACLs are not supported, ignoring. May 8 23:53:05.257119 systemd-tmpfiles[1507]: ACLs are not supported, ignoring. May 8 23:53:05.289857 systemd-tmpfiles[1507]: Detected autofs mount point /boot during canonicalization of boot. May 8 23:53:05.290012 systemd-tmpfiles[1507]: Skipping /boot May 8 23:53:05.298296 zram_generator::config[1543]: No configuration found. May 8 23:53:05.298416 systemd-tmpfiles[1507]: Detected autofs mount point /boot during canonicalization of boot. May 8 23:53:05.298527 systemd-tmpfiles[1507]: Skipping /boot May 8 23:53:05.402992 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 23:53:05.481294 systemd[1]: Reloading finished in 262 ms. May 8 23:53:05.499493 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 8 23:53:05.516504 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 8 23:53:05.524824 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 23:53:05.532898 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:53:05.557764 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 23:53:05.564753 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 8 23:53:05.572734 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 8 23:53:05.581354 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 8 23:53:05.590731 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 23:53:05.605863 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 8 23:53:05.615733 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 8 23:53:05.626183 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 23:53:05.632761 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 23:53:05.642426 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 23:53:05.647454 lvm[1605]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 23:53:05.657885 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 23:53:05.664916 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 23:53:05.667363 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 23:53:05.668571 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 23:53:05.684164 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
May 8 23:53:05.694416 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 8 23:53:05.706990 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 8 23:53:05.720576 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 8 23:53:05.734494 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 23:53:05.735218 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 23:53:05.745404 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 23:53:05.745622 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 23:53:05.758923 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 23:53:05.766325 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 23:53:05.771885 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 8 23:53:05.785068 lvm[1640]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 23:53:05.799871 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 23:53:05.805685 augenrules[1645]: No rules May 8 23:53:05.810716 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 23:53:05.810864 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 23:53:05.811626 systemd[1]: audit-rules.service: Deactivated successfully. May 8 23:53:05.811860 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 23:53:05.830901 systemd-networkd[1506]: lo: Link UP May 8 23:53:05.831857 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 23:53:05.833484 systemd-networkd[1506]: lo: Gained carrier May 8 23:53:05.837670 systemd-networkd[1506]: Enumeration completed May 8 23:53:05.839023 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 23:53:05.839421 systemd-networkd[1506]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 23:53:05.840223 systemd-networkd[1506]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 23:53:05.842358 systemd-resolved[1607]: Positive Trust Anchors: May 8 23:53:05.842945 systemd-resolved[1607]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 23:53:05.842978 systemd-resolved[1607]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 23:53:05.845229 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 23:53:05.855763 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
May 8 23:53:05.859779 systemd-resolved[1607]: Using system hostname 'ci-4152.2.3-n-71d56f534c'. May 8 23:53:05.866941 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 23:53:05.876306 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 23:53:05.876842 systemd[1]: Reached target time-set.target - System Time Set. May 8 23:53:05.885324 augenrules[1652]: /sbin/augenrules: No change May 8 23:53:05.886489 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 23:53:05.899914 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 23:53:05.904924 augenrules[1672]: No rules May 8 23:53:05.907042 kernel: mlx5_core ff41:00:02.0 enP65345s1: Link up May 8 23:53:05.907904 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 23:53:05.908049 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 23:53:05.915578 systemd[1]: audit-rules.service: Deactivated successfully. May 8 23:53:05.915773 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 23:53:05.923029 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 23:53:05.923168 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 23:53:05.937995 kernel: hv_netvsc 002248b5-c6ce-0022-48b5-c6ce002248b5 eth0: Data path switched to VF: enP65345s1 May 8 23:53:05.937736 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 23:53:05.937870 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 23:53:05.938769 systemd-networkd[1506]: enP65345s1: Link UP May 8 23:53:05.938862 systemd-networkd[1506]: eth0: Link UP May 8 23:53:05.938865 systemd-networkd[1506]: eth0: Gained carrier May 8 23:53:05.938879 systemd-networkd[1506]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 23:53:05.945691 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 23:53:05.945868 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 23:53:05.953123 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 23:53:05.960578 systemd-networkd[1506]: enP65345s1: Gained carrier May 8 23:53:05.965537 systemd-networkd[1506]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 8 23:53:05.965626 systemd[1]: Reached target network.target - Network. May 8 23:53:05.970798 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 23:53:05.982828 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 8 23:53:05.990006 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 23:53:05.990144 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 23:53:05.990731 systemd[1]: Finished ensure-sysext.service. May 8 23:53:06.154607 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 8 23:53:06.162463 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
May 8 23:53:07.697880 systemd-networkd[1506]: enP65345s1: Gained IPv6LL May 8 23:53:07.825535 systemd-networkd[1506]: eth0: Gained IPv6LL May 8 23:53:07.830424 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 23:53:07.838603 systemd[1]: Reached target network-online.target - Network is Online. May 8 23:53:07.938556 ldconfig[1293]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 23:53:07.948934 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 8 23:53:07.962642 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 8 23:53:07.977514 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 8 23:53:07.984872 systemd[1]: Reached target sysinit.target - System Initialization. May 8 23:53:07.991230 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 8 23:53:07.998720 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 8 23:53:08.006235 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 8 23:53:08.012259 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 23:53:08.020053 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 8 23:53:08.027544 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 23:53:08.027578 systemd[1]: Reached target paths.target - Path Units. May 8 23:53:08.032928 systemd[1]: Reached target timers.target - Timer Units. May 8 23:53:08.038967 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 8 23:53:08.047077 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 23:53:08.058189 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 8 23:53:08.065862 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 23:53:08.072267 systemd[1]: Reached target sockets.target - Socket Units. May 8 23:53:08.077948 systemd[1]: Reached target basic.target - Basic System. May 8 23:53:08.083293 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 8 23:53:08.083323 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 23:53:08.090537 systemd[1]: Starting chronyd.service - NTP client/server... May 8 23:53:08.099586 systemd[1]: Starting containerd.service - containerd container runtime... May 8 23:53:08.110679 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 8 23:53:08.119633 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 8 23:53:08.129121 (chronyd)[1692]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS May 8 23:53:08.130464 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 23:53:08.138307 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 23:53:08.144602 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
May 8 23:53:08.144763 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). May 8 23:53:08.146321 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. May 8 23:53:08.158099 jq[1699]: false May 8 23:53:08.157126 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). May 8 23:53:08.159048 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:53:08.168128 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 8 23:53:08.171476 chronyd[1706]: chronyd version 4.6 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) May 8 23:53:08.175845 KVP[1701]: KVP starting; pid is:1701 May 8 23:53:08.184029 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 23:53:08.193565 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 8 23:53:08.199115 chronyd[1706]: Timezone right/UTC failed leap second check, ignoring May 8 23:53:08.199497 chronyd[1706]: Loaded seccomp filter (level 2) May 8 23:53:08.199793 KVP[1701]: KVP LIC Version: 3.1 May 8 23:53:08.203848 kernel: hv_utils: KVP IC version 4.0 May 8 23:53:08.206748 extend-filesystems[1700]: Found loop4 May 8 23:53:08.206748 extend-filesystems[1700]: Found loop5 May 8 23:53:08.206748 extend-filesystems[1700]: Found loop6 May 8 23:53:08.206748 extend-filesystems[1700]: Found loop7 May 8 23:53:08.206748 extend-filesystems[1700]: Found sda May 8 23:53:08.206748 extend-filesystems[1700]: Found sda1 May 8 23:53:08.206748 extend-filesystems[1700]: Found sda2 May 8 23:53:08.206748 extend-filesystems[1700]: Found sda3 May 8 23:53:08.206748 extend-filesystems[1700]: Found usr May 8 23:53:08.206748 extend-filesystems[1700]: Found sda4 May 8 23:53:08.206748 extend-filesystems[1700]: Found sda6 May 8 23:53:08.206748 extend-filesystems[1700]: Found sda7 May 8 23:53:08.206748 extend-filesystems[1700]: Found sda9 May 8 23:53:08.206748 extend-filesystems[1700]: Checking size of /dev/sda9 May 8 23:53:08.438640 extend-filesystems[1700]: Old size kept for /dev/sda9 May 8 23:53:08.438640 extend-filesystems[1700]: Found sr0 May 8 23:53:08.517582 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1746) May 8 23:53:08.211183 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
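The extend-filesystems step above simply enumerates the block devices it can see (loop4-7, the sda partitions, sr0) before deciding whether anything needs growing; for /dev/sda9 it later keeps the old size. A minimal sketch of that kind of enumeration, assuming only the standard /sys/class/block interface (the filtering and output format here are illustrative, not taken from the Flatcar unit itself):

import os

# List every block device the kernel currently exposes, roughly what the
# "Found loopN / sdaN / sr0" lines above correspond to.
for name in sorted(os.listdir("/sys/class/block")):
    size_path = os.path.join("/sys/class/block", name, "size")
    try:
        with open(size_path) as f:
            sectors = int(f.read().strip())
    except OSError:
        continue
    # /sys reports sizes in 512-byte sectors regardless of the device's
    # physical sector size.
    print(f"{name}: {sectors * 512 / 1e9:.2f} GB")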
May 8 23:53:08.517783 coreos-metadata[1694]: May 08 23:53:08.367 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 8 23:53:08.517783 coreos-metadata[1694]: May 08 23:53:08.378 INFO Fetch successful May 8 23:53:08.517783 coreos-metadata[1694]: May 08 23:53:08.378 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 May 8 23:53:08.517783 coreos-metadata[1694]: May 08 23:53:08.384 INFO Fetch successful May 8 23:53:08.517783 coreos-metadata[1694]: May 08 23:53:08.385 INFO Fetching http://168.63.129.16/machine/eca9b78a-45bd-4d4d-84ed-d8ca36e35984/6ffa4829%2D03c8%2D4b57%2Dbef5%2D10dc826c0953.%5Fci%2D4152.2.3%2Dn%2D71d56f534c?comp=config&type=sharedConfig&incarnation=1: Attempt #1 May 8 23:53:08.517783 coreos-metadata[1694]: May 08 23:53:08.387 INFO Fetch successful May 8 23:53:08.517783 coreos-metadata[1694]: May 08 23:53:08.387 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 May 8 23:53:08.517783 coreos-metadata[1694]: May 08 23:53:08.401 INFO Fetch successful May 8 23:53:08.268174 dbus-daemon[1695]: [system] SELinux support is enabled May 8 23:53:08.240691 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 23:53:08.258308 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 23:53:08.278248 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 23:53:08.278782 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 23:53:08.285864 systemd[1]: Starting update-engine.service - Update Engine... May 8 23:53:08.539514 update_engine[1729]: I20250508 23:53:08.360423 1729 main.cc:92] Flatcar Update Engine starting May 8 23:53:08.539514 update_engine[1729]: I20250508 23:53:08.376614 1729 update_check_scheduler.cc:74] Next update check in 4m19s May 8 23:53:08.319680 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 23:53:08.558576 jq[1732]: true May 8 23:53:08.352380 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 23:53:08.367482 systemd[1]: Started chronyd.service - NTP client/server. May 8 23:53:08.384095 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 23:53:08.384289 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 23:53:08.384591 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 23:53:08.384727 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 23:53:08.407804 systemd[1]: motdgen.service: Deactivated successfully. May 8 23:53:08.407978 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 23:53:08.450918 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 23:53:08.455019 systemd-logind[1722]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) May 8 23:53:08.476911 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 23:53:08.478578 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 8 23:53:08.499384 systemd-logind[1722]: New seat seat0. May 8 23:53:08.509218 systemd[1]: Started systemd-logind.service - User Login Management. 
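coreos-metadata above talks to both the WireServer (168.63.129.16) and the Azure Instance Metadata Service (169.254.169.254). The IMDS call it logs can be reproduced directly; a minimal sketch, assuming the documented IMDS requirement that requests carry a "Metadata: true" header (the header and the timeout are assumptions, the URL is the one logged above):

import urllib.request

# Same endpoint coreos-metadata fetched above for the VM size.
URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read().decode())  # plain-text SKU name of this instance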
May 8 23:53:08.577462 jq[1777]: true May 8 23:53:08.587473 dbus-daemon[1695]: [system] Successfully activated service 'org.freedesktop.systemd1' May 8 23:53:08.586732 (ntainerd)[1785]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 23:53:08.586829 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 23:53:08.586876 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 8 23:53:08.599664 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 23:53:08.599693 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 8 23:53:08.614699 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 8 23:53:08.628080 systemd[1]: Started update-engine.service - Update Engine. May 8 23:53:08.639272 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 8 23:53:08.646922 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 23:53:08.658192 tar[1753]: linux-arm64/helm May 8 23:53:08.725461 bash[1834]: Updated "/home/core/.ssh/authorized_keys" May 8 23:53:08.733237 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 23:53:08.747169 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 8 23:53:08.936721 sshd_keygen[1724]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 23:53:08.960975 locksmithd[1822]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 23:53:08.962066 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 23:53:08.977094 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 23:53:08.994532 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... May 8 23:53:09.024005 systemd[1]: issuegen.service: Deactivated successfully. May 8 23:53:09.024211 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 23:53:09.047113 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 23:53:09.063428 containerd[1785]: time="2025-05-08T23:53:09.063316560Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 8 23:53:09.067702 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. May 8 23:53:09.097369 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 23:53:09.117826 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 23:53:09.135560 containerd[1785]: time="2025-05-08T23:53:09.134625000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 23:53:09.141282 containerd[1785]: time="2025-05-08T23:53:09.137744640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 23:53:09.141282 containerd[1785]: time="2025-05-08T23:53:09.137798200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 23:53:09.141282 containerd[1785]: time="2025-05-08T23:53:09.137818880Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 23:53:09.141282 containerd[1785]: time="2025-05-08T23:53:09.138002760Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 23:53:09.141282 containerd[1785]: time="2025-05-08T23:53:09.138024840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 23:53:09.141282 containerd[1785]: time="2025-05-08T23:53:09.138108600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:53:09.141282 containerd[1785]: time="2025-05-08T23:53:09.138126000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 23:53:09.141282 containerd[1785]: time="2025-05-08T23:53:09.138290600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:53:09.141282 containerd[1785]: time="2025-05-08T23:53:09.138306240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 23:53:09.141282 containerd[1785]: time="2025-05-08T23:53:09.138319040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:53:09.141282 containerd[1785]: time="2025-05-08T23:53:09.138328120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 23:53:09.138926 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 8 23:53:09.141826 containerd[1785]: time="2025-05-08T23:53:09.138410440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 23:53:09.141826 containerd[1785]: time="2025-05-08T23:53:09.138664520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 23:53:09.141826 containerd[1785]: time="2025-05-08T23:53:09.138766160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:53:09.141826 containerd[1785]: time="2025-05-08T23:53:09.138780800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 23:53:09.141826 containerd[1785]: time="2025-05-08T23:53:09.138876400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 8 23:53:09.141826 containerd[1785]: time="2025-05-08T23:53:09.138922400Z" level=info msg="metadata content store policy set" policy=shared May 8 23:53:09.152754 systemd[1]: Reached target getty.target - Login Prompts. May 8 23:53:09.170476 containerd[1785]: time="2025-05-08T23:53:09.169892160Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 23:53:09.170476 containerd[1785]: time="2025-05-08T23:53:09.169976960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 23:53:09.170476 containerd[1785]: time="2025-05-08T23:53:09.169994720Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 23:53:09.170476 containerd[1785]: time="2025-05-08T23:53:09.170015000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 23:53:09.170476 containerd[1785]: time="2025-05-08T23:53:09.170031720Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 23:53:09.170476 containerd[1785]: time="2025-05-08T23:53:09.170204800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 23:53:09.170476 containerd[1785]: time="2025-05-08T23:53:09.170429320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 23:53:09.170702 containerd[1785]: time="2025-05-08T23:53:09.170552440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 23:53:09.170702 containerd[1785]: time="2025-05-08T23:53:09.170569280Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 23:53:09.170702 containerd[1785]: time="2025-05-08T23:53:09.170584080Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 23:53:09.170702 containerd[1785]: time="2025-05-08T23:53:09.170611760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 23:53:09.170702 containerd[1785]: time="2025-05-08T23:53:09.170626480Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 23:53:09.170702 containerd[1785]: time="2025-05-08T23:53:09.170640040Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 23:53:09.170702 containerd[1785]: time="2025-05-08T23:53:09.170653960Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 23:53:09.170702 containerd[1785]: time="2025-05-08T23:53:09.170670000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 23:53:09.170702 containerd[1785]: time="2025-05-08T23:53:09.170685080Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 23:53:09.170702 containerd[1785]: time="2025-05-08T23:53:09.170698560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 23:53:09.170869 containerd[1785]: time="2025-05-08T23:53:09.170710560Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 May 8 23:53:09.170869 containerd[1785]: time="2025-05-08T23:53:09.170733880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 23:53:09.170869 containerd[1785]: time="2025-05-08T23:53:09.170751480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 23:53:09.170869 containerd[1785]: time="2025-05-08T23:53:09.170764480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 23:53:09.170869 containerd[1785]: time="2025-05-08T23:53:09.170777160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 23:53:09.170869 containerd[1785]: time="2025-05-08T23:53:09.170789400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 23:53:09.170869 containerd[1785]: time="2025-05-08T23:53:09.170803360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 23:53:09.170869 containerd[1785]: time="2025-05-08T23:53:09.170815000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 23:53:09.170869 containerd[1785]: time="2025-05-08T23:53:09.170827080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 23:53:09.170869 containerd[1785]: time="2025-05-08T23:53:09.170839320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 23:53:09.170869 containerd[1785]: time="2025-05-08T23:53:09.170853880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 8 23:53:09.170869 containerd[1785]: time="2025-05-08T23:53:09.170865840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 23:53:09.171074 containerd[1785]: time="2025-05-08T23:53:09.170877680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 23:53:09.171074 containerd[1785]: time="2025-05-08T23:53:09.170890800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 23:53:09.171074 containerd[1785]: time="2025-05-08T23:53:09.170904240Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 23:53:09.171074 containerd[1785]: time="2025-05-08T23:53:09.170930240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 23:53:09.171074 containerd[1785]: time="2025-05-08T23:53:09.170946680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 23:53:09.171074 containerd[1785]: time="2025-05-08T23:53:09.170958440Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 23:53:09.171074 containerd[1785]: time="2025-05-08T23:53:09.171005400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 23:53:09.171074 containerd[1785]: time="2025-05-08T23:53:09.171023880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 23:53:09.171074 containerd[1785]: time="2025-05-08T23:53:09.171034240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 23:53:09.171074 containerd[1785]: time="2025-05-08T23:53:09.171046560Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 23:53:09.171074 containerd[1785]: time="2025-05-08T23:53:09.171055400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 23:53:09.171074 containerd[1785]: time="2025-05-08T23:53:09.171066680Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 23:53:09.171074 containerd[1785]: time="2025-05-08T23:53:09.171076040Z" level=info msg="NRI interface is disabled by configuration." May 8 23:53:09.171306 containerd[1785]: time="2025-05-08T23:53:09.171086040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 8 23:53:09.171741 containerd[1785]: time="2025-05-08T23:53:09.171358760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 23:53:09.171741 containerd[1785]: time="2025-05-08T23:53:09.171419520Z" level=info msg="Connect containerd service" May 8 23:53:09.175604 containerd[1785]: time="2025-05-08T23:53:09.172558040Z" level=info msg="using legacy CRI server" May 8 23:53:09.175604 containerd[1785]: time="2025-05-08T23:53:09.172590160Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 23:53:09.175604 containerd[1785]: time="2025-05-08T23:53:09.172766560Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 23:53:09.178142 containerd[1785]: time="2025-05-08T23:53:09.173594520Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 23:53:09.178469 containerd[1785]: time="2025-05-08T23:53:09.178407080Z" level=info msg="Start subscribing containerd event" May 8 23:53:09.178524 containerd[1785]: time="2025-05-08T23:53:09.178488200Z" level=info msg="Start recovering state" May 8 23:53:09.178601 containerd[1785]: time="2025-05-08T23:53:09.178581640Z" level=info msg="Start event monitor" May 8 23:53:09.178628 containerd[1785]: time="2025-05-08T23:53:09.178599200Z" level=info msg="Start snapshots syncer" May 8 23:53:09.178628 containerd[1785]: time="2025-05-08T23:53:09.178614800Z" level=info msg="Start cni network conf syncer for default" May 8 23:53:09.178628 containerd[1785]: time="2025-05-08T23:53:09.178622720Z" level=info msg="Start streaming server" May 8 23:53:09.179970 containerd[1785]: time="2025-05-08T23:53:09.179934600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 23:53:09.181193 containerd[1785]: time="2025-05-08T23:53:09.181164240Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 23:53:09.181943 containerd[1785]: time="2025-05-08T23:53:09.181920640Z" level=info msg="containerd successfully booted in 0.120264s" May 8 23:53:09.181989 systemd[1]: Started containerd.service - containerd container runtime. May 8 23:53:09.345578 tar[1753]: linux-arm64/LICENSE May 8 23:53:09.345759 tar[1753]: linux-arm64/README.md May 8 23:53:09.362655 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 8 23:53:09.509630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:53:09.517604 (kubelet)[1883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 23:53:09.519344 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 23:53:09.529565 systemd[1]: Startup finished in 667ms (kernel) + 12.349s (initrd) + 9.261s (userspace) = 22.279s. May 8 23:53:09.727646 login[1872]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 8 23:53:09.732848 login[1873]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 8 23:53:09.741359 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 23:53:09.750793 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
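The containerd error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected at this point: the CniConfig section of the dumped CRI configuration points at /etc/cni/net.d, and nothing has installed a network config there yet (that normally happens once a pod network add-on is deployed). The same check can be sketched as follows; the extension list is an assumption about what a CNI loader conventionally accepts, not something shown in the log:

import glob
import os

# Mirror containerd's startup check: is there any CNI network config yet?
CNI_CONF_DIR = "/etc/cni/net.d"  # NetworkPluginConfDir from the config dump above

configs = []
for pattern in ("*.conf", "*.conflist", "*.json"):
    configs.extend(glob.glob(os.path.join(CNI_CONF_DIR, pattern)))

if configs:
    print("CNI configs present:", ", ".join(sorted(configs)))
else:
    print(f"no network config found in {CNI_CONF_DIR} (cni plugin not initialized)")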
May 8 23:53:09.753572 systemd-logind[1722]: New session 2 of user core. May 8 23:53:09.761137 systemd-logind[1722]: New session 1 of user core. May 8 23:53:09.772243 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 23:53:09.781703 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 23:53:09.807974 (systemd)[1895]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 23:53:10.010962 systemd[1895]: Queued start job for default target default.target. May 8 23:53:10.019466 systemd[1895]: Created slice app.slice - User Application Slice. May 8 23:53:10.019505 systemd[1895]: Reached target paths.target - Paths. May 8 23:53:10.019519 systemd[1895]: Reached target timers.target - Timers. May 8 23:53:10.022642 systemd[1895]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 23:53:10.033898 systemd[1895]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 23:53:10.034026 systemd[1895]: Reached target sockets.target - Sockets. May 8 23:53:10.034040 systemd[1895]: Reached target basic.target - Basic System. May 8 23:53:10.034080 systemd[1895]: Reached target default.target - Main User Target. May 8 23:53:10.034107 systemd[1895]: Startup finished in 216ms. May 8 23:53:10.034242 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 23:53:10.040706 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 23:53:10.041916 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 23:53:10.078069 kubelet[1883]: E0508 23:53:10.077994 1883 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 23:53:10.080953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 23:53:10.081121 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
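The kubelet exits immediately above because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written when the node is joined to a cluster (for example by kubeadm), so the unit keeps failing and systemd keeps scheduling restarts, as seen again later in this log. The failing pre-condition is easy to reproduce; a sketch, with the path taken from the error message above:

import os

# The file kubelet[1883] failed to read above. It is created during cluster
# join (e.g. by kubeadm), not by the OS image itself.
CONFIG = "/var/lib/kubelet/config.yaml"

if os.path.exists(CONFIG):
    print(f"{CONFIG} present; kubelet should get past config loading")
else:
    # Same condition as the logged error: open(...) fails with ENOENT.
    print(f"{CONFIG}: no such file or directory; kubelet will exit "
          "and systemd will retry per its restart policy")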
May 8 23:53:10.661477 waagent[1867]: 2025-05-08T23:53:10.659622Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 May 8 23:53:10.665814 waagent[1867]: 2025-05-08T23:53:10.665736Z INFO Daemon Daemon OS: flatcar 4152.2.3 May 8 23:53:10.670379 waagent[1867]: 2025-05-08T23:53:10.670310Z INFO Daemon Daemon Python: 3.11.10 May 8 23:53:10.674889 waagent[1867]: 2025-05-08T23:53:10.674818Z INFO Daemon Daemon Run daemon May 8 23:53:10.679015 waagent[1867]: 2025-05-08T23:53:10.678951Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4152.2.3' May 8 23:53:10.690252 waagent[1867]: 2025-05-08T23:53:10.690168Z INFO Daemon Daemon Using waagent for provisioning May 8 23:53:10.695659 waagent[1867]: 2025-05-08T23:53:10.695598Z INFO Daemon Daemon Activate resource disk May 8 23:53:10.700353 waagent[1867]: 2025-05-08T23:53:10.700289Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 8 23:53:10.713928 waagent[1867]: 2025-05-08T23:53:10.713847Z INFO Daemon Daemon Found device: None May 8 23:53:10.718513 waagent[1867]: 2025-05-08T23:53:10.718449Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 8 23:53:10.726838 waagent[1867]: 2025-05-08T23:53:10.726771Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 8 23:53:10.738223 waagent[1867]: 2025-05-08T23:53:10.738165Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 8 23:53:10.744059 waagent[1867]: 2025-05-08T23:53:10.743995Z INFO Daemon Daemon Running default provisioning handler May 8 23:53:10.756478 waagent[1867]: 2025-05-08T23:53:10.755728Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. May 8 23:53:10.770599 waagent[1867]: 2025-05-08T23:53:10.770507Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 8 23:53:10.781628 waagent[1867]: 2025-05-08T23:53:10.781539Z INFO Daemon Daemon cloud-init is enabled: False May 8 23:53:10.788643 waagent[1867]: 2025-05-08T23:53:10.788574Z INFO Daemon Daemon Copying ovf-env.xml May 8 23:53:10.853461 waagent[1867]: 2025-05-08T23:53:10.851670Z INFO Daemon Daemon Successfully mounted dvd May 8 23:53:10.879397 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 8 23:53:10.881631 waagent[1867]: 2025-05-08T23:53:10.881564Z INFO Daemon Daemon Detect protocol endpoint May 8 23:53:10.887602 waagent[1867]: 2025-05-08T23:53:10.887520Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 8 23:53:10.894043 waagent[1867]: 2025-05-08T23:53:10.893966Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler May 8 23:53:10.906465 waagent[1867]: 2025-05-08T23:53:10.902763Z INFO Daemon Daemon Test for route to 168.63.129.16 May 8 23:53:10.910075 waagent[1867]: 2025-05-08T23:53:10.910013Z INFO Daemon Daemon Route to 168.63.129.16 exists May 8 23:53:10.916094 waagent[1867]: 2025-05-08T23:53:10.916036Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 8 23:53:10.945591 waagent[1867]: 2025-05-08T23:53:10.945542Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 8 23:53:10.952430 waagent[1867]: 2025-05-08T23:53:10.952400Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 8 23:53:10.959453 waagent[1867]: 2025-05-08T23:53:10.959386Z INFO Daemon Daemon Server preferred version:2015-04-05 May 8 23:53:11.154537 waagent[1867]: 2025-05-08T23:53:11.154424Z INFO Daemon Daemon Initializing goal state during protocol detection May 8 23:53:11.161063 waagent[1867]: 2025-05-08T23:53:11.160999Z INFO Daemon Daemon Forcing an update of the goal state. May 8 23:53:11.177304 waagent[1867]: 2025-05-08T23:53:11.177206Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] May 8 23:53:11.198924 waagent[1867]: 2025-05-08T23:53:11.198878Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 May 8 23:53:11.205230 waagent[1867]: 2025-05-08T23:53:11.205182Z INFO Daemon May 8 23:53:11.208210 waagent[1867]: 2025-05-08T23:53:11.208155Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: efb032de-53f8-497b-a051-62cf60a71800 eTag: 3070678549634339709 source: Fabric] May 8 23:53:11.220123 waagent[1867]: 2025-05-08T23:53:11.220072Z INFO Daemon The vmSettings originated via Fabric; will ignore them. May 8 23:53:11.227091 waagent[1867]: 2025-05-08T23:53:11.227042Z INFO Daemon May 8 23:53:11.229879 waagent[1867]: 2025-05-08T23:53:11.229827Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] May 8 23:53:11.245412 waagent[1867]: 2025-05-08T23:53:11.245369Z INFO Daemon Daemon Downloading artifacts profile blob May 8 23:53:11.419689 waagent[1867]: 2025-05-08T23:53:11.419598Z INFO Daemon Downloaded certificate {'thumbprint': 'A423A15274303330A71FDAA42C41334115E05B7D', 'hasPrivateKey': True} May 8 23:53:11.429570 waagent[1867]: 2025-05-08T23:53:11.429489Z INFO Daemon Downloaded certificate {'thumbprint': '498DF78DEECCDDEC86EEFFB125345FFF6D90E134', 'hasPrivateKey': False} May 8 23:53:11.439630 waagent[1867]: 2025-05-08T23:53:11.439574Z INFO Daemon Fetch goal state completed May 8 23:53:11.486404 waagent[1867]: 2025-05-08T23:53:11.486360Z INFO Daemon Daemon Starting provisioning May 8 23:53:11.491420 waagent[1867]: 2025-05-08T23:53:11.491351Z INFO Daemon Daemon Handle ovf-env.xml. May 8 23:53:11.496202 waagent[1867]: 2025-05-08T23:53:11.496146Z INFO Daemon Daemon Set hostname [ci-4152.2.3-n-71d56f534c] May 8 23:53:11.527461 waagent[1867]: 2025-05-08T23:53:11.526620Z INFO Daemon Daemon Publish hostname [ci-4152.2.3-n-71d56f534c] May 8 23:53:11.533177 waagent[1867]: 2025-05-08T23:53:11.533110Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 8 23:53:11.539306 waagent[1867]: 2025-05-08T23:53:11.539249Z INFO Daemon Daemon Primary interface is [eth0] May 8 23:53:11.575199 systemd-networkd[1506]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 23:53:11.575206 systemd-networkd[1506]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
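The wire-protocol negotiation logged above (Fabric prefers 2015-04-05, the agent settles on 2012-11-30) starts from the same ?comp=versions probe that coreos-metadata fetched earlier in this log. A plain GET against that endpoint is enough for illustration; the regex used to pull version strings out of the XML reply is an assumption about the response shape, not something shown here:

import re
import urllib.request

# WireServer "versions" probe, the URL fetched earlier in this log.
WIRESERVER = "http://168.63.129.16"

with urllib.request.urlopen(f"{WIRESERVER}/?comp=versions", timeout=5) as resp:
    body = resp.read().decode()

# The reply is XML; pulling out anything that looks like a dated version
# string (e.g. 2015-04-05, 2012-11-30) is enough for this sketch.
versions = sorted(set(re.findall(r"\d{4}-\d{2}-\d{2}", body)))
print("wire protocol versions advertised:", ", ".join(versions))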
May 8 23:53:11.575233 systemd-networkd[1506]: eth0: DHCP lease lost May 8 23:53:11.576619 waagent[1867]: 2025-05-08T23:53:11.576537Z INFO Daemon Daemon Create user account if not exists May 8 23:53:11.582524 waagent[1867]: 2025-05-08T23:53:11.582457Z INFO Daemon Daemon User core already exists, skip useradd May 8 23:53:11.588020 waagent[1867]: 2025-05-08T23:53:11.587956Z INFO Daemon Daemon Configure sudoer May 8 23:53:11.588093 systemd-networkd[1506]: eth0: DHCPv6 lease lost May 8 23:53:11.592636 waagent[1867]: 2025-05-08T23:53:11.592569Z INFO Daemon Daemon Configure sshd May 8 23:53:11.597016 waagent[1867]: 2025-05-08T23:53:11.596957Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. May 8 23:53:11.609172 waagent[1867]: 2025-05-08T23:53:11.609105Z INFO Daemon Daemon Deploy ssh public key. May 8 23:53:11.622512 systemd-networkd[1506]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 8 23:53:12.720933 waagent[1867]: 2025-05-08T23:53:12.715904Z INFO Daemon Daemon Provisioning complete May 8 23:53:12.735244 waagent[1867]: 2025-05-08T23:53:12.735168Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 8 23:53:12.743072 waagent[1867]: 2025-05-08T23:53:12.742989Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. May 8 23:53:12.754953 waagent[1867]: 2025-05-08T23:53:12.754879Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent May 8 23:53:12.899346 waagent[1953]: 2025-05-08T23:53:12.898818Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) May 8 23:53:12.899346 waagent[1953]: 2025-05-08T23:53:12.898974Z INFO ExtHandler ExtHandler OS: flatcar 4152.2.3 May 8 23:53:12.899346 waagent[1953]: 2025-05-08T23:53:12.899027Z INFO ExtHandler ExtHandler Python: 3.11.10 May 8 23:53:12.916735 waagent[1953]: 2025-05-08T23:53:12.916649Z INFO ExtHandler ExtHandler Distro: flatcar-4152.2.3; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; May 8 23:53:12.917073 waagent[1953]: 2025-05-08T23:53:12.917035Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 8 23:53:12.917209 waagent[1953]: 2025-05-08T23:53:12.917176Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 8 23:53:12.925929 waagent[1953]: 2025-05-08T23:53:12.925843Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 8 23:53:12.932244 waagent[1953]: 2025-05-08T23:53:12.932195Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 May 8 23:53:12.933461 waagent[1953]: 2025-05-08T23:53:12.932910Z INFO ExtHandler May 8 23:53:12.933461 waagent[1953]: 2025-05-08T23:53:12.932988Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d4bcc9b0-0ade-445e-8d7e-6ae7ab0dec78 eTag: 3070678549634339709 source: Fabric] May 8 23:53:12.933461 waagent[1953]: 2025-05-08T23:53:12.933255Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
May 8 23:53:12.934028 waagent[1953]: 2025-05-08T23:53:12.933985Z INFO ExtHandler May 8 23:53:12.934165 waagent[1953]: 2025-05-08T23:53:12.934134Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 8 23:53:12.938457 waagent[1953]: 2025-05-08T23:53:12.938394Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 8 23:53:13.024134 waagent[1953]: 2025-05-08T23:53:13.023996Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A423A15274303330A71FDAA42C41334115E05B7D', 'hasPrivateKey': True} May 8 23:53:13.024768 waagent[1953]: 2025-05-08T23:53:13.024724Z INFO ExtHandler Downloaded certificate {'thumbprint': '498DF78DEECCDDEC86EEFFB125345FFF6D90E134', 'hasPrivateKey': False} May 8 23:53:13.026272 waagent[1953]: 2025-05-08T23:53:13.025333Z INFO ExtHandler Fetch goal state completed May 8 23:53:13.042479 waagent[1953]: 2025-05-08T23:53:13.041980Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1953 May 8 23:53:13.042479 waagent[1953]: 2025-05-08T23:53:13.042183Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** May 8 23:53:13.043977 waagent[1953]: 2025-05-08T23:53:13.043918Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4152.2.3', '', 'Flatcar Container Linux by Kinvolk'] May 8 23:53:13.044362 waagent[1953]: 2025-05-08T23:53:13.044324Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 8 23:53:13.073319 waagent[1953]: 2025-05-08T23:53:13.073271Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 8 23:53:13.073559 waagent[1953]: 2025-05-08T23:53:13.073518Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 8 23:53:13.080386 waagent[1953]: 2025-05-08T23:53:13.079796Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 8 23:53:13.086997 systemd[1]: Reloading requested from client PID 1968 ('systemctl') (unit waagent.service)... May 8 23:53:13.087011 systemd[1]: Reloading... May 8 23:53:13.164587 zram_generator::config[2002]: No configuration found. May 8 23:53:13.291935 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 23:53:13.378470 systemd[1]: Reloading finished in 291 ms. May 8 23:53:13.401473 waagent[1953]: 2025-05-08T23:53:13.400794Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service May 8 23:53:13.407325 systemd[1]: Reloading requested from client PID 2056 ('systemctl') (unit waagent.service)... May 8 23:53:13.407342 systemd[1]: Reloading... May 8 23:53:13.483472 zram_generator::config[2093]: No configuration found. May 8 23:53:13.586670 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 23:53:13.668703 systemd[1]: Reloading finished in 261 ms. 
May 8 23:53:13.702368 waagent[1953]: 2025-05-08T23:53:13.698904Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service May 8 23:53:13.702368 waagent[1953]: 2025-05-08T23:53:13.699084Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully May 8 23:53:14.628479 waagent[1953]: 2025-05-08T23:53:14.628229Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. May 8 23:53:14.628970 waagent[1953]: 2025-05-08T23:53:14.628909Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] May 8 23:53:14.629874 waagent[1953]: 2025-05-08T23:53:14.629783Z INFO ExtHandler ExtHandler Starting env monitor service. May 8 23:53:14.630355 waagent[1953]: 2025-05-08T23:53:14.630192Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 8 23:53:14.631301 waagent[1953]: 2025-05-08T23:53:14.630581Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 8 23:53:14.631301 waagent[1953]: 2025-05-08T23:53:14.630679Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 8 23:53:14.631301 waagent[1953]: 2025-05-08T23:53:14.630870Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 8 23:53:14.631301 waagent[1953]: 2025-05-08T23:53:14.631038Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 8 23:53:14.631301 waagent[1953]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 8 23:53:14.631301 waagent[1953]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 May 8 23:53:14.631301 waagent[1953]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 8 23:53:14.631301 waagent[1953]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 8 23:53:14.631301 waagent[1953]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 8 23:53:14.631301 waagent[1953]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 8 23:53:14.631677 waagent[1953]: 2025-05-08T23:53:14.631614Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 8 23:53:14.631769 waagent[1953]: 2025-05-08T23:53:14.631719Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 8 23:53:14.631975 waagent[1953]: 2025-05-08T23:53:14.631925Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 8 23:53:14.632103 waagent[1953]: 2025-05-08T23:53:14.632051Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 8 23:53:14.632501 waagent[1953]: 2025-05-08T23:53:14.632416Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 8 23:53:14.632605 waagent[1953]: 2025-05-08T23:53:14.632571Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 8 23:53:14.632754 waagent[1953]: 2025-05-08T23:53:14.632711Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
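The routing table waagent dumps above comes straight from /proc/net/route, where destination, gateway and mask are 32-bit values printed as little-endian hex: 0114C80A is 10.200.20.1 (the gateway DHCP handed out earlier), 10813FA8 is 168.63.129.16 and FEA9FEA9 is 169.254.169.254. A small decoder, as a sketch:

import socket
import struct

def decode(hexfield: str) -> str:
    """Convert a /proc/net/route hex field (little-endian) to dotted quad."""
    return socket.inet_ntoa(struct.pack("<I", int(hexfield, 16)))

# The interesting rows from the table waagent printed above.
for dest, gw in [("00000000", "0114C80A"),   # default route via the gateway
                 ("10813FA8", "0114C80A"),   # host route to the WireServer
                 ("FEA9FEA9", "0114C80A")]:  # host route to IMDS
    print(f"{decode(dest):<16} via {decode(gw)}")

# Reading the live table works the same way:
with open("/proc/net/route") as f:
    next(f)                                  # skip the header line
    for line in f:
        iface, dest, gw, *_ = line.split()
        print(iface, decode(dest), "->", decode(gw))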
May 8 23:53:14.633008 waagent[1953]: 2025-05-08T23:53:14.632960Z INFO EnvHandler ExtHandler Configure routes May 8 23:53:14.633685 waagent[1953]: 2025-05-08T23:53:14.633639Z INFO EnvHandler ExtHandler Gateway:None May 8 23:53:14.634591 waagent[1953]: 2025-05-08T23:53:14.634549Z INFO EnvHandler ExtHandler Routes:None May 8 23:53:14.646663 waagent[1953]: 2025-05-08T23:53:14.646605Z INFO ExtHandler ExtHandler May 8 23:53:14.646932 waagent[1953]: 2025-05-08T23:53:14.646894Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 76f57127-3923-4797-ba8c-a378f2fc3a45 correlation a729df0d-5071-423f-b43e-8e082b0ba2e0 created: 2025-05-08T23:52:08.311707Z] May 8 23:53:14.647425 waagent[1953]: 2025-05-08T23:53:14.647384Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. May 8 23:53:14.648702 waagent[1953]: 2025-05-08T23:53:14.648104Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] May 8 23:53:14.677169 waagent[1953]: 2025-05-08T23:53:14.677095Z INFO MonitorHandler ExtHandler Network interfaces: May 8 23:53:14.677169 waagent[1953]: Executing ['ip', '-a', '-o', 'link']: May 8 23:53:14.677169 waagent[1953]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 8 23:53:14.677169 waagent[1953]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b5:c6:ce brd ff:ff:ff:ff:ff:ff May 8 23:53:14.677169 waagent[1953]: 3: enP65345s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b5:c6:ce brd ff:ff:ff:ff:ff:ff\ altname enP65345p0s2 May 8 23:53:14.677169 waagent[1953]: Executing ['ip', '-4', '-a', '-o', 'address']: May 8 23:53:14.677169 waagent[1953]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 8 23:53:14.677169 waagent[1953]: 2: eth0 inet 10.200.20.33/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever May 8 23:53:14.677169 waagent[1953]: Executing ['ip', '-6', '-a', '-o', 'address']: May 8 23:53:14.677169 waagent[1953]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever May 8 23:53:14.677169 waagent[1953]: 2: eth0 inet6 fe80::222:48ff:feb5:c6ce/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 8 23:53:14.677169 waagent[1953]: 3: enP65345s1 inet6 fe80::222:48ff:feb5:c6ce/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 8 23:53:14.704718 waagent[1953]: 2025-05-08T23:53:14.704650Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 4C3C0AD9-DCE5-47F7-859A-8D19CBB08CC6;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] May 8 23:53:14.743259 waagent[1953]: 2025-05-08T23:53:14.743184Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: May 8 23:53:14.743259 waagent[1953]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 8 23:53:14.743259 waagent[1953]: pkts bytes target prot opt in out source destination May 8 23:53:14.743259 waagent[1953]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 8 23:53:14.743259 waagent[1953]: pkts bytes target prot opt in out source destination May 8 23:53:14.743259 waagent[1953]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 8 23:53:14.743259 waagent[1953]: pkts bytes target prot opt in out source destination May 8 23:53:14.743259 waagent[1953]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 8 23:53:14.743259 waagent[1953]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 8 23:53:14.743259 waagent[1953]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 8 23:53:14.746860 waagent[1953]: 2025-05-08T23:53:14.746796Z INFO EnvHandler ExtHandler Current Firewall rules: May 8 23:53:14.746860 waagent[1953]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 8 23:53:14.746860 waagent[1953]: pkts bytes target prot opt in out source destination May 8 23:53:14.746860 waagent[1953]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 8 23:53:14.746860 waagent[1953]: pkts bytes target prot opt in out source destination May 8 23:53:14.746860 waagent[1953]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 8 23:53:14.746860 waagent[1953]: pkts bytes target prot opt in out source destination May 8 23:53:14.746860 waagent[1953]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 8 23:53:14.746860 waagent[1953]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 8 23:53:14.746860 waagent[1953]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 8 23:53:14.747462 waagent[1953]: 2025-05-08T23:53:14.747402Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 8 23:53:20.163121 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 23:53:20.171639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:53:20.272188 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:53:20.283796 (kubelet)[2183]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 23:53:20.326949 kubelet[2183]: E0508 23:53:20.326901 2183 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 23:53:20.330489 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 23:53:20.330652 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 23:53:25.405277 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 23:53:25.406491 systemd[1]: Started sshd@0-10.200.20.33:22-10.200.16.10:56778.service - OpenSSH per-connection server daemon (10.200.16.10:56778). May 8 23:53:25.972583 sshd[2192]: Accepted publickey for core from 10.200.16.10 port 56778 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:53:25.973916 sshd-session[2192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:53:25.977797 systemd-logind[1722]: New session 3 of user core. 
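The three OUTPUT rules waagent reports above restrict traffic to the WireServer: DNS to 168.63.129.16 port 53 is allowed, root-owned (UID 0) connections are allowed, and any other new connection to that address is dropped. Expressed as iptables invocations they would look roughly like the following; the exact chain and match options are an inference from the counters table printed above, not commands taken from the log:

WIRESERVER = "168.63.129.16"

# Approximate equivalents of the OUTPUT-chain rules listed above; printed
# rather than executed, since applying them requires root and changes the
# host firewall.
rules = [
    ["iptables", "-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "--dport", "53", "-j", "ACCEPT"],
    ["iptables", "-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    ["iptables", "-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

for rule in rules:
    print(" ".join(rule))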
May 8 23:53:25.985665 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 23:53:26.403597 systemd[1]: Started sshd@1-10.200.20.33:22-10.200.16.10:56792.service - OpenSSH per-connection server daemon (10.200.16.10:56792). May 8 23:53:26.895720 sshd[2197]: Accepted publickey for core from 10.200.16.10 port 56792 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:53:26.897027 sshd-session[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:53:26.902220 systemd-logind[1722]: New session 4 of user core. May 8 23:53:26.909626 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 23:53:27.244420 sshd[2199]: Connection closed by 10.200.16.10 port 56792 May 8 23:53:27.245012 sshd-session[2197]: pam_unix(sshd:session): session closed for user core May 8 23:53:27.247639 systemd[1]: sshd@1-10.200.20.33:22-10.200.16.10:56792.service: Deactivated successfully. May 8 23:53:27.249344 systemd[1]: session-4.scope: Deactivated successfully. May 8 23:53:27.251315 systemd-logind[1722]: Session 4 logged out. Waiting for processes to exit. May 8 23:53:27.252247 systemd-logind[1722]: Removed session 4. May 8 23:53:27.334554 systemd[1]: Started sshd@2-10.200.20.33:22-10.200.16.10:56796.service - OpenSSH per-connection server daemon (10.200.16.10:56796). May 8 23:53:27.818357 sshd[2204]: Accepted publickey for core from 10.200.16.10 port 56796 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:53:27.819723 sshd-session[2204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:53:27.823629 systemd-logind[1722]: New session 5 of user core. May 8 23:53:27.834660 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 23:53:28.161977 sshd[2206]: Connection closed by 10.200.16.10 port 56796 May 8 23:53:28.161387 sshd-session[2204]: pam_unix(sshd:session): session closed for user core May 8 23:53:28.164966 systemd[1]: sshd@2-10.200.20.33:22-10.200.16.10:56796.service: Deactivated successfully. May 8 23:53:28.168234 systemd[1]: session-5.scope: Deactivated successfully. May 8 23:53:28.168978 systemd-logind[1722]: Session 5 logged out. Waiting for processes to exit. May 8 23:53:28.170034 systemd-logind[1722]: Removed session 5. May 8 23:53:28.247952 systemd[1]: Started sshd@3-10.200.20.33:22-10.200.16.10:56810.service - OpenSSH per-connection server daemon (10.200.16.10:56810). May 8 23:53:28.732253 sshd[2211]: Accepted publickey for core from 10.200.16.10 port 56810 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:53:28.733948 sshd-session[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:53:28.738653 systemd-logind[1722]: New session 6 of user core. May 8 23:53:28.746632 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 23:53:29.079581 sshd[2213]: Connection closed by 10.200.16.10 port 56810 May 8 23:53:29.079381 sshd-session[2211]: pam_unix(sshd:session): session closed for user core May 8 23:53:29.083302 systemd[1]: sshd@3-10.200.20.33:22-10.200.16.10:56810.service: Deactivated successfully. May 8 23:53:29.085323 systemd[1]: session-6.scope: Deactivated successfully. May 8 23:53:29.086143 systemd-logind[1722]: Session 6 logged out. Waiting for processes to exit. May 8 23:53:29.087110 systemd-logind[1722]: Removed session 6. May 8 23:53:29.166953 systemd[1]: Started sshd@4-10.200.20.33:22-10.200.16.10:44998.service - OpenSSH per-connection server daemon (10.200.16.10:44998). 
May 8 23:53:29.650157 sshd[2218]: Accepted publickey for core from 10.200.16.10 port 44998 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:53:29.651513 sshd-session[2218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:53:29.656425 systemd-logind[1722]: New session 7 of user core. May 8 23:53:29.661688 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 23:53:30.031103 sudo[2221]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 23:53:30.031387 sudo[2221]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:53:30.059399 sudo[2221]: pam_unix(sudo:session): session closed for user root May 8 23:53:30.136242 sshd[2220]: Connection closed by 10.200.16.10 port 44998 May 8 23:53:30.135482 sshd-session[2218]: pam_unix(sshd:session): session closed for user core May 8 23:53:30.138642 systemd[1]: sshd@4-10.200.20.33:22-10.200.16.10:44998.service: Deactivated successfully. May 8 23:53:30.140220 systemd[1]: session-7.scope: Deactivated successfully. May 8 23:53:30.141842 systemd-logind[1722]: Session 7 logged out. Waiting for processes to exit. May 8 23:53:30.142998 systemd-logind[1722]: Removed session 7. May 8 23:53:30.220763 systemd[1]: Started sshd@5-10.200.20.33:22-10.200.16.10:45000.service - OpenSSH per-connection server daemon (10.200.16.10:45000). May 8 23:53:30.413126 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 8 23:53:30.418643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:53:30.518279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:53:30.529736 (kubelet)[2236]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 23:53:30.580297 kubelet[2236]: E0508 23:53:30.580252 2236 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 23:53:30.583026 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 23:53:30.583173 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 23:53:30.707649 sshd[2226]: Accepted publickey for core from 10.200.16.10 port 45000 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:53:30.708729 sshd-session[2226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:53:30.713467 systemd-logind[1722]: New session 8 of user core. May 8 23:53:30.726605 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 23:53:30.977229 sudo[2246]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 23:53:30.978148 sudo[2246]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:53:30.981691 sudo[2246]: pam_unix(sudo:session): session closed for user root May 8 23:53:30.986379 sudo[2245]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 8 23:53:30.986685 sudo[2245]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:53:30.997762 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
May 8 23:53:31.023390 augenrules[2268]: No rules May 8 23:53:31.024869 systemd[1]: audit-rules.service: Deactivated successfully. May 8 23:53:31.025051 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 23:53:31.027861 sudo[2245]: pam_unix(sudo:session): session closed for user root May 8 23:53:31.101197 sshd[2244]: Connection closed by 10.200.16.10 port 45000 May 8 23:53:31.101619 sshd-session[2226]: pam_unix(sshd:session): session closed for user core May 8 23:53:31.104518 systemd[1]: sshd@5-10.200.20.33:22-10.200.16.10:45000.service: Deactivated successfully. May 8 23:53:31.106024 systemd[1]: session-8.scope: Deactivated successfully. May 8 23:53:31.107593 systemd-logind[1722]: Session 8 logged out. Waiting for processes to exit. May 8 23:53:31.108718 systemd-logind[1722]: Removed session 8. May 8 23:53:31.187035 systemd[1]: Started sshd@6-10.200.20.33:22-10.200.16.10:45002.service - OpenSSH per-connection server daemon (10.200.16.10:45002). May 8 23:53:31.675304 sshd[2276]: Accepted publickey for core from 10.200.16.10 port 45002 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:53:31.676635 sshd-session[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:53:31.681449 systemd-logind[1722]: New session 9 of user core. May 8 23:53:31.687642 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 23:53:31.947715 sudo[2279]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 23:53:31.948094 sudo[2279]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:53:31.993037 chronyd[1706]: Selected source PHC0 May 8 23:53:33.118980 systemd[1]: Starting docker.service - Docker Application Container Engine... May 8 23:53:33.119857 (dockerd)[2296]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 23:53:33.867470 dockerd[2296]: time="2025-05-08T23:53:33.867152839Z" level=info msg="Starting up" May 8 23:53:34.334756 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4075626819-merged.mount: Deactivated successfully. May 8 23:53:34.428398 dockerd[2296]: time="2025-05-08T23:53:34.428332691Z" level=info msg="Loading containers: start." May 8 23:53:34.632482 kernel: Initializing XFRM netlink socket May 8 23:53:34.771035 systemd-networkd[1506]: docker0: Link UP May 8 23:53:34.798779 dockerd[2296]: time="2025-05-08T23:53:34.798732708Z" level=info msg="Loading containers: done." May 8 23:53:34.820625 dockerd[2296]: time="2025-05-08T23:53:34.820553780Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 23:53:34.820790 dockerd[2296]: time="2025-05-08T23:53:34.820712380Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 May 8 23:53:34.820897 dockerd[2296]: time="2025-05-08T23:53:34.820871261Z" level=info msg="Daemon has completed initialization" May 8 23:53:34.874279 dockerd[2296]: time="2025-05-08T23:53:34.874195418Z" level=info msg="API listen on /run/docker.sock" May 8 23:53:34.874819 systemd[1]: Started docker.service - Docker Application Container Engine. 
May 8 23:53:36.823027 containerd[1785]: time="2025-05-08T23:53:36.822920885Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 8 23:53:37.638385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3888311978.mount: Deactivated successfully. May 8 23:53:38.878467 containerd[1785]: time="2025-05-08T23:53:38.878402547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:38.881595 containerd[1785]: time="2025-05-08T23:53:38.881529792Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794150" May 8 23:53:38.884906 containerd[1785]: time="2025-05-08T23:53:38.884804237Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:38.889944 containerd[1785]: time="2025-05-08T23:53:38.889878964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:38.891614 containerd[1785]: time="2025-05-08T23:53:38.891043886Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.068080401s" May 8 23:53:38.891614 containerd[1785]: time="2025-05-08T23:53:38.891094006Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 8 23:53:38.915109 containerd[1785]: time="2025-05-08T23:53:38.914877960Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 8 23:53:40.261168 containerd[1785]: time="2025-05-08T23:53:40.261114165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:40.263359 containerd[1785]: time="2025-05-08T23:53:40.263302209Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855550" May 8 23:53:40.266051 containerd[1785]: time="2025-05-08T23:53:40.266019013Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:40.271321 containerd[1785]: time="2025-05-08T23:53:40.271260060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:40.272455 containerd[1785]: time="2025-05-08T23:53:40.272316062Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.357393822s" May 8 
23:53:40.272455 containerd[1785]: time="2025-05-08T23:53:40.272354022Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 8 23:53:40.293484 containerd[1785]: time="2025-05-08T23:53:40.293432614Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 8 23:53:40.663176 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 8 23:53:40.669636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:53:40.771202 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:53:40.781718 (kubelet)[2559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 23:53:40.825066 kubelet[2559]: E0508 23:53:40.825002 2559 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 23:53:40.827904 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 23:53:40.828060 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 23:53:41.645343 containerd[1785]: time="2025-05-08T23:53:41.644455037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:41.646883 containerd[1785]: time="2025-05-08T23:53:41.646626160Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263945" May 8 23:53:41.649867 containerd[1785]: time="2025-05-08T23:53:41.649784445Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:41.656640 containerd[1785]: time="2025-05-08T23:53:41.656579895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:41.658269 containerd[1785]: time="2025-05-08T23:53:41.657361696Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.363877002s" May 8 23:53:41.658269 containerd[1785]: time="2025-05-08T23:53:41.657398256Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 8 23:53:41.681474 containerd[1785]: time="2025-05-08T23:53:41.681414892Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 8 23:53:43.502370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount296578923.mount: Deactivated successfully. 
May 8 23:53:43.832120 containerd[1785]: time="2025-05-08T23:53:43.831973912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:43.835088 containerd[1785]: time="2025-05-08T23:53:43.835030117Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775705" May 8 23:53:43.838640 containerd[1785]: time="2025-05-08T23:53:43.838563562Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:43.842352 containerd[1785]: time="2025-05-08T23:53:43.842293767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:43.843062 containerd[1785]: time="2025-05-08T23:53:43.842919728Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 2.161446956s" May 8 23:53:43.843062 containerd[1785]: time="2025-05-08T23:53:43.842958648Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 8 23:53:43.864155 containerd[1785]: time="2025-05-08T23:53:43.863915200Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 8 23:53:44.444817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3459812604.mount: Deactivated successfully. 
May 8 23:53:45.361699 containerd[1785]: time="2025-05-08T23:53:45.361639042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:45.364714 containerd[1785]: time="2025-05-08T23:53:45.364650367Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" May 8 23:53:45.368251 containerd[1785]: time="2025-05-08T23:53:45.368212212Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:45.375465 containerd[1785]: time="2025-05-08T23:53:45.374209101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:45.375605 containerd[1785]: time="2025-05-08T23:53:45.375487263Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.511528583s" May 8 23:53:45.375605 containerd[1785]: time="2025-05-08T23:53:45.375530063Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 8 23:53:45.396690 containerd[1785]: time="2025-05-08T23:53:45.396614335Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 8 23:53:45.963557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2011874250.mount: Deactivated successfully. 
May 8 23:53:45.988488 containerd[1785]: time="2025-05-08T23:53:45.987749420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:45.989913 containerd[1785]: time="2025-05-08T23:53:45.989865783Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" May 8 23:53:45.994077 containerd[1785]: time="2025-05-08T23:53:45.994021909Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:45.998884 containerd[1785]: time="2025-05-08T23:53:45.998820916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:45.999763 containerd[1785]: time="2025-05-08T23:53:45.999625358Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 602.714703ms" May 8 23:53:45.999763 containerd[1785]: time="2025-05-08T23:53:45.999662958Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 8 23:53:46.021045 containerd[1785]: time="2025-05-08T23:53:46.020927590Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 8 23:53:46.653770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1234448871.mount: Deactivated successfully. May 8 23:53:49.057416 containerd[1785]: time="2025-05-08T23:53:49.056479062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:49.059350 containerd[1785]: time="2025-05-08T23:53:49.059062551Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" May 8 23:53:49.062542 containerd[1785]: time="2025-05-08T23:53:49.062494123Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:49.067939 containerd[1785]: time="2025-05-08T23:53:49.067875022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:49.069212 containerd[1785]: time="2025-05-08T23:53:49.069081506Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.048117036s" May 8 23:53:49.069212 containerd[1785]: time="2025-05-08T23:53:49.069117506Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 8 23:53:50.913120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
May 8 23:53:50.921120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:53:51.024198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:53:51.036769 (kubelet)[2757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 23:53:51.080997 kubelet[2757]: E0508 23:53:51.080952 2757 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 23:53:51.084148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 23:53:51.084429 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 23:53:53.058456 kernel: hv_balloon: Max. dynamic memory size: 4096 MB May 8 23:53:53.261238 update_engine[1729]: I20250508 23:53:53.260612 1729 update_attempter.cc:509] Updating boot flags... May 8 23:53:53.331509 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (2780) May 8 23:53:55.596681 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:53:55.608692 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:53:55.628871 systemd[1]: Reloading requested from client PID 2835 ('systemctl') (unit session-9.scope)... May 8 23:53:55.629019 systemd[1]: Reloading... May 8 23:53:55.742473 zram_generator::config[2871]: No configuration found. May 8 23:53:55.862502 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 23:53:55.944707 systemd[1]: Reloading finished in 315 ms. May 8 23:53:55.998766 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 8 23:53:55.998861 systemd[1]: kubelet.service: Failed with result 'signal'. May 8 23:53:55.999177 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:53:56.004769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:53:56.108171 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:53:56.118755 (kubelet)[2942]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 23:53:56.159325 kubelet[2942]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 23:53:56.159839 kubelet[2942]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 23:53:56.159839 kubelet[2942]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 8 23:53:56.161470 kubelet[2942]: I0508 23:53:56.160733 2942 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 23:53:56.597559 kubelet[2942]: I0508 23:53:56.597511 2942 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 23:53:56.597559 kubelet[2942]: I0508 23:53:56.597551 2942 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 23:53:56.597798 kubelet[2942]: I0508 23:53:56.597778 2942 server.go:927] "Client rotation is on, will bootstrap in background" May 8 23:53:56.609545 kubelet[2942]: E0508 23:53:56.609496 2942 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:53:56.609942 kubelet[2942]: I0508 23:53:56.609830 2942 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 23:53:56.618112 kubelet[2942]: I0508 23:53:56.618064 2942 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 8 23:53:56.620417 kubelet[2942]: I0508 23:53:56.619749 2942 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 23:53:56.620417 kubelet[2942]: I0508 23:53:56.619823 2942 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152.2.3-n-71d56f534c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 23:53:56.620417 kubelet[2942]: I0508 23:53:56.620105 2942 topology_manager.go:138] "Creating topology manager with none policy" May 8 23:53:56.620417 kubelet[2942]: I0508 23:53:56.620116 2942 container_manager_linux.go:301] "Creating device plugin manager" May 8 23:53:56.620697 kubelet[2942]: I0508 23:53:56.620263 2942 state_mem.go:36] "Initialized new in-memory state store" 
May 8 23:53:56.621619 kubelet[2942]: I0508 23:53:56.621215 2942 kubelet.go:400] "Attempting to sync node with API server" May 8 23:53:56.621619 kubelet[2942]: I0508 23:53:56.621241 2942 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 23:53:56.621619 kubelet[2942]: I0508 23:53:56.621277 2942 kubelet.go:312] "Adding apiserver pod source" May 8 23:53:56.621619 kubelet[2942]: I0508 23:53:56.621298 2942 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 23:53:56.624044 kubelet[2942]: W0508 23:53:56.623995 2942 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.3-n-71d56f534c&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:53:56.624165 kubelet[2942]: E0508 23:53:56.624151 2942 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.3-n-71d56f534c&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:53:56.624290 kubelet[2942]: W0508 23:53:56.624265 2942 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:53:56.624362 kubelet[2942]: E0508 23:53:56.624352 2942 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:53:56.624801 kubelet[2942]: I0508 23:53:56.624781 2942 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 23:53:56.625556 kubelet[2942]: I0508 23:53:56.625034 2942 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 23:53:56.625556 kubelet[2942]: W0508 23:53:56.625080 2942 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 8 23:53:56.626505 kubelet[2942]: I0508 23:53:56.626489 2942 server.go:1264] "Started kubelet" May 8 23:53:56.630559 kubelet[2942]: E0508 23:53:56.630395 2942 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.33:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.33:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.3-n-71d56f534c.183db272cd16dec3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.3-n-71d56f534c,UID:ci-4152.2.3-n-71d56f534c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.3-n-71d56f534c,},FirstTimestamp:2025-05-08 23:53:56.626464451 +0000 UTC m=+0.504533637,LastTimestamp:2025-05-08 23:53:56.626464451 +0000 UTC m=+0.504533637,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.3-n-71d56f534c,}" May 8 23:53:56.631498 kubelet[2942]: I0508 23:53:56.631161 2942 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 23:53:56.631566 kubelet[2942]: I0508 23:53:56.631518 2942 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 23:53:56.631601 kubelet[2942]: I0508 23:53:56.631564 2942 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 23:53:56.632176 kubelet[2942]: I0508 23:53:56.632157 2942 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 23:53:56.633395 kubelet[2942]: I0508 23:53:56.633351 2942 server.go:455] "Adding debug handlers to kubelet server" May 8 23:53:56.636230 kubelet[2942]: I0508 23:53:56.635749 2942 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 23:53:56.636230 kubelet[2942]: I0508 23:53:56.635839 2942 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 23:53:56.636230 kubelet[2942]: I0508 23:53:56.635896 2942 reconciler.go:26] "Reconciler: start to sync state" May 8 23:53:56.636413 kubelet[2942]: W0508 23:53:56.636239 2942 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:53:56.636413 kubelet[2942]: E0508 23:53:56.636279 2942 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:53:56.636413 kubelet[2942]: E0508 23:53:56.636388 2942 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 23:53:56.637322 kubelet[2942]: E0508 23:53:56.636864 2942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.3-n-71d56f534c?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="200ms" May 8 23:53:56.639724 kubelet[2942]: I0508 23:53:56.639691 2942 factory.go:221] Registration of the systemd container factory successfully May 8 23:53:56.639822 kubelet[2942]: I0508 23:53:56.639809 2942 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 23:53:56.642681 kubelet[2942]: I0508 23:53:56.642649 2942 factory.go:221] Registration of the containerd container factory successfully May 8 23:53:56.652859 kubelet[2942]: I0508 23:53:56.652222 2942 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 23:53:56.655418 kubelet[2942]: I0508 23:53:56.655388 2942 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 23:53:56.655950 kubelet[2942]: I0508 23:53:56.655583 2942 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 23:53:56.655950 kubelet[2942]: I0508 23:53:56.655609 2942 kubelet.go:2337] "Starting kubelet main sync loop" May 8 23:53:56.655950 kubelet[2942]: E0508 23:53:56.655656 2942 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 23:53:56.663185 kubelet[2942]: W0508 23:53:56.663136 2942 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:53:56.663348 kubelet[2942]: E0508 23:53:56.663336 2942 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:53:56.756094 kubelet[2942]: E0508 23:53:56.756059 2942 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 8 23:53:56.774242 kubelet[2942]: I0508 23:53:56.774214 2942 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.3-n-71d56f534c" May 8 23:53:56.774815 kubelet[2942]: E0508 23:53:56.774786 2942 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4152.2.3-n-71d56f534c" May 8 23:53:56.775135 kubelet[2942]: I0508 23:53:56.775058 2942 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 23:53:56.775135 kubelet[2942]: I0508 23:53:56.775070 2942 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 23:53:56.775135 kubelet[2942]: I0508 23:53:56.775095 2942 state_mem.go:36] "Initialized new in-memory state store" May 8 23:53:56.780986 kubelet[2942]: I0508 23:53:56.780846 2942 policy_none.go:49] "None policy: Start" May 8 23:53:56.781560 kubelet[2942]: I0508 23:53:56.781527 2942 memory_manager.go:170] "Starting 
memorymanager" policy="None" May 8 23:53:56.781560 kubelet[2942]: I0508 23:53:56.781551 2942 state_mem.go:35] "Initializing new in-memory state store" May 8 23:53:56.789890 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 23:53:56.804132 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 23:53:56.807602 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 8 23:53:56.820593 kubelet[2942]: I0508 23:53:56.818524 2942 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 23:53:56.820593 kubelet[2942]: I0508 23:53:56.818745 2942 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 23:53:56.820593 kubelet[2942]: I0508 23:53:56.819237 2942 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 23:53:56.825723 kubelet[2942]: E0508 23:53:56.825684 2942 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:53:56.837893 kubelet[2942]: E0508 23:53:56.837849 2942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.3-n-71d56f534c?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="400ms" May 8 23:53:56.957314 kubelet[2942]: I0508 23:53:56.957262 2942 topology_manager.go:215] "Topology Admit Handler" podUID="0d6aa4358ad648f75a5a8e9fb6a8d22c" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.3-n-71d56f534c" May 8 23:53:56.958983 kubelet[2942]: I0508 23:53:56.958944 2942 topology_manager.go:215] "Topology Admit Handler" podUID="ae68286796df0ed24304202ef64c13b0" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.3-n-71d56f534c" May 8 23:53:56.960936 kubelet[2942]: I0508 23:53:56.960531 2942 topology_manager.go:215] "Topology Admit Handler" podUID="ab6a83e02c408d364880c069888f9ded" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.3-n-71d56f534c" May 8 23:53:56.967730 systemd[1]: Created slice kubepods-burstable-pod0d6aa4358ad648f75a5a8e9fb6a8d22c.slice - libcontainer container kubepods-burstable-pod0d6aa4358ad648f75a5a8e9fb6a8d22c.slice. May 8 23:53:56.978395 kubelet[2942]: I0508 23:53:56.978363 2942 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.3-n-71d56f534c" May 8 23:53:56.978733 kubelet[2942]: E0508 23:53:56.978685 2942 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4152.2.3-n-71d56f534c" May 8 23:53:56.982203 systemd[1]: Created slice kubepods-burstable-podae68286796df0ed24304202ef64c13b0.slice - libcontainer container kubepods-burstable-podae68286796df0ed24304202ef64c13b0.slice. May 8 23:53:56.986152 systemd[1]: Created slice kubepods-burstable-podab6a83e02c408d364880c069888f9ded.slice - libcontainer container kubepods-burstable-podab6a83e02c408d364880c069888f9ded.slice. 
May 8 23:53:57.038879 kubelet[2942]: I0508 23:53:57.038836 2942 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae68286796df0ed24304202ef64c13b0-ca-certs\") pod \"kube-controller-manager-ci-4152.2.3-n-71d56f534c\" (UID: \"ae68286796df0ed24304202ef64c13b0\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-n-71d56f534c" May 8 23:53:57.038879 kubelet[2942]: I0508 23:53:57.038878 2942 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae68286796df0ed24304202ef64c13b0-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.3-n-71d56f534c\" (UID: \"ae68286796df0ed24304202ef64c13b0\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-n-71d56f534c" May 8 23:53:57.039028 kubelet[2942]: I0508 23:53:57.038900 2942 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae68286796df0ed24304202ef64c13b0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.3-n-71d56f534c\" (UID: \"ae68286796df0ed24304202ef64c13b0\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-n-71d56f534c" May 8 23:53:57.039028 kubelet[2942]: I0508 23:53:57.038925 2942 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab6a83e02c408d364880c069888f9ded-kubeconfig\") pod \"kube-scheduler-ci-4152.2.3-n-71d56f534c\" (UID: \"ab6a83e02c408d364880c069888f9ded\") " pod="kube-system/kube-scheduler-ci-4152.2.3-n-71d56f534c" May 8 23:53:57.039028 kubelet[2942]: I0508 23:53:57.038942 2942 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d6aa4358ad648f75a5a8e9fb6a8d22c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.3-n-71d56f534c\" (UID: \"0d6aa4358ad648f75a5a8e9fb6a8d22c\") " pod="kube-system/kube-apiserver-ci-4152.2.3-n-71d56f534c" May 8 23:53:57.039028 kubelet[2942]: I0508 23:53:57.038957 2942 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ae68286796df0ed24304202ef64c13b0-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.3-n-71d56f534c\" (UID: \"ae68286796df0ed24304202ef64c13b0\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-n-71d56f534c" May 8 23:53:57.039028 kubelet[2942]: I0508 23:53:57.038973 2942 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae68286796df0ed24304202ef64c13b0-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.3-n-71d56f534c\" (UID: \"ae68286796df0ed24304202ef64c13b0\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-n-71d56f534c" May 8 23:53:57.039136 kubelet[2942]: I0508 23:53:57.038991 2942 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d6aa4358ad648f75a5a8e9fb6a8d22c-ca-certs\") pod \"kube-apiserver-ci-4152.2.3-n-71d56f534c\" (UID: \"0d6aa4358ad648f75a5a8e9fb6a8d22c\") " pod="kube-system/kube-apiserver-ci-4152.2.3-n-71d56f534c" May 8 23:53:57.039136 kubelet[2942]: I0508 23:53:57.039005 2942 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d6aa4358ad648f75a5a8e9fb6a8d22c-k8s-certs\") pod \"kube-apiserver-ci-4152.2.3-n-71d56f534c\" (UID: \"0d6aa4358ad648f75a5a8e9fb6a8d22c\") " pod="kube-system/kube-apiserver-ci-4152.2.3-n-71d56f534c" May 8 23:53:57.128899 kubelet[2942]: E0508 23:53:57.128780 2942 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.33:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.33:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.3-n-71d56f534c.183db272cd16dec3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.3-n-71d56f534c,UID:ci-4152.2.3-n-71d56f534c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.3-n-71d56f534c,},FirstTimestamp:2025-05-08 23:53:56.626464451 +0000 UTC m=+0.504533637,LastTimestamp:2025-05-08 23:53:56.626464451 +0000 UTC m=+0.504533637,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.3-n-71d56f534c,}" May 8 23:53:57.239471 kubelet[2942]: E0508 23:53:57.239309 2942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.3-n-71d56f534c?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="800ms" May 8 23:53:57.278323 containerd[1785]: time="2025-05-08T23:53:57.278278211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.3-n-71d56f534c,Uid:0d6aa4358ad648f75a5a8e9fb6a8d22c,Namespace:kube-system,Attempt:0,}" May 8 23:53:57.286268 containerd[1785]: time="2025-05-08T23:53:57.286000984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.3-n-71d56f534c,Uid:ae68286796df0ed24304202ef64c13b0,Namespace:kube-system,Attempt:0,}" May 8 23:53:57.289332 containerd[1785]: time="2025-05-08T23:53:57.289286469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.3-n-71d56f534c,Uid:ab6a83e02c408d364880c069888f9ded,Namespace:kube-system,Attempt:0,}" May 8 23:53:57.381215 kubelet[2942]: I0508 23:53:57.381187 2942 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.3-n-71d56f534c" May 8 23:53:57.381545 kubelet[2942]: E0508 23:53:57.381517 2942 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4152.2.3-n-71d56f534c" May 8 23:53:57.873883 kubelet[2942]: W0508 23:53:57.873799 2942 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:53:57.873883 kubelet[2942]: E0508 23:53:57.873889 2942 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:53:57.906517 kubelet[2942]: W0508 23:53:57.906459 2942 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: 
Get "https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.3-n-71d56f534c&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:53:57.906517 kubelet[2942]: E0508 23:53:57.906518 2942 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.3-n-71d56f534c&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:53:58.009146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount768805150.mount: Deactivated successfully. May 8 23:53:58.034262 containerd[1785]: time="2025-05-08T23:53:58.034221063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:53:58.038990 containerd[1785]: time="2025-05-08T23:53:58.038866071Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" May 8 23:53:58.040413 kubelet[2942]: E0508 23:53:58.040369 2942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.3-n-71d56f534c?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="1.6s" May 8 23:53:58.053482 containerd[1785]: time="2025-05-08T23:53:58.053137454Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:53:58.055788 containerd[1785]: time="2025-05-08T23:53:58.055744619Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:53:58.058029 kubelet[2942]: W0508 23:53:58.057949 2942 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:53:58.058029 kubelet[2942]: E0508 23:53:58.058011 2942 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:53:58.059339 containerd[1785]: time="2025-05-08T23:53:58.059288384Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 23:53:58.106521 kubelet[2942]: W0508 23:53:58.106466 2942 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:53:58.106592 kubelet[2942]: E0508 23:53:58.106529 2942 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:53:58.108633 containerd[1785]: 
time="2025-05-08T23:53:58.108567546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:53:58.109393 containerd[1785]: time="2025-05-08T23:53:58.109129507Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 830.766816ms" May 8 23:53:58.113468 containerd[1785]: time="2025-05-08T23:53:58.113394514Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:53:58.156936 containerd[1785]: time="2025-05-08T23:53:58.156814906Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 23:53:58.184211 kubelet[2942]: I0508 23:53:58.183876 2942 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.3-n-71d56f534c" May 8 23:53:58.184211 kubelet[2942]: E0508 23:53:58.184165 2942 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4152.2.3-n-71d56f534c" May 8 23:53:58.204176 containerd[1785]: time="2025-05-08T23:53:58.204130544Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 914.768235ms" May 8 23:53:58.460544 containerd[1785]: time="2025-05-08T23:53:58.460429729Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.174347425s" May 8 23:53:58.709714 kubelet[2942]: E0508 23:53:58.709681 2942 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:53:59.641225 kubelet[2942]: E0508 23:53:59.641173 2942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.3-n-71d56f534c?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="3.2s" May 8 23:53:59.786431 kubelet[2942]: I0508 23:53:59.786377 2942 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.3-n-71d56f534c" May 8 23:53:59.786771 kubelet[2942]: E0508 23:53:59.786751 2942 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" 
node="ci-4152.2.3-n-71d56f534c" May 8 23:53:59.890762 kubelet[2942]: W0508 23:53:59.890730 2942 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:53:59.890832 kubelet[2942]: E0508 23:53:59.890769 2942 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:54:00.243665 kubelet[2942]: W0508 23:54:00.243604 2942 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:54:00.243665 kubelet[2942]: E0508 23:54:00.243646 2942 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:54:00.885137 kubelet[2942]: W0508 23:54:00.885075 2942 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.3-n-71d56f534c&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:54:00.885137 kubelet[2942]: E0508 23:54:00.885116 2942 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.3-n-71d56f534c&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:54:01.169031 kubelet[2942]: W0508 23:54:01.168999 2942 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:54:01.169134 kubelet[2942]: E0508 23:54:01.169044 2942 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:54:02.842471 kubelet[2942]: E0508 23:54:02.842410 2942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.3-n-71d56f534c?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="6.4s" May 8 23:54:02.988960 kubelet[2942]: I0508 23:54:02.988581 2942 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.3-n-71d56f534c" May 8 23:54:02.989153 kubelet[2942]: E0508 23:54:02.989101 2942 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4152.2.3-n-71d56f534c" May 8 23:54:03.070317 kubelet[2942]: E0508 23:54:03.070284 2942 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting 
a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:54:05.501714 kubelet[2942]: W0508 23:54:05.501651 2942 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:54:05.501714 kubelet[2942]: E0508 23:54:05.501694 2942 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:54:05.922625 kubelet[2942]: W0508 23:54:05.922593 2942 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:54:05.922750 kubelet[2942]: E0508 23:54:05.922634 2942 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:54:06.550838 kubelet[2942]: W0508 23:54:06.550801 2942 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.3-n-71d56f534c&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:54:06.550838 kubelet[2942]: E0508 23:54:06.550843 2942 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.3-n-71d56f534c&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:54:06.826894 kubelet[2942]: E0508 23:54:06.826787 2942 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:06.954943 kubelet[2942]: W0508 23:54:06.954903 2942 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:54:06.954943 kubelet[2942]: E0508 23:54:06.954948 2942 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused May 8 23:54:07.115108 containerd[1785]: time="2025-05-08T23:54:07.113393315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:54:07.116033 containerd[1785]: time="2025-05-08T23:54:07.114995037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:54:07.116275 containerd[1785]: time="2025-05-08T23:54:07.116126119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:07.116368 containerd[1785]: time="2025-05-08T23:54:07.116245239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:07.126169 containerd[1785]: time="2025-05-08T23:54:07.125903655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:54:07.126169 containerd[1785]: time="2025-05-08T23:54:07.125976095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:54:07.126169 containerd[1785]: time="2025-05-08T23:54:07.125992295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:07.126425 containerd[1785]: time="2025-05-08T23:54:07.126275295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:07.131050 kubelet[2942]: E0508 23:54:07.130910 2942 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.33:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.33:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.3-n-71d56f534c.183db272cd16dec3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.3-n-71d56f534c,UID:ci-4152.2.3-n-71d56f534c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.3-n-71d56f534c,},FirstTimestamp:2025-05-08 23:53:56.626464451 +0000 UTC m=+0.504533637,LastTimestamp:2025-05-08 23:53:56.626464451 +0000 UTC m=+0.504533637,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.3-n-71d56f534c,}" May 8 23:54:07.134744 containerd[1785]: time="2025-05-08T23:54:07.134525508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:54:07.134744 containerd[1785]: time="2025-05-08T23:54:07.134593268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:54:07.134744 containerd[1785]: time="2025-05-08T23:54:07.134610148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:07.134744 containerd[1785]: time="2025-05-08T23:54:07.134695988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:07.146827 systemd[1]: Started cri-containerd-9fb5e512f136233493d2fc07e165032173f50a739ec45c24609c5e106fa108ce.scope - libcontainer container 9fb5e512f136233493d2fc07e165032173f50a739ec45c24609c5e106fa108ce. May 8 23:54:07.168662 systemd[1]: Started cri-containerd-3fdc98e4bf364e072fe26e7ef158413c233f5258e2ce4d25e233104e4ff1ac78.scope - libcontainer container 3fdc98e4bf364e072fe26e7ef158413c233f5258e2ce4d25e233104e4ff1ac78. 
May 8 23:54:07.170680 systemd[1]: Started cri-containerd-744d39445be1b92dcd9d58684419a674fb703c7e463e59d12675bdf9139b18b1.scope - libcontainer container 744d39445be1b92dcd9d58684419a674fb703c7e463e59d12675bdf9139b18b1. May 8 23:54:07.210501 containerd[1785]: time="2025-05-08T23:54:07.210246908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.3-n-71d56f534c,Uid:ae68286796df0ed24304202ef64c13b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fb5e512f136233493d2fc07e165032173f50a739ec45c24609c5e106fa108ce\"" May 8 23:54:07.221324 containerd[1785]: time="2025-05-08T23:54:07.221066325Z" level=info msg="CreateContainer within sandbox \"9fb5e512f136233493d2fc07e165032173f50a739ec45c24609c5e106fa108ce\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 23:54:07.228850 containerd[1785]: time="2025-05-08T23:54:07.228603017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.3-n-71d56f534c,Uid:ab6a83e02c408d364880c069888f9ded,Namespace:kube-system,Attempt:0,} returns sandbox id \"744d39445be1b92dcd9d58684419a674fb703c7e463e59d12675bdf9139b18b1\"" May 8 23:54:07.229309 containerd[1785]: time="2025-05-08T23:54:07.229244098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.3-n-71d56f534c,Uid:0d6aa4358ad648f75a5a8e9fb6a8d22c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fdc98e4bf364e072fe26e7ef158413c233f5258e2ce4d25e233104e4ff1ac78\"" May 8 23:54:07.232288 containerd[1785]: time="2025-05-08T23:54:07.232251903Z" level=info msg="CreateContainer within sandbox \"744d39445be1b92dcd9d58684419a674fb703c7e463e59d12675bdf9139b18b1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 23:54:07.232483 containerd[1785]: time="2025-05-08T23:54:07.232431223Z" level=info msg="CreateContainer within sandbox \"3fdc98e4bf364e072fe26e7ef158413c233f5258e2ce4d25e233104e4ff1ac78\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 23:54:07.306548 containerd[1785]: time="2025-05-08T23:54:07.306498820Z" level=info msg="CreateContainer within sandbox \"9fb5e512f136233493d2fc07e165032173f50a739ec45c24609c5e106fa108ce\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"76500dc4533f556ba7b02f7ab1d16bea18dba76f52e3b1b94ee9c3e6d47e3c67\"" May 8 23:54:07.307180 containerd[1785]: time="2025-05-08T23:54:07.307154581Z" level=info msg="StartContainer for \"76500dc4533f556ba7b02f7ab1d16bea18dba76f52e3b1b94ee9c3e6d47e3c67\"" May 8 23:54:07.316526 containerd[1785]: time="2025-05-08T23:54:07.316475116Z" level=info msg="CreateContainer within sandbox \"744d39445be1b92dcd9d58684419a674fb703c7e463e59d12675bdf9139b18b1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"037d0eeae5c6c344cb456f34d882fef00a3addb4107b670dbfb8ab5dfbd85e5b\"" May 8 23:54:07.317012 containerd[1785]: time="2025-05-08T23:54:07.316982597Z" level=info msg="StartContainer for \"037d0eeae5c6c344cb456f34d882fef00a3addb4107b670dbfb8ab5dfbd85e5b\"" May 8 23:54:07.325823 containerd[1785]: time="2025-05-08T23:54:07.325523410Z" level=info msg="CreateContainer within sandbox \"3fdc98e4bf364e072fe26e7ef158413c233f5258e2ce4d25e233104e4ff1ac78\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d381c316fabaf3a954723b02c38dcec44f1ee189029edcfbc4fff029bb43d7e1\"" May 8 23:54:07.326764 containerd[1785]: time="2025-05-08T23:54:07.326729612Z" level=info msg="StartContainer for 
\"d381c316fabaf3a954723b02c38dcec44f1ee189029edcfbc4fff029bb43d7e1\"" May 8 23:54:07.334635 systemd[1]: Started cri-containerd-76500dc4533f556ba7b02f7ab1d16bea18dba76f52e3b1b94ee9c3e6d47e3c67.scope - libcontainer container 76500dc4533f556ba7b02f7ab1d16bea18dba76f52e3b1b94ee9c3e6d47e3c67. May 8 23:54:07.352628 systemd[1]: Started cri-containerd-037d0eeae5c6c344cb456f34d882fef00a3addb4107b670dbfb8ab5dfbd85e5b.scope - libcontainer container 037d0eeae5c6c344cb456f34d882fef00a3addb4107b670dbfb8ab5dfbd85e5b. May 8 23:54:07.371816 systemd[1]: Started cri-containerd-d381c316fabaf3a954723b02c38dcec44f1ee189029edcfbc4fff029bb43d7e1.scope - libcontainer container d381c316fabaf3a954723b02c38dcec44f1ee189029edcfbc4fff029bb43d7e1. May 8 23:54:07.411767 containerd[1785]: time="2025-05-08T23:54:07.411704667Z" level=info msg="StartContainer for \"76500dc4533f556ba7b02f7ab1d16bea18dba76f52e3b1b94ee9c3e6d47e3c67\" returns successfully" May 8 23:54:07.430714 containerd[1785]: time="2025-05-08T23:54:07.430664617Z" level=info msg="StartContainer for \"037d0eeae5c6c344cb456f34d882fef00a3addb4107b670dbfb8ab5dfbd85e5b\" returns successfully" May 8 23:54:07.438643 containerd[1785]: time="2025-05-08T23:54:07.438587949Z" level=info msg="StartContainer for \"d381c316fabaf3a954723b02c38dcec44f1ee189029edcfbc4fff029bb43d7e1\" returns successfully" May 8 23:54:09.391054 kubelet[2942]: I0508 23:54:09.390984 2942 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.3-n-71d56f534c" May 8 23:54:10.006314 kubelet[2942]: E0508 23:54:10.006262 2942 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152.2.3-n-71d56f534c\" not found" node="ci-4152.2.3-n-71d56f534c" May 8 23:54:10.035046 kubelet[2942]: I0508 23:54:10.034709 2942 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.3-n-71d56f534c" May 8 23:54:10.056856 kubelet[2942]: E0508 23:54:10.056813 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:10.157287 kubelet[2942]: E0508 23:54:10.157239 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:10.257775 kubelet[2942]: E0508 23:54:10.257645 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:10.358131 kubelet[2942]: E0508 23:54:10.358072 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:10.458589 kubelet[2942]: E0508 23:54:10.458540 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:10.559070 kubelet[2942]: E0508 23:54:10.558961 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:10.659992 kubelet[2942]: E0508 23:54:10.659950 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:10.760858 kubelet[2942]: E0508 23:54:10.760807 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:10.861823 kubelet[2942]: E0508 23:54:10.861690 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not 
found" May 8 23:54:10.962798 kubelet[2942]: E0508 23:54:10.962734 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:11.063108 kubelet[2942]: E0508 23:54:11.063064 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:11.163520 kubelet[2942]: E0508 23:54:11.163485 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:11.264364 kubelet[2942]: E0508 23:54:11.264290 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:11.365722 kubelet[2942]: E0508 23:54:11.365669 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:11.466341 kubelet[2942]: E0508 23:54:11.466204 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:11.567265 kubelet[2942]: E0508 23:54:11.567219 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:11.668282 kubelet[2942]: E0508 23:54:11.668240 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:11.769067 kubelet[2942]: E0508 23:54:11.768946 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:11.869537 kubelet[2942]: E0508 23:54:11.869485 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:11.970067 kubelet[2942]: E0508 23:54:11.970024 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:12.044909 systemd[1]: Reloading requested from client PID 3211 ('systemctl') (unit session-9.scope)... May 8 23:54:12.045245 systemd[1]: Reloading... May 8 23:54:12.070860 kubelet[2942]: E0508 23:54:12.070709 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:12.133609 zram_generator::config[3254]: No configuration found. May 8 23:54:12.171511 kubelet[2942]: E0508 23:54:12.171468 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:12.241887 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 23:54:12.272016 kubelet[2942]: E0508 23:54:12.271973 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:12.344956 systemd[1]: Reloading finished in 299 ms. 
May 8 23:54:12.374545 kubelet[2942]: E0508 23:54:12.374505 2942 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-71d56f534c\" not found" May 8 23:54:12.380977 kubelet[2942]: E0508 23:54:12.379885 2942 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4152.2.3-n-71d56f534c.183db272cd16dec3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.3-n-71d56f534c,UID:ci-4152.2.3-n-71d56f534c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.3-n-71d56f534c,},FirstTimestamp:2025-05-08 23:53:56.626464451 +0000 UTC m=+0.504533637,LastTimestamp:2025-05-08 23:53:56.626464451 +0000 UTC m=+0.504533637,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.3-n-71d56f534c,}" May 8 23:54:12.380384 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:54:12.397000 systemd[1]: kubelet.service: Deactivated successfully. May 8 23:54:12.397542 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:54:12.407558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:54:12.675695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:54:12.688823 (kubelet)[3315]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 23:54:12.749427 kubelet[3315]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 23:54:12.749427 kubelet[3315]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 23:54:12.749427 kubelet[3315]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 23:54:12.749828 kubelet[3315]: I0508 23:54:12.749522 3315 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 23:54:12.755521 kubelet[3315]: I0508 23:54:12.755456 3315 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 23:54:12.755521 kubelet[3315]: I0508 23:54:12.755481 3315 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 23:54:12.755728 kubelet[3315]: I0508 23:54:12.755686 3315 server.go:927] "Client rotation is on, will bootstrap in background" May 8 23:54:12.759470 kubelet[3315]: I0508 23:54:12.759147 3315 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 23:54:12.761061 kubelet[3315]: I0508 23:54:12.761010 3315 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 23:54:12.768565 kubelet[3315]: I0508 23:54:12.768521 3315 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 23:54:12.768736 kubelet[3315]: I0508 23:54:12.768703 3315 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 23:54:12.768918 kubelet[3315]: I0508 23:54:12.768734 3315 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152.2.3-n-71d56f534c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 23:54:12.768918 kubelet[3315]: I0508 23:54:12.768912 3315 topology_manager.go:138] "Creating topology manager with none policy" May 8 23:54:12.768918 kubelet[3315]: I0508 23:54:12.768921 3315 container_manager_linux.go:301] "Creating device plugin manager" May 8 23:54:12.769069 kubelet[3315]: I0508 23:54:12.768952 3315 state_mem.go:36] "Initialized new in-memory state store" May 8 23:54:12.769069 kubelet[3315]: I0508 23:54:12.769057 3315 kubelet.go:400] "Attempting to sync node with API server" May 8 23:54:12.769069 kubelet[3315]: I0508 23:54:12.769070 3315 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 23:54:12.769134 kubelet[3315]: I0508 23:54:12.769096 3315 kubelet.go:312] "Adding apiserver pod source" May 8 23:54:12.769134 kubelet[3315]: I0508 23:54:12.769114 3315 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 23:54:12.772842 kubelet[3315]: I0508 23:54:12.772809 3315 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 23:54:12.772987 kubelet[3315]: I0508 23:54:12.772970 3315 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 23:54:12.773363 kubelet[3315]: I0508 23:54:12.773343 3315 server.go:1264] "Started kubelet" May 8 23:54:12.775096 kubelet[3315]: I0508 23:54:12.775066 3315 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 23:54:12.794044 kubelet[3315]: I0508 23:54:12.793500 3315 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 23:54:12.794675 kubelet[3315]: I0508 23:54:12.794656 3315 server.go:455] "Adding debug handlers to 
kubelet server" May 8 23:54:12.795990 kubelet[3315]: I0508 23:54:12.795909 3315 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 23:54:12.796254 kubelet[3315]: I0508 23:54:12.796239 3315 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 23:54:12.797930 kubelet[3315]: I0508 23:54:12.797908 3315 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 23:54:12.798450 kubelet[3315]: I0508 23:54:12.798417 3315 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 23:54:12.798674 kubelet[3315]: I0508 23:54:12.798661 3315 reconciler.go:26] "Reconciler: start to sync state" May 8 23:54:12.810482 kubelet[3315]: I0508 23:54:12.809549 3315 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 23:54:12.813235 kubelet[3315]: I0508 23:54:12.813202 3315 factory.go:221] Registration of the systemd container factory successfully May 8 23:54:12.813606 kubelet[3315]: I0508 23:54:12.813314 3315 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 23:54:12.813916 kubelet[3315]: I0508 23:54:12.813824 3315 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 23:54:12.813916 kubelet[3315]: I0508 23:54:12.813860 3315 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 23:54:12.813916 kubelet[3315]: I0508 23:54:12.813874 3315 kubelet.go:2337] "Starting kubelet main sync loop" May 8 23:54:12.813916 kubelet[3315]: E0508 23:54:12.813909 3315 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 23:54:12.817170 kubelet[3315]: E0508 23:54:12.817126 3315 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 23:54:12.819479 kubelet[3315]: I0508 23:54:12.819269 3315 factory.go:221] Registration of the containerd container factory successfully May 8 23:54:12.865467 kubelet[3315]: I0508 23:54:12.865424 3315 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 23:54:12.865909 kubelet[3315]: I0508 23:54:12.865627 3315 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 23:54:12.865909 kubelet[3315]: I0508 23:54:12.865654 3315 state_mem.go:36] "Initialized new in-memory state store" May 8 23:54:12.865909 kubelet[3315]: I0508 23:54:12.865810 3315 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 23:54:12.865909 kubelet[3315]: I0508 23:54:12.865820 3315 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 23:54:12.865909 kubelet[3315]: I0508 23:54:12.865839 3315 policy_none.go:49] "None policy: Start" May 8 23:54:12.866820 kubelet[3315]: I0508 23:54:12.866803 3315 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 23:54:12.867488 kubelet[3315]: I0508 23:54:12.866924 3315 state_mem.go:35] "Initializing new in-memory state store" May 8 23:54:12.867488 kubelet[3315]: I0508 23:54:12.867070 3315 state_mem.go:75] "Updated machine memory state" May 8 23:54:12.872206 kubelet[3315]: I0508 23:54:12.871792 3315 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 23:54:12.872206 kubelet[3315]: I0508 23:54:12.871984 3315 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 23:54:12.872206 kubelet[3315]: I0508 23:54:12.872109 3315 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 23:54:12.901425 kubelet[3315]: I0508 23:54:12.901389 3315 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.3-n-71d56f534c" May 8 23:54:12.914419 kubelet[3315]: I0508 23:54:12.914355 3315 topology_manager.go:215] "Topology Admit Handler" podUID="0d6aa4358ad648f75a5a8e9fb6a8d22c" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.3-n-71d56f534c" May 8 23:54:12.914600 kubelet[3315]: I0508 23:54:12.914505 3315 topology_manager.go:215] "Topology Admit Handler" podUID="ae68286796df0ed24304202ef64c13b0" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.3-n-71d56f534c" May 8 23:54:12.914600 kubelet[3315]: I0508 23:54:12.914548 3315 topology_manager.go:215] "Topology Admit Handler" podUID="ab6a83e02c408d364880c069888f9ded" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.3-n-71d56f534c" May 8 23:54:12.916824 kubelet[3315]: I0508 23:54:12.916782 3315 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152.2.3-n-71d56f534c" May 8 23:54:12.916981 kubelet[3315]: I0508 23:54:12.916871 3315 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.3-n-71d56f534c" May 8 23:54:12.927406 kubelet[3315]: W0508 23:54:12.926403 3315 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 23:54:12.931897 kubelet[3315]: W0508 23:54:12.931740 3315 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 23:54:12.932502 kubelet[3315]: W0508 23:54:12.931823 3315 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in 
surprising behavior; a DNS label is recommended: [must not contain dots] May 8 23:54:13.099668 kubelet[3315]: I0508 23:54:13.099600 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d6aa4358ad648f75a5a8e9fb6a8d22c-ca-certs\") pod \"kube-apiserver-ci-4152.2.3-n-71d56f534c\" (UID: \"0d6aa4358ad648f75a5a8e9fb6a8d22c\") " pod="kube-system/kube-apiserver-ci-4152.2.3-n-71d56f534c" May 8 23:54:13.099668 kubelet[3315]: I0508 23:54:13.099649 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d6aa4358ad648f75a5a8e9fb6a8d22c-k8s-certs\") pod \"kube-apiserver-ci-4152.2.3-n-71d56f534c\" (UID: \"0d6aa4358ad648f75a5a8e9fb6a8d22c\") " pod="kube-system/kube-apiserver-ci-4152.2.3-n-71d56f534c" May 8 23:54:13.099668 kubelet[3315]: I0508 23:54:13.099671 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d6aa4358ad648f75a5a8e9fb6a8d22c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.3-n-71d56f534c\" (UID: \"0d6aa4358ad648f75a5a8e9fb6a8d22c\") " pod="kube-system/kube-apiserver-ci-4152.2.3-n-71d56f534c" May 8 23:54:13.099859 kubelet[3315]: I0508 23:54:13.099692 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ae68286796df0ed24304202ef64c13b0-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.3-n-71d56f534c\" (UID: \"ae68286796df0ed24304202ef64c13b0\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-n-71d56f534c" May 8 23:54:13.099859 kubelet[3315]: I0508 23:54:13.099713 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab6a83e02c408d364880c069888f9ded-kubeconfig\") pod \"kube-scheduler-ci-4152.2.3-n-71d56f534c\" (UID: \"ab6a83e02c408d364880c069888f9ded\") " pod="kube-system/kube-scheduler-ci-4152.2.3-n-71d56f534c" May 8 23:54:13.099859 kubelet[3315]: I0508 23:54:13.099728 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae68286796df0ed24304202ef64c13b0-ca-certs\") pod \"kube-controller-manager-ci-4152.2.3-n-71d56f534c\" (UID: \"ae68286796df0ed24304202ef64c13b0\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-n-71d56f534c" May 8 23:54:13.099859 kubelet[3315]: I0508 23:54:13.099742 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae68286796df0ed24304202ef64c13b0-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.3-n-71d56f534c\" (UID: \"ae68286796df0ed24304202ef64c13b0\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-n-71d56f534c" May 8 23:54:13.099859 kubelet[3315]: I0508 23:54:13.099756 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae68286796df0ed24304202ef64c13b0-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.3-n-71d56f534c\" (UID: \"ae68286796df0ed24304202ef64c13b0\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-n-71d56f534c" May 8 23:54:13.099966 kubelet[3315]: I0508 23:54:13.099772 3315 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae68286796df0ed24304202ef64c13b0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.3-n-71d56f534c\" (UID: \"ae68286796df0ed24304202ef64c13b0\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-n-71d56f534c" May 8 23:54:13.770596 kubelet[3315]: I0508 23:54:13.770321 3315 apiserver.go:52] "Watching apiserver" May 8 23:54:13.799499 kubelet[3315]: I0508 23:54:13.799453 3315 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 23:54:13.879253 kubelet[3315]: I0508 23:54:13.879186 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152.2.3-n-71d56f534c" podStartSLOduration=1.879167724 podStartE2EDuration="1.879167724s" podCreationTimestamp="2025-05-08 23:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:54:13.868493907 +0000 UTC m=+1.176296557" watchObservedRunningTime="2025-05-08 23:54:13.879167724 +0000 UTC m=+1.186970374" May 8 23:54:13.890103 kubelet[3315]: I0508 23:54:13.889796 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152.2.3-n-71d56f534c" podStartSLOduration=1.8897792610000002 podStartE2EDuration="1.889779261s" podCreationTimestamp="2025-05-08 23:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:54:13.879742205 +0000 UTC m=+1.187544815" watchObservedRunningTime="2025-05-08 23:54:13.889779261 +0000 UTC m=+1.197581911" May 8 23:54:13.900757 kubelet[3315]: I0508 23:54:13.900696 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152.2.3-n-71d56f534c" podStartSLOduration=1.9006775980000001 podStartE2EDuration="1.900677598s" podCreationTimestamp="2025-05-08 23:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:54:13.889702461 +0000 UTC m=+1.197505111" watchObservedRunningTime="2025-05-08 23:54:13.900677598 +0000 UTC m=+1.208480248" May 8 23:54:15.329109 sudo[3349]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 8 23:54:15.329403 sudo[3349]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 8 23:54:15.774263 sudo[3349]: pam_unix(sudo:session): session closed for user root May 8 23:54:17.567959 sudo[2279]: pam_unix(sudo:session): session closed for user root May 8 23:54:17.643842 sshd[2278]: Connection closed by 10.200.16.10 port 45002 May 8 23:54:17.644407 sshd-session[2276]: pam_unix(sshd:session): session closed for user core May 8 23:54:17.648109 systemd[1]: sshd@6-10.200.20.33:22-10.200.16.10:45002.service: Deactivated successfully. May 8 23:54:17.651047 systemd[1]: session-9.scope: Deactivated successfully. May 8 23:54:17.652504 systemd[1]: session-9.scope: Consumed 8.470s CPU time, 187.9M memory peak, 0B memory swap peak. May 8 23:54:17.653027 systemd-logind[1722]: Session 9 logged out. Waiting for processes to exit. May 8 23:54:17.653964 systemd-logind[1722]: Removed session 9. 
May 8 23:54:25.698017 kubelet[3315]: I0508 23:54:25.697962 3315 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 23:54:25.699076 containerd[1785]: time="2025-05-08T23:54:25.698840904Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 23:54:25.699412 kubelet[3315]: I0508 23:54:25.699053 3315 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 23:54:25.742113 kubelet[3315]: I0508 23:54:25.742067 3315 topology_manager.go:215] "Topology Admit Handler" podUID="8a51bc23-d1d6-49b2-b81a-d032361f7f5c" podNamespace="kube-system" podName="kube-proxy-785th" May 8 23:54:25.751371 kubelet[3315]: I0508 23:54:25.751302 3315 topology_manager.go:215] "Topology Admit Handler" podUID="6691d159-3269-4c83-8526-b76df9680080" podNamespace="kube-system" podName="cilium-kfrr8" May 8 23:54:25.754199 systemd[1]: Created slice kubepods-besteffort-pod8a51bc23_d1d6_49b2_b81a_d032361f7f5c.slice - libcontainer container kubepods-besteffort-pod8a51bc23_d1d6_49b2_b81a_d032361f7f5c.slice. May 8 23:54:25.768099 systemd[1]: Created slice kubepods-burstable-pod6691d159_3269_4c83_8526_b76df9680080.slice - libcontainer container kubepods-burstable-pod6691d159_3269_4c83_8526_b76df9680080.slice. May 8 23:54:25.877935 kubelet[3315]: I0508 23:54:25.877793 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-cni-path\") pod \"cilium-kfrr8\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " pod="kube-system/cilium-kfrr8" May 8 23:54:25.877935 kubelet[3315]: I0508 23:54:25.877836 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlsl7\" (UniqueName: \"kubernetes.io/projected/6691d159-3269-4c83-8526-b76df9680080-kube-api-access-nlsl7\") pod \"cilium-kfrr8\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " pod="kube-system/cilium-kfrr8" May 8 23:54:25.877935 kubelet[3315]: I0508 23:54:25.877859 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-etc-cni-netd\") pod \"cilium-kfrr8\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " pod="kube-system/cilium-kfrr8" May 8 23:54:25.877935 kubelet[3315]: I0508 23:54:25.877877 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-host-proc-sys-net\") pod \"cilium-kfrr8\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " pod="kube-system/cilium-kfrr8" May 8 23:54:25.877935 kubelet[3315]: I0508 23:54:25.877893 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-hostproc\") pod \"cilium-kfrr8\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " pod="kube-system/cilium-kfrr8" May 8 23:54:25.877935 kubelet[3315]: I0508 23:54:25.877910 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-xtables-lock\") pod \"cilium-kfrr8\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " 
pod="kube-system/cilium-kfrr8" May 8 23:54:25.878456 kubelet[3315]: I0508 23:54:25.878244 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8a51bc23-d1d6-49b2-b81a-d032361f7f5c-kube-proxy\") pod \"kube-proxy-785th\" (UID: \"8a51bc23-d1d6-49b2-b81a-d032361f7f5c\") " pod="kube-system/kube-proxy-785th" May 8 23:54:25.878456 kubelet[3315]: I0508 23:54:25.878274 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-cilium-run\") pod \"cilium-kfrr8\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " pod="kube-system/cilium-kfrr8" May 8 23:54:25.878456 kubelet[3315]: I0508 23:54:25.878309 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-lib-modules\") pod \"cilium-kfrr8\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " pod="kube-system/cilium-kfrr8" May 8 23:54:25.878456 kubelet[3315]: I0508 23:54:25.878328 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6691d159-3269-4c83-8526-b76df9680080-clustermesh-secrets\") pod \"cilium-kfrr8\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " pod="kube-system/cilium-kfrr8" May 8 23:54:25.878456 kubelet[3315]: I0508 23:54:25.878345 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6691d159-3269-4c83-8526-b76df9680080-cilium-config-path\") pod \"cilium-kfrr8\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " pod="kube-system/cilium-kfrr8" May 8 23:54:25.878456 kubelet[3315]: I0508 23:54:25.878383 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a51bc23-d1d6-49b2-b81a-d032361f7f5c-lib-modules\") pod \"kube-proxy-785th\" (UID: \"8a51bc23-d1d6-49b2-b81a-d032361f7f5c\") " pod="kube-system/kube-proxy-785th" May 8 23:54:25.878619 kubelet[3315]: I0508 23:54:25.878400 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-host-proc-sys-kernel\") pod \"cilium-kfrr8\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " pod="kube-system/cilium-kfrr8" May 8 23:54:25.878619 kubelet[3315]: I0508 23:54:25.878417 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-cilium-cgroup\") pod \"cilium-kfrr8\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " pod="kube-system/cilium-kfrr8" May 8 23:54:25.878849 kubelet[3315]: I0508 23:54:25.878432 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6691d159-3269-4c83-8526-b76df9680080-hubble-tls\") pod \"cilium-kfrr8\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " pod="kube-system/cilium-kfrr8" May 8 23:54:25.878849 kubelet[3315]: I0508 23:54:25.878720 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a51bc23-d1d6-49b2-b81a-d032361f7f5c-xtables-lock\") pod \"kube-proxy-785th\" (UID: \"8a51bc23-d1d6-49b2-b81a-d032361f7f5c\") " pod="kube-system/kube-proxy-785th" May 8 23:54:25.878849 kubelet[3315]: I0508 23:54:25.878754 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8ftl\" (UniqueName: \"kubernetes.io/projected/8a51bc23-d1d6-49b2-b81a-d032361f7f5c-kube-api-access-v8ftl\") pod \"kube-proxy-785th\" (UID: \"8a51bc23-d1d6-49b2-b81a-d032361f7f5c\") " pod="kube-system/kube-proxy-785th" May 8 23:54:25.878849 kubelet[3315]: I0508 23:54:25.878775 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-bpf-maps\") pod \"cilium-kfrr8\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " pod="kube-system/cilium-kfrr8" May 8 23:54:26.008990 kubelet[3315]: E0508 23:54:26.008790 3315 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 8 23:54:26.008990 kubelet[3315]: E0508 23:54:26.008827 3315 projected.go:200] Error preparing data for projected volume kube-api-access-v8ftl for pod kube-system/kube-proxy-785th: configmap "kube-root-ca.crt" not found May 8 23:54:26.008990 kubelet[3315]: E0508 23:54:26.008907 3315 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a51bc23-d1d6-49b2-b81a-d032361f7f5c-kube-api-access-v8ftl podName:8a51bc23-d1d6-49b2-b81a-d032361f7f5c nodeName:}" failed. No retries permitted until 2025-05-08 23:54:26.508868593 +0000 UTC m=+13.816671243 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v8ftl" (UniqueName: "kubernetes.io/projected/8a51bc23-d1d6-49b2-b81a-d032361f7f5c-kube-api-access-v8ftl") pod "kube-proxy-785th" (UID: "8a51bc23-d1d6-49b2-b81a-d032361f7f5c") : configmap "kube-root-ca.crt" not found May 8 23:54:26.009482 kubelet[3315]: E0508 23:54:26.009123 3315 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 8 23:54:26.009482 kubelet[3315]: E0508 23:54:26.009139 3315 projected.go:200] Error preparing data for projected volume kube-api-access-nlsl7 for pod kube-system/cilium-kfrr8: configmap "kube-root-ca.crt" not found May 8 23:54:26.009482 kubelet[3315]: E0508 23:54:26.009167 3315 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6691d159-3269-4c83-8526-b76df9680080-kube-api-access-nlsl7 podName:6691d159-3269-4c83-8526-b76df9680080 nodeName:}" failed. No retries permitted until 2025-05-08 23:54:26.509157433 +0000 UTC m=+13.816960083 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nlsl7" (UniqueName: "kubernetes.io/projected/6691d159-3269-4c83-8526-b76df9680080-kube-api-access-nlsl7") pod "cilium-kfrr8" (UID: "6691d159-3269-4c83-8526-b76df9680080") : configmap "kube-root-ca.crt" not found May 8 23:54:26.662510 containerd[1785]: time="2025-05-08T23:54:26.662313179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-785th,Uid:8a51bc23-d1d6-49b2-b81a-d032361f7f5c,Namespace:kube-system,Attempt:0,}" May 8 23:54:26.671755 containerd[1785]: time="2025-05-08T23:54:26.671429512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kfrr8,Uid:6691d159-3269-4c83-8526-b76df9680080,Namespace:kube-system,Attempt:0,}" May 8 23:54:26.731512 containerd[1785]: time="2025-05-08T23:54:26.729156795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:54:26.732705 containerd[1785]: time="2025-05-08T23:54:26.732387480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:54:26.732847 containerd[1785]: time="2025-05-08T23:54:26.732563880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:26.732847 containerd[1785]: time="2025-05-08T23:54:26.732670160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:26.739632 kubelet[3315]: I0508 23:54:26.738959 3315 topology_manager.go:215] "Topology Admit Handler" podUID="8670c929-31bb-401f-9e83-601f9dbfaa7b" podNamespace="kube-system" podName="cilium-operator-599987898-bnz6l" May 8 23:54:26.749638 systemd[1]: Created slice kubepods-besteffort-pod8670c929_31bb_401f_9e83_601f9dbfaa7b.slice - libcontainer container kubepods-besteffort-pod8670c929_31bb_401f_9e83_601f9dbfaa7b.slice. May 8 23:54:26.754048 containerd[1785]: time="2025-05-08T23:54:26.752617789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:54:26.754048 containerd[1785]: time="2025-05-08T23:54:26.752997870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:54:26.754048 containerd[1785]: time="2025-05-08T23:54:26.753019230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:26.754772 containerd[1785]: time="2025-05-08T23:54:26.754679352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:26.771366 systemd[1]: Started cri-containerd-6ea4d8fa47a95bcc6b9ecc719d3c701fe67985f567f24b9863ce589983baf174.scope - libcontainer container 6ea4d8fa47a95bcc6b9ecc719d3c701fe67985f567f24b9863ce589983baf174. 
May 8 23:54:26.785907 kubelet[3315]: I0508 23:54:26.785791 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnbwc\" (UniqueName: \"kubernetes.io/projected/8670c929-31bb-401f-9e83-601f9dbfaa7b-kube-api-access-qnbwc\") pod \"cilium-operator-599987898-bnz6l\" (UID: \"8670c929-31bb-401f-9e83-601f9dbfaa7b\") " pod="kube-system/cilium-operator-599987898-bnz6l" May 8 23:54:26.785907 kubelet[3315]: I0508 23:54:26.785840 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8670c929-31bb-401f-9e83-601f9dbfaa7b-cilium-config-path\") pod \"cilium-operator-599987898-bnz6l\" (UID: \"8670c929-31bb-401f-9e83-601f9dbfaa7b\") " pod="kube-system/cilium-operator-599987898-bnz6l" May 8 23:54:26.790685 systemd[1]: Started cri-containerd-fe8a5dce1551f6cf8d36b4a0a50d3183b7b1f4803fb5350fef368ed2bf3e16f5.scope - libcontainer container fe8a5dce1551f6cf8d36b4a0a50d3183b7b1f4803fb5350fef368ed2bf3e16f5. May 8 23:54:26.831193 containerd[1785]: time="2025-05-08T23:54:26.831146223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kfrr8,Uid:6691d159-3269-4c83-8526-b76df9680080,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe8a5dce1551f6cf8d36b4a0a50d3183b7b1f4803fb5350fef368ed2bf3e16f5\"" May 8 23:54:26.835010 containerd[1785]: time="2025-05-08T23:54:26.834245148Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 8 23:54:26.848746 containerd[1785]: time="2025-05-08T23:54:26.848705528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-785th,Uid:8a51bc23-d1d6-49b2-b81a-d032361f7f5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ea4d8fa47a95bcc6b9ecc719d3c701fe67985f567f24b9863ce589983baf174\"" May 8 23:54:26.851813 containerd[1785]: time="2025-05-08T23:54:26.851773693Z" level=info msg="CreateContainer within sandbox \"6ea4d8fa47a95bcc6b9ecc719d3c701fe67985f567f24b9863ce589983baf174\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 23:54:26.899775 containerd[1785]: time="2025-05-08T23:54:26.899669842Z" level=info msg="CreateContainer within sandbox \"6ea4d8fa47a95bcc6b9ecc719d3c701fe67985f567f24b9863ce589983baf174\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e9f944fe76707770d58213c98e1efa19229ab36b95dfd0671e43b49e06343ca4\"" May 8 23:54:26.900712 containerd[1785]: time="2025-05-08T23:54:26.900622004Z" level=info msg="StartContainer for \"e9f944fe76707770d58213c98e1efa19229ab36b95dfd0671e43b49e06343ca4\"" May 8 23:54:26.924213 systemd[1]: Started cri-containerd-e9f944fe76707770d58213c98e1efa19229ab36b95dfd0671e43b49e06343ca4.scope - libcontainer container e9f944fe76707770d58213c98e1efa19229ab36b95dfd0671e43b49e06343ca4. May 8 23:54:26.959212 containerd[1785]: time="2025-05-08T23:54:26.959144328Z" level=info msg="StartContainer for \"e9f944fe76707770d58213c98e1efa19229ab36b95dfd0671e43b49e06343ca4\" returns successfully" May 8 23:54:27.054332 containerd[1785]: time="2025-05-08T23:54:27.054251306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-bnz6l,Uid:8670c929-31bb-401f-9e83-601f9dbfaa7b,Namespace:kube-system,Attempt:0,}" May 8 23:54:27.106067 containerd[1785]: time="2025-05-08T23:54:27.105706140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:54:27.106067 containerd[1785]: time="2025-05-08T23:54:27.105840341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:54:27.106067 containerd[1785]: time="2025-05-08T23:54:27.105860181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:27.106067 containerd[1785]: time="2025-05-08T23:54:27.105966061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:27.129115 systemd[1]: Started cri-containerd-54d859e2dd3c1f02ad508a89795f6d87fc838e8c39e05e877d240ac40a6f66aa.scope - libcontainer container 54d859e2dd3c1f02ad508a89795f6d87fc838e8c39e05e877d240ac40a6f66aa. May 8 23:54:27.170966 containerd[1785]: time="2025-05-08T23:54:27.170901995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-bnz6l,Uid:8670c929-31bb-401f-9e83-601f9dbfaa7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"54d859e2dd3c1f02ad508a89795f6d87fc838e8c39e05e877d240ac40a6f66aa\"" May 8 23:54:27.887848 kubelet[3315]: I0508 23:54:27.887702 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-785th" podStartSLOduration=2.887683552 podStartE2EDuration="2.887683552s" podCreationTimestamp="2025-05-08 23:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:54:27.887559512 +0000 UTC m=+15.195362162" watchObservedRunningTime="2025-05-08 23:54:27.887683552 +0000 UTC m=+15.195486202" May 8 23:54:31.932168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3149956540.mount: Deactivated successfully. 
May 8 23:54:33.926408 containerd[1785]: time="2025-05-08T23:54:33.925690017Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:54:33.928503 containerd[1785]: time="2025-05-08T23:54:33.928459782Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 8 23:54:33.933130 containerd[1785]: time="2025-05-08T23:54:33.933065629Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:54:33.934794 containerd[1785]: time="2025-05-08T23:54:33.934656712Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.099846524s" May 8 23:54:33.934794 containerd[1785]: time="2025-05-08T23:54:33.934694752Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 8 23:54:33.936374 containerd[1785]: time="2025-05-08T23:54:33.936155554Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 8 23:54:33.937667 containerd[1785]: time="2025-05-08T23:54:33.937511876Z" level=info msg="CreateContainer within sandbox \"fe8a5dce1551f6cf8d36b4a0a50d3183b7b1f4803fb5350fef368ed2bf3e16f5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 23:54:33.985802 containerd[1785]: time="2025-05-08T23:54:33.985713755Z" level=info msg="CreateContainer within sandbox \"fe8a5dce1551f6cf8d36b4a0a50d3183b7b1f4803fb5350fef368ed2bf3e16f5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b5b6cf4a1a80395112e0daeb29f2bdda7f90c8e972abac8e6539339c025d59b6\"" May 8 23:54:33.986470 containerd[1785]: time="2025-05-08T23:54:33.986287036Z" level=info msg="StartContainer for \"b5b6cf4a1a80395112e0daeb29f2bdda7f90c8e972abac8e6539339c025d59b6\"" May 8 23:54:34.012621 systemd[1]: Started cri-containerd-b5b6cf4a1a80395112e0daeb29f2bdda7f90c8e972abac8e6539339c025d59b6.scope - libcontainer container b5b6cf4a1a80395112e0daeb29f2bdda7f90c8e972abac8e6539339c025d59b6. May 8 23:54:34.037862 containerd[1785]: time="2025-05-08T23:54:34.037814920Z" level=info msg="StartContainer for \"b5b6cf4a1a80395112e0daeb29f2bdda7f90c8e972abac8e6539339c025d59b6\" returns successfully" May 8 23:54:34.045721 systemd[1]: cri-containerd-b5b6cf4a1a80395112e0daeb29f2bdda7f90c8e972abac8e6539339c025d59b6.scope: Deactivated successfully. May 8 23:54:34.967857 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5b6cf4a1a80395112e0daeb29f2bdda7f90c8e972abac8e6539339c025d59b6-rootfs.mount: Deactivated successfully. 
May 8 23:54:35.379233 containerd[1785]: time="2025-05-08T23:54:35.378816910Z" level=info msg="shim disconnected" id=b5b6cf4a1a80395112e0daeb29f2bdda7f90c8e972abac8e6539339c025d59b6 namespace=k8s.io May 8 23:54:35.379233 containerd[1785]: time="2025-05-08T23:54:35.378872310Z" level=warning msg="cleaning up after shim disconnected" id=b5b6cf4a1a80395112e0daeb29f2bdda7f90c8e972abac8e6539339c025d59b6 namespace=k8s.io May 8 23:54:35.379233 containerd[1785]: time="2025-05-08T23:54:35.378881110Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:54:35.894040 containerd[1785]: time="2025-05-08T23:54:35.893851391Z" level=info msg="CreateContainer within sandbox \"fe8a5dce1551f6cf8d36b4a0a50d3183b7b1f4803fb5350fef368ed2bf3e16f5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 23:54:35.930523 containerd[1785]: time="2025-05-08T23:54:35.930394811Z" level=info msg="CreateContainer within sandbox \"fe8a5dce1551f6cf8d36b4a0a50d3183b7b1f4803fb5350fef368ed2bf3e16f5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e0e8a2a271dfc68e602b8cad410df2e22b2ea609361b3cd1ef14423f4651e60b\"" May 8 23:54:35.932224 containerd[1785]: time="2025-05-08T23:54:35.932124934Z" level=info msg="StartContainer for \"e0e8a2a271dfc68e602b8cad410df2e22b2ea609361b3cd1ef14423f4651e60b\"" May 8 23:54:35.959629 systemd[1]: Started cri-containerd-e0e8a2a271dfc68e602b8cad410df2e22b2ea609361b3cd1ef14423f4651e60b.scope - libcontainer container e0e8a2a271dfc68e602b8cad410df2e22b2ea609361b3cd1ef14423f4651e60b. May 8 23:54:35.990118 containerd[1785]: time="2025-05-08T23:54:35.989848988Z" level=info msg="StartContainer for \"e0e8a2a271dfc68e602b8cad410df2e22b2ea609361b3cd1ef14423f4651e60b\" returns successfully" May 8 23:54:36.001361 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 23:54:36.001601 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 23:54:36.001671 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 8 23:54:36.009597 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 23:54:36.009931 systemd[1]: cri-containerd-e0e8a2a271dfc68e602b8cad410df2e22b2ea609361b3cd1ef14423f4651e60b.scope: Deactivated successfully. May 8 23:54:36.032360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0e8a2a271dfc68e602b8cad410df2e22b2ea609361b3cd1ef14423f4651e60b-rootfs.mount: Deactivated successfully. May 8 23:54:36.034104 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 23:54:36.048970 containerd[1785]: time="2025-05-08T23:54:36.048756164Z" level=info msg="shim disconnected" id=e0e8a2a271dfc68e602b8cad410df2e22b2ea609361b3cd1ef14423f4651e60b namespace=k8s.io May 8 23:54:36.048970 containerd[1785]: time="2025-05-08T23:54:36.048823804Z" level=warning msg="cleaning up after shim disconnected" id=e0e8a2a271dfc68e602b8cad410df2e22b2ea609361b3cd1ef14423f4651e60b namespace=k8s.io May 8 23:54:36.048970 containerd[1785]: time="2025-05-08T23:54:36.048831564Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:54:36.568161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount625283063.mount: Deactivated successfully. 
May 8 23:54:36.902410 containerd[1785]: time="2025-05-08T23:54:36.901377029Z" level=info msg="CreateContainer within sandbox \"fe8a5dce1551f6cf8d36b4a0a50d3183b7b1f4803fb5350fef368ed2bf3e16f5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 23:54:36.936755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3265511866.mount: Deactivated successfully. May 8 23:54:36.954609 containerd[1785]: time="2025-05-08T23:54:36.954557396Z" level=info msg="CreateContainer within sandbox \"fe8a5dce1551f6cf8d36b4a0a50d3183b7b1f4803fb5350fef368ed2bf3e16f5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4422c5c9ffa28f551e7bdb2d352d98b7208e1ac733cb97346c9cc63fb4779aa3\"" May 8 23:54:36.956343 containerd[1785]: time="2025-05-08T23:54:36.956298199Z" level=info msg="StartContainer for \"4422c5c9ffa28f551e7bdb2d352d98b7208e1ac733cb97346c9cc63fb4779aa3\"" May 8 23:54:36.996001 systemd[1]: Started cri-containerd-4422c5c9ffa28f551e7bdb2d352d98b7208e1ac733cb97346c9cc63fb4779aa3.scope - libcontainer container 4422c5c9ffa28f551e7bdb2d352d98b7208e1ac733cb97346c9cc63fb4779aa3. May 8 23:54:37.041773 containerd[1785]: time="2025-05-08T23:54:37.041712897Z" level=info msg="StartContainer for \"4422c5c9ffa28f551e7bdb2d352d98b7208e1ac733cb97346c9cc63fb4779aa3\" returns successfully" May 8 23:54:37.042022 systemd[1]: cri-containerd-4422c5c9ffa28f551e7bdb2d352d98b7208e1ac733cb97346c9cc63fb4779aa3.scope: Deactivated successfully. May 8 23:54:37.080406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4422c5c9ffa28f551e7bdb2d352d98b7208e1ac733cb97346c9cc63fb4779aa3-rootfs.mount: Deactivated successfully. May 8 23:54:37.293275 containerd[1785]: time="2025-05-08T23:54:37.293054746Z" level=info msg="shim disconnected" id=4422c5c9ffa28f551e7bdb2d352d98b7208e1ac733cb97346c9cc63fb4779aa3 namespace=k8s.io May 8 23:54:37.293275 containerd[1785]: time="2025-05-08T23:54:37.293115066Z" level=warning msg="cleaning up after shim disconnected" id=4422c5c9ffa28f551e7bdb2d352d98b7208e1ac733cb97346c9cc63fb4779aa3 namespace=k8s.io May 8 23:54:37.293275 containerd[1785]: time="2025-05-08T23:54:37.293123386Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:54:37.411478 containerd[1785]: time="2025-05-08T23:54:37.411333498Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:54:37.415337 containerd[1785]: time="2025-05-08T23:54:37.415234544Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 8 23:54:37.419857 containerd[1785]: time="2025-05-08T23:54:37.419790192Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:54:37.421463 containerd[1785]: time="2025-05-08T23:54:37.421291434Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.4851008s" May 8 23:54:37.421463 containerd[1785]: 
time="2025-05-08T23:54:37.421326514Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 8 23:54:37.424124 containerd[1785]: time="2025-05-08T23:54:37.424048038Z" level=info msg="CreateContainer within sandbox \"54d859e2dd3c1f02ad508a89795f6d87fc838e8c39e05e877d240ac40a6f66aa\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 8 23:54:37.455913 containerd[1785]: time="2025-05-08T23:54:37.455854530Z" level=info msg="CreateContainer within sandbox \"54d859e2dd3c1f02ad508a89795f6d87fc838e8c39e05e877d240ac40a6f66aa\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434\"" May 8 23:54:37.456806 containerd[1785]: time="2025-05-08T23:54:37.456771092Z" level=info msg="StartContainer for \"08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434\"" May 8 23:54:37.493673 systemd[1]: Started cri-containerd-08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434.scope - libcontainer container 08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434. May 8 23:54:37.526742 containerd[1785]: time="2025-05-08T23:54:37.526686245Z" level=info msg="StartContainer for \"08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434\" returns successfully" May 8 23:54:37.905693 containerd[1785]: time="2025-05-08T23:54:37.905639021Z" level=info msg="CreateContainer within sandbox \"fe8a5dce1551f6cf8d36b4a0a50d3183b7b1f4803fb5350fef368ed2bf3e16f5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 23:54:37.943571 containerd[1785]: time="2025-05-08T23:54:37.943293402Z" level=info msg="CreateContainer within sandbox \"fe8a5dce1551f6cf8d36b4a0a50d3183b7b1f4803fb5350fef368ed2bf3e16f5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"de241c6209824d29c2bb36f343bf55d3ab5187f6986001ada99dff5cf7ea8d16\"" May 8 23:54:37.943914 containerd[1785]: time="2025-05-08T23:54:37.943881323Z" level=info msg="StartContainer for \"de241c6209824d29c2bb36f343bf55d3ab5187f6986001ada99dff5cf7ea8d16\"" May 8 23:54:37.992673 systemd[1]: Started cri-containerd-de241c6209824d29c2bb36f343bf55d3ab5187f6986001ada99dff5cf7ea8d16.scope - libcontainer container de241c6209824d29c2bb36f343bf55d3ab5187f6986001ada99dff5cf7ea8d16. May 8 23:54:38.032369 systemd[1]: run-containerd-runc-k8s.io-08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434-runc.HCjPLY.mount: Deactivated successfully. May 8 23:54:38.057595 systemd[1]: cri-containerd-de241c6209824d29c2bb36f343bf55d3ab5187f6986001ada99dff5cf7ea8d16.scope: Deactivated successfully. May 8 23:54:38.064309 containerd[1785]: time="2025-05-08T23:54:38.064253998Z" level=info msg="StartContainer for \"de241c6209824d29c2bb36f343bf55d3ab5187f6986001ada99dff5cf7ea8d16\" returns successfully" May 8 23:54:38.096327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de241c6209824d29c2bb36f343bf55d3ab5187f6986001ada99dff5cf7ea8d16-rootfs.mount: Deactivated successfully. 
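The same CreateContainer-within-sandbox / StartContainer / cri-containerd scope pattern repeats above for cilium-operator and the clean-cilium-state init container. kubelet drives this through the CRI RPCs; the sketch below shows the roughly equivalent container-and-task lifecycle with the plain containerd client, purely as an illustration and under the same socket and namespace assumptions as before. The container ID here is made up; the real IDs are the long hashes in the log.

// run_once.go - minimal sketch of the container/task lifecycle behind the
// "CreateContainer ... returns container id" / "StartContainer ... returns successfully"
// entries. Illustration only: kubelet uses the CRI API, not these direct client calls.
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock") // assumed default socket
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// The image was pulled (and unpacked) earlier in the log; GetImage only looks it up locally.
	img, err := client.GetImage(ctx, "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
	if err != nil {
		log.Fatal(err)
	}

	// "demo-mount-cgroup" is a placeholder ID invented for this sketch.
	container, err := client.NewContainer(ctx, "demo-mount-cgroup",
		containerd.WithNewSnapshot("demo-mount-cgroup-snap", img),
		containerd.WithNewSpec(oci.WithImageConfig(img)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	exitCh, err := task.Wait(ctx) // register the wait before starting
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil { // corresponds to "StartContainer ... returns successfully"
		log.Fatal(err)
	}

	status := <-exitCh // init containers such as mount-cgroup exit almost immediately
	code, _, err := status.Result()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("container exited with status %d\n", code)
}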
May 8 23:54:38.137353 containerd[1785]: time="2025-05-08T23:54:38.137227637Z" level=info msg="shim disconnected" id=de241c6209824d29c2bb36f343bf55d3ab5187f6986001ada99dff5cf7ea8d16 namespace=k8s.io May 8 23:54:38.137353 containerd[1785]: time="2025-05-08T23:54:38.137281997Z" level=warning msg="cleaning up after shim disconnected" id=de241c6209824d29c2bb36f343bf55d3ab5187f6986001ada99dff5cf7ea8d16 namespace=k8s.io May 8 23:54:38.137353 containerd[1785]: time="2025-05-08T23:54:38.137291917Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:54:38.912272 containerd[1785]: time="2025-05-08T23:54:38.912147736Z" level=info msg="CreateContainer within sandbox \"fe8a5dce1551f6cf8d36b4a0a50d3183b7b1f4803fb5350fef368ed2bf3e16f5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 23:54:38.929710 kubelet[3315]: I0508 23:54:38.928469 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-bnz6l" podStartSLOduration=2.679126165 podStartE2EDuration="12.928423002s" podCreationTimestamp="2025-05-08 23:54:26 +0000 UTC" firstStartedPulling="2025-05-08 23:54:27.172858558 +0000 UTC m=+14.480661168" lastFinishedPulling="2025-05-08 23:54:37.422155395 +0000 UTC m=+24.729958005" observedRunningTime="2025-05-08 23:54:38.044167446 +0000 UTC m=+25.351970096" watchObservedRunningTime="2025-05-08 23:54:38.928423002 +0000 UTC m=+26.236225652" May 8 23:54:38.952163 containerd[1785]: time="2025-05-08T23:54:38.952108561Z" level=info msg="CreateContainer within sandbox \"fe8a5dce1551f6cf8d36b4a0a50d3183b7b1f4803fb5350fef368ed2bf3e16f5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a\"" May 8 23:54:38.952823 containerd[1785]: time="2025-05-08T23:54:38.952759242Z" level=info msg="StartContainer for \"190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a\"" May 8 23:54:38.984639 systemd[1]: Started cri-containerd-190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a.scope - libcontainer container 190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a. May 8 23:54:39.040382 containerd[1785]: time="2025-05-08T23:54:39.040331744Z" level=info msg="StartContainer for \"190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a\" returns successfully" May 8 23:54:39.141041 kubelet[3315]: I0508 23:54:39.141012 3315 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 8 23:54:39.173674 kubelet[3315]: I0508 23:54:39.173028 3315 topology_manager.go:215] "Topology Admit Handler" podUID="9fedd60d-c5de-425c-a207-764f61c45219" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xnn42" May 8 23:54:39.181707 systemd[1]: Created slice kubepods-burstable-pod9fedd60d_c5de_425c_a207_764f61c45219.slice - libcontainer container kubepods-burstable-pod9fedd60d_c5de_425c_a207_764f61c45219.slice. May 8 23:54:39.184176 kubelet[3315]: I0508 23:54:39.183108 3315 topology_manager.go:215] "Topology Admit Handler" podUID="7cd40e22-b0cc-4ddc-bc78-27164019dc67" podNamespace="kube-system" podName="coredns-7db6d8ff4d-j6cs6" May 8 23:54:39.192916 systemd[1]: Created slice kubepods-burstable-pod7cd40e22_b0cc_4ddc_bc78_27164019dc67.slice - libcontainer container kubepods-burstable-pod7cd40e22_b0cc_4ddc_bc78_27164019dc67.slice. 
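At this point the kubelet entries report the node Ready, admit the two coredns-7db6d8ff4d pods and create their burstable slices, and log observed startup durations via pod_startup_latency_tracker. A minimal client-go sketch that lists those pods and prints the timestamps such measurements are derived from is below; the kubeconfig path and the k8s-app=kube-dns label selector are assumptions, not taken from the log.

// list_coredns.go - minimal sketch: list the coredns pods admitted in the entries above
// and print their creation/start times. Kubeconfig path and label selector are assumptions.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{
		LabelSelector: "k8s-app=kube-dns", // assumed standard coredns label
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s created=%s started=%v phase=%s\n",
			p.Name, p.CreationTimestamp, p.Status.StartTime, p.Status.Phase)
	}
}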
May 8 23:54:39.367870 kubelet[3315]: I0508 23:54:39.367822 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cd40e22-b0cc-4ddc-bc78-27164019dc67-config-volume\") pod \"coredns-7db6d8ff4d-j6cs6\" (UID: \"7cd40e22-b0cc-4ddc-bc78-27164019dc67\") " pod="kube-system/coredns-7db6d8ff4d-j6cs6" May 8 23:54:39.367870 kubelet[3315]: I0508 23:54:39.367869 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9fedd60d-c5de-425c-a207-764f61c45219-config-volume\") pod \"coredns-7db6d8ff4d-xnn42\" (UID: \"9fedd60d-c5de-425c-a207-764f61c45219\") " pod="kube-system/coredns-7db6d8ff4d-xnn42" May 8 23:54:39.368038 kubelet[3315]: I0508 23:54:39.367891 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrmsx\" (UniqueName: \"kubernetes.io/projected/9fedd60d-c5de-425c-a207-764f61c45219-kube-api-access-mrmsx\") pod \"coredns-7db6d8ff4d-xnn42\" (UID: \"9fedd60d-c5de-425c-a207-764f61c45219\") " pod="kube-system/coredns-7db6d8ff4d-xnn42" May 8 23:54:39.368038 kubelet[3315]: I0508 23:54:39.367912 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sssvm\" (UniqueName: \"kubernetes.io/projected/7cd40e22-b0cc-4ddc-bc78-27164019dc67-kube-api-access-sssvm\") pod \"coredns-7db6d8ff4d-j6cs6\" (UID: \"7cd40e22-b0cc-4ddc-bc78-27164019dc67\") " pod="kube-system/coredns-7db6d8ff4d-j6cs6" May 8 23:54:39.491345 containerd[1785]: time="2025-05-08T23:54:39.491220917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xnn42,Uid:9fedd60d-c5de-425c-a207-764f61c45219,Namespace:kube-system,Attempt:0,}" May 8 23:54:39.498078 containerd[1785]: time="2025-05-08T23:54:39.497923807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j6cs6,Uid:7cd40e22-b0cc-4ddc-bc78-27164019dc67,Namespace:kube-system,Attempt:0,}" May 8 23:54:39.930688 kubelet[3315]: I0508 23:54:39.930060 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kfrr8" podStartSLOduration=7.827375782 podStartE2EDuration="14.930042109s" podCreationTimestamp="2025-05-08 23:54:25 +0000 UTC" firstStartedPulling="2025-05-08 23:54:26.832933906 +0000 UTC m=+14.140736556" lastFinishedPulling="2025-05-08 23:54:33.935600233 +0000 UTC m=+21.243402883" observedRunningTime="2025-05-08 23:54:39.930026189 +0000 UTC m=+27.237828839" watchObservedRunningTime="2025-05-08 23:54:39.930042109 +0000 UTC m=+27.237844759" May 8 23:54:41.254976 systemd-networkd[1506]: cilium_host: Link UP May 8 23:54:41.255124 systemd-networkd[1506]: cilium_net: Link UP May 8 23:54:41.255684 systemd-networkd[1506]: cilium_net: Gained carrier May 8 23:54:41.255837 systemd-networkd[1506]: cilium_host: Gained carrier May 8 23:54:41.255925 systemd-networkd[1506]: cilium_net: Gained IPv6LL May 8 23:54:41.256035 systemd-networkd[1506]: cilium_host: Gained IPv6LL May 8 23:54:41.396264 systemd-networkd[1506]: cilium_vxlan: Link UP May 8 23:54:41.396271 systemd-networkd[1506]: cilium_vxlan: Gained carrier May 8 23:54:41.703550 kernel: NET: Registered PF_ALG protocol family May 8 23:54:42.342423 systemd-networkd[1506]: lxc_health: Link UP May 8 23:54:42.353822 systemd-networkd[1506]: lxc_health: Gained carrier May 8 23:54:42.598348 systemd-networkd[1506]: lxc5b046c59ed0d: Link UP May 8 
23:54:42.606562 kernel: eth0: renamed from tmpe6db7 May 8 23:54:42.617234 systemd-networkd[1506]: lxc5b046c59ed0d: Gained carrier May 8 23:54:42.630465 kernel: eth0: renamed from tmpc4844 May 8 23:54:42.640891 systemd-networkd[1506]: tmpc4844: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 23:54:42.640961 systemd-networkd[1506]: tmpc4844: Cannot enable IPv6, ignoring: No such file or directory May 8 23:54:42.640992 systemd-networkd[1506]: tmpc4844: Cannot configure IPv6 privacy extensions for interface, ignoring: No such file or directory May 8 23:54:42.641002 systemd-networkd[1506]: tmpc4844: Cannot disable kernel IPv6 accept_ra for interface, ignoring: No such file or directory May 8 23:54:42.641012 systemd-networkd[1506]: tmpc4844: Cannot set IPv6 proxy NDP, ignoring: No such file or directory May 8 23:54:42.641025 systemd-networkd[1506]: tmpc4844: Cannot enable promote_secondaries for interface, ignoring: No such file or directory May 8 23:54:42.643038 systemd-networkd[1506]: lxcd9e136ecd6a2: Link UP May 8 23:54:42.646001 systemd-networkd[1506]: lxcd9e136ecd6a2: Gained carrier May 8 23:54:43.123543 systemd-networkd[1506]: cilium_vxlan: Gained IPv6LL May 8 23:54:44.081636 systemd-networkd[1506]: lxc_health: Gained IPv6LL May 8 23:54:44.146665 systemd-networkd[1506]: lxcd9e136ecd6a2: Gained IPv6LL May 8 23:54:44.338650 systemd-networkd[1506]: lxc5b046c59ed0d: Gained IPv6LL May 8 23:54:46.358724 containerd[1785]: time="2025-05-08T23:54:46.358485408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:54:46.358724 containerd[1785]: time="2025-05-08T23:54:46.358560728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:54:46.358724 containerd[1785]: time="2025-05-08T23:54:46.358574568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:46.361286 containerd[1785]: time="2025-05-08T23:54:46.360140570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:46.366536 containerd[1785]: time="2025-05-08T23:54:46.366303777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:54:46.366776 containerd[1785]: time="2025-05-08T23:54:46.366409657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:54:46.366776 containerd[1785]: time="2025-05-08T23:54:46.366428537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:46.367519 containerd[1785]: time="2025-05-08T23:54:46.367408019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:46.395707 systemd[1]: Started cri-containerd-e6db7de772e83e95db5b9d47f8fb4524d597b60cfb0a9138012be726ab0c5fda.scope - libcontainer container e6db7de772e83e95db5b9d47f8fb4524d597b60cfb0a9138012be726ab0c5fda. May 8 23:54:46.418617 systemd[1]: Started cri-containerd-c4844ef389dddde75d22e7933c73626917d8cf128849ba9872ed2a90a6888c7e.scope - libcontainer container c4844ef389dddde75d22e7933c73626917d8cf128849ba9872ed2a90a6888c7e. 
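The systemd-networkd and kernel entries above trace the cilium datapath devices coming up: cilium_host, cilium_net and cilium_vxlan, then lxc_health and the per-pod lxc* veth peers (their tmp* counterparts renamed to eth0 inside the pods), each gaining an IPv6 link-local address. A minimal sketch that enumerates those interfaces with the vishvananda/netlink package is below; beyond the package choice, the only assumption is that it runs on the node itself.

// links.go - minimal sketch: enumerate the cilium_* and lxc* interfaces whose link-up
// events appear in the systemd-networkd entries above and print their operational state.
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/vishvananda/netlink"
)

func main() {
	links, err := netlink.LinkList()
	if err != nil {
		log.Fatal(err)
	}
	for _, l := range links {
		attrs := l.Attrs()
		if strings.HasPrefix(attrs.Name, "cilium_") || strings.HasPrefix(attrs.Name, "lxc") {
			fmt.Printf("%-16s type=%-8s state=%s mtu=%d\n",
				attrs.Name, l.Type(), attrs.OperState, attrs.MTU)
		}
	}
}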
May 8 23:54:46.465715 containerd[1785]: time="2025-05-08T23:54:46.464901140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j6cs6,Uid:7cd40e22-b0cc-4ddc-bc78-27164019dc67,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4844ef389dddde75d22e7933c73626917d8cf128849ba9872ed2a90a6888c7e\"" May 8 23:54:46.469900 containerd[1785]: time="2025-05-08T23:54:46.469197625Z" level=info msg="CreateContainer within sandbox \"c4844ef389dddde75d22e7933c73626917d8cf128849ba9872ed2a90a6888c7e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 23:54:46.476117 containerd[1785]: time="2025-05-08T23:54:46.473940471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xnn42,Uid:9fedd60d-c5de-425c-a207-764f61c45219,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6db7de772e83e95db5b9d47f8fb4524d597b60cfb0a9138012be726ab0c5fda\"" May 8 23:54:46.480473 containerd[1785]: time="2025-05-08T23:54:46.479038437Z" level=info msg="CreateContainer within sandbox \"e6db7de772e83e95db5b9d47f8fb4524d597b60cfb0a9138012be726ab0c5fda\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 23:54:46.524867 containerd[1785]: time="2025-05-08T23:54:46.524750654Z" level=info msg="CreateContainer within sandbox \"c4844ef389dddde75d22e7933c73626917d8cf128849ba9872ed2a90a6888c7e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cfa443d79283b618b2185292b40d3b53388f22ba630ffd41edd0cf58530f5f63\"" May 8 23:54:46.525727 containerd[1785]: time="2025-05-08T23:54:46.525642175Z" level=info msg="StartContainer for \"cfa443d79283b618b2185292b40d3b53388f22ba630ffd41edd0cf58530f5f63\"" May 8 23:54:46.539112 containerd[1785]: time="2025-05-08T23:54:46.539010352Z" level=info msg="CreateContainer within sandbox \"e6db7de772e83e95db5b9d47f8fb4524d597b60cfb0a9138012be726ab0c5fda\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"62561dbb92d927709709381d32f086748625e01cdd203a0e0c8ae596e1ba0960\"" May 8 23:54:46.541245 containerd[1785]: time="2025-05-08T23:54:46.539836113Z" level=info msg="StartContainer for \"62561dbb92d927709709381d32f086748625e01cdd203a0e0c8ae596e1ba0960\"" May 8 23:54:46.552634 systemd[1]: Started cri-containerd-cfa443d79283b618b2185292b40d3b53388f22ba630ffd41edd0cf58530f5f63.scope - libcontainer container cfa443d79283b618b2185292b40d3b53388f22ba630ffd41edd0cf58530f5f63. May 8 23:54:46.574602 systemd[1]: Started cri-containerd-62561dbb92d927709709381d32f086748625e01cdd203a0e0c8ae596e1ba0960.scope - libcontainer container 62561dbb92d927709709381d32f086748625e01cdd203a0e0c8ae596e1ba0960. 
May 8 23:54:46.590191 containerd[1785]: time="2025-05-08T23:54:46.590065775Z" level=info msg="StartContainer for \"cfa443d79283b618b2185292b40d3b53388f22ba630ffd41edd0cf58530f5f63\" returns successfully" May 8 23:54:46.611447 containerd[1785]: time="2025-05-08T23:54:46.611316922Z" level=info msg="StartContainer for \"62561dbb92d927709709381d32f086748625e01cdd203a0e0c8ae596e1ba0960\" returns successfully" May 8 23:54:46.944940 kubelet[3315]: I0508 23:54:46.944824 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-j6cs6" podStartSLOduration=20.944809216 podStartE2EDuration="20.944809216s" podCreationTimestamp="2025-05-08 23:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:54:46.944276455 +0000 UTC m=+34.252079065" watchObservedRunningTime="2025-05-08 23:54:46.944809216 +0000 UTC m=+34.252611826" May 8 23:54:46.966704 kubelet[3315]: I0508 23:54:46.966085 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xnn42" podStartSLOduration=20.966065802 podStartE2EDuration="20.966065802s" podCreationTimestamp="2025-05-08 23:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:54:46.96445644 +0000 UTC m=+34.272259090" watchObservedRunningTime="2025-05-08 23:54:46.966065802 +0000 UTC m=+34.273868452" May 8 23:54:50.088658 kubelet[3315]: I0508 23:54:50.088540 3315 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 23:55:51.342755 systemd[1]: Started sshd@7-10.200.20.33:22-10.200.16.10:49626.service - OpenSSH per-connection server daemon (10.200.16.10:49626). May 8 23:55:51.829673 sshd[4701]: Accepted publickey for core from 10.200.16.10 port 49626 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:55:51.831058 sshd-session[4701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:51.835541 systemd-logind[1722]: New session 10 of user core. May 8 23:55:51.840609 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 23:55:52.248337 sshd[4703]: Connection closed by 10.200.16.10 port 49626 May 8 23:55:52.248241 sshd-session[4701]: pam_unix(sshd:session): session closed for user core May 8 23:55:52.251573 systemd[1]: sshd@7-10.200.20.33:22-10.200.16.10:49626.service: Deactivated successfully. May 8 23:55:52.253152 systemd[1]: session-10.scope: Deactivated successfully. May 8 23:55:52.253858 systemd-logind[1722]: Session 10 logged out. Waiting for processes to exit. May 8 23:55:52.255084 systemd-logind[1722]: Removed session 10. May 8 23:55:57.334728 systemd[1]: Started sshd@8-10.200.20.33:22-10.200.16.10:49642.service - OpenSSH per-connection server daemon (10.200.16.10:49642). May 8 23:55:57.784174 sshd[4717]: Accepted publickey for core from 10.200.16.10 port 49642 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:55:57.785431 sshd-session[4717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:57.789658 systemd-logind[1722]: New session 11 of user core. May 8 23:55:57.791582 systemd[1]: Started session-11.scope - Session 11 of User core. 
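From here the log settles into a long series of per-connection sshd units and logind session scopes: accept a public key for user core, open session N, close it shortly after. For reference only, a minimal sketch of the client side that would produce exactly this server-side pattern is below, using golang.org/x/crypto/ssh; the user, key path and command are assumptions (the address matches the one in the sshd@ unit names), and the host-key handling is deliberately lax because this is a sketch, not production code.

// ssh_session.go - minimal sketch of the client side of the session cycle recorded above
// ("Accepted publickey for core ... / session opened / session closed").
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa") // assumed key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a sketch only
	}

	// 10.200.20.33:22 is the address in the sshd@ unit names in the entries above.
	client, err := ssh.Dial("tcp", "10.200.20.33:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close() // triggers the server-side "Connection closed ... session closed" pair

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("uptime") // arbitrary short command for the sketch
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("remote said: %s", out)
}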
May 8 23:55:58.196071 sshd[4719]: Connection closed by 10.200.16.10 port 49642 May 8 23:55:58.196672 sshd-session[4717]: pam_unix(sshd:session): session closed for user core May 8 23:55:58.200253 systemd-logind[1722]: Session 11 logged out. Waiting for processes to exit. May 8 23:55:58.200987 systemd[1]: sshd@8-10.200.20.33:22-10.200.16.10:49642.service: Deactivated successfully. May 8 23:55:58.203205 systemd[1]: session-11.scope: Deactivated successfully. May 8 23:55:58.204516 systemd-logind[1722]: Removed session 11. May 8 23:56:03.278119 systemd[1]: Started sshd@9-10.200.20.33:22-10.200.16.10:37712.service - OpenSSH per-connection server daemon (10.200.16.10:37712). May 8 23:56:03.727829 sshd[4731]: Accepted publickey for core from 10.200.16.10 port 37712 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:56:03.729534 sshd-session[4731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:03.733096 systemd-logind[1722]: New session 12 of user core. May 8 23:56:03.741589 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 23:56:04.126553 sshd[4733]: Connection closed by 10.200.16.10 port 37712 May 8 23:56:04.127091 sshd-session[4731]: pam_unix(sshd:session): session closed for user core May 8 23:56:04.130250 systemd-logind[1722]: Session 12 logged out. Waiting for processes to exit. May 8 23:56:04.130949 systemd[1]: sshd@9-10.200.20.33:22-10.200.16.10:37712.service: Deactivated successfully. May 8 23:56:04.133289 systemd[1]: session-12.scope: Deactivated successfully. May 8 23:56:04.134851 systemd-logind[1722]: Removed session 12. May 8 23:56:09.216772 systemd[1]: Started sshd@10-10.200.20.33:22-10.200.16.10:59912.service - OpenSSH per-connection server daemon (10.200.16.10:59912). May 8 23:56:09.661832 sshd[4745]: Accepted publickey for core from 10.200.16.10 port 59912 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:56:09.663140 sshd-session[4745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:09.666995 systemd-logind[1722]: New session 13 of user core. May 8 23:56:09.674658 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 23:56:10.066388 sshd[4747]: Connection closed by 10.200.16.10 port 59912 May 8 23:56:10.067049 sshd-session[4745]: pam_unix(sshd:session): session closed for user core May 8 23:56:10.071094 systemd[1]: sshd@10-10.200.20.33:22-10.200.16.10:59912.service: Deactivated successfully. May 8 23:56:10.073634 systemd[1]: session-13.scope: Deactivated successfully. May 8 23:56:10.075173 systemd-logind[1722]: Session 13 logged out. Waiting for processes to exit. May 8 23:56:10.076664 systemd-logind[1722]: Removed session 13. May 8 23:56:10.148504 systemd[1]: Started sshd@11-10.200.20.33:22-10.200.16.10:59928.service - OpenSSH per-connection server daemon (10.200.16.10:59928). May 8 23:56:10.603841 sshd[4759]: Accepted publickey for core from 10.200.16.10 port 59928 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:56:10.605106 sshd-session[4759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:10.610056 systemd-logind[1722]: New session 14 of user core. May 8 23:56:10.618668 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 8 23:56:11.029082 sshd[4761]: Connection closed by 10.200.16.10 port 59928 May 8 23:56:11.029793 sshd-session[4759]: pam_unix(sshd:session): session closed for user core May 8 23:56:11.034177 systemd[1]: sshd@11-10.200.20.33:22-10.200.16.10:59928.service: Deactivated successfully. May 8 23:56:11.034204 systemd-logind[1722]: Session 14 logged out. Waiting for processes to exit. May 8 23:56:11.036566 systemd[1]: session-14.scope: Deactivated successfully. May 8 23:56:11.037708 systemd-logind[1722]: Removed session 14. May 8 23:56:11.109474 systemd[1]: Started sshd@12-10.200.20.33:22-10.200.16.10:59932.service - OpenSSH per-connection server daemon (10.200.16.10:59932). May 8 23:56:11.558762 sshd[4769]: Accepted publickey for core from 10.200.16.10 port 59932 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:56:11.560315 sshd-session[4769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:11.564193 systemd-logind[1722]: New session 15 of user core. May 8 23:56:11.571603 systemd[1]: Started session-15.scope - Session 15 of User core. May 8 23:56:11.941489 sshd[4771]: Connection closed by 10.200.16.10 port 59932 May 8 23:56:11.942058 sshd-session[4769]: pam_unix(sshd:session): session closed for user core May 8 23:56:11.945271 systemd[1]: sshd@12-10.200.20.33:22-10.200.16.10:59932.service: Deactivated successfully. May 8 23:56:11.947077 systemd[1]: session-15.scope: Deactivated successfully. May 8 23:56:11.948089 systemd-logind[1722]: Session 15 logged out. Waiting for processes to exit. May 8 23:56:11.949013 systemd-logind[1722]: Removed session 15. May 8 23:56:17.034679 systemd[1]: Started sshd@13-10.200.20.33:22-10.200.16.10:59936.service - OpenSSH per-connection server daemon (10.200.16.10:59936). May 8 23:56:17.515665 sshd[4784]: Accepted publickey for core from 10.200.16.10 port 59936 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:56:17.517345 sshd-session[4784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:17.521600 systemd-logind[1722]: New session 16 of user core. May 8 23:56:17.527608 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 23:56:17.923417 sshd[4786]: Connection closed by 10.200.16.10 port 59936 May 8 23:56:17.923289 sshd-session[4784]: pam_unix(sshd:session): session closed for user core May 8 23:56:17.927035 systemd-logind[1722]: Session 16 logged out. Waiting for processes to exit. May 8 23:56:17.928043 systemd[1]: sshd@13-10.200.20.33:22-10.200.16.10:59936.service: Deactivated successfully. May 8 23:56:17.930249 systemd[1]: session-16.scope: Deactivated successfully. May 8 23:56:17.931420 systemd-logind[1722]: Removed session 16. May 8 23:56:23.006027 systemd[1]: Started sshd@14-10.200.20.33:22-10.200.16.10:52770.service - OpenSSH per-connection server daemon (10.200.16.10:52770). May 8 23:56:23.460505 sshd[4796]: Accepted publickey for core from 10.200.16.10 port 52770 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:56:23.461847 sshd-session[4796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:23.465862 systemd-logind[1722]: New session 17 of user core. May 8 23:56:23.469615 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 8 23:56:23.847562 sshd[4798]: Connection closed by 10.200.16.10 port 52770 May 8 23:56:23.848141 sshd-session[4796]: pam_unix(sshd:session): session closed for user core May 8 23:56:23.851874 systemd-logind[1722]: Session 17 logged out. Waiting for processes to exit. May 8 23:56:23.852070 systemd[1]: sshd@14-10.200.20.33:22-10.200.16.10:52770.service: Deactivated successfully. May 8 23:56:23.854254 systemd[1]: session-17.scope: Deactivated successfully. May 8 23:56:23.855279 systemd-logind[1722]: Removed session 17. May 8 23:56:23.935746 systemd[1]: Started sshd@15-10.200.20.33:22-10.200.16.10:52784.service - OpenSSH per-connection server daemon (10.200.16.10:52784). May 8 23:56:24.381774 sshd[4809]: Accepted publickey for core from 10.200.16.10 port 52784 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:56:24.383033 sshd-session[4809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:24.387540 systemd-logind[1722]: New session 18 of user core. May 8 23:56:24.394607 systemd[1]: Started session-18.scope - Session 18 of User core. May 8 23:56:24.805141 sshd[4811]: Connection closed by 10.200.16.10 port 52784 May 8 23:56:24.805749 sshd-session[4809]: pam_unix(sshd:session): session closed for user core May 8 23:56:24.809389 systemd[1]: sshd@15-10.200.20.33:22-10.200.16.10:52784.service: Deactivated successfully. May 8 23:56:24.811052 systemd[1]: session-18.scope: Deactivated successfully. May 8 23:56:24.811884 systemd-logind[1722]: Session 18 logged out. Waiting for processes to exit. May 8 23:56:24.813195 systemd-logind[1722]: Removed session 18. May 8 23:56:24.891038 systemd[1]: Started sshd@16-10.200.20.33:22-10.200.16.10:52796.service - OpenSSH per-connection server daemon (10.200.16.10:52796). May 8 23:56:25.372644 sshd[4820]: Accepted publickey for core from 10.200.16.10 port 52796 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:56:25.373930 sshd-session[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:25.378671 systemd-logind[1722]: New session 19 of user core. May 8 23:56:25.383819 systemd[1]: Started session-19.scope - Session 19 of User core. May 8 23:56:27.024739 sshd[4822]: Connection closed by 10.200.16.10 port 52796 May 8 23:56:27.024641 sshd-session[4820]: pam_unix(sshd:session): session closed for user core May 8 23:56:27.027964 systemd-logind[1722]: Session 19 logged out. Waiting for processes to exit. May 8 23:56:27.028219 systemd[1]: sshd@16-10.200.20.33:22-10.200.16.10:52796.service: Deactivated successfully. May 8 23:56:27.031128 systemd[1]: session-19.scope: Deactivated successfully. May 8 23:56:27.034401 systemd-logind[1722]: Removed session 19. May 8 23:56:27.120964 systemd[1]: Started sshd@17-10.200.20.33:22-10.200.16.10:52812.service - OpenSSH per-connection server daemon (10.200.16.10:52812). May 8 23:56:27.604870 sshd[4840]: Accepted publickey for core from 10.200.16.10 port 52812 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:56:27.606498 sshd-session[4840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:27.610765 systemd-logind[1722]: New session 20 of user core. May 8 23:56:27.618597 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 8 23:56:28.124946 sshd[4842]: Connection closed by 10.200.16.10 port 52812 May 8 23:56:28.125289 sshd-session[4840]: pam_unix(sshd:session): session closed for user core May 8 23:56:28.129664 systemd[1]: sshd@17-10.200.20.33:22-10.200.16.10:52812.service: Deactivated successfully. May 8 23:56:28.131719 systemd[1]: session-20.scope: Deactivated successfully. May 8 23:56:28.132620 systemd-logind[1722]: Session 20 logged out. Waiting for processes to exit. May 8 23:56:28.134026 systemd-logind[1722]: Removed session 20. May 8 23:56:28.213469 systemd[1]: Started sshd@18-10.200.20.33:22-10.200.16.10:52828.service - OpenSSH per-connection server daemon (10.200.16.10:52828). May 8 23:56:28.696656 sshd[4851]: Accepted publickey for core from 10.200.16.10 port 52828 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:56:28.698003 sshd-session[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:28.703396 systemd-logind[1722]: New session 21 of user core. May 8 23:56:28.709780 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 23:56:29.103914 sshd[4853]: Connection closed by 10.200.16.10 port 52828 May 8 23:56:29.104636 sshd-session[4851]: pam_unix(sshd:session): session closed for user core May 8 23:56:29.108046 systemd-logind[1722]: Session 21 logged out. Waiting for processes to exit. May 8 23:56:29.108772 systemd[1]: sshd@18-10.200.20.33:22-10.200.16.10:52828.service: Deactivated successfully. May 8 23:56:29.111681 systemd[1]: session-21.scope: Deactivated successfully. May 8 23:56:29.113208 systemd-logind[1722]: Removed session 21. May 8 23:56:34.192738 systemd[1]: Started sshd@19-10.200.20.33:22-10.200.16.10:53570.service - OpenSSH per-connection server daemon (10.200.16.10:53570). May 8 23:56:34.687822 sshd[4866]: Accepted publickey for core from 10.200.16.10 port 53570 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:56:34.689102 sshd-session[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:34.692764 systemd-logind[1722]: New session 22 of user core. May 8 23:56:34.699652 systemd[1]: Started session-22.scope - Session 22 of User core. May 8 23:56:35.100541 sshd[4868]: Connection closed by 10.200.16.10 port 53570 May 8 23:56:35.101084 sshd-session[4866]: pam_unix(sshd:session): session closed for user core May 8 23:56:35.104400 systemd[1]: sshd@19-10.200.20.33:22-10.200.16.10:53570.service: Deactivated successfully. May 8 23:56:35.106800 systemd[1]: session-22.scope: Deactivated successfully. May 8 23:56:35.107834 systemd-logind[1722]: Session 22 logged out. Waiting for processes to exit. May 8 23:56:35.109014 systemd-logind[1722]: Removed session 22. May 8 23:56:40.186745 systemd[1]: Started sshd@20-10.200.20.33:22-10.200.16.10:40440.service - OpenSSH per-connection server daemon (10.200.16.10:40440). May 8 23:56:40.669915 sshd[4880]: Accepted publickey for core from 10.200.16.10 port 40440 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:56:40.671243 sshd-session[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:40.675820 systemd-logind[1722]: New session 23 of user core. May 8 23:56:40.680599 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 8 23:56:41.074557 sshd[4882]: Connection closed by 10.200.16.10 port 40440 May 8 23:56:41.075397 sshd-session[4880]: pam_unix(sshd:session): session closed for user core May 8 23:56:41.078801 systemd[1]: sshd@20-10.200.20.33:22-10.200.16.10:40440.service: Deactivated successfully. May 8 23:56:41.081123 systemd[1]: session-23.scope: Deactivated successfully. May 8 23:56:41.082035 systemd-logind[1722]: Session 23 logged out. Waiting for processes to exit. May 8 23:56:41.083252 systemd-logind[1722]: Removed session 23. May 8 23:56:46.165765 systemd[1]: Started sshd@21-10.200.20.33:22-10.200.16.10:40448.service - OpenSSH per-connection server daemon (10.200.16.10:40448). May 8 23:56:46.654225 sshd[4893]: Accepted publickey for core from 10.200.16.10 port 40448 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:56:46.655597 sshd-session[4893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:46.659889 systemd-logind[1722]: New session 24 of user core. May 8 23:56:46.666831 systemd[1]: Started session-24.scope - Session 24 of User core. May 8 23:56:47.064486 sshd[4895]: Connection closed by 10.200.16.10 port 40448 May 8 23:56:47.065017 sshd-session[4893]: pam_unix(sshd:session): session closed for user core May 8 23:56:47.068478 systemd[1]: sshd@21-10.200.20.33:22-10.200.16.10:40448.service: Deactivated successfully. May 8 23:56:47.070625 systemd[1]: session-24.scope: Deactivated successfully. May 8 23:56:47.071357 systemd-logind[1722]: Session 24 logged out. Waiting for processes to exit. May 8 23:56:47.072678 systemd-logind[1722]: Removed session 24. May 8 23:56:47.150121 systemd[1]: Started sshd@22-10.200.20.33:22-10.200.16.10:40464.service - OpenSSH per-connection server daemon (10.200.16.10:40464). May 8 23:56:47.595562 sshd[4905]: Accepted publickey for core from 10.200.16.10 port 40464 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:56:47.596838 sshd-session[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:47.601415 systemd-logind[1722]: New session 25 of user core. May 8 23:56:47.608591 systemd[1]: Started session-25.scope - Session 25 of User core. May 8 23:56:49.429288 containerd[1785]: time="2025-05-08T23:56:49.429239108Z" level=info msg="StopContainer for \"08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434\" with timeout 30 (s)" May 8 23:56:49.430000 containerd[1785]: time="2025-05-08T23:56:49.429963749Z" level=info msg="Stop container \"08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434\" with signal terminated" May 8 23:56:49.437194 containerd[1785]: time="2025-05-08T23:56:49.437145395Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 23:56:49.442534 systemd[1]: cri-containerd-08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434.scope: Deactivated successfully. 
May 8 23:56:49.448161 containerd[1785]: time="2025-05-08T23:56:49.447788205Z" level=info msg="StopContainer for \"190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a\" with timeout 2 (s)" May 8 23:56:49.448161 containerd[1785]: time="2025-05-08T23:56:49.448137366Z" level=info msg="Stop container \"190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a\" with signal terminated" May 8 23:56:49.458756 systemd-networkd[1506]: lxc_health: Link DOWN May 8 23:56:49.458766 systemd-networkd[1506]: lxc_health: Lost carrier May 8 23:56:49.477273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434-rootfs.mount: Deactivated successfully. May 8 23:56:49.478463 systemd[1]: cri-containerd-190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a.scope: Deactivated successfully. May 8 23:56:49.478737 systemd[1]: cri-containerd-190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a.scope: Consumed 6.430s CPU time. May 8 23:56:49.498915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a-rootfs.mount: Deactivated successfully. May 8 23:56:49.545582 containerd[1785]: time="2025-05-08T23:56:49.545512576Z" level=info msg="shim disconnected" id=190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a namespace=k8s.io May 8 23:56:49.545582 containerd[1785]: time="2025-05-08T23:56:49.545563816Z" level=warning msg="cleaning up after shim disconnected" id=190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a namespace=k8s.io May 8 23:56:49.545582 containerd[1785]: time="2025-05-08T23:56:49.545571576Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:56:49.547465 containerd[1785]: time="2025-05-08T23:56:49.545949257Z" level=info msg="shim disconnected" id=08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434 namespace=k8s.io May 8 23:56:49.547465 containerd[1785]: time="2025-05-08T23:56:49.546013417Z" level=warning msg="cleaning up after shim disconnected" id=08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434 namespace=k8s.io May 8 23:56:49.547465 containerd[1785]: time="2025-05-08T23:56:49.546028217Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:56:49.575711 containerd[1785]: time="2025-05-08T23:56:49.575669004Z" level=info msg="StopContainer for \"08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434\" returns successfully" May 8 23:56:49.576563 containerd[1785]: time="2025-05-08T23:56:49.576532685Z" level=info msg="StopPodSandbox for \"54d859e2dd3c1f02ad508a89795f6d87fc838e8c39e05e877d240ac40a6f66aa\"" May 8 23:56:49.577033 containerd[1785]: time="2025-05-08T23:56:49.577011125Z" level=info msg="Container to stop \"08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 23:56:49.577237 containerd[1785]: time="2025-05-08T23:56:49.576941925Z" level=info msg="StopContainer for \"190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a\" returns successfully" May 8 23:56:49.578899 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-54d859e2dd3c1f02ad508a89795f6d87fc838e8c39e05e877d240ac40a6f66aa-shm.mount: Deactivated successfully. 
May 8 23:56:49.580076 containerd[1785]: time="2025-05-08T23:56:49.580028488Z" level=info msg="StopPodSandbox for \"fe8a5dce1551f6cf8d36b4a0a50d3183b7b1f4803fb5350fef368ed2bf3e16f5\"" May 8 23:56:49.580147 containerd[1785]: time="2025-05-08T23:56:49.580080768Z" level=info msg="Container to stop \"e0e8a2a271dfc68e602b8cad410df2e22b2ea609361b3cd1ef14423f4651e60b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 23:56:49.580147 containerd[1785]: time="2025-05-08T23:56:49.580093728Z" level=info msg="Container to stop \"de241c6209824d29c2bb36f343bf55d3ab5187f6986001ada99dff5cf7ea8d16\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 23:56:49.580147 containerd[1785]: time="2025-05-08T23:56:49.580102248Z" level=info msg="Container to stop \"190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 23:56:49.580147 containerd[1785]: time="2025-05-08T23:56:49.580111528Z" level=info msg="Container to stop \"b5b6cf4a1a80395112e0daeb29f2bdda7f90c8e972abac8e6539339c025d59b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 23:56:49.580147 containerd[1785]: time="2025-05-08T23:56:49.580119408Z" level=info msg="Container to stop \"4422c5c9ffa28f551e7bdb2d352d98b7208e1ac733cb97346c9cc63fb4779aa3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 23:56:49.581795 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe8a5dce1551f6cf8d36b4a0a50d3183b7b1f4803fb5350fef368ed2bf3e16f5-shm.mount: Deactivated successfully. May 8 23:56:49.588811 systemd[1]: cri-containerd-54d859e2dd3c1f02ad508a89795f6d87fc838e8c39e05e877d240ac40a6f66aa.scope: Deactivated successfully. May 8 23:56:49.593539 systemd[1]: cri-containerd-fe8a5dce1551f6cf8d36b4a0a50d3183b7b1f4803fb5350fef368ed2bf3e16f5.scope: Deactivated successfully. 
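The shutdown entries above show the cilium-agent container being stopped with SIGTERM under a 2-second timeout, the operator with a 30-second timeout, the lxc_health link going down, and both pod sandboxes being stopped once every container in them has exited. A minimal containerd-client sketch of the signal-wait-delete sequence behind a single StopContainer is below; kubelet actually issues this via the CRI StopContainer/StopPodSandbox RPCs, so the direct client calls and assumed socket path are for illustration only. The container ID is copied from the log.

// stop_container.go - minimal sketch of the teardown behind the "StopContainer ... with
// signal terminated" entries: signal the task, wait for exit, escalate after the grace
// period, then delete. Illustration only; kubelet uses the CRI RPCs for this.
package main

import (
	"context"
	"fmt"
	"log"
	"syscall"
	"time"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock") // assumed default socket
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// ID copied from the cilium-agent StopContainer entries above.
	container, err := client.LoadContainer(ctx, "190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a")
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}

	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil { // "with signal terminated"
		log.Fatal(err)
	}

	select {
	case status := <-exitCh:
		code, _, _ := status.Result()
		fmt.Printf("task exited with status %d\n", code)
	case <-time.After(2 * time.Second): // the log shows a 2s stop timeout for the agent
		// Past the grace period the stop is escalated to SIGKILL.
		_ = task.Kill(ctx, syscall.SIGKILL)
		<-exitCh
	}

	if _, err := task.Delete(ctx); err != nil {
		log.Fatal(err)
	}
}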
May 8 23:56:49.648043 containerd[1785]: time="2025-05-08T23:56:49.647628274Z" level=info msg="shim disconnected" id=fe8a5dce1551f6cf8d36b4a0a50d3183b7b1f4803fb5350fef368ed2bf3e16f5 namespace=k8s.io May 8 23:56:49.648043 containerd[1785]: time="2025-05-08T23:56:49.647688754Z" level=warning msg="cleaning up after shim disconnected" id=fe8a5dce1551f6cf8d36b4a0a50d3183b7b1f4803fb5350fef368ed2bf3e16f5 namespace=k8s.io May 8 23:56:49.648043 containerd[1785]: time="2025-05-08T23:56:49.647705594Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:56:49.651891 containerd[1785]: time="2025-05-08T23:56:49.651654639Z" level=info msg="shim disconnected" id=54d859e2dd3c1f02ad508a89795f6d87fc838e8c39e05e877d240ac40a6f66aa namespace=k8s.io May 8 23:56:49.651891 containerd[1785]: time="2025-05-08T23:56:49.651725639Z" level=warning msg="cleaning up after shim disconnected" id=54d859e2dd3c1f02ad508a89795f6d87fc838e8c39e05e877d240ac40a6f66aa namespace=k8s.io May 8 23:56:49.651891 containerd[1785]: time="2025-05-08T23:56:49.651734239Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:56:49.661521 containerd[1785]: time="2025-05-08T23:56:49.661420250Z" level=warning msg="cleanup warnings time=\"2025-05-08T23:56:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 23:56:49.662867 containerd[1785]: time="2025-05-08T23:56:49.662511371Z" level=warning msg="cleanup warnings time=\"2025-05-08T23:56:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 23:56:49.663718 containerd[1785]: time="2025-05-08T23:56:49.662828612Z" level=info msg="TearDown network for sandbox \"fe8a5dce1551f6cf8d36b4a0a50d3183b7b1f4803fb5350fef368ed2bf3e16f5\" successfully" May 8 23:56:49.663718 containerd[1785]: time="2025-05-08T23:56:49.663594373Z" level=info msg="StopPodSandbox for \"fe8a5dce1551f6cf8d36b4a0a50d3183b7b1f4803fb5350fef368ed2bf3e16f5\" returns successfully" May 8 23:56:49.663718 containerd[1785]: time="2025-05-08T23:56:49.663693933Z" level=info msg="TearDown network for sandbox \"54d859e2dd3c1f02ad508a89795f6d87fc838e8c39e05e877d240ac40a6f66aa\" successfully" May 8 23:56:49.663851 containerd[1785]: time="2025-05-08T23:56:49.663719853Z" level=info msg="StopPodSandbox for \"54d859e2dd3c1f02ad508a89795f6d87fc838e8c39e05e877d240ac40a6f66aa\" returns successfully" May 8 23:56:49.722983 kubelet[3315]: I0508 23:56:49.722849 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6691d159-3269-4c83-8526-b76df9680080-hubble-tls\") pod \"6691d159-3269-4c83-8526-b76df9680080\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " May 8 23:56:49.723325 kubelet[3315]: I0508 23:56:49.723241 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-xtables-lock\") pod \"6691d159-3269-4c83-8526-b76df9680080\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " May 8 23:56:49.723325 kubelet[3315]: I0508 23:56:49.723262 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-cilium-cgroup\") pod \"6691d159-3269-4c83-8526-b76df9680080\" (UID: 
\"6691d159-3269-4c83-8526-b76df9680080\") " May 8 23:56:49.723325 kubelet[3315]: I0508 23:56:49.723279 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-cni-path\") pod \"6691d159-3269-4c83-8526-b76df9680080\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " May 8 23:56:49.723325 kubelet[3315]: I0508 23:56:49.723313 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-host-proc-sys-net\") pod \"6691d159-3269-4c83-8526-b76df9680080\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " May 8 23:56:49.723516 kubelet[3315]: I0508 23:56:49.723328 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-hostproc\") pod \"6691d159-3269-4c83-8526-b76df9680080\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " May 8 23:56:49.723516 kubelet[3315]: I0508 23:56:49.723342 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-cilium-run\") pod \"6691d159-3269-4c83-8526-b76df9680080\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " May 8 23:56:49.723516 kubelet[3315]: I0508 23:56:49.723356 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-etc-cni-netd\") pod \"6691d159-3269-4c83-8526-b76df9680080\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " May 8 23:56:49.723516 kubelet[3315]: I0508 23:56:49.723373 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlsl7\" (UniqueName: \"kubernetes.io/projected/6691d159-3269-4c83-8526-b76df9680080-kube-api-access-nlsl7\") pod \"6691d159-3269-4c83-8526-b76df9680080\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " May 8 23:56:49.723516 kubelet[3315]: I0508 23:56:49.723392 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8670c929-31bb-401f-9e83-601f9dbfaa7b-cilium-config-path\") pod \"8670c929-31bb-401f-9e83-601f9dbfaa7b\" (UID: \"8670c929-31bb-401f-9e83-601f9dbfaa7b\") " May 8 23:56:49.723516 kubelet[3315]: I0508 23:56:49.723407 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-lib-modules\") pod \"6691d159-3269-4c83-8526-b76df9680080\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " May 8 23:56:49.723644 kubelet[3315]: I0508 23:56:49.723423 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6691d159-3269-4c83-8526-b76df9680080-clustermesh-secrets\") pod \"6691d159-3269-4c83-8526-b76df9680080\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " May 8 23:56:49.723644 kubelet[3315]: I0508 23:56:49.723469 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-host-proc-sys-kernel\") pod \"6691d159-3269-4c83-8526-b76df9680080\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " May 8 
23:56:49.723644 kubelet[3315]: I0508 23:56:49.723485 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-bpf-maps\") pod \"6691d159-3269-4c83-8526-b76df9680080\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " May 8 23:56:49.723644 kubelet[3315]: I0508 23:56:49.723501 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnbwc\" (UniqueName: \"kubernetes.io/projected/8670c929-31bb-401f-9e83-601f9dbfaa7b-kube-api-access-qnbwc\") pod \"8670c929-31bb-401f-9e83-601f9dbfaa7b\" (UID: \"8670c929-31bb-401f-9e83-601f9dbfaa7b\") " May 8 23:56:49.723644 kubelet[3315]: I0508 23:56:49.723516 3315 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6691d159-3269-4c83-8526-b76df9680080-cilium-config-path\") pod \"6691d159-3269-4c83-8526-b76df9680080\" (UID: \"6691d159-3269-4c83-8526-b76df9680080\") " May 8 23:56:49.729268 kubelet[3315]: I0508 23:56:49.726240 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6691d159-3269-4c83-8526-b76df9680080-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6691d159-3269-4c83-8526-b76df9680080" (UID: "6691d159-3269-4c83-8526-b76df9680080"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 23:56:49.729268 kubelet[3315]: I0508 23:56:49.726304 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6691d159-3269-4c83-8526-b76df9680080" (UID: "6691d159-3269-4c83-8526-b76df9680080"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:56:49.729268 kubelet[3315]: I0508 23:56:49.726323 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6691d159-3269-4c83-8526-b76df9680080" (UID: "6691d159-3269-4c83-8526-b76df9680080"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:56:49.729268 kubelet[3315]: I0508 23:56:49.726338 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-cni-path" (OuterVolumeSpecName: "cni-path") pod "6691d159-3269-4c83-8526-b76df9680080" (UID: "6691d159-3269-4c83-8526-b76df9680080"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:56:49.729268 kubelet[3315]: I0508 23:56:49.726353 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6691d159-3269-4c83-8526-b76df9680080" (UID: "6691d159-3269-4c83-8526-b76df9680080"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:56:49.729486 kubelet[3315]: I0508 23:56:49.726367 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-hostproc" (OuterVolumeSpecName: "hostproc") pod "6691d159-3269-4c83-8526-b76df9680080" (UID: "6691d159-3269-4c83-8526-b76df9680080"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:56:49.729486 kubelet[3315]: I0508 23:56:49.726383 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6691d159-3269-4c83-8526-b76df9680080" (UID: "6691d159-3269-4c83-8526-b76df9680080"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:56:49.729486 kubelet[3315]: I0508 23:56:49.726397 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6691d159-3269-4c83-8526-b76df9680080" (UID: "6691d159-3269-4c83-8526-b76df9680080"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:56:49.729486 kubelet[3315]: I0508 23:56:49.728789 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6691d159-3269-4c83-8526-b76df9680080" (UID: "6691d159-3269-4c83-8526-b76df9680080"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:56:49.729486 kubelet[3315]: I0508 23:56:49.728825 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6691d159-3269-4c83-8526-b76df9680080" (UID: "6691d159-3269-4c83-8526-b76df9680080"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:56:49.733463 kubelet[3315]: I0508 23:56:49.731730 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6691d159-3269-4c83-8526-b76df9680080" (UID: "6691d159-3269-4c83-8526-b76df9680080"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:56:49.734007 kubelet[3315]: I0508 23:56:49.733963 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8670c929-31bb-401f-9e83-601f9dbfaa7b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8670c929-31bb-401f-9e83-601f9dbfaa7b" (UID: "8670c929-31bb-401f-9e83-601f9dbfaa7b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 23:56:49.737707 kubelet[3315]: I0508 23:56:49.737666 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6691d159-3269-4c83-8526-b76df9680080-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6691d159-3269-4c83-8526-b76df9680080" (UID: "6691d159-3269-4c83-8526-b76df9680080"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 23:56:49.738221 kubelet[3315]: I0508 23:56:49.738164 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8670c929-31bb-401f-9e83-601f9dbfaa7b-kube-api-access-qnbwc" (OuterVolumeSpecName: "kube-api-access-qnbwc") pod "8670c929-31bb-401f-9e83-601f9dbfaa7b" (UID: "8670c929-31bb-401f-9e83-601f9dbfaa7b"). InnerVolumeSpecName "kube-api-access-qnbwc". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 23:56:49.740470 kubelet[3315]: I0508 23:56:49.738627 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6691d159-3269-4c83-8526-b76df9680080-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6691d159-3269-4c83-8526-b76df9680080" (UID: "6691d159-3269-4c83-8526-b76df9680080"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 23:56:49.740470 kubelet[3315]: I0508 23:56:49.739596 3315 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6691d159-3269-4c83-8526-b76df9680080-kube-api-access-nlsl7" (OuterVolumeSpecName: "kube-api-access-nlsl7") pod "6691d159-3269-4c83-8526-b76df9680080" (UID: "6691d159-3269-4c83-8526-b76df9680080"). InnerVolumeSpecName "kube-api-access-nlsl7". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 23:56:49.824106 kubelet[3315]: I0508 23:56:49.824062 3315 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-etc-cni-netd\") on node \"ci-4152.2.3-n-71d56f534c\" DevicePath \"\"" May 8 23:56:49.824106 kubelet[3315]: I0508 23:56:49.824101 3315 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nlsl7\" (UniqueName: \"kubernetes.io/projected/6691d159-3269-4c83-8526-b76df9680080-kube-api-access-nlsl7\") on node \"ci-4152.2.3-n-71d56f534c\" DevicePath \"\"" May 8 23:56:49.824106 kubelet[3315]: I0508 23:56:49.824112 3315 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8670c929-31bb-401f-9e83-601f9dbfaa7b-cilium-config-path\") on node \"ci-4152.2.3-n-71d56f534c\" DevicePath \"\"" May 8 23:56:49.824285 kubelet[3315]: I0508 23:56:49.824122 3315 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6691d159-3269-4c83-8526-b76df9680080-clustermesh-secrets\") on node \"ci-4152.2.3-n-71d56f534c\" DevicePath \"\"" May 8 23:56:49.824285 kubelet[3315]: I0508 23:56:49.824130 3315 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-host-proc-sys-kernel\") on node \"ci-4152.2.3-n-71d56f534c\" DevicePath \"\"" May 8 23:56:49.824285 kubelet[3315]: I0508 23:56:49.824139 3315 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-bpf-maps\") on node \"ci-4152.2.3-n-71d56f534c\" DevicePath \"\"" May 8 23:56:49.824285 kubelet[3315]: I0508 23:56:49.824148 3315 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qnbwc\" (UniqueName: \"kubernetes.io/projected/8670c929-31bb-401f-9e83-601f9dbfaa7b-kube-api-access-qnbwc\") on node \"ci-4152.2.3-n-71d56f534c\" DevicePath \"\"" May 8 23:56:49.824285 kubelet[3315]: I0508 23:56:49.824159 3315 reconciler_common.go:289] "Volume 
detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-lib-modules\") on node \"ci-4152.2.3-n-71d56f534c\" DevicePath \"\"" May 8 23:56:49.824285 kubelet[3315]: I0508 23:56:49.824167 3315 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6691d159-3269-4c83-8526-b76df9680080-cilium-config-path\") on node \"ci-4152.2.3-n-71d56f534c\" DevicePath \"\"" May 8 23:56:49.824285 kubelet[3315]: I0508 23:56:49.824175 3315 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6691d159-3269-4c83-8526-b76df9680080-hubble-tls\") on node \"ci-4152.2.3-n-71d56f534c\" DevicePath \"\"" May 8 23:56:49.824285 kubelet[3315]: I0508 23:56:49.824183 3315 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-cilium-cgroup\") on node \"ci-4152.2.3-n-71d56f534c\" DevicePath \"\"" May 8 23:56:49.824490 kubelet[3315]: I0508 23:56:49.824190 3315 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-xtables-lock\") on node \"ci-4152.2.3-n-71d56f534c\" DevicePath \"\"" May 8 23:56:49.824490 kubelet[3315]: I0508 23:56:49.824199 3315 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-cni-path\") on node \"ci-4152.2.3-n-71d56f534c\" DevicePath \"\"" May 8 23:56:49.824490 kubelet[3315]: I0508 23:56:49.824208 3315 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-host-proc-sys-net\") on node \"ci-4152.2.3-n-71d56f534c\" DevicePath \"\"" May 8 23:56:49.824490 kubelet[3315]: I0508 23:56:49.824215 3315 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-hostproc\") on node \"ci-4152.2.3-n-71d56f534c\" DevicePath \"\"" May 8 23:56:49.824490 kubelet[3315]: I0508 23:56:49.824223 3315 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6691d159-3269-4c83-8526-b76df9680080-cilium-run\") on node \"ci-4152.2.3-n-71d56f534c\" DevicePath \"\"" May 8 23:56:50.155665 kubelet[3315]: I0508 23:56:50.153569 3315 scope.go:117] "RemoveContainer" containerID="190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a" May 8 23:56:50.158784 containerd[1785]: time="2025-05-08T23:56:50.158680064Z" level=info msg="RemoveContainer for \"190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a\"" May 8 23:56:50.159471 systemd[1]: Removed slice kubepods-burstable-pod6691d159_3269_4c83_8526_b76df9680080.slice - libcontainer container kubepods-burstable-pod6691d159_3269_4c83_8526_b76df9680080.slice. May 8 23:56:50.159557 systemd[1]: kubepods-burstable-pod6691d159_3269_4c83_8526_b76df9680080.slice: Consumed 6.503s CPU time. May 8 23:56:50.168208 systemd[1]: Removed slice kubepods-besteffort-pod8670c929_31bb_401f_9e83_601f9dbfaa7b.slice - libcontainer container kubepods-besteffort-pod8670c929_31bb_401f_9e83_601f9dbfaa7b.slice. 
May 8 23:56:50.174629 containerd[1785]: time="2025-05-08T23:56:50.174590882Z" level=info msg="RemoveContainer for \"190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a\" returns successfully" May 8 23:56:50.176322 kubelet[3315]: I0508 23:56:50.175492 3315 scope.go:117] "RemoveContainer" containerID="de241c6209824d29c2bb36f343bf55d3ab5187f6986001ada99dff5cf7ea8d16" May 8 23:56:50.177668 containerd[1785]: time="2025-05-08T23:56:50.177489685Z" level=info msg="RemoveContainer for \"de241c6209824d29c2bb36f343bf55d3ab5187f6986001ada99dff5cf7ea8d16\"" May 8 23:56:50.186221 containerd[1785]: time="2025-05-08T23:56:50.186154375Z" level=info msg="RemoveContainer for \"de241c6209824d29c2bb36f343bf55d3ab5187f6986001ada99dff5cf7ea8d16\" returns successfully" May 8 23:56:50.186964 kubelet[3315]: I0508 23:56:50.186387 3315 scope.go:117] "RemoveContainer" containerID="4422c5c9ffa28f551e7bdb2d352d98b7208e1ac733cb97346c9cc63fb4779aa3" May 8 23:56:50.187959 containerd[1785]: time="2025-05-08T23:56:50.187922257Z" level=info msg="RemoveContainer for \"4422c5c9ffa28f551e7bdb2d352d98b7208e1ac733cb97346c9cc63fb4779aa3\"" May 8 23:56:50.195547 containerd[1785]: time="2025-05-08T23:56:50.195485946Z" level=info msg="RemoveContainer for \"4422c5c9ffa28f551e7bdb2d352d98b7208e1ac733cb97346c9cc63fb4779aa3\" returns successfully" May 8 23:56:50.195789 kubelet[3315]: I0508 23:56:50.195664 3315 scope.go:117] "RemoveContainer" containerID="e0e8a2a271dfc68e602b8cad410df2e22b2ea609361b3cd1ef14423f4651e60b" May 8 23:56:50.197061 containerd[1785]: time="2025-05-08T23:56:50.197032228Z" level=info msg="RemoveContainer for \"e0e8a2a271dfc68e602b8cad410df2e22b2ea609361b3cd1ef14423f4651e60b\"" May 8 23:56:50.207662 containerd[1785]: time="2025-05-08T23:56:50.207615960Z" level=info msg="RemoveContainer for \"e0e8a2a271dfc68e602b8cad410df2e22b2ea609361b3cd1ef14423f4651e60b\" returns successfully" May 8 23:56:50.208106 kubelet[3315]: I0508 23:56:50.207993 3315 scope.go:117] "RemoveContainer" containerID="b5b6cf4a1a80395112e0daeb29f2bdda7f90c8e972abac8e6539339c025d59b6" May 8 23:56:50.209462 containerd[1785]: time="2025-05-08T23:56:50.209406642Z" level=info msg="RemoveContainer for \"b5b6cf4a1a80395112e0daeb29f2bdda7f90c8e972abac8e6539339c025d59b6\"" May 8 23:56:50.219862 containerd[1785]: time="2025-05-08T23:56:50.219824414Z" level=info msg="RemoveContainer for \"b5b6cf4a1a80395112e0daeb29f2bdda7f90c8e972abac8e6539339c025d59b6\" returns successfully" May 8 23:56:50.220165 kubelet[3315]: I0508 23:56:50.220061 3315 scope.go:117] "RemoveContainer" containerID="190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a" May 8 23:56:50.220337 containerd[1785]: time="2025-05-08T23:56:50.220297375Z" level=error msg="ContainerStatus for \"190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a\": not found" May 8 23:56:50.220498 kubelet[3315]: E0508 23:56:50.220472 3315 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a\": not found" containerID="190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a" May 8 23:56:50.220609 kubelet[3315]: I0508 23:56:50.220506 3315 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a"} err="failed to get container status \"190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a\": rpc error: code = NotFound desc = an error occurred when try to find container \"190d2bb65b3f3c3223f6b59ceaa062f2766d58281e19578ea58ded31769d322a\": not found" May 8 23:56:50.220609 kubelet[3315]: I0508 23:56:50.220607 3315 scope.go:117] "RemoveContainer" containerID="de241c6209824d29c2bb36f343bf55d3ab5187f6986001ada99dff5cf7ea8d16" May 8 23:56:50.220918 containerd[1785]: time="2025-05-08T23:56:50.220881175Z" level=error msg="ContainerStatus for \"de241c6209824d29c2bb36f343bf55d3ab5187f6986001ada99dff5cf7ea8d16\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de241c6209824d29c2bb36f343bf55d3ab5187f6986001ada99dff5cf7ea8d16\": not found" May 8 23:56:50.221073 kubelet[3315]: E0508 23:56:50.221045 3315 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de241c6209824d29c2bb36f343bf55d3ab5187f6986001ada99dff5cf7ea8d16\": not found" containerID="de241c6209824d29c2bb36f343bf55d3ab5187f6986001ada99dff5cf7ea8d16" May 8 23:56:50.221104 kubelet[3315]: I0508 23:56:50.221080 3315 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"de241c6209824d29c2bb36f343bf55d3ab5187f6986001ada99dff5cf7ea8d16"} err="failed to get container status \"de241c6209824d29c2bb36f343bf55d3ab5187f6986001ada99dff5cf7ea8d16\": rpc error: code = NotFound desc = an error occurred when try to find container \"de241c6209824d29c2bb36f343bf55d3ab5187f6986001ada99dff5cf7ea8d16\": not found" May 8 23:56:50.221128 kubelet[3315]: I0508 23:56:50.221102 3315 scope.go:117] "RemoveContainer" containerID="4422c5c9ffa28f551e7bdb2d352d98b7208e1ac733cb97346c9cc63fb4779aa3" May 8 23:56:50.221388 containerd[1785]: time="2025-05-08T23:56:50.221358856Z" level=error msg="ContainerStatus for \"4422c5c9ffa28f551e7bdb2d352d98b7208e1ac733cb97346c9cc63fb4779aa3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4422c5c9ffa28f551e7bdb2d352d98b7208e1ac733cb97346c9cc63fb4779aa3\": not found" May 8 23:56:50.221515 kubelet[3315]: E0508 23:56:50.221491 3315 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4422c5c9ffa28f551e7bdb2d352d98b7208e1ac733cb97346c9cc63fb4779aa3\": not found" containerID="4422c5c9ffa28f551e7bdb2d352d98b7208e1ac733cb97346c9cc63fb4779aa3" May 8 23:56:50.221548 kubelet[3315]: I0508 23:56:50.221520 3315 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4422c5c9ffa28f551e7bdb2d352d98b7208e1ac733cb97346c9cc63fb4779aa3"} err="failed to get container status \"4422c5c9ffa28f551e7bdb2d352d98b7208e1ac733cb97346c9cc63fb4779aa3\": rpc error: code = NotFound desc = an error occurred when try to find container \"4422c5c9ffa28f551e7bdb2d352d98b7208e1ac733cb97346c9cc63fb4779aa3\": not found" May 8 23:56:50.221548 kubelet[3315]: I0508 23:56:50.221535 3315 scope.go:117] "RemoveContainer" containerID="e0e8a2a271dfc68e602b8cad410df2e22b2ea609361b3cd1ef14423f4651e60b" May 8 23:56:50.221700 containerd[1785]: time="2025-05-08T23:56:50.221668936Z" level=error msg="ContainerStatus for \"e0e8a2a271dfc68e602b8cad410df2e22b2ea609361b3cd1ef14423f4651e60b\" 
failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0e8a2a271dfc68e602b8cad410df2e22b2ea609361b3cd1ef14423f4651e60b\": not found" May 8 23:56:50.221892 kubelet[3315]: E0508 23:56:50.221871 3315 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e0e8a2a271dfc68e602b8cad410df2e22b2ea609361b3cd1ef14423f4651e60b\": not found" containerID="e0e8a2a271dfc68e602b8cad410df2e22b2ea609361b3cd1ef14423f4651e60b" May 8 23:56:50.221924 kubelet[3315]: I0508 23:56:50.221894 3315 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e0e8a2a271dfc68e602b8cad410df2e22b2ea609361b3cd1ef14423f4651e60b"} err="failed to get container status \"e0e8a2a271dfc68e602b8cad410df2e22b2ea609361b3cd1ef14423f4651e60b\": rpc error: code = NotFound desc = an error occurred when try to find container \"e0e8a2a271dfc68e602b8cad410df2e22b2ea609361b3cd1ef14423f4651e60b\": not found" May 8 23:56:50.221924 kubelet[3315]: I0508 23:56:50.221920 3315 scope.go:117] "RemoveContainer" containerID="b5b6cf4a1a80395112e0daeb29f2bdda7f90c8e972abac8e6539339c025d59b6" May 8 23:56:50.222215 containerd[1785]: time="2025-05-08T23:56:50.222189417Z" level=error msg="ContainerStatus for \"b5b6cf4a1a80395112e0daeb29f2bdda7f90c8e972abac8e6539339c025d59b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5b6cf4a1a80395112e0daeb29f2bdda7f90c8e972abac8e6539339c025d59b6\": not found" May 8 23:56:50.222453 kubelet[3315]: E0508 23:56:50.222324 3315 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5b6cf4a1a80395112e0daeb29f2bdda7f90c8e972abac8e6539339c025d59b6\": not found" containerID="b5b6cf4a1a80395112e0daeb29f2bdda7f90c8e972abac8e6539339c025d59b6" May 8 23:56:50.222453 kubelet[3315]: I0508 23:56:50.222352 3315 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5b6cf4a1a80395112e0daeb29f2bdda7f90c8e972abac8e6539339c025d59b6"} err="failed to get container status \"b5b6cf4a1a80395112e0daeb29f2bdda7f90c8e972abac8e6539339c025d59b6\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5b6cf4a1a80395112e0daeb29f2bdda7f90c8e972abac8e6539339c025d59b6\": not found" May 8 23:56:50.222453 kubelet[3315]: I0508 23:56:50.222368 3315 scope.go:117] "RemoveContainer" containerID="08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434" May 8 23:56:50.223646 containerd[1785]: time="2025-05-08T23:56:50.223621579Z" level=info msg="RemoveContainer for \"08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434\"" May 8 23:56:50.231149 containerd[1785]: time="2025-05-08T23:56:50.231118027Z" level=info msg="RemoveContainer for \"08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434\" returns successfully" May 8 23:56:50.231672 kubelet[3315]: I0508 23:56:50.231410 3315 scope.go:117] "RemoveContainer" containerID="08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434" May 8 23:56:50.231749 containerd[1785]: time="2025-05-08T23:56:50.231616148Z" level=error msg="ContainerStatus for \"08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434\": not 
found" May 8 23:56:50.231783 kubelet[3315]: E0508 23:56:50.231710 3315 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434\": not found" containerID="08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434" May 8 23:56:50.231783 kubelet[3315]: I0508 23:56:50.231731 3315 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434"} err="failed to get container status \"08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434\": rpc error: code = NotFound desc = an error occurred when try to find container \"08ec5b48b9d108d5ccf241e39d9d981b302fb287bddbfb15f254c2b61fda6434\": not found" May 8 23:56:50.423650 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54d859e2dd3c1f02ad508a89795f6d87fc838e8c39e05e877d240ac40a6f66aa-rootfs.mount: Deactivated successfully. May 8 23:56:50.423746 systemd[1]: var-lib-kubelet-pods-8670c929\x2d31bb\x2d401f\x2d9e83\x2d601f9dbfaa7b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqnbwc.mount: Deactivated successfully. May 8 23:56:50.423807 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe8a5dce1551f6cf8d36b4a0a50d3183b7b1f4803fb5350fef368ed2bf3e16f5-rootfs.mount: Deactivated successfully. May 8 23:56:50.423856 systemd[1]: var-lib-kubelet-pods-6691d159\x2d3269\x2d4c83\x2d8526\x2db76df9680080-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnlsl7.mount: Deactivated successfully. May 8 23:56:50.423903 systemd[1]: var-lib-kubelet-pods-6691d159\x2d3269\x2d4c83\x2d8526\x2db76df9680080-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 23:56:50.423952 systemd[1]: var-lib-kubelet-pods-6691d159\x2d3269\x2d4c83\x2d8526\x2db76df9680080-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 23:56:50.817615 kubelet[3315]: I0508 23:56:50.817511 3315 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6691d159-3269-4c83-8526-b76df9680080" path="/var/lib/kubelet/pods/6691d159-3269-4c83-8526-b76df9680080/volumes" May 8 23:56:50.818303 kubelet[3315]: I0508 23:56:50.818279 3315 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8670c929-31bb-401f-9e83-601f9dbfaa7b" path="/var/lib/kubelet/pods/8670c929-31bb-401f-9e83-601f9dbfaa7b/volumes" May 8 23:56:51.426158 sshd[4907]: Connection closed by 10.200.16.10 port 40464 May 8 23:56:51.426790 sshd-session[4905]: pam_unix(sshd:session): session closed for user core May 8 23:56:51.430337 systemd[1]: sshd@22-10.200.20.33:22-10.200.16.10:40464.service: Deactivated successfully. May 8 23:56:51.431941 systemd[1]: session-25.scope: Deactivated successfully. May 8 23:56:51.433015 systemd-logind[1722]: Session 25 logged out. Waiting for processes to exit. May 8 23:56:51.434273 systemd-logind[1722]: Removed session 25. May 8 23:56:51.510920 systemd[1]: Started sshd@23-10.200.20.33:22-10.200.16.10:40488.service - OpenSSH per-connection server daemon (10.200.16.10:40488). 
May 8 23:56:51.975002 sshd[5070]: Accepted publickey for core from 10.200.16.10 port 40488 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:56:51.976304 sshd-session[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:51.980744 systemd-logind[1722]: New session 26 of user core. May 8 23:56:51.986582 systemd[1]: Started session-26.scope - Session 26 of User core. May 8 23:56:52.909145 kubelet[3315]: E0508 23:56:52.909104 3315 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 23:56:53.031025 kubelet[3315]: I0508 23:56:53.029310 3315 topology_manager.go:215] "Topology Admit Handler" podUID="760d9b3e-f758-4657-87b0-7494acbe9f4a" podNamespace="kube-system" podName="cilium-j7fwf" May 8 23:56:53.031025 kubelet[3315]: E0508 23:56:53.029372 3315 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6691d159-3269-4c83-8526-b76df9680080" containerName="clean-cilium-state" May 8 23:56:53.031025 kubelet[3315]: E0508 23:56:53.029383 3315 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6691d159-3269-4c83-8526-b76df9680080" containerName="apply-sysctl-overwrites" May 8 23:56:53.031025 kubelet[3315]: E0508 23:56:53.029390 3315 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6691d159-3269-4c83-8526-b76df9680080" containerName="mount-bpf-fs" May 8 23:56:53.031025 kubelet[3315]: E0508 23:56:53.029395 3315 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6691d159-3269-4c83-8526-b76df9680080" containerName="mount-cgroup" May 8 23:56:53.031025 kubelet[3315]: E0508 23:56:53.029402 3315 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8670c929-31bb-401f-9e83-601f9dbfaa7b" containerName="cilium-operator" May 8 23:56:53.031025 kubelet[3315]: E0508 23:56:53.029409 3315 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6691d159-3269-4c83-8526-b76df9680080" containerName="cilium-agent" May 8 23:56:53.031025 kubelet[3315]: I0508 23:56:53.029429 3315 memory_manager.go:354] "RemoveStaleState removing state" podUID="6691d159-3269-4c83-8526-b76df9680080" containerName="cilium-agent" May 8 23:56:53.031025 kubelet[3315]: I0508 23:56:53.029450 3315 memory_manager.go:354] "RemoveStaleState removing state" podUID="8670c929-31bb-401f-9e83-601f9dbfaa7b" containerName="cilium-operator" May 8 23:56:53.039244 kubelet[3315]: I0508 23:56:53.039205 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/760d9b3e-f758-4657-87b0-7494acbe9f4a-hubble-tls\") pod \"cilium-j7fwf\" (UID: \"760d9b3e-f758-4657-87b0-7494acbe9f4a\") " pod="kube-system/cilium-j7fwf" May 8 23:56:53.040481 kubelet[3315]: I0508 23:56:53.039611 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/760d9b3e-f758-4657-87b0-7494acbe9f4a-cilium-run\") pod \"cilium-j7fwf\" (UID: \"760d9b3e-f758-4657-87b0-7494acbe9f4a\") " pod="kube-system/cilium-j7fwf" May 8 23:56:53.040481 kubelet[3315]: I0508 23:56:53.039641 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/760d9b3e-f758-4657-87b0-7494acbe9f4a-hostproc\") pod \"cilium-j7fwf\" (UID: \"760d9b3e-f758-4657-87b0-7494acbe9f4a\") " 
pod="kube-system/cilium-j7fwf" May 8 23:56:53.040481 kubelet[3315]: I0508 23:56:53.039659 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/760d9b3e-f758-4657-87b0-7494acbe9f4a-xtables-lock\") pod \"cilium-j7fwf\" (UID: \"760d9b3e-f758-4657-87b0-7494acbe9f4a\") " pod="kube-system/cilium-j7fwf" May 8 23:56:53.040481 kubelet[3315]: I0508 23:56:53.039680 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvwh6\" (UniqueName: \"kubernetes.io/projected/760d9b3e-f758-4657-87b0-7494acbe9f4a-kube-api-access-lvwh6\") pod \"cilium-j7fwf\" (UID: \"760d9b3e-f758-4657-87b0-7494acbe9f4a\") " pod="kube-system/cilium-j7fwf" May 8 23:56:53.040481 kubelet[3315]: I0508 23:56:53.039696 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/760d9b3e-f758-4657-87b0-7494acbe9f4a-cilium-ipsec-secrets\") pod \"cilium-j7fwf\" (UID: \"760d9b3e-f758-4657-87b0-7494acbe9f4a\") " pod="kube-system/cilium-j7fwf" May 8 23:56:53.040481 kubelet[3315]: I0508 23:56:53.039712 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/760d9b3e-f758-4657-87b0-7494acbe9f4a-host-proc-sys-net\") pod \"cilium-j7fwf\" (UID: \"760d9b3e-f758-4657-87b0-7494acbe9f4a\") " pod="kube-system/cilium-j7fwf" May 8 23:56:53.041402 kubelet[3315]: I0508 23:56:53.040714 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/760d9b3e-f758-4657-87b0-7494acbe9f4a-clustermesh-secrets\") pod \"cilium-j7fwf\" (UID: \"760d9b3e-f758-4657-87b0-7494acbe9f4a\") " pod="kube-system/cilium-j7fwf" May 8 23:56:53.041402 kubelet[3315]: I0508 23:56:53.040755 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/760d9b3e-f758-4657-87b0-7494acbe9f4a-cilium-config-path\") pod \"cilium-j7fwf\" (UID: \"760d9b3e-f758-4657-87b0-7494acbe9f4a\") " pod="kube-system/cilium-j7fwf" May 8 23:56:53.041402 kubelet[3315]: I0508 23:56:53.040776 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/760d9b3e-f758-4657-87b0-7494acbe9f4a-etc-cni-netd\") pod \"cilium-j7fwf\" (UID: \"760d9b3e-f758-4657-87b0-7494acbe9f4a\") " pod="kube-system/cilium-j7fwf" May 8 23:56:53.041402 kubelet[3315]: I0508 23:56:53.040793 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/760d9b3e-f758-4657-87b0-7494acbe9f4a-cilium-cgroup\") pod \"cilium-j7fwf\" (UID: \"760d9b3e-f758-4657-87b0-7494acbe9f4a\") " pod="kube-system/cilium-j7fwf" May 8 23:56:53.041402 kubelet[3315]: I0508 23:56:53.040810 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/760d9b3e-f758-4657-87b0-7494acbe9f4a-cni-path\") pod \"cilium-j7fwf\" (UID: \"760d9b3e-f758-4657-87b0-7494acbe9f4a\") " pod="kube-system/cilium-j7fwf" May 8 23:56:53.041402 kubelet[3315]: I0508 23:56:53.040826 3315 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/760d9b3e-f758-4657-87b0-7494acbe9f4a-host-proc-sys-kernel\") pod \"cilium-j7fwf\" (UID: \"760d9b3e-f758-4657-87b0-7494acbe9f4a\") " pod="kube-system/cilium-j7fwf" May 8 23:56:53.041617 kubelet[3315]: I0508 23:56:53.040842 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/760d9b3e-f758-4657-87b0-7494acbe9f4a-bpf-maps\") pod \"cilium-j7fwf\" (UID: \"760d9b3e-f758-4657-87b0-7494acbe9f4a\") " pod="kube-system/cilium-j7fwf" May 8 23:56:53.041617 kubelet[3315]: I0508 23:56:53.040856 3315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/760d9b3e-f758-4657-87b0-7494acbe9f4a-lib-modules\") pod \"cilium-j7fwf\" (UID: \"760d9b3e-f758-4657-87b0-7494acbe9f4a\") " pod="kube-system/cilium-j7fwf" May 8 23:56:53.041931 systemd[1]: Created slice kubepods-burstable-pod760d9b3e_f758_4657_87b0_7494acbe9f4a.slice - libcontainer container kubepods-burstable-pod760d9b3e_f758_4657_87b0_7494acbe9f4a.slice. May 8 23:56:53.075462 sshd[5072]: Connection closed by 10.200.16.10 port 40488 May 8 23:56:53.075950 sshd-session[5070]: pam_unix(sshd:session): session closed for user core May 8 23:56:53.079397 systemd[1]: sshd@23-10.200.20.33:22-10.200.16.10:40488.service: Deactivated successfully. May 8 23:56:53.084797 systemd[1]: session-26.scope: Deactivated successfully. May 8 23:56:53.087285 systemd-logind[1722]: Session 26 logged out. Waiting for processes to exit. May 8 23:56:53.090038 systemd-logind[1722]: Removed session 26. May 8 23:56:53.181780 systemd[1]: Started sshd@24-10.200.20.33:22-10.200.16.10:40502.service - OpenSSH per-connection server daemon (10.200.16.10:40502). May 8 23:56:53.346409 containerd[1785]: time="2025-05-08T23:56:53.346368248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j7fwf,Uid:760d9b3e-f758-4657-87b0-7494acbe9f4a,Namespace:kube-system,Attempt:0,}" May 8 23:56:53.390918 containerd[1785]: time="2025-05-08T23:56:53.390833174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:56:53.391264 containerd[1785]: time="2025-05-08T23:56:53.391145734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:56:53.391264 containerd[1785]: time="2025-05-08T23:56:53.391208374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:56:53.391522 containerd[1785]: time="2025-05-08T23:56:53.391485895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:56:53.410618 systemd[1]: Started cri-containerd-1a868b9bbd1195a9e8c26cda29dc56ef39434dbd0ee7898d2690f0b98c836c36.scope - libcontainer container 1a868b9bbd1195a9e8c26cda29dc56ef39434dbd0ee7898d2690f0b98c836c36. 
May 8 23:56:53.431031 containerd[1785]: time="2025-05-08T23:56:53.430881855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j7fwf,Uid:760d9b3e-f758-4657-87b0-7494acbe9f4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a868b9bbd1195a9e8c26cda29dc56ef39434dbd0ee7898d2690f0b98c836c36\"" May 8 23:56:53.434997 containerd[1785]: time="2025-05-08T23:56:53.434660019Z" level=info msg="CreateContainer within sandbox \"1a868b9bbd1195a9e8c26cda29dc56ef39434dbd0ee7898d2690f0b98c836c36\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 23:56:53.467357 containerd[1785]: time="2025-05-08T23:56:53.467284653Z" level=info msg="CreateContainer within sandbox \"1a868b9bbd1195a9e8c26cda29dc56ef39434dbd0ee7898d2690f0b98c836c36\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1006a54b84ecab634b21e0179e0d50d10632dbe919dd7f36690ee322639a32aa\"" May 8 23:56:53.468096 containerd[1785]: time="2025-05-08T23:56:53.467880213Z" level=info msg="StartContainer for \"1006a54b84ecab634b21e0179e0d50d10632dbe919dd7f36690ee322639a32aa\"" May 8 23:56:53.490611 systemd[1]: Started cri-containerd-1006a54b84ecab634b21e0179e0d50d10632dbe919dd7f36690ee322639a32aa.scope - libcontainer container 1006a54b84ecab634b21e0179e0d50d10632dbe919dd7f36690ee322639a32aa. May 8 23:56:53.516486 containerd[1785]: time="2025-05-08T23:56:53.516411224Z" level=info msg="StartContainer for \"1006a54b84ecab634b21e0179e0d50d10632dbe919dd7f36690ee322639a32aa\" returns successfully" May 8 23:56:53.522956 systemd[1]: cri-containerd-1006a54b84ecab634b21e0179e0d50d10632dbe919dd7f36690ee322639a32aa.scope: Deactivated successfully. May 8 23:56:53.578178 containerd[1785]: time="2025-05-08T23:56:53.578121607Z" level=info msg="shim disconnected" id=1006a54b84ecab634b21e0179e0d50d10632dbe919dd7f36690ee322639a32aa namespace=k8s.io May 8 23:56:53.578385 containerd[1785]: time="2025-05-08T23:56:53.578189407Z" level=warning msg="cleaning up after shim disconnected" id=1006a54b84ecab634b21e0179e0d50d10632dbe919dd7f36690ee322639a32aa namespace=k8s.io May 8 23:56:53.578385 containerd[1785]: time="2025-05-08T23:56:53.578199687Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:56:53.673016 sshd[5086]: Accepted publickey for core from 10.200.16.10 port 40502 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:56:53.674316 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:53.678287 systemd-logind[1722]: New session 27 of user core. May 8 23:56:53.686599 systemd[1]: Started session-27.scope - Session 27 of User core. May 8 23:56:54.017907 sshd[5192]: Connection closed by 10.200.16.10 port 40502 May 8 23:56:54.017378 sshd-session[5086]: pam_unix(sshd:session): session closed for user core May 8 23:56:54.020951 systemd[1]: sshd@24-10.200.20.33:22-10.200.16.10:40502.service: Deactivated successfully. May 8 23:56:54.023698 systemd[1]: session-27.scope: Deactivated successfully. May 8 23:56:54.024620 systemd-logind[1722]: Session 27 logged out. Waiting for processes to exit. May 8 23:56:54.025524 systemd-logind[1722]: Removed session 27. May 8 23:56:54.100739 systemd[1]: Started sshd@25-10.200.20.33:22-10.200.16.10:40506.service - OpenSSH per-connection server daemon (10.200.16.10:40506). May 8 23:56:54.163608 systemd[1]: run-containerd-runc-k8s.io-1a868b9bbd1195a9e8c26cda29dc56ef39434dbd0ee7898d2690f0b98c836c36-runc.JSnyOD.mount: Deactivated successfully. 
May 8 23:56:54.183786 containerd[1785]: time="2025-05-08T23:56:54.183746191Z" level=info msg="CreateContainer within sandbox \"1a868b9bbd1195a9e8c26cda29dc56ef39434dbd0ee7898d2690f0b98c836c36\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 23:56:54.217978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount283700498.mount: Deactivated successfully. May 8 23:56:54.225971 containerd[1785]: time="2025-05-08T23:56:54.225923915Z" level=info msg="CreateContainer within sandbox \"1a868b9bbd1195a9e8c26cda29dc56ef39434dbd0ee7898d2690f0b98c836c36\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"27bf238b2bfe5e74ee520fe3c5c3b0c8a9f67094e840d40917f079e523ec621f\"" May 8 23:56:54.227500 containerd[1785]: time="2025-05-08T23:56:54.226623476Z" level=info msg="StartContainer for \"27bf238b2bfe5e74ee520fe3c5c3b0c8a9f67094e840d40917f079e523ec621f\"" May 8 23:56:54.258611 systemd[1]: Started cri-containerd-27bf238b2bfe5e74ee520fe3c5c3b0c8a9f67094e840d40917f079e523ec621f.scope - libcontainer container 27bf238b2bfe5e74ee520fe3c5c3b0c8a9f67094e840d40917f079e523ec621f. May 8 23:56:54.286626 containerd[1785]: time="2025-05-08T23:56:54.286480097Z" level=info msg="StartContainer for \"27bf238b2bfe5e74ee520fe3c5c3b0c8a9f67094e840d40917f079e523ec621f\" returns successfully" May 8 23:56:54.291702 systemd[1]: cri-containerd-27bf238b2bfe5e74ee520fe3c5c3b0c8a9f67094e840d40917f079e523ec621f.scope: Deactivated successfully. May 8 23:56:54.326331 containerd[1785]: time="2025-05-08T23:56:54.326129578Z" level=info msg="shim disconnected" id=27bf238b2bfe5e74ee520fe3c5c3b0c8a9f67094e840d40917f079e523ec621f namespace=k8s.io May 8 23:56:54.326331 containerd[1785]: time="2025-05-08T23:56:54.326204458Z" level=warning msg="cleaning up after shim disconnected" id=27bf238b2bfe5e74ee520fe3c5c3b0c8a9f67094e840d40917f079e523ec621f namespace=k8s.io May 8 23:56:54.326331 containerd[1785]: time="2025-05-08T23:56:54.326213818Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:56:54.548658 sshd[5198]: Accepted publickey for core from 10.200.16.10 port 40506 ssh2: RSA SHA256:adnolIS9hn4fqCxl0BTxqzaH+vUQ55A48oooQi9uRvI May 8 23:56:54.549979 sshd-session[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:54.554120 systemd-logind[1722]: New session 28 of user core. May 8 23:56:54.556585 systemd[1]: Started session-28.scope - Session 28 of User core. May 8 23:56:55.163608 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27bf238b2bfe5e74ee520fe3c5c3b0c8a9f67094e840d40917f079e523ec621f-rootfs.mount: Deactivated successfully. 
May 8 23:56:55.188208 containerd[1785]: time="2025-05-08T23:56:55.188155467Z" level=info msg="CreateContainer within sandbox \"1a868b9bbd1195a9e8c26cda29dc56ef39434dbd0ee7898d2690f0b98c836c36\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 23:56:55.224151 containerd[1785]: time="2025-05-08T23:56:55.224094824Z" level=info msg="CreateContainer within sandbox \"1a868b9bbd1195a9e8c26cda29dc56ef39434dbd0ee7898d2690f0b98c836c36\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0a4c1bd0ed84529388b9e349e7d3e236b7bd1cfa69a6db08f734d2d17341a1c7\"" May 8 23:56:55.225105 containerd[1785]: time="2025-05-08T23:56:55.225068745Z" level=info msg="StartContainer for \"0a4c1bd0ed84529388b9e349e7d3e236b7bd1cfa69a6db08f734d2d17341a1c7\"" May 8 23:56:55.256634 systemd[1]: Started cri-containerd-0a4c1bd0ed84529388b9e349e7d3e236b7bd1cfa69a6db08f734d2d17341a1c7.scope - libcontainer container 0a4c1bd0ed84529388b9e349e7d3e236b7bd1cfa69a6db08f734d2d17341a1c7. May 8 23:56:55.288379 systemd[1]: cri-containerd-0a4c1bd0ed84529388b9e349e7d3e236b7bd1cfa69a6db08f734d2d17341a1c7.scope: Deactivated successfully. May 8 23:56:55.290866 containerd[1785]: time="2025-05-08T23:56:55.290671492Z" level=info msg="StartContainer for \"0a4c1bd0ed84529388b9e349e7d3e236b7bd1cfa69a6db08f734d2d17341a1c7\" returns successfully" May 8 23:56:55.321808 containerd[1785]: time="2025-05-08T23:56:55.321745484Z" level=info msg="shim disconnected" id=0a4c1bd0ed84529388b9e349e7d3e236b7bd1cfa69a6db08f734d2d17341a1c7 namespace=k8s.io May 8 23:56:55.321808 containerd[1785]: time="2025-05-08T23:56:55.321801125Z" level=warning msg="cleaning up after shim disconnected" id=0a4c1bd0ed84529388b9e349e7d3e236b7bd1cfa69a6db08f734d2d17341a1c7 namespace=k8s.io May 8 23:56:55.321808 containerd[1785]: time="2025-05-08T23:56:55.321810805Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:56:55.890856 kubelet[3315]: I0508 23:56:55.889602 3315 setters.go:580] "Node became not ready" node="ci-4152.2.3-n-71d56f534c" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T23:56:55Z","lastTransitionTime":"2025-05-08T23:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 8 23:56:56.163649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a4c1bd0ed84529388b9e349e7d3e236b7bd1cfa69a6db08f734d2d17341a1c7-rootfs.mount: Deactivated successfully. 
May 8 23:56:56.190770 containerd[1785]: time="2025-05-08T23:56:56.190697460Z" level=info msg="CreateContainer within sandbox \"1a868b9bbd1195a9e8c26cda29dc56ef39434dbd0ee7898d2690f0b98c836c36\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 23:56:56.224345 containerd[1785]: time="2025-05-08T23:56:56.224294775Z" level=info msg="CreateContainer within sandbox \"1a868b9bbd1195a9e8c26cda29dc56ef39434dbd0ee7898d2690f0b98c836c36\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bdce854a4706edf082022bd9cfcfae3f05914c5e4d9e262a18684c57760e25b7\"" May 8 23:56:56.224908 containerd[1785]: time="2025-05-08T23:56:56.224786535Z" level=info msg="StartContainer for \"bdce854a4706edf082022bd9cfcfae3f05914c5e4d9e262a18684c57760e25b7\"" May 8 23:56:56.257748 systemd[1]: Started cri-containerd-bdce854a4706edf082022bd9cfcfae3f05914c5e4d9e262a18684c57760e25b7.scope - libcontainer container bdce854a4706edf082022bd9cfcfae3f05914c5e4d9e262a18684c57760e25b7. May 8 23:56:56.279198 systemd[1]: cri-containerd-bdce854a4706edf082022bd9cfcfae3f05914c5e4d9e262a18684c57760e25b7.scope: Deactivated successfully. May 8 23:56:56.280612 containerd[1785]: time="2025-05-08T23:56:56.280227072Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod760d9b3e_f758_4657_87b0_7494acbe9f4a.slice/cri-containerd-bdce854a4706edf082022bd9cfcfae3f05914c5e4d9e262a18684c57760e25b7.scope/memory.events\": no such file or directory" May 8 23:56:56.285926 containerd[1785]: time="2025-05-08T23:56:56.285879598Z" level=info msg="StartContainer for \"bdce854a4706edf082022bd9cfcfae3f05914c5e4d9e262a18684c57760e25b7\" returns successfully" May 8 23:56:56.315261 containerd[1785]: time="2025-05-08T23:56:56.315025548Z" level=info msg="shim disconnected" id=bdce854a4706edf082022bd9cfcfae3f05914c5e4d9e262a18684c57760e25b7 namespace=k8s.io May 8 23:56:56.315261 containerd[1785]: time="2025-05-08T23:56:56.315100108Z" level=warning msg="cleaning up after shim disconnected" id=bdce854a4706edf082022bd9cfcfae3f05914c5e4d9e262a18684c57760e25b7 namespace=k8s.io May 8 23:56:56.315261 containerd[1785]: time="2025-05-08T23:56:56.315110468Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:56:56.327885 containerd[1785]: time="2025-05-08T23:56:56.327638961Z" level=warning msg="cleanup warnings time=\"2025-05-08T23:56:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 23:56:57.163743 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdce854a4706edf082022bd9cfcfae3f05914c5e4d9e262a18684c57760e25b7-rootfs.mount: Deactivated successfully. 
May 8 23:56:57.195077 containerd[1785]: time="2025-05-08T23:56:57.194934015Z" level=info msg="CreateContainer within sandbox \"1a868b9bbd1195a9e8c26cda29dc56ef39434dbd0ee7898d2690f0b98c836c36\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 23:56:57.240814 containerd[1785]: time="2025-05-08T23:56:57.240755903Z" level=info msg="CreateContainer within sandbox \"1a868b9bbd1195a9e8c26cda29dc56ef39434dbd0ee7898d2690f0b98c836c36\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2e2dac065c961df90789c7830aea51ad2540a9903a08c3f8a6e1b356335dc578\"" May 8 23:56:57.242483 containerd[1785]: time="2025-05-08T23:56:57.241512983Z" level=info msg="StartContainer for \"2e2dac065c961df90789c7830aea51ad2540a9903a08c3f8a6e1b356335dc578\"" May 8 23:56:57.273608 systemd[1]: Started cri-containerd-2e2dac065c961df90789c7830aea51ad2540a9903a08c3f8a6e1b356335dc578.scope - libcontainer container 2e2dac065c961df90789c7830aea51ad2540a9903a08c3f8a6e1b356335dc578. May 8 23:56:57.303637 containerd[1785]: time="2025-05-08T23:56:57.303591007Z" level=info msg="StartContainer for \"2e2dac065c961df90789c7830aea51ad2540a9903a08c3f8a6e1b356335dc578\" returns successfully" May 8 23:56:57.706580 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 8 23:57:00.351858 systemd-networkd[1506]: lxc_health: Link UP May 8 23:57:00.359390 systemd-networkd[1506]: lxc_health: Gained carrier May 8 23:57:01.368815 kubelet[3315]: I0508 23:57:01.368739 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j7fwf" podStartSLOduration=8.368723998 podStartE2EDuration="8.368723998s" podCreationTimestamp="2025-05-08 23:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:56:58.216463628 +0000 UTC m=+165.524266278" watchObservedRunningTime="2025-05-08 23:57:01.368723998 +0000 UTC m=+168.676526648" May 8 23:57:02.322584 systemd-networkd[1506]: lxc_health: Gained IPv6LL May 8 23:57:05.440969 systemd[1]: run-containerd-runc-k8s.io-2e2dac065c961df90789c7830aea51ad2540a9903a08c3f8a6e1b356335dc578-runc.kJczVV.mount: Deactivated successfully. May 8 23:57:05.555497 sshd[5260]: Connection closed by 10.200.16.10 port 40506 May 8 23:57:05.556094 sshd-session[5198]: pam_unix(sshd:session): session closed for user core May 8 23:57:05.559383 systemd[1]: sshd@25-10.200.20.33:22-10.200.16.10:40506.service: Deactivated successfully. May 8 23:57:05.561394 systemd[1]: session-28.scope: Deactivated successfully. May 8 23:57:05.562171 systemd-logind[1722]: Session 28 logged out. Waiting for processes to exit. May 8 23:57:05.563061 systemd-logind[1722]: Removed session 28.