Mar 6 02:53:15.113484 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Mar 6 02:53:15.113503 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Mar 5 23:10:47 -00 2026
Mar 6 02:53:15.113509 kernel: KASLR enabled
Mar 6 02:53:15.113513 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 6 02:53:15.113517 kernel: printk: legacy bootconsole [pl11] enabled
Mar 6 02:53:15.113522 kernel: efi: EFI v2.7 by EDK II
Mar 6 02:53:15.113527 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db83598
Mar 6 02:53:15.113531 kernel: random: crng init done
Mar 6 02:53:15.113535 kernel: secureboot: Secure boot disabled
Mar 6 02:53:15.113539 kernel: ACPI: Early table checksum verification disabled
Mar 6 02:53:15.113543 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Mar 6 02:53:15.113547 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 6 02:53:15.113551 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 6 02:53:15.113555 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 6 02:53:15.113560 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 6 02:53:15.113565 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 6 02:53:15.113569 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 6 02:53:15.113573 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 6 02:53:15.113577 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 6 02:53:15.113583 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 6 02:53:15.113587 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 6 02:53:15.113591 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 6 02:53:15.113595 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 6 02:53:15.113599 kernel: ACPI: Use ACPI SPCR as default console: Yes
Mar 6 02:53:15.113604 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Mar 6 02:53:15.113608 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Mar 6 02:53:15.113612 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Mar 6 02:53:15.113616 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Mar 6 02:53:15.113621 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Mar 6 02:53:15.113625 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Mar 6 02:53:15.113630 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Mar 6 02:53:15.113634 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Mar 6 02:53:15.113638 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Mar 6 02:53:15.113642 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Mar 6 02:53:15.113647 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Mar 6 02:53:15.113651 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Mar 6 02:53:15.113655 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Mar 6 02:53:15.113659 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Mar 6 02:53:15.113663 kernel: Zone ranges:
Mar 6 02:53:15.113668 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 6 02:53:15.113675 kernel: DMA32 empty
Mar 6 02:53:15.113679 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 6 02:53:15.113683 kernel: Device empty
Mar 6 02:53:15.113688 kernel: Movable zone start for each node
Mar 6 02:53:15.113692 kernel: Early memory node ranges
Mar 6 02:53:15.113696 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 6 02:53:15.113702 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Mar 6 02:53:15.113706 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Mar 6 02:53:15.113710 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Mar 6 02:53:15.113715 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Mar 6 02:53:15.113719 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Mar 6 02:53:15.113723 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 6 02:53:15.113728 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 6 02:53:15.113732 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 6 02:53:15.113737 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Mar 6 02:53:15.113741 kernel: psci: probing for conduit method from ACPI.
Mar 6 02:53:15.113745 kernel: psci: PSCIv1.3 detected in firmware.
Mar 6 02:53:15.113750 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 6 02:53:15.113755 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 6 02:53:15.113759 kernel: psci: SMC Calling Convention v1.4
Mar 6 02:53:15.113764 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 6 02:53:15.113768 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 6 02:53:15.113772 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Mar 6 02:53:15.113777 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Mar 6 02:53:15.113781 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 6 02:53:15.113786 kernel: Detected PIPT I-cache on CPU0
Mar 6 02:53:15.113790 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Mar 6 02:53:15.113795 kernel: CPU features: detected: GIC system register CPU interface
Mar 6 02:53:15.113799 kernel: CPU features: detected: Spectre-v4
Mar 6 02:53:15.113803 kernel: CPU features: detected: Spectre-BHB
Mar 6 02:53:15.113808 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 6 02:53:15.113813 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 6 02:53:15.113817 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Mar 6 02:53:15.113822 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 6 02:53:15.113826 kernel: alternatives: applying boot alternatives
Mar 6 02:53:15.113831 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=68c9ef230e3eed1360dd8114dada95b6a934f07952c3a5d42725f3006977f027
Mar 6 02:53:15.113836 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 6 02:53:15.113840 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 6 02:53:15.113845 kernel: Fallback order for Node 0: 0
Mar 6 02:53:15.113849 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Mar 6 02:53:15.113854 kernel: Policy zone: Normal
Mar 6 02:53:15.113859 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 6 02:53:15.113863 kernel: software IO TLB: area num 2.
Mar 6 02:53:15.113868 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB)
Mar 6 02:53:15.113872 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 6 02:53:15.113876 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 6 02:53:15.113881 kernel: rcu: RCU event tracing is enabled.
Mar 6 02:53:15.113886 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 6 02:53:15.113890 kernel: Trampoline variant of Tasks RCU enabled.
Mar 6 02:53:15.113895 kernel: Tracing variant of Tasks RCU enabled.
Mar 6 02:53:15.113899 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 6 02:53:15.113904 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 6 02:53:15.113909 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 6 02:53:15.113914 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 6 02:53:15.113918 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 6 02:53:15.113922 kernel: GICv3: 960 SPIs implemented
Mar 6 02:53:15.113927 kernel: GICv3: 0 Extended SPIs implemented
Mar 6 02:53:15.113931 kernel: Root IRQ handler: gic_handle_irq
Mar 6 02:53:15.113935 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 6 02:53:15.113940 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Mar 6 02:53:15.113944 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 6 02:53:15.113948 kernel: ITS: No ITS available, not enabling LPIs
Mar 6 02:53:15.113953 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 6 02:53:15.113958 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Mar 6 02:53:15.113963 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 6 02:53:15.113967 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Mar 6 02:53:15.113972 kernel: Console: colour dummy device 80x25
Mar 6 02:53:15.113976 kernel: printk: legacy console [tty1] enabled
Mar 6 02:53:15.113981 kernel: ACPI: Core revision 20240827
Mar 6 02:53:15.113986 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Mar 6 02:53:15.113990 kernel: pid_max: default: 32768 minimum: 301
Mar 6 02:53:15.113995 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 6 02:53:15.114000 kernel: landlock: Up and running.
Mar 6 02:53:15.114005 kernel: SELinux: Initializing.
Mar 6 02:53:15.114010 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 6 02:53:15.114014 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 6 02:53:15.114019 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Mar 6 02:53:15.114024 kernel: Hyper-V: Host Build 10.0.26102.1212-1-0
Mar 6 02:53:15.114031 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 6 02:53:15.114037 kernel: rcu: Hierarchical SRCU implementation.
Mar 6 02:53:15.114042 kernel: rcu: Max phase no-delay instances is 400.
Mar 6 02:53:15.114047 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 6 02:53:15.114051 kernel: Remapping and enabling EFI services.
Mar 6 02:53:15.114056 kernel: smp: Bringing up secondary CPUs ...
Mar 6 02:53:15.114061 kernel: Detected PIPT I-cache on CPU1
Mar 6 02:53:15.114066 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 6 02:53:15.114071 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Mar 6 02:53:15.114076 kernel: smp: Brought up 1 node, 2 CPUs
Mar 6 02:53:15.114081 kernel: SMP: Total of 2 processors activated.
Mar 6 02:53:15.114085 kernel: CPU: All CPU(s) started at EL1
Mar 6 02:53:15.114091 kernel: CPU features: detected: 32-bit EL0 Support
Mar 6 02:53:15.114096 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 6 02:53:15.114101 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 6 02:53:15.114106 kernel: CPU features: detected: Common not Private translations
Mar 6 02:53:15.114110 kernel: CPU features: detected: CRC32 instructions
Mar 6 02:53:15.114115 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Mar 6 02:53:15.114120 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 6 02:53:15.114125 kernel: CPU features: detected: LSE atomic instructions
Mar 6 02:53:15.114129 kernel: CPU features: detected: Privileged Access Never
Mar 6 02:53:15.114135 kernel: CPU features: detected: Speculation barrier (SB)
Mar 6 02:53:15.114140 kernel: CPU features: detected: TLB range maintenance instructions
Mar 6 02:53:15.114145 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 6 02:53:15.114149 kernel: CPU features: detected: Scalable Vector Extension
Mar 6 02:53:15.114154 kernel: alternatives: applying system-wide alternatives
Mar 6 02:53:15.114159 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Mar 6 02:53:15.114164 kernel: SVE: maximum available vector length 16 bytes per vector
Mar 6 02:53:15.114168 kernel: SVE: default vector length 16 bytes per vector
Mar 6 02:53:15.114173 kernel: Memory: 3952828K/4194160K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved)
Mar 6 02:53:15.114179 kernel: devtmpfs: initialized
Mar 6 02:53:15.114184 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 6 02:53:15.114189 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 6 02:53:15.114193 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 6 02:53:15.114198 kernel: 0 pages in range for non-PLT usage
Mar 6 02:53:15.114203 kernel: 508400 pages in range for PLT usage
Mar 6 02:53:15.114207 kernel: pinctrl core: initialized pinctrl subsystem
Mar 6 02:53:15.114212 kernel: SMBIOS 3.1.0 present.
Mar 6 02:53:15.114228 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Mar 6 02:53:15.114233 kernel: DMI: Memory slots populated: 2/2
Mar 6 02:53:15.114237 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 6 02:53:15.114242 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 6 02:53:15.114247 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 6 02:53:15.114252 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 6 02:53:15.114257 kernel: audit: initializing netlink subsys (disabled)
Mar 6 02:53:15.114262 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Mar 6 02:53:15.114267 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 6 02:53:15.114272 kernel: cpuidle: using governor menu
Mar 6 02:53:15.114277 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 6 02:53:15.114282 kernel: ASID allocator initialised with 32768 entries
Mar 6 02:53:15.114287 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 6 02:53:15.114292 kernel: Serial: AMBA PL011 UART driver
Mar 6 02:53:15.114296 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 6 02:53:15.114301 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 6 02:53:15.114306 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 6 02:53:15.114311 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 6 02:53:15.114316 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 6 02:53:15.114321 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 6 02:53:15.114326 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 6 02:53:15.114331 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 6 02:53:15.114336 kernel: ACPI: Added _OSI(Module Device)
Mar 6 02:53:15.114340 kernel: ACPI: Added _OSI(Processor Device)
Mar 6 02:53:15.114345 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 6 02:53:15.114350 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 6 02:53:15.114355 kernel: ACPI: Interpreter enabled
Mar 6 02:53:15.114360 kernel: ACPI: Using GIC for interrupt routing
Mar 6 02:53:15.114365 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 6 02:53:15.114370 kernel: printk: legacy console [ttyAMA0] enabled
Mar 6 02:53:15.114375 kernel: printk: legacy bootconsole [pl11] disabled
Mar 6 02:53:15.114380 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 6 02:53:15.114385 kernel: ACPI: CPU0 has been hot-added
Mar 6 02:53:15.114389 kernel: ACPI: CPU1 has been hot-added
Mar 6 02:53:15.114394 kernel: iommu: Default domain type: Translated
Mar 6 02:53:15.114399 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 6 02:53:15.114404 kernel: efivars: Registered efivars operations
Mar 6 02:53:15.114409 kernel: vgaarb: loaded
Mar 6 02:53:15.114414 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 6 02:53:15.114419 kernel: VFS: Disk quotas dquot_6.6.0
Mar 6 02:53:15.114423 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 6 02:53:15.114428 kernel: pnp: PnP ACPI init
Mar 6 02:53:15.114433 kernel: pnp: PnP ACPI: found 0 devices
Mar 6 02:53:15.114437 kernel: NET: Registered PF_INET protocol family
Mar 6 02:53:15.114442 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 6 02:53:15.114447 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 6 02:53:15.114453 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 6 02:53:15.114458 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 6 02:53:15.114462 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 6 02:53:15.114467 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 6 02:53:15.114472 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 6 02:53:15.114477 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 6 02:53:15.114482 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 6 02:53:15.114486 kernel: PCI: CLS 0 bytes, default 64
Mar 6 02:53:15.114491 kernel: kvm [1]: HYP mode not available
Mar 6 02:53:15.114496 kernel: Initialise system trusted keyrings
Mar 6 02:53:15.114501 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 6 02:53:15.114506 kernel: Key type asymmetric registered
Mar 6 02:53:15.114511 kernel: Asymmetric key parser 'x509' registered
Mar 6 02:53:15.114515 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 6 02:53:15.114520 kernel: io scheduler mq-deadline registered
Mar 6 02:53:15.114525 kernel: io scheduler kyber registered
Mar 6 02:53:15.114530 kernel: io scheduler bfq registered
Mar 6 02:53:15.114535 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 6 02:53:15.114540 kernel: thunder_xcv, ver 1.0
Mar 6 02:53:15.114545 kernel: thunder_bgx, ver 1.0
Mar 6 02:53:15.114549 kernel: nicpf, ver 1.0
Mar 6 02:53:15.114554 kernel: nicvf, ver 1.0
Mar 6 02:53:15.114669 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 6 02:53:15.114720 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-06T02:53:14 UTC (1772765594)
Mar 6 02:53:15.114726 kernel: efifb: probing for efifb
Mar 6 02:53:15.114733 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 6 02:53:15.114738 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 6 02:53:15.114743 kernel: efifb: scrolling: redraw
Mar 6 02:53:15.114747 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 6 02:53:15.114752 kernel: Console: switching to colour frame buffer device 128x48
Mar 6 02:53:15.114757 kernel: fb0: EFI VGA frame buffer device
Mar 6 02:53:15.114762 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 6 02:53:15.114767 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 6 02:53:15.114771 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Mar 6 02:53:15.114777 kernel: watchdog: NMI not fully supported
Mar 6 02:53:15.114782 kernel: watchdog: Hard watchdog permanently disabled
Mar 6 02:53:15.114787 kernel: NET: Registered PF_INET6 protocol family
Mar 6 02:53:15.114791 kernel: Segment Routing with IPv6
Mar 6 02:53:15.114796 kernel: In-situ OAM (IOAM) with IPv6
Mar 6 02:53:15.114801 kernel: NET: Registered PF_PACKET protocol family
Mar 6 02:53:15.114806 kernel: Key type dns_resolver registered
Mar 6 02:53:15.114810 kernel: registered taskstats version 1
Mar 6 02:53:15.114815 kernel: Loading compiled-in X.509 certificates
Mar 6 02:53:15.114820 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 3a2ba669b0bb3660035f2ce1faaa856d46d520ff'
Mar 6 02:53:15.114825 kernel: Demotion targets for Node 0: null
Mar 6 02:53:15.114830 kernel: Key type .fscrypt registered
Mar 6 02:53:15.114835 kernel: Key type fscrypt-provisioning registered
Mar 6 02:53:15.114840 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 6 02:53:15.114844 kernel: ima: Allocated hash algorithm: sha1
Mar 6 02:53:15.114849 kernel: ima: No architecture policies found
Mar 6 02:53:15.114854 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 6 02:53:15.114859 kernel: clk: Disabling unused clocks
Mar 6 02:53:15.114863 kernel: PM: genpd: Disabling unused power domains
Mar 6 02:53:15.114869 kernel: Warning: unable to open an initial console.
Mar 6 02:53:15.114874 kernel: Freeing unused kernel memory: 39552K
Mar 6 02:53:15.114879 kernel: Run /init as init process
Mar 6 02:53:15.114883 kernel: with arguments:
Mar 6 02:53:15.114888 kernel: /init
Mar 6 02:53:15.114892 kernel: with environment:
Mar 6 02:53:15.114897 kernel: HOME=/
Mar 6 02:53:15.114902 kernel: TERM=linux
Mar 6 02:53:15.114908 systemd[1]: Successfully made /usr/ read-only.
Mar 6 02:53:15.114915 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 6 02:53:15.114921 systemd[1]: Detected virtualization microsoft.
Mar 6 02:53:15.114926 systemd[1]: Detected architecture arm64.
Mar 6 02:53:15.114931 systemd[1]: Running in initrd.
Mar 6 02:53:15.114936 systemd[1]: No hostname configured, using default hostname.
Mar 6 02:53:15.114941 systemd[1]: Hostname set to .
Mar 6 02:53:15.114946 systemd[1]: Initializing machine ID from random generator.
Mar 6 02:53:15.114952 systemd[1]: Queued start job for default target initrd.target.
Mar 6 02:53:15.114958 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 02:53:15.114963 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 02:53:15.114969 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 6 02:53:15.114974 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 02:53:15.114979 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 6 02:53:15.114985 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 6 02:53:15.114992 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 6 02:53:15.114997 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 6 02:53:15.115002 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 02:53:15.115008 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 02:53:15.115013 systemd[1]: Reached target paths.target - Path Units.
Mar 6 02:53:15.115018 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 02:53:15.115023 systemd[1]: Reached target swap.target - Swaps.
Mar 6 02:53:15.115028 systemd[1]: Reached target timers.target - Timer Units.
Mar 6 02:53:15.115034 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 02:53:15.115039 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 02:53:15.115045 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 6 02:53:15.115050 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 6 02:53:15.115055 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 02:53:15.115060 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 02:53:15.115065 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 02:53:15.115070 systemd[1]: Reached target sockets.target - Socket Units.
Mar 6 02:53:15.115076 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 6 02:53:15.115082 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 02:53:15.115087 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 6 02:53:15.115092 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 6 02:53:15.115098 systemd[1]: Starting systemd-fsck-usr.service...
Mar 6 02:53:15.115103 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 02:53:15.115108 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 02:53:15.115125 systemd-journald[225]: Collecting audit messages is disabled.
Mar 6 02:53:15.115140 systemd-journald[225]: Journal started
Mar 6 02:53:15.115154 systemd-journald[225]: Runtime Journal (/run/log/journal/3275171e6f8241c98f712043ca7a7495) is 8M, max 78.3M, 70.3M free.
Mar 6 02:53:15.129541 systemd-modules-load[227]: Inserted module 'overlay'
Mar 6 02:53:15.135488 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 02:53:15.154149 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 6 02:53:15.154197 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 02:53:15.158179 kernel: Bridge firewalling registered
Mar 6 02:53:15.160605 systemd-modules-load[227]: Inserted module 'br_netfilter'
Mar 6 02:53:15.167247 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 6 02:53:15.176299 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 02:53:15.187642 systemd[1]: Finished systemd-fsck-usr.service.
Mar 6 02:53:15.191786 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 02:53:15.200083 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 02:53:15.211375 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 02:53:15.226491 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 02:53:15.238422 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 02:53:15.254772 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 02:53:15.269824 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 02:53:15.277399 systemd-tmpfiles[250]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 6 02:53:15.284914 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 02:53:15.296829 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 02:53:15.309240 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 02:53:15.319448 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 6 02:53:15.341368 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 6 02:53:15.355783 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 02:53:15.367374 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=68c9ef230e3eed1360dd8114dada95b6a934f07952c3a5d42725f3006977f027
Mar 6 02:53:15.406526 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 02:53:15.422292 systemd-resolved[263]: Positive Trust Anchors:
Mar 6 02:53:15.422308 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 6 02:53:15.422328 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 6 02:53:15.424397 systemd-resolved[263]: Defaulting to hostname 'linux'.
Mar 6 02:53:15.425094 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 6 02:53:15.431862 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 6 02:53:15.519237 kernel: SCSI subsystem initialized
Mar 6 02:53:15.524245 kernel: Loading iSCSI transport class v2.0-870.
Mar 6 02:53:15.532246 kernel: iscsi: registered transport (tcp)
Mar 6 02:53:15.545327 kernel: iscsi: registered transport (qla4xxx)
Mar 6 02:53:15.545386 kernel: QLogic iSCSI HBA Driver
Mar 6 02:53:15.558656 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 6 02:53:15.585249 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 02:53:15.593365 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 6 02:53:15.641863 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 6 02:53:15.647865 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 6 02:53:15.709236 kernel: raid6: neonx8 gen() 18525 MB/s
Mar 6 02:53:15.725230 kernel: raid6: neonx4 gen() 18561 MB/s
Mar 6 02:53:15.744227 kernel: raid6: neonx2 gen() 17090 MB/s
Mar 6 02:53:15.764228 kernel: raid6: neonx1 gen() 15108 MB/s
Mar 6 02:53:15.785228 kernel: raid6: int64x8 gen() 10546 MB/s
Mar 6 02:53:15.804317 kernel: raid6: int64x4 gen() 10606 MB/s
Mar 6 02:53:15.824229 kernel: raid6: int64x2 gen() 9000 MB/s
Mar 6 02:53:15.846010 kernel: raid6: int64x1 gen() 7059 MB/s
Mar 6 02:53:15.846020 kernel: raid6: using algorithm neonx4 gen() 18561 MB/s
Mar 6 02:53:15.867853 kernel: raid6: .... xor() 15147 MB/s, rmw enabled
Mar 6 02:53:15.867915 kernel: raid6: using neon recovery algorithm
Mar 6 02:53:15.877100 kernel: xor: measuring software checksum speed
Mar 6 02:53:15.877174 kernel: 8regs : 28645 MB/sec
Mar 6 02:53:15.881477 kernel: 32regs : 28733 MB/sec
Mar 6 02:53:15.884568 kernel: arm64_neon : 37611 MB/sec
Mar 6 02:53:15.888399 kernel: xor: using function: arm64_neon (37611 MB/sec)
Mar 6 02:53:15.928253 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 6 02:53:15.933617 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 02:53:15.944373 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 02:53:15.966038 systemd-udevd[474]: Using default interface naming scheme 'v255'.
Mar 6 02:53:15.969019 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 02:53:15.988125 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 6 02:53:16.012800 dracut-pre-trigger[495]: rd.md=0: removing MD RAID activation
Mar 6 02:53:16.036000 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 6 02:53:16.042418 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 02:53:16.088023 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 02:53:16.101783 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 6 02:53:16.158250 kernel: hv_vmbus: Vmbus version:5.3
Mar 6 02:53:16.171134 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 02:53:16.194751 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 6 02:53:16.194776 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 6 02:53:16.194783 kernel: hv_vmbus: registering driver hid_hyperv
Mar 6 02:53:16.194790 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 6 02:53:16.171246 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 02:53:16.208932 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Mar 6 02:53:16.188777 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 02:53:16.229106 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 6 02:53:16.231527 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Mar 6 02:53:16.232067 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 02:53:16.256502 kernel: hv_vmbus: registering driver hv_netvsc
Mar 6 02:53:16.256521 kernel: PTP clock support registered
Mar 6 02:53:16.256535 kernel: hv_utils: Registering HyperV Utility Driver
Mar 6 02:53:16.256542 kernel: hv_vmbus: registering driver hv_utils
Mar 6 02:53:16.256548 kernel: hv_utils: Heartbeat IC version 3.0
Mar 6 02:53:16.256554 kernel: hv_utils: Shutdown IC version 3.2
Mar 6 02:53:16.256560 kernel: hv_utils: TimeSync IC version 4.0
Mar 6 02:53:16.256566 kernel: hv_vmbus: registering driver hv_storvsc
Mar 6 02:53:16.256572 kernel: scsi host1: storvsc_host_t
Mar 6 02:53:16.256708 kernel: scsi host0: storvsc_host_t
Mar 6 02:53:16.256802 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 6 02:53:16.256874 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 6 02:53:16.216970 systemd-resolved[263]: Clock change detected. Flushing caches.
Mar 6 02:53:16.276000 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 6 02:53:16.276165 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 6 02:53:16.276232 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 6 02:53:16.261842 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 6 02:53:16.294274 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 6 02:53:16.298941 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 6 02:53:16.299026 kernel: hv_netvsc 000d3af6-0c55-000d-3af6-0c55000d3af6 eth0: VF slot 1 added
Mar 6 02:53:16.268158 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 02:53:16.268239 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 02:53:16.294466 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 02:53:16.327747 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 6 02:53:16.327794 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 6 02:53:16.332232 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 02:53:16.350849 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 6 02:53:16.351048 kernel: hv_vmbus: registering driver hv_pci
Mar 6 02:53:16.351056 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 6 02:53:16.351062 kernel: hv_pci c7916990-7467-43ce-9902-5147464cff33: PCI VMBus probing: Using version 0x10004
Mar 6 02:53:16.357973 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 6 02:53:16.370468 kernel: hv_pci c7916990-7467-43ce-9902-5147464cff33: PCI host bridge to bus 7467:00
Mar 6 02:53:16.370692 kernel: pci_bus 7467:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 6 02:53:16.370811 kernel: pci_bus 7467:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 6 02:53:16.377765 kernel: pci 7467:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Mar 6 02:53:16.388344 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#303 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 6 02:53:16.388577 kernel: pci 7467:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 6 02:53:16.398777 kernel: pci 7467:00:02.0: enabling Extended Tags
Mar 6 02:53:16.424479 kernel: pci 7467:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 7467:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Mar 6 02:53:16.424702 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#282 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 6 02:53:16.425144 kernel: pci_bus 7467:00: busn_res: [bus 00-ff] end is updated to 00
Mar 6 02:53:16.433719 kernel: pci 7467:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Mar 6 02:53:16.496779 kernel: mlx5_core 7467:00:02.0: enabling device (0000 -> 0002)
Mar 6 02:53:16.505912 kernel: mlx5_core 7467:00:02.0: PTM is not supported by PCIe
Mar 6 02:53:16.506071 kernel: mlx5_core 7467:00:02.0: firmware version: 16.30.5026
Mar 6 02:53:16.687289 kernel: hv_netvsc 000d3af6-0c55-000d-3af6-0c55000d3af6 eth0: VF registering: eth1
Mar 6 02:53:16.687514 kernel: mlx5_core 7467:00:02.0 eth1: joined to eth0
Mar 6 02:53:16.693767 kernel: mlx5_core 7467:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 6 02:53:16.703797 kernel: mlx5_core 7467:00:02.0 enP29799s1: renamed from eth1
Mar 6 02:53:16.865969 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 6 02:53:16.971324 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 6 02:53:16.984672 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 6 02:53:16.990369 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 6 02:53:16.996650 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 6 02:53:17.029582 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 6 02:53:17.045279 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 6 02:53:17.051216 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 02:53:17.061884 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 02:53:17.072234 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 02:53:17.083877 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 6 02:53:17.101340 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 6 02:53:17.111423 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 6 02:53:18.122762 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 6 02:53:18.123532 disk-uuid[665]: The operation has completed successfully.
Mar 6 02:53:18.197309 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 6 02:53:18.197417 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 6 02:53:18.223398 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 6 02:53:18.248263 sh[833]: Success
Mar 6 02:53:18.284748 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 6 02:53:18.284826 kernel: device-mapper: uevent: version 1.0.3
Mar 6 02:53:18.291890 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 6 02:53:18.301756 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Mar 6 02:53:18.577033 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 6 02:53:18.592938 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 6 02:53:18.600283 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 6 02:53:18.632762 kernel: BTRFS: device fsid fcb4e7bf-1206-4803-90fb-6606b15e3aea devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (851)
Mar 6 02:53:18.644226 kernel: BTRFS info (device dm-0): first mount of filesystem fcb4e7bf-1206-4803-90fb-6606b15e3aea
Mar 6 02:53:18.644250 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 6 02:53:18.880038 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 6 02:53:18.880123 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 6 02:53:18.914584 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 6 02:53:18.919391 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 6 02:53:18.928918 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 6 02:53:18.929677 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 6 02:53:18.956798 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 6 02:53:18.992749 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (874)
Mar 6 02:53:19.006828 kernel: BTRFS info (device sda6): first mount of filesystem 890f9900-ea91-473b-9515-ad9b05b1880b
Mar 6 02:53:19.006892 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 6 02:53:19.034279 kernel: BTRFS info (device sda6): turning on async discard
Mar 6 02:53:19.034353 kernel: BTRFS info (device sda6): enabling free space tree
Mar 6 02:53:19.045788 kernel: BTRFS info (device sda6): last unmount of filesystem 890f9900-ea91-473b-9515-ad9b05b1880b
Mar 6 02:53:19.048953 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 6 02:53:19.058058 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 6 02:53:19.103138 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 02:53:19.116452 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 6 02:53:19.146611 systemd-networkd[1020]: lo: Link UP
Mar 6 02:53:19.146619 systemd-networkd[1020]: lo: Gained carrier
Mar 6 02:53:19.147865 systemd-networkd[1020]: Enumeration completed
Mar 6 02:53:19.150845 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 6 02:53:19.157028 systemd-networkd[1020]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 02:53:19.157031 systemd-networkd[1020]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 6 02:53:19.157572 systemd[1]: Reached target network.target - Network.
Mar 6 02:53:19.234762 kernel: mlx5_core 7467:00:02.0 enP29799s1: Link up
Mar 6 02:53:19.272762 kernel: hv_netvsc 000d3af6-0c55-000d-3af6-0c55000d3af6 eth0: Data path switched to VF: enP29799s1
Mar 6 02:53:19.273091 systemd-networkd[1020]: enP29799s1: Link UP
Mar 6 02:53:19.273157 systemd-networkd[1020]: eth0: Link UP
Mar 6 02:53:19.273223 systemd-networkd[1020]: eth0: Gained carrier
Mar 6 02:53:19.273236 systemd-networkd[1020]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 02:53:19.295356 systemd-networkd[1020]: enP29799s1: Gained carrier
Mar 6 02:53:19.309778 systemd-networkd[1020]: eth0: DHCPv4 address 10.200.20.16/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 6 02:53:20.195404 ignition[969]: Ignition 2.22.0
Mar 6 02:53:20.195416 ignition[969]: Stage: fetch-offline
Mar 6 02:53:20.198451 ignition[969]: no configs at "/usr/lib/ignition/base.d"
Mar 6 02:53:20.202294 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 6 02:53:20.198459 ignition[969]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 6 02:53:20.214287 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 6 02:53:20.198547 ignition[969]: parsed url from cmdline: ""
Mar 6 02:53:20.198549 ignition[969]: no config URL provided
Mar 6 02:53:20.198553 ignition[969]: reading system config file "/usr/lib/ignition/user.ign"
Mar 6 02:53:20.198558 ignition[969]: no config at "/usr/lib/ignition/user.ign"
Mar 6 02:53:20.198562 ignition[969]: failed to fetch config: resource requires networking
Mar 6 02:53:20.198690 ignition[969]: Ignition finished successfully
Mar 6 02:53:20.248950 ignition[1031]: Ignition 2.22.0
Mar 6 02:53:20.248956 ignition[1031]: Stage: fetch
Mar 6 02:53:20.249193 ignition[1031]: no configs at "/usr/lib/ignition/base.d"
Mar 6 02:53:20.249201 ignition[1031]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 6 02:53:20.249283 ignition[1031]: parsed url from cmdline: ""
Mar 6 02:53:20.249286 ignition[1031]: no config URL provided
Mar 6 02:53:20.249289 ignition[1031]: reading system config file "/usr/lib/ignition/user.ign"
Mar 6 02:53:20.249296 ignition[1031]: no config at "/usr/lib/ignition/user.ign"
Mar 6 02:53:20.249311 ignition[1031]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 6 02:53:20.349771 ignition[1031]: GET result: OK
Mar 6 02:53:20.349839 ignition[1031]: config has been read from IMDS userdata
Mar 6 02:53:20.352717 unknown[1031]: fetched base config from "system"
Mar 6 02:53:20.349870 ignition[1031]: parsing config with SHA512: eb4152bab720f4752858e25533205bb66e4bf29fc64e280e619782e3e72d05c670c60d3399fe7ea26e98dd5eeb4f59f0a5f69936302783014c2f80caaf954437
Mar 6 02:53:20.352722 unknown[1031]: fetched base config from "system"
Mar 6 02:53:20.353132 ignition[1031]: fetch: fetch complete
Mar 6 02:53:20.352725 unknown[1031]: fetched user config from "azure"
Mar 6 02:53:20.353136 ignition[1031]: fetch: fetch passed
Mar 6 02:53:20.358272 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 6 02:53:20.353177 ignition[1031]: Ignition finished successfully
Mar 6 02:53:20.367477 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 6 02:53:20.409095 ignition[1037]: Ignition 2.22.0
Mar 6 02:53:20.409105 ignition[1037]: Stage: kargs
Mar 6 02:53:20.409298 ignition[1037]: no configs at "/usr/lib/ignition/base.d"
Mar 6 02:53:20.418769 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 6 02:53:20.409305 ignition[1037]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 6 02:53:20.426022 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 6 02:53:20.412572 ignition[1037]: kargs: kargs passed
Mar 6 02:53:20.412626 ignition[1037]: Ignition finished successfully
Mar 6 02:53:20.456042 ignition[1044]: Ignition 2.22.0
Mar 6 02:53:20.456055 ignition[1044]: Stage: disks
Mar 6 02:53:20.461971 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 6 02:53:20.456218 ignition[1044]: no configs at "/usr/lib/ignition/base.d"
Mar 6 02:53:20.467349 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 6 02:53:20.456225 ignition[1044]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 6 02:53:20.477189 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 6 02:53:20.456710 ignition[1044]: disks: disks passed
Mar 6 02:53:20.486384 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 02:53:20.456791 ignition[1044]: Ignition finished successfully
Mar 6 02:53:20.496023 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 6 02:53:20.506441 systemd[1]: Reached target basic.target - Basic System.
Mar 6 02:53:20.517708 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 6 02:53:20.616227 systemd-fsck[1052]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Mar 6 02:53:20.625296 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 6 02:53:20.639230 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 6 02:53:20.787891 systemd-networkd[1020]: eth0: Gained IPv6LL
Mar 6 02:53:20.879753 kernel: EXT4-fs (sda9): mounted filesystem f0884ab3-756d-49e8-9d95-af187b4f35fb r/w with ordered data mode. Quota mode: none.
Mar 6 02:53:20.880446 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 6 02:53:20.884470 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 6 02:53:20.908302 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 02:53:20.923323 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 6 02:53:20.942752 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1066)
Mar 6 02:53:20.950571 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 6 02:53:20.963473 kernel: BTRFS info (device sda6): first mount of filesystem 890f9900-ea91-473b-9515-ad9b05b1880b
Mar 6 02:53:20.963510 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 6 02:53:20.968245 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 6 02:53:20.990821 kernel: BTRFS info (device sda6): turning on async discard
Mar 6 02:53:20.990842 kernel: BTRFS info (device sda6): enabling free space tree
Mar 6 02:53:20.968287 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 02:53:20.978396 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 6 02:53:20.998779 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 02:53:21.006796 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 6 02:53:21.569221 coreos-metadata[1068]: Mar 06 02:53:21.569 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 6 02:53:21.578329 coreos-metadata[1068]: Mar 06 02:53:21.578 INFO Fetch successful
Mar 6 02:53:21.583185 coreos-metadata[1068]: Mar 06 02:53:21.578 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 6 02:53:21.593430 coreos-metadata[1068]: Mar 06 02:53:21.593 INFO Fetch successful
Mar 6 02:53:21.608797 coreos-metadata[1068]: Mar 06 02:53:21.608 INFO wrote hostname ci-4459.2.3-n-b98e3238ca to /sysroot/etc/hostname
Mar 6 02:53:21.616593 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 6 02:53:21.690007 initrd-setup-root[1097]: cut: /sysroot/etc/passwd: No such file or directory
Mar 6 02:53:21.733791 initrd-setup-root[1104]: cut: /sysroot/etc/group: No such file or directory
Mar 6 02:53:21.741953 initrd-setup-root[1111]: cut: /sysroot/etc/shadow: No such file or directory
Mar 6 02:53:21.749556 initrd-setup-root[1118]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 6 02:53:22.847802 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 6 02:53:22.855011 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 6 02:53:22.874445 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 6 02:53:22.887950 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 6 02:53:22.898748 kernel: BTRFS info (device sda6): last unmount of filesystem 890f9900-ea91-473b-9515-ad9b05b1880b
Mar 6 02:53:22.922804 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 6 02:53:22.931376 ignition[1186]: INFO : Ignition 2.22.0
Mar 6 02:53:22.931376 ignition[1186]: INFO : Stage: mount
Mar 6 02:53:22.931376 ignition[1186]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 02:53:22.931376 ignition[1186]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 6 02:53:22.931376 ignition[1186]: INFO : mount: mount passed
Mar 6 02:53:22.931376 ignition[1186]: INFO : Ignition finished successfully
Mar 6 02:53:22.934011 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 6 02:53:22.940253 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 6 02:53:22.971844 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 02:53:23.004753 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1198)
Mar 6 02:53:23.017107 kernel: BTRFS info (device sda6): first mount of filesystem 890f9900-ea91-473b-9515-ad9b05b1880b
Mar 6 02:53:23.017165 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 6 02:53:23.027352 kernel: BTRFS info (device sda6): turning on async discard
Mar 6 02:53:23.027423 kernel: BTRFS info (device sda6): enabling free space tree
Mar 6 02:53:23.028829 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 02:53:23.056554 ignition[1216]: INFO : Ignition 2.22.0
Mar 6 02:53:23.056554 ignition[1216]: INFO : Stage: files
Mar 6 02:53:23.065058 ignition[1216]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 02:53:23.065058 ignition[1216]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 6 02:53:23.065058 ignition[1216]: DEBUG : files: compiled without relabeling support, skipping
Mar 6 02:53:23.083817 ignition[1216]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 6 02:53:23.083817 ignition[1216]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 6 02:53:23.137063 ignition[1216]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 6 02:53:23.144825 ignition[1216]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 6 02:53:23.144825 ignition[1216]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 6 02:53:23.143244 unknown[1216]: wrote ssh authorized keys file for user: core
Mar 6 02:53:23.179570 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 6 02:53:23.188365 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 6 02:53:23.212171 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 6 02:53:23.332308 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 6 02:53:23.332308 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 6 02:53:23.350974 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 6 02:53:23.653866 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 6 02:53:23.768633 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 6 02:53:23.777052 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 6 02:53:23.777052 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 6 02:53:23.777052 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 02:53:23.777052 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 02:53:23.777052 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 02:53:23.777052 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 02:53:23.777052 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 6 02:53:23.777052 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 6 02:53:23.777052 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 6 02:53:23.777052 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 6 02:53:23.777052 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 6 02:53:23.875183 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 6 02:53:23.875183 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 6 02:53:23.875183 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1
Mar 6 02:53:24.204641 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 6 02:53:24.880681 ignition[1216]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 6 02:53:24.880681 ignition[1216]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 6 02:53:24.934696 ignition[1216]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 6 02:53:24.948770 ignition[1216]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 6 02:53:24.948770 ignition[1216]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 6 02:53:24.966969 ignition[1216]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 6 02:53:24.966969 ignition[1216]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 6 02:53:24.966969 ignition[1216]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 6 02:53:24.966969 ignition[1216]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 6 02:53:24.966969 ignition[1216]: INFO : files: files passed
Mar 6 02:53:24.966969 ignition[1216]: INFO : Ignition finished successfully
Mar 6 02:53:24.958496 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 6 02:53:24.974881 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 6 02:53:25.002303 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 6 02:53:25.022984 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 6 02:53:25.023067 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 6 02:53:25.064758 initrd-setup-root-after-ignition[1245]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 02:53:25.064758 initrd-setup-root-after-ignition[1245]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 02:53:25.080398 initrd-setup-root-after-ignition[1249]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 02:53:25.074041 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 6 02:53:25.086935 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 6 02:53:25.100009 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 6 02:53:25.155320 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 6 02:53:25.155425 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 6 02:53:25.168073 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 6 02:53:25.178319 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 6 02:53:25.188126 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 6 02:53:25.188937 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 6 02:53:25.227412 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 02:53:25.238913 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 6 02:53:25.265431 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 6 02:53:25.271101 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 02:53:25.283157 systemd[1]: Stopped target timers.target - Timer Units.
Mar 6 02:53:25.293281 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 6 02:53:25.293401 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 02:53:25.309652 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 6 02:53:25.314862 systemd[1]: Stopped target basic.target - Basic System.
Mar 6 02:53:25.325267 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 6 02:53:25.335714 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 02:53:25.345306 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 6 02:53:25.355987 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 6 02:53:25.367393 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 6 02:53:25.378206 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 02:53:25.390011 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 6 02:53:25.400396 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 6 02:53:25.412822 systemd[1]: Stopped target swap.target - Swaps.
Mar 6 02:53:25.421535 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 6 02:53:25.421656 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 6 02:53:25.434708 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 6 02:53:25.440152 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 02:53:25.451485 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 6 02:53:25.456792 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 02:53:25.463978 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 6 02:53:25.464087 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 6 02:53:25.479588 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 6 02:53:25.479680 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 6 02:53:25.485568 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 6 02:53:25.485642 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 6 02:53:25.495049 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 6 02:53:25.569541 ignition[1269]: INFO : Ignition 2.22.0
Mar 6 02:53:25.569541 ignition[1269]: INFO : Stage: umount
Mar 6 02:53:25.569541 ignition[1269]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 02:53:25.569541 ignition[1269]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 6 02:53:25.569541 ignition[1269]: INFO : umount: umount passed
Mar 6 02:53:25.569541 ignition[1269]: INFO : Ignition finished successfully
Mar 6 02:53:25.495120 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 6 02:53:25.509942 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 6 02:53:25.545961 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 6 02:53:25.558444 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 6 02:53:25.558618 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 02:53:25.573958 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 6 02:53:25.574056 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 6 02:53:25.590055 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 6 02:53:25.590140 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 6 02:53:25.602827 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 6 02:53:25.605673 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 6 02:53:25.605777 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 6 02:53:25.614989 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 6 02:53:25.615031 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 6 02:53:25.625606 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 6 02:53:25.625662 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 6 02:53:25.636010 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 6 02:53:25.636048 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 6 02:53:25.645919 systemd[1]: Stopped target network.target - Network.
Mar 6 02:53:25.655323 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 6 02:53:25.655373 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 6 02:53:25.667479 systemd[1]: Stopped target paths.target - Path Units.
Mar 6 02:53:25.680182 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 6 02:53:25.683751 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 02:53:25.691230 systemd[1]: Stopped target slices.target - Slice Units.
Mar 6 02:53:25.702173 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 6 02:53:25.711292 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 6 02:53:25.711343 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 02:53:25.721312 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 6 02:53:25.721345 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 02:53:25.731704 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 6 02:53:25.731768 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 6 02:53:25.741271 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 6 02:53:25.741299 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 6 02:53:25.754181 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 6 02:53:25.765022 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 6 02:53:25.789269 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 6 02:53:25.789459 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 6 02:53:25.993531 kernel: hv_netvsc 000d3af6-0c55-000d-3af6-0c55000d3af6 eth0: Data path switched from VF: enP29799s1
Mar 6 02:53:25.801698 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 6 02:53:25.801907 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 6 02:53:25.802011 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 6 02:53:25.817568 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 6 02:53:25.818107 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 6 02:53:25.828678 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 6 02:53:25.828716 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 02:53:25.839214 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 6 02:53:25.850835 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 6 02:53:25.850897 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 02:53:25.864265 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 6 02:53:25.864313 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 6 02:53:25.877651 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 6 02:53:25.877691 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 6 02:53:25.883621 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 6 02:53:25.883661 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 02:53:25.894082 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 02:53:25.903265 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 6 02:53:25.903323 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 6 02:53:25.924434 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 6 02:53:25.924626 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 02:53:25.934656 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 6 02:53:25.934694 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 6 02:53:25.944652 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 6 02:53:25.944684 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 02:53:25.955574 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 6 02:53:25.955621 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 02:53:25.979289 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 6 02:53:25.979360 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 6 02:53:25.993636 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 6 02:53:25.993692 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 02:53:26.008706 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 6 02:53:26.033433 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 6 02:53:26.033509 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 02:53:26.040664 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 6 02:53:26.040705 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 02:53:26.062707 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 6 02:53:26.062798 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 02:53:26.080889 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 6 02:53:26.080951 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 02:53:26.087415 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 02:53:26.328679 systemd-journald[225]: Received SIGTERM from PID 1 (systemd).
Mar 6 02:53:26.087456 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 02:53:26.108722 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 6 02:53:26.108791 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Mar 6 02:53:26.108813 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 6 02:53:26.108840 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 6 02:53:26.109151 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 6 02:53:26.109247 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 6 02:53:26.117790 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 6 02:53:26.117861 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 6 02:53:26.127223 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 6 02:53:26.127294 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 6 02:53:26.138385 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 6 02:53:26.148012 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 6 02:53:26.148118 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 6 02:53:26.159859 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 6 02:53:26.191953 systemd[1]: Switching root.
Mar 6 02:53:26.416747 systemd-journald[225]: Journal stopped
Mar 6 02:53:31.112255 kernel: SELinux: policy capability network_peer_controls=1
Mar 6 02:53:31.117738 kernel: SELinux: policy capability open_perms=1
Mar 6 02:53:31.117761 kernel: SELinux: policy capability extended_socket_class=1
Mar 6 02:53:31.117768 kernel: SELinux: policy capability always_check_network=0
Mar 6 02:53:31.117777 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 6 02:53:31.117798 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 6 02:53:31.117804 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 6 02:53:31.117809 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 6 02:53:31.117815 kernel: SELinux: policy capability userspace_initial_context=0
Mar 6 02:53:31.117820 kernel: audit: type=1403 audit(1772765607.459:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 6 02:53:31.117830 systemd[1]: Successfully loaded SELinux policy in 259.043ms.
Mar 6 02:53:31.117840 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.335ms.
Mar 6 02:53:31.117846 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 6 02:53:31.117854 systemd[1]: Detected virtualization microsoft.
Mar 6 02:53:31.117860 systemd[1]: Detected architecture arm64.
Mar 6 02:53:31.117866 systemd[1]: Detected first boot.
Mar 6 02:53:31.117873 systemd[1]: Hostname set to .
Mar 6 02:53:31.117879 systemd[1]: Initializing machine ID from random generator.
Mar 6 02:53:31.117886 zram_generator::config[1315]: No configuration found.
Mar 6 02:53:31.117892 kernel: NET: Registered PF_VSOCK protocol family
Mar 6 02:53:31.117898 systemd[1]: Populated /etc with preset unit settings.
Mar 6 02:53:31.117905 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 6 02:53:31.117911 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 6 02:53:31.117917 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 6 02:53:31.117924 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 6 02:53:31.117930 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 6 02:53:31.117936 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 6 02:53:31.117942 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 6 02:53:31.117949 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 6 02:53:31.117955 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 6 02:53:31.117962 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 6 02:53:31.117969 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 6 02:53:31.117975 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 6 02:53:31.117981 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 02:53:31.117988 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 02:53:31.117993 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 6 02:53:31.118000 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 6 02:53:31.118006 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 6 02:53:31.118013 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 02:53:31.118019 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 6 02:53:31.118027 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 02:53:31.118033 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 02:53:31.118039 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 6 02:53:31.118045 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 6 02:53:31.118052 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 6 02:53:31.118058 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 6 02:53:31.118065 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 02:53:31.118072 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 02:53:31.118078 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 02:53:31.118084 systemd[1]: Reached target swap.target - Swaps.
Mar 6 02:53:31.118090 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 6 02:53:31.118096 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 6 02:53:31.118104 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 6 02:53:31.118111 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 02:53:31.118117 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 02:53:31.118123 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 02:53:31.118130 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 6 02:53:31.118136 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 6 02:53:31.118142 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 6 02:53:31.118149 systemd[1]: Mounting media.mount - External Media Directory...
Mar 6 02:53:31.118156 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 6 02:53:31.118162 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 6 02:53:31.118168 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 6 02:53:31.118174 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 6 02:53:31.118181 systemd[1]: Reached target machines.target - Containers.
Mar 6 02:53:31.118187 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 6 02:53:31.118194 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 02:53:31.118202 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 02:53:31.118208 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 6 02:53:31.118214 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 6 02:53:31.118221 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 6 02:53:31.118227 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 6 02:53:31.118233 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 6 02:53:31.118240 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 6 02:53:31.118247 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 6 02:53:31.118253 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 6 02:53:31.118260 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 6 02:53:31.118266 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 6 02:53:31.118273 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 6 02:53:31.118279 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 6 02:53:31.118286 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 02:53:31.118292 kernel: fuse: init (API version 7.41)
Mar 6 02:53:31.118322 systemd-journald[1419]: Collecting audit messages is disabled.
Mar 6 02:53:31.118339 kernel: loop: module loaded
Mar 6 02:53:31.118345 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 02:53:31.118353 systemd-journald[1419]: Journal started
Mar 6 02:53:31.118370 systemd-journald[1419]: Runtime Journal (/run/log/journal/1db20421cac449329e6217079c7d63a3) is 8M, max 78.3M, 70.3M free.
Mar 6 02:53:30.324158 systemd[1]: Queued start job for default target multi-user.target.
Mar 6 02:53:30.331269 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 6 02:53:30.331681 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 6 02:53:30.331967 systemd[1]: systemd-journald.service: Consumed 2.922s CPU time.
Mar 6 02:53:31.138813 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 6 02:53:31.138878 kernel: ACPI: bus type drm_connector registered
Mar 6 02:53:31.169322 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 6 02:53:31.185868 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 6 02:53:31.200474 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 02:53:31.205742 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 6 02:53:31.212627 systemd[1]: Stopped verity-setup.service.
Mar 6 02:53:31.224817 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 02:53:31.225504 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 6 02:53:31.230534 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 6 02:53:31.235632 systemd[1]: Mounted media.mount - External Media Directory.
Mar 6 02:53:31.240059 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 6 02:53:31.246275 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 6 02:53:31.252783 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 6 02:53:31.257620 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 6 02:53:31.263542 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 02:53:31.269725 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 6 02:53:31.269889 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 6 02:53:31.275953 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 6 02:53:31.276078 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 6 02:53:31.281625 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 6 02:53:31.281764 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 6 02:53:31.286817 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 6 02:53:31.286938 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 6 02:53:31.293278 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 6 02:53:31.293407 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 6 02:53:31.298723 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 6 02:53:31.298872 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 6 02:53:31.304314 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 02:53:31.311501 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 02:53:31.318570 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 6 02:53:31.325207 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 6 02:53:31.331419 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 02:53:31.346055 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 6 02:53:31.352368 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 6 02:53:31.367856 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 6 02:53:31.373009 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 6 02:53:31.373044 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 02:53:31.378359 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 6 02:53:31.388643 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 6 02:53:31.393906 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 02:53:31.395378 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 6 02:53:31.402062 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 6 02:53:31.408309 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 6 02:53:31.409876 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 6 02:53:31.417572 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 6 02:53:31.420877 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 02:53:31.427878 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 6 02:53:31.437885 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 02:53:31.444863 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 6 02:53:31.459586 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 6 02:53:31.466108 kernel: loop0: detected capacity change from 0 to 197488
Mar 6 02:53:31.466937 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 6 02:53:31.472559 systemd-journald[1419]: Time spent on flushing to /var/log/journal/1db20421cac449329e6217079c7d63a3 is 24.433ms for 939 entries.
Mar 6 02:53:31.472559 systemd-journald[1419]: System Journal (/var/log/journal/1db20421cac449329e6217079c7d63a3) is 8M, max 2.6G, 2.6G free.
Mar 6 02:53:31.532343 systemd-journald[1419]: Received client request to flush runtime journal.
Mar 6 02:53:31.479724 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 6 02:53:31.489118 systemd-tmpfiles[1456]: ACLs are not supported, ignoring.
Mar 6 02:53:31.489126 systemd-tmpfiles[1456]: ACLs are not supported, ignoring.
Mar 6 02:53:31.490229 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 6 02:53:31.497571 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 02:53:31.511226 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 6 02:53:31.535783 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 6 02:53:31.544575 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 02:53:31.567569 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 6 02:53:31.622513 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 6 02:53:31.623473 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 6 02:53:31.645763 kernel: loop1: detected capacity change from 0 to 100632
Mar 6 02:53:31.681372 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 6 02:53:31.690056 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 02:53:31.720523 systemd-tmpfiles[1476]: ACLs are not supported, ignoring.
Mar 6 02:53:31.720835 systemd-tmpfiles[1476]: ACLs are not supported, ignoring.
Mar 6 02:53:31.723415 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 02:53:31.947049 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 6 02:53:31.956990 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 02:53:31.984104 systemd-udevd[1480]: Using default interface naming scheme 'v255'.
Mar 6 02:53:32.147669 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 02:53:32.159915 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 6 02:53:32.215362 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 6 02:53:32.219038 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 6 02:53:32.303876 kernel: loop2: detected capacity change from 0 to 27936
Mar 6 02:53:32.310144 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 6 02:53:32.331722 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 6 02:53:32.371765 kernel: mousedev: PS/2 mouse device common for all mice
Mar 6 02:53:32.394761 kernel: hv_vmbus: registering driver hv_balloon
Mar 6 02:53:32.404416 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 6 02:53:32.404522 kernel: hv_balloon: Memory hot add disabled on ARM64
Mar 6 02:53:32.434379 systemd-networkd[1493]: lo: Link UP
Mar 6 02:53:32.435763 kernel: hv_vmbus: registering driver hyperv_fb
Mar 6 02:53:32.435569 systemd-networkd[1493]: lo: Gained carrier
Mar 6 02:53:32.436913 systemd-networkd[1493]: Enumeration completed
Mar 6 02:53:32.437008 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 6 02:53:32.438159 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 02:53:32.438241 systemd-networkd[1493]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 6 02:53:32.450691 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 6 02:53:32.450778 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 6 02:53:32.455960 kernel: Console: switching to colour dummy device 80x25
Mar 6 02:53:32.456221 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 6 02:53:32.465750 kernel: Console: switching to colour frame buffer device 128x48
Mar 6 02:53:32.473498 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 6 02:53:32.540754 kernel: mlx5_core 7467:00:02.0 enP29799s1: Link up
Mar 6 02:53:32.533552 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 02:53:32.567754 kernel: hv_netvsc 000d3af6-0c55-000d-3af6-0c55000d3af6 eth0: Data path switched to VF: enP29799s1
Mar 6 02:53:32.561049 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 02:53:32.561211 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 02:53:32.564956 systemd-networkd[1493]: enP29799s1: Link UP
Mar 6 02:53:32.565085 systemd-networkd[1493]: eth0: Link UP
Mar 6 02:53:32.565087 systemd-networkd[1493]: eth0: Gained carrier
Mar 6 02:53:32.565107 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 02:53:32.575450 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 02:53:32.584098 systemd-networkd[1493]: enP29799s1: Gained carrier
Mar 6 02:53:32.587000 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 6 02:53:32.596893 systemd-networkd[1493]: eth0: DHCPv4 address 10.200.20.16/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 6 02:53:32.608310 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 02:53:32.608903 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 02:53:32.619667 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 6 02:53:32.628370 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 02:53:32.647895 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 6 02:53:32.656883 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 6 02:53:32.659749 kernel: MACsec IEEE 802.1AE
Mar 6 02:53:32.729922 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 6 02:53:32.763772 kernel: loop3: detected capacity change from 0 to 119840
Mar 6 02:53:33.031382 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 02:53:33.114752 kernel: loop4: detected capacity change from 0 to 197488
Mar 6 02:53:33.137769 kernel: loop5: detected capacity change from 0 to 100632
Mar 6 02:53:33.156756 kernel: loop6: detected capacity change from 0 to 27936
Mar 6 02:53:33.179762 kernel: loop7: detected capacity change from 0 to 119840
Mar 6 02:53:33.189722 (sd-merge)[1630]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 6 02:53:33.190146 (sd-merge)[1630]: Merged extensions into '/usr'.
Mar 6 02:53:33.192853 systemd[1]: Reload requested from client PID 1455 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 6 02:53:33.193077 systemd[1]: Reloading...
Mar 6 02:53:33.251199 zram_generator::config[1659]: No configuration found.
Mar 6 02:53:33.415124 systemd[1]: Reloading finished in 221 ms.
Mar 6 02:53:33.435227 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 6 02:53:33.447819 systemd[1]: Starting ensure-sysext.service...
Mar 6 02:53:33.458889 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 02:53:33.472218 systemd-tmpfiles[1715]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 6 02:53:33.472243 systemd-tmpfiles[1715]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 6 02:53:33.472396 systemd-tmpfiles[1715]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 6 02:53:33.472525 systemd-tmpfiles[1715]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 6 02:53:33.472973 systemd-tmpfiles[1715]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 6 02:53:33.473117 systemd-tmpfiles[1715]: ACLs are not supported, ignoring.
Mar 6 02:53:33.473149 systemd-tmpfiles[1715]: ACLs are not supported, ignoring.
Mar 6 02:53:33.473528 systemd[1]: Reload requested from client PID 1714 ('systemctl') (unit ensure-sysext.service)...
Mar 6 02:53:33.473634 systemd[1]: Reloading...
Mar 6 02:53:33.475597 systemd-tmpfiles[1715]: Detected autofs mount point /boot during canonicalization of boot.
Mar 6 02:53:33.475608 systemd-tmpfiles[1715]: Skipping /boot
Mar 6 02:53:33.482641 systemd-tmpfiles[1715]: Detected autofs mount point /boot during canonicalization of boot.
Mar 6 02:53:33.482656 systemd-tmpfiles[1715]: Skipping /boot
Mar 6 02:53:33.533759 zram_generator::config[1746]: No configuration found.
Mar 6 02:53:33.684655 systemd[1]: Reloading finished in 210 ms.
Mar 6 02:53:33.713861 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 02:53:33.727204 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 6 02:53:33.740994 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 6 02:53:33.748448 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 6 02:53:33.758853 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 6 02:53:33.769105 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 6 02:53:33.778359 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 02:53:33.782477 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 6 02:53:33.791415 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 6 02:53:33.801258 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 6 02:53:33.806802 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 02:53:33.807294 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 6 02:53:33.809277 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 6 02:53:33.810777 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 6 02:53:33.817235 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 6 02:53:33.818931 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 6 02:53:33.826845 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 6 02:53:33.827012 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 6 02:53:33.844575 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 6 02:53:33.854303 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 02:53:33.855809 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 6 02:53:33.864092 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 6 02:53:33.875871 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 6 02:53:33.883217 systemd-resolved[1807]: Positive Trust Anchors:
Mar 6 02:53:33.883510 systemd-resolved[1807]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 6 02:53:33.883574 systemd-resolved[1807]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 6 02:53:33.888728 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 6 02:53:33.894910 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 02:53:33.895031 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 6 02:53:33.895139 systemd[1]: Reached target time-set.target - System Time Set.
Mar 6 02:53:33.903862 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 6 02:53:33.911907 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 6 02:53:33.912044 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 6 02:53:33.921027 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 6 02:53:33.921332 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 6 02:53:33.921681 systemd-resolved[1807]: Using system hostname 'ci-4459.2.3-n-b98e3238ca'.
Mar 6 02:53:33.927839 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 6 02:53:33.934278 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 6 02:53:33.934557 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 6 02:53:33.941259 augenrules[1839]: No rules
Mar 6 02:53:33.942334 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 6 02:53:33.943775 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 6 02:53:33.949712 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 6 02:53:33.950151 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 6 02:53:33.959255 systemd[1]: Finished ensure-sysext.service.
Mar 6 02:53:33.968504 systemd[1]: Reached target network.target - Network.
Mar 6 02:53:33.974090 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 6 02:53:33.980851 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 6 02:53:33.981019 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 6 02:53:34.291846 systemd-networkd[1493]: eth0: Gained IPv6LL
Mar 6 02:53:34.293968 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 6 02:53:34.300007 systemd[1]: Reached target network-online.target - Network is Online.
Mar 6 02:53:34.387382 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 6 02:53:34.393372 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 6 02:53:37.165872 ldconfig[1449]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 6 02:53:37.181340 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 6 02:53:37.188218 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 6 02:53:37.228112 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 6 02:53:37.233417 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 6 02:53:37.238597 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 6 02:53:37.243981 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 6 02:53:37.250128 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 6 02:53:37.255500 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 6 02:53:37.261959 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 6 02:53:37.268039 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 6 02:53:37.268068 systemd[1]: Reached target paths.target - Path Units.
Mar 6 02:53:37.272464 systemd[1]: Reached target timers.target - Timer Units.
Mar 6 02:53:37.278063 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 6 02:53:37.284400 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 6 02:53:37.290312 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 6 02:53:37.295919 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 6 02:53:37.301325 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 6 02:53:37.308212 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 6 02:53:37.313713 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 6 02:53:37.320406 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 6 02:53:37.325306 systemd[1]: Reached target sockets.target - Socket Units.
Mar 6 02:53:37.329257 systemd[1]: Reached target basic.target - Basic System.
Mar 6 02:53:37.334006 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 6 02:53:37.334033 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 6 02:53:37.336328 systemd[1]: Starting chronyd.service - NTP client/server...
Mar 6 02:53:37.349861 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 6 02:53:37.361997 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 6 02:53:37.370391 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 6 02:53:37.381937 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 6 02:53:37.388315 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 6 02:53:37.394880 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 6 02:53:37.399923 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 6 02:53:37.401048 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Mar 6 02:53:37.405842 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Mar 6 02:53:37.406937 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 02:53:37.414149 jq[1866]: false
Mar 6 02:53:37.416922 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 6 02:53:37.423056 KVP[1868]: KVP starting; pid is:1868
Mar 6 02:53:37.423936 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 6 02:53:37.432749 kernel: hv_utils: KVP IC version 4.0
Mar 6 02:53:37.432800 chronyd[1858]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Mar 6 02:53:37.433066 KVP[1868]: KVP LIC Version: 3.1
Mar 6 02:53:37.435576 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 6 02:53:37.442064 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 6 02:53:37.448714 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 6 02:53:37.459199 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 6 02:53:37.459935 extend-filesystems[1867]: Found /dev/sda6
Mar 6 02:53:37.468802 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 6 02:53:37.469247 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 6 02:53:37.470930 systemd[1]: Starting update-engine.service - Update Engine...
Mar 6 02:53:37.482808 extend-filesystems[1867]: Found /dev/sda9
Mar 6 02:53:37.481940 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 6 02:53:37.502037 extend-filesystems[1867]: Checking size of /dev/sda9
Mar 6 02:53:37.486749 chronyd[1858]: Timezone right/UTC failed leap second check, ignoring
Mar 6 02:53:37.497372 systemd[1]: Started chronyd.service - NTP client/server.
Mar 6 02:53:37.486930 chronyd[1858]: Loaded seccomp filter (level 2)
Mar 6 02:53:37.512301 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 6 02:53:37.518485 jq[1887]: true
Mar 6 02:53:37.523095 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 6 02:53:37.523986 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 6 02:53:37.524943 systemd[1]: motdgen.service: Deactivated successfully.
Mar 6 02:53:37.529127 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 6 02:53:37.547847 extend-filesystems[1867]: Old size kept for /dev/sda9
Mar 6 02:53:37.546811 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 6 02:53:37.554050 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 6 02:53:37.564537 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 6 02:53:37.564708 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 6 02:53:37.579077 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 6 02:53:37.579489 update_engine[1885]: I20260306 02:53:37.579071 1885 main.cc:92] Flatcar Update Engine starting
Mar 6 02:53:37.613209 (ntainerd)[1911]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 6 02:53:37.613645 jq[1908]: true
Mar 6 02:53:37.614894 tar[1902]: linux-arm64/LICENSE
Mar 6 02:53:37.615405 tar[1902]: linux-arm64/helm
Mar 6 02:53:37.640570 systemd-logind[1879]: New seat seat0.
Mar 6 02:53:37.641325 systemd-logind[1879]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Mar 6 02:53:37.641496 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 6 02:53:37.741726 dbus-daemon[1861]: [system] SELinux support is enabled
Mar 6 02:53:37.741902 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 6 02:53:37.750029 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 6 02:53:37.750066 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 6 02:53:37.755124 dbus-daemon[1861]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 6 02:53:37.756834 update_engine[1885]: I20260306 02:53:37.756587 1885 update_check_scheduler.cc:74] Next update check in 10m47s
Mar 6 02:53:37.756519 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 6 02:53:37.756534 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 6 02:53:37.766085 systemd[1]: Started update-engine.service - Update Engine.
Mar 6 02:53:37.774956 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 6 02:53:37.809773 coreos-metadata[1860]: Mar 06 02:53:37.809 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 6 02:53:37.818446 coreos-metadata[1860]: Mar 06 02:53:37.818 INFO Fetch successful
Mar 6 02:53:37.818446 coreos-metadata[1860]: Mar 06 02:53:37.818 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Mar 6 02:53:37.822636 coreos-metadata[1860]: Mar 06 02:53:37.822 INFO Fetch successful
Mar 6 02:53:37.829856 coreos-metadata[1860]: Mar 06 02:53:37.823 INFO Fetching http://168.63.129.16/machine/3ad155c9-b7c8-4287-830b-caa9989602ea/7cdc5411%2Db4b8%2D4396%2D80b3%2D7f16405de72d.%5Fci%2D4459.2.3%2Dn%2Db98e3238ca?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Mar 6 02:53:37.829856 coreos-metadata[1860]: Mar 06 02:53:37.825 INFO Fetch successful
Mar 6 02:53:37.829856 coreos-metadata[1860]: Mar 06 02:53:37.825 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Mar 6 02:53:37.835226 coreos-metadata[1860]: Mar 06 02:53:37.835 INFO Fetch successful
Mar 6 02:53:37.848427 bash[1989]: Updated "/home/core/.ssh/authorized_keys"
Mar 6 02:53:37.854914 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 6 02:53:37.867931 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 6 02:53:37.880618 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 6 02:53:37.891366 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 6 02:53:37.895838 sshd_keygen[1886]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 6 02:53:37.932764 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 6 02:53:37.945783 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 6 02:53:37.957927 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Mar 6 02:53:37.979027 systemd[1]: issuegen.service: Deactivated successfully.
Mar 6 02:53:37.979200 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 6 02:53:37.994135 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 6 02:53:38.004680 locksmithd[2004]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 6 02:53:38.005705 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Mar 6 02:53:38.015071 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 6 02:53:38.025066 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 6 02:53:38.034983 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Mar 6 02:53:38.041971 systemd[1]: Reached target getty.target - Login Prompts.
Mar 6 02:53:38.122212 tar[1902]: linux-arm64/README.md
Mar 6 02:53:38.142832 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 6 02:53:38.300807 containerd[1911]: time="2026-03-06T02:53:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 6 02:53:38.302979 containerd[1911]: time="2026-03-06T02:53:38.302718008Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 6 02:53:38.309606 containerd[1911]: time="2026-03-06T02:53:38.309558024Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.064µs"
Mar 6 02:53:38.309606 containerd[1911]: time="2026-03-06T02:53:38.309594768Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 6 02:53:38.309606 containerd[1911]: time="2026-03-06T02:53:38.309609280Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 6 02:53:38.309793 containerd[1911]: time="2026-03-06T02:53:38.309774712Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 6 02:53:38.309793 containerd[1911]: time="2026-03-06T02:53:38.309791192Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 6 02:53:38.309874 containerd[1911]: time="2026-03-06T02:53:38.309809640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 6 02:53:38.309874 containerd[1911]: time="2026-03-06T02:53:38.309867712Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 6 02:53:38.309900 containerd[1911]: time="2026-03-06T02:53:38.309876232Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 6 02:53:38.310103 containerd[1911]: time="2026-03-06T02:53:38.310075440Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 6 02:53:38.310103 containerd[1911]: time="2026-03-06T02:53:38.310090208Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 6 02:53:38.310103 containerd[1911]: time="2026-03-06T02:53:38.310097848Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 6 02:53:38.310103 containerd[1911]: time="2026-03-06T02:53:38.310103952Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 6 02:53:38.310191 containerd[1911]: time="2026-03-06T02:53:38.310167200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 6 02:53:38.310357 containerd[1911]: time="2026-03-06T02:53:38.310330544Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 6 02:53:38.310387 containerd[1911]: time="2026-03-06T02:53:38.310361088Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 6 02:53:38.310387 containerd[1911]: time="2026-03-06T02:53:38.310368384Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 6 02:53:38.310417 containerd[1911]: time="2026-03-06T02:53:38.310410656Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 6 02:53:38.310748 containerd[1911]: time="2026-03-06T02:53:38.310623816Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 6 02:53:38.310748 containerd[1911]: time="2026-03-06T02:53:38.310699120Z" level=info msg="metadata content store policy set" policy=shared
Mar 6 02:53:38.328467 containerd[1911]: time="2026-03-06T02:53:38.328410744Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 6 02:53:38.328574 containerd[1911]: time="2026-03-06T02:53:38.328495568Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 6 02:53:38.328574 containerd[1911]: time="2026-03-06T02:53:38.328508880Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 6 02:53:38.328574 containerd[1911]: time="2026-03-06T02:53:38.328517752Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 6 02:53:38.328574 containerd[1911]: time="2026-03-06T02:53:38.328526064Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 6 02:53:38.328574 containerd[1911]: time="2026-03-06T02:53:38.328533584Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 6 02:53:38.328574 containerd[1911]: time="2026-03-06T02:53:38.328547984Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 6 02:53:38.328574 containerd[1911]: time="2026-03-06T02:53:38.328555832Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 6 02:53:38.328574 containerd[1911]: time="2026-03-06T02:53:38.328565080Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 6 02:53:38.328574 containerd[1911]: time="2026-03-06T02:53:38.328571536Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 6 02:53:38.328574 containerd[1911]: time="2026-03-06T02:53:38.328577816Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 6 02:53:38.328721 containerd[1911]: time="2026-03-06T02:53:38.328586344Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 6 02:53:38.328993 containerd[1911]: time="2026-03-06T02:53:38.328763120Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 6 02:53:38.328993 containerd[1911]: time="2026-03-06T02:53:38.328781088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 6 02:53:38.328993 containerd[1911]: time="2026-03-06T02:53:38.328791776Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 6 02:53:38.328993 containerd[1911]: time="2026-03-06T02:53:38.328798712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 6 02:53:38.328993 containerd[1911]: time="2026-03-06T02:53:38.328805344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 6 02:53:38.328993 containerd[1911]: time="2026-03-06T02:53:38.328812728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 6 02:53:38.328993 containerd[1911]: time="2026-03-06T02:53:38.328820328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 6 02:53:38.328993 containerd[1911]: time="2026-03-06T02:53:38.328826520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 6 02:53:38.328993 containerd[1911]: time="2026-03-06T02:53:38.328834608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 6 02:53:38.328993 containerd[1911]: time="2026-03-06T02:53:38.328842064Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 6 02:53:38.328993 containerd[1911]: time="2026-03-06T02:53:38.328848800Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 6 02:53:38.328993 containerd[1911]: time="2026-03-06T02:53:38.328897208Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 6 02:53:38.328993 containerd[1911]: time="2026-03-06T02:53:38.328908336Z" level=info msg="Start snapshots syncer"
Mar 6 02:53:38.328993 containerd[1911]: time="2026-03-06T02:53:38.328925952Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 6 02:53:38.329209 containerd[1911]: time="2026-03-06T02:53:38.329123160Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 6 02:53:38.329209 containerd[1911]: time="2026-03-06T02:53:38.329164736Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 6 02:53:38.329814 containerd[1911]: time="2026-03-06T02:53:38.329197648Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 6 02:53:38.329814 containerd[1911]: time="2026-03-06T02:53:38.329317032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 6 02:53:38.329814 containerd[1911]: time="2026-03-06T02:53:38.329334936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 6 02:53:38.329814 containerd[1911]: time="2026-03-06T02:53:38.329344080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 6 02:53:38.329814 containerd[1911]: time="2026-03-06T02:53:38.329350560Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 6 02:53:38.329814 containerd[1911]: time="2026-03-06T02:53:38.329357928Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 6 02:53:38.329814 containerd[1911]: time="2026-03-06T02:53:38.329364544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 6 02:53:38.329814 containerd[1911]: time="2026-03-06T02:53:38.329373080Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 6 02:53:38.329814 containerd[1911]: time="2026-03-06T02:53:38.329394968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 6 02:53:38.329814 containerd[1911]: time="2026-03-06T02:53:38.329405168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 6 02:53:38.329814 containerd[1911]: time="2026-03-06T02:53:38.329411632Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 6 02:53:38.329814 containerd[1911]: time="2026-03-06T02:53:38.329431872Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 6 02:53:38.329814 containerd[1911]: time="2026-03-06T02:53:38.329442640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 6 02:53:38.329814 containerd[1911]: time="2026-03-06T02:53:38.329448368Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 6 02:53:38.329976 containerd[1911]: time="2026-03-06T02:53:38.329454792Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 6 02:53:38.329976 containerd[1911]: time="2026-03-06T02:53:38.329459984Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 6 02:53:38.329976 containerd[1911]: time="2026-03-06T02:53:38.329465120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 6 02:53:38.329976 containerd[1911]: time="2026-03-06T02:53:38.329471976Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 6 02:53:38.329976 containerd[1911]: time="2026-03-06T02:53:38.329488056Z" level=info msg="runtime interface created"
Mar 6 02:53:38.329976 containerd[1911]: time="2026-03-06T02:53:38.329492624Z" level=info msg="created NRI interface"
Mar 6 02:53:38.329976 containerd[1911]: time="2026-03-06T02:53:38.329500384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 6 02:53:38.329976 containerd[1911]: time="2026-03-06T02:53:38.329508968Z" level=info msg="Connect containerd service"
Mar 6 02:53:38.329976 containerd[1911]: time="2026-03-06T02:53:38.329522648Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 6 02:53:38.330649 containerd[1911]: time="2026-03-06T02:53:38.330407720Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 6 02:53:38.462162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 02:53:38.474285 (kubelet)[2062]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 6 02:53:38.682689 containerd[1911]: time="2026-03-06T02:53:38.682576848Z" level=info msg="Start subscribing containerd event"
Mar 6 02:53:38.682689 containerd[1911]: time="2026-03-06T02:53:38.682643856Z" level=info msg="Start recovering state"
Mar 6 02:53:38.682996 containerd[1911]: time="2026-03-06T02:53:38.682725048Z" level=info msg="Start event monitor"
Mar 6 02:53:38.682996 containerd[1911]: time="2026-03-06T02:53:38.682748448Z" level=info msg="Start cni network conf syncer for default"
Mar 6 02:53:38.682996 containerd[1911]: time="2026-03-06T02:53:38.682755136Z" level=info msg="Start streaming server"
Mar 6 02:53:38.682996 containerd[1911]: time="2026-03-06T02:53:38.682762904Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 6 02:53:38.682996 containerd[1911]: time="2026-03-06T02:53:38.682768752Z" level=info msg="runtime interface starting up..."
Mar 6 02:53:38.682996 containerd[1911]: time="2026-03-06T02:53:38.682773056Z" level=info msg="starting plugins..."
Mar 6 02:53:38.682996 containerd[1911]: time="2026-03-06T02:53:38.682784832Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 6 02:53:38.683268 containerd[1911]: time="2026-03-06T02:53:38.683198712Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 6 02:53:38.683336 containerd[1911]: time="2026-03-06T02:53:38.683272496Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 6 02:53:38.683439 systemd[1]: Started containerd.service - containerd container runtime.
Mar 6 02:53:38.687917 containerd[1911]: time="2026-03-06T02:53:38.687871736Z" level=info msg="containerd successfully booted in 0.387458s"
Mar 6 02:53:38.688558 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 6 02:53:38.699826 systemd[1]: Startup finished in 1.654s (kernel) + 12.599s (initrd) + 11.496s (userspace) = 25.751s.
Mar 6 02:53:38.842909 kubelet[2062]: E0306 02:53:38.842861 2062 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 6 02:53:38.848001 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 6 02:53:38.848103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 6 02:53:38.848403 systemd[1]: kubelet.service: Consumed 511ms CPU time, 246.7M memory peak.
Mar 6 02:53:39.004447 login[2038]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:53:39.005709 login[2039]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:53:39.010286 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 6 02:53:39.011170 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 6 02:53:39.016707 systemd-logind[1879]: New session 2 of user core.
Mar 6 02:53:39.019568 systemd-logind[1879]: New session 1 of user core.
Mar 6 02:53:39.044413 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 6 02:53:39.046965 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 6 02:53:39.064724 (systemd)[2079]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 6 02:53:39.066798 systemd-logind[1879]: New session c1 of user core. Mar 6 02:53:39.187060 systemd[2079]: Queued start job for default target default.target. Mar 6 02:53:39.194604 systemd[2079]: Created slice app.slice - User Application Slice. Mar 6 02:53:39.194634 systemd[2079]: Reached target paths.target - Paths. Mar 6 02:53:39.194668 systemd[2079]: Reached target timers.target - Timers. Mar 6 02:53:39.195794 systemd[2079]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 6 02:53:39.205340 systemd[2079]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 6 02:53:39.205474 systemd[2079]: Reached target sockets.target - Sockets. Mar 6 02:53:39.205655 systemd[2079]: Reached target basic.target - Basic System. Mar 6 02:53:39.205820 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 6 02:53:39.205916 systemd[2079]: Reached target default.target - Main User Target. Mar 6 02:53:39.205943 systemd[2079]: Startup finished in 134ms. Mar 6 02:53:39.212877 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 6 02:53:39.214179 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 6 02:53:39.632822 waagent[2036]: 2026-03-06T02:53:39.632720Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Mar 6 02:53:39.637228 waagent[2036]: 2026-03-06T02:53:39.637176Z INFO Daemon Daemon OS: flatcar 4459.2.3 Mar 6 02:53:39.640526 waagent[2036]: 2026-03-06T02:53:39.640493Z INFO Daemon Daemon Python: 3.11.13 Mar 6 02:53:39.643981 waagent[2036]: 2026-03-06T02:53:39.643927Z INFO Daemon Daemon Run daemon Mar 6 02:53:39.646880 waagent[2036]: 2026-03-06T02:53:39.646841Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.3' Mar 6 02:53:39.653546 waagent[2036]: 2026-03-06T02:53:39.653512Z INFO Daemon Daemon Using waagent for provisioning Mar 6 02:53:39.657407 waagent[2036]: 2026-03-06T02:53:39.657372Z INFO Daemon Daemon Activate resource disk Mar 6 02:53:39.660743 waagent[2036]: 2026-03-06T02:53:39.660700Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 6 02:53:39.668792 waagent[2036]: 2026-03-06T02:53:39.668722Z INFO Daemon Daemon Found device: None Mar 6 02:53:39.672107 waagent[2036]: 2026-03-06T02:53:39.672073Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 6 02:53:39.678100 waagent[2036]: 2026-03-06T02:53:39.678067Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 6 02:53:39.686990 waagent[2036]: 2026-03-06T02:53:39.686948Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 6 02:53:39.691134 waagent[2036]: 2026-03-06T02:53:39.691100Z INFO Daemon Daemon Running default provisioning handler Mar 6 02:53:39.700413 waagent[2036]: 2026-03-06T02:53:39.700364Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Mar 6 02:53:39.711045 waagent[2036]: 2026-03-06T02:53:39.710998Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 6 02:53:39.719135 waagent[2036]: 2026-03-06T02:53:39.719094Z INFO Daemon Daemon cloud-init is enabled: False Mar 6 02:53:39.723241 waagent[2036]: 2026-03-06T02:53:39.723214Z INFO Daemon Daemon Copying ovf-env.xml Mar 6 02:53:39.793887 waagent[2036]: 2026-03-06T02:53:39.793802Z INFO Daemon Daemon Successfully mounted dvd Mar 6 02:53:39.835682 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 6 02:53:39.837841 waagent[2036]: 2026-03-06T02:53:39.837777Z INFO Daemon Daemon Detect protocol endpoint Mar 6 02:53:39.841546 waagent[2036]: 2026-03-06T02:53:39.841500Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 6 02:53:39.846213 waagent[2036]: 2026-03-06T02:53:39.846170Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Mar 6 02:53:39.851173 waagent[2036]: 2026-03-06T02:53:39.851133Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 6 02:53:39.855067 waagent[2036]: 2026-03-06T02:53:39.855033Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 6 02:53:39.858932 waagent[2036]: 2026-03-06T02:53:39.858902Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 6 02:53:39.914391 waagent[2036]: 2026-03-06T02:53:39.914300Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 6 02:53:39.919455 waagent[2036]: 2026-03-06T02:53:39.919431Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 6 02:53:39.923527 waagent[2036]: 2026-03-06T02:53:39.923499Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 6 02:53:40.029408 waagent[2036]: 2026-03-06T02:53:40.029329Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 6 02:53:40.034598 waagent[2036]: 2026-03-06T02:53:40.034550Z INFO Daemon Daemon Forcing an update of the goal state. 
Mar 6 02:53:40.042072 waagent[2036]: 2026-03-06T02:53:40.042027Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 6 02:53:40.059762 waagent[2036]: 2026-03-06T02:53:40.059711Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 6 02:53:40.064462 waagent[2036]: 2026-03-06T02:53:40.064427Z INFO Daemon Mar 6 02:53:40.066565 waagent[2036]: 2026-03-06T02:53:40.066533Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: ca169840-86f7-4315-85f2-250637ccf9ef eTag: 17483036713430139833 source: Fabric] Mar 6 02:53:40.074981 waagent[2036]: 2026-03-06T02:53:40.074947Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Mar 6 02:53:40.080060 waagent[2036]: 2026-03-06T02:53:40.080029Z INFO Daemon Mar 6 02:53:40.082108 waagent[2036]: 2026-03-06T02:53:40.082080Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 6 02:53:40.090867 waagent[2036]: 2026-03-06T02:53:40.090839Z INFO Daemon Daemon Downloading artifacts profile blob Mar 6 02:53:40.150312 waagent[2036]: 2026-03-06T02:53:40.150239Z INFO Daemon Downloaded certificate {'thumbprint': '5AD80F66CD7F8602C96BFE43142DC16093E55ABD', 'hasPrivateKey': True} Mar 6 02:53:40.157965 waagent[2036]: 2026-03-06T02:53:40.157919Z INFO Daemon Fetch goal state completed Mar 6 02:53:40.168482 waagent[2036]: 2026-03-06T02:53:40.168408Z INFO Daemon Daemon Starting provisioning Mar 6 02:53:40.172189 waagent[2036]: 2026-03-06T02:53:40.172148Z INFO Daemon Daemon Handle ovf-env.xml. 
Mar 6 02:53:40.175765 waagent[2036]: 2026-03-06T02:53:40.175728Z INFO Daemon Daemon Set hostname [ci-4459.2.3-n-b98e3238ca] Mar 6 02:53:40.182107 waagent[2036]: 2026-03-06T02:53:40.182063Z INFO Daemon Daemon Publish hostname [ci-4459.2.3-n-b98e3238ca] Mar 6 02:53:40.186890 waagent[2036]: 2026-03-06T02:53:40.186849Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 6 02:53:40.191781 waagent[2036]: 2026-03-06T02:53:40.191723Z INFO Daemon Daemon Primary interface is [eth0] Mar 6 02:53:40.201595 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 02:53:40.201602 systemd-networkd[1493]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 6 02:53:40.201633 systemd-networkd[1493]: eth0: DHCP lease lost Mar 6 02:53:40.203209 waagent[2036]: 2026-03-06T02:53:40.203152Z INFO Daemon Daemon Create user account if not exists Mar 6 02:53:40.207526 waagent[2036]: 2026-03-06T02:53:40.207486Z INFO Daemon Daemon User core already exists, skip useradd Mar 6 02:53:40.211913 waagent[2036]: 2026-03-06T02:53:40.211703Z INFO Daemon Daemon Configure sudoer Mar 6 02:53:40.219461 waagent[2036]: 2026-03-06T02:53:40.219403Z INFO Daemon Daemon Configure sshd Mar 6 02:53:40.227371 waagent[2036]: 2026-03-06T02:53:40.227315Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 6 02:53:40.237242 waagent[2036]: 2026-03-06T02:53:40.237182Z INFO Daemon Daemon Deploy ssh public key. 
Mar 6 02:53:40.248800 systemd-networkd[1493]: eth0: DHCPv4 address 10.200.20.16/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 6 02:53:41.364864 waagent[2036]: 2026-03-06T02:53:41.364815Z INFO Daemon Daemon Provisioning complete Mar 6 02:53:41.378943 waagent[2036]: 2026-03-06T02:53:41.378900Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 6 02:53:41.383764 waagent[2036]: 2026-03-06T02:53:41.383713Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Mar 6 02:53:41.390899 waagent[2036]: 2026-03-06T02:53:41.390863Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Mar 6 02:53:41.496786 waagent[2129]: 2026-03-06T02:53:41.495994Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Mar 6 02:53:41.496786 waagent[2129]: 2026-03-06T02:53:41.496132Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.3 Mar 6 02:53:41.496786 waagent[2129]: 2026-03-06T02:53:41.496167Z INFO ExtHandler ExtHandler Python: 3.11.13 Mar 6 02:53:41.496786 waagent[2129]: 2026-03-06T02:53:41.496202Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Mar 6 02:53:41.541168 waagent[2129]: 2026-03-06T02:53:41.541093Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.3; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Mar 6 02:53:41.541340 waagent[2129]: 2026-03-06T02:53:41.541310Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 6 02:53:41.541381 waagent[2129]: 2026-03-06T02:53:41.541363Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 6 02:53:41.547477 waagent[2129]: 2026-03-06T02:53:41.547430Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 6 02:53:41.552974 waagent[2129]: 2026-03-06T02:53:41.552943Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Mar 6 02:53:41.553366 
waagent[2129]: 2026-03-06T02:53:41.553334Z INFO ExtHandler Mar 6 02:53:41.553417 waagent[2129]: 2026-03-06T02:53:41.553399Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 56befbcd-4667-479d-9613-4a2cc18fc85d eTag: 17483036713430139833 source: Fabric] Mar 6 02:53:41.553632 waagent[2129]: 2026-03-06T02:53:41.553607Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Mar 6 02:53:41.554050 waagent[2129]: 2026-03-06T02:53:41.554020Z INFO ExtHandler Mar 6 02:53:41.554089 waagent[2129]: 2026-03-06T02:53:41.554072Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 6 02:53:41.557494 waagent[2129]: 2026-03-06T02:53:41.557467Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 6 02:53:41.611573 waagent[2129]: 2026-03-06T02:53:41.611494Z INFO ExtHandler Downloaded certificate {'thumbprint': '5AD80F66CD7F8602C96BFE43142DC16093E55ABD', 'hasPrivateKey': True} Mar 6 02:53:41.611993 waagent[2129]: 2026-03-06T02:53:41.611959Z INFO ExtHandler Fetch goal state completed Mar 6 02:53:41.624297 waagent[2129]: 2026-03-06T02:53:41.624196Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.4 27 Jan 2026 (Library: OpenSSL 3.4.4 27 Jan 2026) Mar 6 02:53:41.627815 waagent[2129]: 2026-03-06T02:53:41.627771Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2129 Mar 6 02:53:41.627927 waagent[2129]: 2026-03-06T02:53:41.627899Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 6 02:53:41.628172 waagent[2129]: 2026-03-06T02:53:41.628144Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Mar 6 02:53:41.629274 waagent[2129]: 2026-03-06T02:53:41.629236Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.3', '', 'Flatcar Container Linux by Kinvolk'] Mar 6 02:53:41.629585 waagent[2129]: 2026-03-06T02:53:41.629553Z INFO 
ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.3', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Mar 6 02:53:41.629701 waagent[2129]: 2026-03-06T02:53:41.629677Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Mar 6 02:53:41.630142 waagent[2129]: 2026-03-06T02:53:41.630111Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 6 02:53:41.705611 waagent[2129]: 2026-03-06T02:53:41.705568Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 6 02:53:41.705826 waagent[2129]: 2026-03-06T02:53:41.705797Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 6 02:53:41.710520 waagent[2129]: 2026-03-06T02:53:41.710488Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 6 02:53:41.715523 systemd[1]: Reload requested from client PID 2144 ('systemctl') (unit waagent.service)... Mar 6 02:53:41.715774 systemd[1]: Reloading... Mar 6 02:53:41.796454 zram_generator::config[2183]: No configuration found. Mar 6 02:53:41.951524 systemd[1]: Reloading finished in 235 ms. Mar 6 02:53:41.973242 waagent[2129]: 2026-03-06T02:53:41.973169Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 6 02:53:41.973481 waagent[2129]: 2026-03-06T02:53:41.973449Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 6 02:53:42.549191 waagent[2129]: 2026-03-06T02:53:42.548377Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 6 02:53:42.549191 waagent[2129]: 2026-03-06T02:53:42.548689Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. 
cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Mar 6 02:53:42.549517 waagent[2129]: 2026-03-06T02:53:42.549399Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 6 02:53:42.549517 waagent[2129]: 2026-03-06T02:53:42.549464Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 6 02:53:42.549644 waagent[2129]: 2026-03-06T02:53:42.549611Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 6 02:53:42.549756 waagent[2129]: 2026-03-06T02:53:42.549692Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 6 02:53:42.549877 waagent[2129]: 2026-03-06T02:53:42.549845Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 6 02:53:42.549877 waagent[2129]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 6 02:53:42.549877 waagent[2129]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 6 02:53:42.549877 waagent[2129]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 6 02:53:42.549877 waagent[2129]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 6 02:53:42.549877 waagent[2129]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 6 02:53:42.549877 waagent[2129]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 6 02:53:42.550366 waagent[2129]: 2026-03-06T02:53:42.550334Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 6 02:53:42.550513 waagent[2129]: 2026-03-06T02:53:42.550488Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 6 02:53:42.550854 waagent[2129]: 2026-03-06T02:53:42.550814Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 6 02:53:42.551017 waagent[2129]: 2026-03-06T02:53:42.550982Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Mar 6 02:53:42.551114 waagent[2129]: 2026-03-06T02:53:42.551090Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 6 02:53:42.551406 waagent[2129]: 2026-03-06T02:53:42.551371Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 6 02:53:42.551565 waagent[2129]: 2026-03-06T02:53:42.551532Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Mar 6 02:53:42.551624 waagent[2129]: 2026-03-06T02:53:42.551601Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 6 02:53:42.552034 waagent[2129]: 2026-03-06T02:53:42.551995Z INFO EnvHandler ExtHandler Configure routes Mar 6 02:53:42.552543 waagent[2129]: 2026-03-06T02:53:42.552510Z INFO EnvHandler ExtHandler Gateway:None Mar 6 02:53:42.553247 waagent[2129]: 2026-03-06T02:53:42.553211Z INFO EnvHandler ExtHandler Routes:None Mar 6 02:53:42.558514 waagent[2129]: 2026-03-06T02:53:42.558481Z INFO ExtHandler ExtHandler Mar 6 02:53:42.558651 waagent[2129]: 2026-03-06T02:53:42.558624Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: d027de4d-d56a-433e-8377-79ddd8a55d34 correlation a9502453-4e0a-42c2-ae7c-e74e1ca18830 created: 2026-03-06T02:52:42.837776Z] Mar 6 02:53:42.559043 waagent[2129]: 2026-03-06T02:53:42.559008Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 6 02:53:42.559532 waagent[2129]: 2026-03-06T02:53:42.559499Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Mar 6 02:53:42.581995 waagent[2129]: 2026-03-06T02:53:42.581950Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Mar 6 02:53:42.581995 waagent[2129]: Try `iptables -h' or 'iptables --help' for more information.) Mar 6 02:53:42.582550 waagent[2129]: 2026-03-06T02:53:42.582470Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: CCB3B1A2-7D0E-4E91-8EF5-401CE9D70939;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Mar 6 02:53:42.586130 waagent[2129]: 2026-03-06T02:53:42.586078Z INFO MonitorHandler ExtHandler Network interfaces: Mar 6 02:53:42.586130 waagent[2129]: Executing ['ip', '-a', '-o', 'link']: Mar 6 02:53:42.586130 waagent[2129]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 6 02:53:42.586130 waagent[2129]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f6:0c:55 brd ff:ff:ff:ff:ff:ff Mar 6 02:53:42.586130 waagent[2129]: 3: enP29799s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f6:0c:55 brd ff:ff:ff:ff:ff:ff\ altname enP29799p0s2 Mar 6 02:53:42.586130 waagent[2129]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 6 02:53:42.586130 waagent[2129]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 6 02:53:42.586130 waagent[2129]: 2: eth0 inet 10.200.20.16/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 6 02:53:42.586130 waagent[2129]: Executing ['ip', '-6', '-a', '-o', 
'address']: Mar 6 02:53:42.586130 waagent[2129]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 6 02:53:42.586130 waagent[2129]: 2: eth0 inet6 fe80::20d:3aff:fef6:c55/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 6 02:53:43.174806 waagent[2129]: 2026-03-06T02:53:43.174357Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Mar 6 02:53:43.174806 waagent[2129]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 6 02:53:43.174806 waagent[2129]: pkts bytes target prot opt in out source destination Mar 6 02:53:43.174806 waagent[2129]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 6 02:53:43.174806 waagent[2129]: pkts bytes target prot opt in out source destination Mar 6 02:53:43.174806 waagent[2129]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 6 02:53:43.174806 waagent[2129]: pkts bytes target prot opt in out source destination Mar 6 02:53:43.174806 waagent[2129]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 6 02:53:43.174806 waagent[2129]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 6 02:53:43.174806 waagent[2129]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 6 02:53:43.177182 waagent[2129]: 2026-03-06T02:53:43.177130Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 6 02:53:43.177182 waagent[2129]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 6 02:53:43.177182 waagent[2129]: pkts bytes target prot opt in out source destination Mar 6 02:53:43.177182 waagent[2129]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 6 02:53:43.177182 waagent[2129]: pkts bytes target prot opt in out source destination Mar 6 02:53:43.177182 waagent[2129]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 6 02:53:43.177182 waagent[2129]: pkts bytes target prot opt in out source destination Mar 6 02:53:43.177182 waagent[2129]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 6 02:53:43.177182 
waagent[2129]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 6 02:53:43.177182 waagent[2129]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 6 02:53:43.177392 waagent[2129]: 2026-03-06T02:53:43.177367Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 6 02:53:49.098796 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 6 02:53:49.100398 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 02:53:49.215211 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 02:53:49.222027 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 02:53:49.311274 kubelet[2278]: E0306 02:53:49.311206 2278 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 02:53:49.314002 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 02:53:49.314121 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 02:53:49.314412 systemd[1]: kubelet.service: Consumed 176ms CPU time, 107.2M memory peak. Mar 6 02:53:59.192819 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 6 02:53:59.194939 systemd[1]: Started sshd@0-10.200.20.16:22-10.200.16.10:55526.service - OpenSSH per-connection server daemon (10.200.16.10:55526). Mar 6 02:53:59.564566 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 6 02:53:59.567910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 6 02:53:59.770625 sshd[2286]: Accepted publickey for core from 10.200.16.10 port 55526 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4 Mar 6 02:53:59.771388 sshd-session[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:53:59.775076 systemd-logind[1879]: New session 3 of user core. Mar 6 02:53:59.783051 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 6 02:53:59.914605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 02:53:59.926133 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 02:53:59.952441 kubelet[2298]: E0306 02:53:59.952391 2298 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 02:53:59.954533 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 02:53:59.954649 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 02:53:59.956812 systemd[1]: kubelet.service: Consumed 105ms CPU time, 106.9M memory peak. Mar 6 02:54:00.088426 systemd[1]: Started sshd@1-10.200.20.16:22-10.200.16.10:49954.service - OpenSSH per-connection server daemon (10.200.16.10:49954). Mar 6 02:54:00.489438 sshd[2308]: Accepted publickey for core from 10.200.16.10 port 49954 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4 Mar 6 02:54:00.490562 sshd-session[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:54:00.494016 systemd-logind[1879]: New session 4 of user core. Mar 6 02:54:00.502106 systemd[1]: Started session-4.scope - Session 4 of User core. 
Mar 6 02:54:00.710197 sshd[2311]: Connection closed by 10.200.16.10 port 49954 Mar 6 02:54:00.710802 sshd-session[2308]: pam_unix(sshd:session): session closed for user core Mar 6 02:54:00.714435 systemd[1]: sshd@1-10.200.20.16:22-10.200.16.10:49954.service: Deactivated successfully. Mar 6 02:54:00.715896 systemd[1]: session-4.scope: Deactivated successfully. Mar 6 02:54:00.717912 systemd-logind[1879]: Session 4 logged out. Waiting for processes to exit. Mar 6 02:54:00.719114 systemd-logind[1879]: Removed session 4. Mar 6 02:54:00.793720 systemd[1]: Started sshd@2-10.200.20.16:22-10.200.16.10:49968.service - OpenSSH per-connection server daemon (10.200.16.10:49968). Mar 6 02:54:01.192657 sshd[2317]: Accepted publickey for core from 10.200.16.10 port 49968 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4 Mar 6 02:54:01.193438 sshd-session[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:54:01.197084 systemd-logind[1879]: New session 5 of user core. Mar 6 02:54:01.204909 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 6 02:54:01.313828 chronyd[1858]: Selected source PHC0 Mar 6 02:54:01.410765 sshd[2320]: Connection closed by 10.200.16.10 port 49968 Mar 6 02:54:01.410815 sshd-session[2317]: pam_unix(sshd:session): session closed for user core Mar 6 02:54:01.415040 systemd[1]: sshd@2-10.200.20.16:22-10.200.16.10:49968.service: Deactivated successfully. Mar 6 02:54:01.417108 systemd[1]: session-5.scope: Deactivated successfully. Mar 6 02:54:01.418522 systemd-logind[1879]: Session 5 logged out. Waiting for processes to exit. Mar 6 02:54:01.420026 systemd-logind[1879]: Removed session 5. Mar 6 02:54:01.497575 systemd[1]: Started sshd@3-10.200.20.16:22-10.200.16.10:49974.service - OpenSSH per-connection server daemon (10.200.16.10:49974). 
Mar 6 02:54:01.897255 sshd[2326]: Accepted publickey for core from 10.200.16.10 port 49974 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4 Mar 6 02:54:01.898059 sshd-session[2326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:54:01.901769 systemd-logind[1879]: New session 6 of user core. Mar 6 02:54:01.908905 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 6 02:54:02.119874 sshd[2329]: Connection closed by 10.200.16.10 port 49974 Mar 6 02:54:02.120443 sshd-session[2326]: pam_unix(sshd:session): session closed for user core Mar 6 02:54:02.123949 systemd[1]: sshd@3-10.200.20.16:22-10.200.16.10:49974.service: Deactivated successfully. Mar 6 02:54:02.125826 systemd[1]: session-6.scope: Deactivated successfully. Mar 6 02:54:02.126686 systemd-logind[1879]: Session 6 logged out. Waiting for processes to exit. Mar 6 02:54:02.128374 systemd-logind[1879]: Removed session 6. Mar 6 02:54:02.202683 systemd[1]: Started sshd@4-10.200.20.16:22-10.200.16.10:49986.service - OpenSSH per-connection server daemon (10.200.16.10:49986). Mar 6 02:54:02.598224 sshd[2335]: Accepted publickey for core from 10.200.16.10 port 49986 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4 Mar 6 02:54:02.599176 sshd-session[2335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:54:02.602936 systemd-logind[1879]: New session 7 of user core. Mar 6 02:54:02.610892 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 6 02:54:02.854927 sudo[2339]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 6 02:54:02.855163 sudo[2339]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 6 02:54:02.865121 sudo[2339]: pam_unix(sudo:session): session closed for user root
Mar 6 02:54:02.937128 sshd[2338]: Connection closed by 10.200.16.10 port 49986
Mar 6 02:54:02.937895 sshd-session[2335]: pam_unix(sshd:session): session closed for user core
Mar 6 02:54:02.941536 systemd[1]: sshd@4-10.200.20.16:22-10.200.16.10:49986.service: Deactivated successfully.
Mar 6 02:54:02.943446 systemd[1]: session-7.scope: Deactivated successfully.
Mar 6 02:54:02.944926 systemd-logind[1879]: Session 7 logged out. Waiting for processes to exit.
Mar 6 02:54:02.946372 systemd-logind[1879]: Removed session 7.
Mar 6 02:54:03.049973 systemd[1]: Started sshd@5-10.200.20.16:22-10.200.16.10:49988.service - OpenSSH per-connection server daemon (10.200.16.10:49988).
Mar 6 02:54:03.583798 sshd[2345]: Accepted publickey for core from 10.200.16.10 port 49988 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4
Mar 6 02:54:03.584964 sshd-session[2345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:54:03.589103 systemd-logind[1879]: New session 8 of user core.
Mar 6 02:54:03.593878 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 6 02:54:03.798483 sudo[2350]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 6 02:54:03.798700 sudo[2350]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 6 02:54:03.805956 sudo[2350]: pam_unix(sudo:session): session closed for user root
Mar 6 02:54:03.809978 sudo[2349]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 6 02:54:03.810193 sudo[2349]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 6 02:54:03.817283 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 6 02:54:03.846572 augenrules[2372]: No rules
Mar 6 02:54:03.848048 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 6 02:54:03.848387 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 6 02:54:03.849743 sudo[2349]: pam_unix(sudo:session): session closed for user root
Mar 6 02:54:03.955069 sshd[2348]: Connection closed by 10.200.16.10 port 49988
Mar 6 02:54:03.955952 sshd-session[2345]: pam_unix(sshd:session): session closed for user core
Mar 6 02:54:03.960034 systemd[1]: sshd@5-10.200.20.16:22-10.200.16.10:49988.service: Deactivated successfully.
Mar 6 02:54:03.961544 systemd[1]: session-8.scope: Deactivated successfully.
Mar 6 02:54:03.962899 systemd-logind[1879]: Session 8 logged out. Waiting for processes to exit.
Mar 6 02:54:03.963661 systemd-logind[1879]: Removed session 8.
Mar 6 02:54:04.015368 systemd[1]: Started sshd@6-10.200.20.16:22-10.200.16.10:49994.service - OpenSSH per-connection server daemon (10.200.16.10:49994).
Mar 6 02:54:04.448950 sshd[2381]: Accepted publickey for core from 10.200.16.10 port 49994 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4
Mar 6 02:54:04.450088 sshd-session[2381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:54:04.454272 systemd-logind[1879]: New session 9 of user core.
Mar 6 02:54:04.459897 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 6 02:54:04.613853 sudo[2385]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 6 02:54:04.614076 sudo[2385]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 6 02:54:05.934074 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 6 02:54:05.950284 (dockerd)[2402]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 6 02:54:07.001574 dockerd[2402]: time="2026-03-06T02:54:07.001333619Z" level=info msg="Starting up"
Mar 6 02:54:07.002757 dockerd[2402]: time="2026-03-06T02:54:07.002501787Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 6 02:54:07.010459 dockerd[2402]: time="2026-03-06T02:54:07.010426051Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 6 02:54:07.077084 dockerd[2402]: time="2026-03-06T02:54:07.077038787Z" level=info msg="Loading containers: start."
Mar 6 02:54:07.113756 kernel: Initializing XFRM netlink socket
Mar 6 02:54:07.475370 systemd-networkd[1493]: docker0: Link UP
Mar 6 02:54:07.494277 dockerd[2402]: time="2026-03-06T02:54:07.494172787Z" level=info msg="Loading containers: done."
Mar 6 02:54:07.506564 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3056839614-merged.mount: Deactivated successfully.
Mar 6 02:54:07.516533 dockerd[2402]: time="2026-03-06T02:54:07.516182035Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 6 02:54:07.516533 dockerd[2402]: time="2026-03-06T02:54:07.516275947Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 6 02:54:07.516533 dockerd[2402]: time="2026-03-06T02:54:07.516375171Z" level=info msg="Initializing buildkit"
Mar 6 02:54:07.568156 dockerd[2402]: time="2026-03-06T02:54:07.568112547Z" level=info msg="Completed buildkit initialization"
Mar 6 02:54:07.573035 dockerd[2402]: time="2026-03-06T02:54:07.572995843Z" level=info msg="Daemon has completed initialization"
Mar 6 02:54:07.573627 dockerd[2402]: time="2026-03-06T02:54:07.573269731Z" level=info msg="API listen on /run/docker.sock"
Mar 6 02:54:07.573369 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 6 02:54:07.954874 containerd[1911]: time="2026-03-06T02:54:07.954826643Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\""
Mar 6 02:54:08.917461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1573611342.mount: Deactivated successfully.
Mar 6 02:54:09.934479 containerd[1911]: time="2026-03-06T02:54:09.933843888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:54:09.938791 containerd[1911]: time="2026-03-06T02:54:09.938757317Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=24701796"
Mar 6 02:54:09.943783 containerd[1911]: time="2026-03-06T02:54:09.943753083Z" level=info msg="ImageCreate event name:\"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:54:09.948128 containerd[1911]: time="2026-03-06T02:54:09.948087096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:54:09.949193 containerd[1911]: time="2026-03-06T02:54:09.948763197Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"24698395\" in 1.993896312s"
Mar 6 02:54:09.949193 containerd[1911]: time="2026-03-06T02:54:09.948793375Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\""
Mar 6 02:54:09.949438 containerd[1911]: time="2026-03-06T02:54:09.949383369Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\""
Mar 6 02:54:10.025043 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 6 02:54:10.026888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 02:54:10.575425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 02:54:10.583999 (kubelet)[2674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 6 02:54:10.610360 kubelet[2674]: E0306 02:54:10.610309 2674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 6 02:54:10.612375 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 6 02:54:10.612614 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 6 02:54:10.613201 systemd[1]: kubelet.service: Consumed 109ms CPU time, 105.4M memory peak.
Mar 6 02:54:11.649887 containerd[1911]: time="2026-03-06T02:54:11.649821494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:54:11.653187 containerd[1911]: time="2026-03-06T02:54:11.653147375Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=19063039"
Mar 6 02:54:11.656771 containerd[1911]: time="2026-03-06T02:54:11.656744160Z" level=info msg="ImageCreate event name:\"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:54:11.662203 containerd[1911]: time="2026-03-06T02:54:11.662170532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:54:11.662894 containerd[1911]: time="2026-03-06T02:54:11.662768718Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"20675140\" in 1.713179958s"
Mar 6 02:54:11.662894 containerd[1911]: time="2026-03-06T02:54:11.662795959Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\""
Mar 6 02:54:11.663401 containerd[1911]: time="2026-03-06T02:54:11.663207180Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\""
Mar 6 02:54:12.562770 containerd[1911]: time="2026-03-06T02:54:12.562295315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:54:12.566549 containerd[1911]: time="2026-03-06T02:54:12.566520696Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=13797901"
Mar 6 02:54:12.570273 containerd[1911]: time="2026-03-06T02:54:12.570241782Z" level=info msg="ImageCreate event name:\"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:54:12.574819 containerd[1911]: time="2026-03-06T02:54:12.574784077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:54:12.575452 containerd[1911]: time="2026-03-06T02:54:12.575426217Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"15410020\" in 912.19546ms"
Mar 6 02:54:12.575535 containerd[1911]: time="2026-03-06T02:54:12.575522628Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\""
Mar 6 02:54:12.576190 containerd[1911]: time="2026-03-06T02:54:12.576159896Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\""
Mar 6 02:54:14.026055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3049366060.mount: Deactivated successfully.
Mar 6 02:54:14.244714 containerd[1911]: time="2026-03-06T02:54:14.244649015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:54:14.248665 containerd[1911]: time="2026-03-06T02:54:14.248498433Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=22329583"
Mar 6 02:54:14.252578 containerd[1911]: time="2026-03-06T02:54:14.252538864Z" level=info msg="ImageCreate event name:\"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:54:14.259657 containerd[1911]: time="2026-03-06T02:54:14.259596751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:54:14.260201 containerd[1911]: time="2026-03-06T02:54:14.260026916Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"22328602\" in 1.683835483s"
Mar 6 02:54:14.260201 containerd[1911]: time="2026-03-06T02:54:14.260050677Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\""
Mar 6 02:54:14.260745 containerd[1911]: time="2026-03-06T02:54:14.260710818Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Mar 6 02:54:14.986630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3221587770.mount: Deactivated successfully.
Mar 6 02:54:16.303900 containerd[1911]: time="2026-03-06T02:54:16.303840183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:54:16.307153 containerd[1911]: time="2026-03-06T02:54:16.306945618Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=21172211"
Mar 6 02:54:16.310514 containerd[1911]: time="2026-03-06T02:54:16.310483993Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:54:16.315281 containerd[1911]: time="2026-03-06T02:54:16.315241807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:54:16.316217 containerd[1911]: time="2026-03-06T02:54:16.316028584Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 2.055275805s"
Mar 6 02:54:16.316217 containerd[1911]: time="2026-03-06T02:54:16.316058065Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
Mar 6 02:54:16.316585 containerd[1911]: time="2026-03-06T02:54:16.316553769Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 6 02:54:16.909992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3762368492.mount: Deactivated successfully.
Mar 6 02:54:16.940430 containerd[1911]: time="2026-03-06T02:54:16.940368927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:54:16.944601 containerd[1911]: time="2026-03-06T02:54:16.944436199Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
Mar 6 02:54:16.948499 containerd[1911]: time="2026-03-06T02:54:16.948465774Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:54:16.953413 containerd[1911]: time="2026-03-06T02:54:16.952956572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:54:16.953413 containerd[1911]: time="2026-03-06T02:54:16.953298839Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 636.638195ms"
Mar 6 02:54:16.953413 containerd[1911]: time="2026-03-06T02:54:16.953327040Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Mar 6 02:54:16.953994 containerd[1911]: time="2026-03-06T02:54:16.953976012Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 6 02:54:17.594476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount331400393.mount: Deactivated successfully.
Mar 6 02:54:18.831030 containerd[1911]: time="2026-03-06T02:54:18.830969128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:54:18.834949 containerd[1911]: time="2026-03-06T02:54:18.834912737Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21738165"
Mar 6 02:54:18.838474 containerd[1911]: time="2026-03-06T02:54:18.838418787Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:54:18.843040 containerd[1911]: time="2026-03-06T02:54:18.842977307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:54:18.843613 containerd[1911]: time="2026-03-06T02:54:18.843513879Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 1.889438496s"
Mar 6 02:54:18.843613 containerd[1911]: time="2026-03-06T02:54:18.843546016Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\""
Mar 6 02:54:20.082022 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 02:54:20.082496 systemd[1]: kubelet.service: Consumed 109ms CPU time, 105.4M memory peak.
Mar 6 02:54:20.084645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 02:54:20.109488 systemd[1]: Reload requested from client PID 2843 ('systemctl') (unit session-9.scope)...
Mar 6 02:54:20.109632 systemd[1]: Reloading...
Mar 6 02:54:20.214766 zram_generator::config[2890]: No configuration found.
Mar 6 02:54:20.363846 systemd[1]: Reloading finished in 253 ms.
Mar 6 02:54:20.411065 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 02:54:20.416186 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 02:54:20.418171 systemd[1]: kubelet.service: Deactivated successfully.
Mar 6 02:54:20.418352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 02:54:20.418392 systemd[1]: kubelet.service: Consumed 80ms CPU time, 95M memory peak.
Mar 6 02:54:20.421890 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 02:54:20.552273 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Mar 6 02:54:20.629174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 02:54:20.640020 (kubelet)[2959]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 6 02:54:20.664859 kubelet[2959]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 6 02:54:20.862178 kubelet[2959]: I0306 02:54:20.862118 2959 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 6 02:54:20.862178 kubelet[2959]: I0306 02:54:20.862168 2959 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 6 02:54:20.862178 kubelet[2959]: I0306 02:54:20.862190 2959 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 6 02:54:20.862178 kubelet[2959]: I0306 02:54:20.862194 2959 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 6 02:54:20.862396 kubelet[2959]: I0306 02:54:20.862379 2959 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 6 02:54:21.156862 kubelet[2959]: E0306 02:54:21.156810 2959 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 6 02:54:21.157253 kubelet[2959]: I0306 02:54:21.157197 2959 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 6 02:54:21.161103 kubelet[2959]: I0306 02:54:21.161081 2959 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 6 02:54:21.165974 kubelet[2959]: I0306 02:54:21.165842 2959 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 6 02:54:21.166098 kubelet[2959]: I0306 02:54:21.166027 2959 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 6 02:54:21.166168 kubelet[2959]: I0306 02:54:21.166050 2959 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.3-n-b98e3238ca","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 6 02:54:21.166264 kubelet[2959]: I0306 02:54:21.166171 2959 topology_manager.go:143] "Creating topology manager with none policy"
Mar 6 02:54:21.166264 kubelet[2959]: I0306 02:54:21.166177 2959 container_manager_linux.go:308] "Creating device plugin manager"
Mar 6 02:54:21.166300 kubelet[2959]: I0306 02:54:21.166286 2959 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 6 02:54:21.171860 kubelet[2959]: I0306 02:54:21.171837 2959 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 6 02:54:21.172018 kubelet[2959]: I0306 02:54:21.171981 2959 kubelet.go:482] "Attempting to sync node with API server"
Mar 6 02:54:21.172018 kubelet[2959]: I0306 02:54:21.171994 2959 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 6 02:54:21.172018 kubelet[2959]: I0306 02:54:21.172011 2959 kubelet.go:394] "Adding apiserver pod source"
Mar 6 02:54:21.172018 kubelet[2959]: I0306 02:54:21.172018 2959 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 6 02:54:21.176291 kubelet[2959]: I0306 02:54:21.176248 2959 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 6 02:54:21.176958 kubelet[2959]: I0306 02:54:21.176937 2959 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 6 02:54:21.177000 kubelet[2959]: I0306 02:54:21.176965 2959 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 6 02:54:21.177018 kubelet[2959]: W0306 02:54:21.177001 2959 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 6 02:54:21.179046 kubelet[2959]: I0306 02:54:21.178903 2959 server.go:1257] "Started kubelet"
Mar 6 02:54:21.180301 kubelet[2959]: I0306 02:54:21.180282 2959 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 6 02:54:21.183563 kubelet[2959]: E0306 02:54:21.182663 2959 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.16:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.16:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.3-n-b98e3238ca.189a20f9f093ba29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.3-n-b98e3238ca,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.3-n-b98e3238ca,},FirstTimestamp:2026-03-06 02:54:21.178870313 +0000 UTC m=+0.536342141,LastTimestamp:2026-03-06 02:54:21.178870313 +0000 UTC m=+0.536342141,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.3-n-b98e3238ca,}"
Mar 6 02:54:21.184701 kubelet[2959]: I0306 02:54:21.184525 2959 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 6 02:54:21.185496 kubelet[2959]: I0306 02:54:21.185466 2959 server.go:317] "Adding debug handlers to kubelet server"
Mar 6 02:54:21.187292 kubelet[2959]: I0306 02:54:21.187276 2959 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 6 02:54:21.187632 kubelet[2959]: E0306 02:54:21.187610 2959 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-b98e3238ca\" not found"
Mar 6 02:54:21.189028 kubelet[2959]: I0306 02:54:21.188982 2959 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 6 02:54:21.189256 kubelet[2959]: I0306 02:54:21.189237 2959 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 6 02:54:21.189495 kubelet[2959]: I0306 02:54:21.189480 2959 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 6 02:54:21.189785 kubelet[2959]: I0306 02:54:21.189768 2959 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 6 02:54:21.189917 kubelet[2959]: I0306 02:54:21.189907 2959 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 6 02:54:21.191795 kubelet[2959]: I0306 02:54:21.190857 2959 reconciler.go:29] "Reconciler: start to sync state"
Mar 6 02:54:21.191795 kubelet[2959]: E0306 02:54:21.191016 2959 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-n-b98e3238ca?timeout=10s\": dial tcp 10.200.20.16:6443: connect: connection refused" interval="200ms"
Mar 6 02:54:21.191795 kubelet[2959]: I0306 02:54:21.191194 2959 factory.go:223] Registration of the systemd container factory successfully
Mar 6 02:54:21.191795 kubelet[2959]: I0306 02:54:21.191271 2959 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 6 02:54:21.192674 kubelet[2959]: E0306 02:54:21.192652 2959 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 6 02:54:21.193073 kubelet[2959]: I0306 02:54:21.193059 2959 factory.go:223] Registration of the containerd container factory successfully
Mar 6 02:54:21.197762 kubelet[2959]: I0306 02:54:21.197486 2959 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 6 02:54:21.198451 kubelet[2959]: I0306 02:54:21.198420 2959 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 6 02:54:21.198451 kubelet[2959]: I0306 02:54:21.198447 2959 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 6 02:54:21.198524 kubelet[2959]: I0306 02:54:21.198470 2959 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 6 02:54:21.198524 kubelet[2959]: E0306 02:54:21.198507 2959 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 6 02:54:21.216192 kubelet[2959]: I0306 02:54:21.216168 2959 cpu_manager.go:225] "Starting" policy="none"
Mar 6 02:54:21.216349 kubelet[2959]: I0306 02:54:21.216337 2959 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 6 02:54:21.216402 kubelet[2959]: I0306 02:54:21.216393 2959 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 6 02:54:21.223214 kubelet[2959]: I0306 02:54:21.223189 2959 policy_none.go:50] "Start"
Mar 6 02:54:21.223350 kubelet[2959]: I0306 02:54:21.223341 2959 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 6 02:54:21.223408 kubelet[2959]: I0306 02:54:21.223391 2959 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 6 02:54:21.229818 kubelet[2959]: I0306 02:54:21.229792 2959 policy_none.go:44] "Start"
Mar 6 02:54:21.233879 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 6 02:54:21.250046 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 6 02:54:21.253369 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 6 02:54:21.260349 kubelet[2959]: E0306 02:54:21.260324 2959 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 02:54:21.260791 kubelet[2959]: I0306 02:54:21.260774 2959 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 6 02:54:21.260900 kubelet[2959]: I0306 02:54:21.260865 2959 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 02:54:21.261212 kubelet[2959]: I0306 02:54:21.261193 2959 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 6 02:54:21.262620 kubelet[2959]: E0306 02:54:21.262604 2959 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 6 02:54:21.262793 kubelet[2959]: E0306 02:54:21.262780 2959 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.3-n-b98e3238ca\" not found" Mar 6 02:54:21.312893 systemd[1]: Created slice kubepods-burstable-podedf131f23df2607b583b12239262bd5c.slice - libcontainer container kubepods-burstable-podedf131f23df2607b583b12239262bd5c.slice. Mar 6 02:54:21.321492 kubelet[2959]: E0306 02:54:21.321466 2959 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-b98e3238ca\" not found" node="ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:21.325375 systemd[1]: Created slice kubepods-burstable-pod5079d6b5ab8f1acc9eb285e64a4e506b.slice - libcontainer container kubepods-burstable-pod5079d6b5ab8f1acc9eb285e64a4e506b.slice. 
Mar 6 02:54:21.331979 kubelet[2959]: E0306 02:54:21.331952 2959 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-b98e3238ca\" not found" node="ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:21.334885 systemd[1]: Created slice kubepods-burstable-pod340649d7ac4d203f0c879c256ed980b3.slice - libcontainer container kubepods-burstable-pod340649d7ac4d203f0c879c256ed980b3.slice. Mar 6 02:54:21.336359 kubelet[2959]: E0306 02:54:21.336334 2959 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-b98e3238ca\" not found" node="ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:21.363464 kubelet[2959]: I0306 02:54:21.363093 2959 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:21.363464 kubelet[2959]: E0306 02:54:21.363428 2959 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.16:6443/api/v1/nodes\": dial tcp 10.200.20.16:6443: connect: connection refused" node="ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:21.392213 kubelet[2959]: E0306 02:54:21.392178 2959 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-n-b98e3238ca?timeout=10s\": dial tcp 10.200.20.16:6443: connect: connection refused" interval="400ms" Mar 6 02:54:21.493023 kubelet[2959]: I0306 02:54:21.492901 2959 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/340649d7ac4d203f0c879c256ed980b3-kubeconfig\") pod \"kube-scheduler-ci-4459.2.3-n-b98e3238ca\" (UID: \"340649d7ac4d203f0c879c256ed980b3\") " pod="kube-system/kube-scheduler-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:21.493023 kubelet[2959]: I0306 02:54:21.492940 2959 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/edf131f23df2607b583b12239262bd5c-ca-certs\") pod \"kube-apiserver-ci-4459.2.3-n-b98e3238ca\" (UID: \"edf131f23df2607b583b12239262bd5c\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:21.493023 kubelet[2959]: I0306 02:54:21.492964 2959 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d6b5ab8f1acc9eb285e64a4e506b-ca-certs\") pod \"kube-controller-manager-ci-4459.2.3-n-b98e3238ca\" (UID: \"5079d6b5ab8f1acc9eb285e64a4e506b\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:21.493023 kubelet[2959]: I0306 02:54:21.492976 2959 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d6b5ab8f1acc9eb285e64a4e506b-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.3-n-b98e3238ca\" (UID: \"5079d6b5ab8f1acc9eb285e64a4e506b\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:21.493023 kubelet[2959]: I0306 02:54:21.492993 2959 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d6b5ab8f1acc9eb285e64a4e506b-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.3-n-b98e3238ca\" (UID: \"5079d6b5ab8f1acc9eb285e64a4e506b\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:21.493185 kubelet[2959]: I0306 02:54:21.493006 2959 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d6b5ab8f1acc9eb285e64a4e506b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.3-n-b98e3238ca\" (UID: 
\"5079d6b5ab8f1acc9eb285e64a4e506b\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:21.493185 kubelet[2959]: I0306 02:54:21.493029 2959 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/edf131f23df2607b583b12239262bd5c-k8s-certs\") pod \"kube-apiserver-ci-4459.2.3-n-b98e3238ca\" (UID: \"edf131f23df2607b583b12239262bd5c\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:21.493185 kubelet[2959]: I0306 02:54:21.493038 2959 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/edf131f23df2607b583b12239262bd5c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.3-n-b98e3238ca\" (UID: \"edf131f23df2607b583b12239262bd5c\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:21.493185 kubelet[2959]: I0306 02:54:21.493048 2959 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d6b5ab8f1acc9eb285e64a4e506b-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.3-n-b98e3238ca\" (UID: \"5079d6b5ab8f1acc9eb285e64a4e506b\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:21.565798 kubelet[2959]: I0306 02:54:21.565760 2959 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:21.566188 kubelet[2959]: E0306 02:54:21.566164 2959 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.16:6443/api/v1/nodes\": dial tcp 10.200.20.16:6443: connect: connection refused" node="ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:21.630665 containerd[1911]: time="2026-03-06T02:54:21.630600616Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.3-n-b98e3238ca,Uid:edf131f23df2607b583b12239262bd5c,Namespace:kube-system,Attempt:0,}" Mar 6 02:54:21.637945 containerd[1911]: time="2026-03-06T02:54:21.637898653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.3-n-b98e3238ca,Uid:5079d6b5ab8f1acc9eb285e64a4e506b,Namespace:kube-system,Attempt:0,}" Mar 6 02:54:21.643239 containerd[1911]: time="2026-03-06T02:54:21.643200513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.3-n-b98e3238ca,Uid:340649d7ac4d203f0c879c256ed980b3,Namespace:kube-system,Attempt:0,}" Mar 6 02:54:21.793361 kubelet[2959]: E0306 02:54:21.793319 2959 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-n-b98e3238ca?timeout=10s\": dial tcp 10.200.20.16:6443: connect: connection refused" interval="800ms" Mar 6 02:54:21.968857 kubelet[2959]: I0306 02:54:21.968464 2959 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:21.968857 kubelet[2959]: E0306 02:54:21.968779 2959 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.16:6443/api/v1/nodes\": dial tcp 10.200.20.16:6443: connect: connection refused" node="ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:22.306185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1108629026.mount: Deactivated successfully. 
Mar 6 02:54:22.328764 containerd[1911]: time="2026-03-06T02:54:22.328305127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 02:54:22.336844 containerd[1911]: time="2026-03-06T02:54:22.336799521Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Mar 6 02:54:22.355787 containerd[1911]: time="2026-03-06T02:54:22.355573223Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 02:54:22.358924 containerd[1911]: time="2026-03-06T02:54:22.358886026Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 02:54:22.362202 containerd[1911]: time="2026-03-06T02:54:22.362171515Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 6 02:54:22.369705 containerd[1911]: time="2026-03-06T02:54:22.369663840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 02:54:22.370182 containerd[1911]: time="2026-03-06T02:54:22.370152026Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 734.739481ms" Mar 6 02:54:22.373242 containerd[1911]: 
time="2026-03-06T02:54:22.373212299Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 02:54:22.385187 containerd[1911]: time="2026-03-06T02:54:22.384501301Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 6 02:54:22.386376 containerd[1911]: time="2026-03-06T02:54:22.386341217Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 741.211769ms" Mar 6 02:54:22.421863 containerd[1911]: time="2026-03-06T02:54:22.420445582Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 766.04067ms" Mar 6 02:54:22.426441 containerd[1911]: time="2026-03-06T02:54:22.426305662Z" level=info msg="connecting to shim 1d1cf4e2fd748437f5e6d9fbb53e7d835327562ee07c2ced9fe7af8924ddd54d" address="unix:///run/containerd/s/38a3b988c3d14a52c9af15e2cf3960546c50b1179bc221ebf0396a161590c0d6" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:54:22.439761 containerd[1911]: time="2026-03-06T02:54:22.439703254Z" level=info msg="connecting to shim febb358e20cd66eabc77b323bf7cf8330bd760a829c68c534f6ec901b82f9041" address="unix:///run/containerd/s/f5c7b33f9f69ea4811967339cb621fd5121e13c6f2355ed65e61d1d6dc690627" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:54:22.451922 systemd[1]: Started cri-containerd-1d1cf4e2fd748437f5e6d9fbb53e7d835327562ee07c2ced9fe7af8924ddd54d.scope - 
libcontainer container 1d1cf4e2fd748437f5e6d9fbb53e7d835327562ee07c2ced9fe7af8924ddd54d. Mar 6 02:54:22.468939 systemd[1]: Started cri-containerd-febb358e20cd66eabc77b323bf7cf8330bd760a829c68c534f6ec901b82f9041.scope - libcontainer container febb358e20cd66eabc77b323bf7cf8330bd760a829c68c534f6ec901b82f9041. Mar 6 02:54:22.488344 containerd[1911]: time="2026-03-06T02:54:22.488301962Z" level=info msg="connecting to shim c1cc28528ce7a90009f430381f6f806894bdcd0f1228c4e360169f4012915da9" address="unix:///run/containerd/s/41b41448b41d4ba70d4008c3e4169e28e46b2f493d03d19bbac20b50758fe8ab" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:54:22.511152 containerd[1911]: time="2026-03-06T02:54:22.511072636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.3-n-b98e3238ca,Uid:edf131f23df2607b583b12239262bd5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d1cf4e2fd748437f5e6d9fbb53e7d835327562ee07c2ced9fe7af8924ddd54d\"" Mar 6 02:54:22.512903 systemd[1]: Started cri-containerd-c1cc28528ce7a90009f430381f6f806894bdcd0f1228c4e360169f4012915da9.scope - libcontainer container c1cc28528ce7a90009f430381f6f806894bdcd0f1228c4e360169f4012915da9. 
Mar 6 02:54:22.521809 containerd[1911]: time="2026-03-06T02:54:22.521722574Z" level=info msg="CreateContainer within sandbox \"1d1cf4e2fd748437f5e6d9fbb53e7d835327562ee07c2ced9fe7af8924ddd54d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 6 02:54:22.524220 containerd[1911]: time="2026-03-06T02:54:22.524190753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.3-n-b98e3238ca,Uid:5079d6b5ab8f1acc9eb285e64a4e506b,Namespace:kube-system,Attempt:0,} returns sandbox id \"febb358e20cd66eabc77b323bf7cf8330bd760a829c68c534f6ec901b82f9041\"" Mar 6 02:54:22.534171 containerd[1911]: time="2026-03-06T02:54:22.534092471Z" level=info msg="CreateContainer within sandbox \"febb358e20cd66eabc77b323bf7cf8330bd760a829c68c534f6ec901b82f9041\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 6 02:54:22.557161 containerd[1911]: time="2026-03-06T02:54:22.557043752Z" level=info msg="Container 6e2510f61f50e9dfb3e925f08a2ce1751051a6a1179ee452621dd0eaaf1b469b: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:54:22.567843 containerd[1911]: time="2026-03-06T02:54:22.567320924Z" level=info msg="Container 95a9428a7b2602cfafae03e71c6394be6043c92eec949ab171d20e80a5773dc5: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:54:22.570256 containerd[1911]: time="2026-03-06T02:54:22.570222487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.3-n-b98e3238ca,Uid:340649d7ac4d203f0c879c256ed980b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1cc28528ce7a90009f430381f6f806894bdcd0f1228c4e360169f4012915da9\"" Mar 6 02:54:22.580596 containerd[1911]: time="2026-03-06T02:54:22.580543036Z" level=info msg="CreateContainer within sandbox \"c1cc28528ce7a90009f430381f6f806894bdcd0f1228c4e360169f4012915da9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 6 02:54:22.594623 kubelet[2959]: E0306 02:54:22.594554 2959 controller.go:201] "Failed to 
ensure lease exists, will retry" err="Get \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-n-b98e3238ca?timeout=10s\": dial tcp 10.200.20.16:6443: connect: connection refused" interval="1.6s" Mar 6 02:54:22.604309 containerd[1911]: time="2026-03-06T02:54:22.604267490Z" level=info msg="CreateContainer within sandbox \"febb358e20cd66eabc77b323bf7cf8330bd760a829c68c534f6ec901b82f9041\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6e2510f61f50e9dfb3e925f08a2ce1751051a6a1179ee452621dd0eaaf1b469b\"" Mar 6 02:54:22.605779 containerd[1911]: time="2026-03-06T02:54:22.605099065Z" level=info msg="StartContainer for \"6e2510f61f50e9dfb3e925f08a2ce1751051a6a1179ee452621dd0eaaf1b469b\"" Mar 6 02:54:22.607682 containerd[1911]: time="2026-03-06T02:54:22.607652639Z" level=info msg="connecting to shim 6e2510f61f50e9dfb3e925f08a2ce1751051a6a1179ee452621dd0eaaf1b469b" address="unix:///run/containerd/s/f5c7b33f9f69ea4811967339cb621fd5121e13c6f2355ed65e61d1d6dc690627" protocol=ttrpc version=3 Mar 6 02:54:22.621958 systemd[1]: Started cri-containerd-6e2510f61f50e9dfb3e925f08a2ce1751051a6a1179ee452621dd0eaaf1b469b.scope - libcontainer container 6e2510f61f50e9dfb3e925f08a2ce1751051a6a1179ee452621dd0eaaf1b469b. 
Mar 6 02:54:22.623435 containerd[1911]: time="2026-03-06T02:54:22.623395133Z" level=info msg="CreateContainer within sandbox \"1d1cf4e2fd748437f5e6d9fbb53e7d835327562ee07c2ced9fe7af8924ddd54d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"95a9428a7b2602cfafae03e71c6394be6043c92eec949ab171d20e80a5773dc5\"" Mar 6 02:54:22.624594 containerd[1911]: time="2026-03-06T02:54:22.624175074Z" level=info msg="StartContainer for \"95a9428a7b2602cfafae03e71c6394be6043c92eec949ab171d20e80a5773dc5\"" Mar 6 02:54:22.626075 containerd[1911]: time="2026-03-06T02:54:22.625967380Z" level=info msg="connecting to shim 95a9428a7b2602cfafae03e71c6394be6043c92eec949ab171d20e80a5773dc5" address="unix:///run/containerd/s/38a3b988c3d14a52c9af15e2cf3960546c50b1179bc221ebf0396a161590c0d6" protocol=ttrpc version=3 Mar 6 02:54:22.627667 containerd[1911]: time="2026-03-06T02:54:22.627638426Z" level=info msg="Container 9f705e8e6c261590f4d1110bbbc7a75907cf267e3818af7f3202dab474b24148: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:54:22.645884 systemd[1]: Started cri-containerd-95a9428a7b2602cfafae03e71c6394be6043c92eec949ab171d20e80a5773dc5.scope - libcontainer container 95a9428a7b2602cfafae03e71c6394be6043c92eec949ab171d20e80a5773dc5. 
Mar 6 02:54:22.653024 containerd[1911]: time="2026-03-06T02:54:22.651937852Z" level=info msg="CreateContainer within sandbox \"c1cc28528ce7a90009f430381f6f806894bdcd0f1228c4e360169f4012915da9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9f705e8e6c261590f4d1110bbbc7a75907cf267e3818af7f3202dab474b24148\"" Mar 6 02:54:22.653717 containerd[1911]: time="2026-03-06T02:54:22.653684757Z" level=info msg="StartContainer for \"9f705e8e6c261590f4d1110bbbc7a75907cf267e3818af7f3202dab474b24148\"" Mar 6 02:54:22.655044 containerd[1911]: time="2026-03-06T02:54:22.654985437Z" level=info msg="connecting to shim 9f705e8e6c261590f4d1110bbbc7a75907cf267e3818af7f3202dab474b24148" address="unix:///run/containerd/s/41b41448b41d4ba70d4008c3e4169e28e46b2f493d03d19bbac20b50758fe8ab" protocol=ttrpc version=3 Mar 6 02:54:22.674085 systemd[1]: Started cri-containerd-9f705e8e6c261590f4d1110bbbc7a75907cf267e3818af7f3202dab474b24148.scope - libcontainer container 9f705e8e6c261590f4d1110bbbc7a75907cf267e3818af7f3202dab474b24148. Mar 6 02:54:22.703628 containerd[1911]: time="2026-03-06T02:54:22.703558856Z" level=info msg="StartContainer for \"6e2510f61f50e9dfb3e925f08a2ce1751051a6a1179ee452621dd0eaaf1b469b\" returns successfully" Mar 6 02:54:22.710992 containerd[1911]: time="2026-03-06T02:54:22.710956602Z" level=info msg="StartContainer for \"95a9428a7b2602cfafae03e71c6394be6043c92eec949ab171d20e80a5773dc5\" returns successfully" Mar 6 02:54:22.737331 containerd[1911]: time="2026-03-06T02:54:22.737291311Z" level=info msg="StartContainer for \"9f705e8e6c261590f4d1110bbbc7a75907cf267e3818af7f3202dab474b24148\" returns successfully" Mar 6 02:54:22.771165 kubelet[2959]: I0306 02:54:22.771136 2959 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:22.949348 update_engine[1885]: I20260306 02:54:22.948773 1885 update_attempter.cc:509] Updating boot flags... 
Mar 6 02:54:23.218707 kubelet[2959]: E0306 02:54:23.218596 2959 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-b98e3238ca\" not found" node="ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:23.226253 kubelet[2959]: E0306 02:54:23.226201 2959 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-b98e3238ca\" not found" node="ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:23.228824 kubelet[2959]: E0306 02:54:23.228670 2959 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-b98e3238ca\" not found" node="ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:23.659499 kubelet[2959]: I0306 02:54:23.659456 2959 kubelet_node_status.go:77] "Successfully registered node" node="ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:23.659499 kubelet[2959]: E0306 02:54:23.659494 2959 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"ci-4459.2.3-n-b98e3238ca\": node \"ci-4459.2.3-n-b98e3238ca\" not found" Mar 6 02:54:23.678477 kubelet[2959]: E0306 02:54:23.678439 2959 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-b98e3238ca\" not found" Mar 6 02:54:23.779208 kubelet[2959]: E0306 02:54:23.779090 2959 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-b98e3238ca\" not found" Mar 6 02:54:23.880095 kubelet[2959]: E0306 02:54:23.880050 2959 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-b98e3238ca\" not found" Mar 6 02:54:23.988587 kubelet[2959]: I0306 02:54:23.988481 2959 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:23.992723 kubelet[2959]: E0306 02:54:23.992685 2959 kubelet.go:3342] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4459.2.3-n-b98e3238ca\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:23.992723 kubelet[2959]: I0306 02:54:23.992765 2959 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:23.994402 kubelet[2959]: E0306 02:54:23.994338 2959 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.3-n-b98e3238ca\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:23.994402 kubelet[2959]: I0306 02:54:23.994358 2959 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:23.996014 kubelet[2959]: E0306 02:54:23.995983 2959 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.3-n-b98e3238ca\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:24.176420 kubelet[2959]: I0306 02:54:24.176166 2959 apiserver.go:52] "Watching apiserver" Mar 6 02:54:24.191456 kubelet[2959]: I0306 02:54:24.191428 2959 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 6 02:54:24.229835 kubelet[2959]: I0306 02:54:24.228964 2959 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:24.229835 kubelet[2959]: I0306 02:54:24.229315 2959 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:24.231289 kubelet[2959]: E0306 02:54:24.231111 2959 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.3-n-b98e3238ca\" is forbidden: no PriorityClass with name system-node-critical was 
found" pod="kube-system/kube-scheduler-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:24.231590 kubelet[2959]: E0306 02:54:24.231570 2959 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.3-n-b98e3238ca\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:25.231051 kubelet[2959]: I0306 02:54:25.230868 2959 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:25.242093 kubelet[2959]: I0306 02:54:25.241824 2959 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 6 02:54:26.197884 systemd[1]: Reload requested from client PID 3308 ('systemctl') (unit session-9.scope)... Mar 6 02:54:26.197898 systemd[1]: Reloading... Mar 6 02:54:26.280833 zram_generator::config[3352]: No configuration found. Mar 6 02:54:26.450371 systemd[1]: Reloading finished in 252 ms. Mar 6 02:54:26.478347 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 02:54:26.479858 kubelet[2959]: I0306 02:54:26.479702 2959 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 02:54:26.498214 systemd[1]: kubelet.service: Deactivated successfully. Mar 6 02:54:26.498416 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 02:54:26.498470 systemd[1]: kubelet.service: Consumed 514ms CPU time, 121.6M memory peak. Mar 6 02:54:26.502878 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 02:54:26.646911 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 6 02:54:26.651095 (kubelet)[3419]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 6 02:54:26.682350 kubelet[3419]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 02:54:26.690769 kubelet[3419]: I0306 02:54:26.689708 3419 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 6 02:54:26.690769 kubelet[3419]: I0306 02:54:26.689816 3419 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 6 02:54:26.690769 kubelet[3419]: I0306 02:54:26.689835 3419 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 6 02:54:26.690769 kubelet[3419]: I0306 02:54:26.689839 3419 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 6 02:54:26.690769 kubelet[3419]: I0306 02:54:26.690022 3419 server.go:951] "Client rotation is on, will bootstrap in background" Mar 6 02:54:26.691294 kubelet[3419]: I0306 02:54:26.691269 3419 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 6 02:54:26.708923 kubelet[3419]: I0306 02:54:26.708643 3419 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 02:54:26.713567 kubelet[3419]: I0306 02:54:26.713551 3419 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 6 02:54:26.717188 kubelet[3419]: I0306 02:54:26.717147 3419 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 6 02:54:26.717533 kubelet[3419]: I0306 02:54:26.717503 3419 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 02:54:26.717843 kubelet[3419]: I0306 02:54:26.717643 3419 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.3-n-b98e3238ca","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 6 02:54:26.718067 kubelet[3419]: I0306 02:54:26.718052 3419 topology_manager.go:143] "Creating topology manager with none policy" Mar 6 
02:54:26.718135 kubelet[3419]: I0306 02:54:26.718127 3419 container_manager_linux.go:308] "Creating device plugin manager" Mar 6 02:54:26.718215 kubelet[3419]: I0306 02:54:26.718207 3419 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 6 02:54:26.718454 kubelet[3419]: I0306 02:54:26.718442 3419 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 6 02:54:26.719082 kubelet[3419]: I0306 02:54:26.719055 3419 kubelet.go:482] "Attempting to sync node with API server" Mar 6 02:54:26.719082 kubelet[3419]: I0306 02:54:26.719086 3419 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 6 02:54:26.719179 kubelet[3419]: I0306 02:54:26.719104 3419 kubelet.go:394] "Adding apiserver pod source" Mar 6 02:54:26.719179 kubelet[3419]: I0306 02:54:26.719113 3419 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 6 02:54:26.733293 kubelet[3419]: I0306 02:54:26.733253 3419 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 6 02:54:26.734220 kubelet[3419]: I0306 02:54:26.734202 3419 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 6 02:54:26.734322 kubelet[3419]: I0306 02:54:26.734311 3419 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 6 02:54:26.739142 kubelet[3419]: I0306 02:54:26.738617 3419 server.go:1257] "Started kubelet" Mar 6 02:54:26.739142 kubelet[3419]: I0306 02:54:26.738811 3419 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 6 02:54:26.739530 kubelet[3419]: I0306 02:54:26.739412 3419 server.go:317] "Adding debug handlers to kubelet server" Mar 6 02:54:26.740167 kubelet[3419]: I0306 02:54:26.739886 3419 ratelimit.go:56] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Mar 6 02:54:26.740388 kubelet[3419]: I0306 02:54:26.740285 3419 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 6 02:54:26.741328 kubelet[3419]: I0306 02:54:26.741143 3419 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 6 02:54:26.745977 kubelet[3419]: I0306 02:54:26.745813 3419 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 6 02:54:26.746645 kubelet[3419]: I0306 02:54:26.746619 3419 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 6 02:54:26.748522 kubelet[3419]: I0306 02:54:26.748462 3419 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 6 02:54:26.748595 kubelet[3419]: I0306 02:54:26.748537 3419 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 6 02:54:26.748784 kubelet[3419]: I0306 02:54:26.748627 3419 reconciler.go:29] "Reconciler: start to sync state" Mar 6 02:54:26.750869 kubelet[3419]: I0306 02:54:26.749491 3419 factory.go:223] Registration of the systemd container factory successfully Mar 6 02:54:26.750869 kubelet[3419]: I0306 02:54:26.749588 3419 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 6 02:54:26.750869 kubelet[3419]: I0306 02:54:26.750830 3419 factory.go:223] Registration of the containerd container factory successfully Mar 6 02:54:26.760560 kubelet[3419]: I0306 02:54:26.760517 3419 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 6 02:54:26.761473 kubelet[3419]: I0306 02:54:26.761456 3419 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 6 02:54:26.761564 kubelet[3419]: I0306 02:54:26.761558 3419 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 6 02:54:26.761641 kubelet[3419]: I0306 02:54:26.761634 3419 kubelet.go:2501] "Starting kubelet main sync loop" Mar 6 02:54:26.761783 kubelet[3419]: E0306 02:54:26.761755 3419 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 6 02:54:26.797818 kubelet[3419]: I0306 02:54:26.797793 3419 cpu_manager.go:225] "Starting" policy="none" Mar 6 02:54:26.797984 kubelet[3419]: I0306 02:54:26.797973 3419 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 6 02:54:26.798112 kubelet[3419]: I0306 02:54:26.798032 3419 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 6 02:54:26.798346 kubelet[3419]: I0306 02:54:26.798330 3419 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 6 02:54:26.798436 kubelet[3419]: I0306 02:54:26.798412 3419 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 6 02:54:26.798561 kubelet[3419]: I0306 02:54:26.798486 3419 policy_none.go:50] "Start" Mar 6 02:54:26.798561 kubelet[3419]: I0306 02:54:26.798497 3419 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 6 02:54:26.798561 kubelet[3419]: I0306 02:54:26.798507 3419 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 6 02:54:26.798762 kubelet[3419]: I0306 02:54:26.798750 3419 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 6 02:54:26.798908 kubelet[3419]: I0306 02:54:26.798825 3419 policy_none.go:44] "Start" Mar 6 02:54:26.802785 kubelet[3419]: E0306 02:54:26.802767 3419 manager.go:525] "Failed to read data from checkpoint" err="checkpoint 
is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 02:54:26.803039 kubelet[3419]: I0306 02:54:26.803023 3419 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 6 02:54:26.803158 kubelet[3419]: I0306 02:54:26.803127 3419 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 02:54:26.805025 kubelet[3419]: I0306 02:54:26.804848 3419 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 6 02:54:26.808755 kubelet[3419]: E0306 02:54:26.807167 3419 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 6 02:54:26.862798 kubelet[3419]: I0306 02:54:26.862763 3419 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:26.862987 kubelet[3419]: I0306 02:54:26.862963 3419 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:26.863122 kubelet[3419]: I0306 02:54:26.862727 3419 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:26.871454 kubelet[3419]: I0306 02:54:26.871417 3419 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 6 02:54:26.880247 kubelet[3419]: I0306 02:54:26.880170 3419 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 6 02:54:26.880961 kubelet[3419]: I0306 02:54:26.880928 3419 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 6 02:54:26.881051 kubelet[3419]: E0306 
02:54:26.880988 3419 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.3-n-b98e3238ca\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:26.908758 kubelet[3419]: I0306 02:54:26.908717 3419 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:26.919787 kubelet[3419]: I0306 02:54:26.918907 3419 kubelet_node_status.go:123] "Node was previously registered" node="ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:26.919787 kubelet[3419]: I0306 02:54:26.919003 3419 kubelet_node_status.go:77] "Successfully registered node" node="ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:27.049831 kubelet[3419]: I0306 02:54:27.049718 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d6b5ab8f1acc9eb285e64a4e506b-ca-certs\") pod \"kube-controller-manager-ci-4459.2.3-n-b98e3238ca\" (UID: \"5079d6b5ab8f1acc9eb285e64a4e506b\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:27.049831 kubelet[3419]: I0306 02:54:27.049766 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d6b5ab8f1acc9eb285e64a4e506b-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.3-n-b98e3238ca\" (UID: \"5079d6b5ab8f1acc9eb285e64a4e506b\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:27.049831 kubelet[3419]: I0306 02:54:27.049796 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d6b5ab8f1acc9eb285e64a4e506b-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.3-n-b98e3238ca\" (UID: \"5079d6b5ab8f1acc9eb285e64a4e506b\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:27.049831 kubelet[3419]: I0306 
02:54:27.049808 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d6b5ab8f1acc9eb285e64a4e506b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.3-n-b98e3238ca\" (UID: \"5079d6b5ab8f1acc9eb285e64a4e506b\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:27.050180 kubelet[3419]: I0306 02:54:27.050080 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/edf131f23df2607b583b12239262bd5c-ca-certs\") pod \"kube-apiserver-ci-4459.2.3-n-b98e3238ca\" (UID: \"edf131f23df2607b583b12239262bd5c\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:27.050180 kubelet[3419]: I0306 02:54:27.050103 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/edf131f23df2607b583b12239262bd5c-k8s-certs\") pod \"kube-apiserver-ci-4459.2.3-n-b98e3238ca\" (UID: \"edf131f23df2607b583b12239262bd5c\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:27.050180 kubelet[3419]: I0306 02:54:27.050114 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/edf131f23df2607b583b12239262bd5c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.3-n-b98e3238ca\" (UID: \"edf131f23df2607b583b12239262bd5c\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:27.050180 kubelet[3419]: I0306 02:54:27.050125 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d6b5ab8f1acc9eb285e64a4e506b-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.3-n-b98e3238ca\" (UID: 
\"5079d6b5ab8f1acc9eb285e64a4e506b\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:27.050180 kubelet[3419]: I0306 02:54:27.050137 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/340649d7ac4d203f0c879c256ed980b3-kubeconfig\") pod \"kube-scheduler-ci-4459.2.3-n-b98e3238ca\" (UID: \"340649d7ac4d203f0c879c256ed980b3\") " pod="kube-system/kube-scheduler-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:27.233653 sudo[3456]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 6 02:54:27.234364 sudo[3456]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 6 02:54:27.476727 sudo[3456]: pam_unix(sudo:session): session closed for user root Mar 6 02:54:27.719682 kubelet[3419]: I0306 02:54:27.719633 3419 apiserver.go:52] "Watching apiserver" Mar 6 02:54:27.754334 kubelet[3419]: I0306 02:54:27.749484 3419 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 6 02:54:27.788651 kubelet[3419]: I0306 02:54:27.786200 3419 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:27.789005 kubelet[3419]: I0306 02:54:27.788986 3419 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:27.795448 kubelet[3419]: I0306 02:54:27.795169 3419 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 6 02:54:27.797804 kubelet[3419]: E0306 02:54:27.797768 3419 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.3-n-b98e3238ca\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:27.807571 kubelet[3419]: I0306 
02:54:27.807518 3419 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 6 02:54:27.807738 kubelet[3419]: E0306 02:54:27.807713 3419 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.3-n-b98e3238ca\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-b98e3238ca" Mar 6 02:54:27.828247 kubelet[3419]: I0306 02:54:27.828185 3419 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.3-n-b98e3238ca" podStartSLOduration=2.82815683 podStartE2EDuration="2.82815683s" podCreationTimestamp="2026-03-06 02:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:54:27.815021832 +0000 UTC m=+1.160411475" watchObservedRunningTime="2026-03-06 02:54:27.82815683 +0000 UTC m=+1.173546473" Mar 6 02:54:27.854198 kubelet[3419]: I0306 02:54:27.854121 3419 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-b98e3238ca" podStartSLOduration=1.85410107 podStartE2EDuration="1.85410107s" podCreationTimestamp="2026-03-06 02:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:54:27.828422166 +0000 UTC m=+1.173811809" watchObservedRunningTime="2026-03-06 02:54:27.85410107 +0000 UTC m=+1.199490713" Mar 6 02:54:27.854926 kubelet[3419]: I0306 02:54:27.854418 3419 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.3-n-b98e3238ca" podStartSLOduration=1.854411209 podStartE2EDuration="1.854411209s" podCreationTimestamp="2026-03-06 02:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:54:27.853326197 +0000 UTC m=+1.198715840" watchObservedRunningTime="2026-03-06 02:54:27.854411209 +0000 UTC m=+1.199800852" Mar 6 02:54:28.604045 sudo[2385]: pam_unix(sudo:session): session closed for user root Mar 6 02:54:28.684453 sshd[2384]: Connection closed by 10.200.16.10 port 49994 Mar 6 02:54:28.685956 sshd-session[2381]: pam_unix(sshd:session): session closed for user core Mar 6 02:54:28.689175 systemd[1]: sshd@6-10.200.20.16:22-10.200.16.10:49994.service: Deactivated successfully. Mar 6 02:54:28.691258 systemd[1]: session-9.scope: Deactivated successfully. Mar 6 02:54:28.691513 systemd[1]: session-9.scope: Consumed 2.431s CPU time, 258.8M memory peak. Mar 6 02:54:28.694007 systemd-logind[1879]: Session 9 logged out. Waiting for processes to exit. Mar 6 02:54:28.695908 systemd-logind[1879]: Removed session 9. Mar 6 02:54:31.463970 kubelet[3419]: I0306 02:54:31.463931 3419 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 6 02:54:31.464588 containerd[1911]: time="2026-03-06T02:54:31.464272581Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 6 02:54:31.465035 kubelet[3419]: I0306 02:54:31.464810 3419 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 6 02:54:32.373070 systemd[1]: Created slice kubepods-besteffort-pod9d32d93c_8ae2_4a14_99ea_ce2d35d2ab70.slice - libcontainer container kubepods-besteffort-pod9d32d93c_8ae2_4a14_99ea_ce2d35d2ab70.slice. 
Mar 6 02:54:32.377798 kubelet[3419]: I0306 02:54:32.377771 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9d32d93c-8ae2-4a14-99ea-ce2d35d2ab70-kube-proxy\") pod \"kube-proxy-lphbs\" (UID: \"9d32d93c-8ae2-4a14-99ea-ce2d35d2ab70\") " pod="kube-system/kube-proxy-lphbs" Mar 6 02:54:32.377887 kubelet[3419]: I0306 02:54:32.377802 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d32d93c-8ae2-4a14-99ea-ce2d35d2ab70-xtables-lock\") pod \"kube-proxy-lphbs\" (UID: \"9d32d93c-8ae2-4a14-99ea-ce2d35d2ab70\") " pod="kube-system/kube-proxy-lphbs" Mar 6 02:54:32.377887 kubelet[3419]: I0306 02:54:32.377816 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d32d93c-8ae2-4a14-99ea-ce2d35d2ab70-lib-modules\") pod \"kube-proxy-lphbs\" (UID: \"9d32d93c-8ae2-4a14-99ea-ce2d35d2ab70\") " pod="kube-system/kube-proxy-lphbs" Mar 6 02:54:32.377887 kubelet[3419]: I0306 02:54:32.377829 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ksjt\" (UniqueName: \"kubernetes.io/projected/9d32d93c-8ae2-4a14-99ea-ce2d35d2ab70-kube-api-access-9ksjt\") pod \"kube-proxy-lphbs\" (UID: \"9d32d93c-8ae2-4a14-99ea-ce2d35d2ab70\") " pod="kube-system/kube-proxy-lphbs" Mar 6 02:54:32.394390 systemd[1]: Created slice kubepods-burstable-podb68a0e0f_1a06_424b_89f2_db78ffd6b367.slice - libcontainer container kubepods-burstable-podb68a0e0f_1a06_424b_89f2_db78ffd6b367.slice. 
Mar 6 02:54:32.479149 kubelet[3419]: I0306 02:54:32.479104 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-host-proc-sys-net\") pod \"cilium-kc9kl\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") " pod="kube-system/cilium-kc9kl" Mar 6 02:54:32.479149 kubelet[3419]: I0306 02:54:32.479162 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-host-proc-sys-kernel\") pod \"cilium-kc9kl\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") " pod="kube-system/cilium-kc9kl" Mar 6 02:54:32.479543 kubelet[3419]: I0306 02:54:32.479175 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b68a0e0f-1a06-424b-89f2-db78ffd6b367-hubble-tls\") pod \"cilium-kc9kl\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") " pod="kube-system/cilium-kc9kl" Mar 6 02:54:32.479543 kubelet[3419]: I0306 02:54:32.479206 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-hostproc\") pod \"cilium-kc9kl\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") " pod="kube-system/cilium-kc9kl" Mar 6 02:54:32.479543 kubelet[3419]: I0306 02:54:32.479215 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-cilium-cgroup\") pod \"cilium-kc9kl\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") " pod="kube-system/cilium-kc9kl" Mar 6 02:54:32.479543 kubelet[3419]: I0306 02:54:32.479223 3419 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-lib-modules\") pod \"cilium-kc9kl\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") " pod="kube-system/cilium-kc9kl" Mar 6 02:54:32.479543 kubelet[3419]: I0306 02:54:32.479232 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-xtables-lock\") pod \"cilium-kc9kl\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") " pod="kube-system/cilium-kc9kl" Mar 6 02:54:32.479543 kubelet[3419]: I0306 02:54:32.479246 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-cilium-run\") pod \"cilium-kc9kl\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") " pod="kube-system/cilium-kc9kl" Mar 6 02:54:32.479637 kubelet[3419]: I0306 02:54:32.479254 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-etc-cni-netd\") pod \"cilium-kc9kl\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") " pod="kube-system/cilium-kc9kl" Mar 6 02:54:32.479637 kubelet[3419]: I0306 02:54:32.479266 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b68a0e0f-1a06-424b-89f2-db78ffd6b367-cilium-config-path\") pod \"cilium-kc9kl\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") " pod="kube-system/cilium-kc9kl" Mar 6 02:54:32.479637 kubelet[3419]: I0306 02:54:32.479280 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6zl5\" (UniqueName: 
\"kubernetes.io/projected/b68a0e0f-1a06-424b-89f2-db78ffd6b367-kube-api-access-k6zl5\") pod \"cilium-kc9kl\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") " pod="kube-system/cilium-kc9kl" Mar 6 02:54:32.479637 kubelet[3419]: I0306 02:54:32.479295 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-bpf-maps\") pod \"cilium-kc9kl\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") " pod="kube-system/cilium-kc9kl" Mar 6 02:54:32.479637 kubelet[3419]: I0306 02:54:32.479303 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-cni-path\") pod \"cilium-kc9kl\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") " pod="kube-system/cilium-kc9kl" Mar 6 02:54:32.479637 kubelet[3419]: I0306 02:54:32.479314 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b68a0e0f-1a06-424b-89f2-db78ffd6b367-clustermesh-secrets\") pod \"cilium-kc9kl\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") " pod="kube-system/cilium-kc9kl" Mar 6 02:54:32.663964 systemd[1]: Created slice kubepods-besteffort-pod4943ea04_b4a4_48fa_b19e_890a8cfa9910.slice - libcontainer container kubepods-besteffort-pod4943ea04_b4a4_48fa_b19e_890a8cfa9910.slice. 
Mar 6 02:54:32.682199 kubelet[3419]: I0306 02:54:32.682147 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4943ea04-b4a4-48fa-b19e-890a8cfa9910-cilium-config-path\") pod \"cilium-operator-78cf5644cb-hc7gx\" (UID: \"4943ea04-b4a4-48fa-b19e-890a8cfa9910\") " pod="kube-system/cilium-operator-78cf5644cb-hc7gx" Mar 6 02:54:32.682446 kubelet[3419]: I0306 02:54:32.682404 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4hv5\" (UniqueName: \"kubernetes.io/projected/4943ea04-b4a4-48fa-b19e-890a8cfa9910-kube-api-access-c4hv5\") pod \"cilium-operator-78cf5644cb-hc7gx\" (UID: \"4943ea04-b4a4-48fa-b19e-890a8cfa9910\") " pod="kube-system/cilium-operator-78cf5644cb-hc7gx" Mar 6 02:54:32.693565 containerd[1911]: time="2026-03-06T02:54:32.693517842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lphbs,Uid:9d32d93c-8ae2-4a14-99ea-ce2d35d2ab70,Namespace:kube-system,Attempt:0,}" Mar 6 02:54:32.707324 containerd[1911]: time="2026-03-06T02:54:32.707181769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kc9kl,Uid:b68a0e0f-1a06-424b-89f2-db78ffd6b367,Namespace:kube-system,Attempt:0,}" Mar 6 02:54:32.748717 containerd[1911]: time="2026-03-06T02:54:32.748614805Z" level=info msg="connecting to shim 8a6c118e02c83995dd5b687f16e4dfd06dffd5b53224787962d16fd14e17729c" address="unix:///run/containerd/s/da2d36938d515f725f19b085153a493f6656e51fba700f7d769553e51f4d5ae4" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:54:32.766890 systemd[1]: Started cri-containerd-8a6c118e02c83995dd5b687f16e4dfd06dffd5b53224787962d16fd14e17729c.scope - libcontainer container 8a6c118e02c83995dd5b687f16e4dfd06dffd5b53224787962d16fd14e17729c. 
Mar 6 02:54:32.769615 containerd[1911]: time="2026-03-06T02:54:32.769254336Z" level=info msg="connecting to shim 6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11" address="unix:///run/containerd/s/62cd125f0de05f9f60b030657cd92260e84352d206603935878b555e14bb4332" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:54:32.799077 systemd[1]: Started cri-containerd-6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11.scope - libcontainer container 6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11. Mar 6 02:54:32.806925 containerd[1911]: time="2026-03-06T02:54:32.806869831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lphbs,Uid:9d32d93c-8ae2-4a14-99ea-ce2d35d2ab70,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a6c118e02c83995dd5b687f16e4dfd06dffd5b53224787962d16fd14e17729c\"" Mar 6 02:54:32.821988 containerd[1911]: time="2026-03-06T02:54:32.821945956Z" level=info msg="CreateContainer within sandbox \"8a6c118e02c83995dd5b687f16e4dfd06dffd5b53224787962d16fd14e17729c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 6 02:54:32.833911 containerd[1911]: time="2026-03-06T02:54:32.833842161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kc9kl,Uid:b68a0e0f-1a06-424b-89f2-db78ffd6b367,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11\"" Mar 6 02:54:32.836483 containerd[1911]: time="2026-03-06T02:54:32.836095835Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 6 02:54:32.853002 containerd[1911]: time="2026-03-06T02:54:32.852956458Z" level=info msg="Container 92011422039e276dd2f80948a570eee4b952ceb369c60719d7a5eea1feb70b05: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:54:32.871467 containerd[1911]: time="2026-03-06T02:54:32.871418430Z" level=info msg="CreateContainer within sandbox 
\"8a6c118e02c83995dd5b687f16e4dfd06dffd5b53224787962d16fd14e17729c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"92011422039e276dd2f80948a570eee4b952ceb369c60719d7a5eea1feb70b05\"" Mar 6 02:54:32.873949 containerd[1911]: time="2026-03-06T02:54:32.873612174Z" level=info msg="StartContainer for \"92011422039e276dd2f80948a570eee4b952ceb369c60719d7a5eea1feb70b05\"" Mar 6 02:54:32.874937 containerd[1911]: time="2026-03-06T02:54:32.874912401Z" level=info msg="connecting to shim 92011422039e276dd2f80948a570eee4b952ceb369c60719d7a5eea1feb70b05" address="unix:///run/containerd/s/da2d36938d515f725f19b085153a493f6656e51fba700f7d769553e51f4d5ae4" protocol=ttrpc version=3 Mar 6 02:54:32.892889 systemd[1]: Started cri-containerd-92011422039e276dd2f80948a570eee4b952ceb369c60719d7a5eea1feb70b05.scope - libcontainer container 92011422039e276dd2f80948a570eee4b952ceb369c60719d7a5eea1feb70b05. Mar 6 02:54:32.955970 containerd[1911]: time="2026-03-06T02:54:32.955723564Z" level=info msg="StartContainer for \"92011422039e276dd2f80948a570eee4b952ceb369c60719d7a5eea1feb70b05\" returns successfully" Mar 6 02:54:32.975959 containerd[1911]: time="2026-03-06T02:54:32.975863663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-hc7gx,Uid:4943ea04-b4a4-48fa-b19e-890a8cfa9910,Namespace:kube-system,Attempt:0,}" Mar 6 02:54:33.021027 containerd[1911]: time="2026-03-06T02:54:33.020946130Z" level=info msg="connecting to shim 7124b33abe5d0d2991d5d95d86101b9aa1454a79531c213a2d8b9120399d2e9d" address="unix:///run/containerd/s/e946b251de1c5458e1c4becc8eff3e6f2a66c92b7fef772a3cd3657946d77853" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:54:33.039892 systemd[1]: Started cri-containerd-7124b33abe5d0d2991d5d95d86101b9aa1454a79531c213a2d8b9120399d2e9d.scope - libcontainer container 7124b33abe5d0d2991d5d95d86101b9aa1454a79531c213a2d8b9120399d2e9d. 
Mar 6 02:54:33.076616 containerd[1911]: time="2026-03-06T02:54:33.076546141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-hc7gx,Uid:4943ea04-b4a4-48fa-b19e-890a8cfa9910,Namespace:kube-system,Attempt:0,} returns sandbox id \"7124b33abe5d0d2991d5d95d86101b9aa1454a79531c213a2d8b9120399d2e9d\"" Mar 6 02:54:36.288678 kubelet[3419]: I0306 02:54:36.288164 3419 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-lphbs" podStartSLOduration=4.288152522 podStartE2EDuration="4.288152522s" podCreationTimestamp="2026-03-06 02:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:54:33.812946369 +0000 UTC m=+7.158336020" watchObservedRunningTime="2026-03-06 02:54:36.288152522 +0000 UTC m=+9.633542165" Mar 6 02:54:38.423520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2086760289.mount: Deactivated successfully. 
Mar 6 02:54:39.828765 containerd[1911]: time="2026-03-06T02:54:39.828528990Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:54:39.832606 containerd[1911]: time="2026-03-06T02:54:39.832463639Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 6 02:54:39.836788 containerd[1911]: time="2026-03-06T02:54:39.836755468Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:54:39.846625 containerd[1911]: time="2026-03-06T02:54:39.846504060Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.010369672s" Mar 6 02:54:39.846625 containerd[1911]: time="2026-03-06T02:54:39.846544813Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 6 02:54:39.847761 containerd[1911]: time="2026-03-06T02:54:39.847672986Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 6 02:54:39.857104 containerd[1911]: time="2026-03-06T02:54:39.857062934Z" level=info msg="CreateContainer within sandbox \"6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 6 02:54:40.106533 containerd[1911]: time="2026-03-06T02:54:40.105914594Z" level=info msg="Container 261c736f789be223364e6426bc13e3438ff9d6e47659daa6b216e09460a9fff5: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:54:41.759635 containerd[1911]: time="2026-03-06T02:54:41.759524562Z" level=info msg="CreateContainer within sandbox \"6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"261c736f789be223364e6426bc13e3438ff9d6e47659daa6b216e09460a9fff5\"" Mar 6 02:54:41.761949 containerd[1911]: time="2026-03-06T02:54:41.761913450Z" level=info msg="StartContainer for \"261c736f789be223364e6426bc13e3438ff9d6e47659daa6b216e09460a9fff5\"" Mar 6 02:54:41.762620 containerd[1911]: time="2026-03-06T02:54:41.762595502Z" level=info msg="connecting to shim 261c736f789be223364e6426bc13e3438ff9d6e47659daa6b216e09460a9fff5" address="unix:///run/containerd/s/62cd125f0de05f9f60b030657cd92260e84352d206603935878b555e14bb4332" protocol=ttrpc version=3 Mar 6 02:54:41.784892 systemd[1]: Started cri-containerd-261c736f789be223364e6426bc13e3438ff9d6e47659daa6b216e09460a9fff5.scope - libcontainer container 261c736f789be223364e6426bc13e3438ff9d6e47659daa6b216e09460a9fff5. Mar 6 02:54:41.812471 containerd[1911]: time="2026-03-06T02:54:41.812430037Z" level=info msg="StartContainer for \"261c736f789be223364e6426bc13e3438ff9d6e47659daa6b216e09460a9fff5\" returns successfully" Mar 6 02:54:41.819461 systemd[1]: cri-containerd-261c736f789be223364e6426bc13e3438ff9d6e47659daa6b216e09460a9fff5.scope: Deactivated successfully. 
Mar 6 02:54:41.824800 containerd[1911]: time="2026-03-06T02:54:41.824133535Z" level=info msg="received container exit event container_id:\"261c736f789be223364e6426bc13e3438ff9d6e47659daa6b216e09460a9fff5\" id:\"261c736f789be223364e6426bc13e3438ff9d6e47659daa6b216e09460a9fff5\" pid:3834 exited_at:{seconds:1772765681 nanos:823362239}"
Mar 6 02:54:41.846231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-261c736f789be223364e6426bc13e3438ff9d6e47659daa6b216e09460a9fff5-rootfs.mount: Deactivated successfully.
Mar 6 02:54:42.838182 containerd[1911]: time="2026-03-06T02:54:42.838137450Z" level=info msg="CreateContainer within sandbox \"6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 6 02:54:42.866751 containerd[1911]: time="2026-03-06T02:54:42.866674623Z" level=info msg="Container 40ea953859307ce8a8a470c88300b84649a548c84681c0d44159793b803f5cff: CDI devices from CRI Config.CDIDevices: []"
Mar 6 02:54:42.869100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1595839489.mount: Deactivated successfully.
Mar 6 02:54:42.881140 containerd[1911]: time="2026-03-06T02:54:42.881098034Z" level=info msg="CreateContainer within sandbox \"6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"40ea953859307ce8a8a470c88300b84649a548c84681c0d44159793b803f5cff\""
Mar 6 02:54:42.883291 containerd[1911]: time="2026-03-06T02:54:42.881925683Z" level=info msg="StartContainer for \"40ea953859307ce8a8a470c88300b84649a548c84681c0d44159793b803f5cff\""
Mar 6 02:54:42.883910 containerd[1911]: time="2026-03-06T02:54:42.883876662Z" level=info msg="connecting to shim 40ea953859307ce8a8a470c88300b84649a548c84681c0d44159793b803f5cff" address="unix:///run/containerd/s/62cd125f0de05f9f60b030657cd92260e84352d206603935878b555e14bb4332" protocol=ttrpc version=3
Mar 6 02:54:42.905053 systemd[1]: Started cri-containerd-40ea953859307ce8a8a470c88300b84649a548c84681c0d44159793b803f5cff.scope - libcontainer container 40ea953859307ce8a8a470c88300b84649a548c84681c0d44159793b803f5cff.
Mar 6 02:54:42.949220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1065153549.mount: Deactivated successfully.
Mar 6 02:54:42.954642 containerd[1911]: time="2026-03-06T02:54:42.954226800Z" level=info msg="StartContainer for \"40ea953859307ce8a8a470c88300b84649a548c84681c0d44159793b803f5cff\" returns successfully"
Mar 6 02:54:42.954929 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 6 02:54:42.955091 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 6 02:54:42.955527 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 6 02:54:42.957390 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 02:54:42.961674 systemd[1]: cri-containerd-40ea953859307ce8a8a470c88300b84649a548c84681c0d44159793b803f5cff.scope: Deactivated successfully.
Mar 6 02:54:42.964120 containerd[1911]: time="2026-03-06T02:54:42.964091570Z" level=info msg="received container exit event container_id:\"40ea953859307ce8a8a470c88300b84649a548c84681c0d44159793b803f5cff\" id:\"40ea953859307ce8a8a470c88300b84649a548c84681c0d44159793b803f5cff\" pid:3878 exited_at:{seconds:1772765682 nanos:963082235}"
Mar 6 02:54:42.979072 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 02:54:43.840166 containerd[1911]: time="2026-03-06T02:54:43.840031033Z" level=info msg="CreateContainer within sandbox \"6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 6 02:54:43.866985 containerd[1911]: time="2026-03-06T02:54:43.866148764Z" level=info msg="Container dfdbaf5f38054bb51121d18ae2e2672796d9605b1036a64bb745c8b6189b9181: CDI devices from CRI Config.CDIDevices: []"
Mar 6 02:54:43.867850 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40ea953859307ce8a8a470c88300b84649a548c84681c0d44159793b803f5cff-rootfs.mount: Deactivated successfully.
Mar 6 02:54:43.887355 containerd[1911]: time="2026-03-06T02:54:43.887313955Z" level=info msg="CreateContainer within sandbox \"6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dfdbaf5f38054bb51121d18ae2e2672796d9605b1036a64bb745c8b6189b9181\""
Mar 6 02:54:43.889562 containerd[1911]: time="2026-03-06T02:54:43.889523709Z" level=info msg="StartContainer for \"dfdbaf5f38054bb51121d18ae2e2672796d9605b1036a64bb745c8b6189b9181\""
Mar 6 02:54:43.891773 containerd[1911]: time="2026-03-06T02:54:43.891691767Z" level=info msg="connecting to shim dfdbaf5f38054bb51121d18ae2e2672796d9605b1036a64bb745c8b6189b9181" address="unix:///run/containerd/s/62cd125f0de05f9f60b030657cd92260e84352d206603935878b555e14bb4332" protocol=ttrpc version=3
Mar 6 02:54:43.910883 systemd[1]: Started cri-containerd-dfdbaf5f38054bb51121d18ae2e2672796d9605b1036a64bb745c8b6189b9181.scope - libcontainer container dfdbaf5f38054bb51121d18ae2e2672796d9605b1036a64bb745c8b6189b9181.
Mar 6 02:54:43.957959 systemd[1]: cri-containerd-dfdbaf5f38054bb51121d18ae2e2672796d9605b1036a64bb745c8b6189b9181.scope: Deactivated successfully.
Mar 6 02:54:43.962561 containerd[1911]: time="2026-03-06T02:54:43.962499519Z" level=info msg="received container exit event container_id:\"dfdbaf5f38054bb51121d18ae2e2672796d9605b1036a64bb745c8b6189b9181\" id:\"dfdbaf5f38054bb51121d18ae2e2672796d9605b1036a64bb745c8b6189b9181\" pid:3936 exited_at:{seconds:1772765683 nanos:960225594}"
Mar 6 02:54:43.964636 containerd[1911]: time="2026-03-06T02:54:43.964607934Z" level=info msg="StartContainer for \"dfdbaf5f38054bb51121d18ae2e2672796d9605b1036a64bb745c8b6189b9181\" returns successfully"
Mar 6 02:54:43.973193 containerd[1911]: time="2026-03-06T02:54:43.973151456Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:54:43.976799 containerd[1911]: time="2026-03-06T02:54:43.976699051Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Mar 6 02:54:43.980092 containerd[1911]: time="2026-03-06T02:54:43.980059144Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:54:43.981632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfdbaf5f38054bb51121d18ae2e2672796d9605b1036a64bb745c8b6189b9181-rootfs.mount: Deactivated successfully.
Mar 6 02:54:43.984068 containerd[1911]: time="2026-03-06T02:54:43.983663725Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.135952738s"
Mar 6 02:54:43.984068 containerd[1911]: time="2026-03-06T02:54:43.983691414Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Mar 6 02:54:44.269628 containerd[1911]: time="2026-03-06T02:54:44.268190108Z" level=info msg="CreateContainer within sandbox \"7124b33abe5d0d2991d5d95d86101b9aa1454a79531c213a2d8b9120399d2e9d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 6 02:54:44.291383 containerd[1911]: time="2026-03-06T02:54:44.291331838Z" level=info msg="Container e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243: CDI devices from CRI Config.CDIDevices: []"
Mar 6 02:54:44.306199 containerd[1911]: time="2026-03-06T02:54:44.306150445Z" level=info msg="CreateContainer within sandbox \"7124b33abe5d0d2991d5d95d86101b9aa1454a79531c213a2d8b9120399d2e9d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243\""
Mar 6 02:54:44.306962 containerd[1911]: time="2026-03-06T02:54:44.306937557Z" level=info msg="StartContainer for \"e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243\""
Mar 6 02:54:44.308627 containerd[1911]: time="2026-03-06T02:54:44.308596775Z" level=info msg="connecting to shim e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243" address="unix:///run/containerd/s/e946b251de1c5458e1c4becc8eff3e6f2a66c92b7fef772a3cd3657946d77853" protocol=ttrpc version=3
Mar 6 02:54:44.323042 systemd[1]: Started cri-containerd-e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243.scope - libcontainer container e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243.
Mar 6 02:54:44.353501 containerd[1911]: time="2026-03-06T02:54:44.353460880Z" level=info msg="StartContainer for \"e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243\" returns successfully"
Mar 6 02:54:44.850460 containerd[1911]: time="2026-03-06T02:54:44.850038524Z" level=info msg="CreateContainer within sandbox \"6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 6 02:54:44.881594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount855520840.mount: Deactivated successfully.
Mar 6 02:54:44.885766 containerd[1911]: time="2026-03-06T02:54:44.884012164Z" level=info msg="Container b6b0aec96d3b78e84d1b364ad5855c98c1a05a3d465c360fe0f4cee8fda13c94: CDI devices from CRI Config.CDIDevices: []"
Mar 6 02:54:44.897852 kubelet[3419]: I0306 02:54:44.897772 3419 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-hc7gx" podStartSLOduration=1.989720508 podStartE2EDuration="12.896653306s" podCreationTimestamp="2026-03-06 02:54:32 +0000 UTC" firstStartedPulling="2026-03-06 02:54:33.07841177 +0000 UTC m=+6.423801413" lastFinishedPulling="2026-03-06 02:54:43.98534456 +0000 UTC m=+17.330734211" observedRunningTime="2026-03-06 02:54:44.873668701 +0000 UTC m=+18.219058344" watchObservedRunningTime="2026-03-06 02:54:44.896653306 +0000 UTC m=+18.242042949"
Mar 6 02:54:44.914616 containerd[1911]: time="2026-03-06T02:54:44.914537581Z" level=info msg="CreateContainer within sandbox \"6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b6b0aec96d3b78e84d1b364ad5855c98c1a05a3d465c360fe0f4cee8fda13c94\""
Mar 6 02:54:44.915358 containerd[1911]: time="2026-03-06T02:54:44.915285476Z" level=info msg="StartContainer for \"b6b0aec96d3b78e84d1b364ad5855c98c1a05a3d465c360fe0f4cee8fda13c94\""
Mar 6 02:54:44.916960 containerd[1911]: time="2026-03-06T02:54:44.916928813Z" level=info msg="connecting to shim b6b0aec96d3b78e84d1b364ad5855c98c1a05a3d465c360fe0f4cee8fda13c94" address="unix:///run/containerd/s/62cd125f0de05f9f60b030657cd92260e84352d206603935878b555e14bb4332" protocol=ttrpc version=3
Mar 6 02:54:44.941922 systemd[1]: Started cri-containerd-b6b0aec96d3b78e84d1b364ad5855c98c1a05a3d465c360fe0f4cee8fda13c94.scope - libcontainer container b6b0aec96d3b78e84d1b364ad5855c98c1a05a3d465c360fe0f4cee8fda13c94.
Mar 6 02:54:45.004069 systemd[1]: cri-containerd-b6b0aec96d3b78e84d1b364ad5855c98c1a05a3d465c360fe0f4cee8fda13c94.scope: Deactivated successfully.
Mar 6 02:54:45.006190 containerd[1911]: time="2026-03-06T02:54:45.006088599Z" level=info msg="received container exit event container_id:\"b6b0aec96d3b78e84d1b364ad5855c98c1a05a3d465c360fe0f4cee8fda13c94\" id:\"b6b0aec96d3b78e84d1b364ad5855c98c1a05a3d465c360fe0f4cee8fda13c94\" pid:4013 exited_at:{seconds:1772765685 nanos:5422011}"
Mar 6 02:54:45.007455 containerd[1911]: time="2026-03-06T02:54:45.007323500Z" level=info msg="StartContainer for \"b6b0aec96d3b78e84d1b364ad5855c98c1a05a3d465c360fe0f4cee8fda13c94\" returns successfully"
Mar 6 02:54:45.030096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6b0aec96d3b78e84d1b364ad5855c98c1a05a3d465c360fe0f4cee8fda13c94-rootfs.mount: Deactivated successfully.
Mar 6 02:54:45.867728 containerd[1911]: time="2026-03-06T02:54:45.867270425Z" level=info msg="CreateContainer within sandbox \"6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 6 02:54:45.891950 containerd[1911]: time="2026-03-06T02:54:45.891903520Z" level=info msg="Container dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f: CDI devices from CRI Config.CDIDevices: []"
Mar 6 02:54:45.893418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4236611579.mount: Deactivated successfully.
Mar 6 02:54:45.910047 containerd[1911]: time="2026-03-06T02:54:45.910001746Z" level=info msg="CreateContainer within sandbox \"6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f\""
Mar 6 02:54:45.910888 containerd[1911]: time="2026-03-06T02:54:45.910860380Z" level=info msg="StartContainer for \"dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f\""
Mar 6 02:54:45.913248 containerd[1911]: time="2026-03-06T02:54:45.913200091Z" level=info msg="connecting to shim dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f" address="unix:///run/containerd/s/62cd125f0de05f9f60b030657cd92260e84352d206603935878b555e14bb4332" protocol=ttrpc version=3
Mar 6 02:54:45.931903 systemd[1]: Started cri-containerd-dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f.scope - libcontainer container dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f.
Mar 6 02:54:45.973940 containerd[1911]: time="2026-03-06T02:54:45.973899858Z" level=info msg="StartContainer for \"dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f\" returns successfully"
Mar 6 02:54:46.120430 kubelet[3419]: I0306 02:54:46.120274 3419 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Mar 6 02:54:46.167971 systemd[1]: Created slice kubepods-burstable-pod1c6fd582_fc33_40ad_9e77_c95c7e9fd081.slice - libcontainer container kubepods-burstable-pod1c6fd582_fc33_40ad_9e77_c95c7e9fd081.slice.
Mar 6 02:54:46.176487 systemd[1]: Created slice kubepods-burstable-podf21b2176_a61e_40a4_bf48_7a42815aa077.slice - libcontainer container kubepods-burstable-podf21b2176_a61e_40a4_bf48_7a42815aa077.slice.
Mar 6 02:54:46.267662 kubelet[3419]: I0306 02:54:46.267318 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cl6t\" (UniqueName: \"kubernetes.io/projected/f21b2176-a61e-40a4-bf48-7a42815aa077-kube-api-access-5cl6t\") pod \"coredns-7d764666f9-mhntw\" (UID: \"f21b2176-a61e-40a4-bf48-7a42815aa077\") " pod="kube-system/coredns-7d764666f9-mhntw"
Mar 6 02:54:46.267662 kubelet[3419]: I0306 02:54:46.267614 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f21b2176-a61e-40a4-bf48-7a42815aa077-config-volume\") pod \"coredns-7d764666f9-mhntw\" (UID: \"f21b2176-a61e-40a4-bf48-7a42815aa077\") " pod="kube-system/coredns-7d764666f9-mhntw"
Mar 6 02:54:46.268010 kubelet[3419]: I0306 02:54:46.267644 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c6fd582-fc33-40ad-9e77-c95c7e9fd081-config-volume\") pod \"coredns-7d764666f9-f59c6\" (UID: \"1c6fd582-fc33-40ad-9e77-c95c7e9fd081\") " pod="kube-system/coredns-7d764666f9-f59c6"
Mar 6 02:54:46.268010 kubelet[3419]: I0306 02:54:46.267911 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl8fk\" (UniqueName: \"kubernetes.io/projected/1c6fd582-fc33-40ad-9e77-c95c7e9fd081-kube-api-access-kl8fk\") pod \"coredns-7d764666f9-f59c6\" (UID: \"1c6fd582-fc33-40ad-9e77-c95c7e9fd081\") " pod="kube-system/coredns-7d764666f9-f59c6"
Mar 6 02:54:46.479324 containerd[1911]: time="2026-03-06T02:54:46.479205965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-f59c6,Uid:1c6fd582-fc33-40ad-9e77-c95c7e9fd081,Namespace:kube-system,Attempt:0,}"
Mar 6 02:54:46.490284 containerd[1911]: time="2026-03-06T02:54:46.490239649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-mhntw,Uid:f21b2176-a61e-40a4-bf48-7a42815aa077,Namespace:kube-system,Attempt:0,}"
Mar 6 02:54:46.875206 kubelet[3419]: I0306 02:54:46.875143 3419 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-kc9kl" podStartSLOduration=1.863348422 podStartE2EDuration="14.875132948s" podCreationTimestamp="2026-03-06 02:54:32 +0000 UTC" firstStartedPulling="2026-03-06 02:54:32.83575644 +0000 UTC m=+6.181146083" lastFinishedPulling="2026-03-06 02:54:45.847540966 +0000 UTC m=+19.192930609" observedRunningTime="2026-03-06 02:54:46.874789722 +0000 UTC m=+20.220179413" watchObservedRunningTime="2026-03-06 02:54:46.875132948 +0000 UTC m=+20.220522599"
Mar 6 02:54:47.995793 systemd-networkd[1493]: cilium_host: Link UP
Mar 6 02:54:47.996492 systemd-networkd[1493]: cilium_net: Link UP
Mar 6 02:54:47.997370 systemd-networkd[1493]: cilium_net: Gained carrier
Mar 6 02:54:47.998328 systemd-networkd[1493]: cilium_host: Gained carrier
Mar 6 02:54:48.115846 systemd-networkd[1493]: cilium_net: Gained IPv6LL
Mar 6 02:54:48.128388 systemd-networkd[1493]: cilium_vxlan: Link UP
Mar 6 02:54:48.128888 systemd-networkd[1493]: cilium_vxlan: Gained carrier
Mar 6 02:54:48.465821 kernel: NET: Registered PF_ALG protocol family
Mar 6 02:54:48.659975 systemd-networkd[1493]: cilium_host: Gained IPv6LL
Mar 6 02:54:49.005725 systemd-networkd[1493]: lxc_health: Link UP
Mar 6 02:54:49.012240 systemd-networkd[1493]: lxc_health: Gained carrier
Mar 6 02:54:49.528503 kernel: eth0: renamed from tmpd1efc
Mar 6 02:54:49.528143 systemd-networkd[1493]: lxc3f30a6bfdc22: Link UP
Mar 6 02:54:49.530241 systemd-networkd[1493]: lxc3f30a6bfdc22: Gained carrier
Mar 6 02:54:49.544047 systemd-networkd[1493]: lxc509b50c6c13a: Link UP
Mar 6 02:54:49.558501 kernel: eth0: renamed from tmpe26ce
Mar 6 02:54:49.562302 systemd-networkd[1493]: lxc509b50c6c13a: Gained carrier
Mar 6 02:54:50.003896 systemd-networkd[1493]: cilium_vxlan: Gained IPv6LL
Mar 6 02:54:50.579997 systemd-networkd[1493]: lxc509b50c6c13a: Gained IPv6LL
Mar 6 02:54:50.644019 systemd-networkd[1493]: lxc3f30a6bfdc22: Gained IPv6LL
Mar 6 02:54:50.835903 systemd-networkd[1493]: lxc_health: Gained IPv6LL
Mar 6 02:54:52.203923 containerd[1911]: time="2026-03-06T02:54:52.201909663Z" level=info msg="connecting to shim e26ce7072cd6528b9d818823c5507da05804e2a43091460ca55d158e26e180ac" address="unix:///run/containerd/s/b909cf0120ebae057bced4333a0698667937147e5660e03889da8bda6f6d9422" namespace=k8s.io protocol=ttrpc version=3
Mar 6 02:54:52.228652 containerd[1911]: time="2026-03-06T02:54:52.228543874Z" level=info msg="connecting to shim d1efc0515ca4ef49393a9c2871944f8dfee0b372de5abcb971b6baf1728562a1" address="unix:///run/containerd/s/350e653cc92f7ea6b94023dc91e9de027b5fed464980d986dd80103ab43499ca" namespace=k8s.io protocol=ttrpc version=3
Mar 6 02:54:52.231873 systemd[1]: Started cri-containerd-e26ce7072cd6528b9d818823c5507da05804e2a43091460ca55d158e26e180ac.scope - libcontainer container e26ce7072cd6528b9d818823c5507da05804e2a43091460ca55d158e26e180ac.
Mar 6 02:54:52.256073 systemd[1]: Started cri-containerd-d1efc0515ca4ef49393a9c2871944f8dfee0b372de5abcb971b6baf1728562a1.scope - libcontainer container d1efc0515ca4ef49393a9c2871944f8dfee0b372de5abcb971b6baf1728562a1.
Mar 6 02:54:52.274649 containerd[1911]: time="2026-03-06T02:54:52.274580997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-mhntw,Uid:f21b2176-a61e-40a4-bf48-7a42815aa077,Namespace:kube-system,Attempt:0,} returns sandbox id \"e26ce7072cd6528b9d818823c5507da05804e2a43091460ca55d158e26e180ac\""
Mar 6 02:54:52.285431 containerd[1911]: time="2026-03-06T02:54:52.285389360Z" level=info msg="CreateContainer within sandbox \"e26ce7072cd6528b9d818823c5507da05804e2a43091460ca55d158e26e180ac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 6 02:54:52.304965 containerd[1911]: time="2026-03-06T02:54:52.304913972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-f59c6,Uid:1c6fd582-fc33-40ad-9e77-c95c7e9fd081,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1efc0515ca4ef49393a9c2871944f8dfee0b372de5abcb971b6baf1728562a1\""
Mar 6 02:54:52.315976 containerd[1911]: time="2026-03-06T02:54:52.315720223Z" level=info msg="CreateContainer within sandbox \"d1efc0515ca4ef49393a9c2871944f8dfee0b372de5abcb971b6baf1728562a1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 6 02:54:52.316535 containerd[1911]: time="2026-03-06T02:54:52.315854515Z" level=info msg="Container 539ff65e74e2c7e137c3e02568eb4ae15ce74067c32870815896bdff72e16c69: CDI devices from CRI Config.CDIDevices: []"
Mar 6 02:54:52.334230 containerd[1911]: time="2026-03-06T02:54:52.334182186Z" level=info msg="CreateContainer within sandbox \"e26ce7072cd6528b9d818823c5507da05804e2a43091460ca55d158e26e180ac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"539ff65e74e2c7e137c3e02568eb4ae15ce74067c32870815896bdff72e16c69\""
Mar 6 02:54:52.335116 containerd[1911]: time="2026-03-06T02:54:52.334870816Z" level=info msg="StartContainer for \"539ff65e74e2c7e137c3e02568eb4ae15ce74067c32870815896bdff72e16c69\""
Mar 6 02:54:52.337145 containerd[1911]: time="2026-03-06T02:54:52.336793700Z" level=info msg="connecting to shim 539ff65e74e2c7e137c3e02568eb4ae15ce74067c32870815896bdff72e16c69" address="unix:///run/containerd/s/b909cf0120ebae057bced4333a0698667937147e5660e03889da8bda6f6d9422" protocol=ttrpc version=3
Mar 6 02:54:52.351393 containerd[1911]: time="2026-03-06T02:54:52.351351453Z" level=info msg="Container cb511f8d5f4b66ebd449b38d051af8d44cf4b7e3b69a8e5117a575dea0d8ff6a: CDI devices from CRI Config.CDIDevices: []"
Mar 6 02:54:52.355918 systemd[1]: Started cri-containerd-539ff65e74e2c7e137c3e02568eb4ae15ce74067c32870815896bdff72e16c69.scope - libcontainer container 539ff65e74e2c7e137c3e02568eb4ae15ce74067c32870815896bdff72e16c69.
Mar 6 02:54:52.368219 containerd[1911]: time="2026-03-06T02:54:52.368147835Z" level=info msg="CreateContainer within sandbox \"d1efc0515ca4ef49393a9c2871944f8dfee0b372de5abcb971b6baf1728562a1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cb511f8d5f4b66ebd449b38d051af8d44cf4b7e3b69a8e5117a575dea0d8ff6a\""
Mar 6 02:54:52.370227 containerd[1911]: time="2026-03-06T02:54:52.369467148Z" level=info msg="StartContainer for \"cb511f8d5f4b66ebd449b38d051af8d44cf4b7e3b69a8e5117a575dea0d8ff6a\""
Mar 6 02:54:52.372146 containerd[1911]: time="2026-03-06T02:54:52.372111639Z" level=info msg="connecting to shim cb511f8d5f4b66ebd449b38d051af8d44cf4b7e3b69a8e5117a575dea0d8ff6a" address="unix:///run/containerd/s/350e653cc92f7ea6b94023dc91e9de027b5fed464980d986dd80103ab43499ca" protocol=ttrpc version=3
Mar 6 02:54:52.391078 systemd[1]: Started cri-containerd-cb511f8d5f4b66ebd449b38d051af8d44cf4b7e3b69a8e5117a575dea0d8ff6a.scope - libcontainer container cb511f8d5f4b66ebd449b38d051af8d44cf4b7e3b69a8e5117a575dea0d8ff6a.
Mar 6 02:54:52.392461 containerd[1911]: time="2026-03-06T02:54:52.392224038Z" level=info msg="StartContainer for \"539ff65e74e2c7e137c3e02568eb4ae15ce74067c32870815896bdff72e16c69\" returns successfully"
Mar 6 02:54:52.432944 containerd[1911]: time="2026-03-06T02:54:52.432848160Z" level=info msg="StartContainer for \"cb511f8d5f4b66ebd449b38d051af8d44cf4b7e3b69a8e5117a575dea0d8ff6a\" returns successfully"
Mar 6 02:54:52.908860 kubelet[3419]: I0306 02:54:52.908227 3419 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-mhntw" podStartSLOduration=20.908215505 podStartE2EDuration="20.908215505s" podCreationTimestamp="2026-03-06 02:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:54:52.887478759 +0000 UTC m=+26.232868466" watchObservedRunningTime="2026-03-06 02:54:52.908215505 +0000 UTC m=+26.253605148"
Mar 6 02:54:52.932540 kubelet[3419]: I0306 02:54:52.932469 3419 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-f59c6" podStartSLOduration=20.932271644 podStartE2EDuration="20.932271644s" podCreationTimestamp="2026-03-06 02:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:54:52.909815492 +0000 UTC m=+26.255205199" watchObservedRunningTime="2026-03-06 02:54:52.932271644 +0000 UTC m=+26.277661295"
Mar 6 02:54:56.544356 kubelet[3419]: I0306 02:54:56.544215 3419 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 6 02:55:56.346678 systemd[1]: Started sshd@7-10.200.20.16:22-10.200.16.10:42722.service - OpenSSH per-connection server daemon (10.200.16.10:42722).
Mar 6 02:55:56.788601 sshd[4739]: Accepted publickey for core from 10.200.16.10 port 42722 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4
Mar 6 02:55:56.789924 sshd-session[4739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:55:56.796490 systemd-logind[1879]: New session 10 of user core.
Mar 6 02:55:56.801071 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 6 02:55:57.086311 sshd[4742]: Connection closed by 10.200.16.10 port 42722
Mar 6 02:55:57.087748 sshd-session[4739]: pam_unix(sshd:session): session closed for user core
Mar 6 02:55:57.091008 systemd-logind[1879]: Session 10 logged out. Waiting for processes to exit.
Mar 6 02:55:57.091341 systemd[1]: sshd@7-10.200.20.16:22-10.200.16.10:42722.service: Deactivated successfully.
Mar 6 02:55:57.093553 systemd[1]: session-10.scope: Deactivated successfully.
Mar 6 02:55:57.095516 systemd-logind[1879]: Removed session 10.
Mar 6 02:56:02.157328 systemd[1]: Started sshd@8-10.200.20.16:22-10.200.16.10:53472.service - OpenSSH per-connection server daemon (10.200.16.10:53472).
Mar 6 02:56:02.507648 sshd[4754]: Accepted publickey for core from 10.200.16.10 port 53472 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4
Mar 6 02:56:02.508554 sshd-session[4754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:56:02.512197 systemd-logind[1879]: New session 11 of user core.
Mar 6 02:56:02.519867 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 6 02:56:02.739328 sshd[4757]: Connection closed by 10.200.16.10 port 53472
Mar 6 02:56:02.739224 sshd-session[4754]: pam_unix(sshd:session): session closed for user core
Mar 6 02:56:02.743404 systemd[1]: sshd@8-10.200.20.16:22-10.200.16.10:53472.service: Deactivated successfully.
Mar 6 02:56:02.745392 systemd[1]: session-11.scope: Deactivated successfully.
Mar 6 02:56:02.747288 systemd-logind[1879]: Session 11 logged out. Waiting for processes to exit.
Mar 6 02:56:02.748464 systemd-logind[1879]: Removed session 11.
Mar 6 02:56:07.819935 systemd[1]: Started sshd@9-10.200.20.16:22-10.200.16.10:53482.service - OpenSSH per-connection server daemon (10.200.16.10:53482).
Mar 6 02:56:08.177565 sshd[4771]: Accepted publickey for core from 10.200.16.10 port 53482 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4
Mar 6 02:56:08.179115 sshd-session[4771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:56:08.182943 systemd-logind[1879]: New session 12 of user core.
Mar 6 02:56:08.187853 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 6 02:56:08.414946 sshd[4774]: Connection closed by 10.200.16.10 port 53482
Mar 6 02:56:08.414855 sshd-session[4771]: pam_unix(sshd:session): session closed for user core
Mar 6 02:56:08.419393 systemd-logind[1879]: Session 12 logged out. Waiting for processes to exit.
Mar 6 02:56:08.419931 systemd[1]: sshd@9-10.200.20.16:22-10.200.16.10:53482.service: Deactivated successfully.
Mar 6 02:56:08.422330 systemd[1]: session-12.scope: Deactivated successfully.
Mar 6 02:56:08.425482 systemd-logind[1879]: Removed session 12.
Mar 6 02:56:13.503751 systemd[1]: Started sshd@10-10.200.20.16:22-10.200.16.10:52902.service - OpenSSH per-connection server daemon (10.200.16.10:52902).
Mar 6 02:56:13.873809 sshd[4787]: Accepted publickey for core from 10.200.16.10 port 52902 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4
Mar 6 02:56:13.874862 sshd-session[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:56:13.878646 systemd-logind[1879]: New session 13 of user core.
Mar 6 02:56:13.893056 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 6 02:56:14.121554 sshd[4790]: Connection closed by 10.200.16.10 port 52902
Mar 6 02:56:14.122165 sshd-session[4787]: pam_unix(sshd:session): session closed for user core
Mar 6 02:56:14.126082 systemd[1]: sshd@10-10.200.20.16:22-10.200.16.10:52902.service: Deactivated successfully.
Mar 6 02:56:14.127858 systemd[1]: session-13.scope: Deactivated successfully.
Mar 6 02:56:14.129194 systemd-logind[1879]: Session 13 logged out. Waiting for processes to exit.
Mar 6 02:56:14.130504 systemd-logind[1879]: Removed session 13.
Mar 6 02:56:14.201066 systemd[1]: Started sshd@11-10.200.20.16:22-10.200.16.10:52914.service - OpenSSH per-connection server daemon (10.200.16.10:52914).
Mar 6 02:56:14.552789 sshd[4803]: Accepted publickey for core from 10.200.16.10 port 52914 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4
Mar 6 02:56:14.554062 sshd-session[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:56:14.558049 systemd-logind[1879]: New session 14 of user core.
Mar 6 02:56:14.561865 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 6 02:56:14.815608 sshd[4806]: Connection closed by 10.200.16.10 port 52914
Mar 6 02:56:14.815647 sshd-session[4803]: pam_unix(sshd:session): session closed for user core
Mar 6 02:56:14.819278 systemd[1]: sshd@11-10.200.20.16:22-10.200.16.10:52914.service: Deactivated successfully.
Mar 6 02:56:14.821279 systemd[1]: session-14.scope: Deactivated successfully.
Mar 6 02:56:14.822310 systemd-logind[1879]: Session 14 logged out. Waiting for processes to exit.
Mar 6 02:56:14.823488 systemd-logind[1879]: Removed session 14.
Mar 6 02:56:14.895278 systemd[1]: Started sshd@12-10.200.20.16:22-10.200.16.10:52920.service - OpenSSH per-connection server daemon (10.200.16.10:52920).
Mar 6 02:56:15.266149 sshd[4815]: Accepted publickey for core from 10.200.16.10 port 52920 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4
Mar 6 02:56:15.268257 sshd-session[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:56:15.272002 systemd-logind[1879]: New session 15 of user core.
Mar 6 02:56:15.279976 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 6 02:56:15.510550 sshd[4818]: Connection closed by 10.200.16.10 port 52920
Mar 6 02:56:15.511132 sshd-session[4815]: pam_unix(sshd:session): session closed for user core
Mar 6 02:56:15.514542 systemd-logind[1879]: Session 15 logged out. Waiting for processes to exit.
Mar 6 02:56:15.514774 systemd[1]: sshd@12-10.200.20.16:22-10.200.16.10:52920.service: Deactivated successfully.
Mar 6 02:56:15.516490 systemd[1]: session-15.scope: Deactivated successfully.
Mar 6 02:56:15.518435 systemd-logind[1879]: Removed session 15.
Mar 6 02:56:20.605577 systemd[1]: Started sshd@13-10.200.20.16:22-10.200.16.10:47482.service - OpenSSH per-connection server daemon (10.200.16.10:47482).
Mar 6 02:56:21.007828 sshd[4830]: Accepted publickey for core from 10.200.16.10 port 47482 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4
Mar 6 02:56:21.008703 sshd-session[4830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:56:21.013122 systemd-logind[1879]: New session 16 of user core.
Mar 6 02:56:21.019863 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 6 02:56:21.268933 sshd[4833]: Connection closed by 10.200.16.10 port 47482
Mar 6 02:56:21.269022 sshd-session[4830]: pam_unix(sshd:session): session closed for user core
Mar 6 02:56:21.273482 systemd-logind[1879]: Session 16 logged out. Waiting for processes to exit.
Mar 6 02:56:21.274089 systemd[1]: sshd@13-10.200.20.16:22-10.200.16.10:47482.service: Deactivated successfully.
Mar 6 02:56:21.275609 systemd[1]: session-16.scope: Deactivated successfully.
Mar 6 02:56:21.277265 systemd-logind[1879]: Removed session 16.
Mar 6 02:56:21.339836 systemd[1]: Started sshd@14-10.200.20.16:22-10.200.16.10:47492.service - OpenSSH per-connection server daemon (10.200.16.10:47492).
Mar 6 02:56:21.707366 sshd[4845]: Accepted publickey for core from 10.200.16.10 port 47492 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4
Mar 6 02:56:21.708499 sshd-session[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:56:21.713544 systemd-logind[1879]: New session 17 of user core.
Mar 6 02:56:21.718856 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 6 02:56:21.978488 sshd[4848]: Connection closed by 10.200.16.10 port 47492
Mar 6 02:56:21.979147 sshd-session[4845]: pam_unix(sshd:session): session closed for user core
Mar 6 02:56:21.982471 systemd[1]: sshd@14-10.200.20.16:22-10.200.16.10:47492.service: Deactivated successfully.
Mar 6 02:56:21.984092 systemd[1]: session-17.scope: Deactivated successfully.
Mar 6 02:56:21.985481 systemd-logind[1879]: Session 17 logged out. Waiting for processes to exit.
Mar 6 02:56:21.986957 systemd-logind[1879]: Removed session 17.
Mar 6 02:56:22.074703 systemd[1]: Started sshd@15-10.200.20.16:22-10.200.16.10:47500.service - OpenSSH per-connection server daemon (10.200.16.10:47500).
Mar 6 02:56:22.472485 sshd[4858]: Accepted publickey for core from 10.200.16.10 port 47500 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4
Mar 6 02:56:22.473229 sshd-session[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:56:22.476648 systemd-logind[1879]: New session 18 of user core.
Mar 6 02:56:22.484862 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 6 02:56:23.021539 sshd[4862]: Connection closed by 10.200.16.10 port 47500
Mar 6 02:56:23.021450 sshd-session[4858]: pam_unix(sshd:session): session closed for user core
Mar 6 02:56:23.024946 systemd[1]: sshd@15-10.200.20.16:22-10.200.16.10:47500.service: Deactivated successfully.
Mar 6 02:56:23.026627 systemd[1]: session-18.scope: Deactivated successfully.
Mar 6 02:56:23.028881 systemd-logind[1879]: Session 18 logged out. Waiting for processes to exit.
Mar 6 02:56:23.030079 systemd-logind[1879]: Removed session 18.
Mar 6 02:56:23.112918 systemd[1]: Started sshd@16-10.200.20.16:22-10.200.16.10:47502.service - OpenSSH per-connection server daemon (10.200.16.10:47502).
Mar 6 02:56:23.507282 sshd[4877]: Accepted publickey for core from 10.200.16.10 port 47502 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4
Mar 6 02:56:23.508457 sshd-session[4877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:56:23.511953 systemd-logind[1879]: New session 19 of user core.
Mar 6 02:56:23.523881 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 6 02:56:23.844668 sshd[4880]: Connection closed by 10.200.16.10 port 47502
Mar 6 02:56:23.845030 sshd-session[4877]: pam_unix(sshd:session): session closed for user core
Mar 6 02:56:23.848642 systemd[1]: sshd@16-10.200.20.16:22-10.200.16.10:47502.service: Deactivated successfully.
Mar 6 02:56:23.852237 systemd[1]: session-19.scope: Deactivated successfully.
Mar 6 02:56:23.853163 systemd-logind[1879]: Session 19 logged out. Waiting for processes to exit.
Mar 6 02:56:23.854405 systemd-logind[1879]: Removed session 19.
Mar 6 02:56:23.936301 systemd[1]: Started sshd@17-10.200.20.16:22-10.200.16.10:47516.service - OpenSSH per-connection server daemon (10.200.16.10:47516).
Mar 6 02:56:24.344959 sshd[4890]: Accepted publickey for core from 10.200.16.10 port 47516 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4
Mar 6 02:56:24.346093 sshd-session[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:56:24.349917 systemd-logind[1879]: New session 20 of user core.
Mar 6 02:56:24.357861 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 6 02:56:24.613100 sshd[4895]: Connection closed by 10.200.16.10 port 47516
Mar 6 02:56:24.613619 sshd-session[4890]: pam_unix(sshd:session): session closed for user core
Mar 6 02:56:24.616961 systemd[1]: sshd@17-10.200.20.16:22-10.200.16.10:47516.service: Deactivated successfully.
Mar 6 02:56:24.619236 systemd[1]: session-20.scope: Deactivated successfully.
Mar 6 02:56:24.620468 systemd-logind[1879]: Session 20 logged out. Waiting for processes to exit.
Mar 6 02:56:24.622060 systemd-logind[1879]: Removed session 20.
Mar 6 02:56:29.699188 systemd[1]: Started sshd@18-10.200.20.16:22-10.200.16.10:47526.service - OpenSSH per-connection server daemon (10.200.16.10:47526).
Mar 6 02:56:30.088826 sshd[4910]: Accepted publickey for core from 10.200.16.10 port 47526 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4
Mar 6 02:56:30.089932 sshd-session[4910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:56:30.093418 systemd-logind[1879]: New session 21 of user core.
Mar 6 02:56:30.099872 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 6 02:56:30.344879 sshd[4913]: Connection closed by 10.200.16.10 port 47526
Mar 6 02:56:30.345524 sshd-session[4910]: pam_unix(sshd:session): session closed for user core
Mar 6 02:56:30.348398 systemd[1]: sshd@18-10.200.20.16:22-10.200.16.10:47526.service: Deactivated successfully.
Mar 6 02:56:30.350338 systemd[1]: session-21.scope: Deactivated successfully.
Mar 6 02:56:30.351441 systemd-logind[1879]: Session 21 logged out. Waiting for processes to exit.
Mar 6 02:56:30.352845 systemd-logind[1879]: Removed session 21.
Mar 6 02:56:35.420207 systemd[1]: Started sshd@19-10.200.20.16:22-10.200.16.10:47210.service - OpenSSH per-connection server daemon (10.200.16.10:47210).
Mar 6 02:56:35.790489 sshd[4928]: Accepted publickey for core from 10.200.16.10 port 47210 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4
Mar 6 02:56:35.791714 sshd-session[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:56:35.797187 systemd-logind[1879]: New session 22 of user core.
Mar 6 02:56:35.802878 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 6 02:56:36.033951 sshd[4931]: Connection closed by 10.200.16.10 port 47210
Mar 6 02:56:36.034324 sshd-session[4928]: pam_unix(sshd:session): session closed for user core
Mar 6 02:56:36.038554 systemd-logind[1879]: Session 22 logged out. Waiting for processes to exit.
Mar 6 02:56:36.039324 systemd[1]: sshd@19-10.200.20.16:22-10.200.16.10:47210.service: Deactivated successfully.
Mar 6 02:56:36.042035 systemd[1]: session-22.scope: Deactivated successfully.
Mar 6 02:56:36.043591 systemd-logind[1879]: Removed session 22.
Mar 6 02:56:36.132794 systemd[1]: Started sshd@20-10.200.20.16:22-10.200.16.10:47214.service - OpenSSH per-connection server daemon (10.200.16.10:47214).
Mar 6 02:56:36.543647 sshd[4942]: Accepted publickey for core from 10.200.16.10 port 47214 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4
Mar 6 02:56:36.544401 sshd-session[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:56:36.548254 systemd-logind[1879]: New session 23 of user core.
Mar 6 02:56:36.551865 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 6 02:56:37.986132 containerd[1911]: time="2026-03-06T02:56:37.986046353Z" level=info msg="StopContainer for \"e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243\" with timeout 30 (s)"
Mar 6 02:56:37.987191 containerd[1911]: time="2026-03-06T02:56:37.987007948Z" level=info msg="Stop container \"e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243\" with signal terminated"
Mar 6 02:56:37.994092 containerd[1911]: time="2026-03-06T02:56:37.993966827Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 6 02:56:38.004789 systemd[1]: cri-containerd-e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243.scope: Deactivated successfully.
Mar 6 02:56:38.007607 containerd[1911]: time="2026-03-06T02:56:38.007420726Z" level=info msg="received container exit event container_id:\"e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243\" id:\"e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243\" pid:3980 exited_at:{seconds:1772765798 nanos:6853574}"
Mar 6 02:56:38.010448 containerd[1911]: time="2026-03-06T02:56:38.010175114Z" level=info msg="StopContainer for \"dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f\" with timeout 2 (s)"
Mar 6 02:56:38.011141 containerd[1911]: time="2026-03-06T02:56:38.011103291Z" level=info msg="Stop container \"dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f\" with signal terminated"
Mar 6 02:56:38.024865 systemd-networkd[1493]: lxc_health: Link DOWN
Mar 6 02:56:38.024871 systemd-networkd[1493]: lxc_health: Lost carrier
Mar 6 02:56:38.038805 systemd[1]: cri-containerd-dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f.scope: Deactivated successfully.
Mar 6 02:56:38.040930 systemd[1]: cri-containerd-dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f.scope: Consumed 4.580s CPU time, 122.3M memory peak, 128K read from disk, 12.9M written to disk.
Mar 6 02:56:38.041622 containerd[1911]: time="2026-03-06T02:56:38.041542946Z" level=info msg="received container exit event container_id:\"dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f\" id:\"dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f\" pid:4049 exited_at:{seconds:1772765798 nanos:40680666}"
Mar 6 02:56:38.045086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243-rootfs.mount: Deactivated successfully.
Mar 6 02:56:38.061981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f-rootfs.mount: Deactivated successfully.
Mar 6 02:56:38.089313 containerd[1911]: time="2026-03-06T02:56:38.089228547Z" level=info msg="StopContainer for \"e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243\" returns successfully"
Mar 6 02:56:38.090347 containerd[1911]: time="2026-03-06T02:56:38.090303888Z" level=info msg="StopPodSandbox for \"7124b33abe5d0d2991d5d95d86101b9aa1454a79531c213a2d8b9120399d2e9d\""
Mar 6 02:56:38.090432 containerd[1911]: time="2026-03-06T02:56:38.090366290Z" level=info msg="Container to stop \"e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 02:56:38.092028 containerd[1911]: time="2026-03-06T02:56:38.091998463Z" level=info msg="StopContainer for \"dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f\" returns successfully"
Mar 6 02:56:38.092495 containerd[1911]: time="2026-03-06T02:56:38.092440331Z" level=info msg="StopPodSandbox for \"6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11\""
Mar 6 02:56:38.092495 containerd[1911]: time="2026-03-06T02:56:38.092484052Z" level=info msg="Container to stop \"261c736f789be223364e6426bc13e3438ff9d6e47659daa6b216e09460a9fff5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 02:56:38.092495 containerd[1911]: time="2026-03-06T02:56:38.092492461Z" level=info msg="Container to stop \"40ea953859307ce8a8a470c88300b84649a548c84681c0d44159793b803f5cff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 02:56:38.092495 containerd[1911]: time="2026-03-06T02:56:38.092498405Z" level=info msg="Container to stop \"dfdbaf5f38054bb51121d18ae2e2672796d9605b1036a64bb745c8b6189b9181\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 02:56:38.092609 containerd[1911]: time="2026-03-06T02:56:38.092503581Z" level=info msg="Container to stop \"b6b0aec96d3b78e84d1b364ad5855c98c1a05a3d465c360fe0f4cee8fda13c94\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 02:56:38.092609 containerd[1911]: time="2026-03-06T02:56:38.092508269Z" level=info msg="Container to stop \"dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 02:56:38.098014 systemd[1]: cri-containerd-6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11.scope: Deactivated successfully.
Mar 6 02:56:38.099421 containerd[1911]: time="2026-03-06T02:56:38.099378834Z" level=info msg="received sandbox exit event container_id:\"6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11\" id:\"6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11\" exit_status:137 exited_at:{seconds:1772765798 nanos:99261519}" monitor_name=podsandbox
Mar 6 02:56:38.100029 systemd[1]: cri-containerd-7124b33abe5d0d2991d5d95d86101b9aa1454a79531c213a2d8b9120399d2e9d.scope: Deactivated successfully.
Mar 6 02:56:38.104143 containerd[1911]: time="2026-03-06T02:56:38.104087852Z" level=info msg="received sandbox exit event container_id:\"7124b33abe5d0d2991d5d95d86101b9aa1454a79531c213a2d8b9120399d2e9d\" id:\"7124b33abe5d0d2991d5d95d86101b9aa1454a79531c213a2d8b9120399d2e9d\" exit_status:137 exited_at:{seconds:1772765798 nanos:103959752}" monitor_name=podsandbox
Mar 6 02:56:38.123043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11-rootfs.mount: Deactivated successfully.
Mar 6 02:56:38.127569 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7124b33abe5d0d2991d5d95d86101b9aa1454a79531c213a2d8b9120399d2e9d-rootfs.mount: Deactivated successfully.
Mar 6 02:56:38.138915 containerd[1911]: time="2026-03-06T02:56:38.138802288Z" level=info msg="shim disconnected" id=7124b33abe5d0d2991d5d95d86101b9aa1454a79531c213a2d8b9120399d2e9d namespace=k8s.io
Mar 6 02:56:38.138915 containerd[1911]: time="2026-03-06T02:56:38.138853537Z" level=warning msg="cleaning up after shim disconnected" id=7124b33abe5d0d2991d5d95d86101b9aa1454a79531c213a2d8b9120399d2e9d namespace=k8s.io
Mar 6 02:56:38.138915 containerd[1911]: time="2026-03-06T02:56:38.138878258Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 6 02:56:38.139297 containerd[1911]: time="2026-03-06T02:56:38.139154833Z" level=info msg="shim disconnected" id=6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11 namespace=k8s.io
Mar 6 02:56:38.139297 containerd[1911]: time="2026-03-06T02:56:38.139173618Z" level=warning msg="cleaning up after shim disconnected" id=6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11 namespace=k8s.io
Mar 6 02:56:38.139297 containerd[1911]: time="2026-03-06T02:56:38.139192083Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 6 02:56:38.150614 containerd[1911]: time="2026-03-06T02:56:38.150566812Z" level=info msg="received sandbox container exit event sandbox_id:\"6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11\" exit_status:137 exited_at:{seconds:1772765798 nanos:99261519}" monitor_name=criService
Mar 6 02:56:38.152297 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11-shm.mount: Deactivated successfully.
Mar 6 02:56:38.152938 containerd[1911]: time="2026-03-06T02:56:38.152903948Z" level=info msg="TearDown network for sandbox \"6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11\" successfully"
Mar 6 02:56:38.152938 containerd[1911]: time="2026-03-06T02:56:38.152929829Z" level=info msg="StopPodSandbox for \"6fc9346c6ba58053a36a7bfccfe225ed7c2713b1d53fb526fa3d2192fd7a2d11\" returns successfully"
Mar 6 02:56:38.154708 containerd[1911]: time="2026-03-06T02:56:38.154594587Z" level=info msg="received sandbox container exit event sandbox_id:\"7124b33abe5d0d2991d5d95d86101b9aa1454a79531c213a2d8b9120399d2e9d\" exit_status:137 exited_at:{seconds:1772765798 nanos:103959752}" monitor_name=criService
Mar 6 02:56:38.155680 containerd[1911]: time="2026-03-06T02:56:38.155485427Z" level=info msg="TearDown network for sandbox \"7124b33abe5d0d2991d5d95d86101b9aa1454a79531c213a2d8b9120399d2e9d\" successfully"
Mar 6 02:56:38.155680 containerd[1911]: time="2026-03-06T02:56:38.155508196Z" level=info msg="StopPodSandbox for \"7124b33abe5d0d2991d5d95d86101b9aa1454a79531c213a2d8b9120399d2e9d\" returns successfully"
Mar 6 02:56:38.277710 kubelet[3419]: I0306 02:56:38.277148 3419 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/b68a0e0f-1a06-424b-89f2-db78ffd6b367-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b68a0e0f-1a06-424b-89f2-db78ffd6b367-clustermesh-secrets\") pod \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") "
Mar 6 02:56:38.277710 kubelet[3419]: I0306 02:56:38.277183 3419 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/b68a0e0f-1a06-424b-89f2-db78ffd6b367-hubble-tls\" (UniqueName: \"kubernetes.io/projected/b68a0e0f-1a06-424b-89f2-db78ffd6b367-hubble-tls\") pod \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") "
Mar 6 02:56:38.277710 kubelet[3419]: I0306 02:56:38.277197 3419 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/b68a0e0f-1a06-424b-89f2-db78ffd6b367-kube-api-access-k6zl5\" (UniqueName: \"kubernetes.io/projected/b68a0e0f-1a06-424b-89f2-db78ffd6b367-kube-api-access-k6zl5\") pod \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") "
Mar 6 02:56:38.277710 kubelet[3419]: I0306 02:56:38.277212 3419 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-bpf-maps\") pod \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") "
Mar 6 02:56:38.277710 kubelet[3419]: I0306 02:56:38.277225 3419 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-xtables-lock\") pod \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") "
Mar 6 02:56:38.278154 kubelet[3419]: I0306 02:56:38.277236 3419 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-cilium-cgroup\") pod \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") "
Mar 6 02:56:38.278154 kubelet[3419]: I0306 02:56:38.277245 3419 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-lib-modules\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-lib-modules\") pod \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") "
Mar 6 02:56:38.278154 kubelet[3419]: I0306 02:56:38.277260 3419 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/b68a0e0f-1a06-424b-89f2-db78ffd6b367-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b68a0e0f-1a06-424b-89f2-db78ffd6b367-cilium-config-path\") pod \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") "
Mar 6 02:56:38.278154 kubelet[3419]: I0306 02:56:38.277270 3419 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-host-proc-sys-net\") pod \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") "
Mar 6 02:56:38.278154 kubelet[3419]: I0306 02:56:38.277280 3419 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-host-proc-sys-kernel\") pod \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") "
Mar 6 02:56:38.278235 kubelet[3419]: I0306 02:56:38.277293 3419 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-etc-cni-netd\") pod \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") "
Mar 6 02:56:38.278235 kubelet[3419]: I0306 02:56:38.277304 3419 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/4943ea04-b4a4-48fa-b19e-890a8cfa9910-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4943ea04-b4a4-48fa-b19e-890a8cfa9910-cilium-config-path\") pod \"4943ea04-b4a4-48fa-b19e-890a8cfa9910\" (UID: \"4943ea04-b4a4-48fa-b19e-890a8cfa9910\") "
Mar 6 02:56:38.278235 kubelet[3419]: I0306 02:56:38.277315 3419 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/4943ea04-b4a4-48fa-b19e-890a8cfa9910-kube-api-access-c4hv5\" (UniqueName: \"kubernetes.io/projected/4943ea04-b4a4-48fa-b19e-890a8cfa9910-kube-api-access-c4hv5\") pod \"4943ea04-b4a4-48fa-b19e-890a8cfa9910\" (UID: \"4943ea04-b4a4-48fa-b19e-890a8cfa9910\") "
Mar 6 02:56:38.278235 kubelet[3419]: I0306 02:56:38.277325 3419 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-hostproc\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-hostproc\") pod \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") "
Mar 6 02:56:38.278235 kubelet[3419]: I0306 02:56:38.277334 3419 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-cilium-run\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-cilium-run\") pod \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") "
Mar 6 02:56:38.278307 kubelet[3419]: I0306 02:56:38.277343 3419 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-cni-path\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-cni-path\") pod \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\" (UID: \"b68a0e0f-1a06-424b-89f2-db78ffd6b367\") "
Mar 6 02:56:38.278307 kubelet[3419]: I0306 02:56:38.277401 3419 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-cni-path" pod "b68a0e0f-1a06-424b-89f2-db78ffd6b367" (UID: "b68a0e0f-1a06-424b-89f2-db78ffd6b367"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 02:56:38.279408 kubelet[3419]: I0306 02:56:38.279322 3419 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-host-proc-sys-net" pod "b68a0e0f-1a06-424b-89f2-db78ffd6b367" (UID: "b68a0e0f-1a06-424b-89f2-db78ffd6b367"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 02:56:38.279408 kubelet[3419]: I0306 02:56:38.279354 3419 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-host-proc-sys-kernel" pod "b68a0e0f-1a06-424b-89f2-db78ffd6b367" (UID: "b68a0e0f-1a06-424b-89f2-db78ffd6b367"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 02:56:38.279408 kubelet[3419]: I0306 02:56:38.279363 3419 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-etc-cni-netd" pod "b68a0e0f-1a06-424b-89f2-db78ffd6b367" (UID: "b68a0e0f-1a06-424b-89f2-db78ffd6b367"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 02:56:38.279893 kubelet[3419]: I0306 02:56:38.279826 3419 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b68a0e0f-1a06-424b-89f2-db78ffd6b367-clustermesh-secrets" pod "b68a0e0f-1a06-424b-89f2-db78ffd6b367" (UID: "b68a0e0f-1a06-424b-89f2-db78ffd6b367"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 6 02:56:38.280503 kubelet[3419]: I0306 02:56:38.280448 3419 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-hostproc" pod "b68a0e0f-1a06-424b-89f2-db78ffd6b367" (UID: "b68a0e0f-1a06-424b-89f2-db78ffd6b367"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 02:56:38.280503 kubelet[3419]: I0306 02:56:38.280474 3419 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-cilium-run" pod "b68a0e0f-1a06-424b-89f2-db78ffd6b367" (UID: "b68a0e0f-1a06-424b-89f2-db78ffd6b367"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 02:56:38.281915 kubelet[3419]: I0306 02:56:38.281879 3419 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b68a0e0f-1a06-424b-89f2-db78ffd6b367-cilium-config-path" pod "b68a0e0f-1a06-424b-89f2-db78ffd6b367" (UID: "b68a0e0f-1a06-424b-89f2-db78ffd6b367"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 6 02:56:38.282068 kubelet[3419]: I0306 02:56:38.282046 3419 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-bpf-maps" pod "b68a0e0f-1a06-424b-89f2-db78ffd6b367" (UID: "b68a0e0f-1a06-424b-89f2-db78ffd6b367"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 02:56:38.282113 kubelet[3419]: I0306 02:56:38.282072 3419 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-xtables-lock" pod "b68a0e0f-1a06-424b-89f2-db78ffd6b367" (UID: "b68a0e0f-1a06-424b-89f2-db78ffd6b367"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 02:56:38.282113 kubelet[3419]: I0306 02:56:38.282083 3419 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-cilium-cgroup" pod "b68a0e0f-1a06-424b-89f2-db78ffd6b367" (UID: "b68a0e0f-1a06-424b-89f2-db78ffd6b367"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 02:56:38.282113 kubelet[3419]: I0306 02:56:38.282091 3419 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-lib-modules" pod "b68a0e0f-1a06-424b-89f2-db78ffd6b367" (UID: "b68a0e0f-1a06-424b-89f2-db78ffd6b367"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 02:56:38.282161 kubelet[3419]: I0306 02:56:38.282154 3419 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b68a0e0f-1a06-424b-89f2-db78ffd6b367-hubble-tls" pod "b68a0e0f-1a06-424b-89f2-db78ffd6b367" (UID: "b68a0e0f-1a06-424b-89f2-db78ffd6b367"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 6 02:56:38.283236 kubelet[3419]: I0306 02:56:38.283209 3419 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b68a0e0f-1a06-424b-89f2-db78ffd6b367-kube-api-access-k6zl5" pod "b68a0e0f-1a06-424b-89f2-db78ffd6b367" (UID: "b68a0e0f-1a06-424b-89f2-db78ffd6b367"). InnerVolumeSpecName "kube-api-access-k6zl5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 6 02:56:38.283650 kubelet[3419]: I0306 02:56:38.283630 3419 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4943ea04-b4a4-48fa-b19e-890a8cfa9910-cilium-config-path" pod "4943ea04-b4a4-48fa-b19e-890a8cfa9910" (UID: "4943ea04-b4a4-48fa-b19e-890a8cfa9910"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 6 02:56:38.284612 kubelet[3419]: I0306 02:56:38.284574 3419 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4943ea04-b4a4-48fa-b19e-890a8cfa9910-kube-api-access-c4hv5" pod "4943ea04-b4a4-48fa-b19e-890a8cfa9910" (UID: "4943ea04-b4a4-48fa-b19e-890a8cfa9910"). InnerVolumeSpecName "kube-api-access-c4hv5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 6 02:56:38.378054 kubelet[3419]: I0306 02:56:38.377908 3419 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b68a0e0f-1a06-424b-89f2-db78ffd6b367-clustermesh-secrets\") on node \"ci-4459.2.3-n-b98e3238ca\" DevicePath \"\""
Mar 6 02:56:38.378054 kubelet[3419]: I0306 02:56:38.377947 3419 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b68a0e0f-1a06-424b-89f2-db78ffd6b367-hubble-tls\") on node \"ci-4459.2.3-n-b98e3238ca\" DevicePath \"\""
Mar 6 02:56:38.378054 kubelet[3419]: I0306 02:56:38.377954 3419 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k6zl5\" (UniqueName: \"kubernetes.io/projected/b68a0e0f-1a06-424b-89f2-db78ffd6b367-kube-api-access-k6zl5\") on node \"ci-4459.2.3-n-b98e3238ca\" DevicePath \"\""
Mar 6 02:56:38.378054 kubelet[3419]: I0306 02:56:38.377960 3419 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-bpf-maps\") on node \"ci-4459.2.3-n-b98e3238ca\" DevicePath \"\""
Mar 6 02:56:38.378054 kubelet[3419]: I0306 02:56:38.377968 3419 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-xtables-lock\") on node \"ci-4459.2.3-n-b98e3238ca\" DevicePath \"\""
Mar 6 02:56:38.378054 kubelet[3419]: I0306 02:56:38.377974 3419 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-cilium-cgroup\") on node \"ci-4459.2.3-n-b98e3238ca\" DevicePath \"\""
Mar 6 02:56:38.378054 kubelet[3419]: I0306 02:56:38.377979 3419 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-lib-modules\") on node \"ci-4459.2.3-n-b98e3238ca\" DevicePath \"\""
Mar 6 02:56:38.378054 kubelet[3419]: I0306 02:56:38.377985 3419 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b68a0e0f-1a06-424b-89f2-db78ffd6b367-cilium-config-path\") on node \"ci-4459.2.3-n-b98e3238ca\" DevicePath \"\""
Mar 6 02:56:38.378318 kubelet[3419]: I0306 02:56:38.377992 3419 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-host-proc-sys-net\") on node \"ci-4459.2.3-n-b98e3238ca\" DevicePath \"\""
Mar 6 02:56:38.378318 kubelet[3419]: I0306 02:56:38.377998 3419 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-host-proc-sys-kernel\") on node \"ci-4459.2.3-n-b98e3238ca\" DevicePath \"\""
Mar 6 02:56:38.378318 kubelet[3419]: I0306 02:56:38.378004 3419 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-etc-cni-netd\") on node \"ci-4459.2.3-n-b98e3238ca\" DevicePath \"\""
Mar 6 02:56:38.378318 kubelet[3419]: I0306 02:56:38.378010 3419 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4943ea04-b4a4-48fa-b19e-890a8cfa9910-cilium-config-path\") on node \"ci-4459.2.3-n-b98e3238ca\" DevicePath \"\""
Mar 6 02:56:38.378318 kubelet[3419]: I0306 02:56:38.378015 3419 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c4hv5\" (UniqueName: \"kubernetes.io/projected/4943ea04-b4a4-48fa-b19e-890a8cfa9910-kube-api-access-c4hv5\") on node \"ci-4459.2.3-n-b98e3238ca\" DevicePath \"\""
Mar 6 02:56:38.378318 kubelet[3419]: I0306 02:56:38.378020 3419 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-hostproc\") on node \"ci-4459.2.3-n-b98e3238ca\" DevicePath \"\""
Mar 6 02:56:38.378318 kubelet[3419]: I0306 02:56:38.378029 3419 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-cilium-run\") on node \"ci-4459.2.3-n-b98e3238ca\" DevicePath \"\""
Mar 6 02:56:38.378318 kubelet[3419]: I0306 02:56:38.378034 3419 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b68a0e0f-1a06-424b-89f2-db78ffd6b367-cni-path\") on node \"ci-4459.2.3-n-b98e3238ca\" DevicePath \"\""
Mar 6 02:56:38.768103 systemd[1]: Removed slice kubepods-burstable-podb68a0e0f_1a06_424b_89f2_db78ffd6b367.slice - libcontainer container kubepods-burstable-podb68a0e0f_1a06_424b_89f2_db78ffd6b367.slice.
Mar 6 02:56:38.768185 systemd[1]: kubepods-burstable-podb68a0e0f_1a06_424b_89f2_db78ffd6b367.slice: Consumed 4.650s CPU time, 122.7M memory peak, 128K read from disk, 12.9M written to disk.
Mar 6 02:56:38.769636 systemd[1]: Removed slice kubepods-besteffort-pod4943ea04_b4a4_48fa_b19e_890a8cfa9910.slice - libcontainer container kubepods-besteffort-pod4943ea04_b4a4_48fa_b19e_890a8cfa9910.slice.
Mar 6 02:56:39.043674 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7124b33abe5d0d2991d5d95d86101b9aa1454a79531c213a2d8b9120399d2e9d-shm.mount: Deactivated successfully.
Mar 6 02:56:39.043779 systemd[1]: var-lib-kubelet-pods-4943ea04\x2db4a4\x2d48fa\x2db19e\x2d890a8cfa9910-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc4hv5.mount: Deactivated successfully. Mar 6 02:56:39.043826 systemd[1]: var-lib-kubelet-pods-b68a0e0f\x2d1a06\x2d424b\x2d89f2\x2ddb78ffd6b367-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk6zl5.mount: Deactivated successfully. Mar 6 02:56:39.043861 systemd[1]: var-lib-kubelet-pods-b68a0e0f\x2d1a06\x2d424b\x2d89f2\x2ddb78ffd6b367-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 6 02:56:39.043897 systemd[1]: var-lib-kubelet-pods-b68a0e0f\x2d1a06\x2d424b\x2d89f2\x2ddb78ffd6b367-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 6 02:56:39.073986 kubelet[3419]: I0306 02:56:39.073933 3419 scope.go:122] "RemoveContainer" containerID="e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243" Mar 6 02:56:39.076485 containerd[1911]: time="2026-03-06T02:56:39.076446499Z" level=info msg="RemoveContainer for \"e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243\"" Mar 6 02:56:39.091329 containerd[1911]: time="2026-03-06T02:56:39.091288684Z" level=info msg="RemoveContainer for \"e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243\" returns successfully" Mar 6 02:56:39.091961 kubelet[3419]: I0306 02:56:39.091850 3419 scope.go:122] "RemoveContainer" containerID="e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243" Mar 6 02:56:39.092599 containerd[1911]: time="2026-03-06T02:56:39.092554767Z" level=error msg="ContainerStatus for \"e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243\": not found" Mar 6 02:56:39.092936 kubelet[3419]: E0306 02:56:39.092675 3419 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243\": not found" containerID="e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243" Mar 6 02:56:39.092936 kubelet[3419]: I0306 02:56:39.092713 3419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243"} err="failed to get container status \"e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243\": rpc error: code = NotFound desc = an error occurred when try to find container \"e4dd897c114425fb615530de0b94711c894102f088807c4eefcb04113bd61243\": not found" Mar 6 02:56:39.092936 kubelet[3419]: I0306 02:56:39.092768 3419 scope.go:122] "RemoveContainer" containerID="dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f" Mar 6 02:56:39.094066 containerd[1911]: time="2026-03-06T02:56:39.093995182Z" level=info msg="RemoveContainer for \"dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f\"" Mar 6 02:56:39.103842 containerd[1911]: time="2026-03-06T02:56:39.103810605Z" level=info msg="RemoveContainer for \"dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f\" returns successfully" Mar 6 02:56:39.104094 kubelet[3419]: I0306 02:56:39.104070 3419 scope.go:122] "RemoveContainer" containerID="b6b0aec96d3b78e84d1b364ad5855c98c1a05a3d465c360fe0f4cee8fda13c94" Mar 6 02:56:39.105911 containerd[1911]: time="2026-03-06T02:56:39.105876366Z" level=info msg="RemoveContainer for \"b6b0aec96d3b78e84d1b364ad5855c98c1a05a3d465c360fe0f4cee8fda13c94\"" Mar 6 02:56:39.114524 containerd[1911]: time="2026-03-06T02:56:39.114485731Z" level=info msg="RemoveContainer for \"b6b0aec96d3b78e84d1b364ad5855c98c1a05a3d465c360fe0f4cee8fda13c94\" returns successfully" Mar 6 02:56:39.114720 kubelet[3419]: I0306 02:56:39.114690 3419 scope.go:122] "RemoveContainer" 
containerID="dfdbaf5f38054bb51121d18ae2e2672796d9605b1036a64bb745c8b6189b9181" Mar 6 02:56:39.116807 containerd[1911]: time="2026-03-06T02:56:39.116775754Z" level=info msg="RemoveContainer for \"dfdbaf5f38054bb51121d18ae2e2672796d9605b1036a64bb745c8b6189b9181\"" Mar 6 02:56:39.129708 containerd[1911]: time="2026-03-06T02:56:39.129668093Z" level=info msg="RemoveContainer for \"dfdbaf5f38054bb51121d18ae2e2672796d9605b1036a64bb745c8b6189b9181\" returns successfully" Mar 6 02:56:39.129973 kubelet[3419]: I0306 02:56:39.129942 3419 scope.go:122] "RemoveContainer" containerID="40ea953859307ce8a8a470c88300b84649a548c84681c0d44159793b803f5cff" Mar 6 02:56:39.131503 containerd[1911]: time="2026-03-06T02:56:39.131423021Z" level=info msg="RemoveContainer for \"40ea953859307ce8a8a470c88300b84649a548c84681c0d44159793b803f5cff\"" Mar 6 02:56:39.139877 containerd[1911]: time="2026-03-06T02:56:39.139844893Z" level=info msg="RemoveContainer for \"40ea953859307ce8a8a470c88300b84649a548c84681c0d44159793b803f5cff\" returns successfully" Mar 6 02:56:39.140042 kubelet[3419]: I0306 02:56:39.140018 3419 scope.go:122] "RemoveContainer" containerID="261c736f789be223364e6426bc13e3438ff9d6e47659daa6b216e09460a9fff5" Mar 6 02:56:39.141429 containerd[1911]: time="2026-03-06T02:56:39.141392808Z" level=info msg="RemoveContainer for \"261c736f789be223364e6426bc13e3438ff9d6e47659daa6b216e09460a9fff5\"" Mar 6 02:56:39.149977 containerd[1911]: time="2026-03-06T02:56:39.149951243Z" level=info msg="RemoveContainer for \"261c736f789be223364e6426bc13e3438ff9d6e47659daa6b216e09460a9fff5\" returns successfully" Mar 6 02:56:39.150207 kubelet[3419]: I0306 02:56:39.150118 3419 scope.go:122] "RemoveContainer" containerID="dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f" Mar 6 02:56:39.150442 containerd[1911]: time="2026-03-06T02:56:39.150412056Z" level=error msg="ContainerStatus for \"dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f\" failed" error="rpc error: code = NotFound 
desc = an error occurred when try to find container \"dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f\": not found" Mar 6 02:56:39.150628 kubelet[3419]: E0306 02:56:39.150604 3419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f\": not found" containerID="dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f" Mar 6 02:56:39.150671 kubelet[3419]: I0306 02:56:39.150631 3419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f"} err="failed to get container status \"dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f\": rpc error: code = NotFound desc = an error occurred when try to find container \"dccb7539b110e7b183cbd002efd2e5a0f1ab262d2a54fe652fd68b4a5f6e824f\": not found" Mar 6 02:56:39.150671 kubelet[3419]: I0306 02:56:39.150652 3419 scope.go:122] "RemoveContainer" containerID="b6b0aec96d3b78e84d1b364ad5855c98c1a05a3d465c360fe0f4cee8fda13c94" Mar 6 02:56:39.150846 containerd[1911]: time="2026-03-06T02:56:39.150816171Z" level=error msg="ContainerStatus for \"b6b0aec96d3b78e84d1b364ad5855c98c1a05a3d465c360fe0f4cee8fda13c94\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b6b0aec96d3b78e84d1b364ad5855c98c1a05a3d465c360fe0f4cee8fda13c94\": not found" Mar 6 02:56:39.150945 kubelet[3419]: E0306 02:56:39.150923 3419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b6b0aec96d3b78e84d1b364ad5855c98c1a05a3d465c360fe0f4cee8fda13c94\": not found" containerID="b6b0aec96d3b78e84d1b364ad5855c98c1a05a3d465c360fe0f4cee8fda13c94" Mar 6 02:56:39.150987 kubelet[3419]: I0306 02:56:39.150973 3419 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"b6b0aec96d3b78e84d1b364ad5855c98c1a05a3d465c360fe0f4cee8fda13c94"} err="failed to get container status \"b6b0aec96d3b78e84d1b364ad5855c98c1a05a3d465c360fe0f4cee8fda13c94\": rpc error: code = NotFound desc = an error occurred when try to find container \"b6b0aec96d3b78e84d1b364ad5855c98c1a05a3d465c360fe0f4cee8fda13c94\": not found" Mar 6 02:56:39.151005 kubelet[3419]: I0306 02:56:39.150987 3419 scope.go:122] "RemoveContainer" containerID="dfdbaf5f38054bb51121d18ae2e2672796d9605b1036a64bb745c8b6189b9181" Mar 6 02:56:39.151175 containerd[1911]: time="2026-03-06T02:56:39.151141188Z" level=error msg="ContainerStatus for \"dfdbaf5f38054bb51121d18ae2e2672796d9605b1036a64bb745c8b6189b9181\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dfdbaf5f38054bb51121d18ae2e2672796d9605b1036a64bb745c8b6189b9181\": not found" Mar 6 02:56:39.151426 kubelet[3419]: E0306 02:56:39.151402 3419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dfdbaf5f38054bb51121d18ae2e2672796d9605b1036a64bb745c8b6189b9181\": not found" containerID="dfdbaf5f38054bb51121d18ae2e2672796d9605b1036a64bb745c8b6189b9181" Mar 6 02:56:39.151477 kubelet[3419]: I0306 02:56:39.151424 3419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dfdbaf5f38054bb51121d18ae2e2672796d9605b1036a64bb745c8b6189b9181"} err="failed to get container status \"dfdbaf5f38054bb51121d18ae2e2672796d9605b1036a64bb745c8b6189b9181\": rpc error: code = NotFound desc = an error occurred when try to find container \"dfdbaf5f38054bb51121d18ae2e2672796d9605b1036a64bb745c8b6189b9181\": not found" Mar 6 02:56:39.151477 kubelet[3419]: I0306 02:56:39.151447 3419 scope.go:122] "RemoveContainer" containerID="40ea953859307ce8a8a470c88300b84649a548c84681c0d44159793b803f5cff" Mar 6 02:56:39.151712 
containerd[1911]: time="2026-03-06T02:56:39.151686171Z" level=error msg="ContainerStatus for \"40ea953859307ce8a8a470c88300b84649a548c84681c0d44159793b803f5cff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"40ea953859307ce8a8a470c88300b84649a548c84681c0d44159793b803f5cff\": not found" Mar 6 02:56:39.151843 kubelet[3419]: E0306 02:56:39.151822 3419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40ea953859307ce8a8a470c88300b84649a548c84681c0d44159793b803f5cff\": not found" containerID="40ea953859307ce8a8a470c88300b84649a548c84681c0d44159793b803f5cff" Mar 6 02:56:39.151874 kubelet[3419]: I0306 02:56:39.151843 3419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"40ea953859307ce8a8a470c88300b84649a548c84681c0d44159793b803f5cff"} err="failed to get container status \"40ea953859307ce8a8a470c88300b84649a548c84681c0d44159793b803f5cff\": rpc error: code = NotFound desc = an error occurred when try to find container \"40ea953859307ce8a8a470c88300b84649a548c84681c0d44159793b803f5cff\": not found" Mar 6 02:56:39.151874 kubelet[3419]: I0306 02:56:39.151855 3419 scope.go:122] "RemoveContainer" containerID="261c736f789be223364e6426bc13e3438ff9d6e47659daa6b216e09460a9fff5" Mar 6 02:56:39.152094 containerd[1911]: time="2026-03-06T02:56:39.152038357Z" level=error msg="ContainerStatus for \"261c736f789be223364e6426bc13e3438ff9d6e47659daa6b216e09460a9fff5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"261c736f789be223364e6426bc13e3438ff9d6e47659daa6b216e09460a9fff5\": not found" Mar 6 02:56:39.152188 kubelet[3419]: E0306 02:56:39.152166 3419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"261c736f789be223364e6426bc13e3438ff9d6e47659daa6b216e09460a9fff5\": not found" containerID="261c736f789be223364e6426bc13e3438ff9d6e47659daa6b216e09460a9fff5" Mar 6 02:56:39.152223 kubelet[3419]: I0306 02:56:39.152186 3419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"261c736f789be223364e6426bc13e3438ff9d6e47659daa6b216e09460a9fff5"} err="failed to get container status \"261c736f789be223364e6426bc13e3438ff9d6e47659daa6b216e09460a9fff5\": rpc error: code = NotFound desc = an error occurred when try to find container \"261c736f789be223364e6426bc13e3438ff9d6e47659daa6b216e09460a9fff5\": not found" Mar 6 02:56:40.002526 sshd[4945]: Connection closed by 10.200.16.10 port 47214 Mar 6 02:56:40.003930 sshd-session[4942]: pam_unix(sshd:session): session closed for user core Mar 6 02:56:40.007512 systemd[1]: sshd@20-10.200.20.16:22-10.200.16.10:47214.service: Deactivated successfully. Mar 6 02:56:40.009183 systemd[1]: session-23.scope: Deactivated successfully. Mar 6 02:56:40.010251 systemd-logind[1879]: Session 23 logged out. Waiting for processes to exit. Mar 6 02:56:40.011467 systemd-logind[1879]: Removed session 23. Mar 6 02:56:40.092972 systemd[1]: Started sshd@21-10.200.20.16:22-10.200.16.10:48470.service - OpenSSH per-connection server daemon (10.200.16.10:48470). Mar 6 02:56:40.497366 sshd[5089]: Accepted publickey for core from 10.200.16.10 port 48470 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4 Mar 6 02:56:40.498452 sshd-session[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:56:40.501987 systemd-logind[1879]: New session 24 of user core. Mar 6 02:56:40.509063 systemd[1]: Started session-24.scope - Session 24 of User core. 
Mar 6 02:56:40.764532 kubelet[3419]: I0306 02:56:40.764390 3419 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4943ea04-b4a4-48fa-b19e-890a8cfa9910" path="/var/lib/kubelet/pods/4943ea04-b4a4-48fa-b19e-890a8cfa9910/volumes" Mar 6 02:56:40.765677 kubelet[3419]: I0306 02:56:40.765370 3419 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b68a0e0f-1a06-424b-89f2-db78ffd6b367" path="/var/lib/kubelet/pods/b68a0e0f-1a06-424b-89f2-db78ffd6b367/volumes" Mar 6 02:56:40.987961 systemd[1]: Created slice kubepods-burstable-pod6a0fd2c6_e942_4891_a28e_01ef5bb63a78.slice - libcontainer container kubepods-burstable-pod6a0fd2c6_e942_4891_a28e_01ef5bb63a78.slice. Mar 6 02:56:41.025862 sshd[5092]: Connection closed by 10.200.16.10 port 48470 Mar 6 02:56:41.026693 sshd-session[5089]: pam_unix(sshd:session): session closed for user core Mar 6 02:56:41.033135 systemd[1]: sshd@21-10.200.20.16:22-10.200.16.10:48470.service: Deactivated successfully. Mar 6 02:56:41.035586 systemd[1]: session-24.scope: Deactivated successfully. Mar 6 02:56:41.039346 systemd-logind[1879]: Session 24 logged out. Waiting for processes to exit. Mar 6 02:56:41.041002 systemd-logind[1879]: Removed session 24. 
Mar 6 02:56:41.091764 kubelet[3419]: I0306 02:56:41.091517 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a0fd2c6-e942-4891-a28e-01ef5bb63a78-lib-modules\") pod \"cilium-mqcl4\" (UID: \"6a0fd2c6-e942-4891-a28e-01ef5bb63a78\") " pod="kube-system/cilium-mqcl4" Mar 6 02:56:41.091764 kubelet[3419]: I0306 02:56:41.091550 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a0fd2c6-e942-4891-a28e-01ef5bb63a78-clustermesh-secrets\") pod \"cilium-mqcl4\" (UID: \"6a0fd2c6-e942-4891-a28e-01ef5bb63a78\") " pod="kube-system/cilium-mqcl4" Mar 6 02:56:41.091764 kubelet[3419]: I0306 02:56:41.091563 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a0fd2c6-e942-4891-a28e-01ef5bb63a78-cilium-config-path\") pod \"cilium-mqcl4\" (UID: \"6a0fd2c6-e942-4891-a28e-01ef5bb63a78\") " pod="kube-system/cilium-mqcl4" Mar 6 02:56:41.091764 kubelet[3419]: I0306 02:56:41.091573 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a0fd2c6-e942-4891-a28e-01ef5bb63a78-host-proc-sys-net\") pod \"cilium-mqcl4\" (UID: \"6a0fd2c6-e942-4891-a28e-01ef5bb63a78\") " pod="kube-system/cilium-mqcl4" Mar 6 02:56:41.091764 kubelet[3419]: I0306 02:56:41.091584 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9lft\" (UniqueName: \"kubernetes.io/projected/6a0fd2c6-e942-4891-a28e-01ef5bb63a78-kube-api-access-l9lft\") pod \"cilium-mqcl4\" (UID: \"6a0fd2c6-e942-4891-a28e-01ef5bb63a78\") " pod="kube-system/cilium-mqcl4" Mar 6 02:56:41.091991 kubelet[3419]: I0306 02:56:41.091596 3419 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a0fd2c6-e942-4891-a28e-01ef5bb63a78-hostproc\") pod \"cilium-mqcl4\" (UID: \"6a0fd2c6-e942-4891-a28e-01ef5bb63a78\") " pod="kube-system/cilium-mqcl4" Mar 6 02:56:41.091991 kubelet[3419]: I0306 02:56:41.091604 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a0fd2c6-e942-4891-a28e-01ef5bb63a78-xtables-lock\") pod \"cilium-mqcl4\" (UID: \"6a0fd2c6-e942-4891-a28e-01ef5bb63a78\") " pod="kube-system/cilium-mqcl4" Mar 6 02:56:41.091991 kubelet[3419]: I0306 02:56:41.091614 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a0fd2c6-e942-4891-a28e-01ef5bb63a78-cilium-run\") pod \"cilium-mqcl4\" (UID: \"6a0fd2c6-e942-4891-a28e-01ef5bb63a78\") " pod="kube-system/cilium-mqcl4" Mar 6 02:56:41.091991 kubelet[3419]: I0306 02:56:41.091625 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a0fd2c6-e942-4891-a28e-01ef5bb63a78-cilium-cgroup\") pod \"cilium-mqcl4\" (UID: \"6a0fd2c6-e942-4891-a28e-01ef5bb63a78\") " pod="kube-system/cilium-mqcl4" Mar 6 02:56:41.091991 kubelet[3419]: I0306 02:56:41.091635 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a0fd2c6-e942-4891-a28e-01ef5bb63a78-host-proc-sys-kernel\") pod \"cilium-mqcl4\" (UID: \"6a0fd2c6-e942-4891-a28e-01ef5bb63a78\") " pod="kube-system/cilium-mqcl4" Mar 6 02:56:41.091991 kubelet[3419]: I0306 02:56:41.091644 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/6a0fd2c6-e942-4891-a28e-01ef5bb63a78-bpf-maps\") pod \"cilium-mqcl4\" (UID: \"6a0fd2c6-e942-4891-a28e-01ef5bb63a78\") " pod="kube-system/cilium-mqcl4" Mar 6 02:56:41.092075 kubelet[3419]: I0306 02:56:41.091652 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a0fd2c6-e942-4891-a28e-01ef5bb63a78-cni-path\") pod \"cilium-mqcl4\" (UID: \"6a0fd2c6-e942-4891-a28e-01ef5bb63a78\") " pod="kube-system/cilium-mqcl4" Mar 6 02:56:41.092075 kubelet[3419]: I0306 02:56:41.091696 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a0fd2c6-e942-4891-a28e-01ef5bb63a78-hubble-tls\") pod \"cilium-mqcl4\" (UID: \"6a0fd2c6-e942-4891-a28e-01ef5bb63a78\") " pod="kube-system/cilium-mqcl4" Mar 6 02:56:41.092075 kubelet[3419]: I0306 02:56:41.091705 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a0fd2c6-e942-4891-a28e-01ef5bb63a78-etc-cni-netd\") pod \"cilium-mqcl4\" (UID: \"6a0fd2c6-e942-4891-a28e-01ef5bb63a78\") " pod="kube-system/cilium-mqcl4" Mar 6 02:56:41.092075 kubelet[3419]: I0306 02:56:41.091713 3419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6a0fd2c6-e942-4891-a28e-01ef5bb63a78-cilium-ipsec-secrets\") pod \"cilium-mqcl4\" (UID: \"6a0fd2c6-e942-4891-a28e-01ef5bb63a78\") " pod="kube-system/cilium-mqcl4" Mar 6 02:56:41.106749 systemd[1]: Started sshd@22-10.200.20.16:22-10.200.16.10:48482.service - OpenSSH per-connection server daemon (10.200.16.10:48482). 
Mar 6 02:56:41.303069 containerd[1911]: time="2026-03-06T02:56:41.302958585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mqcl4,Uid:6a0fd2c6-e942-4891-a28e-01ef5bb63a78,Namespace:kube-system,Attempt:0,}" Mar 6 02:56:41.340491 containerd[1911]: time="2026-03-06T02:56:41.340120800Z" level=info msg="connecting to shim c5fcb764b6df7ce963712970c8b532c947c0af214051cdb40395b4032efe492d" address="unix:///run/containerd/s/2355a9c57cc571dfb9717ab29fea4d2097939500e03b50ed2455b6303da7a0f3" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:56:41.359898 systemd[1]: Started cri-containerd-c5fcb764b6df7ce963712970c8b532c947c0af214051cdb40395b4032efe492d.scope - libcontainer container c5fcb764b6df7ce963712970c8b532c947c0af214051cdb40395b4032efe492d. Mar 6 02:56:41.383545 containerd[1911]: time="2026-03-06T02:56:41.383503147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mqcl4,Uid:6a0fd2c6-e942-4891-a28e-01ef5bb63a78,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5fcb764b6df7ce963712970c8b532c947c0af214051cdb40395b4032efe492d\"" Mar 6 02:56:41.393538 containerd[1911]: time="2026-03-06T02:56:41.393494342Z" level=info msg="CreateContainer within sandbox \"c5fcb764b6df7ce963712970c8b532c947c0af214051cdb40395b4032efe492d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 6 02:56:41.413756 containerd[1911]: time="2026-03-06T02:56:41.413324064Z" level=info msg="Container 04dea78b6f269696b8a1db9275f4a3f21f828dc676652ea3a1c0077d62934855: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:56:41.427864 containerd[1911]: time="2026-03-06T02:56:41.427828552Z" level=info msg="CreateContainer within sandbox \"c5fcb764b6df7ce963712970c8b532c947c0af214051cdb40395b4032efe492d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"04dea78b6f269696b8a1db9275f4a3f21f828dc676652ea3a1c0077d62934855\"" Mar 6 02:56:41.429035 containerd[1911]: time="2026-03-06T02:56:41.429016976Z" level=info msg="StartContainer 
for \"04dea78b6f269696b8a1db9275f4a3f21f828dc676652ea3a1c0077d62934855\"" Mar 6 02:56:41.430822 containerd[1911]: time="2026-03-06T02:56:41.430796161Z" level=info msg="connecting to shim 04dea78b6f269696b8a1db9275f4a3f21f828dc676652ea3a1c0077d62934855" address="unix:///run/containerd/s/2355a9c57cc571dfb9717ab29fea4d2097939500e03b50ed2455b6303da7a0f3" protocol=ttrpc version=3 Mar 6 02:56:41.446882 systemd[1]: Started cri-containerd-04dea78b6f269696b8a1db9275f4a3f21f828dc676652ea3a1c0077d62934855.scope - libcontainer container 04dea78b6f269696b8a1db9275f4a3f21f828dc676652ea3a1c0077d62934855. Mar 6 02:56:41.478037 containerd[1911]: time="2026-03-06T02:56:41.477719285Z" level=info msg="StartContainer for \"04dea78b6f269696b8a1db9275f4a3f21f828dc676652ea3a1c0077d62934855\" returns successfully" Mar 6 02:56:41.478120 systemd[1]: cri-containerd-04dea78b6f269696b8a1db9275f4a3f21f828dc676652ea3a1c0077d62934855.scope: Deactivated successfully. Mar 6 02:56:41.480540 containerd[1911]: time="2026-03-06T02:56:41.480483378Z" level=info msg="received container exit event container_id:\"04dea78b6f269696b8a1db9275f4a3f21f828dc676652ea3a1c0077d62934855\" id:\"04dea78b6f269696b8a1db9275f4a3f21f828dc676652ea3a1c0077d62934855\" pid:5168 exited_at:{seconds:1772765801 nanos:480234555}" Mar 6 02:56:41.531586 sshd[5102]: Accepted publickey for core from 10.200.16.10 port 48482 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4 Mar 6 02:56:41.532844 sshd-session[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:56:41.536422 systemd-logind[1879]: New session 25 of user core. Mar 6 02:56:41.541862 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 6 02:56:41.761099 sshd[5200]: Connection closed by 10.200.16.10 port 48482 Mar 6 02:56:41.761702 sshd-session[5102]: pam_unix(sshd:session): session closed for user core Mar 6 02:56:41.765304 systemd-logind[1879]: Session 25 logged out. Waiting for processes to exit. 
Mar 6 02:56:41.765591 systemd[1]: sshd@22-10.200.20.16:22-10.200.16.10:48482.service: Deactivated successfully. Mar 6 02:56:41.767193 systemd[1]: session-25.scope: Deactivated successfully. Mar 6 02:56:41.768775 systemd-logind[1879]: Removed session 25. Mar 6 02:56:41.837928 systemd[1]: Started sshd@23-10.200.20.16:22-10.200.16.10:48484.service - OpenSSH per-connection server daemon (10.200.16.10:48484). Mar 6 02:56:41.840336 kubelet[3419]: E0306 02:56:41.840269 3419 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 6 02:56:42.101482 containerd[1911]: time="2026-03-06T02:56:42.100230802Z" level=info msg="CreateContainer within sandbox \"c5fcb764b6df7ce963712970c8b532c947c0af214051cdb40395b4032efe492d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 6 02:56:42.117474 containerd[1911]: time="2026-03-06T02:56:42.117437120Z" level=info msg="Container 688e2bef057b8750639c5de16f7f5efbbb5ed067e7268c74ba93828667064072: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:56:42.132948 containerd[1911]: time="2026-03-06T02:56:42.132896797Z" level=info msg="CreateContainer within sandbox \"c5fcb764b6df7ce963712970c8b532c947c0af214051cdb40395b4032efe492d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"688e2bef057b8750639c5de16f7f5efbbb5ed067e7268c74ba93828667064072\"" Mar 6 02:56:42.134929 containerd[1911]: time="2026-03-06T02:56:42.134895474Z" level=info msg="StartContainer for \"688e2bef057b8750639c5de16f7f5efbbb5ed067e7268c74ba93828667064072\"" Mar 6 02:56:42.135592 containerd[1911]: time="2026-03-06T02:56:42.135567222Z" level=info msg="connecting to shim 688e2bef057b8750639c5de16f7f5efbbb5ed067e7268c74ba93828667064072" address="unix:///run/containerd/s/2355a9c57cc571dfb9717ab29fea4d2097939500e03b50ed2455b6303da7a0f3" protocol=ttrpc version=3 Mar 6 02:56:42.154912 
systemd[1]: Started cri-containerd-688e2bef057b8750639c5de16f7f5efbbb5ed067e7268c74ba93828667064072.scope - libcontainer container 688e2bef057b8750639c5de16f7f5efbbb5ed067e7268c74ba93828667064072. Mar 6 02:56:42.181843 containerd[1911]: time="2026-03-06T02:56:42.181804434Z" level=info msg="StartContainer for \"688e2bef057b8750639c5de16f7f5efbbb5ed067e7268c74ba93828667064072\" returns successfully" Mar 6 02:56:42.182548 systemd[1]: cri-containerd-688e2bef057b8750639c5de16f7f5efbbb5ed067e7268c74ba93828667064072.scope: Deactivated successfully. Mar 6 02:56:42.183315 containerd[1911]: time="2026-03-06T02:56:42.183268624Z" level=info msg="received container exit event container_id:\"688e2bef057b8750639c5de16f7f5efbbb5ed067e7268c74ba93828667064072\" id:\"688e2bef057b8750639c5de16f7f5efbbb5ed067e7268c74ba93828667064072\" pid:5223 exited_at:{seconds:1772765802 nanos:182938724}" Mar 6 02:56:42.201209 sshd[5207]: Accepted publickey for core from 10.200.16.10 port 48484 ssh2: RSA SHA256:FEy/krmA4A08ZzdMQEPdw8LvNt9bbJfX7o/obFKAbA4 Mar 6 02:56:42.202427 sshd-session[5207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:56:42.205366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-688e2bef057b8750639c5de16f7f5efbbb5ed067e7268c74ba93828667064072-rootfs.mount: Deactivated successfully. Mar 6 02:56:42.208867 systemd-logind[1879]: New session 26 of user core. Mar 6 02:56:42.212859 systemd[1]: Started session-26.scope - Session 26 of User core. 
Mar 6 02:56:43.105835 containerd[1911]: time="2026-03-06T02:56:43.105751047Z" level=info msg="CreateContainer within sandbox \"c5fcb764b6df7ce963712970c8b532c947c0af214051cdb40395b4032efe492d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 6 02:56:43.127929 containerd[1911]: time="2026-03-06T02:56:43.127882867Z" level=info msg="Container 34b68fa8f47a6d8de57361a3839ffbb2567e8f1e51cf03b6d27430aa4f54a1ff: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:56:43.147450 containerd[1911]: time="2026-03-06T02:56:43.147396023Z" level=info msg="CreateContainer within sandbox \"c5fcb764b6df7ce963712970c8b532c947c0af214051cdb40395b4032efe492d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"34b68fa8f47a6d8de57361a3839ffbb2567e8f1e51cf03b6d27430aa4f54a1ff\"" Mar 6 02:56:43.148373 containerd[1911]: time="2026-03-06T02:56:43.148183899Z" level=info msg="StartContainer for \"34b68fa8f47a6d8de57361a3839ffbb2567e8f1e51cf03b6d27430aa4f54a1ff\"" Mar 6 02:56:43.149348 containerd[1911]: time="2026-03-06T02:56:43.149324330Z" level=info msg="connecting to shim 34b68fa8f47a6d8de57361a3839ffbb2567e8f1e51cf03b6d27430aa4f54a1ff" address="unix:///run/containerd/s/2355a9c57cc571dfb9717ab29fea4d2097939500e03b50ed2455b6303da7a0f3" protocol=ttrpc version=3 Mar 6 02:56:43.164864 systemd[1]: Started cri-containerd-34b68fa8f47a6d8de57361a3839ffbb2567e8f1e51cf03b6d27430aa4f54a1ff.scope - libcontainer container 34b68fa8f47a6d8de57361a3839ffbb2567e8f1e51cf03b6d27430aa4f54a1ff. Mar 6 02:56:43.217092 systemd[1]: cri-containerd-34b68fa8f47a6d8de57361a3839ffbb2567e8f1e51cf03b6d27430aa4f54a1ff.scope: Deactivated successfully. 
Mar 6 02:56:43.220389 containerd[1911]: time="2026-03-06T02:56:43.220351037Z" level=info msg="received container exit event container_id:\"34b68fa8f47a6d8de57361a3839ffbb2567e8f1e51cf03b6d27430aa4f54a1ff\" id:\"34b68fa8f47a6d8de57361a3839ffbb2567e8f1e51cf03b6d27430aa4f54a1ff\" pid:5273 exited_at:{seconds:1772765803 nanos:219138601}"
Mar 6 02:56:43.226706 containerd[1911]: time="2026-03-06T02:56:43.226608689Z" level=info msg="StartContainer for \"34b68fa8f47a6d8de57361a3839ffbb2567e8f1e51cf03b6d27430aa4f54a1ff\" returns successfully"
Mar 6 02:56:43.237724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34b68fa8f47a6d8de57361a3839ffbb2567e8f1e51cf03b6d27430aa4f54a1ff-rootfs.mount: Deactivated successfully.
Mar 6 02:56:44.110486 containerd[1911]: time="2026-03-06T02:56:44.110209017Z" level=info msg="CreateContainer within sandbox \"c5fcb764b6df7ce963712970c8b532c947c0af214051cdb40395b4032efe492d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 6 02:56:44.143772 containerd[1911]: time="2026-03-06T02:56:44.142787055Z" level=info msg="Container b36e1daf5fe8601de0563ca7898b06f5daabcd7527ff573c81a6508cdb035afe: CDI devices from CRI Config.CDIDevices: []"
Mar 6 02:56:44.159755 containerd[1911]: time="2026-03-06T02:56:44.159704230Z" level=info msg="CreateContainer within sandbox \"c5fcb764b6df7ce963712970c8b532c947c0af214051cdb40395b4032efe492d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b36e1daf5fe8601de0563ca7898b06f5daabcd7527ff573c81a6508cdb035afe\""
Mar 6 02:56:44.160481 containerd[1911]: time="2026-03-06T02:56:44.160336206Z" level=info msg="StartContainer for \"b36e1daf5fe8601de0563ca7898b06f5daabcd7527ff573c81a6508cdb035afe\""
Mar 6 02:56:44.161144 containerd[1911]: time="2026-03-06T02:56:44.161107073Z" level=info msg="connecting to shim b36e1daf5fe8601de0563ca7898b06f5daabcd7527ff573c81a6508cdb035afe" address="unix:///run/containerd/s/2355a9c57cc571dfb9717ab29fea4d2097939500e03b50ed2455b6303da7a0f3" protocol=ttrpc version=3
Mar 6 02:56:44.179912 systemd[1]: Started cri-containerd-b36e1daf5fe8601de0563ca7898b06f5daabcd7527ff573c81a6508cdb035afe.scope - libcontainer container b36e1daf5fe8601de0563ca7898b06f5daabcd7527ff573c81a6508cdb035afe.
Mar 6 02:56:44.202694 systemd[1]: cri-containerd-b36e1daf5fe8601de0563ca7898b06f5daabcd7527ff573c81a6508cdb035afe.scope: Deactivated successfully.
Mar 6 02:56:44.209810 containerd[1911]: time="2026-03-06T02:56:44.209654918Z" level=info msg="received container exit event container_id:\"b36e1daf5fe8601de0563ca7898b06f5daabcd7527ff573c81a6508cdb035afe\" id:\"b36e1daf5fe8601de0563ca7898b06f5daabcd7527ff573c81a6508cdb035afe\" pid:5313 exited_at:{seconds:1772765804 nanos:204260140}"
Mar 6 02:56:44.212176 containerd[1911]: time="2026-03-06T02:56:44.212154780Z" level=info msg="StartContainer for \"b36e1daf5fe8601de0563ca7898b06f5daabcd7527ff573c81a6508cdb035afe\" returns successfully"
Mar 6 02:56:44.227504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b36e1daf5fe8601de0563ca7898b06f5daabcd7527ff573c81a6508cdb035afe-rootfs.mount: Deactivated successfully.
Mar 6 02:56:45.115024 containerd[1911]: time="2026-03-06T02:56:45.114980812Z" level=info msg="CreateContainer within sandbox \"c5fcb764b6df7ce963712970c8b532c947c0af214051cdb40395b4032efe492d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 6 02:56:45.136431 containerd[1911]: time="2026-03-06T02:56:45.135889594Z" level=info msg="Container e7ba84cc15d086559c294bca40ebbf1d1485b710dccf34c77a3b7cd2a397a899: CDI devices from CRI Config.CDIDevices: []"
Mar 6 02:56:45.152121 containerd[1911]: time="2026-03-06T02:56:45.152083034Z" level=info msg="CreateContainer within sandbox \"c5fcb764b6df7ce963712970c8b532c947c0af214051cdb40395b4032efe492d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e7ba84cc15d086559c294bca40ebbf1d1485b710dccf34c77a3b7cd2a397a899\""
Mar 6 02:56:45.152786 containerd[1911]: time="2026-03-06T02:56:45.152757957Z" level=info msg="StartContainer for \"e7ba84cc15d086559c294bca40ebbf1d1485b710dccf34c77a3b7cd2a397a899\""
Mar 6 02:56:45.153460 containerd[1911]: time="2026-03-06T02:56:45.153435881Z" level=info msg="connecting to shim e7ba84cc15d086559c294bca40ebbf1d1485b710dccf34c77a3b7cd2a397a899" address="unix:///run/containerd/s/2355a9c57cc571dfb9717ab29fea4d2097939500e03b50ed2455b6303da7a0f3" protocol=ttrpc version=3
Mar 6 02:56:45.173904 systemd[1]: Started cri-containerd-e7ba84cc15d086559c294bca40ebbf1d1485b710dccf34c77a3b7cd2a397a899.scope - libcontainer container e7ba84cc15d086559c294bca40ebbf1d1485b710dccf34c77a3b7cd2a397a899.
Mar 6 02:56:45.219607 containerd[1911]: time="2026-03-06T02:56:45.219567544Z" level=info msg="StartContainer for \"e7ba84cc15d086559c294bca40ebbf1d1485b710dccf34c77a3b7cd2a397a899\" returns successfully"
Mar 6 02:56:45.497932 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 6 02:56:46.129145 kubelet[3419]: I0306 02:56:46.129071 3419 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-mqcl4" podStartSLOduration=6.129060098 podStartE2EDuration="6.129060098s" podCreationTimestamp="2026-03-06 02:56:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:56:46.127875008 +0000 UTC m=+139.473264659" watchObservedRunningTime="2026-03-06 02:56:46.129060098 +0000 UTC m=+139.474449741"
Mar 6 02:56:46.474193 kubelet[3419]: E0306 02:56:46.473932 3419 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:50970->127.0.0.1:39507: write tcp 127.0.0.1:50970->127.0.0.1:39507: write: connection reset by peer
Mar 6 02:56:47.883726 systemd-networkd[1493]: lxc_health: Link UP
Mar 6 02:56:47.890654 systemd-networkd[1493]: lxc_health: Gained carrier
Mar 6 02:56:49.043974 systemd-networkd[1493]: lxc_health: Gained IPv6LL
Mar 6 02:56:54.805613 kubelet[3419]: E0306 02:56:54.805491 3419 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:51000->127.0.0.1:39507: write tcp 127.0.0.1:51000->127.0.0.1:39507: write: broken pipe
Mar 6 02:56:54.871701 sshd[5254]: Connection closed by 10.200.16.10 port 48484
Mar 6 02:56:54.872347 sshd-session[5207]: pam_unix(sshd:session): session closed for user core
Mar 6 02:56:54.875452 systemd-logind[1879]: Session 26 logged out. Waiting for processes to exit.
Mar 6 02:56:54.876804 systemd[1]: sshd@23-10.200.20.16:22-10.200.16.10:48484.service: Deactivated successfully.
Mar 6 02:56:54.880103 systemd[1]: session-26.scope: Deactivated successfully.
Mar 6 02:56:54.882764 systemd-logind[1879]: Removed session 26.