Mar 7 00:41:18.039187 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Mar 7 00:41:18.039204 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Mar 6 22:32:57 -00 2026
Mar 7 00:41:18.039211 kernel: KASLR enabled
Mar 7 00:41:18.039215 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 7 00:41:18.039219 kernel: printk: legacy bootconsole [pl11] enabled
Mar 7 00:41:18.039224 kernel: efi: EFI v2.7 by EDK II
Mar 7 00:41:18.039229 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db83598
Mar 7 00:41:18.039233 kernel: random: crng init done
Mar 7 00:41:18.039237 kernel: secureboot: Secure boot disabled
Mar 7 00:41:18.039241 kernel: ACPI: Early table checksum verification disabled
Mar 7 00:41:18.039245 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Mar 7 00:41:18.039249 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 00:41:18.039253 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 00:41:18.039257 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 7 00:41:18.039263 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 00:41:18.039267 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 00:41:18.039271 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 00:41:18.039275 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 00:41:18.039280 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 00:41:18.039285 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 00:41:18.039289 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 7 00:41:18.039293 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 00:41:18.039298 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 7 00:41:18.039302 kernel: ACPI: Use ACPI SPCR as default console: Yes
Mar 7 00:41:18.039306 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Mar 7 00:41:18.039310 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Mar 7 00:41:18.039315 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Mar 7 00:41:18.039319 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Mar 7 00:41:18.039323 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Mar 7 00:41:18.039328 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Mar 7 00:41:18.039333 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Mar 7 00:41:18.039337 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Mar 7 00:41:18.039341 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Mar 7 00:41:18.039346 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Mar 7 00:41:18.039350 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Mar 7 00:41:18.039354 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Mar 7 00:41:18.039358 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Mar 7 00:41:18.039363 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Mar 7 00:41:18.039367 kernel: Zone ranges:
Mar 7 00:41:18.039371 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 7 00:41:18.039378 kernel: DMA32 empty
Mar 7 00:41:18.039382 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 7 00:41:18.039387 kernel: Device empty
Mar 7 00:41:18.039391 kernel: Movable zone start for each node
Mar 7 00:41:18.039396 kernel: Early memory node ranges
Mar 7 00:41:18.039400 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 7 00:41:18.039405 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Mar 7 00:41:18.039409 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Mar 7 00:41:18.039414 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Mar 7 00:41:18.039418 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Mar 7 00:41:18.039423 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Mar 7 00:41:18.039427 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 7 00:41:18.039431 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 7 00:41:18.039436 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 7 00:41:18.039440 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Mar 7 00:41:18.039444 kernel: psci: probing for conduit method from ACPI.
Mar 7 00:41:18.039449 kernel: psci: PSCIv1.3 detected in firmware.
Mar 7 00:41:18.039453 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 7 00:41:18.039458 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 7 00:41:18.039463 kernel: psci: SMC Calling Convention v1.4
Mar 7 00:41:18.039467 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 7 00:41:18.039472 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 7 00:41:18.039476 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Mar 7 00:41:18.039480 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Mar 7 00:41:18.039485 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 7 00:41:18.039489 kernel: Detected PIPT I-cache on CPU0
Mar 7 00:41:18.039494 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Mar 7 00:41:18.039498 kernel: CPU features: detected: GIC system register CPU interface
Mar 7 00:41:18.039502 kernel: CPU features: detected: Spectre-v4
Mar 7 00:41:18.039507 kernel: CPU features: detected: Spectre-BHB
Mar 7 00:41:18.039512 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 7 00:41:18.039516 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 7 00:41:18.039521 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Mar 7 00:41:18.039525 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 7 00:41:18.039529 kernel: alternatives: applying boot alternatives
Mar 7 00:41:18.039535 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9c226afb416af9ef4d18a1b0d3e269f0ccb0a864e96b716716d400068481d58c
Mar 7 00:41:18.039540 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 00:41:18.039544 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 00:41:18.039559 kernel: Fallback order for Node 0: 0
Mar 7 00:41:18.039564 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Mar 7 00:41:18.039569 kernel: Policy zone: Normal
Mar 7 00:41:18.039574 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 00:41:18.039578 kernel: software IO TLB: area num 2.
Mar 7 00:41:18.039582 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB)
Mar 7 00:41:18.039587 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 00:41:18.039591 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 00:41:18.039596 kernel: rcu: RCU event tracing is enabled.
Mar 7 00:41:18.039601 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 00:41:18.039605 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 00:41:18.039610 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 00:41:18.039614 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 00:41:18.039619 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 00:41:18.039624 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 00:41:18.039628 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 00:41:18.039633 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 7 00:41:18.039637 kernel: GICv3: 960 SPIs implemented
Mar 7 00:41:18.039641 kernel: GICv3: 0 Extended SPIs implemented
Mar 7 00:41:18.039646 kernel: Root IRQ handler: gic_handle_irq
Mar 7 00:41:18.039650 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 7 00:41:18.039654 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Mar 7 00:41:18.039659 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 7 00:41:18.039663 kernel: ITS: No ITS available, not enabling LPIs
Mar 7 00:41:18.039668 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 00:41:18.039673 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Mar 7 00:41:18.039678 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 00:41:18.039682 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Mar 7 00:41:18.039687 kernel: Console: colour dummy device 80x25
Mar 7 00:41:18.039691 kernel: printk: legacy console [tty1] enabled
Mar 7 00:41:18.039696 kernel: ACPI: Core revision 20240827
Mar 7 00:41:18.039701 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Mar 7 00:41:18.039705 kernel: pid_max: default: 32768 minimum: 301
Mar 7 00:41:18.039710 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 7 00:41:18.039714 kernel: landlock: Up and running.
Mar 7 00:41:18.039719 kernel: SELinux: Initializing.
Mar 7 00:41:18.039724 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 00:41:18.039728 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 00:41:18.039733 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Mar 7 00:41:18.039738 kernel: Hyper-V: Host Build 10.0.26102.1212-1-0
Mar 7 00:41:18.039746 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 7 00:41:18.039751 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 00:41:18.039756 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 00:41:18.039761 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 7 00:41:18.039765 kernel: Remapping and enabling EFI services.
Mar 7 00:41:18.039770 kernel: smp: Bringing up secondary CPUs ...
Mar 7 00:41:18.039775 kernel: Detected PIPT I-cache on CPU1
Mar 7 00:41:18.039780 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 7 00:41:18.039785 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Mar 7 00:41:18.039790 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 00:41:18.039795 kernel: SMP: Total of 2 processors activated.
Mar 7 00:41:18.039799 kernel: CPU: All CPU(s) started at EL1
Mar 7 00:41:18.039805 kernel: CPU features: detected: 32-bit EL0 Support
Mar 7 00:41:18.039810 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 7 00:41:18.039815 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 7 00:41:18.039820 kernel: CPU features: detected: Common not Private translations
Mar 7 00:41:18.039824 kernel: CPU features: detected: CRC32 instructions
Mar 7 00:41:18.039829 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Mar 7 00:41:18.039834 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 7 00:41:18.039839 kernel: CPU features: detected: LSE atomic instructions
Mar 7 00:41:18.039843 kernel: CPU features: detected: Privileged Access Never
Mar 7 00:41:18.039849 kernel: CPU features: detected: Speculation barrier (SB)
Mar 7 00:41:18.039854 kernel: CPU features: detected: TLB range maintenance instructions
Mar 7 00:41:18.039858 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 7 00:41:18.039863 kernel: CPU features: detected: Scalable Vector Extension
Mar 7 00:41:18.039868 kernel: alternatives: applying system-wide alternatives
Mar 7 00:41:18.039873 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Mar 7 00:41:18.039878 kernel: SVE: maximum available vector length 16 bytes per vector
Mar 7 00:41:18.039882 kernel: SVE: default vector length 16 bytes per vector
Mar 7 00:41:18.039887 kernel: Memory: 3952828K/4194160K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved)
Mar 7 00:41:18.039893 kernel: devtmpfs: initialized
Mar 7 00:41:18.039898 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 00:41:18.039903 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 00:41:18.039907 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 7 00:41:18.039912 kernel: 0 pages in range for non-PLT usage
Mar 7 00:41:18.039917 kernel: 508400 pages in range for PLT usage
Mar 7 00:41:18.039922 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 00:41:18.039926 kernel: SMBIOS 3.1.0 present.
Mar 7 00:41:18.039932 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Mar 7 00:41:18.039937 kernel: DMI: Memory slots populated: 2/2
Mar 7 00:41:18.039941 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 00:41:18.039946 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 7 00:41:18.039951 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 7 00:41:18.039956 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 7 00:41:18.039961 kernel: audit: initializing netlink subsys (disabled)
Mar 7 00:41:18.039965 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Mar 7 00:41:18.039970 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 00:41:18.039975 kernel: cpuidle: using governor menu
Mar 7 00:41:18.039980 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 7 00:41:18.039985 kernel: ASID allocator initialised with 32768 entries
Mar 7 00:41:18.039990 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 00:41:18.039995 kernel: Serial: AMBA PL011 UART driver
Mar 7 00:41:18.039999 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 00:41:18.040004 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 00:41:18.040009 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 7 00:41:18.040014 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 7 00:41:18.040019 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 00:41:18.040024 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 00:41:18.040028 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 7 00:41:18.040033 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 7 00:41:18.040038 kernel: ACPI: Added _OSI(Module Device)
Mar 7 00:41:18.040043 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 00:41:18.040047 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 00:41:18.040052 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 00:41:18.040057 kernel: ACPI: Interpreter enabled
Mar 7 00:41:18.040062 kernel: ACPI: Using GIC for interrupt routing
Mar 7 00:41:18.040067 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 7 00:41:18.040072 kernel: printk: legacy console [ttyAMA0] enabled
Mar 7 00:41:18.040077 kernel: printk: legacy bootconsole [pl11] disabled
Mar 7 00:41:18.040081 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 7 00:41:18.040086 kernel: ACPI: CPU0 has been hot-added
Mar 7 00:41:18.040091 kernel: ACPI: CPU1 has been hot-added
Mar 7 00:41:18.040095 kernel: iommu: Default domain type: Translated
Mar 7 00:41:18.040100 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 7 00:41:18.040106 kernel: efivars: Registered efivars operations
Mar 7 00:41:18.040111 kernel: vgaarb: loaded
Mar 7 00:41:18.040115 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 7 00:41:18.040120 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 00:41:18.040125 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 00:41:18.040129 kernel: pnp: PnP ACPI init
Mar 7 00:41:18.040134 kernel: pnp: PnP ACPI: found 0 devices
Mar 7 00:41:18.040139 kernel: NET: Registered PF_INET protocol family
Mar 7 00:41:18.040144 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 00:41:18.040148 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 00:41:18.040154 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 00:41:18.040159 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 00:41:18.040164 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 00:41:18.040168 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 00:41:18.040173 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 00:41:18.040178 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 00:41:18.040183 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 00:41:18.040187 kernel: PCI: CLS 0 bytes, default 64
Mar 7 00:41:18.040192 kernel: kvm [1]: HYP mode not available
Mar 7 00:41:18.040197 kernel: Initialise system trusted keyrings
Mar 7 00:41:18.040202 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 00:41:18.040207 kernel: Key type asymmetric registered
Mar 7 00:41:18.040212 kernel: Asymmetric key parser 'x509' registered
Mar 7 00:41:18.040216 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 7 00:41:18.040221 kernel: io scheduler mq-deadline registered
Mar 7 00:41:18.040226 kernel: io scheduler kyber registered
Mar 7 00:41:18.040231 kernel: io scheduler bfq registered
Mar 7 00:41:18.040235 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 00:41:18.040241 kernel: thunder_xcv, ver 1.0
Mar 7 00:41:18.040246 kernel: thunder_bgx, ver 1.0
Mar 7 00:41:18.040250 kernel: nicpf, ver 1.0
Mar 7 00:41:18.040255 kernel: nicvf, ver 1.0
Mar 7 00:41:18.040359 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 7 00:41:18.040409 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-07T00:41:17 UTC (1772844077)
Mar 7 00:41:18.040415 kernel: efifb: probing for efifb
Mar 7 00:41:18.040421 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 7 00:41:18.040426 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 7 00:41:18.040431 kernel: efifb: scrolling: redraw
Mar 7 00:41:18.040436 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 7 00:41:18.040440 kernel: Console: switching to colour frame buffer device 128x48
Mar 7 00:41:18.040445 kernel: fb0: EFI VGA frame buffer device
Mar 7 00:41:18.040450 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 7 00:41:18.040455 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 7 00:41:18.040460 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Mar 7 00:41:18.040465 kernel: watchdog: NMI not fully supported
Mar 7 00:41:18.040470 kernel: watchdog: Hard watchdog permanently disabled
Mar 7 00:41:18.040475 kernel: NET: Registered PF_INET6 protocol family
Mar 7 00:41:18.040479 kernel: Segment Routing with IPv6
Mar 7 00:41:18.040484 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 00:41:18.040489 kernel: NET: Registered PF_PACKET protocol family
Mar 7 00:41:18.040494 kernel: Key type dns_resolver registered
Mar 7 00:41:18.040498 kernel: registered taskstats version 1
Mar 7 00:41:18.040503 kernel: Loading compiled-in X.509 certificates
Mar 7 00:41:18.040508 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 7eb2f80205b35f103c9dbaa59957e2e5fe845c0f'
Mar 7 00:41:18.040514 kernel: Demotion targets for Node 0: null
Mar 7 00:41:18.040519 kernel: Key type .fscrypt registered
Mar 7 00:41:18.040523 kernel: Key type fscrypt-provisioning registered
Mar 7 00:41:18.040528 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 00:41:18.040533 kernel: ima: Allocated hash algorithm: sha1
Mar 7 00:41:18.040537 kernel: ima: No architecture policies found
Mar 7 00:41:18.040542 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 7 00:41:18.040547 kernel: clk: Disabling unused clocks
Mar 7 00:41:18.042589 kernel: PM: genpd: Disabling unused power domains
Mar 7 00:41:18.042600 kernel: Warning: unable to open an initial console.
Mar 7 00:41:18.042605 kernel: Freeing unused kernel memory: 39552K
Mar 7 00:41:18.042610 kernel: Run /init as init process
Mar 7 00:41:18.042615 kernel: with arguments:
Mar 7 00:41:18.042620 kernel: /init
Mar 7 00:41:18.042625 kernel: with environment:
Mar 7 00:41:18.042630 kernel: HOME=/
Mar 7 00:41:18.042635 kernel: TERM=linux
Mar 7 00:41:18.042640 systemd[1]: Successfully made /usr/ read-only.
Mar 7 00:41:18.042649 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 7 00:41:18.042655 systemd[1]: Detected virtualization microsoft.
Mar 7 00:41:18.042660 systemd[1]: Detected architecture arm64.
Mar 7 00:41:18.042665 systemd[1]: Running in initrd.
Mar 7 00:41:18.042670 systemd[1]: No hostname configured, using default hostname.
Mar 7 00:41:18.042676 systemd[1]: Hostname set to .
Mar 7 00:41:18.042681 systemd[1]: Initializing machine ID from random generator.
Mar 7 00:41:18.042687 systemd[1]: Queued start job for default target initrd.target.
Mar 7 00:41:18.042693 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 00:41:18.042698 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 00:41:18.042704 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 00:41:18.042709 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 00:41:18.042715 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 00:41:18.042721 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 00:41:18.042728 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 00:41:18.042733 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 00:41:18.042738 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 00:41:18.042744 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 00:41:18.042749 systemd[1]: Reached target paths.target - Path Units.
Mar 7 00:41:18.042754 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 00:41:18.042759 systemd[1]: Reached target swap.target - Swaps.
Mar 7 00:41:18.042764 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 00:41:18.042771 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 00:41:18.042776 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 00:41:18.042781 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 00:41:18.042787 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 7 00:41:18.042792 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 00:41:18.042797 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 00:41:18.042803 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 00:41:18.042808 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 00:41:18.042814 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 00:41:18.042819 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 00:41:18.042825 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 00:41:18.042830 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 7 00:41:18.042836 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 00:41:18.042841 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 00:41:18.042846 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 00:41:18.042870 systemd-journald[226]: Collecting audit messages is disabled.
Mar 7 00:41:18.042884 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 00:41:18.042890 systemd-journald[226]: Journal started
Mar 7 00:41:18.042905 systemd-journald[226]: Runtime Journal (/run/log/journal/875fb928b0bc469b8acf4d80243f81ff) is 8M, max 78.3M, 70.3M free.
Mar 7 00:41:18.050864 systemd-modules-load[228]: Inserted module 'overlay'
Mar 7 00:41:18.058897 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 00:41:18.066604 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 00:41:18.083853 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 00:41:18.083873 kernel: Bridge firewalling registered
Mar 7 00:41:18.079883 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 00:41:18.083255 systemd-modules-load[228]: Inserted module 'br_netfilter'
Mar 7 00:41:18.088825 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 00:41:18.096618 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 00:41:18.104519 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:41:18.113863 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 00:41:18.127407 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 00:41:18.139499 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 00:41:18.157645 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 00:41:18.164504 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 00:41:18.171448 systemd-tmpfiles[248]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 7 00:41:18.180596 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 00:41:18.188692 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 00:41:18.205624 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 00:41:18.222438 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 00:41:18.230993 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 00:41:18.247663 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 00:41:18.259456 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9c226afb416af9ef4d18a1b0d3e269f0ccb0a864e96b716716d400068481d58c
Mar 7 00:41:18.290984 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 00:41:18.299539 systemd-resolved[263]: Positive Trust Anchors:
Mar 7 00:41:18.299556 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 00:41:18.299576 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 00:41:18.301106 systemd-resolved[263]: Defaulting to hostname 'linux'.
Mar 7 00:41:18.305416 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 00:41:18.316786 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 00:41:18.388573 kernel: SCSI subsystem initialized
Mar 7 00:41:18.394562 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 00:41:18.402575 kernel: iscsi: registered transport (tcp)
Mar 7 00:41:18.415165 kernel: iscsi: registered transport (qla4xxx)
Mar 7 00:41:18.415176 kernel: QLogic iSCSI HBA Driver
Mar 7 00:41:18.428825 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 00:41:18.449019 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 00:41:18.455344 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 00:41:18.502736 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 00:41:18.508676 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 00:41:18.569574 kernel: raid6: neonx8 gen() 18511 MB/s
Mar 7 00:41:18.588560 kernel: raid6: neonx4 gen() 18562 MB/s
Mar 7 00:41:18.607556 kernel: raid6: neonx2 gen() 17080 MB/s
Mar 7 00:41:18.627557 kernel: raid6: neonx1 gen() 15026 MB/s
Mar 7 00:41:18.646579 kernel: raid6: int64x8 gen() 10504 MB/s
Mar 7 00:41:18.665559 kernel: raid6: int64x4 gen() 10605 MB/s
Mar 7 00:41:18.685558 kernel: raid6: int64x2 gen() 8988 MB/s
Mar 7 00:41:18.706935 kernel: raid6: int64x1 gen() 7007 MB/s
Mar 7 00:41:18.706944 kernel: raid6: using algorithm neonx4 gen() 18562 MB/s
Mar 7 00:41:18.728609 kernel: raid6: .... xor() 15153 MB/s, rmw enabled
Mar 7 00:41:18.728617 kernel: raid6: using neon recovery algorithm
Mar 7 00:41:18.737254 kernel: xor: measuring software checksum speed
Mar 7 00:41:18.737320 kernel: 8regs : 28627 MB/sec
Mar 7 00:41:18.739906 kernel: 32regs : 28795 MB/sec
Mar 7 00:41:18.742260 kernel: arm64_neon : 37656 MB/sec
Mar 7 00:41:18.745415 kernel: xor: using function: arm64_neon (37656 MB/sec)
Mar 7 00:41:18.783570 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 00:41:18.788850 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 00:41:18.797690 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 00:41:18.823923 systemd-udevd[474]: Using default interface naming scheme 'v255'.
Mar 7 00:41:18.828337 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 00:41:18.840167 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 00:41:18.868535 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation
Mar 7 00:41:18.890144 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 00:41:18.896908 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 00:41:18.938799 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 00:41:18.946493 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 00:41:19.016572 kernel: hv_vmbus: Vmbus version:5.3
Mar 7 00:41:19.018067 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 00:41:19.022144 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:41:19.040911 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 00:41:19.050892 kernel: hv_vmbus: registering driver hid_hyperv
Mar 7 00:41:19.050910 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 7 00:41:19.050917 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Mar 7 00:41:19.072155 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Mar 7 00:41:19.072190 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 7 00:41:19.072311 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 7 00:41:19.073075 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 00:41:19.083605 kernel: hv_vmbus: registering driver hv_netvsc
Mar 7 00:41:19.083620 kernel: hv_vmbus: registering driver hv_storvsc
Mar 7 00:41:19.083627 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 7 00:41:19.090225 kernel: scsi host0: storvsc_host_t
Mar 7 00:41:19.090267 kernel: scsi host1: storvsc_host_t
Mar 7 00:41:19.093842 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 7 00:41:19.102717 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 7 00:41:19.112565 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 7 00:41:19.123574 kernel: PTP clock support registered
Mar 7 00:41:19.128894 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:41:19.165711 kernel: hv_utils: Registering HyperV Utility Driver
Mar 7 00:41:19.165726 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 7 00:41:19.165856 kernel: hv_vmbus: registering driver hv_utils
Mar 7 00:41:19.165863 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 7 00:41:19.165932 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 7 00:41:19.165992 kernel: hv_utils: Heartbeat IC version 3.0
Mar 7 00:41:19.166005 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 7 00:41:19.166064 kernel: hv_utils: Shutdown IC version 3.2
Mar 7 00:41:19.166070 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 7 00:41:19.166130 kernel: hv_utils: TimeSync IC version 4.0
Mar 7 00:41:19.535607 kernel: hv_netvsc 7ced8db8-728b-7ced-8db8-728b7ced8db8 eth0: VF slot 1 added
Mar 7 00:41:19.535760 systemd-resolved[263]: Clock change detected. Flushing caches.
Mar 7 00:41:19.552115 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 00:41:19.552141 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 7 00:41:19.560023 kernel: hv_vmbus: registering driver hv_pci
Mar 7 00:41:19.560054 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 7 00:41:19.560185 kernel: hv_pci 4183fbd9-bc77-4304-8cf2-22d73c4fe692: PCI VMBus probing: Using version 0x10004
Mar 7 00:41:19.567298 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 7 00:41:19.572980 kernel: hv_pci 4183fbd9-bc77-4304-8cf2-22d73c4fe692: PCI host bridge to bus bc77:00
Mar 7 00:41:19.573105 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 7 00:41:19.578086 kernel: pci_bus bc77:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 7 00:41:19.582580 kernel: pci_bus bc77:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 7 00:41:19.588308 kernel: pci bc77:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Mar 7 00:41:19.593238 kernel: pci bc77:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 7 00:41:19.598299 kernel: pci bc77:00:02.0: enabling Extended Tags
Mar 7 00:41:19.608289 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#297 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 7 00:41:19.618259 kernel: pci bc77:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at bc77:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Mar 7 00:41:19.630811 kernel: pci_bus bc77:00: busn_res: [bus 00-ff] end is updated to 00
Mar 7 00:41:19.630937 kernel: pci bc77:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Mar 7 00:41:19.644513 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#272 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 7 00:41:19.697391 kernel: mlx5_core bc77:00:02.0: enabling device (0000 -> 0002)
Mar 7 00:41:19.705607 kernel: mlx5_core bc77:00:02.0: PTM is not supported by PCIe
Mar 7 00:41:19.705754 kernel: mlx5_core bc77:00:02.0: firmware version: 16.30.5026
Mar 7 00:41:19.876020 kernel: hv_netvsc 7ced8db8-728b-7ced-8db8-728b7ced8db8 eth0: VF registering: eth1
Mar 7 00:41:19.876232 kernel: mlx5_core bc77:00:02.0 eth1: joined to eth0
Mar 7 00:41:19.882307 kernel: mlx5_core bc77:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 7 00:41:19.891255 kernel: mlx5_core bc77:00:02.0 enP48247s1: renamed from eth1
Mar 7 00:41:20.060797 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 7 00:41:20.157150 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 7 00:41:20.185426 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 7 00:41:20.190278 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 7 00:41:20.200939 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 00:41:20.221936 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 7 00:41:20.232386 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 00:41:20.236866 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 00:41:20.245133 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 00:41:20.254448 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 00:41:20.266334 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 00:41:20.285244 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 00:41:20.288250 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 00:41:20.305239 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 00:41:21.309310 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 00:41:21.309365 disk-uuid[652]: The operation has completed successfully.
Mar 7 00:41:21.387447 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 00:41:21.387542 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 00:41:21.406710 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 00:41:21.426655 sh[818]: Success
Mar 7 00:41:21.462085 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 00:41:21.462154 kernel: device-mapper: uevent: version 1.0.3
Mar 7 00:41:21.467057 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 7 00:41:21.477270 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Mar 7 00:41:21.743614 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 00:41:21.748801 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 00:41:21.766295 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 00:41:21.788713 kernel: BTRFS: device fsid 376b0ad0-b1fc-4099-8019-6f1f3d92d570 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (836)
Mar 7 00:41:21.788746 kernel: BTRFS info (device dm-0): first mount of filesystem 376b0ad0-b1fc-4099-8019-6f1f3d92d570
Mar 7 00:41:21.793098 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 7 00:41:22.139035 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 7 00:41:22.139128 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 7 00:41:22.174031 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 00:41:22.178084 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 7 00:41:22.185385 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 00:41:22.186036 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 00:41:22.206855 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 00:41:22.236245 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (859)
Mar 7 00:41:22.246222 kernel: BTRFS info (device sda6): first mount of filesystem a2920a34-fe1c-42ba-814e-fd8c35911ce4
Mar 7 00:41:22.246266 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 7 00:41:22.275132 kernel: BTRFS info (device sda6): turning on async discard
Mar 7 00:41:22.275173 kernel: BTRFS info (device sda6): enabling free space tree
Mar 7 00:41:22.283268 kernel: BTRFS info (device sda6): last unmount of filesystem a2920a34-fe1c-42ba-814e-fd8c35911ce4
Mar 7 00:41:22.284443 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 00:41:22.292780 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 00:41:22.325862 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 00:41:22.337175 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 00:41:22.369031 systemd-networkd[1005]: lo: Link UP
Mar 7 00:41:22.369042 systemd-networkd[1005]: lo: Gained carrier
Mar 7 00:41:22.369745 systemd-networkd[1005]: Enumeration completed
Mar 7 00:41:22.370371 systemd-networkd[1005]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:41:22.370374 systemd-networkd[1005]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 00:41:22.371695 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 00:41:22.376068 systemd[1]: Reached target network.target - Network.
Mar 7 00:41:22.443240 kernel: mlx5_core bc77:00:02.0 enP48247s1: Link up
Mar 7 00:41:22.475235 kernel: hv_netvsc 7ced8db8-728b-7ced-8db8-728b7ced8db8 eth0: Data path switched to VF: enP48247s1
Mar 7 00:41:22.475540 systemd-networkd[1005]: enP48247s1: Link UP
Mar 7 00:41:22.475598 systemd-networkd[1005]: eth0: Link UP
Mar 7 00:41:22.475682 systemd-networkd[1005]: eth0: Gained carrier
Mar 7 00:41:22.475691 systemd-networkd[1005]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:41:22.482348 systemd-networkd[1005]: enP48247s1: Gained carrier
Mar 7 00:41:22.499255 systemd-networkd[1005]: eth0: DHCPv4 address 10.200.20.29/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 7 00:41:23.503983 ignition[959]: Ignition 2.22.0
Mar 7 00:41:23.503996 ignition[959]: Stage: fetch-offline
Mar 7 00:41:23.507117 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 00:41:23.504094 ignition[959]: no configs at "/usr/lib/ignition/base.d"
Mar 7 00:41:23.514594 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 7 00:41:23.504100 ignition[959]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 00:41:23.504171 ignition[959]: parsed url from cmdline: ""
Mar 7 00:41:23.504173 ignition[959]: no config URL provided
Mar 7 00:41:23.504176 ignition[959]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 00:41:23.504180 ignition[959]: no config at "/usr/lib/ignition/user.ign"
Mar 7 00:41:23.504184 ignition[959]: failed to fetch config: resource requires networking
Mar 7 00:41:23.505361 ignition[959]: Ignition finished successfully
Mar 7 00:41:23.554543 ignition[1016]: Ignition 2.22.0
Mar 7 00:41:23.554554 ignition[1016]: Stage: fetch
Mar 7 00:41:23.554731 ignition[1016]: no configs at "/usr/lib/ignition/base.d"
Mar 7 00:41:23.554738 ignition[1016]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 00:41:23.554814 ignition[1016]: parsed url from cmdline: ""
Mar 7 00:41:23.554817 ignition[1016]: no config URL provided
Mar 7 00:41:23.554820 ignition[1016]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 00:41:23.554827 ignition[1016]: no config at "/usr/lib/ignition/user.ign"
Mar 7 00:41:23.554841 ignition[1016]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 7 00:41:23.645965 ignition[1016]: GET result: OK
Mar 7 00:41:23.646032 ignition[1016]: config has been read from IMDS userdata
Mar 7 00:41:23.646049 ignition[1016]: parsing config with SHA512: 83b80e3c9daf49ff84b4c15a8bcc4c3a0f7bb9a9d463c05503a754c7d024c4fe95cc0090b453b9d3e6ead5886bc5c98902b15b07a6c87509bc024425ec87bf04
Mar 7 00:41:23.649249 unknown[1016]: fetched base config from "system"
Mar 7 00:41:23.649588 ignition[1016]: fetch: fetch complete
Mar 7 00:41:23.649253 unknown[1016]: fetched base config from "system"
Mar 7 00:41:23.649591 ignition[1016]: fetch: fetch passed
Mar 7 00:41:23.649256 unknown[1016]: fetched user config from "azure"
Mar 7 00:41:23.649633 ignition[1016]: Ignition finished successfully
Mar 7 00:41:23.651323 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 00:41:23.656530 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 00:41:23.668358 systemd-networkd[1005]: eth0: Gained IPv6LL
Mar 7 00:41:23.695706 ignition[1022]: Ignition 2.22.0
Mar 7 00:41:23.695721 ignition[1022]: Stage: kargs
Mar 7 00:41:23.699687 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 00:41:23.695885 ignition[1022]: no configs at "/usr/lib/ignition/base.d"
Mar 7 00:41:23.706103 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 00:41:23.695892 ignition[1022]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 00:41:23.696492 ignition[1022]: kargs: kargs passed
Mar 7 00:41:23.696524 ignition[1022]: Ignition finished successfully
Mar 7 00:41:23.739679 ignition[1028]: Ignition 2.22.0
Mar 7 00:41:23.739692 ignition[1028]: Stage: disks
Mar 7 00:41:23.743539 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 00:41:23.739848 ignition[1028]: no configs at "/usr/lib/ignition/base.d"
Mar 7 00:41:23.750685 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 00:41:23.739855 ignition[1028]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 00:41:23.758607 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 00:41:23.740399 ignition[1028]: disks: disks passed
Mar 7 00:41:23.766752 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 00:41:23.740448 ignition[1028]: Ignition finished successfully
Mar 7 00:41:23.775151 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 00:41:23.783618 systemd[1]: Reached target basic.target - Basic System.
Mar 7 00:41:23.792975 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 00:41:23.878287 systemd-fsck[1036]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Mar 7 00:41:23.886567 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 00:41:23.892787 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 00:41:24.158243 kernel: EXT4-fs (sda9): mounted filesystem dc3cd474-cc91-4aa5-8987-77b9669cedbb r/w with ordered data mode. Quota mode: none.
Mar 7 00:41:24.158974 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 00:41:24.162780 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 00:41:24.188166 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 00:41:24.201806 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 00:41:24.210339 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 7 00:41:24.216215 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 00:41:24.216258 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 00:41:24.225495 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 00:41:24.238459 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 00:41:24.263256 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1050)
Mar 7 00:41:24.272441 kernel: BTRFS info (device sda6): first mount of filesystem a2920a34-fe1c-42ba-814e-fd8c35911ce4
Mar 7 00:41:24.272460 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 7 00:41:24.281021 kernel: BTRFS info (device sda6): turning on async discard
Mar 7 00:41:24.281056 kernel: BTRFS info (device sda6): enabling free space tree
Mar 7 00:41:24.282185 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 00:41:24.977247 coreos-metadata[1052]: Mar 07 00:41:24.977 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 7 00:41:24.983448 coreos-metadata[1052]: Mar 07 00:41:24.983 INFO Fetch successful
Mar 7 00:41:24.983448 coreos-metadata[1052]: Mar 07 00:41:24.983 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 7 00:41:24.995676 coreos-metadata[1052]: Mar 07 00:41:24.995 INFO Fetch successful
Mar 7 00:41:25.013973 coreos-metadata[1052]: Mar 07 00:41:25.013 INFO wrote hostname ci-4459.2.3-n-e6e869ea98 to /sysroot/etc/hostname
Mar 7 00:41:25.021085 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 7 00:41:25.233848 initrd-setup-root[1080]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 00:41:25.256910 initrd-setup-root[1087]: cut: /sysroot/etc/group: No such file or directory
Mar 7 00:41:25.264244 initrd-setup-root[1094]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 00:41:25.285919 initrd-setup-root[1101]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 00:41:26.376651 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 00:41:26.381888 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 00:41:26.396822 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 00:41:26.407805 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 00:41:26.417298 kernel: BTRFS info (device sda6): last unmount of filesystem a2920a34-fe1c-42ba-814e-fd8c35911ce4
Mar 7 00:41:26.436532 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 00:41:26.445016 ignition[1168]: INFO : Ignition 2.22.0
Mar 7 00:41:26.448709 ignition[1168]: INFO : Stage: mount
Mar 7 00:41:26.448709 ignition[1168]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 00:41:26.448709 ignition[1168]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 00:41:26.448709 ignition[1168]: INFO : mount: mount passed
Mar 7 00:41:26.448709 ignition[1168]: INFO : Ignition finished successfully
Mar 7 00:41:26.449378 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 00:41:26.456584 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 00:41:26.477335 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 00:41:26.505297 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1180)
Mar 7 00:41:26.514708 kernel: BTRFS info (device sda6): first mount of filesystem a2920a34-fe1c-42ba-814e-fd8c35911ce4
Mar 7 00:41:26.514742 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 7 00:41:26.523600 kernel: BTRFS info (device sda6): turning on async discard
Mar 7 00:41:26.523625 kernel: BTRFS info (device sda6): enabling free space tree
Mar 7 00:41:26.525057 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 00:41:26.552858 ignition[1196]: INFO : Ignition 2.22.0
Mar 7 00:41:26.552858 ignition[1196]: INFO : Stage: files
Mar 7 00:41:26.558686 ignition[1196]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 00:41:26.558686 ignition[1196]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 00:41:26.558686 ignition[1196]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 00:41:26.572519 ignition[1196]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 00:41:26.572519 ignition[1196]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 00:41:26.623935 ignition[1196]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 00:41:26.629709 ignition[1196]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 00:41:26.629709 ignition[1196]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 00:41:26.626678 unknown[1196]: wrote ssh authorized keys file for user: core
Mar 7 00:41:26.677815 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 7 00:41:26.685499 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 7 00:41:26.711939 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 00:41:26.860496 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 7 00:41:26.860496 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 7 00:41:26.860496 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 7 00:41:27.138946 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 7 00:41:27.222784 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 7 00:41:27.229934 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 00:41:27.229934 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 00:41:27.229934 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 00:41:27.229934 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 00:41:27.229934 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 00:41:27.229934 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 00:41:27.229934 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 00:41:27.229934 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 00:41:27.285181 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 00:41:27.285181 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 00:41:27.285181 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 7 00:41:27.285181 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 7 00:41:27.285181 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 7 00:41:27.285181 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1
Mar 7 00:41:27.625598 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 7 00:41:27.988472 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 7 00:41:27.988472 ignition[1196]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 7 00:41:28.056465 ignition[1196]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 00:41:28.070888 ignition[1196]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 00:41:28.070888 ignition[1196]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 7 00:41:28.084663 ignition[1196]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 00:41:28.084663 ignition[1196]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 00:41:28.084663 ignition[1196]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 00:41:28.084663 ignition[1196]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 00:41:28.084663 ignition[1196]: INFO : files: files passed
Mar 7 00:41:28.084663 ignition[1196]: INFO : Ignition finished successfully
Mar 7 00:41:28.081466 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 00:41:28.089588 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 00:41:28.114846 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 00:41:28.122266 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 00:41:28.122385 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 00:41:28.166196 initrd-setup-root-after-ignition[1227]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 00:41:28.166196 initrd-setup-root-after-ignition[1227]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 00:41:28.179545 initrd-setup-root-after-ignition[1231]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 00:41:28.173380 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 00:41:28.184711 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 00:41:28.195541 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 00:41:28.243826 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 00:41:28.243928 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 00:41:28.252864 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 00:41:28.262065 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 00:41:28.270196 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 00:41:28.270903 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 00:41:28.302152 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 00:41:28.308244 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 00:41:28.332167 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 00:41:28.336996 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 00:41:28.346095 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 00:41:28.354087 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 00:41:28.354177 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 00:41:28.365978 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 00:41:28.370221 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 00:41:28.378624 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 00:41:28.387001 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 00:41:28.395315 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 00:41:28.403901 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 7 00:41:28.413041 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 00:41:28.421325 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 00:41:28.430571 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 00:41:28.438546 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 00:41:28.447821 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 00:41:28.454838 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 00:41:28.454949 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 00:41:28.465639 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 00:41:28.470125 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 00:41:28.478867 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 00:41:28.478931 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 00:41:28.487542 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 00:41:28.487635 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 00:41:28.500264 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 00:41:28.500346 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 00:41:28.505611 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 00:41:28.505682 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 00:41:28.513124 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 7 00:41:28.578124 ignition[1251]: INFO : Ignition 2.22.0
Mar 7 00:41:28.578124 ignition[1251]: INFO : Stage: umount
Mar 7 00:41:28.578124 ignition[1251]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 00:41:28.578124 ignition[1251]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 00:41:28.578124 ignition[1251]: INFO : umount: umount passed
Mar 7 00:41:28.578124 ignition[1251]: INFO : Ignition finished successfully
Mar 7 00:41:28.513187 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 7 00:41:28.523953 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 00:41:28.548885 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 00:41:28.558160 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 00:41:28.558534 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 00:41:28.573268 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 00:41:28.573350 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 00:41:28.585245 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 00:41:28.585333 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 00:41:28.593875 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 00:41:28.593948 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 00:41:28.598327 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 00:41:28.598362 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 00:41:28.602659 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 7 00:41:28.602684 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 7 00:41:28.613340 systemd[1]: Stopped target network.target - Network.
Mar 7 00:41:28.619862 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 00:41:28.619917 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 00:41:28.627950 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 00:41:28.641542 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 00:41:28.645247 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 00:41:28.654557 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 00:41:28.662483 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 00:41:28.669734 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 00:41:28.669788 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 00:41:28.677876 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 00:41:28.677905 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 00:41:28.685374 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 00:41:28.685423 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 00:41:28.693635 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 00:41:28.693661 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 00:41:28.701418 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 00:41:28.709162 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 00:41:28.719217 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 00:41:28.719699 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 00:41:28.719768 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 00:41:28.908886 kernel: hv_netvsc 7ced8db8-728b-7ced-8db8-728b7ced8db8 eth0: Data path switched from VF: enP48247s1
Mar 7 00:41:28.728954 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 00:41:28.729045 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 00:41:28.738158 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 7 00:41:28.739817 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 00:41:28.739889 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 00:41:28.753859 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 7 00:41:28.754077 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 00:41:28.754168 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 00:41:28.765957 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 7 00:41:28.766392 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 7 00:41:28.773442 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 00:41:28.773478 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 00:41:28.782706 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 00:41:28.794024 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 00:41:28.794077 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 00:41:28.802163 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 00:41:28.802210 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 00:41:28.813711 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 00:41:28.813763 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 00:41:28.818286 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 00:41:28.830014 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 7 00:41:28.841462 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 00:41:28.850604 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 00:41:28.859513 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 00:41:28.859622 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 00:41:28.869409 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 00:41:28.869478 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 00:41:28.877440 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 00:41:28.877468 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 00:41:28.885512 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 00:41:28.885557 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 00:41:28.898925 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 00:41:28.898975 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 00:41:28.913270 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 00:41:28.913328 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 00:41:28.927737 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 00:41:28.927792 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 00:41:28.942382 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 00:41:28.957215 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 7 00:41:28.957293 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 00:41:28.966778 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 00:41:28.966822 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 00:41:28.978265 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 7 00:41:28.978312 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 00:41:28.990557 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 00:41:28.990600 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 00:41:28.996208 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 00:41:28.996257 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:41:29.009792 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 00:41:29.009863 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 00:41:29.017715 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 00:41:29.017780 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 00:41:29.023145 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 00:41:29.033055 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 00:41:29.089513 systemd[1]: Switching root.
Mar 7 00:41:29.250659 systemd-journald[226]: Journal stopped
Mar 7 00:41:33.979960 systemd-journald[226]: Received SIGTERM from PID 1 (systemd).
Mar 7 00:41:33.979979 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 00:41:33.979987 kernel: SELinux: policy capability open_perms=1
Mar 7 00:41:33.979993 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 00:41:33.979999 kernel: SELinux: policy capability always_check_network=0
Mar 7 00:41:33.980004 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 00:41:33.980010 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 00:41:33.980015 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 00:41:33.980020 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 00:41:33.980026 kernel: SELinux: policy capability userspace_initial_context=0
Mar 7 00:41:33.980031 kernel: audit: type=1403 audit(1772844090.341:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 00:41:33.980038 systemd[1]: Successfully loaded SELinux policy in 264.681ms.
Mar 7 00:41:33.980044 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.283ms.
Mar 7 00:41:33.980051 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 7 00:41:33.980058 systemd[1]: Detected virtualization microsoft.
Mar 7 00:41:33.980064 systemd[1]: Detected architecture arm64.
Mar 7 00:41:33.980070 systemd[1]: Detected first boot.
Mar 7 00:41:33.980076 systemd[1]: Hostname set to .
Mar 7 00:41:33.980083 systemd[1]: Initializing machine ID from random generator.
Mar 7 00:41:33.980089 zram_generator::config[1294]: No configuration found.
Mar 7 00:41:33.980096 kernel: NET: Registered PF_VSOCK protocol family
Mar 7 00:41:33.980101 systemd[1]: Populated /etc with preset unit settings.
Mar 7 00:41:33.980107 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 7 00:41:33.980114 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 7 00:41:33.980120 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 7 00:41:33.980126 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 7 00:41:33.980132 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 00:41:33.980138 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 00:41:33.980144 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 00:41:33.980150 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 00:41:33.980157 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 00:41:33.980163 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 00:41:33.980169 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 00:41:33.980175 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 00:41:33.980181 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 00:41:33.980187 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 00:41:33.980193 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 00:41:33.980199 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 00:41:33.980206 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 00:41:33.980213 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 00:41:33.980221 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 7 00:41:33.980242 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 00:41:33.980250 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 00:41:33.980256 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 7 00:41:33.980262 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 7 00:41:33.980268 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 7 00:41:33.980275 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 00:41:33.980282 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 00:41:33.980288 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 00:41:33.980294 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 00:41:33.980300 systemd[1]: Reached target swap.target - Swaps.
Mar 7 00:41:33.980306 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 00:41:33.980312 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 00:41:33.980320 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 7 00:41:33.980326 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 00:41:33.980332 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 00:41:33.980338 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 00:41:33.980344 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 00:41:33.980350 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 00:41:33.980357 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 00:41:33.980364 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 00:41:33.980370 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 00:41:33.980376 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 00:41:33.980383 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 00:41:33.980389 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 00:41:33.980395 systemd[1]: Reached target machines.target - Containers.
Mar 7 00:41:33.980401 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 00:41:33.980408 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 00:41:33.980415 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 00:41:33.980421 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 00:41:33.980427 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 00:41:33.980433 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 00:41:33.980439 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 00:41:33.980445 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 00:41:33.980452 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 00:41:33.980459 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 00:41:33.980465 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 7 00:41:33.980471 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 7 00:41:33.980477 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 7 00:41:33.980483 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 7 00:41:33.980490 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 7 00:41:33.980497 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 00:41:33.980503 kernel: loop: module loaded
Mar 7 00:41:33.980509 kernel: fuse: init (API version 7.41)
Mar 7 00:41:33.980515 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 00:41:33.980520 kernel: ACPI: bus type drm_connector registered
Mar 7 00:41:33.980526 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 00:41:33.980533 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 00:41:33.980539 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 7 00:41:33.980563 systemd-journald[1384]: Collecting audit messages is disabled.
Mar 7 00:41:33.980579 systemd-journald[1384]: Journal started
Mar 7 00:41:33.980593 systemd-journald[1384]: Runtime Journal (/run/log/journal/b6ecc9a4deca4a02ab43fd004854dfa9) is 8M, max 78.3M, 70.3M free.
Mar 7 00:41:33.293702 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 00:41:33.301668 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 7 00:41:33.302039 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 7 00:41:33.302299 systemd[1]: systemd-journald.service: Consumed 2.290s CPU time.
Mar 7 00:41:34.000825 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 00:41:34.007304 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 7 00:41:34.007335 systemd[1]: Stopped verity-setup.service.
Mar 7 00:41:34.020304 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 00:41:34.020970 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 00:41:34.025272 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 00:41:34.030059 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 00:41:34.034376 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 00:41:34.038803 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 00:41:34.043489 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 00:41:34.047673 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 00:41:34.052671 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 00:41:34.057998 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 00:41:34.058148 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 00:41:34.063269 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 00:41:34.063399 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 00:41:34.068700 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 00:41:34.068828 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 00:41:34.073380 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 00:41:34.073512 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 00:41:34.078937 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 00:41:34.079060 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 00:41:34.083691 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 00:41:34.083804 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 00:41:34.088497 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 00:41:34.093624 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 00:41:34.099087 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 00:41:34.111601 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 00:41:34.117071 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 00:41:34.129321 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 00:41:34.133816 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 00:41:34.133905 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 00:41:34.138802 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 7 00:41:34.146331 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 00:41:34.151000 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 00:41:34.161419 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 00:41:34.168166 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 00:41:34.174521 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 00:41:34.175272 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 00:41:34.181522 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 00:41:34.184389 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 00:41:34.207497 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 00:41:34.212844 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 00:41:34.223360 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 7 00:41:34.231253 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 00:41:34.236415 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 00:41:34.241378 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 00:41:34.246491 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 00:41:34.251680 systemd-journald[1384]: Time spent on flushing to /var/log/journal/b6ecc9a4deca4a02ab43fd004854dfa9 is 9.473ms for 932 entries.
Mar 7 00:41:34.251680 systemd-journald[1384]: System Journal (/var/log/journal/b6ecc9a4deca4a02ab43fd004854dfa9) is 8M, max 2.6G, 2.6G free.
Mar 7 00:41:34.309967 systemd-journald[1384]: Received client request to flush runtime journal.
Mar 7 00:41:34.310046 kernel: loop0: detected capacity change from 0 to 119840
Mar 7 00:41:34.258823 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 00:41:34.266366 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 7 00:41:34.274734 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 00:41:34.311635 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 00:41:34.337459 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 00:41:34.339514 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 7 00:41:34.363311 systemd-tmpfiles[1433]: ACLs are not supported, ignoring.
Mar 7 00:41:34.363323 systemd-tmpfiles[1433]: ACLs are not supported, ignoring.
Mar 7 00:41:34.365827 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 00:41:34.372078 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 00:41:34.509888 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 00:41:34.518506 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 00:41:34.534915 systemd-tmpfiles[1452]: ACLs are not supported, ignoring.
Mar 7 00:41:34.534928 systemd-tmpfiles[1452]: ACLs are not supported, ignoring.
Mar 7 00:41:34.538702 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 00:41:34.679254 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 00:41:34.697048 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 00:41:34.703305 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 00:41:34.724247 kernel: loop1: detected capacity change from 0 to 100632
Mar 7 00:41:34.735918 systemd-udevd[1458]: Using default interface naming scheme 'v255'.
Mar 7 00:41:34.931711 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 00:41:34.945347 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 00:41:34.998422 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 00:41:35.029512 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 7 00:41:35.058700 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 00:41:35.087290 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 00:41:35.147250 kernel: hv_vmbus: registering driver hyperv_fb
Mar 7 00:41:35.147346 kernel: hv_vmbus: registering driver hv_balloon
Mar 7 00:41:35.147366 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 7 00:41:35.160638 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 7 00:41:35.160723 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 7 00:41:35.160737 kernel: Console: switching to colour dummy device 80x25
Mar 7 00:41:35.166496 kernel: hv_balloon: Memory hot add disabled on ARM64
Mar 7 00:41:35.168280 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#275 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 7 00:41:35.183935 kernel: Console: switching to colour frame buffer device 128x48
Mar 7 00:41:35.223247 kernel: loop2: detected capacity change from 0 to 197488
Mar 7 00:41:35.248006 systemd-networkd[1475]: lo: Link UP
Mar 7 00:41:35.248012 systemd-networkd[1475]: lo: Gained carrier
Mar 7 00:41:35.249380 systemd-networkd[1475]: Enumeration completed
Mar 7 00:41:35.249465 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 00:41:35.254123 systemd-networkd[1475]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:41:35.254130 systemd-networkd[1475]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 00:41:35.256359 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 7 00:41:35.264366 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 00:41:35.280467 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 00:41:35.291243 kernel: loop3: detected capacity change from 0 to 27936
Mar 7 00:41:35.301323 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 00:41:35.301494 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:41:35.310861 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 00:41:35.323867 kernel: mlx5_core bc77:00:02.0 enP48247s1: Link up
Mar 7 00:41:35.347253 kernel: hv_netvsc 7ced8db8-728b-7ced-8db8-728b7ced8db8 eth0: Data path switched to VF: enP48247s1
Mar 7 00:41:35.348133 systemd-networkd[1475]: enP48247s1: Link UP
Mar 7 00:41:35.348545 systemd-networkd[1475]: eth0: Link UP
Mar 7 00:41:35.348550 systemd-networkd[1475]: eth0: Gained carrier
Mar 7 00:41:35.348570 systemd-networkd[1475]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:41:35.350353 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 7 00:41:35.359556 systemd-networkd[1475]: enP48247s1: Gained carrier
Mar 7 00:41:35.369372 systemd-networkd[1475]: eth0: DHCPv4 address 10.200.20.29/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 7 00:41:35.392087 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 7 00:41:35.398958 kernel: MACsec IEEE 802.1AE
Mar 7 00:41:35.399335 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 00:41:35.457280 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 00:41:35.722352 kernel: loop4: detected capacity change from 0 to 119840
Mar 7 00:41:35.738261 kernel: loop5: detected capacity change from 0 to 100632
Mar 7 00:41:35.753267 kernel: loop6: detected capacity change from 0 to 197488
Mar 7 00:41:35.771261 kernel: loop7: detected capacity change from 0 to 27936
Mar 7 00:41:35.779359 (sd-merge)[1605]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 7 00:41:35.779729 (sd-merge)[1605]: Merged extensions into '/usr'.
Mar 7 00:41:35.789743 systemd[1]: Reload requested from client PID 1432 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 00:41:35.789756 systemd[1]: Reloading...
Mar 7 00:41:35.838305 zram_generator::config[1633]: No configuration found.
Mar 7 00:41:36.013726 systemd[1]: Reloading finished in 223 ms.
Mar 7 00:41:36.031185 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:41:36.036048 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 00:41:36.054173 systemd[1]: Starting ensure-sysext.service...
Mar 7 00:41:36.060347 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 00:41:36.073321 systemd[1]: Reload requested from client PID 1693 ('systemctl') (unit ensure-sysext.service)...
Mar 7 00:41:36.073335 systemd[1]: Reloading...
Mar 7 00:41:36.077613 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 7 00:41:36.094428 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 7 00:41:36.094830 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 00:41:36.095074 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 00:41:36.095635 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 00:41:36.096145 systemd-tmpfiles[1694]: ACLs are not supported, ignoring.
Mar 7 00:41:36.096289 systemd-tmpfiles[1694]: ACLs are not supported, ignoring.
Mar 7 00:41:36.099002 systemd-tmpfiles[1694]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 00:41:36.100338 systemd-tmpfiles[1694]: Skipping /boot
Mar 7 00:41:36.107729 systemd-tmpfiles[1694]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 00:41:36.107820 systemd-tmpfiles[1694]: Skipping /boot
Mar 7 00:41:36.135251 zram_generator::config[1724]: No configuration found.
Mar 7 00:41:36.283430 systemd[1]: Reloading finished in 209 ms.
Mar 7 00:41:36.297231 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 00:41:36.312893 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 7 00:41:36.327075 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 00:41:36.334444 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 00:41:36.342383 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 00:41:36.348364 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 00:41:36.356299 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 00:41:36.359486 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 00:41:36.368192 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 00:41:36.384031 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 00:41:36.390618 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 00:41:36.390715 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 7 00:41:36.393676 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 00:41:36.394181 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 00:41:36.401640 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 00:41:36.402437 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 00:41:36.409911 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 00:41:36.410128 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 00:41:36.419931 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 7 00:41:36.432370 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 00:41:36.433267 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 00:41:36.440340 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 00:41:36.447413 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 00:41:36.456695 systemd-resolved[1785]: Positive Trust Anchors: Mar 7 00:41:36.456930 systemd-resolved[1785]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 00:41:36.456997 systemd-resolved[1785]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 00:41:36.457692 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 00:41:36.464943 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 00:41:36.464986 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 7 00:41:36.465028 systemd[1]: Reached target time-set.target - System Time Set. Mar 7 00:41:36.470500 systemd[1]: Finished ensure-sysext.service. Mar 7 00:41:36.474353 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 7 00:41:36.475655 systemd-resolved[1785]: Using system hostname 'ci-4459.2.3-n-e6e869ea98'. Mar 7 00:41:36.479826 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 00:41:36.485025 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 00:41:36.485168 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 00:41:36.490830 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 00:41:36.490961 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Mar 7 00:41:36.495745 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 00:41:36.495870 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 00:41:36.501870 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 00:41:36.502005 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 00:41:36.509630 systemd[1]: Reached target network.target - Network. Mar 7 00:41:36.513457 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 00:41:36.518660 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 00:41:36.518721 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 00:41:36.544751 augenrules[1824]: No rules Mar 7 00:41:36.545955 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 00:41:36.546166 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 7 00:41:37.210874 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 7 00:41:37.216553 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 7 00:41:37.236324 systemd-networkd[1475]: eth0: Gained IPv6LL Mar 7 00:41:37.241039 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 00:41:37.247219 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 00:41:39.960377 ldconfig[1426]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 7 00:41:39.974060 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Mar 7 00:41:39.980017 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 7 00:41:39.992525 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 7 00:41:39.997244 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 00:41:40.001573 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 7 00:41:40.006593 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 7 00:41:40.011675 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 7 00:41:40.015929 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 7 00:41:40.021612 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 7 00:41:40.026835 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 7 00:41:40.026866 systemd[1]: Reached target paths.target - Path Units. Mar 7 00:41:40.030613 systemd[1]: Reached target timers.target - Timer Units. Mar 7 00:41:40.037320 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 7 00:41:40.043149 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 7 00:41:40.048378 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 7 00:41:40.054140 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 7 00:41:40.059289 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 7 00:41:40.073850 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 7 00:41:40.078975 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 7 00:41:40.084459 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Mar 7 00:41:40.088913 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 00:41:40.092702 systemd[1]: Reached target basic.target - Basic System. Mar 7 00:41:40.096366 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 7 00:41:40.096387 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 7 00:41:40.098484 systemd[1]: Starting chronyd.service - NTP client/server... Mar 7 00:41:40.110318 systemd[1]: Starting containerd.service - containerd container runtime... Mar 7 00:41:40.117336 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 7 00:41:40.127408 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 7 00:41:40.134346 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 7 00:41:40.140037 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 7 00:41:40.149385 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 7 00:41:40.153486 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 7 00:41:40.156067 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Mar 7 00:41:40.160834 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Mar 7 00:41:40.161826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:41:40.167886 jq[1845]: false Mar 7 00:41:40.169360 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 7 00:41:40.180341 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 7 00:41:40.187366 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Mar 7 00:41:40.194344 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 7 00:41:40.202362 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 7 00:41:40.209636 extend-filesystems[1846]: Found /dev/sda6 Mar 7 00:41:40.212636 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 7 00:41:40.220490 extend-filesystems[1846]: Found /dev/sda9 Mar 7 00:41:40.226901 extend-filesystems[1846]: Checking size of /dev/sda9 Mar 7 00:41:40.246249 kernel: hv_utils: KVP IC version 4.0 Mar 7 00:41:40.221059 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 7 00:41:40.231941 KVP[1847]: KVP starting; pid is:1847 Mar 7 00:41:40.221483 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 7 00:41:40.234633 chronyd[1837]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Mar 7 00:41:40.224850 systemd[1]: Starting update-engine.service - Update Engine... Mar 7 00:41:40.241075 KVP[1847]: KVP LIC Version: 3.1 Mar 7 00:41:40.235403 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 7 00:41:40.256151 jq[1872]: true Mar 7 00:41:40.251264 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 7 00:41:40.258018 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 7 00:41:40.260851 chronyd[1837]: Timezone right/UTC failed leap second check, ignoring Mar 7 00:41:40.261164 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 7 00:41:40.260988 chronyd[1837]: Loaded seccomp filter (level 2) Mar 7 00:41:40.261331 systemd[1]: Started chronyd.service - NTP client/server. 
Mar 7 00:41:40.267201 systemd[1]: motdgen.service: Deactivated successfully. Mar 7 00:41:40.268457 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 7 00:41:40.277697 extend-filesystems[1846]: Old size kept for /dev/sda9 Mar 7 00:41:40.279095 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 00:41:40.290640 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 7 00:41:40.290810 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 7 00:41:40.299258 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 7 00:41:40.299423 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 7 00:41:40.322204 update_engine[1869]: I20260307 00:41:40.314182 1869 main.cc:92] Flatcar Update Engine starting Mar 7 00:41:40.326556 (ntainerd)[1887]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 7 00:41:40.328308 jq[1886]: true Mar 7 00:41:40.330424 systemd-logind[1864]: New seat seat0. Mar 7 00:41:40.332283 systemd-logind[1864]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Mar 7 00:41:40.332472 systemd[1]: Started systemd-logind.service - User Login Management. Mar 7 00:41:40.412087 tar[1883]: linux-arm64/LICENSE Mar 7 00:41:40.412087 tar[1883]: linux-arm64/helm Mar 7 00:41:40.475041 dbus-daemon[1840]: [system] SELinux support is enabled Mar 7 00:41:40.475231 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 7 00:41:40.484324 update_engine[1869]: I20260307 00:41:40.482164 1869 update_check_scheduler.cc:74] Next update check in 8m24s Mar 7 00:41:40.486756 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Mar 7 00:41:40.487176 dbus-daemon[1840]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 7 00:41:40.486782 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 7 00:41:40.496238 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 7 00:41:40.496261 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 7 00:41:40.510683 systemd[1]: Started update-engine.service - Update Engine. Mar 7 00:41:40.521793 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 7 00:41:40.552016 bash[1928]: Updated "/home/core/.ssh/authorized_keys" Mar 7 00:41:40.552740 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 7 00:41:40.562792 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Mar 7 00:41:40.595148 coreos-metadata[1839]: Mar 07 00:41:40.595 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 7 00:41:40.598120 coreos-metadata[1839]: Mar 07 00:41:40.598 INFO Fetch successful Mar 7 00:41:40.598419 coreos-metadata[1839]: Mar 07 00:41:40.598 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Mar 7 00:41:40.603183 coreos-metadata[1839]: Mar 07 00:41:40.602 INFO Fetch successful Mar 7 00:41:40.603183 coreos-metadata[1839]: Mar 07 00:41:40.602 INFO Fetching http://168.63.129.16/machine/66951851-749e-48b1-a3f2-07a464ae698b/72556cd1%2D3b73%2D4ac8%2Dbf89%2D13b7f178b70e.%5Fci%2D4459.2.3%2Dn%2De6e869ea98?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Mar 7 00:41:40.606412 coreos-metadata[1839]: Mar 07 00:41:40.606 INFO Fetch successful Mar 7 00:41:40.606412 coreos-metadata[1839]: Mar 07 00:41:40.606 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Mar 7 00:41:40.614837 coreos-metadata[1839]: Mar 07 00:41:40.614 INFO Fetch successful Mar 7 00:41:40.652054 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 7 00:41:40.661914 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 00:41:40.721285 locksmithd[1975]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 7 00:41:40.758441 sshd_keygen[1877]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 00:41:40.776546 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 7 00:41:40.785136 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 00:41:40.792408 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 7 00:41:40.821164 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 00:41:40.821354 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Mar 7 00:41:40.830127 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 00:41:40.839029 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 7 00:41:40.858906 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 00:41:40.865466 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 00:41:40.876443 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 7 00:41:40.881740 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 00:41:40.890048 tar[1883]: linux-arm64/README.md Mar 7 00:41:40.901261 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 00:41:41.099453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:41:41.179590 containerd[1887]: time="2026-03-07T00:41:41Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 7 00:41:41.181222 containerd[1887]: time="2026-03-07T00:41:41.181187652Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 7 00:41:41.187036 containerd[1887]: time="2026-03-07T00:41:41.186999148Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.32µs" Mar 7 00:41:41.187036 containerd[1887]: time="2026-03-07T00:41:41.187029700Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 7 00:41:41.187108 containerd[1887]: time="2026-03-07T00:41:41.187043084Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 7 00:41:41.187189 containerd[1887]: time="2026-03-07T00:41:41.187171292Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 7 00:41:41.187189 
containerd[1887]: time="2026-03-07T00:41:41.187187668Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 7 00:41:41.187214 containerd[1887]: time="2026-03-07T00:41:41.187209172Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 7 00:41:41.187270 containerd[1887]: time="2026-03-07T00:41:41.187256028Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 7 00:41:41.187270 containerd[1887]: time="2026-03-07T00:41:41.187267732Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 7 00:41:41.187446 containerd[1887]: time="2026-03-07T00:41:41.187427156Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 7 00:41:41.187460 containerd[1887]: time="2026-03-07T00:41:41.187444660Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 7 00:41:41.187472 containerd[1887]: time="2026-03-07T00:41:41.187462244Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 7 00:41:41.187472 containerd[1887]: time="2026-03-07T00:41:41.187468764Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 7 00:41:41.187544 containerd[1887]: time="2026-03-07T00:41:41.187531876Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 7 00:41:41.187719 containerd[1887]: time="2026-03-07T00:41:41.187703148Z" level=info msg="loading 
plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 7 00:41:41.187743 containerd[1887]: time="2026-03-07T00:41:41.187730148Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 7 00:41:41.187743 containerd[1887]: time="2026-03-07T00:41:41.187740372Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 7 00:41:41.187789 containerd[1887]: time="2026-03-07T00:41:41.187778452Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 7 00:41:41.187962 containerd[1887]: time="2026-03-07T00:41:41.187948444Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 7 00:41:41.188022 containerd[1887]: time="2026-03-07T00:41:41.188010900Z" level=info msg="metadata content store policy set" policy=shared Mar 7 00:41:41.204644 containerd[1887]: time="2026-03-07T00:41:41.204605268Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 7 00:41:41.204700 containerd[1887]: time="2026-03-07T00:41:41.204670060Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 7 00:41:41.204700 containerd[1887]: time="2026-03-07T00:41:41.204682516Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 7 00:41:41.204700 containerd[1887]: time="2026-03-07T00:41:41.204691356Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 7 00:41:41.204700 containerd[1887]: time="2026-03-07T00:41:41.204698868Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 7 00:41:41.204752 containerd[1887]: 
time="2026-03-07T00:41:41.204705196Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 7 00:41:41.204752 containerd[1887]: time="2026-03-07T00:41:41.204715692Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 7 00:41:41.204752 containerd[1887]: time="2026-03-07T00:41:41.204723132Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 7 00:41:41.204752 containerd[1887]: time="2026-03-07T00:41:41.204735940Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 7 00:41:41.204752 containerd[1887]: time="2026-03-07T00:41:41.204747948Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 7 00:41:41.204809 containerd[1887]: time="2026-03-07T00:41:41.204753732Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 7 00:41:41.204809 containerd[1887]: time="2026-03-07T00:41:41.204762540Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 7 00:41:41.204957 containerd[1887]: time="2026-03-07T00:41:41.204934788Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 7 00:41:41.204996 containerd[1887]: time="2026-03-07T00:41:41.204985524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 7 00:41:41.205016 containerd[1887]: time="2026-03-07T00:41:41.204996972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 7 00:41:41.205016 containerd[1887]: time="2026-03-07T00:41:41.205005460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 7 00:41:41.205016 containerd[1887]: time="2026-03-07T00:41:41.205011940Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 7 00:41:41.205052 containerd[1887]: time="2026-03-07T00:41:41.205018548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 7 00:41:41.205052 containerd[1887]: time="2026-03-07T00:41:41.205025412Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 7 00:41:41.205052 containerd[1887]: time="2026-03-07T00:41:41.205031636Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 7 00:41:41.205052 containerd[1887]: time="2026-03-07T00:41:41.205043756Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 7 00:41:41.205052 containerd[1887]: time="2026-03-07T00:41:41.205050724Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 7 00:41:41.205115 containerd[1887]: time="2026-03-07T00:41:41.205057068Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 7 00:41:41.205115 containerd[1887]: time="2026-03-07T00:41:41.205099980Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 7 00:41:41.205212 containerd[1887]: time="2026-03-07T00:41:41.205122940Z" level=info msg="Start snapshots syncer" Mar 7 00:41:41.205212 containerd[1887]: time="2026-03-07T00:41:41.205143228Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 7 00:41:41.205404 containerd[1887]: time="2026-03-07T00:41:41.205366180Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 7 00:41:41.205499 containerd[1887]: time="2026-03-07T00:41:41.205415332Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 7 00:41:41.205499 containerd[1887]: time="2026-03-07T00:41:41.205451252Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 7 00:41:41.205572 containerd[1887]: time="2026-03-07T00:41:41.205560740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 7 00:41:41.205594 containerd[1887]: time="2026-03-07T00:41:41.205577052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 7 00:41:41.205594 containerd[1887]: time="2026-03-07T00:41:41.205584332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 7 00:41:41.205594 containerd[1887]: time="2026-03-07T00:41:41.205591132Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 7 00:41:41.205628 containerd[1887]: time="2026-03-07T00:41:41.205598700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 7 00:41:41.205628 containerd[1887]: time="2026-03-07T00:41:41.205605404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 7 00:41:41.205628 containerd[1887]: time="2026-03-07T00:41:41.205618308Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 7 00:41:41.205663 containerd[1887]: time="2026-03-07T00:41:41.205636436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 7 00:41:41.205663 containerd[1887]: time="2026-03-07T00:41:41.205647300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 7 00:41:41.205663 containerd[1887]: time="2026-03-07T00:41:41.205653788Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 7 00:41:41.205716 containerd[1887]: time="2026-03-07T00:41:41.205677716Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 7 00:41:41.205716 containerd[1887]: time="2026-03-07T00:41:41.205693796Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 7 00:41:41.205716 containerd[1887]: time="2026-03-07T00:41:41.205699836Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 7 00:41:41.205716 containerd[1887]: time="2026-03-07T00:41:41.205705276Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 7 00:41:41.205716 containerd[1887]: time="2026-03-07T00:41:41.205710772Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 7 00:41:41.205716 containerd[1887]: time="2026-03-07T00:41:41.205716052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 7 00:41:41.205853 containerd[1887]: time="2026-03-07T00:41:41.205722132Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 7 00:41:41.205853 containerd[1887]: time="2026-03-07T00:41:41.205734444Z" level=info msg="runtime interface created"
Mar 7 00:41:41.205853 containerd[1887]: time="2026-03-07T00:41:41.205737636Z" level=info msg="created NRI interface"
Mar 7 00:41:41.205853 containerd[1887]: time="2026-03-07T00:41:41.205742652Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 7 00:41:41.205853 containerd[1887]: time="2026-03-07T00:41:41.205750436Z" level=info msg="Connect containerd service"
Mar 7 00:41:41.205853 containerd[1887]: time="2026-03-07T00:41:41.205775948Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 7 00:41:41.206945 containerd[1887]: time="2026-03-07T00:41:41.206694484Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 00:41:41.265102 (kubelet)[2023]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 00:41:41.569047 kubelet[2023]: E0307 00:41:41.568982 2023 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 00:41:41.571070 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 00:41:41.571188 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 00:41:41.571533 systemd[1]: kubelet.service: Consumed 494ms CPU time, 245.1M memory peak.
Mar 7 00:41:41.820465 containerd[1887]: time="2026-03-07T00:41:41.820303396Z" level=info msg="Start subscribing containerd event"
Mar 7 00:41:41.820465 containerd[1887]: time="2026-03-07T00:41:41.820361980Z" level=info msg="Start recovering state"
Mar 7 00:41:41.820579 containerd[1887]: time="2026-03-07T00:41:41.820468980Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 7 00:41:41.820579 containerd[1887]: time="2026-03-07T00:41:41.820511556Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 7 00:41:41.820813 containerd[1887]: time="2026-03-07T00:41:41.820666500Z" level=info msg="Start event monitor"
Mar 7 00:41:41.820813 containerd[1887]: time="2026-03-07T00:41:41.820687604Z" level=info msg="Start cni network conf syncer for default"
Mar 7 00:41:41.820813 containerd[1887]: time="2026-03-07T00:41:41.820694636Z" level=info msg="Start streaming server"
Mar 7 00:41:41.820813 containerd[1887]: time="2026-03-07T00:41:41.820701796Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 7 00:41:41.820813 containerd[1887]: time="2026-03-07T00:41:41.820707116Z" level=info msg="runtime interface starting up..."
Mar 7 00:41:41.820813 containerd[1887]: time="2026-03-07T00:41:41.820711476Z" level=info msg="starting plugins..."
Mar 7 00:41:41.820813 containerd[1887]: time="2026-03-07T00:41:41.820724012Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 7 00:41:41.820981 containerd[1887]: time="2026-03-07T00:41:41.820969860Z" level=info msg="containerd successfully booted in 0.641707s"
Mar 7 00:41:41.821164 systemd[1]: Started containerd.service - containerd container runtime.
Mar 7 00:41:41.826750 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 7 00:41:41.835291 systemd[1]: Startup finished in 1.706s (kernel) + 12.096s (initrd) + 11.756s (userspace) = 25.559s.
Mar 7 00:41:42.096582 login[2008]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Mar 7 00:41:42.097597 login[2009]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:41:42.102633 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 7 00:41:42.103399 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 7 00:41:42.110591 systemd-logind[1864]: New session 2 of user core.
Mar 7 00:41:42.152408 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 7 00:41:42.157166 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 7 00:41:42.164822 (systemd)[2050]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 7 00:41:42.166681 systemd-logind[1864]: New session c1 of user core.
Mar 7 00:41:42.282386 systemd[2050]: Queued start job for default target default.target.
Mar 7 00:41:42.288919 systemd[2050]: Created slice app.slice - User Application Slice.
Mar 7 00:41:42.288939 systemd[2050]: Reached target paths.target - Paths.
Mar 7 00:41:42.288968 systemd[2050]: Reached target timers.target - Timers.
Mar 7 00:41:42.290028 systemd[2050]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 7 00:41:42.297475 systemd[2050]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 7 00:41:42.297519 systemd[2050]: Reached target sockets.target - Sockets.
Mar 7 00:41:42.297548 systemd[2050]: Reached target basic.target - Basic System.
Mar 7 00:41:42.297568 systemd[2050]: Reached target default.target - Main User Target.
Mar 7 00:41:42.297590 systemd[2050]: Startup finished in 126ms.
Mar 7 00:41:42.297680 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 7 00:41:42.306388 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 7 00:41:42.702139 waagent[2005]: 2026-03-07T00:41:42.702066Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
Mar 7 00:41:42.710003 waagent[2005]: 2026-03-07T00:41:42.706785Z INFO Daemon Daemon OS: flatcar 4459.2.3
Mar 7 00:41:42.710291 waagent[2005]: 2026-03-07T00:41:42.710257Z INFO Daemon Daemon Python: 3.11.13
Mar 7 00:41:42.713608 waagent[2005]: 2026-03-07T00:41:42.713570Z INFO Daemon Daemon Run daemon
Mar 7 00:41:42.716626 waagent[2005]: 2026-03-07T00:41:42.716592Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.3'
Mar 7 00:41:42.723316 waagent[2005]: 2026-03-07T00:41:42.723283Z INFO Daemon Daemon Using waagent for provisioning
Mar 7 00:41:42.727566 waagent[2005]: 2026-03-07T00:41:42.727532Z INFO Daemon Daemon Activate resource disk
Mar 7 00:41:42.730998 waagent[2005]: 2026-03-07T00:41:42.730964Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Mar 7 00:41:42.739462 waagent[2005]: 2026-03-07T00:41:42.739425Z INFO Daemon Daemon Found device: None
Mar 7 00:41:42.742820 waagent[2005]: 2026-03-07T00:41:42.742789Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Mar 7 00:41:42.748945 waagent[2005]: 2026-03-07T00:41:42.748912Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Mar 7 00:41:42.757431 waagent[2005]: 2026-03-07T00:41:42.757392Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Mar 7 00:41:42.761585 waagent[2005]: 2026-03-07T00:41:42.761551Z INFO Daemon Daemon Running default provisioning handler
Mar 7 00:41:42.770050 waagent[2005]: 2026-03-07T00:41:42.770018Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Mar 7 00:41:42.780160 waagent[2005]: 2026-03-07T00:41:42.780127Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Mar 7 00:41:42.787368 waagent[2005]: 2026-03-07T00:41:42.787329Z INFO Daemon Daemon cloud-init is enabled: False
Mar 7 00:41:42.791019 waagent[2005]: 2026-03-07T00:41:42.790984Z INFO Daemon Daemon Copying ovf-env.xml
Mar 7 00:41:42.845974 waagent[2005]: 2026-03-07T00:41:42.845908Z INFO Daemon Daemon Successfully mounted dvd
Mar 7 00:41:42.881513 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Mar 7 00:41:42.883451 waagent[2005]: 2026-03-07T00:41:42.883395Z INFO Daemon Daemon Detect protocol endpoint
Mar 7 00:41:42.887060 waagent[2005]: 2026-03-07T00:41:42.887027Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Mar 7 00:41:42.891059 waagent[2005]: 2026-03-07T00:41:42.891031Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Mar 7 00:41:42.895694 waagent[2005]: 2026-03-07T00:41:42.895670Z INFO Daemon Daemon Test for route to 168.63.129.16
Mar 7 00:41:42.899448 waagent[2005]: 2026-03-07T00:41:42.899419Z INFO Daemon Daemon Route to 168.63.129.16 exists
Mar 7 00:41:42.903163 waagent[2005]: 2026-03-07T00:41:42.903131Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Mar 7 00:41:42.948312 waagent[2005]: 2026-03-07T00:41:42.948278Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Mar 7 00:41:42.953003 waagent[2005]: 2026-03-07T00:41:42.952952Z INFO Daemon Daemon Wire protocol version:2012-11-30
Mar 7 00:41:42.956604 waagent[2005]: 2026-03-07T00:41:42.956581Z INFO Daemon Daemon Server preferred version:2015-04-05
Mar 7 00:41:43.097346 login[2008]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:41:43.100794 systemd-logind[1864]: New session 1 of user core.
Mar 7 00:41:43.108371 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 7 00:41:43.141287 waagent[2005]: 2026-03-07T00:41:43.141191Z INFO Daemon Daemon Initializing goal state during protocol detection
Mar 7 00:41:43.145834 waagent[2005]: 2026-03-07T00:41:43.145798Z INFO Daemon Daemon Forcing an update of the goal state.
Mar 7 00:41:43.153038 waagent[2005]: 2026-03-07T00:41:43.153000Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Mar 7 00:41:43.172329 waagent[2005]: 2026-03-07T00:41:43.172296Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179
Mar 7 00:41:43.176804 waagent[2005]: 2026-03-07T00:41:43.176770Z INFO Daemon
Mar 7 00:41:43.178890 waagent[2005]: 2026-03-07T00:41:43.178860Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 7d1006ab-7d89-481c-99d3-2893906bee6e eTag: 16456042794193153544 source: Fabric]
Mar 7 00:41:43.187562 waagent[2005]: 2026-03-07T00:41:43.187531Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Mar 7 00:41:43.192609 waagent[2005]: 2026-03-07T00:41:43.192578Z INFO Daemon
Mar 7 00:41:43.194730 waagent[2005]: 2026-03-07T00:41:43.194703Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Mar 7 00:41:43.203694 waagent[2005]: 2026-03-07T00:41:43.203639Z INFO Daemon Daemon Downloading artifacts profile blob
Mar 7 00:41:43.262642 waagent[2005]: 2026-03-07T00:41:43.262582Z INFO Daemon Downloaded certificate {'thumbprint': 'E4671FF676511C14BE990AC04D77D3B3ED6D4F46', 'hasPrivateKey': True}
Mar 7 00:41:43.269772 waagent[2005]: 2026-03-07T00:41:43.269734Z INFO Daemon Fetch goal state completed
Mar 7 00:41:43.279422 waagent[2005]: 2026-03-07T00:41:43.279387Z INFO Daemon Daemon Starting provisioning
Mar 7 00:41:43.283123 waagent[2005]: 2026-03-07T00:41:43.283090Z INFO Daemon Daemon Handle ovf-env.xml.
Mar 7 00:41:43.286563 waagent[2005]: 2026-03-07T00:41:43.286538Z INFO Daemon Daemon Set hostname [ci-4459.2.3-n-e6e869ea98]
Mar 7 00:41:43.292609 waagent[2005]: 2026-03-07T00:41:43.292566Z INFO Daemon Daemon Publish hostname [ci-4459.2.3-n-e6e869ea98]
Mar 7 00:41:43.297283 waagent[2005]: 2026-03-07T00:41:43.297248Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Mar 7 00:41:43.301821 waagent[2005]: 2026-03-07T00:41:43.301790Z INFO Daemon Daemon Primary interface is [eth0]
Mar 7 00:41:43.332614 systemd-networkd[1475]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:41:43.332619 systemd-networkd[1475]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 00:41:43.332668 systemd-networkd[1475]: eth0: DHCP lease lost
Mar 7 00:41:43.333383 waagent[2005]: 2026-03-07T00:41:43.333335Z INFO Daemon Daemon Create user account if not exists
Mar 7 00:41:43.337283 waagent[2005]: 2026-03-07T00:41:43.337249Z INFO Daemon Daemon User core already exists, skip useradd
Mar 7 00:41:43.341797 waagent[2005]: 2026-03-07T00:41:43.341769Z INFO Daemon Daemon Configure sudoer
Mar 7 00:41:43.352013 waagent[2005]: 2026-03-07T00:41:43.351968Z INFO Daemon Daemon Configure sshd
Mar 7 00:41:43.359514 waagent[2005]: 2026-03-07T00:41:43.359471Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Mar 7 00:41:43.368552 waagent[2005]: 2026-03-07T00:41:43.368522Z INFO Daemon Daemon Deploy ssh public key.
Mar 7 00:41:43.374314 systemd-networkd[1475]: eth0: DHCPv4 address 10.200.20.29/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 7 00:41:44.466361 waagent[2005]: 2026-03-07T00:41:44.466319Z INFO Daemon Daemon Provisioning complete
Mar 7 00:41:44.479678 waagent[2005]: 2026-03-07T00:41:44.479638Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Mar 7 00:41:44.484483 waagent[2005]: 2026-03-07T00:41:44.484450Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Mar 7 00:41:44.491587 waagent[2005]: 2026-03-07T00:41:44.491559Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
Mar 7 00:41:44.592262 waagent[2100]: 2026-03-07T00:41:44.591190Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
Mar 7 00:41:44.592262 waagent[2100]: 2026-03-07T00:41:44.591360Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.3
Mar 7 00:41:44.592262 waagent[2100]: 2026-03-07T00:41:44.591399Z INFO ExtHandler ExtHandler Python: 3.11.13
Mar 7 00:41:44.592262 waagent[2100]: 2026-03-07T00:41:44.591431Z INFO ExtHandler ExtHandler CPU Arch: aarch64
Mar 7 00:41:44.636141 waagent[2100]: 2026-03-07T00:41:44.636078Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.3; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0;
Mar 7 00:41:44.636470 waagent[2100]: 2026-03-07T00:41:44.636438Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 7 00:41:44.636592 waagent[2100]: 2026-03-07T00:41:44.636568Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 7 00:41:44.642420 waagent[2100]: 2026-03-07T00:41:44.642374Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Mar 7 00:41:44.647641 waagent[2100]: 2026-03-07T00:41:44.647608Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179
Mar 7 00:41:44.648121 waagent[2100]: 2026-03-07T00:41:44.648089Z INFO ExtHandler
Mar 7 00:41:44.648255 waagent[2100]: 2026-03-07T00:41:44.648214Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 05881667-2177-4ec8-834e-e112ec56c5c5 eTag: 16456042794193153544 source: Fabric]
Mar 7 00:41:44.648554 waagent[2100]: 2026-03-07T00:41:44.648524Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Mar 7 00:41:44.649065 waagent[2100]: 2026-03-07T00:41:44.649035Z INFO ExtHandler
Mar 7 00:41:44.649180 waagent[2100]: 2026-03-07T00:41:44.649158Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Mar 7 00:41:44.652691 waagent[2100]: 2026-03-07T00:41:44.652664Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Mar 7 00:41:44.707285 waagent[2100]: 2026-03-07T00:41:44.707178Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E4671FF676511C14BE990AC04D77D3B3ED6D4F46', 'hasPrivateKey': True}
Mar 7 00:41:44.707650 waagent[2100]: 2026-03-07T00:41:44.707611Z INFO ExtHandler Fetch goal state completed
Mar 7 00:41:44.719578 waagent[2100]: 2026-03-07T00:41:44.719484Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.4 27 Jan 2026 (Library: OpenSSL 3.4.4 27 Jan 2026)
Mar 7 00:41:44.722788 waagent[2100]: 2026-03-07T00:41:44.722739Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2100
Mar 7 00:41:44.722893 waagent[2100]: 2026-03-07T00:41:44.722866Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Mar 7 00:41:44.723134 waagent[2100]: 2026-03-07T00:41:44.723107Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
Mar 7 00:41:44.724269 waagent[2100]: 2026-03-07T00:41:44.724199Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.3', '', 'Flatcar Container Linux by Kinvolk']
Mar 7 00:41:44.724591 waagent[2100]: 2026-03-07T00:41:44.724557Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.3', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Mar 7 00:41:44.724714 waagent[2100]: 2026-03-07T00:41:44.724691Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Mar 7 00:41:44.725125 waagent[2100]: 2026-03-07T00:41:44.725095Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Mar 7 00:41:44.780447 waagent[2100]: 2026-03-07T00:41:44.780408Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Mar 7 00:41:44.780623 waagent[2100]: 2026-03-07T00:41:44.780595Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Mar 7 00:41:44.785029 waagent[2100]: 2026-03-07T00:41:44.784991Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Mar 7 00:41:44.789284 systemd[1]: Reload requested from client PID 2115 ('systemctl') (unit waagent.service)...
Mar 7 00:41:44.789296 systemd[1]: Reloading...
Mar 7 00:41:44.869252 zram_generator::config[2169]: No configuration found.
Mar 7 00:41:45.005301 systemd[1]: Reloading finished in 215 ms.
Mar 7 00:41:45.022242 waagent[2100]: 2026-03-07T00:41:45.022000Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Mar 7 00:41:45.022242 waagent[2100]: 2026-03-07T00:41:45.022141Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Mar 7 00:41:45.285120 waagent[2100]: 2026-03-07T00:41:45.285003Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Mar 7 00:41:45.285547 waagent[2100]: 2026-03-07T00:41:45.285512Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Mar 7 00:41:45.286248 waagent[2100]: 2026-03-07T00:41:45.286201Z INFO ExtHandler ExtHandler Starting env monitor service.
Mar 7 00:41:45.286370 waagent[2100]: 2026-03-07T00:41:45.286334Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 7 00:41:45.286551 waagent[2100]: 2026-03-07T00:41:45.286529Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 7 00:41:45.286718 waagent[2100]: 2026-03-07T00:41:45.286691Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Mar 7 00:41:45.287008 waagent[2100]: 2026-03-07T00:41:45.286973Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Mar 7 00:41:45.287305 waagent[2100]: 2026-03-07T00:41:45.287166Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Mar 7 00:41:45.287305 waagent[2100]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Mar 7 00:41:45.287305 waagent[2100]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Mar 7 00:41:45.287305 waagent[2100]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Mar 7 00:41:45.287305 waagent[2100]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Mar 7 00:41:45.287305 waagent[2100]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Mar 7 00:41:45.287305 waagent[2100]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Mar 7 00:41:45.287435 waagent[2100]: 2026-03-07T00:41:45.287347Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Mar 7 00:41:45.287535 waagent[2100]: 2026-03-07T00:41:45.287498Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Mar 7 00:41:45.287835 waagent[2100]: 2026-03-07T00:41:45.287807Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 7 00:41:45.287886 waagent[2100]: 2026-03-07T00:41:45.287869Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 7 00:41:45.288013 waagent[2100]: 2026-03-07T00:41:45.287979Z INFO EnvHandler ExtHandler Configure routes
Mar 7 00:41:45.288052 waagent[2100]: 2026-03-07T00:41:45.288031Z INFO EnvHandler ExtHandler Gateway:None
Mar 7 00:41:45.288070 waagent[2100]: 2026-03-07T00:41:45.288055Z INFO EnvHandler ExtHandler Routes:None
Mar 7 00:41:45.288397 waagent[2100]: 2026-03-07T00:41:45.288313Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Mar 7 00:41:45.288397 waagent[2100]: 2026-03-07T00:41:45.288377Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Mar 7 00:41:45.288683 waagent[2100]: 2026-03-07T00:41:45.288646Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Mar 7 00:41:45.294250 waagent[2100]: 2026-03-07T00:41:45.293939Z INFO ExtHandler ExtHandler
Mar 7 00:41:45.294250 waagent[2100]: 2026-03-07T00:41:45.293994Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 120a99dc-3280-4730-84b6-51ce71790e27 correlation eceaf9e9-9141-4526-bd57-b25372b61354 created: 2026-03-07T00:40:43.806206Z]
Mar 7 00:41:45.294327 waagent[2100]: 2026-03-07T00:41:45.294252Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Mar 7 00:41:45.294659 waagent[2100]: 2026-03-07T00:41:45.294632Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
Mar 7 00:41:45.321581 waagent[2100]: 2026-03-07T00:41:45.321532Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Mar 7 00:41:45.321581 waagent[2100]: Try `iptables -h' or 'iptables --help' for more information.)
Mar 7 00:41:45.321870 waagent[2100]: 2026-03-07T00:41:45.321840Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 9A5230BB-D380-4DBD-8C50-A5111B0DF007;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Mar 7 00:41:45.360811 waagent[2100]: 2026-03-07T00:41:45.360456Z INFO MonitorHandler ExtHandler Network interfaces:
Mar 7 00:41:45.360811 waagent[2100]: Executing ['ip', '-a', '-o', 'link']:
Mar 7 00:41:45.360811 waagent[2100]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Mar 7 00:41:45.360811 waagent[2100]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:b8:72:8b brd ff:ff:ff:ff:ff:ff
Mar 7 00:41:45.360811 waagent[2100]: 3: enP48247s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:b8:72:8b brd ff:ff:ff:ff:ff:ff\ altname enP48247p0s2
Mar 7 00:41:45.360811 waagent[2100]: Executing ['ip', '-4', '-a', '-o', 'address']:
Mar 7 00:41:45.360811 waagent[2100]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Mar 7 00:41:45.360811 waagent[2100]: 2: eth0 inet 10.200.20.29/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Mar 7 00:41:45.360811 waagent[2100]: Executing ['ip', '-6', '-a', '-o', 'address']:
Mar 7 00:41:45.360811 waagent[2100]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Mar 7 00:41:45.360811 waagent[2100]: 2: eth0 inet6 fe80::7eed:8dff:feb8:728b/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Mar 7 00:41:45.391266 waagent[2100]: 2026-03-07T00:41:45.391165Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Mar 7 00:41:45.391266 waagent[2100]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 7 00:41:45.391266 waagent[2100]: pkts bytes target prot opt in out source destination
Mar 7 00:41:45.391266 waagent[2100]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Mar 7 00:41:45.391266 waagent[2100]: pkts bytes target prot opt in out source destination
Mar 7 00:41:45.391266 waagent[2100]: Chain OUTPUT (policy ACCEPT 5 packets, 458 bytes)
Mar 7 00:41:45.391266 waagent[2100]: pkts bytes target prot opt in out source destination
Mar 7 00:41:45.391266 waagent[2100]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Mar 7 00:41:45.391266 waagent[2100]: 2 482 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Mar 7 00:41:45.391266 waagent[2100]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Mar 7 00:41:45.394532 waagent[2100]: 2026-03-07T00:41:45.394491Z INFO EnvHandler ExtHandler Current Firewall rules:
Mar 7 00:41:45.394532 waagent[2100]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 7 00:41:45.394532 waagent[2100]: pkts bytes target prot opt in out source destination
Mar 7 00:41:45.394532 waagent[2100]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Mar 7 00:41:45.394532 waagent[2100]: pkts bytes target prot opt in out source destination
Mar 7 00:41:45.394532 waagent[2100]: Chain OUTPUT (policy ACCEPT 5 packets, 458 bytes)
Mar 7 00:41:45.394532 waagent[2100]: pkts bytes target prot opt in out source destination
Mar 7 00:41:45.394532 waagent[2100]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Mar 7 00:41:45.394532 waagent[2100]: 7 950 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Mar 7 00:41:45.394532 waagent[2100]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Mar 7 00:41:45.394773 waagent[2100]: 2026-03-07T00:41:45.394702Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Mar 7 00:41:51.801885 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 7 00:41:51.803524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:41:51.908170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:41:51.910999 (kubelet)[2249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 00:41:52.050768 kubelet[2249]: E0307 00:41:52.050698 2249 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 00:41:52.053501 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 00:41:52.053615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 00:41:52.054093 systemd[1]: kubelet.service: Consumed 109ms CPU time, 109.1M memory peak.
Mar 7 00:41:59.865203 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 7 00:41:59.866491 systemd[1]: Started sshd@0-10.200.20.29:22-10.200.16.10:49834.service - OpenSSH per-connection server daemon (10.200.16.10:49834).
Mar 7 00:42:00.470886 sshd[2258]: Accepted publickey for core from 10.200.16.10 port 49834 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:42:00.471948 sshd-session[2258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:42:00.475180 systemd-logind[1864]: New session 3 of user core.
Mar 7 00:42:00.483524 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 7 00:42:00.792424 systemd[1]: Started sshd@1-10.200.20.29:22-10.200.16.10:34702.service - OpenSSH per-connection server daemon (10.200.16.10:34702).
Mar 7 00:42:01.210289 sshd[2264]: Accepted publickey for core from 10.200.16.10 port 34702 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:42:01.210999 sshd-session[2264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:42:01.214465 systemd-logind[1864]: New session 4 of user core.
Mar 7 00:42:01.221358 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 7 00:42:01.445862 sshd[2267]: Connection closed by 10.200.16.10 port 34702
Mar 7 00:42:01.445765 sshd-session[2264]: pam_unix(sshd:session): session closed for user core
Mar 7 00:42:01.449302 systemd[1]: sshd@1-10.200.20.29:22-10.200.16.10:34702.service: Deactivated successfully.
Mar 7 00:42:01.450682 systemd[1]: session-4.scope: Deactivated successfully.
Mar 7 00:42:01.451346 systemd-logind[1864]: Session 4 logged out. Waiting for processes to exit.
Mar 7 00:42:01.452416 systemd-logind[1864]: Removed session 4.
Mar 7 00:42:01.533748 systemd[1]: Started sshd@2-10.200.20.29:22-10.200.16.10:34710.service - OpenSSH per-connection server daemon (10.200.16.10:34710).
Mar 7 00:42:01.955358 sshd[2273]: Accepted publickey for core from 10.200.16.10 port 34710 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:42:01.956381 sshd-session[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:42:01.959601 systemd-logind[1864]: New session 5 of user core.
Mar 7 00:42:01.967361 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 7 00:42:02.187854 sshd[2276]: Connection closed by 10.200.16.10 port 34710
Mar 7 00:42:02.187757 sshd-session[2273]: pam_unix(sshd:session): session closed for user core
Mar 7 00:42:02.190900 systemd-logind[1864]: Session 5 logged out. Waiting for processes to exit.
Mar 7 00:42:02.191045 systemd[1]: sshd@2-10.200.20.29:22-10.200.16.10:34710.service: Deactivated successfully.
Mar 7 00:42:02.192669 systemd[1]: session-5.scope: Deactivated successfully.
Mar 7 00:42:02.193433 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 7 00:42:02.195096 systemd-logind[1864]: Removed session 5.
Mar 7 00:42:02.196064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:42:02.276753 systemd[1]: Started sshd@3-10.200.20.29:22-10.200.16.10:34720.service - OpenSSH per-connection server daemon (10.200.16.10:34720).
Mar 7 00:42:02.537976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:42:02.540945 (kubelet)[2292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 00:42:02.568245 kubelet[2292]: E0307 00:42:02.568177 2292 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 00:42:02.570323 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 00:42:02.570432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 00:42:02.571331 systemd[1]: kubelet.service: Consumed 105ms CPU time, 105M memory peak.
Mar 7 00:42:02.697714 sshd[2285]: Accepted publickey for core from 10.200.16.10 port 34720 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:42:02.698408 sshd-session[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:42:02.702169 systemd-logind[1864]: New session 6 of user core.
Mar 7 00:42:02.711356 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 7 00:42:02.931668 sshd[2300]: Connection closed by 10.200.16.10 port 34720
Mar 7 00:42:02.930766 sshd-session[2285]: pam_unix(sshd:session): session closed for user core
Mar 7 00:42:02.933692 systemd[1]: sshd@3-10.200.20.29:22-10.200.16.10:34720.service: Deactivated successfully.
Mar 7 00:42:02.935208 systemd[1]: session-6.scope: Deactivated successfully.
Mar 7 00:42:02.935921 systemd-logind[1864]: Session 6 logged out. Waiting for processes to exit.
Mar 7 00:42:02.937392 systemd-logind[1864]: Removed session 6.
Mar 7 00:42:03.016583 systemd[1]: Started sshd@4-10.200.20.29:22-10.200.16.10:34728.service - OpenSSH per-connection server daemon (10.200.16.10:34728).
Mar 7 00:42:03.433188 sshd[2306]: Accepted publickey for core from 10.200.16.10 port 34728 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:42:03.433921 sshd-session[2306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:42:03.437684 systemd-logind[1864]: New session 7 of user core.
Mar 7 00:42:03.443336 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 7 00:42:03.759921 sudo[2310]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 7 00:42:03.760137 sudo[2310]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 00:42:03.797677 sudo[2310]: pam_unix(sudo:session): session closed for user root
Mar 7 00:42:03.875779 sshd[2309]: Connection closed by 10.200.16.10 port 34728
Mar 7 00:42:03.874997 sshd-session[2306]: pam_unix(sshd:session): session closed for user core
Mar 7 00:42:03.878697 systemd-logind[1864]: Session 7 logged out. Waiting for processes to exit.
Mar 7 00:42:03.878875 systemd[1]: sshd@4-10.200.20.29:22-10.200.16.10:34728.service: Deactivated successfully.
Mar 7 00:42:03.880054 systemd[1]: session-7.scope: Deactivated successfully.
Mar 7 00:42:03.882553 systemd-logind[1864]: Removed session 7.
Mar 7 00:42:03.967845 systemd[1]: Started sshd@5-10.200.20.29:22-10.200.16.10:34744.service - OpenSSH per-connection server daemon (10.200.16.10:34744).
Mar 7 00:42:04.055351 chronyd[1837]: Selected source PHC0
Mar 7 00:42:04.392069 sshd[2316]: Accepted publickey for core from 10.200.16.10 port 34744 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:42:04.392834 sshd-session[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:42:04.396147 systemd-logind[1864]: New session 8 of user core.
Mar 7 00:42:04.404454 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 7 00:42:04.550712 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 7 00:42:04.551269 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 00:42:04.558523 sudo[2321]: pam_unix(sudo:session): session closed for user root
Mar 7 00:42:04.562125 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 7 00:42:04.562335 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 00:42:04.569643 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 7 00:42:04.594781 augenrules[2343]: No rules
Mar 7 00:42:04.595900 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 7 00:42:04.596169 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 7 00:42:04.600212 sudo[2320]: pam_unix(sudo:session): session closed for user root
Mar 7 00:42:04.678665 sshd[2319]: Connection closed by 10.200.16.10 port 34744
Mar 7 00:42:04.678507 sshd-session[2316]: pam_unix(sshd:session): session closed for user core
Mar 7 00:42:04.682921 systemd[1]: sshd@5-10.200.20.29:22-10.200.16.10:34744.service: Deactivated successfully.
Mar 7 00:42:04.684640 systemd[1]: session-8.scope: Deactivated successfully.
Mar 7 00:42:04.685569 systemd-logind[1864]: Session 8 logged out. Waiting for processes to exit.
Mar 7 00:42:04.686814 systemd-logind[1864]: Removed session 8.
Mar 7 00:42:04.774415 systemd[1]: Started sshd@6-10.200.20.29:22-10.200.16.10:34750.service - OpenSSH per-connection server daemon (10.200.16.10:34750).
Mar 7 00:42:05.195278 sshd[2352]: Accepted publickey for core from 10.200.16.10 port 34750 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:42:05.196138 sshd-session[2352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:42:05.199404 systemd-logind[1864]: New session 9 of user core.
Mar 7 00:42:05.207359 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 7 00:42:05.354030 sudo[2356]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 7 00:42:05.354567 sudo[2356]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 00:42:07.324742 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 7 00:42:07.334488 (dockerd)[2373]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 7 00:42:08.568553 dockerd[2373]: time="2026-03-07T00:42:08.568272954Z" level=info msg="Starting up"
Mar 7 00:42:08.570406 dockerd[2373]: time="2026-03-07T00:42:08.570265674Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 7 00:42:08.577878 dockerd[2373]: time="2026-03-07T00:42:08.577847688Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 7 00:42:08.611712 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport950285263-merged.mount: Deactivated successfully.
Mar 7 00:42:08.649098 dockerd[2373]: time="2026-03-07T00:42:08.649040495Z" level=info msg="Loading containers: start."
Mar 7 00:42:08.683269 kernel: Initializing XFRM netlink socket
Mar 7 00:42:08.964096 systemd-networkd[1475]: docker0: Link UP
Mar 7 00:42:08.981285 dockerd[2373]: time="2026-03-07T00:42:08.981245154Z" level=info msg="Loading containers: done."
Mar 7 00:42:08.990350 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3314588756-merged.mount: Deactivated successfully.
Mar 7 00:42:09.003362 dockerd[2373]: time="2026-03-07T00:42:09.003304072Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 7 00:42:09.003749 dockerd[2373]: time="2026-03-07T00:42:09.003515157Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 7 00:42:09.003749 dockerd[2373]: time="2026-03-07T00:42:09.003606824Z" level=info msg="Initializing buildkit"
Mar 7 00:42:09.059997 dockerd[2373]: time="2026-03-07T00:42:09.059958188Z" level=info msg="Completed buildkit initialization"
Mar 7 00:42:09.064629 dockerd[2373]: time="2026-03-07T00:42:09.064599143Z" level=info msg="Daemon has completed initialization"
Mar 7 00:42:09.064855 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 7 00:42:09.065037 dockerd[2373]: time="2026-03-07T00:42:09.064991786Z" level=info msg="API listen on /run/docker.sock"
Mar 7 00:42:09.353553 containerd[1887]: time="2026-03-07T00:42:09.353509455Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\""
Mar 7 00:42:10.292187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1996002756.mount: Deactivated successfully.
Mar 7 00:42:11.520832 containerd[1887]: time="2026-03-07T00:42:11.520776686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:11.523380 containerd[1887]: time="2026-03-07T00:42:11.523305661Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=24701796"
Mar 7 00:42:11.526582 containerd[1887]: time="2026-03-07T00:42:11.526538089Z" level=info msg="ImageCreate event name:\"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:11.532554 containerd[1887]: time="2026-03-07T00:42:11.532511529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:11.533324 containerd[1887]: time="2026-03-07T00:42:11.532985622Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"24698395\" in 2.17942807s"
Mar 7 00:42:11.533324 containerd[1887]: time="2026-03-07T00:42:11.533020103Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\""
Mar 7 00:42:11.533831 containerd[1887]: time="2026-03-07T00:42:11.533808662Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\""
Mar 7 00:42:12.727994 containerd[1887]: time="2026-03-07T00:42:12.727530079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:12.730361 containerd[1887]: time="2026-03-07T00:42:12.730336209Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=19063039"
Mar 7 00:42:12.734156 containerd[1887]: time="2026-03-07T00:42:12.734133767Z" level=info msg="ImageCreate event name:\"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:12.738287 containerd[1887]: time="2026-03-07T00:42:12.738263854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:12.738796 containerd[1887]: time="2026-03-07T00:42:12.738772793Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"20675140\" in 1.204934675s"
Mar 7 00:42:12.738879 containerd[1887]: time="2026-03-07T00:42:12.738864806Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\""
Mar 7 00:42:12.739714 containerd[1887]: time="2026-03-07T00:42:12.739686321Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\""
Mar 7 00:42:12.801828 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 7 00:42:12.803045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:42:12.892624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:42:12.898448 (kubelet)[2651]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 00:42:12.921955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 00:42:13.286551 kubelet[2651]: E0307 00:42:12.920753 2651 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 00:42:12.922041 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 00:42:12.922406 systemd[1]: kubelet.service: Consumed 101ms CPU time, 107.1M memory peak.
Mar 7 00:42:14.327255 containerd[1887]: time="2026-03-07T00:42:14.326769281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:14.330161 containerd[1887]: time="2026-03-07T00:42:14.330140785Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=13797901"
Mar 7 00:42:14.333496 containerd[1887]: time="2026-03-07T00:42:14.333474726Z" level=info msg="ImageCreate event name:\"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:14.337184 containerd[1887]: time="2026-03-07T00:42:14.337163871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:14.337795 containerd[1887]: time="2026-03-07T00:42:14.337700203Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"15410020\" in 1.597981288s"
Mar 7 00:42:14.337795 containerd[1887]: time="2026-03-07T00:42:14.337723308Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\""
Mar 7 00:42:14.338517 containerd[1887]: time="2026-03-07T00:42:14.338454058Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\""
Mar 7 00:42:15.776118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1826339171.mount: Deactivated successfully.
Mar 7 00:42:15.990848 containerd[1887]: time="2026-03-07T00:42:15.990795243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:15.993562 containerd[1887]: time="2026-03-07T00:42:15.993541314Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=22329583"
Mar 7 00:42:15.997241 containerd[1887]: time="2026-03-07T00:42:15.997200168Z" level=info msg="ImageCreate event name:\"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:16.001315 containerd[1887]: time="2026-03-07T00:42:16.001284709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:16.001914 containerd[1887]: time="2026-03-07T00:42:16.001789640Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"22328602\" in 1.663134331s"
Mar 7 00:42:16.001914 containerd[1887]: time="2026-03-07T00:42:16.001820729Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\""
Mar 7 00:42:16.004126 containerd[1887]: time="2026-03-07T00:42:16.004029188Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Mar 7 00:42:16.695839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount687824054.mount: Deactivated successfully.
Mar 7 00:42:17.811432 containerd[1887]: time="2026-03-07T00:42:17.811374442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:17.817619 containerd[1887]: time="2026-03-07T00:42:17.817591222Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=21172211"
Mar 7 00:42:17.820530 containerd[1887]: time="2026-03-07T00:42:17.820505379Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:17.825909 containerd[1887]: time="2026-03-07T00:42:17.825880616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:17.827056 containerd[1887]: time="2026-03-07T00:42:17.827030458Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.822976773s"
Mar 7 00:42:17.827079 containerd[1887]: time="2026-03-07T00:42:17.827065811Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
Mar 7 00:42:17.827528 containerd[1887]: time="2026-03-07T00:42:17.827507920Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 7 00:42:18.370568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4163085532.mount: Deactivated successfully.
Mar 7 00:42:18.394830 containerd[1887]: time="2026-03-07T00:42:18.394788596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:18.398519 containerd[1887]: time="2026-03-07T00:42:18.398491017Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
Mar 7 00:42:18.401309 containerd[1887]: time="2026-03-07T00:42:18.401274299Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:18.407175 containerd[1887]: time="2026-03-07T00:42:18.406473579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:18.407175 containerd[1887]: time="2026-03-07T00:42:18.406882239Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 579.349135ms"
Mar 7 00:42:18.407175 containerd[1887]: time="2026-03-07T00:42:18.406901375Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Mar 7 00:42:18.407516 containerd[1887]: time="2026-03-07T00:42:18.407498265Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 7 00:42:19.150704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2357646853.mount: Deactivated successfully.
Mar 7 00:42:20.049256 containerd[1887]: time="2026-03-07T00:42:20.048916212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:20.051459 containerd[1887]: time="2026-03-07T00:42:20.051433886Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21738165"
Mar 7 00:42:20.054937 containerd[1887]: time="2026-03-07T00:42:20.054897731Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:20.058979 containerd[1887]: time="2026-03-07T00:42:20.058939377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:20.059435 containerd[1887]: time="2026-03-07T00:42:20.059405471Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 1.651823284s"
Mar 7 00:42:20.059544 containerd[1887]: time="2026-03-07T00:42:20.059526787Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\""
Mar 7 00:42:21.355110 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:42:21.355569 systemd[1]: kubelet.service: Consumed 101ms CPU time, 107.1M memory peak.
Mar 7 00:42:21.357814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:42:21.379347 systemd[1]: Reload requested from client PID 2817 ('systemctl') (unit session-9.scope)...
Mar 7 00:42:21.379361 systemd[1]: Reloading...
Mar 7 00:42:21.471248 zram_generator::config[2864]: No configuration found.
Mar 7 00:42:21.625652 systemd[1]: Reloading finished in 246 ms.
Mar 7 00:42:21.658577 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 7 00:42:21.658776 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 7 00:42:21.659057 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:42:21.659189 systemd[1]: kubelet.service: Consumed 74ms CPU time, 95M memory peak.
Mar 7 00:42:21.660671 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:42:21.868436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:42:21.871875 (kubelet)[2931]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 00:42:22.008261 kubelet[2931]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 00:42:22.163274 kubelet[2931]: I0307 00:42:22.162937 2931 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 7 00:42:22.163274 kubelet[2931]: I0307 00:42:22.162993 2931 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 00:42:22.163274 kubelet[2931]: I0307 00:42:22.163017 2931 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 7 00:42:22.163274 kubelet[2931]: I0307 00:42:22.163021 2931 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 00:42:22.163274 kubelet[2931]: I0307 00:42:22.163207 2931 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 7 00:42:22.398147 kubelet[2931]: E0307 00:42:22.398060 2931 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.29:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 00:42:22.400169 kubelet[2931]: I0307 00:42:22.400033 2931 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 00:42:22.403496 kubelet[2931]: I0307 00:42:22.403382 2931 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 7 00:42:22.406117 kubelet[2931]: I0307 00:42:22.406100 2931 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 7 00:42:22.407083 kubelet[2931]: I0307 00:42:22.407000 2931 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 00:42:22.407315 kubelet[2931]: I0307 00:42:22.407034 2931 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.3-n-e6e869ea98","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 7 00:42:22.407315 kubelet[2931]: I0307 00:42:22.407266 2931 topology_manager.go:143] "Creating topology manager with none policy"
Mar 7 00:42:22.407315 kubelet[2931]: I0307 00:42:22.407272 2931 container_manager_linux.go:308] "Creating device plugin manager"
Mar 7 00:42:22.407640 kubelet[2931]: I0307 00:42:22.407522 2931 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 7 00:42:22.414292 kubelet[2931]: I0307 00:42:22.414278 2931 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 7 00:42:22.414573 kubelet[2931]: I0307 00:42:22.414500 2931 kubelet.go:482] "Attempting to sync node with API server"
Mar 7 00:42:22.414573 kubelet[2931]: I0307 00:42:22.414514 2931 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 00:42:22.414573 kubelet[2931]: I0307 00:42:22.414529 2931 kubelet.go:394] "Adding apiserver pod source"
Mar 7 00:42:22.414573 kubelet[2931]: I0307 00:42:22.414536 2931 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 00:42:22.418097 kubelet[2931]: I0307 00:42:22.417488 2931 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 7 00:42:22.418210 kubelet[2931]: I0307 00:42:22.418075 2931 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 00:42:22.418302 kubelet[2931]: I0307 00:42:22.418291 2931 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 7 00:42:22.418376 kubelet[2931]: W0307 00:42:22.418366 2931 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 7 00:42:22.421096 kubelet[2931]: I0307 00:42:22.420410 2931 server.go:1257] "Started kubelet"
Mar 7 00:42:22.421698 kubelet[2931]: I0307 00:42:22.421681 2931 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 7 00:42:22.424919 kubelet[2931]: E0307 00:42:22.423561 2931 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.29:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.29:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.3-n-e6e869ea98.189a685ac7195644 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.3-n-e6e869ea98,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.3-n-e6e869ea98,},FirstTimestamp:2026-03-07 00:42:22.4203833 +0000 UTC m=+0.545658939,LastTimestamp:2026-03-07 00:42:22.4203833 +0000 UTC m=+0.545658939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.3-n-e6e869ea98,}"
Mar 7 00:42:22.425929 kubelet[2931]: I0307 00:42:22.425892 2931 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 00:42:22.427305 kubelet[2931]: I0307 00:42:22.427290 2931 server.go:317] "Adding debug handlers to kubelet server"
Mar 7 00:42:22.427458 kubelet[2931]: E0307 00:42:22.427434 2931 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-e6e869ea98\" not found"
Mar 7 00:42:22.427458 kubelet[2931]: I0307 00:42:22.427334 2931 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 7 00:42:22.427753 kubelet[2931]: I0307 00:42:22.427341 2931 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 7 00:42:22.427787 kubelet[2931]: I0307 00:42:22.427775 2931 reconciler.go:29] "Reconciler: start to sync state"
Mar 7 00:42:22.428021 kubelet[2931]: E0307 00:42:22.427996 2931 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-n-e6e869ea98?timeout=10s\": dial tcp 10.200.20.29:6443: connect: connection refused" interval="200ms"
Mar 7 00:42:22.429022 kubelet[2931]: I0307 00:42:22.428205 2931 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 00:42:22.429022 kubelet[2931]: I0307 00:42:22.428344 2931 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 7 00:42:22.429022 kubelet[2931]: I0307 00:42:22.428505 2931 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 00:42:22.429022 kubelet[2931]: I0307 00:42:22.428559 2931 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 00:42:22.429769 kubelet[2931]: I0307 00:42:22.427291 2931 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 7 00:42:22.433607 kubelet[2931]: I0307 00:42:22.433590 2931 factory.go:223] Registration of the systemd container factory successfully
Mar 7 00:42:22.434194 kubelet[2931]: I0307 00:42:22.434174 2931 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 00:42:22.434621 kubelet[2931]: E0307 00:42:22.434606 2931 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 7 00:42:22.437273 kubelet[2931]: I0307 00:42:22.436433 2931 factory.go:223] Registration of the containerd container factory successfully
Mar 7 00:42:22.447712 kubelet[2931]: I0307 00:42:22.447686 2931 cpu_manager.go:225] "Starting" policy="none"
Mar 7 00:42:22.447712 kubelet[2931]: I0307 00:42:22.447700 2931 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 7 00:42:22.447712 kubelet[2931]: I0307 00:42:22.447715 2931 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 7 00:42:22.455032 kubelet[2931]: I0307 00:42:22.455008 2931 policy_none.go:50] "Start"
Mar 7 00:42:22.455032 kubelet[2931]: I0307 00:42:22.455031 2931 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 7 00:42:22.455122 kubelet[2931]: I0307 00:42:22.455041 2931 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 7 00:42:22.460567 kubelet[2931]: I0307 00:42:22.460545 2931 policy_none.go:44] "Start"
Mar 7 00:42:22.463963 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 7 00:42:22.470240 kubelet[2931]: I0307 00:42:22.470144 2931 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 7 00:42:22.470240 kubelet[2931]: I0307 00:42:22.470174 2931 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 7 00:42:22.470240 kubelet[2931]: I0307 00:42:22.470192 2931 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 7 00:42:22.470477 kubelet[2931]: E0307 00:42:22.470446 2931 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 00:42:22.473815 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 7 00:42:22.476429 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 7 00:42:22.483772 kubelet[2931]: E0307 00:42:22.483741 2931 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 00:42:22.483930 kubelet[2931]: I0307 00:42:22.483913 2931 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 7 00:42:22.483950 kubelet[2931]: I0307 00:42:22.483928 2931 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 00:42:22.484964 kubelet[2931]: I0307 00:42:22.484881 2931 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 7 00:42:22.486447 kubelet[2931]: E0307 00:42:22.486425 2931 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 00:42:22.486508 kubelet[2931]: E0307 00:42:22.486459 2931 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.3-n-e6e869ea98\" not found"
Mar 7 00:42:22.586255 kubelet[2931]: I0307 00:42:22.585950 2931 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:22.586525 kubelet[2931]: E0307 00:42:22.586502 2931 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.29:6443/api/v1/nodes\": dial tcp 10.200.20.29:6443: connect: connection refused" node="ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:22.587159 systemd[1]: Created slice kubepods-burstable-pod61719c21d0c69069530ece22b99ac694.slice - libcontainer container kubepods-burstable-pod61719c21d0c69069530ece22b99ac694.slice.
Mar 7 00:42:22.592819 kubelet[2931]: E0307 00:42:22.592796 2931 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-e6e869ea98\" not found" node="ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:22.597304 systemd[1]: Created slice kubepods-burstable-pod262892673b7043436c2ab9b6db19f59f.slice - libcontainer container kubepods-burstable-pod262892673b7043436c2ab9b6db19f59f.slice.
Mar 7 00:42:22.611630 kubelet[2931]: E0307 00:42:22.611489 2931 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-e6e869ea98\" not found" node="ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:22.613930 systemd[1]: Created slice kubepods-burstable-podfe9153701148701313d1c1833e3f483b.slice - libcontainer container kubepods-burstable-podfe9153701148701313d1c1833e3f483b.slice.
Mar 7 00:42:22.615236 kubelet[2931]: E0307 00:42:22.615202 2931 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-e6e869ea98\" not found" node="ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:22.628947 kubelet[2931]: I0307 00:42:22.628708 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61719c21d0c69069530ece22b99ac694-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.3-n-e6e869ea98\" (UID: \"61719c21d0c69069530ece22b99ac694\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:22.628947 kubelet[2931]: I0307 00:42:22.628743 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/262892673b7043436c2ab9b6db19f59f-ca-certs\") pod \"kube-controller-manager-ci-4459.2.3-n-e6e869ea98\" (UID: \"262892673b7043436c2ab9b6db19f59f\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:22.628947 kubelet[2931]: I0307 00:42:22.628755 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/262892673b7043436c2ab9b6db19f59f-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.3-n-e6e869ea98\" (UID: \"262892673b7043436c2ab9b6db19f59f\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:22.628947 kubelet[2931]: I0307 00:42:22.628766 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/262892673b7043436c2ab9b6db19f59f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.3-n-e6e869ea98\" (UID: \"262892673b7043436c2ab9b6db19f59f\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:22.628947 kubelet[2931]: E0307 00:42:22.628786 2931 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-n-e6e869ea98?timeout=10s\": dial tcp 10.200.20.29:6443: connect: connection refused" interval="400ms"
Mar 7 00:42:22.629108 kubelet[2931]: I0307 00:42:22.628866 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe9153701148701313d1c1833e3f483b-kubeconfig\") pod \"kube-scheduler-ci-4459.2.3-n-e6e869ea98\" (UID: \"fe9153701148701313d1c1833e3f483b\") " pod="kube-system/kube-scheduler-ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:22.629108 kubelet[2931]: I0307 00:42:22.628880 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61719c21d0c69069530ece22b99ac694-ca-certs\") pod \"kube-apiserver-ci-4459.2.3-n-e6e869ea98\" (UID: \"61719c21d0c69069530ece22b99ac694\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:22.629108 kubelet[2931]: I0307 00:42:22.628892 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61719c21d0c69069530ece22b99ac694-k8s-certs\") pod \"kube-apiserver-ci-4459.2.3-n-e6e869ea98\" (UID: \"61719c21d0c69069530ece22b99ac694\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:22.629108 kubelet[2931]: I0307 00:42:22.628901 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/262892673b7043436c2ab9b6db19f59f-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.3-n-e6e869ea98\" (UID: \"262892673b7043436c2ab9b6db19f59f\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:22.629108 kubelet[2931]: I0307 00:42:22.628911 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/262892673b7043436c2ab9b6db19f59f-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.3-n-e6e869ea98\" (UID: \"262892673b7043436c2ab9b6db19f59f\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:22.789657 kubelet[2931]: I0307 00:42:22.789280 2931 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:22.789657 kubelet[2931]: E0307 00:42:22.789580 2931 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.29:6443/api/v1/nodes\": dial tcp 10.200.20.29:6443: connect: connection refused" node="ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:22.800136 kubelet[2931]: E0307 00:42:22.800033 2931 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.29:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.29:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.3-n-e6e869ea98.189a685ac7195644 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.3-n-e6e869ea98,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.3-n-e6e869ea98,},FirstTimestamp:2026-03-07 00:42:22.4203833 +0000 UTC m=+0.545658939,LastTimestamp:2026-03-07 00:42:22.4203833 +0000 UTC m=+0.545658939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.3-n-e6e869ea98,}"
Mar 7 00:42:22.900830 containerd[1887]: time="2026-03-07T00:42:22.900786179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.3-n-e6e869ea98,Uid:61719c21d0c69069530ece22b99ac694,Namespace:kube-system,Attempt:0,}"
Mar 7 00:42:22.917555 containerd[1887]: time="2026-03-07T00:42:22.917305842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.3-n-e6e869ea98,Uid:262892673b7043436c2ab9b6db19f59f,Namespace:kube-system,Attempt:0,}"
Mar 7 00:42:22.922980 containerd[1887]: time="2026-03-07T00:42:22.922950028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.3-n-e6e869ea98,Uid:fe9153701148701313d1c1833e3f483b,Namespace:kube-system,Attempt:0,}"
Mar 7 00:42:23.030337 kubelet[2931]: E0307 00:42:23.030304 2931 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-n-e6e869ea98?timeout=10s\": dial tcp 10.200.20.29:6443: connect: connection refused" interval="800ms"
Mar 7 00:42:23.191242 kubelet[2931]: I0307 00:42:23.191200 2931 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:23.191702 kubelet[2931]: E0307 00:42:23.191668 2931 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.29:6443/api/v1/nodes\": dial tcp 10.200.20.29:6443: connect: connection refused" node="ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:23.304250 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Mar 7 00:42:23.544508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount986365578.mount: Deactivated successfully.
Mar 7 00:42:23.567433 containerd[1887]: time="2026-03-07T00:42:23.567386071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 00:42:23.577002 containerd[1887]: time="2026-03-07T00:42:23.576968124Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Mar 7 00:42:23.585651 containerd[1887]: time="2026-03-07T00:42:23.585617108Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 00:42:23.591081 containerd[1887]: time="2026-03-07T00:42:23.591049802Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 00:42:23.594394 containerd[1887]: time="2026-03-07T00:42:23.594358839Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 00:42:23.597143 containerd[1887]: time="2026-03-07T00:42:23.597110736Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 7 00:42:23.599852 containerd[1887]: time="2026-03-07T00:42:23.599825328Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 7 00:42:23.602629 containerd[1887]: time="2026-03-07T00:42:23.602599874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 00:42:23.603306 containerd[1887]: time="2026-03-07T00:42:23.603283396Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 698.750734ms"
Mar 7 00:42:23.610825 containerd[1887]: time="2026-03-07T00:42:23.610797818Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 686.778433ms"
Mar 7 00:42:23.631935 containerd[1887]: time="2026-03-07T00:42:23.631903486Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 701.499103ms"
Mar 7 00:42:23.659250 containerd[1887]: time="2026-03-07T00:42:23.659134987Z" level=info msg="connecting to shim bdaa65a1f173f7f4c9606234dd274bf2ecbfbba19954f95af5f9dd0a447c1efc" address="unix:///run/containerd/s/490d06230635cfdf56fd032244bb74d3a329ed832e39b03de54a487207b56c16" namespace=k8s.io protocol=ttrpc version=3
Mar 7 00:42:23.669756 containerd[1887]: time="2026-03-07T00:42:23.669720475Z" level=info msg="connecting to shim 2b0885acc7e5ccea44899ec87ba73fb44faf9666b3d5c53940a49ea500d46d5e" address="unix:///run/containerd/s/a5c6b0dc9af2b40bdba14cd4c1e6baf9405cf5eba10417796d53fa8082aff9dd" namespace=k8s.io protocol=ttrpc version=3
Mar 7 00:42:23.685480 systemd[1]: Started cri-containerd-bdaa65a1f173f7f4c9606234dd274bf2ecbfbba19954f95af5f9dd0a447c1efc.scope - libcontainer container bdaa65a1f173f7f4c9606234dd274bf2ecbfbba19954f95af5f9dd0a447c1efc.
Mar 7 00:42:23.688864 systemd[1]: Started cri-containerd-2b0885acc7e5ccea44899ec87ba73fb44faf9666b3d5c53940a49ea500d46d5e.scope - libcontainer container 2b0885acc7e5ccea44899ec87ba73fb44faf9666b3d5c53940a49ea500d46d5e.
Mar 7 00:42:23.705771 containerd[1887]: time="2026-03-07T00:42:23.705724365Z" level=info msg="connecting to shim f8080a63ed31e107042648e4b8cb0d6be4fc297c6c961f578af70eb454158fb0" address="unix:///run/containerd/s/ae855bcddbf2d0a9960a08b394db44689e962426934e0b48482b606e8c60c94b" namespace=k8s.io protocol=ttrpc version=3
Mar 7 00:42:23.733512 systemd[1]: Started cri-containerd-f8080a63ed31e107042648e4b8cb0d6be4fc297c6c961f578af70eb454158fb0.scope - libcontainer container f8080a63ed31e107042648e4b8cb0d6be4fc297c6c961f578af70eb454158fb0.
Mar 7 00:42:23.739398 containerd[1887]: time="2026-03-07T00:42:23.739334384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.3-n-e6e869ea98,Uid:262892673b7043436c2ab9b6db19f59f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b0885acc7e5ccea44899ec87ba73fb44faf9666b3d5c53940a49ea500d46d5e\""
Mar 7 00:42:23.743073 containerd[1887]: time="2026-03-07T00:42:23.743051073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.3-n-e6e869ea98,Uid:61719c21d0c69069530ece22b99ac694,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdaa65a1f173f7f4c9606234dd274bf2ecbfbba19954f95af5f9dd0a447c1efc\""
Mar 7 00:42:23.750962 containerd[1887]: time="2026-03-07T00:42:23.750940675Z" level=info msg="CreateContainer within sandbox \"2b0885acc7e5ccea44899ec87ba73fb44faf9666b3d5c53940a49ea500d46d5e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 7 00:42:23.758241 containerd[1887]: time="2026-03-07T00:42:23.758008771Z" level=info msg="CreateContainer within sandbox \"bdaa65a1f173f7f4c9606234dd274bf2ecbfbba19954f95af5f9dd0a447c1efc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 7 00:42:23.784292 containerd[1887]: time="2026-03-07T00:42:23.784254303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.3-n-e6e869ea98,Uid:fe9153701148701313d1c1833e3f483b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8080a63ed31e107042648e4b8cb0d6be4fc297c6c961f578af70eb454158fb0\""
Mar 7 00:42:23.801944 containerd[1887]: time="2026-03-07T00:42:23.801909599Z" level=info msg="Container c801eb60f41538df2dabe60d98a2236c41e8f18b0e0f2f086396d7dfe557c776: CDI devices from CRI Config.CDIDevices: []"
Mar 7 00:42:23.804242 containerd[1887]: time="2026-03-07T00:42:23.804130573Z" level=info msg="CreateContainer within sandbox \"f8080a63ed31e107042648e4b8cb0d6be4fc297c6c961f578af70eb454158fb0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 7 00:42:23.830712 kubelet[2931]: E0307 00:42:23.830674 2931 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-n-e6e869ea98?timeout=10s\": dial tcp 10.200.20.29:6443: connect: connection refused" interval="1.6s"
Mar 7 00:42:23.831884 containerd[1887]: time="2026-03-07T00:42:23.831848899Z" level=info msg="Container 282c2f5071f044f7b4e91c764bc80d54cedbb7b65b478654a664861c89598a50: CDI devices from CRI Config.CDIDevices: []"
Mar 7 00:42:23.848015 containerd[1887]: time="2026-03-07T00:42:23.847973486Z" level=info msg="CreateContainer within sandbox \"bdaa65a1f173f7f4c9606234dd274bf2ecbfbba19954f95af5f9dd0a447c1efc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"282c2f5071f044f7b4e91c764bc80d54cedbb7b65b478654a664861c89598a50\""
Mar 7 00:42:23.848699 containerd[1887]: time="2026-03-07T00:42:23.848674449Z" level=info msg="StartContainer for \"282c2f5071f044f7b4e91c764bc80d54cedbb7b65b478654a664861c89598a50\""
Mar 7 00:42:23.849763 containerd[1887]: time="2026-03-07T00:42:23.849734494Z" level=info msg="connecting to shim 282c2f5071f044f7b4e91c764bc80d54cedbb7b65b478654a664861c89598a50" address="unix:///run/containerd/s/490d06230635cfdf56fd032244bb74d3a329ed832e39b03de54a487207b56c16" protocol=ttrpc version=3
Mar 7 00:42:23.853163 containerd[1887]: time="2026-03-07T00:42:23.853089413Z" level=info msg="CreateContainer within sandbox \"2b0885acc7e5ccea44899ec87ba73fb44faf9666b3d5c53940a49ea500d46d5e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c801eb60f41538df2dabe60d98a2236c41e8f18b0e0f2f086396d7dfe557c776\""
Mar 7 00:42:23.853681 containerd[1887]: time="2026-03-07T00:42:23.853662258Z" level=info msg="StartContainer for \"c801eb60f41538df2dabe60d98a2236c41e8f18b0e0f2f086396d7dfe557c776\""
Mar 7 00:42:23.854684 containerd[1887]: time="2026-03-07T00:42:23.854589624Z" level=info msg="connecting to shim c801eb60f41538df2dabe60d98a2236c41e8f18b0e0f2f086396d7dfe557c776" address="unix:///run/containerd/s/a5c6b0dc9af2b40bdba14cd4c1e6baf9405cf5eba10417796d53fa8082aff9dd" protocol=ttrpc version=3
Mar 7 00:42:23.855731 containerd[1887]: time="2026-03-07T00:42:23.855710864Z" level=info msg="Container fa720b54eff0001e5331ebd6870307d58527d9fd02affe5ded456b9c80ff8bb5: CDI devices from CRI Config.CDIDevices: []"
Mar 7 00:42:23.869427 systemd[1]: Started cri-containerd-282c2f5071f044f7b4e91c764bc80d54cedbb7b65b478654a664861c89598a50.scope - libcontainer container 282c2f5071f044f7b4e91c764bc80d54cedbb7b65b478654a664861c89598a50.
Mar 7 00:42:23.873336 systemd[1]: Started cri-containerd-c801eb60f41538df2dabe60d98a2236c41e8f18b0e0f2f086396d7dfe557c776.scope - libcontainer container c801eb60f41538df2dabe60d98a2236c41e8f18b0e0f2f086396d7dfe557c776.
Mar 7 00:42:23.876418 containerd[1887]: time="2026-03-07T00:42:23.876384302Z" level=info msg="CreateContainer within sandbox \"f8080a63ed31e107042648e4b8cb0d6be4fc297c6c961f578af70eb454158fb0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fa720b54eff0001e5331ebd6870307d58527d9fd02affe5ded456b9c80ff8bb5\""
Mar 7 00:42:23.876894 containerd[1887]: time="2026-03-07T00:42:23.876752929Z" level=info msg="StartContainer for \"fa720b54eff0001e5331ebd6870307d58527d9fd02affe5ded456b9c80ff8bb5\""
Mar 7 00:42:23.879171 containerd[1887]: time="2026-03-07T00:42:23.879130519Z" level=info msg="connecting to shim fa720b54eff0001e5331ebd6870307d58527d9fd02affe5ded456b9c80ff8bb5" address="unix:///run/containerd/s/ae855bcddbf2d0a9960a08b394db44689e962426934e0b48482b606e8c60c94b" protocol=ttrpc version=3
Mar 7 00:42:23.899630 systemd[1]: Started cri-containerd-fa720b54eff0001e5331ebd6870307d58527d9fd02affe5ded456b9c80ff8bb5.scope - libcontainer container fa720b54eff0001e5331ebd6870307d58527d9fd02affe5ded456b9c80ff8bb5.
Mar 7 00:42:23.934166 containerd[1887]: time="2026-03-07T00:42:23.934067521Z" level=info msg="StartContainer for \"282c2f5071f044f7b4e91c764bc80d54cedbb7b65b478654a664861c89598a50\" returns successfully"
Mar 7 00:42:23.935396 containerd[1887]: time="2026-03-07T00:42:23.935372418Z" level=info msg="StartContainer for \"c801eb60f41538df2dabe60d98a2236c41e8f18b0e0f2f086396d7dfe557c776\" returns successfully"
Mar 7 00:42:23.959207 containerd[1887]: time="2026-03-07T00:42:23.959165396Z" level=info msg="StartContainer for \"fa720b54eff0001e5331ebd6870307d58527d9fd02affe5ded456b9c80ff8bb5\" returns successfully"
Mar 7 00:42:23.994619 kubelet[2931]: I0307 00:42:23.994590 2931 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:24.480248 kubelet[2931]: E0307 00:42:24.479445 2931 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-e6e869ea98\" not found" node="ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:24.482274 kubelet[2931]: E0307 00:42:24.482038 2931 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-e6e869ea98\" not found" node="ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:24.484119 kubelet[2931]: E0307 00:42:24.484104 2931 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-e6e869ea98\" not found" node="ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:24.801314 kubelet[2931]: I0307 00:42:24.801035 2931 kubelet_node_status.go:77] "Successfully registered node" node="ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:24.801314 kubelet[2931]: E0307 00:42:24.801071 2931 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"ci-4459.2.3-n-e6e869ea98\": node \"ci-4459.2.3-n-e6e869ea98\" not found"
Mar 7 00:42:24.826103 kubelet[2931]: E0307 00:42:24.826075 2931 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-e6e869ea98\" not found"
Mar 7 00:42:24.926537 kubelet[2931]: E0307 00:42:24.926493 2931 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-e6e869ea98\" not found"
Mar 7 00:42:25.027036 kubelet[2931]: E0307 00:42:25.027001 2931 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-e6e869ea98\" not found"
Mar 7 00:42:25.127261 kubelet[2931]: E0307 00:42:25.127122 2931 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-e6e869ea98\" not found"
Mar 7 00:42:25.227531 kubelet[2931]: E0307 00:42:25.227487 2931 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-e6e869ea98\" not found"
Mar 7 00:42:25.278128 update_engine[1869]: I20260307 00:42:25.277603 1869 update_attempter.cc:509] Updating boot flags...
Mar 7 00:42:25.328450 kubelet[2931]: E0307 00:42:25.328408 2931 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-e6e869ea98\" not found"
Mar 7 00:42:25.429464 kubelet[2931]: E0307 00:42:25.429348 2931 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-e6e869ea98\" not found"
Mar 7 00:42:25.486753 kubelet[2931]: E0307 00:42:25.486723 2931 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-e6e869ea98\" not found" node="ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:25.487053 kubelet[2931]: E0307 00:42:25.486965 2931 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-e6e869ea98\" not found" node="ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:25.487153 kubelet[2931]: E0307 00:42:25.487139 2931 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-e6e869ea98\" not found" node="ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:25.530466 kubelet[2931]: E0307 00:42:25.530429 2931 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-e6e869ea98\" not found"
Mar 7 00:42:25.630964 kubelet[2931]: E0307 00:42:25.630919 2931 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-e6e869ea98\" not found"
Mar 7 00:42:25.731938 kubelet[2931]: E0307 00:42:25.731820 2931 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-e6e869ea98\" not found"
Mar 7 00:42:25.832389 kubelet[2931]: E0307 00:42:25.832350 2931 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-e6e869ea98\" not found"
Mar 7 00:42:25.933253 kubelet[2931]: E0307 00:42:25.933205 2931 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-e6e869ea98\" not found"
Mar 7 00:42:26.033843 kubelet[2931]: E0307 00:42:26.033737 2931 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-e6e869ea98\" not found"
Mar 7 00:42:26.134655 kubelet[2931]: E0307 00:42:26.134613 2931 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-e6e869ea98\" not found"
Mar 7 00:42:26.228737 kubelet[2931]: I0307 00:42:26.228527 2931 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:26.239002 kubelet[2931]: I0307 00:42:26.238972 2931 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 7 00:42:26.239320 kubelet[2931]: I0307 00:42:26.239081 2931 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:26.251303 kubelet[2931]: I0307 00:42:26.251084 2931 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 7 00:42:26.251303 kubelet[2931]: I0307 00:42:26.251160 2931 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-n-e6e869ea98"
Mar 7 00:42:26.258892 kubelet[2931]: I0307 00:42:26.258863 2931 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 7 00:42:26.419878 kubelet[2931]: I0307 00:42:26.419832 2931 apiserver.go:52] "Watching apiserver"
Mar 7 00:42:26.428650 kubelet[2931]: I0307 00:42:26.428575 2931 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 7 00:42:26.930075 systemd[1]: Reload requested from client PID 3278 ('systemctl') (unit session-9.scope)...
Mar 7 00:42:26.930364 systemd[1]: Reloading...
Mar 7 00:42:27.006297 zram_generator::config[3337]: No configuration found.
Mar 7 00:42:27.153207 systemd[1]: Reloading finished in 222 ms.
Mar 7 00:42:27.176676 kubelet[2931]: I0307 00:42:27.176623 2931 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 00:42:27.177900 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:42:27.190651 systemd[1]: kubelet.service: Deactivated successfully.
Mar 7 00:42:27.190957 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:42:27.191117 systemd[1]: kubelet.service: Consumed 446ms CPU time, 121.5M memory peak.
Mar 7 00:42:27.192509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:42:27.298250 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:42:27.305476 (kubelet)[3389]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 00:42:27.333995 kubelet[3389]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 00:42:27.340441 kubelet[3389]: I0307 00:42:27.340396 3389 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 7 00:42:27.340441 kubelet[3389]: I0307 00:42:27.340435 3389 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 00:42:27.340550 kubelet[3389]: I0307 00:42:27.340453 3389 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 7 00:42:27.340550 kubelet[3389]: I0307 00:42:27.340457 3389 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 00:42:27.340675 kubelet[3389]: I0307 00:42:27.340657 3389 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 7 00:42:27.341627 kubelet[3389]: I0307 00:42:27.341606 3389 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 7 00:42:27.390778 kubelet[3389]: I0307 00:42:27.390610 3389 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 00:42:27.397413 kubelet[3389]: I0307 00:42:27.397202 3389 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 7 00:42:27.400787 kubelet[3389]: I0307 00:42:27.400505 3389 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 7 00:42:27.400787 kubelet[3389]: I0307 00:42:27.400664 3389 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 00:42:27.400915 kubelet[3389]: I0307 00:42:27.400681 3389 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.3-n-e6e869ea98","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 7 00:42:27.401016 kubelet[3389]: I0307 00:42:27.401005 3389 topology_manager.go:143] "Creating topology manager with none policy"
Mar 7 00:42:27.401055 kubelet[3389]: I0307 00:42:27.401048 3389 container_manager_linux.go:308] "Creating device plugin manager"
Mar 7 00:42:27.401100 kubelet[3389]: I0307 00:42:27.401093 3389 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 7 00:42:27.401692 kubelet[3389]: I0307 00:42:27.401678 3389 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 7 00:42:27.402222 kubelet[3389]: I0307 00:42:27.402199 3389 kubelet.go:482] "Attempting to sync node with API server"
Mar 7 00:42:27.402311 kubelet[3389]: I0307 00:42:27.402300 3389 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 00:42:27.402372 kubelet[3389]: I0307 00:42:27.402363 3389 kubelet.go:394] "Adding apiserver pod source"
Mar 7 00:42:27.402418 kubelet[3389]: I0307 00:42:27.402410 3389 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 00:42:27.404247 kubelet[3389]: I0307 00:42:27.403714 3389 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 7 00:42:27.405769 kubelet[3389]: I0307 00:42:27.404425 3389 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 00:42:27.405769 kubelet[3389]: I0307 00:42:27.404460 3389 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 7 00:42:27.407631 kubelet[3389]: I0307 00:42:27.407612 3389 server.go:1257] "Started kubelet"
Mar 7 00:42:27.412276 kubelet[3389]: I0307 00:42:27.411765 3389 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 7 00:42:27.426207 kubelet[3389]: I0307 00:42:27.426168 3389 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 00:42:27.427637 kubelet[3389]: I0307 00:42:27.427621 3389 server.go:317] "Adding debug handlers to kubelet server"
Mar 7 00:42:27.430216 kubelet[3389]: I0307 00:42:27.430143 3389 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 00:42:27.430323 kubelet[3389]: I0307 00:42:27.430312 3389 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 7 00:42:27.430507 kubelet[3389]: I0307 00:42:27.430494 3389 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 00:42:27.430753 kubelet[3389]: I0307 00:42:27.430737 3389 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 00:42:27.431582 kubelet[3389]: E0307 00:42:27.431557 3389 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-e6e869ea98\" not found"
Mar 7 00:42:27.431640 kubelet[3389]: I0307 00:42:27.431592 3389 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 7 00:42:27.432087 kubelet[3389]: I0307 00:42:27.431719 3389 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 7 00:42:27.432087 kubelet[3389]: I0307 00:42:27.431808 3389 reconciler.go:29] "Reconciler: start to sync state"
Mar 7 00:42:27.432435 kubelet[3389]: I0307 00:42:27.432419 3389 factory.go:223] Registration of the systemd container factory successfully
Mar 7 00:42:27.432518 kubelet[3389]: I0307 00:42:27.432500 3389 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 00:42:27.438146 kubelet[3389]: I0307 00:42:27.438115 3389 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 7 00:42:27.439984 kubelet[3389]: I0307 00:42:27.439965 3389 kubelet_network_linux.go:54] "Initialized iptables rules."
protocol="IPv6" Mar 7 00:42:27.440074 kubelet[3389]: I0307 00:42:27.440066 3389 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 7 00:42:27.440134 kubelet[3389]: I0307 00:42:27.440128 3389 kubelet.go:2501] "Starting kubelet main sync loop" Mar 7 00:42:27.440444 kubelet[3389]: E0307 00:42:27.440412 3389 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 00:42:27.443696 kubelet[3389]: I0307 00:42:27.443623 3389 factory.go:223] Registration of the containerd container factory successfully Mar 7 00:42:27.450959 kubelet[3389]: E0307 00:42:27.450929 3389 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 00:42:27.476431 kubelet[3389]: I0307 00:42:27.476411 3389 cpu_manager.go:225] "Starting" policy="none" Mar 7 00:42:27.477260 kubelet[3389]: I0307 00:42:27.477209 3389 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 7 00:42:27.477512 kubelet[3389]: I0307 00:42:27.477491 3389 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 7 00:42:27.477620 kubelet[3389]: I0307 00:42:27.477602 3389 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 7 00:42:27.477659 kubelet[3389]: I0307 00:42:27.477616 3389 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 7 00:42:27.477659 kubelet[3389]: I0307 00:42:27.477631 3389 policy_none.go:50] "Start" Mar 7 00:42:27.477659 kubelet[3389]: I0307 00:42:27.477637 3389 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 00:42:27.477659 kubelet[3389]: I0307 00:42:27.477644 3389 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 00:42:27.477718 
kubelet[3389]: I0307 00:42:27.477714 3389 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 7 00:42:27.477734 kubelet[3389]: I0307 00:42:27.477720 3389 policy_none.go:44] "Start" Mar 7 00:42:27.482026 kubelet[3389]: E0307 00:42:27.481779 3389 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 00:42:27.482782 kubelet[3389]: I0307 00:42:27.482621 3389 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 7 00:42:27.482782 kubelet[3389]: I0307 00:42:27.482644 3389 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 00:42:27.484303 kubelet[3389]: I0307 00:42:27.484288 3389 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 7 00:42:27.486254 kubelet[3389]: E0307 00:42:27.486098 3389 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 7 00:42:27.542063 kubelet[3389]: I0307 00:42:27.541981 3389 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-e6e869ea98" Mar 7 00:42:27.542063 kubelet[3389]: I0307 00:42:27.541994 3389 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-n-e6e869ea98" Mar 7 00:42:27.543760 kubelet[3389]: I0307 00:42:27.543742 3389 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.3-n-e6e869ea98" Mar 7 00:42:27.552168 kubelet[3389]: I0307 00:42:27.552066 3389 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 00:42:27.552168 kubelet[3389]: E0307 00:42:27.552114 3389 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.3-n-e6e869ea98\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-e6e869ea98" Mar 7 00:42:27.553025 kubelet[3389]: I0307 00:42:27.553004 3389 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 00:42:27.553202 kubelet[3389]: I0307 00:42:27.553191 3389 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 00:42:27.553381 kubelet[3389]: E0307 00:42:27.553294 3389 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.3-n-e6e869ea98\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.3-n-e6e869ea98" Mar 7 00:42:27.553509 kubelet[3389]: E0307 00:42:27.553473 3389 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.3-n-e6e869ea98\" already exists" 
pod="kube-system/kube-scheduler-ci-4459.2.3-n-e6e869ea98" Mar 7 00:42:27.585243 kubelet[3389]: I0307 00:42:27.585167 3389 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459.2.3-n-e6e869ea98" Mar 7 00:42:27.600717 kubelet[3389]: I0307 00:42:27.600684 3389 kubelet_node_status.go:123] "Node was previously registered" node="ci-4459.2.3-n-e6e869ea98" Mar 7 00:42:27.600785 kubelet[3389]: I0307 00:42:27.600751 3389 kubelet_node_status.go:77] "Successfully registered node" node="ci-4459.2.3-n-e6e869ea98" Mar 7 00:42:27.733859 kubelet[3389]: I0307 00:42:27.733703 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/262892673b7043436c2ab9b6db19f59f-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.3-n-e6e869ea98\" (UID: \"262892673b7043436c2ab9b6db19f59f\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-e6e869ea98" Mar 7 00:42:27.733859 kubelet[3389]: I0307 00:42:27.733739 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/262892673b7043436c2ab9b6db19f59f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.3-n-e6e869ea98\" (UID: \"262892673b7043436c2ab9b6db19f59f\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-e6e869ea98" Mar 7 00:42:27.733859 kubelet[3389]: I0307 00:42:27.733762 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe9153701148701313d1c1833e3f483b-kubeconfig\") pod \"kube-scheduler-ci-4459.2.3-n-e6e869ea98\" (UID: \"fe9153701148701313d1c1833e3f483b\") " pod="kube-system/kube-scheduler-ci-4459.2.3-n-e6e869ea98" Mar 7 00:42:27.733859 kubelet[3389]: I0307 00:42:27.733786 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/61719c21d0c69069530ece22b99ac694-k8s-certs\") pod \"kube-apiserver-ci-4459.2.3-n-e6e869ea98\" (UID: \"61719c21d0c69069530ece22b99ac694\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-e6e869ea98" Mar 7 00:42:27.733859 kubelet[3389]: I0307 00:42:27.733795 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61719c21d0c69069530ece22b99ac694-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.3-n-e6e869ea98\" (UID: \"61719c21d0c69069530ece22b99ac694\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-e6e869ea98" Mar 7 00:42:27.734044 kubelet[3389]: I0307 00:42:27.733810 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/262892673b7043436c2ab9b6db19f59f-ca-certs\") pod \"kube-controller-manager-ci-4459.2.3-n-e6e869ea98\" (UID: \"262892673b7043436c2ab9b6db19f59f\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-e6e869ea98" Mar 7 00:42:27.734044 kubelet[3389]: I0307 00:42:27.733819 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/262892673b7043436c2ab9b6db19f59f-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.3-n-e6e869ea98\" (UID: \"262892673b7043436c2ab9b6db19f59f\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-e6e869ea98" Mar 7 00:42:27.734044 kubelet[3389]: I0307 00:42:27.733829 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61719c21d0c69069530ece22b99ac694-ca-certs\") pod \"kube-apiserver-ci-4459.2.3-n-e6e869ea98\" (UID: \"61719c21d0c69069530ece22b99ac694\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-e6e869ea98" Mar 7 00:42:27.734044 kubelet[3389]: I0307 
00:42:27.733846 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/262892673b7043436c2ab9b6db19f59f-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.3-n-e6e869ea98\" (UID: \"262892673b7043436c2ab9b6db19f59f\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-e6e869ea98" Mar 7 00:42:27.944640 sudo[3425]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 7 00:42:27.944849 sudo[3425]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 7 00:42:28.186575 sudo[3425]: pam_unix(sudo:session): session closed for user root Mar 7 00:42:28.403216 kubelet[3389]: I0307 00:42:28.403188 3389 apiserver.go:52] "Watching apiserver" Mar 7 00:42:28.432007 kubelet[3389]: I0307 00:42:28.431987 3389 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 00:42:28.444501 kubelet[3389]: I0307 00:42:28.444397 3389 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.3-n-e6e869ea98" podStartSLOduration=2.444383121 podStartE2EDuration="2.444383121s" podCreationTimestamp="2026-03-07 00:42:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:42:28.443876381 +0000 UTC m=+1.135198509" watchObservedRunningTime="2026-03-07 00:42:28.444383121 +0000 UTC m=+1.135705249" Mar 7 00:42:28.444501 kubelet[3389]: I0307 00:42:28.444464 3389 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-e6e869ea98" podStartSLOduration=2.444461036 podStartE2EDuration="2.444461036s" podCreationTimestamp="2026-03-07 00:42:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-07 00:42:28.434928064 +0000 UTC m=+1.126250192" watchObservedRunningTime="2026-03-07 00:42:28.444461036 +0000 UTC m=+1.135783172" Mar 7 00:42:28.452727 kubelet[3389]: I0307 00:42:28.452689 3389 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.3-n-e6e869ea98" podStartSLOduration=2.452683708 podStartE2EDuration="2.452683708s" podCreationTimestamp="2026-03-07 00:42:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:42:28.45253259 +0000 UTC m=+1.143854718" watchObservedRunningTime="2026-03-07 00:42:28.452683708 +0000 UTC m=+1.144005836" Mar 7 00:42:28.465658 kubelet[3389]: I0307 00:42:28.465636 3389 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-e6e869ea98" Mar 7 00:42:28.466275 kubelet[3389]: I0307 00:42:28.466260 3389 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.3-n-e6e869ea98" Mar 7 00:42:28.479243 kubelet[3389]: I0307 00:42:28.478908 3389 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 00:42:28.479243 kubelet[3389]: E0307 00:42:28.479074 3389 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.3-n-e6e869ea98\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-e6e869ea98" Mar 7 00:42:28.479627 kubelet[3389]: I0307 00:42:28.479607 3389 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 00:42:28.479654 kubelet[3389]: E0307 00:42:28.479636 3389 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.3-n-e6e869ea98\" already 
exists" pod="kube-system/kube-scheduler-ci-4459.2.3-n-e6e869ea98" Mar 7 00:42:29.307788 sudo[2356]: pam_unix(sudo:session): session closed for user root Mar 7 00:42:29.385519 sshd[2355]: Connection closed by 10.200.16.10 port 34750 Mar 7 00:42:29.386090 sshd-session[2352]: pam_unix(sshd:session): session closed for user core Mar 7 00:42:29.389568 systemd-logind[1864]: Session 9 logged out. Waiting for processes to exit. Mar 7 00:42:29.390383 systemd[1]: sshd@6-10.200.20.29:22-10.200.16.10:34750.service: Deactivated successfully. Mar 7 00:42:29.392076 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 00:42:29.394285 systemd[1]: session-9.scope: Consumed 2.551s CPU time, 258.3M memory peak. Mar 7 00:42:29.396478 systemd-logind[1864]: Removed session 9. Mar 7 00:42:31.804089 kubelet[3389]: I0307 00:42:31.804060 3389 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 00:42:31.806832 kubelet[3389]: I0307 00:42:31.805681 3389 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 00:42:31.806877 containerd[1887]: time="2026-03-07T00:42:31.804762598Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 7 00:42:32.839799 systemd[1]: Created slice kubepods-besteffort-podd16e817b_3d2e_44eb_8b70_5eb8618fe03e.slice - libcontainer container kubepods-besteffort-podd16e817b_3d2e_44eb_8b70_5eb8618fe03e.slice. Mar 7 00:42:32.852841 systemd[1]: Created slice kubepods-burstable-pod1b916e4d_71bd_468f_8a49_17856c4dbe66.slice - libcontainer container kubepods-burstable-pod1b916e4d_71bd_468f_8a49_17856c4dbe66.slice. 
Mar 7 00:42:32.868658 kubelet[3389]: I0307 00:42:32.868496 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-cni-path\") pod \"cilium-cgp7l\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") " pod="kube-system/cilium-cgp7l" Mar 7 00:42:32.868658 kubelet[3389]: I0307 00:42:32.868521 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-xtables-lock\") pod \"cilium-cgp7l\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") " pod="kube-system/cilium-cgp7l" Mar 7 00:42:32.868658 kubelet[3389]: I0307 00:42:32.868532 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b916e4d-71bd-468f-8a49-17856c4dbe66-cilium-config-path\") pod \"cilium-cgp7l\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") " pod="kube-system/cilium-cgp7l" Mar 7 00:42:32.868658 kubelet[3389]: I0307 00:42:32.868541 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-host-proc-sys-net\") pod \"cilium-cgp7l\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") " pod="kube-system/cilium-cgp7l" Mar 7 00:42:32.868658 kubelet[3389]: I0307 00:42:32.868550 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-host-proc-sys-kernel\") pod \"cilium-cgp7l\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") " pod="kube-system/cilium-cgp7l" Mar 7 00:42:32.868658 kubelet[3389]: I0307 00:42:32.868562 3389 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-lib-modules\") pod \"cilium-cgp7l\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") " pod="kube-system/cilium-cgp7l" Mar 7 00:42:32.868971 kubelet[3389]: I0307 00:42:32.868572 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b916e4d-71bd-468f-8a49-17856c4dbe66-hubble-tls\") pod \"cilium-cgp7l\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") " pod="kube-system/cilium-cgp7l" Mar 7 00:42:32.868971 kubelet[3389]: I0307 00:42:32.868581 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7cvs\" (UniqueName: \"kubernetes.io/projected/1b916e4d-71bd-468f-8a49-17856c4dbe66-kube-api-access-w7cvs\") pod \"cilium-cgp7l\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") " pod="kube-system/cilium-cgp7l" Mar 7 00:42:32.868971 kubelet[3389]: I0307 00:42:32.868606 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d16e817b-3d2e-44eb-8b70-5eb8618fe03e-lib-modules\") pod \"kube-proxy-r9s4w\" (UID: \"d16e817b-3d2e-44eb-8b70-5eb8618fe03e\") " pod="kube-system/kube-proxy-r9s4w" Mar 7 00:42:32.868971 kubelet[3389]: I0307 00:42:32.868641 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d16e817b-3d2e-44eb-8b70-5eb8618fe03e-kube-proxy\") pod \"kube-proxy-r9s4w\" (UID: \"d16e817b-3d2e-44eb-8b70-5eb8618fe03e\") " pod="kube-system/kube-proxy-r9s4w" Mar 7 00:42:32.868971 kubelet[3389]: I0307 00:42:32.868664 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7hh9\" (UniqueName: 
\"kubernetes.io/projected/d16e817b-3d2e-44eb-8b70-5eb8618fe03e-kube-api-access-l7hh9\") pod \"kube-proxy-r9s4w\" (UID: \"d16e817b-3d2e-44eb-8b70-5eb8618fe03e\") " pod="kube-system/kube-proxy-r9s4w" Mar 7 00:42:32.869047 kubelet[3389]: I0307 00:42:32.868676 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-hostproc\") pod \"cilium-cgp7l\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") " pod="kube-system/cilium-cgp7l" Mar 7 00:42:32.869487 kubelet[3389]: I0307 00:42:32.869290 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-cilium-run\") pod \"cilium-cgp7l\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") " pod="kube-system/cilium-cgp7l" Mar 7 00:42:32.869487 kubelet[3389]: I0307 00:42:32.869342 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-bpf-maps\") pod \"cilium-cgp7l\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") " pod="kube-system/cilium-cgp7l" Mar 7 00:42:32.869487 kubelet[3389]: I0307 00:42:32.869360 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-etc-cni-netd\") pod \"cilium-cgp7l\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") " pod="kube-system/cilium-cgp7l" Mar 7 00:42:32.869487 kubelet[3389]: I0307 00:42:32.869373 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b916e4d-71bd-468f-8a49-17856c4dbe66-clustermesh-secrets\") pod \"cilium-cgp7l\" (UID: 
\"1b916e4d-71bd-468f-8a49-17856c4dbe66\") " pod="kube-system/cilium-cgp7l" Mar 7 00:42:32.869487 kubelet[3389]: I0307 00:42:32.869386 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d16e817b-3d2e-44eb-8b70-5eb8618fe03e-xtables-lock\") pod \"kube-proxy-r9s4w\" (UID: \"d16e817b-3d2e-44eb-8b70-5eb8618fe03e\") " pod="kube-system/kube-proxy-r9s4w" Mar 7 00:42:32.869487 kubelet[3389]: I0307 00:42:32.869397 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-cilium-cgroup\") pod \"cilium-cgp7l\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") " pod="kube-system/cilium-cgp7l" Mar 7 00:42:33.034899 systemd[1]: Created slice kubepods-besteffort-pod1323ff54_0348_48d9_9ef2_b63b0ebf651b.slice - libcontainer container kubepods-besteffort-pod1323ff54_0348_48d9_9ef2_b63b0ebf651b.slice. 
Mar 7 00:42:33.070399 kubelet[3389]: I0307 00:42:33.070365 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgmxj\" (UniqueName: \"kubernetes.io/projected/1323ff54-0348-48d9-9ef2-b63b0ebf651b-kube-api-access-xgmxj\") pod \"cilium-operator-78cf5644cb-w7sxk\" (UID: \"1323ff54-0348-48d9-9ef2-b63b0ebf651b\") " pod="kube-system/cilium-operator-78cf5644cb-w7sxk" Mar 7 00:42:33.070399 kubelet[3389]: I0307 00:42:33.070398 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1323ff54-0348-48d9-9ef2-b63b0ebf651b-cilium-config-path\") pod \"cilium-operator-78cf5644cb-w7sxk\" (UID: \"1323ff54-0348-48d9-9ef2-b63b0ebf651b\") " pod="kube-system/cilium-operator-78cf5644cb-w7sxk" Mar 7 00:42:33.155428 containerd[1887]: time="2026-03-07T00:42:33.155352068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r9s4w,Uid:d16e817b-3d2e-44eb-8b70-5eb8618fe03e,Namespace:kube-system,Attempt:0,}" Mar 7 00:42:33.161907 containerd[1887]: time="2026-03-07T00:42:33.161885595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cgp7l,Uid:1b916e4d-71bd-468f-8a49-17856c4dbe66,Namespace:kube-system,Attempt:0,}" Mar 7 00:42:33.219439 containerd[1887]: time="2026-03-07T00:42:33.219411333Z" level=info msg="connecting to shim eef1c1e642960b707094217c9ede1d2189304465636e3ecff5342f828c87cf6e" address="unix:///run/containerd/s/c13df697e3332a2187c45566ded83c8a579fbd6b7d4a410ba96e9403b313dc79" namespace=k8s.io protocol=ttrpc version=3 Mar 7 00:42:33.228907 containerd[1887]: time="2026-03-07T00:42:33.228779803Z" level=info msg="connecting to shim 67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f" address="unix:///run/containerd/s/dbe04f9fb14f040c2afe980eccecadcbe6bc6e836968b7c8db76e4946d884de0" namespace=k8s.io protocol=ttrpc version=3 Mar 7 00:42:33.235692 systemd[1]: Started 
cri-containerd-eef1c1e642960b707094217c9ede1d2189304465636e3ecff5342f828c87cf6e.scope - libcontainer container eef1c1e642960b707094217c9ede1d2189304465636e3ecff5342f828c87cf6e. Mar 7 00:42:33.251415 systemd[1]: Started cri-containerd-67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f.scope - libcontainer container 67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f. Mar 7 00:42:33.262100 containerd[1887]: time="2026-03-07T00:42:33.262068172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r9s4w,Uid:d16e817b-3d2e-44eb-8b70-5eb8618fe03e,Namespace:kube-system,Attempt:0,} returns sandbox id \"eef1c1e642960b707094217c9ede1d2189304465636e3ecff5342f828c87cf6e\"" Mar 7 00:42:33.274100 containerd[1887]: time="2026-03-07T00:42:33.274072152Z" level=info msg="CreateContainer within sandbox \"eef1c1e642960b707094217c9ede1d2189304465636e3ecff5342f828c87cf6e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 00:42:33.278209 containerd[1887]: time="2026-03-07T00:42:33.278182296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cgp7l,Uid:1b916e4d-71bd-468f-8a49-17856c4dbe66,Namespace:kube-system,Attempt:0,} returns sandbox id \"67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f\"" Mar 7 00:42:33.279791 containerd[1887]: time="2026-03-07T00:42:33.279586783Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 7 00:42:33.300973 containerd[1887]: time="2026-03-07T00:42:33.300941783Z" level=info msg="Container f39a5cf01b7563736bf0bcbbd1e56a39b5bd732a337d24681ce1c165d67143ed: CDI devices from CRI Config.CDIDevices: []" Mar 7 00:42:33.320391 containerd[1887]: time="2026-03-07T00:42:33.320326835Z" level=info msg="CreateContainer within sandbox \"eef1c1e642960b707094217c9ede1d2189304465636e3ecff5342f828c87cf6e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"f39a5cf01b7563736bf0bcbbd1e56a39b5bd732a337d24681ce1c165d67143ed\"" Mar 7 00:42:33.321609 containerd[1887]: time="2026-03-07T00:42:33.321511249Z" level=info msg="StartContainer for \"f39a5cf01b7563736bf0bcbbd1e56a39b5bd732a337d24681ce1c165d67143ed\"" Mar 7 00:42:33.322746 containerd[1887]: time="2026-03-07T00:42:33.322723792Z" level=info msg="connecting to shim f39a5cf01b7563736bf0bcbbd1e56a39b5bd732a337d24681ce1c165d67143ed" address="unix:///run/containerd/s/c13df697e3332a2187c45566ded83c8a579fbd6b7d4a410ba96e9403b313dc79" protocol=ttrpc version=3 Mar 7 00:42:33.339358 systemd[1]: Started cri-containerd-f39a5cf01b7563736bf0bcbbd1e56a39b5bd732a337d24681ce1c165d67143ed.scope - libcontainer container f39a5cf01b7563736bf0bcbbd1e56a39b5bd732a337d24681ce1c165d67143ed. Mar 7 00:42:33.344616 containerd[1887]: time="2026-03-07T00:42:33.344565580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-w7sxk,Uid:1323ff54-0348-48d9-9ef2-b63b0ebf651b,Namespace:kube-system,Attempt:0,}" Mar 7 00:42:33.377641 containerd[1887]: time="2026-03-07T00:42:33.377593403Z" level=info msg="connecting to shim ad3a0733a588a5ee1ba45bc3cc0784da84e08989101e07f2b21e850d72affb0a" address="unix:///run/containerd/s/eb8ee8842f10594bb9fcd5c31f79f5bf55e8930433d9635924b63c44e5b97aa5" namespace=k8s.io protocol=ttrpc version=3 Mar 7 00:42:33.386784 containerd[1887]: time="2026-03-07T00:42:33.386548608Z" level=info msg="StartContainer for \"f39a5cf01b7563736bf0bcbbd1e56a39b5bd732a337d24681ce1c165d67143ed\" returns successfully" Mar 7 00:42:33.402360 systemd[1]: Started cri-containerd-ad3a0733a588a5ee1ba45bc3cc0784da84e08989101e07f2b21e850d72affb0a.scope - libcontainer container ad3a0733a588a5ee1ba45bc3cc0784da84e08989101e07f2b21e850d72affb0a. 
Mar 7 00:42:33.434250 containerd[1887]: time="2026-03-07T00:42:33.434048804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-w7sxk,Uid:1323ff54-0348-48d9-9ef2-b63b0ebf651b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad3a0733a588a5ee1ba45bc3cc0784da84e08989101e07f2b21e850d72affb0a\"" Mar 7 00:42:33.485103 kubelet[3389]: I0307 00:42:33.484988 3389 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-r9s4w" podStartSLOduration=1.4849686530000001 podStartE2EDuration="1.484968653s" podCreationTimestamp="2026-03-07 00:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:42:33.484950052 +0000 UTC m=+6.176272188" watchObservedRunningTime="2026-03-07 00:42:33.484968653 +0000 UTC m=+6.176290781" Mar 7 00:42:37.805496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount941003990.mount: Deactivated successfully. 
Mar 7 00:42:39.686061 containerd[1887]: time="2026-03-07T00:42:39.686021802Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:39.688769 containerd[1887]: time="2026-03-07T00:42:39.688625985Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Mar 7 00:42:39.693009 containerd[1887]: time="2026-03-07T00:42:39.692957310Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:39.693977 containerd[1887]: time="2026-03-07T00:42:39.693902593Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.414286936s"
Mar 7 00:42:39.693977 containerd[1887]: time="2026-03-07T00:42:39.693928018Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Mar 7 00:42:39.695595 containerd[1887]: time="2026-03-07T00:42:39.695573062Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 7 00:42:39.702247 containerd[1887]: time="2026-03-07T00:42:39.701851082Z" level=info msg="CreateContainer within sandbox \"67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 7 00:42:39.718089 containerd[1887]: time="2026-03-07T00:42:39.718067120Z" level=info msg="Container 6a60afec1341d3279452a97144cca8d81375433060a5dc30e952ed6a6c7d842c: CDI devices from CRI Config.CDIDevices: []"
Mar 7 00:42:39.733590 containerd[1887]: time="2026-03-07T00:42:39.733566171Z" level=info msg="CreateContainer within sandbox \"67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6a60afec1341d3279452a97144cca8d81375433060a5dc30e952ed6a6c7d842c\""
Mar 7 00:42:39.733978 containerd[1887]: time="2026-03-07T00:42:39.733959418Z" level=info msg="StartContainer for \"6a60afec1341d3279452a97144cca8d81375433060a5dc30e952ed6a6c7d842c\""
Mar 7 00:42:39.734886 containerd[1887]: time="2026-03-07T00:42:39.734842482Z" level=info msg="connecting to shim 6a60afec1341d3279452a97144cca8d81375433060a5dc30e952ed6a6c7d842c" address="unix:///run/containerd/s/dbe04f9fb14f040c2afe980eccecadcbe6bc6e836968b7c8db76e4946d884de0" protocol=ttrpc version=3
Mar 7 00:42:39.751341 systemd[1]: Started cri-containerd-6a60afec1341d3279452a97144cca8d81375433060a5dc30e952ed6a6c7d842c.scope - libcontainer container 6a60afec1341d3279452a97144cca8d81375433060a5dc30e952ed6a6c7d842c.
Mar 7 00:42:39.777440 systemd[1]: cri-containerd-6a60afec1341d3279452a97144cca8d81375433060a5dc30e952ed6a6c7d842c.scope: Deactivated successfully.
Mar 7 00:42:39.780240 containerd[1887]: time="2026-03-07T00:42:39.780185635Z" level=info msg="received container exit event container_id:\"6a60afec1341d3279452a97144cca8d81375433060a5dc30e952ed6a6c7d842c\" id:\"6a60afec1341d3279452a97144cca8d81375433060a5dc30e952ed6a6c7d842c\" pid:3806 exited_at:{seconds:1772844159 nanos:779818525}"
Mar 7 00:42:39.781494 containerd[1887]: time="2026-03-07T00:42:39.780729438Z" level=info msg="StartContainer for \"6a60afec1341d3279452a97144cca8d81375433060a5dc30e952ed6a6c7d842c\" returns successfully"
Mar 7 00:42:39.797636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a60afec1341d3279452a97144cca8d81375433060a5dc30e952ed6a6c7d842c-rootfs.mount: Deactivated successfully.
Mar 7 00:42:42.514212 containerd[1887]: time="2026-03-07T00:42:42.514159024Z" level=info msg="CreateContainer within sandbox \"67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 7 00:42:42.539088 containerd[1887]: time="2026-03-07T00:42:42.538964702Z" level=info msg="Container 868e15d127d4273493e8c25a2edf2d89934f1fe010544d7a237df941e1347577: CDI devices from CRI Config.CDIDevices: []"
Mar 7 00:42:42.554748 containerd[1887]: time="2026-03-07T00:42:42.554718787Z" level=info msg="CreateContainer within sandbox \"67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"868e15d127d4273493e8c25a2edf2d89934f1fe010544d7a237df941e1347577\""
Mar 7 00:42:42.555209 containerd[1887]: time="2026-03-07T00:42:42.555121321Z" level=info msg="StartContainer for \"868e15d127d4273493e8c25a2edf2d89934f1fe010544d7a237df941e1347577\""
Mar 7 00:42:42.556131 containerd[1887]: time="2026-03-07T00:42:42.556097557Z" level=info msg="connecting to shim 868e15d127d4273493e8c25a2edf2d89934f1fe010544d7a237df941e1347577" address="unix:///run/containerd/s/dbe04f9fb14f040c2afe980eccecadcbe6bc6e836968b7c8db76e4946d884de0" protocol=ttrpc version=3
Mar 7 00:42:42.571349 systemd[1]: Started cri-containerd-868e15d127d4273493e8c25a2edf2d89934f1fe010544d7a237df941e1347577.scope - libcontainer container 868e15d127d4273493e8c25a2edf2d89934f1fe010544d7a237df941e1347577.
Mar 7 00:42:42.596479 containerd[1887]: time="2026-03-07T00:42:42.596451945Z" level=info msg="StartContainer for \"868e15d127d4273493e8c25a2edf2d89934f1fe010544d7a237df941e1347577\" returns successfully"
Mar 7 00:42:42.611277 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 00:42:42.611426 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 00:42:42.611958 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 7 00:42:42.613498 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 00:42:42.615371 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 7 00:42:42.616417 systemd[1]: cri-containerd-868e15d127d4273493e8c25a2edf2d89934f1fe010544d7a237df941e1347577.scope: Deactivated successfully.
Mar 7 00:42:42.621441 containerd[1887]: time="2026-03-07T00:42:42.621375667Z" level=info msg="received container exit event container_id:\"868e15d127d4273493e8c25a2edf2d89934f1fe010544d7a237df941e1347577\" id:\"868e15d127d4273493e8c25a2edf2d89934f1fe010544d7a237df941e1347577\" pid:3850 exited_at:{seconds:1772844162 nanos:620858168}"
Mar 7 00:42:42.633472 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 00:42:43.144993 containerd[1887]: time="2026-03-07T00:42:43.144951397Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:43.147796 containerd[1887]: time="2026-03-07T00:42:43.147768939Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Mar 7 00:42:43.150800 containerd[1887]: time="2026-03-07T00:42:43.150766192Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:42:43.151959 containerd[1887]: time="2026-03-07T00:42:43.151880929Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.456286762s"
Mar 7 00:42:43.151959 containerd[1887]: time="2026-03-07T00:42:43.151905642Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Mar 7 00:42:43.170258 containerd[1887]: time="2026-03-07T00:42:43.170203419Z" level=info msg="CreateContainer within sandbox \"ad3a0733a588a5ee1ba45bc3cc0784da84e08989101e07f2b21e850d72affb0a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 7 00:42:43.183821 containerd[1887]: time="2026-03-07T00:42:43.183795329Z" level=info msg="Container 2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29: CDI devices from CRI Config.CDIDevices: []"
Mar 7 00:42:43.200144 containerd[1887]: time="2026-03-07T00:42:43.200117411Z" level=info msg="CreateContainer within sandbox \"ad3a0733a588a5ee1ba45bc3cc0784da84e08989101e07f2b21e850d72affb0a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29\""
Mar 7 00:42:43.201123 containerd[1887]: time="2026-03-07T00:42:43.201103639Z" level=info msg="StartContainer for \"2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29\""
Mar 7 00:42:43.202077 containerd[1887]: time="2026-03-07T00:42:43.201932717Z" level=info msg="connecting to shim 2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29" address="unix:///run/containerd/s/eb8ee8842f10594bb9fcd5c31f79f5bf55e8930433d9635924b63c44e5b97aa5" protocol=ttrpc version=3
Mar 7 00:42:43.218358 systemd[1]: Started cri-containerd-2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29.scope - libcontainer container 2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29.
Mar 7 00:42:43.244020 containerd[1887]: time="2026-03-07T00:42:43.243991383Z" level=info msg="StartContainer for \"2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29\" returns successfully"
Mar 7 00:42:43.508777 containerd[1887]: time="2026-03-07T00:42:43.508679145Z" level=info msg="CreateContainer within sandbox \"67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 7 00:42:43.532814 containerd[1887]: time="2026-03-07T00:42:43.532783357Z" level=info msg="Container c76379515e8bf0a70466c32e277a36b6017b2c73195e6ad0ef149a829d41e411: CDI devices from CRI Config.CDIDevices: []"
Mar 7 00:42:43.539206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-868e15d127d4273493e8c25a2edf2d89934f1fe010544d7a237df941e1347577-rootfs.mount: Deactivated successfully.
Mar 7 00:42:43.554132 containerd[1887]: time="2026-03-07T00:42:43.554095004Z" level=info msg="CreateContainer within sandbox \"67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c76379515e8bf0a70466c32e277a36b6017b2c73195e6ad0ef149a829d41e411\""
Mar 7 00:42:43.554592 containerd[1887]: time="2026-03-07T00:42:43.554566758Z" level=info msg="StartContainer for \"c76379515e8bf0a70466c32e277a36b6017b2c73195e6ad0ef149a829d41e411\""
Mar 7 00:42:43.556603 containerd[1887]: time="2026-03-07T00:42:43.556560918Z" level=info msg="connecting to shim c76379515e8bf0a70466c32e277a36b6017b2c73195e6ad0ef149a829d41e411" address="unix:///run/containerd/s/dbe04f9fb14f040c2afe980eccecadcbe6bc6e836968b7c8db76e4946d884de0" protocol=ttrpc version=3
Mar 7 00:42:43.577588 systemd[1]: Started cri-containerd-c76379515e8bf0a70466c32e277a36b6017b2c73195e6ad0ef149a829d41e411.scope - libcontainer container c76379515e8bf0a70466c32e277a36b6017b2c73195e6ad0ef149a829d41e411.
Mar 7 00:42:43.650483 systemd[1]: cri-containerd-c76379515e8bf0a70466c32e277a36b6017b2c73195e6ad0ef149a829d41e411.scope: Deactivated successfully.
Mar 7 00:42:43.654138 containerd[1887]: time="2026-03-07T00:42:43.654102513Z" level=info msg="received container exit event container_id:\"c76379515e8bf0a70466c32e277a36b6017b2c73195e6ad0ef149a829d41e411\" id:\"c76379515e8bf0a70466c32e277a36b6017b2c73195e6ad0ef149a829d41e411\" pid:3945 exited_at:{seconds:1772844163 nanos:653781662}"
Mar 7 00:42:43.660482 containerd[1887]: time="2026-03-07T00:42:43.660454824Z" level=info msg="StartContainer for \"c76379515e8bf0a70466c32e277a36b6017b2c73195e6ad0ef149a829d41e411\" returns successfully"
Mar 7 00:42:43.670127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c76379515e8bf0a70466c32e277a36b6017b2c73195e6ad0ef149a829d41e411-rootfs.mount: Deactivated successfully.
Mar 7 00:42:43.731934 kubelet[3389]: I0307 00:42:43.731879 3389 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-w7sxk" podStartSLOduration=2.015336525 podStartE2EDuration="11.731864397s" podCreationTimestamp="2026-03-07 00:42:32 +0000 UTC" firstStartedPulling="2026-03-07 00:42:33.436104196 +0000 UTC m=+6.127426324" lastFinishedPulling="2026-03-07 00:42:43.152632052 +0000 UTC m=+15.843954196" observedRunningTime="2026-03-07 00:42:43.630995905 +0000 UTC m=+16.322318049" watchObservedRunningTime="2026-03-07 00:42:43.731864397 +0000 UTC m=+16.423186525"
Mar 7 00:42:44.515322 containerd[1887]: time="2026-03-07T00:42:44.515282069Z" level=info msg="CreateContainer within sandbox \"67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 7 00:42:44.534966 containerd[1887]: time="2026-03-07T00:42:44.534592847Z" level=info msg="Container 6391b52d37288b8c6d2bb9f5c10b3418004ace448b991834acbc7fdb8bbb82f0: CDI devices from CRI Config.CDIDevices: []"
Mar 7 00:42:44.536767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1154223679.mount: Deactivated successfully.
Mar 7 00:42:44.548503 containerd[1887]: time="2026-03-07T00:42:44.548475533Z" level=info msg="CreateContainer within sandbox \"67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6391b52d37288b8c6d2bb9f5c10b3418004ace448b991834acbc7fdb8bbb82f0\""
Mar 7 00:42:44.549464 containerd[1887]: time="2026-03-07T00:42:44.548920045Z" level=info msg="StartContainer for \"6391b52d37288b8c6d2bb9f5c10b3418004ace448b991834acbc7fdb8bbb82f0\""
Mar 7 00:42:44.549660 containerd[1887]: time="2026-03-07T00:42:44.549634631Z" level=info msg="connecting to shim 6391b52d37288b8c6d2bb9f5c10b3418004ace448b991834acbc7fdb8bbb82f0" address="unix:///run/containerd/s/dbe04f9fb14f040c2afe980eccecadcbe6bc6e836968b7c8db76e4946d884de0" protocol=ttrpc version=3
Mar 7 00:42:44.565350 systemd[1]: Started cri-containerd-6391b52d37288b8c6d2bb9f5c10b3418004ace448b991834acbc7fdb8bbb82f0.scope - libcontainer container 6391b52d37288b8c6d2bb9f5c10b3418004ace448b991834acbc7fdb8bbb82f0.
Mar 7 00:42:44.582343 systemd[1]: cri-containerd-6391b52d37288b8c6d2bb9f5c10b3418004ace448b991834acbc7fdb8bbb82f0.scope: Deactivated successfully.
Mar 7 00:42:44.592103 containerd[1887]: time="2026-03-07T00:42:44.592075125Z" level=info msg="received container exit event container_id:\"6391b52d37288b8c6d2bb9f5c10b3418004ace448b991834acbc7fdb8bbb82f0\" id:\"6391b52d37288b8c6d2bb9f5c10b3418004ace448b991834acbc7fdb8bbb82f0\" pid:3984 exited_at:{seconds:1772844164 nanos:583460542}"
Mar 7 00:42:44.592929 containerd[1887]: time="2026-03-07T00:42:44.592907739Z" level=info msg="StartContainer for \"6391b52d37288b8c6d2bb9f5c10b3418004ace448b991834acbc7fdb8bbb82f0\" returns successfully"
Mar 7 00:42:44.606133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6391b52d37288b8c6d2bb9f5c10b3418004ace448b991834acbc7fdb8bbb82f0-rootfs.mount: Deactivated successfully.
Mar 7 00:42:45.527038 containerd[1887]: time="2026-03-07T00:42:45.527001535Z" level=info msg="CreateContainer within sandbox \"67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 7 00:42:45.548755 containerd[1887]: time="2026-03-07T00:42:45.547865746Z" level=info msg="Container 561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e: CDI devices from CRI Config.CDIDevices: []"
Mar 7 00:42:45.550575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4030703953.mount: Deactivated successfully.
Mar 7 00:42:45.562993 containerd[1887]: time="2026-03-07T00:42:45.562958227Z" level=info msg="CreateContainer within sandbox \"67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e\""
Mar 7 00:42:45.563487 containerd[1887]: time="2026-03-07T00:42:45.563463293Z" level=info msg="StartContainer for \"561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e\""
Mar 7 00:42:45.564185 containerd[1887]: time="2026-03-07T00:42:45.564158126Z" level=info msg="connecting to shim 561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e" address="unix:///run/containerd/s/dbe04f9fb14f040c2afe980eccecadcbe6bc6e836968b7c8db76e4946d884de0" protocol=ttrpc version=3
Mar 7 00:42:45.584333 systemd[1]: Started cri-containerd-561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e.scope - libcontainer container 561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e.
Mar 7 00:42:45.618725 containerd[1887]: time="2026-03-07T00:42:45.618690826Z" level=info msg="StartContainer for \"561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e\" returns successfully"
Mar 7 00:42:45.732124 kubelet[3389]: I0307 00:42:45.732095 3389 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Mar 7 00:42:45.795869 systemd[1]: Created slice kubepods-burstable-poda06bbf2f_a34a_4a1e_bf91_bc703fa77b71.slice - libcontainer container kubepods-burstable-poda06bbf2f_a34a_4a1e_bf91_bc703fa77b71.slice.
Mar 7 00:42:45.850537 kubelet[3389]: I0307 00:42:45.850505 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh9lw\" (UniqueName: \"kubernetes.io/projected/a06bbf2f-a34a-4a1e-bf91-bc703fa77b71-kube-api-access-xh9lw\") pod \"coredns-7d764666f9-6wf59\" (UID: \"a06bbf2f-a34a-4a1e-bf91-bc703fa77b71\") " pod="kube-system/coredns-7d764666f9-6wf59"
Mar 7 00:42:45.850537 kubelet[3389]: I0307 00:42:45.850540 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a06bbf2f-a34a-4a1e-bf91-bc703fa77b71-config-volume\") pod \"coredns-7d764666f9-6wf59\" (UID: \"a06bbf2f-a34a-4a1e-bf91-bc703fa77b71\") " pod="kube-system/coredns-7d764666f9-6wf59"
Mar 7 00:42:45.893180 systemd[1]: Created slice kubepods-burstable-podc16c50c3_7e9e_4721_8f22_7c4c50da5429.slice - libcontainer container kubepods-burstable-podc16c50c3_7e9e_4721_8f22_7c4c50da5429.slice.
Mar 7 00:42:45.951637 kubelet[3389]: I0307 00:42:45.951219 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c16c50c3-7e9e-4721-8f22-7c4c50da5429-config-volume\") pod \"coredns-7d764666f9-nx8df\" (UID: \"c16c50c3-7e9e-4721-8f22-7c4c50da5429\") " pod="kube-system/coredns-7d764666f9-nx8df"
Mar 7 00:42:45.951901 kubelet[3389]: I0307 00:42:45.951657 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnxpg\" (UniqueName: \"kubernetes.io/projected/c16c50c3-7e9e-4721-8f22-7c4c50da5429-kube-api-access-bnxpg\") pod \"coredns-7d764666f9-nx8df\" (UID: \"c16c50c3-7e9e-4721-8f22-7c4c50da5429\") " pod="kube-system/coredns-7d764666f9-nx8df"
Mar 7 00:42:46.109070 containerd[1887]: time="2026-03-07T00:42:46.108794813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-6wf59,Uid:a06bbf2f-a34a-4a1e-bf91-bc703fa77b71,Namespace:kube-system,Attempt:0,}"
Mar 7 00:42:46.201347 containerd[1887]: time="2026-03-07T00:42:46.201321846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-nx8df,Uid:c16c50c3-7e9e-4721-8f22-7c4c50da5429,Namespace:kube-system,Attempt:0,}"
Mar 7 00:42:46.538211 kubelet[3389]: I0307 00:42:46.538099 3389 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-cgp7l" podStartSLOduration=2.299064201 podStartE2EDuration="14.538087139s" podCreationTimestamp="2026-03-07 00:42:32 +0000 UTC" firstStartedPulling="2026-03-07 00:42:33.279317084 +0000 UTC m=+5.970639212" lastFinishedPulling="2026-03-07 00:42:45.518340014 +0000 UTC m=+18.209662150" observedRunningTime="2026-03-07 00:42:46.536795556 +0000 UTC m=+19.228117684" watchObservedRunningTime="2026-03-07 00:42:46.538087139 +0000 UTC m=+19.229409267"
Mar 7 00:42:47.561337 systemd-networkd[1475]: cilium_host: Link UP
Mar 7 00:42:47.563798 systemd-networkd[1475]: cilium_net: Link UP
Mar 7 00:42:47.564311 systemd-networkd[1475]: cilium_host: Gained carrier
Mar 7 00:42:47.565408 systemd-networkd[1475]: cilium_net: Gained carrier
Mar 7 00:42:47.565516 systemd-networkd[1475]: cilium_host: Gained IPv6LL
Mar 7 00:42:47.725001 systemd-networkd[1475]: cilium_vxlan: Link UP
Mar 7 00:42:47.725346 systemd-networkd[1475]: cilium_vxlan: Gained carrier
Mar 7 00:42:48.003252 kernel: NET: Registered PF_ALG protocol family
Mar 7 00:42:48.213321 systemd-networkd[1475]: cilium_net: Gained IPv6LL
Mar 7 00:42:48.485269 systemd-networkd[1475]: lxc_health: Link UP
Mar 7 00:42:48.491253 systemd-networkd[1475]: lxc_health: Gained carrier
Mar 7 00:42:48.638766 systemd-networkd[1475]: lxcdf2fdaefd8db: Link UP
Mar 7 00:42:48.646388 kernel: eth0: renamed from tmpfbfd9
Mar 7 00:42:48.647845 systemd-networkd[1475]: lxcdf2fdaefd8db: Gained carrier
Mar 7 00:42:48.734260 kernel: eth0: renamed from tmpf3462
Mar 7 00:42:48.736209 systemd-networkd[1475]: lxcacdb72f4867e: Link UP
Mar 7 00:42:48.737450 systemd-networkd[1475]: lxcacdb72f4867e: Gained carrier
Mar 7 00:42:49.556430 systemd-networkd[1475]: cilium_vxlan: Gained IPv6LL
Mar 7 00:42:50.004487 systemd-networkd[1475]: lxc_health: Gained IPv6LL
Mar 7 00:42:50.133346 systemd-networkd[1475]: lxcacdb72f4867e: Gained IPv6LL
Mar 7 00:42:50.581383 systemd-networkd[1475]: lxcdf2fdaefd8db: Gained IPv6LL
Mar 7 00:42:51.166001 containerd[1887]: time="2026-03-07T00:42:51.165961452Z" level=info msg="connecting to shim fbfd9862f5c70bfd6af5f7c2428748575b741ed18e9326261167f75da269fa0b" address="unix:///run/containerd/s/e9cd02cc12f80d1b75284df8b7409e090140c6adae9d4eaa4ea76bee315ccf5e" namespace=k8s.io protocol=ttrpc version=3
Mar 7 00:42:51.171165 containerd[1887]: time="2026-03-07T00:42:51.170857173Z" level=info msg="connecting to shim f34623bec26ba51872b292282826c10412d0dea6ae1631b9c4d938f487586005" address="unix:///run/containerd/s/0ddafeaa4354c2ac391a7afde407cb30f86cbed78eb5ea7a44115f293dd29267" namespace=k8s.io protocol=ttrpc version=3
Mar 7 00:42:51.189349 systemd[1]: Started cri-containerd-fbfd9862f5c70bfd6af5f7c2428748575b741ed18e9326261167f75da269fa0b.scope - libcontainer container fbfd9862f5c70bfd6af5f7c2428748575b741ed18e9326261167f75da269fa0b.
Mar 7 00:42:51.195684 systemd[1]: Started cri-containerd-f34623bec26ba51872b292282826c10412d0dea6ae1631b9c4d938f487586005.scope - libcontainer container f34623bec26ba51872b292282826c10412d0dea6ae1631b9c4d938f487586005.
Mar 7 00:42:51.232344 containerd[1887]: time="2026-03-07T00:42:51.232299058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-6wf59,Uid:a06bbf2f-a34a-4a1e-bf91-bc703fa77b71,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbfd9862f5c70bfd6af5f7c2428748575b741ed18e9326261167f75da269fa0b\""
Mar 7 00:42:51.242148 containerd[1887]: time="2026-03-07T00:42:51.242099509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-nx8df,Uid:c16c50c3-7e9e-4721-8f22-7c4c50da5429,Namespace:kube-system,Attempt:0,} returns sandbox id \"f34623bec26ba51872b292282826c10412d0dea6ae1631b9c4d938f487586005\""
Mar 7 00:42:51.245362 containerd[1887]: time="2026-03-07T00:42:51.245344354Z" level=info msg="CreateContainer within sandbox \"fbfd9862f5c70bfd6af5f7c2428748575b741ed18e9326261167f75da269fa0b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 7 00:42:51.251433 containerd[1887]: time="2026-03-07T00:42:51.251390724Z" level=info msg="CreateContainer within sandbox \"f34623bec26ba51872b292282826c10412d0dea6ae1631b9c4d938f487586005\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 7 00:42:51.268675 containerd[1887]: time="2026-03-07T00:42:51.268652436Z" level=info msg="Container 542101027b9bcf6e1d5c339bced5ae1373bef89aadc532ede2654afa0c4f750f: CDI devices from CRI Config.CDIDevices: []"
Mar 7 00:42:51.284475 containerd[1887]: time="2026-03-07T00:42:51.284444495Z" level=info msg="CreateContainer within sandbox \"fbfd9862f5c70bfd6af5f7c2428748575b741ed18e9326261167f75da269fa0b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"542101027b9bcf6e1d5c339bced5ae1373bef89aadc532ede2654afa0c4f750f\""
Mar 7 00:42:51.285303 containerd[1887]: time="2026-03-07T00:42:51.285281709Z" level=info msg="StartContainer for \"542101027b9bcf6e1d5c339bced5ae1373bef89aadc532ede2654afa0c4f750f\""
Mar 7 00:42:51.286061 containerd[1887]: time="2026-03-07T00:42:51.285965846Z" level=info msg="connecting to shim 542101027b9bcf6e1d5c339bced5ae1373bef89aadc532ede2654afa0c4f750f" address="unix:///run/containerd/s/e9cd02cc12f80d1b75284df8b7409e090140c6adae9d4eaa4ea76bee315ccf5e" protocol=ttrpc version=3
Mar 7 00:42:51.289180 containerd[1887]: time="2026-03-07T00:42:51.289154289Z" level=info msg="Container f6abe3317b88deed37b566c9cd2357e74da2dd2e8c864aec19bbd951d5d8f7a7: CDI devices from CRI Config.CDIDevices: []"
Mar 7 00:42:51.302434 systemd[1]: Started cri-containerd-542101027b9bcf6e1d5c339bced5ae1373bef89aadc532ede2654afa0c4f750f.scope - libcontainer container 542101027b9bcf6e1d5c339bced5ae1373bef89aadc532ede2654afa0c4f750f.
Mar 7 00:42:51.306804 containerd[1887]: time="2026-03-07T00:42:51.306750413Z" level=info msg="CreateContainer within sandbox \"f34623bec26ba51872b292282826c10412d0dea6ae1631b9c4d938f487586005\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f6abe3317b88deed37b566c9cd2357e74da2dd2e8c864aec19bbd951d5d8f7a7\""
Mar 7 00:42:51.307477 containerd[1887]: time="2026-03-07T00:42:51.307454615Z" level=info msg="StartContainer for \"f6abe3317b88deed37b566c9cd2357e74da2dd2e8c864aec19bbd951d5d8f7a7\""
Mar 7 00:42:51.308927 containerd[1887]: time="2026-03-07T00:42:51.308902051Z" level=info msg="connecting to shim f6abe3317b88deed37b566c9cd2357e74da2dd2e8c864aec19bbd951d5d8f7a7" address="unix:///run/containerd/s/0ddafeaa4354c2ac391a7afde407cb30f86cbed78eb5ea7a44115f293dd29267" protocol=ttrpc version=3
Mar 7 00:42:51.326351 systemd[1]: Started cri-containerd-f6abe3317b88deed37b566c9cd2357e74da2dd2e8c864aec19bbd951d5d8f7a7.scope - libcontainer container f6abe3317b88deed37b566c9cd2357e74da2dd2e8c864aec19bbd951d5d8f7a7.
Mar 7 00:42:51.336192 containerd[1887]: time="2026-03-07T00:42:51.336076345Z" level=info msg="StartContainer for \"542101027b9bcf6e1d5c339bced5ae1373bef89aadc532ede2654afa0c4f750f\" returns successfully"
Mar 7 00:42:51.360989 containerd[1887]: time="2026-03-07T00:42:51.360959485Z" level=info msg="StartContainer for \"f6abe3317b88deed37b566c9cd2357e74da2dd2e8c864aec19bbd951d5d8f7a7\" returns successfully"
Mar 7 00:42:51.547158 kubelet[3389]: I0307 00:42:51.546945 3389 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-nx8df" podStartSLOduration=19.546931887 podStartE2EDuration="19.546931887s" podCreationTimestamp="2026-03-07 00:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:42:51.545662209 +0000 UTC m=+24.236984361" watchObservedRunningTime="2026-03-07 00:42:51.546931887 +0000 UTC m=+24.238254015"
Mar 7 00:42:51.570150 kubelet[3389]: I0307 00:42:51.570101 3389 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-6wf59" podStartSLOduration=19.57009058 podStartE2EDuration="19.57009058s" podCreationTimestamp="2026-03-07 00:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:42:51.569672045 +0000 UTC m=+24.260994197" watchObservedRunningTime="2026-03-07 00:42:51.57009058 +0000 UTC m=+24.261412708"
Mar 7 00:42:52.153364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3637270099.mount: Deactivated successfully.
Mar 7 00:42:53.085965 kubelet[3389]: I0307 00:42:53.085918 3389 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 7 00:43:57.618434 systemd[1]: Started sshd@7-10.200.20.29:22-10.200.16.10:60084.service - OpenSSH per-connection server daemon (10.200.16.10:60084).
Mar 7 00:43:58.034817 sshd[4706]: Accepted publickey for core from 10.200.16.10 port 60084 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:43:58.035532 sshd-session[4706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:43:58.040431 systemd-logind[1864]: New session 10 of user core.
Mar 7 00:43:58.046362 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 7 00:43:58.341475 sshd[4709]: Connection closed by 10.200.16.10 port 60084
Mar 7 00:43:58.341548 sshd-session[4706]: pam_unix(sshd:session): session closed for user core
Mar 7 00:43:58.344793 systemd[1]: sshd@7-10.200.20.29:22-10.200.16.10:60084.service: Deactivated successfully.
Mar 7 00:43:58.347417 systemd[1]: session-10.scope: Deactivated successfully.
Mar 7 00:43:58.348554 systemd-logind[1864]: Session 10 logged out. Waiting for processes to exit.
Mar 7 00:43:58.349734 systemd-logind[1864]: Removed session 10.
Mar 7 00:44:03.428788 systemd[1]: Started sshd@8-10.200.20.29:22-10.200.16.10:42382.service - OpenSSH per-connection server daemon (10.200.16.10:42382).
Mar 7 00:44:03.845307 sshd[4721]: Accepted publickey for core from 10.200.16.10 port 42382 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:44:03.846165 sshd-session[4721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:44:03.850221 systemd-logind[1864]: New session 11 of user core.
Mar 7 00:44:03.857364 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 7 00:44:04.116320 sshd[4726]: Connection closed by 10.200.16.10 port 42382
Mar 7 00:44:04.115784 sshd-session[4721]: pam_unix(sshd:session): session closed for user core
Mar 7 00:44:04.118973 systemd[1]: sshd@8-10.200.20.29:22-10.200.16.10:42382.service: Deactivated successfully.
Mar 7 00:44:04.120485 systemd[1]: session-11.scope: Deactivated successfully.
Mar 7 00:44:04.121109 systemd-logind[1864]: Session 11 logged out. Waiting for processes to exit.
Mar 7 00:44:04.122211 systemd-logind[1864]: Removed session 11.
Mar 7 00:44:09.207189 systemd[1]: Started sshd@9-10.200.20.29:22-10.200.16.10:42386.service - OpenSSH per-connection server daemon (10.200.16.10:42386).
Mar 7 00:44:09.631246 sshd[4739]: Accepted publickey for core from 10.200.16.10 port 42386 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:44:09.632290 sshd-session[4739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:44:09.636171 systemd-logind[1864]: New session 12 of user core.
Mar 7 00:44:09.640350 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 7 00:44:09.907084 sshd[4742]: Connection closed by 10.200.16.10 port 42386
Mar 7 00:44:09.907743 sshd-session[4739]: pam_unix(sshd:session): session closed for user core
Mar 7 00:44:09.911347 systemd-logind[1864]: Session 12 logged out. Waiting for processes to exit.
Mar 7 00:44:09.911660 systemd[1]: sshd@9-10.200.20.29:22-10.200.16.10:42386.service: Deactivated successfully.
Mar 7 00:44:09.913073 systemd[1]: session-12.scope: Deactivated successfully.
Mar 7 00:44:09.914608 systemd-logind[1864]: Removed session 12.
Mar 7 00:44:14.998802 systemd[1]: Started sshd@10-10.200.20.29:22-10.200.16.10:39378.service - OpenSSH per-connection server daemon (10.200.16.10:39378).
Mar 7 00:44:15.420248 sshd[4755]: Accepted publickey for core from 10.200.16.10 port 39378 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:44:15.421725 sshd-session[4755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:44:15.425147 systemd-logind[1864]: New session 13 of user core.
Mar 7 00:44:15.433339 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 7 00:44:15.693326 sshd[4758]: Connection closed by 10.200.16.10 port 39378
Mar 7 00:44:15.693065 sshd-session[4755]: pam_unix(sshd:session): session closed for user core
Mar 7 00:44:15.695965 systemd-logind[1864]: Session 13 logged out. Waiting for processes to exit.
Mar 7 00:44:15.696327 systemd[1]: sshd@10-10.200.20.29:22-10.200.16.10:39378.service: Deactivated successfully.
Mar 7 00:44:15.697962 systemd[1]: session-13.scope: Deactivated successfully.
Mar 7 00:44:15.700165 systemd-logind[1864]: Removed session 13.
Mar 7 00:44:15.792957 systemd[1]: Started sshd@11-10.200.20.29:22-10.200.16.10:39380.service - OpenSSH per-connection server daemon (10.200.16.10:39380).
Mar 7 00:44:16.221284 sshd[4771]: Accepted publickey for core from 10.200.16.10 port 39380 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:44:16.222018 sshd-session[4771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:44:16.225354 systemd-logind[1864]: New session 14 of user core.
Mar 7 00:44:16.232507 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 00:44:16.524690 sshd[4774]: Connection closed by 10.200.16.10 port 39380
Mar 7 00:44:16.526067 sshd-session[4771]: pam_unix(sshd:session): session closed for user core
Mar 7 00:44:16.528519 systemd[1]: sshd@11-10.200.20.29:22-10.200.16.10:39380.service: Deactivated successfully.
Mar 7 00:44:16.530557 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 00:44:16.531635 systemd-logind[1864]: Session 14 logged out. Waiting for processes to exit.
Mar 7 00:44:16.533819 systemd-logind[1864]: Removed session 14.
Mar 7 00:44:16.618044 systemd[1]: Started sshd@12-10.200.20.29:22-10.200.16.10:39388.service - OpenSSH per-connection server daemon (10.200.16.10:39388).
Mar 7 00:44:17.042346 sshd[4784]: Accepted publickey for core from 10.200.16.10 port 39388 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:44:17.043478 sshd-session[4784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:44:17.046901 systemd-logind[1864]: New session 15 of user core.
Mar 7 00:44:17.055443 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 00:44:17.319736 sshd[4787]: Connection closed by 10.200.16.10 port 39388
Mar 7 00:44:17.319642 sshd-session[4784]: pam_unix(sshd:session): session closed for user core
Mar 7 00:44:17.323213 systemd[1]: sshd@12-10.200.20.29:22-10.200.16.10:39388.service: Deactivated successfully.
Mar 7 00:44:17.325018 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 00:44:17.325958 systemd-logind[1864]: Session 15 logged out. Waiting for processes to exit.
Mar 7 00:44:17.328133 systemd-logind[1864]: Removed session 15.
Mar 7 00:44:22.413373 systemd[1]: Started sshd@13-10.200.20.29:22-10.200.16.10:60484.service - OpenSSH per-connection server daemon (10.200.16.10:60484).
Mar 7 00:44:22.825795 sshd[4800]: Accepted publickey for core from 10.200.16.10 port 60484 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:44:22.826850 sshd-session[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:44:22.830470 systemd-logind[1864]: New session 16 of user core.
Mar 7 00:44:22.837380 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 00:44:23.099734 sshd[4803]: Connection closed by 10.200.16.10 port 60484
Mar 7 00:44:23.100407 sshd-session[4800]: pam_unix(sshd:session): session closed for user core
Mar 7 00:44:23.104040 systemd[1]: sshd@13-10.200.20.29:22-10.200.16.10:60484.service: Deactivated successfully.
Mar 7 00:44:23.106702 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 00:44:23.107884 systemd-logind[1864]: Session 16 logged out. Waiting for processes to exit.
Mar 7 00:44:23.109037 systemd-logind[1864]: Removed session 16.
Mar 7 00:44:23.188484 systemd[1]: Started sshd@14-10.200.20.29:22-10.200.16.10:60498.service - OpenSSH per-connection server daemon (10.200.16.10:60498).
Mar 7 00:44:23.615197 sshd[4815]: Accepted publickey for core from 10.200.16.10 port 60498 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:44:23.616140 sshd-session[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:44:23.619623 systemd-logind[1864]: New session 17 of user core.
Mar 7 00:44:23.629364 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 00:44:23.935346 sshd[4818]: Connection closed by 10.200.16.10 port 60498
Mar 7 00:44:23.935709 sshd-session[4815]: pam_unix(sshd:session): session closed for user core
Mar 7 00:44:23.939101 systemd-logind[1864]: Session 17 logged out. Waiting for processes to exit.
Mar 7 00:44:23.939509 systemd[1]: sshd@14-10.200.20.29:22-10.200.16.10:60498.service: Deactivated successfully.
Mar 7 00:44:23.941523 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 00:44:23.943710 systemd-logind[1864]: Removed session 17.
Mar 7 00:44:24.026481 systemd[1]: Started sshd@15-10.200.20.29:22-10.200.16.10:60506.service - OpenSSH per-connection server daemon (10.200.16.10:60506).
Mar 7 00:44:24.442692 sshd[4829]: Accepted publickey for core from 10.200.16.10 port 60506 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:44:24.443725 sshd-session[4829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:44:24.447266 systemd-logind[1864]: New session 18 of user core.
Mar 7 00:44:24.457347 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 00:44:25.130082 sshd[4832]: Connection closed by 10.200.16.10 port 60506
Mar 7 00:44:25.130713 sshd-session[4829]: pam_unix(sshd:session): session closed for user core
Mar 7 00:44:25.134449 systemd-logind[1864]: Session 18 logged out. Waiting for processes to exit.
Mar 7 00:44:25.134759 systemd[1]: sshd@15-10.200.20.29:22-10.200.16.10:60506.service: Deactivated successfully.
Mar 7 00:44:25.136428 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 00:44:25.137904 systemd-logind[1864]: Removed session 18.
Mar 7 00:44:25.218186 systemd[1]: Started sshd@16-10.200.20.29:22-10.200.16.10:60508.service - OpenSSH per-connection server daemon (10.200.16.10:60508).
Mar 7 00:44:25.633810 sshd[4847]: Accepted publickey for core from 10.200.16.10 port 60508 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:44:25.638278 sshd-session[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:44:25.641803 systemd-logind[1864]: New session 19 of user core.
Mar 7 00:44:25.647454 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 00:44:25.978611 sshd[4850]: Connection closed by 10.200.16.10 port 60508
Mar 7 00:44:25.979681 sshd-session[4847]: pam_unix(sshd:session): session closed for user core
Mar 7 00:44:25.983028 systemd-logind[1864]: Session 19 logged out. Waiting for processes to exit.
Mar 7 00:44:25.983189 systemd[1]: sshd@16-10.200.20.29:22-10.200.16.10:60508.service: Deactivated successfully.
Mar 7 00:44:25.984798 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 00:44:25.986770 systemd-logind[1864]: Removed session 19.
Mar 7 00:44:26.074772 systemd[1]: Started sshd@17-10.200.20.29:22-10.200.16.10:60520.service - OpenSSH per-connection server daemon (10.200.16.10:60520).
Mar 7 00:44:26.495124 sshd[4862]: Accepted publickey for core from 10.200.16.10 port 60520 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:44:26.496096 sshd-session[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:44:26.499466 systemd-logind[1864]: New session 20 of user core.
Mar 7 00:44:26.506368 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 00:44:26.767781 sshd[4865]: Connection closed by 10.200.16.10 port 60520
Mar 7 00:44:26.768319 sshd-session[4862]: pam_unix(sshd:session): session closed for user core
Mar 7 00:44:26.772027 systemd[1]: sshd@17-10.200.20.29:22-10.200.16.10:60520.service: Deactivated successfully.
Mar 7 00:44:26.774834 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 00:44:26.776481 systemd-logind[1864]: Session 20 logged out. Waiting for processes to exit.
Mar 7 00:44:26.777687 systemd-logind[1864]: Removed session 20.
Mar 7 00:44:31.861447 systemd[1]: Started sshd@18-10.200.20.29:22-10.200.16.10:54788.service - OpenSSH per-connection server daemon (10.200.16.10:54788).
Mar 7 00:44:32.276926 sshd[4880]: Accepted publickey for core from 10.200.16.10 port 54788 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:44:32.277675 sshd-session[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:44:32.281290 systemd-logind[1864]: New session 21 of user core.
Mar 7 00:44:32.286368 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 00:44:32.551962 sshd[4883]: Connection closed by 10.200.16.10 port 54788
Mar 7 00:44:32.551265 sshd-session[4880]: pam_unix(sshd:session): session closed for user core
Mar 7 00:44:32.554428 systemd[1]: sshd@18-10.200.20.29:22-10.200.16.10:54788.service: Deactivated successfully.
Mar 7 00:44:32.557365 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 00:44:32.560128 systemd-logind[1864]: Session 21 logged out. Waiting for processes to exit.
Mar 7 00:44:32.561216 systemd-logind[1864]: Removed session 21.
Mar 7 00:44:37.651021 systemd[1]: Started sshd@19-10.200.20.29:22-10.200.16.10:54804.service - OpenSSH per-connection server daemon (10.200.16.10:54804).
Mar 7 00:44:38.075192 sshd[4897]: Accepted publickey for core from 10.200.16.10 port 54804 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:44:38.075936 sshd-session[4897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:44:38.079442 systemd-logind[1864]: New session 22 of user core.
Mar 7 00:44:38.084346 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 7 00:44:38.347461 sshd[4900]: Connection closed by 10.200.16.10 port 54804
Mar 7 00:44:38.346847 sshd-session[4897]: pam_unix(sshd:session): session closed for user core
Mar 7 00:44:38.350359 systemd-logind[1864]: Session 22 logged out. Waiting for processes to exit.
Mar 7 00:44:38.350885 systemd[1]: sshd@19-10.200.20.29:22-10.200.16.10:54804.service: Deactivated successfully.
Mar 7 00:44:38.352540 systemd[1]: session-22.scope: Deactivated successfully.
Mar 7 00:44:38.354127 systemd-logind[1864]: Removed session 22.
Mar 7 00:44:38.439428 systemd[1]: Started sshd@20-10.200.20.29:22-10.200.16.10:54812.service - OpenSSH per-connection server daemon (10.200.16.10:54812).
Mar 7 00:44:38.858176 sshd[4912]: Accepted publickey for core from 10.200.16.10 port 54812 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:44:38.859218 sshd-session[4912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:44:38.862891 systemd-logind[1864]: New session 23 of user core.
Mar 7 00:44:38.872553 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 7 00:44:40.311977 containerd[1887]: time="2026-03-07T00:44:40.311934979Z" level=info msg="StopContainer for \"2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29\" with timeout 30 (s)"
Mar 7 00:44:40.312825 containerd[1887]: time="2026-03-07T00:44:40.312685182Z" level=info msg="Stop container \"2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29\" with signal terminated"
Mar 7 00:44:40.326523 systemd[1]: cri-containerd-2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29.scope: Deactivated successfully.
Mar 7 00:44:40.328653 containerd[1887]: time="2026-03-07T00:44:40.328602646Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 00:44:40.331269 containerd[1887]: time="2026-03-07T00:44:40.330706425Z" level=info msg="received container exit event container_id:\"2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29\" id:\"2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29\" pid:3913 exited_at:{seconds:1772844280 nanos:330191911}"
Mar 7 00:44:40.340766 containerd[1887]: time="2026-03-07T00:44:40.340677158Z" level=info msg="StopContainer for \"561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e\" with timeout 2 (s)"
Mar 7 00:44:40.341208 containerd[1887]: time="2026-03-07T00:44:40.341189768Z" level=info msg="Stop container \"561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e\" with signal terminated"
Mar 7 00:44:40.348981 systemd-networkd[1475]: lxc_health: Link DOWN
Mar 7 00:44:40.348987 systemd-networkd[1475]: lxc_health: Lost carrier
Mar 7 00:44:40.362164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29-rootfs.mount: Deactivated successfully.
Mar 7 00:44:40.368861 systemd[1]: cri-containerd-561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e.scope: Deactivated successfully.
Mar 7 00:44:40.369123 systemd[1]: cri-containerd-561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e.scope: Consumed 4.189s CPU time, 125.7M memory peak, 144K read from disk, 12.9M written to disk.
Mar 7 00:44:40.370353 containerd[1887]: time="2026-03-07T00:44:40.370327505Z" level=info msg="received container exit event container_id:\"561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e\" id:\"561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e\" pid:4021 exited_at:{seconds:1772844280 nanos:370164955}"
Mar 7 00:44:40.386273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e-rootfs.mount: Deactivated successfully.
Mar 7 00:44:40.423429 containerd[1887]: time="2026-03-07T00:44:40.423393817Z" level=info msg="StopContainer for \"561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e\" returns successfully"
Mar 7 00:44:40.424202 containerd[1887]: time="2026-03-07T00:44:40.424180085Z" level=info msg="StopPodSandbox for \"67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f\""
Mar 7 00:44:40.424451 containerd[1887]: time="2026-03-07T00:44:40.424425286Z" level=info msg="Container to stop \"6a60afec1341d3279452a97144cca8d81375433060a5dc30e952ed6a6c7d842c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 00:44:40.424593 containerd[1887]: time="2026-03-07T00:44:40.424521753Z" level=info msg="Container to stop \"868e15d127d4273493e8c25a2edf2d89934f1fe010544d7a237df941e1347577\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 00:44:40.424593 containerd[1887]: time="2026-03-07T00:44:40.424536514Z" level=info msg="Container to stop \"c76379515e8bf0a70466c32e277a36b6017b2c73195e6ad0ef149a829d41e411\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 00:44:40.424593 containerd[1887]: time="2026-03-07T00:44:40.424543202Z" level=info msg="Container to stop \"6391b52d37288b8c6d2bb9f5c10b3418004ace448b991834acbc7fdb8bbb82f0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 00:44:40.424593 containerd[1887]: time="2026-03-07T00:44:40.424549650Z" level=info msg="Container to stop \"561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 00:44:40.426695 containerd[1887]: time="2026-03-07T00:44:40.426609108Z" level=info msg="StopContainer for \"2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29\" returns successfully"
Mar 7 00:44:40.426895 containerd[1887]: time="2026-03-07T00:44:40.426878293Z" level=info msg="StopPodSandbox for \"ad3a0733a588a5ee1ba45bc3cc0784da84e08989101e07f2b21e850d72affb0a\""
Mar 7 00:44:40.426988 containerd[1887]: time="2026-03-07T00:44:40.426973649Z" level=info msg="Container to stop \"2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 00:44:40.432411 systemd[1]: cri-containerd-67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f.scope: Deactivated successfully.
Mar 7 00:44:40.433729 systemd[1]: cri-containerd-ad3a0733a588a5ee1ba45bc3cc0784da84e08989101e07f2b21e850d72affb0a.scope: Deactivated successfully.
Mar 7 00:44:40.436045 containerd[1887]: time="2026-03-07T00:44:40.436012332Z" level=info msg="received sandbox exit event container_id:\"67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f\" id:\"67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f\" exit_status:137 exited_at:{seconds:1772844280 nanos:435776867}" monitor_name=podsandbox
Mar 7 00:44:40.438248 containerd[1887]: time="2026-03-07T00:44:40.438206466Z" level=info msg="received sandbox exit event container_id:\"ad3a0733a588a5ee1ba45bc3cc0784da84e08989101e07f2b21e850d72affb0a\" id:\"ad3a0733a588a5ee1ba45bc3cc0784da84e08989101e07f2b21e850d72affb0a\" exit_status:137 exited_at:{seconds:1772844280 nanos:437851341}" monitor_name=podsandbox
Mar 7 00:44:40.460079 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad3a0733a588a5ee1ba45bc3cc0784da84e08989101e07f2b21e850d72affb0a-rootfs.mount: Deactivated successfully.
Mar 7 00:44:40.463411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f-rootfs.mount: Deactivated successfully.
Mar 7 00:44:40.474875 containerd[1887]: time="2026-03-07T00:44:40.474847263Z" level=info msg="shim disconnected" id=ad3a0733a588a5ee1ba45bc3cc0784da84e08989101e07f2b21e850d72affb0a namespace=k8s.io
Mar 7 00:44:40.475194 containerd[1887]: time="2026-03-07T00:44:40.475027286Z" level=warning msg="cleaning up after shim disconnected" id=ad3a0733a588a5ee1ba45bc3cc0784da84e08989101e07f2b21e850d72affb0a namespace=k8s.io
Mar 7 00:44:40.475194 containerd[1887]: time="2026-03-07T00:44:40.475064127Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:44:40.476858 containerd[1887]: time="2026-03-07T00:44:40.476825902Z" level=info msg="shim disconnected" id=67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f namespace=k8s.io
Mar 7 00:44:40.476932 containerd[1887]: time="2026-03-07T00:44:40.476854559Z" level=warning msg="cleaning up after shim disconnected" id=67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f namespace=k8s.io
Mar 7 00:44:40.476932 containerd[1887]: time="2026-03-07T00:44:40.476872552Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:44:40.485633 containerd[1887]: time="2026-03-07T00:44:40.485603416Z" level=info msg="received sandbox container exit event sandbox_id:\"ad3a0733a588a5ee1ba45bc3cc0784da84e08989101e07f2b21e850d72affb0a\" exit_status:137 exited_at:{seconds:1772844280 nanos:437851341}" monitor_name=criService
Mar 7 00:44:40.487018 containerd[1887]: time="2026-03-07T00:44:40.486987801Z" level=info msg="TearDown network for sandbox \"67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f\" successfully"
Mar 7 00:44:40.487018 containerd[1887]: time="2026-03-07T00:44:40.487010226Z" level=info msg="StopPodSandbox for \"67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f\" returns successfully"
Mar 7 00:44:40.487198 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f-shm.mount: Deactivated successfully.
Mar 7 00:44:40.487936 containerd[1887]: time="2026-03-07T00:44:40.485606864Z" level=info msg="received sandbox container exit event sandbox_id:\"67adddc57de86ae7b0bbad3eb9f689b30956ca5e51a2435393d58ecce5305a0f\" exit_status:137 exited_at:{seconds:1772844280 nanos:435776867}" monitor_name=criService
Mar 7 00:44:40.488500 containerd[1887]: time="2026-03-07T00:44:40.488103657Z" level=info msg="TearDown network for sandbox \"ad3a0733a588a5ee1ba45bc3cc0784da84e08989101e07f2b21e850d72affb0a\" successfully"
Mar 7 00:44:40.488500 containerd[1887]: time="2026-03-07T00:44:40.488122426Z" level=info msg="StopPodSandbox for \"ad3a0733a588a5ee1ba45bc3cc0784da84e08989101e07f2b21e850d72affb0a\" returns successfully"
Mar 7 00:44:40.544389 kubelet[3389]: I0307 00:44:40.544348 3389 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-host-proc-sys-net" pod "1b916e4d-71bd-468f-8a49-17856c4dbe66" (UID: "1b916e4d-71bd-468f-8a49-17856c4dbe66"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:44:40.544389 kubelet[3389]: I0307 00:44:40.544397 3389 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-host-proc-sys-net\") pod \"1b916e4d-71bd-468f-8a49-17856c4dbe66\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") "
Mar 7 00:44:40.544748 kubelet[3389]: I0307 00:44:40.544414 3389 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/1323ff54-0348-48d9-9ef2-b63b0ebf651b-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1323ff54-0348-48d9-9ef2-b63b0ebf651b-cilium-config-path\") pod \"1323ff54-0348-48d9-9ef2-b63b0ebf651b\" (UID: \"1323ff54-0348-48d9-9ef2-b63b0ebf651b\") "
Mar 7 00:44:40.544748 kubelet[3389]: I0307 00:44:40.544429 3389 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-xtables-lock\") pod \"1b916e4d-71bd-468f-8a49-17856c4dbe66\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") "
Mar 7 00:44:40.544748 kubelet[3389]: I0307 00:44:40.544440 3389 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-lib-modules\") pod \"1b916e4d-71bd-468f-8a49-17856c4dbe66\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") "
Mar 7 00:44:40.544748 kubelet[3389]: I0307 00:44:40.544452 3389 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/1b916e4d-71bd-468f-8a49-17856c4dbe66-hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b916e4d-71bd-468f-8a49-17856c4dbe66-hubble-tls\") pod \"1b916e4d-71bd-468f-8a49-17856c4dbe66\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") "
Mar 7 00:44:40.544748 kubelet[3389]: I0307 00:44:40.544462 3389 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-etc-cni-netd\") pod \"1b916e4d-71bd-468f-8a49-17856c4dbe66\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") "
Mar 7 00:44:40.544826 kubelet[3389]: I0307 00:44:40.544474 3389 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/1b916e4d-71bd-468f-8a49-17856c4dbe66-kube-api-access-w7cvs\" (UniqueName: \"kubernetes.io/projected/1b916e4d-71bd-468f-8a49-17856c4dbe66-kube-api-access-w7cvs\") pod \"1b916e4d-71bd-468f-8a49-17856c4dbe66\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") "
Mar 7 00:44:40.544826 kubelet[3389]: I0307 00:44:40.544485 3389 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-bpf-maps\") pod \"1b916e4d-71bd-468f-8a49-17856c4dbe66\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") "
Mar 7 00:44:40.544826 kubelet[3389]: I0307 00:44:40.544499 3389 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/1b916e4d-71bd-468f-8a49-17856c4dbe66-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b916e4d-71bd-468f-8a49-17856c4dbe66-cilium-config-path\") pod \"1b916e4d-71bd-468f-8a49-17856c4dbe66\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") "
Mar 7 00:44:40.544826 kubelet[3389]: I0307 00:44:40.544512 3389 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-host-proc-sys-kernel\") pod \"1b916e4d-71bd-468f-8a49-17856c4dbe66\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") "
Mar 7 00:44:40.544826 kubelet[3389]: I0307 00:44:40.544539 3389 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-cilium-run\") pod \"1b916e4d-71bd-468f-8a49-17856c4dbe66\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") "
Mar 7 00:44:40.544898 kubelet[3389]: I0307 00:44:40.544551 3389 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-cni-path\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-cni-path\") pod \"1b916e4d-71bd-468f-8a49-17856c4dbe66\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") "
Mar 7 00:44:40.544898 kubelet[3389]: I0307 00:44:40.544561 3389 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-hostproc\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-hostproc\") pod \"1b916e4d-71bd-468f-8a49-17856c4dbe66\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") "
Mar 7 00:44:40.544898 kubelet[3389]: I0307 00:44:40.544572 3389 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-cilium-cgroup\") pod \"1b916e4d-71bd-468f-8a49-17856c4dbe66\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") "
Mar 7 00:44:40.544898 kubelet[3389]: I0307 00:44:40.544583 3389 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/1b916e4d-71bd-468f-8a49-17856c4dbe66-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b916e4d-71bd-468f-8a49-17856c4dbe66-clustermesh-secrets\") pod \"1b916e4d-71bd-468f-8a49-17856c4dbe66\" (UID: \"1b916e4d-71bd-468f-8a49-17856c4dbe66\") "
Mar 7 00:44:40.544898 kubelet[3389]: I0307 00:44:40.544594 3389 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/1323ff54-0348-48d9-9ef2-b63b0ebf651b-kube-api-access-xgmxj\" (UniqueName: \"kubernetes.io/projected/1323ff54-0348-48d9-9ef2-b63b0ebf651b-kube-api-access-xgmxj\") pod \"1323ff54-0348-48d9-9ef2-b63b0ebf651b\" (UID: \"1323ff54-0348-48d9-9ef2-b63b0ebf651b\") "
Mar 7 00:44:40.544969 kubelet[3389]: I0307 00:44:40.544617 3389 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-host-proc-sys-net\") on node \"ci-4459.2.3-n-e6e869ea98\" DevicePath \"\""
Mar 7 00:44:40.545584 kubelet[3389]: I0307 00:44:40.545565 3389 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-bpf-maps" pod "1b916e4d-71bd-468f-8a49-17856c4dbe66" (UID: "1b916e4d-71bd-468f-8a49-17856c4dbe66"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:44:40.547030 kubelet[3389]: I0307 00:44:40.547007 3389 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1323ff54-0348-48d9-9ef2-b63b0ebf651b-cilium-config-path" pod "1323ff54-0348-48d9-9ef2-b63b0ebf651b" (UID: "1323ff54-0348-48d9-9ef2-b63b0ebf651b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 7 00:44:40.547144 kubelet[3389]: I0307 00:44:40.547132 3389 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-xtables-lock" pod "1b916e4d-71bd-468f-8a49-17856c4dbe66" (UID: "1b916e4d-71bd-468f-8a49-17856c4dbe66"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:44:40.547218 kubelet[3389]: I0307 00:44:40.547207 3389 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-lib-modules" pod "1b916e4d-71bd-468f-8a49-17856c4dbe66" (UID: "1b916e4d-71bd-468f-8a49-17856c4dbe66"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:44:40.547362 kubelet[3389]: I0307 00:44:40.547340 3389 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b916e4d-71bd-468f-8a49-17856c4dbe66-cilium-config-path" pod "1b916e4d-71bd-468f-8a49-17856c4dbe66" (UID: "1b916e4d-71bd-468f-8a49-17856c4dbe66"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 7 00:44:40.547400 kubelet[3389]: I0307 00:44:40.547372 3389 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-host-proc-sys-kernel" pod "1b916e4d-71bd-468f-8a49-17856c4dbe66" (UID: "1b916e4d-71bd-468f-8a49-17856c4dbe66"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:44:40.547400 kubelet[3389]: I0307 00:44:40.547384 3389 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-cilium-run" pod "1b916e4d-71bd-468f-8a49-17856c4dbe66" (UID: "1b916e4d-71bd-468f-8a49-17856c4dbe66"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:44:40.547400 kubelet[3389]: I0307 00:44:40.547394 3389 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-cni-path" pod "1b916e4d-71bd-468f-8a49-17856c4dbe66" (UID: "1b916e4d-71bd-468f-8a49-17856c4dbe66"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:44:40.547458 kubelet[3389]: I0307 00:44:40.547402 3389 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-hostproc" pod "1b916e4d-71bd-468f-8a49-17856c4dbe66" (UID: "1b916e4d-71bd-468f-8a49-17856c4dbe66"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:44:40.547458 kubelet[3389]: I0307 00:44:40.547409 3389 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-cilium-cgroup" pod "1b916e4d-71bd-468f-8a49-17856c4dbe66" (UID: "1b916e4d-71bd-468f-8a49-17856c4dbe66"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:44:40.547738 kubelet[3389]: I0307 00:44:40.547718 3389 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-etc-cni-netd" pod "1b916e4d-71bd-468f-8a49-17856c4dbe66" (UID: "1b916e4d-71bd-468f-8a49-17856c4dbe66"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:44:40.549127 kubelet[3389]: I0307 00:44:40.549103 3389 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1323ff54-0348-48d9-9ef2-b63b0ebf651b-kube-api-access-xgmxj" pod "1323ff54-0348-48d9-9ef2-b63b0ebf651b" (UID: "1323ff54-0348-48d9-9ef2-b63b0ebf651b"). InnerVolumeSpecName "kube-api-access-xgmxj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 00:44:40.549786 kubelet[3389]: I0307 00:44:40.549766 3389 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b916e4d-71bd-468f-8a49-17856c4dbe66-hubble-tls" pod "1b916e4d-71bd-468f-8a49-17856c4dbe66" (UID: "1b916e4d-71bd-468f-8a49-17856c4dbe66"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 00:44:40.550358 kubelet[3389]: I0307 00:44:40.550339 3389 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b916e4d-71bd-468f-8a49-17856c4dbe66-kube-api-access-w7cvs" pod "1b916e4d-71bd-468f-8a49-17856c4dbe66" (UID: "1b916e4d-71bd-468f-8a49-17856c4dbe66"). InnerVolumeSpecName "kube-api-access-w7cvs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 00:44:40.550837 kubelet[3389]: I0307 00:44:40.550814 3389 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b916e4d-71bd-468f-8a49-17856c4dbe66-clustermesh-secrets" pod "1b916e4d-71bd-468f-8a49-17856c4dbe66" (UID: "1b916e4d-71bd-468f-8a49-17856c4dbe66"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 7 00:44:40.645372 kubelet[3389]: I0307 00:44:40.645248 3389 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b916e4d-71bd-468f-8a49-17856c4dbe66-cilium-config-path\") on node \"ci-4459.2.3-n-e6e869ea98\" DevicePath \"\""
Mar 7 00:44:40.645908 kubelet[3389]: I0307 00:44:40.645609 3389 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-host-proc-sys-kernel\") on node \"ci-4459.2.3-n-e6e869ea98\" DevicePath \"\""
Mar 7 00:44:40.645908 kubelet[3389]: I0307 00:44:40.645624 3389 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-cilium-run\") on node \"ci-4459.2.3-n-e6e869ea98\" DevicePath \"\""
Mar 7 00:44:40.645908 kubelet[3389]: I0307 00:44:40.645631 3389 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-cni-path\") on node \"ci-4459.2.3-n-e6e869ea98\" DevicePath \"\""
Mar 7 00:44:40.645908 kubelet[3389]: I0307 00:44:40.645652 3389 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-hostproc\") on node \"ci-4459.2.3-n-e6e869ea98\" DevicePath \"\""
Mar 7 00:44:40.645908 kubelet[3389]: I0307 00:44:40.645657 3389 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-cilium-cgroup\") on node \"ci-4459.2.3-n-e6e869ea98\" DevicePath \"\""
Mar 7 00:44:40.645908 kubelet[3389]: I0307 00:44:40.645775 3389 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b916e4d-71bd-468f-8a49-17856c4dbe66-clustermesh-secrets\") on node
\"ci-4459.2.3-n-e6e869ea98\" DevicePath \"\"" Mar 7 00:44:40.645908 kubelet[3389]: I0307 00:44:40.645783 3389 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xgmxj\" (UniqueName: \"kubernetes.io/projected/1323ff54-0348-48d9-9ef2-b63b0ebf651b-kube-api-access-xgmxj\") on node \"ci-4459.2.3-n-e6e869ea98\" DevicePath \"\"" Mar 7 00:44:40.645908 kubelet[3389]: I0307 00:44:40.645791 3389 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1323ff54-0348-48d9-9ef2-b63b0ebf651b-cilium-config-path\") on node \"ci-4459.2.3-n-e6e869ea98\" DevicePath \"\"" Mar 7 00:44:40.646129 kubelet[3389]: I0307 00:44:40.645798 3389 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-xtables-lock\") on node \"ci-4459.2.3-n-e6e869ea98\" DevicePath \"\"" Mar 7 00:44:40.646129 kubelet[3389]: I0307 00:44:40.645805 3389 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-lib-modules\") on node \"ci-4459.2.3-n-e6e869ea98\" DevicePath \"\"" Mar 7 00:44:40.646129 kubelet[3389]: I0307 00:44:40.645810 3389 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b916e4d-71bd-468f-8a49-17856c4dbe66-hubble-tls\") on node \"ci-4459.2.3-n-e6e869ea98\" DevicePath \"\"" Mar 7 00:44:40.646129 kubelet[3389]: I0307 00:44:40.645816 3389 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-etc-cni-netd\") on node \"ci-4459.2.3-n-e6e869ea98\" DevicePath \"\"" Mar 7 00:44:40.646318 kubelet[3389]: I0307 00:44:40.646288 3389 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w7cvs\" (UniqueName: 
\"kubernetes.io/projected/1b916e4d-71bd-468f-8a49-17856c4dbe66-kube-api-access-w7cvs\") on node \"ci-4459.2.3-n-e6e869ea98\" DevicePath \"\"" Mar 7 00:44:40.646318 kubelet[3389]: I0307 00:44:40.646299 3389 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b916e4d-71bd-468f-8a49-17856c4dbe66-bpf-maps\") on node \"ci-4459.2.3-n-e6e869ea98\" DevicePath \"\"" Mar 7 00:44:40.724202 kubelet[3389]: I0307 00:44:40.724106 3389 scope.go:122] "RemoveContainer" containerID="2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29" Mar 7 00:44:40.727280 containerd[1887]: time="2026-03-07T00:44:40.727215416Z" level=info msg="RemoveContainer for \"2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29\"" Mar 7 00:44:40.731495 systemd[1]: Removed slice kubepods-besteffort-pod1323ff54_0348_48d9_9ef2_b63b0ebf651b.slice - libcontainer container kubepods-besteffort-pod1323ff54_0348_48d9_9ef2_b63b0ebf651b.slice. Mar 7 00:44:40.739918 systemd[1]: Removed slice kubepods-burstable-pod1b916e4d_71bd_468f_8a49_17856c4dbe66.slice - libcontainer container kubepods-burstable-pod1b916e4d_71bd_468f_8a49_17856c4dbe66.slice. Mar 7 00:44:40.739996 systemd[1]: kubepods-burstable-pod1b916e4d_71bd_468f_8a49_17856c4dbe66.slice: Consumed 4.247s CPU time, 126.1M memory peak, 144K read from disk, 12.9M written to disk. 
Mar 7 00:44:40.744173 containerd[1887]: time="2026-03-07T00:44:40.744130997Z" level=info msg="RemoveContainer for \"2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29\" returns successfully" Mar 7 00:44:40.745317 containerd[1887]: time="2026-03-07T00:44:40.745291174Z" level=error msg="ContainerStatus for \"2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29\": not found" Mar 7 00:44:40.745366 kubelet[3389]: I0307 00:44:40.745104 3389 scope.go:122] "RemoveContainer" containerID="2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29" Mar 7 00:44:40.745566 kubelet[3389]: E0307 00:44:40.745501 3389 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29\": not found" containerID="2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29" Mar 7 00:44:40.745735 kubelet[3389]: I0307 00:44:40.745704 3389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29"} err="failed to get container status \"2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a99eb33bd18ada7cdc0fa771b371dc1be9bfe140569b128a1fe63c2695fac29\": not found" Mar 7 00:44:40.745806 kubelet[3389]: I0307 00:44:40.745796 3389 scope.go:122] "RemoveContainer" containerID="561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e" Mar 7 00:44:40.747204 containerd[1887]: time="2026-03-07T00:44:40.747183266Z" level=info msg="RemoveContainer for \"561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e\"" Mar 7 00:44:40.758734 
containerd[1887]: time="2026-03-07T00:44:40.758353713Z" level=info msg="RemoveContainer for \"561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e\" returns successfully" Mar 7 00:44:40.759645 kubelet[3389]: I0307 00:44:40.759625 3389 scope.go:122] "RemoveContainer" containerID="6391b52d37288b8c6d2bb9f5c10b3418004ace448b991834acbc7fdb8bbb82f0" Mar 7 00:44:40.761698 containerd[1887]: time="2026-03-07T00:44:40.761672591Z" level=info msg="RemoveContainer for \"6391b52d37288b8c6d2bb9f5c10b3418004ace448b991834acbc7fdb8bbb82f0\"" Mar 7 00:44:40.770141 containerd[1887]: time="2026-03-07T00:44:40.769985400Z" level=info msg="RemoveContainer for \"6391b52d37288b8c6d2bb9f5c10b3418004ace448b991834acbc7fdb8bbb82f0\" returns successfully" Mar 7 00:44:40.770621 kubelet[3389]: I0307 00:44:40.770596 3389 scope.go:122] "RemoveContainer" containerID="c76379515e8bf0a70466c32e277a36b6017b2c73195e6ad0ef149a829d41e411" Mar 7 00:44:40.773310 containerd[1887]: time="2026-03-07T00:44:40.773272310Z" level=info msg="RemoveContainer for \"c76379515e8bf0a70466c32e277a36b6017b2c73195e6ad0ef149a829d41e411\"" Mar 7 00:44:40.780440 containerd[1887]: time="2026-03-07T00:44:40.780414381Z" level=info msg="RemoveContainer for \"c76379515e8bf0a70466c32e277a36b6017b2c73195e6ad0ef149a829d41e411\" returns successfully" Mar 7 00:44:40.780642 kubelet[3389]: I0307 00:44:40.780559 3389 scope.go:122] "RemoveContainer" containerID="868e15d127d4273493e8c25a2edf2d89934f1fe010544d7a237df941e1347577" Mar 7 00:44:40.781809 containerd[1887]: time="2026-03-07T00:44:40.781783390Z" level=info msg="RemoveContainer for \"868e15d127d4273493e8c25a2edf2d89934f1fe010544d7a237df941e1347577\"" Mar 7 00:44:40.788590 containerd[1887]: time="2026-03-07T00:44:40.788565712Z" level=info msg="RemoveContainer for \"868e15d127d4273493e8c25a2edf2d89934f1fe010544d7a237df941e1347577\" returns successfully" Mar 7 00:44:40.788760 kubelet[3389]: I0307 00:44:40.788738 3389 scope.go:122] "RemoveContainer" 
containerID="6a60afec1341d3279452a97144cca8d81375433060a5dc30e952ed6a6c7d842c" Mar 7 00:44:40.790027 containerd[1887]: time="2026-03-07T00:44:40.790002212Z" level=info msg="RemoveContainer for \"6a60afec1341d3279452a97144cca8d81375433060a5dc30e952ed6a6c7d842c\"" Mar 7 00:44:40.798273 containerd[1887]: time="2026-03-07T00:44:40.798213585Z" level=info msg="RemoveContainer for \"6a60afec1341d3279452a97144cca8d81375433060a5dc30e952ed6a6c7d842c\" returns successfully" Mar 7 00:44:40.798495 kubelet[3389]: I0307 00:44:40.798406 3389 scope.go:122] "RemoveContainer" containerID="561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e" Mar 7 00:44:40.798706 containerd[1887]: time="2026-03-07T00:44:40.798672729Z" level=error msg="ContainerStatus for \"561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e\": not found" Mar 7 00:44:40.798904 kubelet[3389]: E0307 00:44:40.798872 3389 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e\": not found" containerID="561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e" Mar 7 00:44:40.798949 kubelet[3389]: I0307 00:44:40.798918 3389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e"} err="failed to get container status \"561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e\": rpc error: code = NotFound desc = an error occurred when try to find container \"561e64064f3a28529f0278e019614de32067760a308aaa189e3ea7e23ed0e28e\": not found" Mar 7 00:44:40.798949 kubelet[3389]: I0307 00:44:40.798936 3389 scope.go:122] "RemoveContainer" 
containerID="6391b52d37288b8c6d2bb9f5c10b3418004ace448b991834acbc7fdb8bbb82f0" Mar 7 00:44:40.799188 containerd[1887]: time="2026-03-07T00:44:40.799157899Z" level=error msg="ContainerStatus for \"6391b52d37288b8c6d2bb9f5c10b3418004ace448b991834acbc7fdb8bbb82f0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6391b52d37288b8c6d2bb9f5c10b3418004ace448b991834acbc7fdb8bbb82f0\": not found" Mar 7 00:44:40.799294 kubelet[3389]: E0307 00:44:40.799271 3389 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6391b52d37288b8c6d2bb9f5c10b3418004ace448b991834acbc7fdb8bbb82f0\": not found" containerID="6391b52d37288b8c6d2bb9f5c10b3418004ace448b991834acbc7fdb8bbb82f0" Mar 7 00:44:40.799336 kubelet[3389]: I0307 00:44:40.799294 3389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6391b52d37288b8c6d2bb9f5c10b3418004ace448b991834acbc7fdb8bbb82f0"} err="failed to get container status \"6391b52d37288b8c6d2bb9f5c10b3418004ace448b991834acbc7fdb8bbb82f0\": rpc error: code = NotFound desc = an error occurred when try to find container \"6391b52d37288b8c6d2bb9f5c10b3418004ace448b991834acbc7fdb8bbb82f0\": not found" Mar 7 00:44:40.799336 kubelet[3389]: I0307 00:44:40.799308 3389 scope.go:122] "RemoveContainer" containerID="c76379515e8bf0a70466c32e277a36b6017b2c73195e6ad0ef149a829d41e411" Mar 7 00:44:40.799524 containerd[1887]: time="2026-03-07T00:44:40.799494519Z" level=error msg="ContainerStatus for \"c76379515e8bf0a70466c32e277a36b6017b2c73195e6ad0ef149a829d41e411\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c76379515e8bf0a70466c32e277a36b6017b2c73195e6ad0ef149a829d41e411\": not found" Mar 7 00:44:40.799746 kubelet[3389]: E0307 00:44:40.799723 3389 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"c76379515e8bf0a70466c32e277a36b6017b2c73195e6ad0ef149a829d41e411\": not found" containerID="c76379515e8bf0a70466c32e277a36b6017b2c73195e6ad0ef149a829d41e411" Mar 7 00:44:40.799746 kubelet[3389]: I0307 00:44:40.799745 3389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c76379515e8bf0a70466c32e277a36b6017b2c73195e6ad0ef149a829d41e411"} err="failed to get container status \"c76379515e8bf0a70466c32e277a36b6017b2c73195e6ad0ef149a829d41e411\": rpc error: code = NotFound desc = an error occurred when try to find container \"c76379515e8bf0a70466c32e277a36b6017b2c73195e6ad0ef149a829d41e411\": not found" Mar 7 00:44:40.799746 kubelet[3389]: I0307 00:44:40.799756 3389 scope.go:122] "RemoveContainer" containerID="868e15d127d4273493e8c25a2edf2d89934f1fe010544d7a237df941e1347577" Mar 7 00:44:40.799904 containerd[1887]: time="2026-03-07T00:44:40.799880116Z" level=error msg="ContainerStatus for \"868e15d127d4273493e8c25a2edf2d89934f1fe010544d7a237df941e1347577\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"868e15d127d4273493e8c25a2edf2d89934f1fe010544d7a237df941e1347577\": not found" Mar 7 00:44:40.800089 kubelet[3389]: E0307 00:44:40.799976 3389 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"868e15d127d4273493e8c25a2edf2d89934f1fe010544d7a237df941e1347577\": not found" containerID="868e15d127d4273493e8c25a2edf2d89934f1fe010544d7a237df941e1347577" Mar 7 00:44:40.800208 kubelet[3389]: I0307 00:44:40.800138 3389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"868e15d127d4273493e8c25a2edf2d89934f1fe010544d7a237df941e1347577"} err="failed to get container status \"868e15d127d4273493e8c25a2edf2d89934f1fe010544d7a237df941e1347577\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"868e15d127d4273493e8c25a2edf2d89934f1fe010544d7a237df941e1347577\": not found" Mar 7 00:44:40.800208 kubelet[3389]: I0307 00:44:40.800157 3389 scope.go:122] "RemoveContainer" containerID="6a60afec1341d3279452a97144cca8d81375433060a5dc30e952ed6a6c7d842c" Mar 7 00:44:40.800532 containerd[1887]: time="2026-03-07T00:44:40.800464817Z" level=error msg="ContainerStatus for \"6a60afec1341d3279452a97144cca8d81375433060a5dc30e952ed6a6c7d842c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6a60afec1341d3279452a97144cca8d81375433060a5dc30e952ed6a6c7d842c\": not found" Mar 7 00:44:40.800615 kubelet[3389]: E0307 00:44:40.800591 3389 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6a60afec1341d3279452a97144cca8d81375433060a5dc30e952ed6a6c7d842c\": not found" containerID="6a60afec1341d3279452a97144cca8d81375433060a5dc30e952ed6a6c7d842c" Mar 7 00:44:40.800615 kubelet[3389]: I0307 00:44:40.800611 3389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6a60afec1341d3279452a97144cca8d81375433060a5dc30e952ed6a6c7d842c"} err="failed to get container status \"6a60afec1341d3279452a97144cca8d81375433060a5dc30e952ed6a6c7d842c\": rpc error: code = NotFound desc = an error occurred when try to find container \"6a60afec1341d3279452a97144cca8d81375433060a5dc30e952ed6a6c7d842c\": not found" Mar 7 00:44:41.362203 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ad3a0733a588a5ee1ba45bc3cc0784da84e08989101e07f2b21e850d72affb0a-shm.mount: Deactivated successfully. Mar 7 00:44:41.362311 systemd[1]: var-lib-kubelet-pods-1323ff54\x2d0348\x2d48d9\x2d9ef2\x2db63b0ebf651b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxgmxj.mount: Deactivated successfully. 
Mar 7 00:44:41.362356 systemd[1]: var-lib-kubelet-pods-1b916e4d\x2d71bd\x2d468f\x2d8a49\x2d17856c4dbe66-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw7cvs.mount: Deactivated successfully. Mar 7 00:44:41.362392 systemd[1]: var-lib-kubelet-pods-1b916e4d\x2d71bd\x2d468f\x2d8a49\x2d17856c4dbe66-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 7 00:44:41.362432 systemd[1]: var-lib-kubelet-pods-1b916e4d\x2d71bd\x2d468f\x2d8a49\x2d17856c4dbe66-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 7 00:44:41.443412 kubelet[3389]: I0307 00:44:41.443374 3389 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1323ff54-0348-48d9-9ef2-b63b0ebf651b" path="/var/lib/kubelet/pods/1323ff54-0348-48d9-9ef2-b63b0ebf651b/volumes" Mar 7 00:44:41.443673 kubelet[3389]: I0307 00:44:41.443655 3389 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1b916e4d-71bd-468f-8a49-17856c4dbe66" path="/var/lib/kubelet/pods/1b916e4d-71bd-468f-8a49-17856c4dbe66/volumes" Mar 7 00:44:42.321577 sshd[4915]: Connection closed by 10.200.16.10 port 54812 Mar 7 00:44:42.322421 sshd-session[4912]: pam_unix(sshd:session): session closed for user core Mar 7 00:44:42.325661 systemd[1]: sshd@20-10.200.20.29:22-10.200.16.10:54812.service: Deactivated successfully. Mar 7 00:44:42.327253 systemd[1]: session-23.scope: Deactivated successfully. Mar 7 00:44:42.329186 systemd-logind[1864]: Session 23 logged out. Waiting for processes to exit. Mar 7 00:44:42.330512 systemd-logind[1864]: Removed session 23. Mar 7 00:44:42.409831 systemd[1]: Started sshd@21-10.200.20.29:22-10.200.16.10:39082.service - OpenSSH per-connection server daemon (10.200.16.10:39082). 
Mar 7 00:44:42.514890 kubelet[3389]: E0307 00:44:42.514850 3389 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 7 00:44:42.827122 sshd[5058]: Accepted publickey for core from 10.200.16.10 port 39082 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0 Mar 7 00:44:42.827819 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:44:42.831162 systemd-logind[1864]: New session 24 of user core. Mar 7 00:44:42.839340 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 7 00:44:43.591733 systemd[1]: Created slice kubepods-burstable-pod68972a5f_02ca_4527_a38d_923a62fca7b7.slice - libcontainer container kubepods-burstable-pod68972a5f_02ca_4527_a38d_923a62fca7b7.slice. Mar 7 00:44:43.630785 sshd[5061]: Connection closed by 10.200.16.10 port 39082 Mar 7 00:44:43.631408 sshd-session[5058]: pam_unix(sshd:session): session closed for user core Mar 7 00:44:43.633991 systemd-logind[1864]: Session 24 logged out. Waiting for processes to exit. Mar 7 00:44:43.634087 systemd[1]: sshd@21-10.200.20.29:22-10.200.16.10:39082.service: Deactivated successfully. Mar 7 00:44:43.635554 systemd[1]: session-24.scope: Deactivated successfully. Mar 7 00:44:43.637569 systemd-logind[1864]: Removed session 24. 
Mar 7 00:44:43.662921 kubelet[3389]: I0307 00:44:43.662891 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/68972a5f-02ca-4527-a38d-923a62fca7b7-cilium-cgroup\") pod \"cilium-ktbsv\" (UID: \"68972a5f-02ca-4527-a38d-923a62fca7b7\") " pod="kube-system/cilium-ktbsv" Mar 7 00:44:43.663427 kubelet[3389]: I0307 00:44:43.663304 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/68972a5f-02ca-4527-a38d-923a62fca7b7-hubble-tls\") pod \"cilium-ktbsv\" (UID: \"68972a5f-02ca-4527-a38d-923a62fca7b7\") " pod="kube-system/cilium-ktbsv" Mar 7 00:44:43.663427 kubelet[3389]: I0307 00:44:43.663329 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68972a5f-02ca-4527-a38d-923a62fca7b7-etc-cni-netd\") pod \"cilium-ktbsv\" (UID: \"68972a5f-02ca-4527-a38d-923a62fca7b7\") " pod="kube-system/cilium-ktbsv" Mar 7 00:44:43.663427 kubelet[3389]: I0307 00:44:43.663341 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/68972a5f-02ca-4527-a38d-923a62fca7b7-cilium-ipsec-secrets\") pod \"cilium-ktbsv\" (UID: \"68972a5f-02ca-4527-a38d-923a62fca7b7\") " pod="kube-system/cilium-ktbsv" Mar 7 00:44:43.663427 kubelet[3389]: I0307 00:44:43.663379 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmmsl\" (UniqueName: \"kubernetes.io/projected/68972a5f-02ca-4527-a38d-923a62fca7b7-kube-api-access-zmmsl\") pod \"cilium-ktbsv\" (UID: \"68972a5f-02ca-4527-a38d-923a62fca7b7\") " pod="kube-system/cilium-ktbsv" Mar 7 00:44:43.663427 kubelet[3389]: I0307 00:44:43.663391 3389 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/68972a5f-02ca-4527-a38d-923a62fca7b7-hostproc\") pod \"cilium-ktbsv\" (UID: \"68972a5f-02ca-4527-a38d-923a62fca7b7\") " pod="kube-system/cilium-ktbsv" Mar 7 00:44:43.663427 kubelet[3389]: I0307 00:44:43.663399 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68972a5f-02ca-4527-a38d-923a62fca7b7-xtables-lock\") pod \"cilium-ktbsv\" (UID: \"68972a5f-02ca-4527-a38d-923a62fca7b7\") " pod="kube-system/cilium-ktbsv" Mar 7 00:44:43.663564 kubelet[3389]: I0307 00:44:43.663408 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/68972a5f-02ca-4527-a38d-923a62fca7b7-host-proc-sys-net\") pod \"cilium-ktbsv\" (UID: \"68972a5f-02ca-4527-a38d-923a62fca7b7\") " pod="kube-system/cilium-ktbsv" Mar 7 00:44:43.663745 kubelet[3389]: I0307 00:44:43.663418 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/68972a5f-02ca-4527-a38d-923a62fca7b7-cilium-run\") pod \"cilium-ktbsv\" (UID: \"68972a5f-02ca-4527-a38d-923a62fca7b7\") " pod="kube-system/cilium-ktbsv" Mar 7 00:44:43.663745 kubelet[3389]: I0307 00:44:43.663651 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/68972a5f-02ca-4527-a38d-923a62fca7b7-bpf-maps\") pod \"cilium-ktbsv\" (UID: \"68972a5f-02ca-4527-a38d-923a62fca7b7\") " pod="kube-system/cilium-ktbsv" Mar 7 00:44:43.663745 kubelet[3389]: I0307 00:44:43.663700 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/68972a5f-02ca-4527-a38d-923a62fca7b7-cni-path\") pod \"cilium-ktbsv\" (UID: \"68972a5f-02ca-4527-a38d-923a62fca7b7\") " pod="kube-system/cilium-ktbsv" Mar 7 00:44:43.663745 kubelet[3389]: I0307 00:44:43.663711 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/68972a5f-02ca-4527-a38d-923a62fca7b7-cilium-config-path\") pod \"cilium-ktbsv\" (UID: \"68972a5f-02ca-4527-a38d-923a62fca7b7\") " pod="kube-system/cilium-ktbsv" Mar 7 00:44:43.663745 kubelet[3389]: I0307 00:44:43.663721 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/68972a5f-02ca-4527-a38d-923a62fca7b7-clustermesh-secrets\") pod \"cilium-ktbsv\" (UID: \"68972a5f-02ca-4527-a38d-923a62fca7b7\") " pod="kube-system/cilium-ktbsv" Mar 7 00:44:43.663745 kubelet[3389]: I0307 00:44:43.663730 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68972a5f-02ca-4527-a38d-923a62fca7b7-lib-modules\") pod \"cilium-ktbsv\" (UID: \"68972a5f-02ca-4527-a38d-923a62fca7b7\") " pod="kube-system/cilium-ktbsv" Mar 7 00:44:43.663934 kubelet[3389]: I0307 00:44:43.663901 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/68972a5f-02ca-4527-a38d-923a62fca7b7-host-proc-sys-kernel\") pod \"cilium-ktbsv\" (UID: \"68972a5f-02ca-4527-a38d-923a62fca7b7\") " pod="kube-system/cilium-ktbsv" Mar 7 00:44:43.718924 systemd[1]: Started sshd@22-10.200.20.29:22-10.200.16.10:39084.service - OpenSSH per-connection server daemon (10.200.16.10:39084). 
Mar 7 00:44:43.902949 containerd[1887]: time="2026-03-07T00:44:43.902841277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ktbsv,Uid:68972a5f-02ca-4527-a38d-923a62fca7b7,Namespace:kube-system,Attempt:0,}" Mar 7 00:44:43.934763 containerd[1887]: time="2026-03-07T00:44:43.934728496Z" level=info msg="connecting to shim 3b8d0068341adec06bbec7c57c65af0287f61b9ef59e75b01ed24b3ca3ecd609" address="unix:///run/containerd/s/d5396b1f0f576245ab06c2d15fde9aec676b112a6bfc4a79380826d3888b6620" namespace=k8s.io protocol=ttrpc version=3 Mar 7 00:44:43.952351 systemd[1]: Started cri-containerd-3b8d0068341adec06bbec7c57c65af0287f61b9ef59e75b01ed24b3ca3ecd609.scope - libcontainer container 3b8d0068341adec06bbec7c57c65af0287f61b9ef59e75b01ed24b3ca3ecd609. Mar 7 00:44:43.973460 containerd[1887]: time="2026-03-07T00:44:43.973423871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ktbsv,Uid:68972a5f-02ca-4527-a38d-923a62fca7b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b8d0068341adec06bbec7c57c65af0287f61b9ef59e75b01ed24b3ca3ecd609\"" Mar 7 00:44:43.982877 containerd[1887]: time="2026-03-07T00:44:43.982848880Z" level=info msg="CreateContainer within sandbox \"3b8d0068341adec06bbec7c57c65af0287f61b9ef59e75b01ed24b3ca3ecd609\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 7 00:44:43.997120 containerd[1887]: time="2026-03-07T00:44:43.997087756Z" level=info msg="Container c614f69bb0f3aa8c3b5cacc804f8c06b30652685636ab1f041f2eb7fda6253bc: CDI devices from CRI Config.CDIDevices: []" Mar 7 00:44:44.012936 containerd[1887]: time="2026-03-07T00:44:44.012904890Z" level=info msg="CreateContainer within sandbox \"3b8d0068341adec06bbec7c57c65af0287f61b9ef59e75b01ed24b3ca3ecd609\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c614f69bb0f3aa8c3b5cacc804f8c06b30652685636ab1f041f2eb7fda6253bc\"" Mar 7 00:44:44.014272 containerd[1887]: time="2026-03-07T00:44:44.014074443Z" level=info msg="StartContainer 
for \"c614f69bb0f3aa8c3b5cacc804f8c06b30652685636ab1f041f2eb7fda6253bc\""
Mar 7 00:44:44.015175 containerd[1887]: time="2026-03-07T00:44:44.015142402Z" level=info msg="connecting to shim c614f69bb0f3aa8c3b5cacc804f8c06b30652685636ab1f041f2eb7fda6253bc" address="unix:///run/containerd/s/d5396b1f0f576245ab06c2d15fde9aec676b112a6bfc4a79380826d3888b6620" protocol=ttrpc version=3
Mar 7 00:44:44.031351 systemd[1]: Started cri-containerd-c614f69bb0f3aa8c3b5cacc804f8c06b30652685636ab1f041f2eb7fda6253bc.scope - libcontainer container c614f69bb0f3aa8c3b5cacc804f8c06b30652685636ab1f041f2eb7fda6253bc.
Mar 7 00:44:44.062260 containerd[1887]: time="2026-03-07T00:44:44.062217451Z" level=info msg="StartContainer for \"c614f69bb0f3aa8c3b5cacc804f8c06b30652685636ab1f041f2eb7fda6253bc\" returns successfully"
Mar 7 00:44:44.065955 systemd[1]: cri-containerd-c614f69bb0f3aa8c3b5cacc804f8c06b30652685636ab1f041f2eb7fda6253bc.scope: Deactivated successfully.
Mar 7 00:44:44.068312 containerd[1887]: time="2026-03-07T00:44:44.068279940Z" level=info msg="received container exit event container_id:\"c614f69bb0f3aa8c3b5cacc804f8c06b30652685636ab1f041f2eb7fda6253bc\" id:\"c614f69bb0f3aa8c3b5cacc804f8c06b30652685636ab1f041f2eb7fda6253bc\" pid:5136 exited_at:{seconds:1772844284 nanos:67959121}"
Mar 7 00:44:44.143392 sshd[5071]: Accepted publickey for core from 10.200.16.10 port 39084 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:44:44.144591 sshd-session[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:44:44.148315 systemd-logind[1864]: New session 25 of user core.
Mar 7 00:44:44.159345 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 7 00:44:44.377411 sshd[5170]: Connection closed by 10.200.16.10 port 39084
Mar 7 00:44:44.377318 sshd-session[5071]: pam_unix(sshd:session): session closed for user core
Mar 7 00:44:44.381954 systemd[1]: sshd@22-10.200.20.29:22-10.200.16.10:39084.service: Deactivated successfully.
Mar 7 00:44:44.384487 systemd[1]: session-25.scope: Deactivated successfully.
Mar 7 00:44:44.386020 systemd-logind[1864]: Session 25 logged out. Waiting for processes to exit.
Mar 7 00:44:44.387354 systemd-logind[1864]: Removed session 25.
Mar 7 00:44:44.465433 systemd[1]: Started sshd@23-10.200.20.29:22-10.200.16.10:39098.service - OpenSSH per-connection server daemon (10.200.16.10:39098).
Mar 7 00:44:44.751124 containerd[1887]: time="2026-03-07T00:44:44.750374422Z" level=info msg="CreateContainer within sandbox \"3b8d0068341adec06bbec7c57c65af0287f61b9ef59e75b01ed24b3ca3ecd609\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 7 00:44:44.766830 containerd[1887]: time="2026-03-07T00:44:44.766793735Z" level=info msg="Container 882b3811188e3883cd8f96e0a6271faeecd72da1d195d9210c7241909da36f9d: CDI devices from CRI Config.CDIDevices: []"
Mar 7 00:44:44.783608 containerd[1887]: time="2026-03-07T00:44:44.783573822Z" level=info msg="CreateContainer within sandbox \"3b8d0068341adec06bbec7c57c65af0287f61b9ef59e75b01ed24b3ca3ecd609\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"882b3811188e3883cd8f96e0a6271faeecd72da1d195d9210c7241909da36f9d\""
Mar 7 00:44:44.784384 containerd[1887]: time="2026-03-07T00:44:44.784009501Z" level=info msg="StartContainer for \"882b3811188e3883cd8f96e0a6271faeecd72da1d195d9210c7241909da36f9d\""
Mar 7 00:44:44.785603 containerd[1887]: time="2026-03-07T00:44:44.785580229Z" level=info msg="connecting to shim 882b3811188e3883cd8f96e0a6271faeecd72da1d195d9210c7241909da36f9d" address="unix:///run/containerd/s/d5396b1f0f576245ab06c2d15fde9aec676b112a6bfc4a79380826d3888b6620" protocol=ttrpc version=3
Mar 7 00:44:44.805369 systemd[1]: Started cri-containerd-882b3811188e3883cd8f96e0a6271faeecd72da1d195d9210c7241909da36f9d.scope - libcontainer container 882b3811188e3883cd8f96e0a6271faeecd72da1d195d9210c7241909da36f9d.
Mar 7 00:44:44.832325 containerd[1887]: time="2026-03-07T00:44:44.832286910Z" level=info msg="StartContainer for \"882b3811188e3883cd8f96e0a6271faeecd72da1d195d9210c7241909da36f9d\" returns successfully"
Mar 7 00:44:44.834924 systemd[1]: cri-containerd-882b3811188e3883cd8f96e0a6271faeecd72da1d195d9210c7241909da36f9d.scope: Deactivated successfully.
Mar 7 00:44:44.835720 containerd[1887]: time="2026-03-07T00:44:44.835529490Z" level=info msg="received container exit event container_id:\"882b3811188e3883cd8f96e0a6271faeecd72da1d195d9210c7241909da36f9d\" id:\"882b3811188e3883cd8f96e0a6271faeecd72da1d195d9210c7241909da36f9d\" pid:5193 exited_at:{seconds:1772844284 nanos:834777991}"
Mar 7 00:44:44.850438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-882b3811188e3883cd8f96e0a6271faeecd72da1d195d9210c7241909da36f9d-rootfs.mount: Deactivated successfully.
Mar 7 00:44:44.891241 sshd[5177]: Accepted publickey for core from 10.200.16.10 port 39098 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:44:44.892603 sshd-session[5177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:44:44.896257 systemd-logind[1864]: New session 26 of user core.
Mar 7 00:44:44.904342 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 7 00:44:45.752396 containerd[1887]: time="2026-03-07T00:44:45.752355612Z" level=info msg="CreateContainer within sandbox \"3b8d0068341adec06bbec7c57c65af0287f61b9ef59e75b01ed24b3ca3ecd609\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 7 00:44:45.782054 containerd[1887]: time="2026-03-07T00:44:45.779476083Z" level=info msg="Container 86c26d59c9e0242143b6df42852f759c17e339ab521fad80c76c9326efbca45f: CDI devices from CRI Config.CDIDevices: []"
Mar 7 00:44:45.800200 containerd[1887]: time="2026-03-07T00:44:45.800157132Z" level=info msg="CreateContainer within sandbox \"3b8d0068341adec06bbec7c57c65af0287f61b9ef59e75b01ed24b3ca3ecd609\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"86c26d59c9e0242143b6df42852f759c17e339ab521fad80c76c9326efbca45f\""
Mar 7 00:44:45.802712 containerd[1887]: time="2026-03-07T00:44:45.802565354Z" level=info msg="StartContainer for \"86c26d59c9e0242143b6df42852f759c17e339ab521fad80c76c9326efbca45f\""
Mar 7 00:44:45.804451 containerd[1887]: time="2026-03-07T00:44:45.804341385Z" level=info msg="connecting to shim 86c26d59c9e0242143b6df42852f759c17e339ab521fad80c76c9326efbca45f" address="unix:///run/containerd/s/d5396b1f0f576245ab06c2d15fde9aec676b112a6bfc4a79380826d3888b6620" protocol=ttrpc version=3
Mar 7 00:44:45.822350 systemd[1]: Started cri-containerd-86c26d59c9e0242143b6df42852f759c17e339ab521fad80c76c9326efbca45f.scope - libcontainer container 86c26d59c9e0242143b6df42852f759c17e339ab521fad80c76c9326efbca45f.
Mar 7 00:44:45.871211 systemd[1]: cri-containerd-86c26d59c9e0242143b6df42852f759c17e339ab521fad80c76c9326efbca45f.scope: Deactivated successfully.
Mar 7 00:44:45.874200 containerd[1887]: time="2026-03-07T00:44:45.874014829Z" level=info msg="received container exit event container_id:\"86c26d59c9e0242143b6df42852f759c17e339ab521fad80c76c9326efbca45f\" id:\"86c26d59c9e0242143b6df42852f759c17e339ab521fad80c76c9326efbca45f\" pid:5243 exited_at:{seconds:1772844285 nanos:872075079}"
Mar 7 00:44:45.876898 containerd[1887]: time="2026-03-07T00:44:45.876800832Z" level=info msg="StartContainer for \"86c26d59c9e0242143b6df42852f759c17e339ab521fad80c76c9326efbca45f\" returns successfully"
Mar 7 00:44:45.890619 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86c26d59c9e0242143b6df42852f759c17e339ab521fad80c76c9326efbca45f-rootfs.mount: Deactivated successfully.
Mar 7 00:44:46.763278 containerd[1887]: time="2026-03-07T00:44:46.763218503Z" level=info msg="CreateContainer within sandbox \"3b8d0068341adec06bbec7c57c65af0287f61b9ef59e75b01ed24b3ca3ecd609\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 7 00:44:46.779743 containerd[1887]: time="2026-03-07T00:44:46.779712218Z" level=info msg="Container 641437c7dba9a3c94d8e6b982df27927dbe1e258f51997dd796835bb62cb3e5f: CDI devices from CRI Config.CDIDevices: []"
Mar 7 00:44:46.782856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3840998812.mount: Deactivated successfully.
Mar 7 00:44:46.797434 containerd[1887]: time="2026-03-07T00:44:46.797399433Z" level=info msg="CreateContainer within sandbox \"3b8d0068341adec06bbec7c57c65af0287f61b9ef59e75b01ed24b3ca3ecd609\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"641437c7dba9a3c94d8e6b982df27927dbe1e258f51997dd796835bb62cb3e5f\""
Mar 7 00:44:46.798004 containerd[1887]: time="2026-03-07T00:44:46.797853153Z" level=info msg="StartContainer for \"641437c7dba9a3c94d8e6b982df27927dbe1e258f51997dd796835bb62cb3e5f\""
Mar 7 00:44:46.800062 containerd[1887]: time="2026-03-07T00:44:46.799902394Z" level=info msg="connecting to shim 641437c7dba9a3c94d8e6b982df27927dbe1e258f51997dd796835bb62cb3e5f" address="unix:///run/containerd/s/d5396b1f0f576245ab06c2d15fde9aec676b112a6bfc4a79380826d3888b6620" protocol=ttrpc version=3
Mar 7 00:44:46.821355 systemd[1]: Started cri-containerd-641437c7dba9a3c94d8e6b982df27927dbe1e258f51997dd796835bb62cb3e5f.scope - libcontainer container 641437c7dba9a3c94d8e6b982df27927dbe1e258f51997dd796835bb62cb3e5f.
Mar 7 00:44:46.842246 systemd[1]: cri-containerd-641437c7dba9a3c94d8e6b982df27927dbe1e258f51997dd796835bb62cb3e5f.scope: Deactivated successfully.
Mar 7 00:44:46.849167 containerd[1887]: time="2026-03-07T00:44:46.849123509Z" level=info msg="received container exit event container_id:\"641437c7dba9a3c94d8e6b982df27927dbe1e258f51997dd796835bb62cb3e5f\" id:\"641437c7dba9a3c94d8e6b982df27927dbe1e258f51997dd796835bb62cb3e5f\" pid:5283 exited_at:{seconds:1772844286 nanos:843003907}"
Mar 7 00:44:46.850727 containerd[1887]: time="2026-03-07T00:44:46.850703981Z" level=info msg="StartContainer for \"641437c7dba9a3c94d8e6b982df27927dbe1e258f51997dd796835bb62cb3e5f\" returns successfully"
Mar 7 00:44:46.863841 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-641437c7dba9a3c94d8e6b982df27927dbe1e258f51997dd796835bb62cb3e5f-rootfs.mount: Deactivated successfully.
Mar 7 00:44:47.515722 kubelet[3389]: E0307 00:44:47.515679 3389 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 00:44:47.766630 containerd[1887]: time="2026-03-07T00:44:47.766515995Z" level=info msg="CreateContainer within sandbox \"3b8d0068341adec06bbec7c57c65af0287f61b9ef59e75b01ed24b3ca3ecd609\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 7 00:44:47.785931 containerd[1887]: time="2026-03-07T00:44:47.785891886Z" level=info msg="Container 960d42b3efda275d55da270e4722579f1a97980dcdf7c30ad875ef0de0d1f4ad: CDI devices from CRI Config.CDIDevices: []"
Mar 7 00:44:47.802067 containerd[1887]: time="2026-03-07T00:44:47.802034221Z" level=info msg="CreateContainer within sandbox \"3b8d0068341adec06bbec7c57c65af0287f61b9ef59e75b01ed24b3ca3ecd609\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"960d42b3efda275d55da270e4722579f1a97980dcdf7c30ad875ef0de0d1f4ad\""
Mar 7 00:44:47.802692 containerd[1887]: time="2026-03-07T00:44:47.802669036Z" level=info msg="StartContainer for \"960d42b3efda275d55da270e4722579f1a97980dcdf7c30ad875ef0de0d1f4ad\""
Mar 7 00:44:47.803653 containerd[1887]: time="2026-03-07T00:44:47.803629902Z" level=info msg="connecting to shim 960d42b3efda275d55da270e4722579f1a97980dcdf7c30ad875ef0de0d1f4ad" address="unix:///run/containerd/s/d5396b1f0f576245ab06c2d15fde9aec676b112a6bfc4a79380826d3888b6620" protocol=ttrpc version=3
Mar 7 00:44:47.822355 systemd[1]: Started cri-containerd-960d42b3efda275d55da270e4722579f1a97980dcdf7c30ad875ef0de0d1f4ad.scope - libcontainer container 960d42b3efda275d55da270e4722579f1a97980dcdf7c30ad875ef0de0d1f4ad.
Mar 7 00:44:47.857245 containerd[1887]: time="2026-03-07T00:44:47.857189156Z" level=info msg="StartContainer for \"960d42b3efda275d55da270e4722579f1a97980dcdf7c30ad875ef0de0d1f4ad\" returns successfully"
Mar 7 00:44:48.120264 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 7 00:44:48.774294 kubelet[3389]: I0307 00:44:48.774242 3389 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-ktbsv" podStartSLOduration=5.774220165 podStartE2EDuration="5.774220165s" podCreationTimestamp="2026-03-07 00:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:44:48.773387951 +0000 UTC m=+141.464710135" watchObservedRunningTime="2026-03-07 00:44:48.774220165 +0000 UTC m=+141.465542293"
Mar 7 00:44:50.042406 kubelet[3389]: I0307 00:44:50.042353 3389 setters.go:546] "Node became not ready" node="ci-4459.2.3-n-e6e869ea98" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T00:44:50Z","lastTransitionTime":"2026-03-07T00:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 7 00:44:50.517109 systemd-networkd[1475]: lxc_health: Link UP
Mar 7 00:44:50.523748 systemd-networkd[1475]: lxc_health: Gained carrier
Mar 7 00:44:52.308421 systemd-networkd[1475]: lxc_health: Gained IPv6LL
Mar 7 00:44:57.579514 kubelet[3389]: E0307 00:44:57.579475 3389 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:39152->127.0.0.1:46635: write tcp 127.0.0.1:39152->127.0.0.1:46635: write: broken pipe
Mar 7 00:44:57.665325 sshd[5224]: Connection closed by 10.200.16.10 port 39098
Mar 7 00:44:57.665930 sshd-session[5177]: pam_unix(sshd:session): session closed for user core
Mar 7 00:44:57.669377 systemd[1]: sshd@23-10.200.20.29:22-10.200.16.10:39098.service: Deactivated successfully.
Mar 7 00:44:57.671035 systemd[1]: session-26.scope: Deactivated successfully.
Mar 7 00:44:57.671868 systemd-logind[1864]: Session 26 logged out. Waiting for processes to exit.
Mar 7 00:44:57.673127 systemd-logind[1864]: Removed session 26.