Jan 28 00:47:12.058430 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Jan 28 00:47:12.058450 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Jan 27 22:35:34 -00 2026 Jan 28 00:47:12.058456 kernel: KASLR enabled Jan 28 00:47:12.058460 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 28 00:47:12.058464 kernel: printk: legacy bootconsole [pl11] enabled Jan 28 00:47:12.058469 kernel: efi: EFI v2.7 by EDK II Jan 28 00:47:12.058474 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e3f9018 RNG=0x3f979998 MEMRESERVE=0x3db83598 Jan 28 00:47:12.058478 kernel: random: crng init done Jan 28 00:47:12.058482 kernel: secureboot: Secure boot disabled Jan 28 00:47:12.058486 kernel: ACPI: Early table checksum verification disabled Jan 28 00:47:12.058490 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL) Jan 28 00:47:12.058494 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:47:12.058498 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:47:12.058501 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 28 00:47:12.058507 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:47:12.058512 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:47:12.058516 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:47:12.058520 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:47:12.058524 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:47:12.058530 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:47:12.058534 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 28 00:47:12.058538 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:47:12.058543 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 28 00:47:12.058547 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 28 00:47:12.058551 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 28 00:47:12.058555 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Jan 28 00:47:12.058559 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Jan 28 00:47:12.058564 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 28 00:47:12.058568 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 28 00:47:12.058572 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 28 00:47:12.058577 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 28 00:47:12.058581 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 28 00:47:12.058585 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 28 00:47:12.058590 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 28 00:47:12.058594 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 28 00:47:12.058598 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x800000000000-0xffffffffffff] hotplug Jan 28 00:47:12.058602 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Jan 28 00:47:12.058607 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff] Jan 28 00:47:12.058611 kernel: Zone ranges: Jan 28 00:47:12.058615 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jan 28 00:47:12.058622 kernel: DMA32 empty Jan 28 00:47:12.058626 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 28 00:47:12.058631 kernel: Device empty Jan 28 00:47:12.058635 kernel: Movable zone start for each node Jan 28 00:47:12.058639 kernel: Early memory node ranges Jan 28 00:47:12.058644 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 28 00:47:12.058649 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff] Jan 28 00:47:12.058653 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff] Jan 28 00:47:12.058658 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff] Jan 28 00:47:12.058662 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff] Jan 28 00:47:12.058667 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff] Jan 28 00:47:12.058671 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 28 00:47:12.058675 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 28 00:47:12.058680 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 28 00:47:12.058684 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1 Jan 28 00:47:12.058688 kernel: psci: probing for conduit method from ACPI. Jan 28 00:47:12.058693 kernel: psci: PSCIv1.3 detected in firmware. Jan 28 00:47:12.058697 kernel: psci: Using standard PSCI v0.2 function IDs Jan 28 00:47:12.058702 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jan 28 00:47:12.058706 kernel: psci: SMC Calling Convention v1.4 Jan 28 00:47:12.058711 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 28 00:47:12.058715 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 28 00:47:12.058719 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 28 00:47:12.058724 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 28 00:47:12.058728 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 28 00:47:12.058733 kernel: Detected PIPT I-cache on CPU0 Jan 28 00:47:12.058737 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Jan 28 00:47:12.058741 kernel: CPU features: detected: GIC system register CPU interface Jan 28 00:47:12.058746 kernel: CPU features: detected: Spectre-v4 Jan 28 00:47:12.058750 kernel: CPU features: detected: Spectre-BHB Jan 28 00:47:12.058756 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 28 00:47:12.058760 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 28 00:47:12.058764 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Jan 28 00:47:12.058769 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 28 00:47:12.058773 kernel: alternatives: applying boot alternatives Jan 28 00:47:12.058778 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f94df361d6ccbf6d3bccdda215ef8c4de18f0915f7435d65b20126d9bf4aaef1 Jan 28 00:47:12.058783 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 28 00:47:12.058787 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 28 00:47:12.058792 kernel: Fallback order for Node 0: 0 Jan 28 00:47:12.058796 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Jan 28 00:47:12.058801 kernel: Policy zone: Normal Jan 28 00:47:12.058806 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 28 00:47:12.058810 kernel: software IO TLB: area num 2. Jan 28 00:47:12.058815 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB) Jan 28 00:47:12.058819 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 28 00:47:12.058823 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 28 00:47:12.058829 kernel: rcu: RCU event tracing is enabled. Jan 28 00:47:12.058833 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 28 00:47:12.058837 kernel: Trampoline variant of Tasks RCU enabled. Jan 28 00:47:12.058842 kernel: Tracing variant of Tasks RCU enabled. Jan 28 00:47:12.058846 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 28 00:47:12.058850 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 28 00:47:12.058856 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 28 00:47:12.058860 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 28 00:47:12.058865 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 28 00:47:12.058869 kernel: GICv3: 960 SPIs implemented Jan 28 00:47:12.058873 kernel: GICv3: 0 Extended SPIs implemented Jan 28 00:47:12.058878 kernel: Root IRQ handler: gic_handle_irq Jan 28 00:47:12.058882 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 28 00:47:12.058886 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Jan 28 00:47:12.058891 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 28 00:47:12.058895 kernel: ITS: No ITS available, not enabling LPIs Jan 28 00:47:12.058900 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 28 00:47:12.058905 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Jan 28 00:47:12.058912 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 28 00:47:12.058916 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Jan 28 00:47:12.058920 kernel: Console: colour dummy device 80x25 Jan 28 00:47:12.058925 kernel: printk: legacy console [tty1] enabled Jan 28 00:47:12.058930 kernel: ACPI: Core revision 20240827 Jan 28 00:47:12.058935 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Jan 28 00:47:12.058939 kernel: pid_max: default: 32768 minimum: 301 Jan 28 00:47:12.058944 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 28 00:47:12.058948 kernel: landlock: Up and running. Jan 28 00:47:12.058953 kernel: SELinux: Initializing. Jan 28 00:47:12.058958 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 28 00:47:12.058974 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 28 00:47:12.058978 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1 Jan 28 00:47:12.058983 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0 Jan 28 00:47:12.058991 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 28 00:47:12.058997 kernel: rcu: Hierarchical SRCU implementation. Jan 28 00:47:12.059002 kernel: rcu: Max phase no-delay instances is 400. Jan 28 00:47:12.059007 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 28 00:47:12.059011 kernel: Remapping and enabling EFI services. Jan 28 00:47:12.059016 kernel: smp: Bringing up secondary CPUs ... Jan 28 00:47:12.059021 kernel: Detected PIPT I-cache on CPU1 Jan 28 00:47:12.059027 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 28 00:47:12.059032 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Jan 28 00:47:12.059036 kernel: smp: Brought up 1 node, 2 CPUs Jan 28 00:47:12.059041 kernel: SMP: Total of 2 processors activated. 
Jan 28 00:47:12.059046 kernel: CPU: All CPU(s) started at EL1 Jan 28 00:47:12.059051 kernel: CPU features: detected: 32-bit EL0 Support Jan 28 00:47:12.059056 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 28 00:47:12.059061 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 28 00:47:12.059066 kernel: CPU features: detected: Common not Private translations Jan 28 00:47:12.059070 kernel: CPU features: detected: CRC32 instructions Jan 28 00:47:12.059075 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Jan 28 00:47:12.059080 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 28 00:47:12.059085 kernel: CPU features: detected: LSE atomic instructions Jan 28 00:47:12.059089 kernel: CPU features: detected: Privileged Access Never Jan 28 00:47:12.059095 kernel: CPU features: detected: Speculation barrier (SB) Jan 28 00:47:12.059100 kernel: CPU features: detected: TLB range maintenance instructions Jan 28 00:47:12.059105 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 28 00:47:12.059109 kernel: CPU features: detected: Scalable Vector Extension Jan 28 00:47:12.059114 kernel: alternatives: applying system-wide alternatives Jan 28 00:47:12.059119 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Jan 28 00:47:12.059124 kernel: SVE: maximum available vector length 16 bytes per vector Jan 28 00:47:12.059129 kernel: SVE: default vector length 16 bytes per vector Jan 28 00:47:12.059134 kernel: Memory: 3952828K/4194160K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved) Jan 28 00:47:12.059139 kernel: devtmpfs: initialized Jan 28 00:47:12.059144 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 28 00:47:12.059149 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 28 00:47:12.059154 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 28 00:47:12.059158 kernel: 0 pages in range for non-PLT usage Jan 28 00:47:12.059163 kernel: 508400 pages in range for PLT usage Jan 28 00:47:12.059168 kernel: pinctrl core: initialized pinctrl subsystem Jan 28 00:47:12.059172 kernel: SMBIOS 3.1.0 present. Jan 28 00:47:12.059178 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025 Jan 28 00:47:12.059183 kernel: DMI: Memory slots populated: 2/2 Jan 28 00:47:12.059188 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 28 00:47:12.059192 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 28 00:47:12.059197 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 28 00:47:12.059202 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 28 00:47:12.059207 kernel: audit: initializing netlink subsys (disabled) Jan 28 00:47:12.059212 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Jan 28 00:47:12.059216 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 28 00:47:12.059222 kernel: cpuidle: using governor menu Jan 28 00:47:12.059227 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 28 00:47:12.059231 kernel: ASID allocator initialised with 32768 entries Jan 28 00:47:12.059236 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 28 00:47:12.059241 kernel: Serial: AMBA PL011 UART driver Jan 28 00:47:12.059246 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 28 00:47:12.059250 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 28 00:47:12.059255 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 28 00:47:12.059260 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 28 00:47:12.059265 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 28 00:47:12.059270 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 28 00:47:12.059275 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 28 00:47:12.059279 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 28 00:47:12.059284 kernel: ACPI: Added _OSI(Module Device) Jan 28 00:47:12.059289 kernel: ACPI: Added _OSI(Processor Device) Jan 28 00:47:12.059293 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 28 00:47:12.059298 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 28 00:47:12.059303 kernel: ACPI: Interpreter enabled Jan 28 00:47:12.059308 kernel: ACPI: Using GIC for interrupt routing Jan 28 00:47:12.059313 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 28 00:47:12.059318 kernel: printk: legacy console [ttyAMA0] enabled Jan 28 00:47:12.059322 kernel: printk: legacy bootconsole [pl11] disabled Jan 28 00:47:12.059327 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 28 00:47:12.059332 kernel: ACPI: CPU0 has been hot-added Jan 28 00:47:12.059337 kernel: ACPI: CPU1 has been hot-added Jan 28 00:47:12.059342 kernel: iommu: Default domain type: Translated Jan 28 00:47:12.059346 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 28 00:47:12.059352 kernel: efivars: Registered efivars operations Jan 28 00:47:12.059357 kernel: vgaarb: loaded Jan 28 00:47:12.059362 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 28 00:47:12.059366 kernel: VFS: Disk quotas dquot_6.6.0 Jan 28 00:47:12.059371 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 28 00:47:12.059375 kernel: pnp: PnP ACPI init Jan 28 00:47:12.059380 kernel: pnp: PnP ACPI: found 0 devices Jan 28 00:47:12.059385 kernel: NET: Registered PF_INET protocol family Jan 28 00:47:12.059390 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 28 00:47:12.059395 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 28 00:47:12.059400 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 28 00:47:12.059405 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 28 00:47:12.059410 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 28 00:47:12.059415 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 28 00:47:12.059419 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 28 00:47:12.059424 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 28 00:47:12.059429 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 28 00:47:12.059434 kernel: PCI: CLS 0 bytes, default 64 Jan 28 00:47:12.059438 kernel: kvm [1]: HYP mode not available Jan 
28 00:47:12.059444 kernel: Initialise system trusted keyrings Jan 28 00:47:12.059449 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 28 00:47:12.059454 kernel: Key type asymmetric registered Jan 28 00:47:12.059458 kernel: Asymmetric key parser 'x509' registered Jan 28 00:47:12.059463 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 28 00:47:12.059468 kernel: io scheduler mq-deadline registered Jan 28 00:47:12.059472 kernel: io scheduler kyber registered Jan 28 00:47:12.059477 kernel: io scheduler bfq registered Jan 28 00:47:12.059482 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 28 00:47:12.059487 kernel: thunder_xcv, ver 1.0 Jan 28 00:47:12.059492 kernel: thunder_bgx, ver 1.0 Jan 28 00:47:12.059497 kernel: nicpf, ver 1.0 Jan 28 00:47:12.059501 kernel: nicvf, ver 1.0 Jan 28 00:47:12.059617 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 28 00:47:12.059668 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-28T00:47:11 UTC (1769561231) Jan 28 00:47:12.059675 kernel: efifb: probing for efifb Jan 28 00:47:12.059681 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 28 00:47:12.059686 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 28 00:47:12.059691 kernel: efifb: scrolling: redraw Jan 28 00:47:12.059696 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 28 00:47:12.059700 kernel: Console: switching to colour frame buffer device 128x48 Jan 28 00:47:12.059705 kernel: fb0: EFI VGA frame buffer device Jan 28 00:47:12.059710 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 28 00:47:12.059715 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 28 00:47:12.059720 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 28 00:47:12.059725 kernel: watchdog: NMI not fully supported Jan 28 00:47:12.059730 kernel: watchdog: Hard watchdog permanently disabled Jan 28 00:47:12.059735 kernel: NET: Registered PF_INET6 protocol family Jan 28 00:47:12.059740 kernel: Segment Routing with IPv6 Jan 28 00:47:12.059745 kernel: In-situ OAM (IOAM) with IPv6 Jan 28 00:47:12.059749 kernel: NET: Registered PF_PACKET protocol family Jan 28 00:47:12.059754 kernel: Key type dns_resolver registered Jan 28 00:47:12.059759 kernel: registered taskstats version 1 Jan 28 00:47:12.059763 kernel: Loading compiled-in X.509 certificates Jan 28 00:47:12.059768 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 79637fe16a8be85dde8ec0d00305a4ac90a53e25' Jan 28 00:47:12.059774 kernel: Demotion targets for Node 0: null Jan 28 00:47:12.059778 kernel: Key type .fscrypt registered Jan 28 00:47:12.059783 kernel: Key type fscrypt-provisioning registered Jan 28 00:47:12.059788 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 28 00:47:12.059793 kernel: ima: Allocated hash algorithm: sha1 Jan 28 00:47:12.059797 kernel: ima: No architecture policies found Jan 28 00:47:12.059802 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 28 00:47:12.059807 kernel: clk: Disabling unused clocks Jan 28 00:47:12.059812 kernel: PM: genpd: Disabling unused power domains Jan 28 00:47:12.059817 kernel: Warning: unable to open an initial console. 
Jan 28 00:47:12.059822 kernel: Freeing unused kernel memory: 39552K Jan 28 00:47:12.059827 kernel: Run /init as init process Jan 28 00:47:12.059831 kernel: with arguments: Jan 28 00:47:12.059836 kernel: /init Jan 28 00:47:12.059841 kernel: with environment: Jan 28 00:47:12.059845 kernel: HOME=/ Jan 28 00:47:12.059850 kernel: TERM=linux Jan 28 00:47:12.059856 systemd[1]: Successfully made /usr/ read-only. Jan 28 00:47:12.059863 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 00:47:12.059869 systemd[1]: Detected virtualization microsoft. Jan 28 00:47:12.059874 systemd[1]: Detected architecture arm64. Jan 28 00:47:12.059879 systemd[1]: Running in initrd. Jan 28 00:47:12.059884 systemd[1]: No hostname configured, using default hostname. Jan 28 00:47:12.059889 systemd[1]: Hostname set to . Jan 28 00:47:12.059894 systemd[1]: Initializing machine ID from random generator. Jan 28 00:47:12.059900 systemd[1]: Queued start job for default target initrd.target. Jan 28 00:47:12.059905 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:47:12.059911 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:47:12.059916 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 28 00:47:12.059922 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 00:47:12.059927 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 28 00:47:12.059933 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 28 00:47:12.059940 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 28 00:47:12.059945 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 28 00:47:12.059950 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:47:12.059955 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:47:12.059961 systemd[1]: Reached target paths.target - Path Units. Jan 28 00:47:12.063660 systemd[1]: Reached target slices.target - Slice Units. Jan 28 00:47:12.063667 systemd[1]: Reached target swap.target - Swaps. Jan 28 00:47:12.063673 systemd[1]: Reached target timers.target - Timer Units. Jan 28 00:47:12.063683 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 00:47:12.063688 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 00:47:12.063693 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 00:47:12.063699 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 28 00:47:12.063704 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:47:12.063709 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 00:47:12.063714 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 28 00:47:12.063719 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 00:47:12.063725 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 28 00:47:12.063731 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 00:47:12.063736 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 28 00:47:12.063742 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 28 00:47:12.063747 systemd[1]: Starting systemd-fsck-usr.service... Jan 28 00:47:12.063752 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 00:47:12.063757 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 00:47:12.063790 systemd-journald[225]: Collecting audit messages is disabled. Jan 28 00:47:12.063804 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:47:12.063811 systemd-journald[225]: Journal started Jan 28 00:47:12.063826 systemd-journald[225]: Runtime Journal (/run/log/journal/13e2d3e0040b44b6a7827a377d09aff8) is 8M, max 78.3M, 70.3M free. Jan 28 00:47:12.064140 systemd-modules-load[227]: Inserted module 'overlay' Jan 28 00:47:12.074200 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 00:47:12.086162 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 28 00:47:12.100280 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 28 00:47:12.100299 kernel: Bridge firewalling registered Jan 28 00:47:12.096112 systemd-modules-load[227]: Inserted module 'br_netfilter' Jan 28 00:47:12.100174 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:47:12.105669 systemd[1]: Finished systemd-fsck-usr.service. Jan 28 00:47:12.109517 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 00:47:12.127783 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:47:12.138895 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 00:47:12.160584 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 00:47:12.165070 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 00:47:12.186657 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 00:47:12.199520 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:47:12.207195 systemd-tmpfiles[254]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 28 00:47:12.212526 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 00:47:12.220461 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 00:47:12.231063 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:47:12.243067 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 28 00:47:12.266848 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 00:47:12.273563 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 28 00:47:12.296560 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f94df361d6ccbf6d3bccdda215ef8c4de18f0915f7435d65b20126d9bf4aaef1 Jan 28 00:47:12.325319 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:47:12.333542 systemd-resolved[264]: Positive Trust Anchors: Jan 28 00:47:12.333550 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 00:47:12.333568 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 00:47:12.335155 systemd-resolved[264]: Defaulting to hostname 'linux'. Jan 28 00:47:12.336501 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 00:47:12.341830 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 00:47:12.430980 kernel: SCSI subsystem initialized Jan 28 00:47:12.435982 kernel: Loading iSCSI transport class v2.0-870. Jan 28 00:47:12.442978 kernel: iscsi: registered transport (tcp) Jan 28 00:47:12.456416 kernel: iscsi: registered transport (qla4xxx) Jan 28 00:47:12.456470 kernel: QLogic iSCSI HBA Driver Jan 28 00:47:12.469507 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 00:47:12.485231 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 00:47:12.497034 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 00:47:12.539269 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 28 00:47:12.546090 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 28 00:47:12.609984 kernel: raid6: neonx8 gen() 18555 MB/s Jan 28 00:47:12.627971 kernel: raid6: neonx4 gen() 18546 MB/s Jan 28 00:47:12.646972 kernel: raid6: neonx2 gen() 17049 MB/s Jan 28 00:47:12.665972 kernel: raid6: neonx1 gen() 15004 MB/s Jan 28 00:47:12.685973 kernel: raid6: int64x8 gen() 10530 MB/s Jan 28 00:47:12.704970 kernel: raid6: int64x4 gen() 10605 MB/s Jan 28 00:47:12.723970 kernel: raid6: int64x2 gen() 8992 MB/s Jan 28 00:47:12.746098 kernel: raid6: int64x1 gen() 7018 MB/s Jan 28 00:47:12.746177 kernel: raid6: using algorithm neonx8 gen() 18555 MB/s Jan 28 00:47:12.768268 kernel: raid6: .... 
xor() 14906 MB/s, rmw enabled Jan 28 00:47:12.768341 kernel: raid6: using neon recovery algorithm Jan 28 00:47:12.776049 kernel: xor: measuring software checksum speed Jan 28 00:47:12.776068 kernel: 8regs : 28653 MB/sec Jan 28 00:47:12.778524 kernel: 32regs : 28824 MB/sec Jan 28 00:47:12.780996 kernel: arm64_neon : 37566 MB/sec Jan 28 00:47:12.784051 kernel: xor: using function: arm64_neon (37566 MB/sec) Jan 28 00:47:12.821981 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 28 00:47:12.827329 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 28 00:47:12.836747 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:47:12.869374 systemd-udevd[475]: Using default interface naming scheme 'v255'. Jan 28 00:47:12.872317 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:47:12.886375 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 28 00:47:12.908656 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation Jan 28 00:47:12.926696 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 00:47:12.932719 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 00:47:12.974738 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:47:12.987092 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 28 00:47:13.047980 kernel: hv_vmbus: Vmbus version:5.3 Jan 28 00:47:13.049446 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 00:47:13.049580 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:47:13.069042 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:47:13.092042 kernel: hv_vmbus: registering driver hid_hyperv Jan 28 00:47:13.092061 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 28 00:47:13.092068 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 28 00:47:13.092075 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 28 00:47:13.092083 kernel: PTP clock support registered Jan 28 00:47:13.079905 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:47:13.097039 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 28 00:47:13.117241 kernel: hv_utils: Registering HyperV Utility Driver Jan 28 00:47:13.117277 kernel: hv_vmbus: registering driver hv_utils Jan 28 00:47:13.117292 kernel: hv_vmbus: registering driver hv_storvsc Jan 28 00:47:13.117299 kernel: hv_vmbus: registering driver hv_netvsc Jan 28 00:47:13.117471 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 28 00:47:12.928156 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 28 00:47:12.929208 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 28 00:47:12.929220 kernel: hv_utils: Heartbeat IC version 3.0 Jan 28 00:47:12.929225 kernel: hv_utils: Shutdown IC version 3.2 Jan 28 00:47:12.929230 kernel: hv_utils: TimeSync IC version 4.0 Jan 28 00:47:12.929236 kernel: scsi host0: storvsc_host_t Jan 28 00:47:12.929341 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 28 00:47:12.929357 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 28 00:47:12.929417 kernel: scsi host1: storvsc_host_t Jan 28 00:47:12.929482 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 28 00:47:12.929493 systemd-journald[225]: Time jumped backwards, rotating. Jan 28 00:47:13.117546 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:47:12.896410 systemd-resolved[264]: Clock change detected. Flushing caches. Jan 28 00:47:12.920153 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:47:12.969029 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 28 00:47:12.969184 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 28 00:47:12.969253 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 28 00:47:12.975400 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 28 00:47:12.975534 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 28 00:47:12.981548 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#141 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 28 00:47:12.982474 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 28 00:47:12.996541 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#148 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 28 00:47:12.996712 kernel: hv_netvsc 7ced8db6-9264-7ced-8db6-92647ced8db6 eth0: VF slot 1 added Jan 28 00:47:13.010441 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 28 00:47:13.010472 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 28 00:47:13.013714 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 28 00:47:13.016949 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 28 00:47:13.021025 kernel: hv_vmbus: registering driver hv_pci Jan 28 00:47:13.021048 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 28 00:47:13.025996 kernel: hv_pci 81096aa5-6e52-47a3-bfbf-6c3a162f6c98: PCI VMBus probing: Using version 0x10004 Jan 28 00:47:13.039550 kernel: hv_pci 81096aa5-6e52-47a3-bfbf-6c3a162f6c98: PCI host bridge to bus 6e52:00 Jan 28 00:47:13.039695 kernel: pci_bus 6e52:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 28 00:47:13.039772 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#185 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 28 00:47:13.049753 kernel: pci_bus 6e52:00: No busn resource found for root bus, will use [bus 00-ff] Jan 28 00:47:13.056141 kernel: pci 6e52:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Jan 28 00:47:13.064075 kernel: pci 6e52:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 28 00:47:13.070031 kernel: pci 6e52:00:02.0: enabling Extended Tags Jan 28 00:47:13.090208 kernel: pci 6e52:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6e52:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Jan 28 00:47:13.090252 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#236 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 28 00:47:13.102051 kernel: pci_bus 6e52:00: busn_res: [bus 00-ff] end is updated to 00 Jan 28 00:47:13.102192 kernel: pci 6e52:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Jan 28 00:47:13.158862 kernel: mlx5_core 6e52:00:02.0: enabling device (0000 -> 0002) Jan 28 00:47:13.166440 kernel: mlx5_core 6e52:00:02.0: PTM is not supported by PCIe Jan 28 00:47:13.166573 kernel: mlx5_core 6e52:00:02.0: firmware version: 16.30.5026 Jan 28 00:47:13.343555 kernel: hv_netvsc 7ced8db6-9264-7ced-8db6-92647ced8db6 eth0: VF registering: eth1 Jan 28 00:47:13.343759 kernel: mlx5_core 6e52:00:02.0 eth1: joined to eth0 Jan 28 00:47:13.350030 kernel: mlx5_core 6e52:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 28 00:47:13.358067 kernel: mlx5_core 6e52:00:02.0 enP28242s1: renamed from eth1 Jan 28 00:47:13.524617 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 28 00:47:13.645501 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 28 00:47:13.663844 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 28 00:47:13.672039 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 28 00:47:13.687470 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 28 00:47:13.693363 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 28 00:47:13.712045 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 28 00:47:13.717516 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 28 00:47:13.726522 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:47:13.751384 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#145 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 28 00:47:13.743004 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 00:47:13.756103 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 28 00:47:13.768039 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 28 00:47:13.783078 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 28 00:47:14.779788 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 28 00:47:14.805069 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 28 00:47:14.805114 disk-uuid[658]: The operation has completed successfully. Jan 28 00:47:14.878953 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 28 00:47:14.879057 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 28 00:47:14.902188 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 28 00:47:14.923348 sh[824]: Success Jan 28 00:47:14.959792 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 28 00:47:14.959839 kernel: device-mapper: uevent: version 1.0.3 Jan 28 00:47:14.965050 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 28 00:47:14.973030 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 28 00:47:15.263925 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 28 00:47:15.279179 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 28 00:47:15.286772 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 28 00:47:15.313603 kernel: BTRFS: device fsid a5f8185f-aa1a-4e36-bd3e-ad4fa971117f devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (842) Jan 28 00:47:15.313644 kernel: BTRFS info (device dm-0): first mount of filesystem a5f8185f-aa1a-4e36-bd3e-ad4fa971117f Jan 28 00:47:15.318026 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 28 00:47:15.843689 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 28 00:47:15.843769 kernel: BTRFS info (device dm-0): enabling free space tree Jan 28 00:47:15.878193 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 28 00:47:15.882124 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 28 00:47:15.889674 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 28 00:47:15.890384 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 28 00:47:15.910970 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 28 00:47:15.936032 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (865) Jan 28 00:47:15.945904 kernel: BTRFS info (device sda6): first mount of filesystem cdd8ade3-84ac-4b21-9ebd-f498f4c3bfc9 Jan 28 00:47:15.945950 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 28 00:47:15.971634 kernel: BTRFS info (device sda6): turning on async discard Jan 28 00:47:15.971694 kernel: BTRFS info (device sda6): enabling free space tree Jan 28 00:47:15.982210 kernel: BTRFS info (device sda6): last unmount of filesystem cdd8ade3-84ac-4b21-9ebd-f498f4c3bfc9 Jan 28 00:47:15.982209 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 28 00:47:15.987079 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 28 00:47:16.036550 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 00:47:16.047449 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 00:47:16.077991 systemd-networkd[1011]: lo: Link UP Jan 28 00:47:16.078002 systemd-networkd[1011]: lo: Gained carrier Jan 28 00:47:16.078736 systemd-networkd[1011]: Enumeration completed Jan 28 00:47:16.080956 systemd-networkd[1011]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:47:16.080960 systemd-networkd[1011]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 00:47:16.080974 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 00:47:16.088335 systemd[1]: Reached target network.target - Network. Jan 28 00:47:16.156036 kernel: mlx5_core 6e52:00:02.0 enP28242s1: Link up Jan 28 00:47:16.187034 kernel: hv_netvsc 7ced8db6-9264-7ced-8db6-92647ced8db6 eth0: Data path switched to VF: enP28242s1 Jan 28 00:47:16.187457 systemd-networkd[1011]: enP28242s1: Link UP Jan 28 00:47:16.187520 systemd-networkd[1011]: eth0: Link UP Jan 28 00:47:16.187590 systemd-networkd[1011]: eth0: Gained carrier Jan 28 00:47:16.187603 systemd-networkd[1011]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:47:16.196385 systemd-networkd[1011]: enP28242s1: Gained carrier Jan 28 00:47:16.211054 systemd-networkd[1011]: eth0: DHCPv4 address 10.200.20.26/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 28 00:47:17.585290 systemd-networkd[1011]: eth0: Gained IPv6LL Jan 28 00:47:17.635956 ignition[940]: Ignition 2.22.0 Jan 28 00:47:17.635968 ignition[940]: Stage: fetch-offline Jan 28 00:47:17.638454 ignition[940]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:47:17.645102 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 00:47:17.638462 ignition[940]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 00:47:17.651257 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 28 00:47:17.638538 ignition[940]: parsed url from cmdline: "" Jan 28 00:47:17.638541 ignition[940]: no config URL provided Jan 28 00:47:17.638544 ignition[940]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 00:47:17.638549 ignition[940]: no config at "/usr/lib/ignition/user.ign" Jan 28 00:47:17.638552 ignition[940]: failed to fetch config: resource requires networking Jan 28 00:47:17.638687 ignition[940]: Ignition finished successfully Jan 28 00:47:17.681032 ignition[1021]: Ignition 2.22.0 Jan 28 00:47:17.681038 ignition[1021]: Stage: fetch Jan 28 00:47:17.681280 ignition[1021]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:47:17.681287 ignition[1021]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 00:47:17.681372 ignition[1021]: parsed url from cmdline: "" Jan 28 00:47:17.681375 ignition[1021]: no config URL provided Jan 28 00:47:17.681380 ignition[1021]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 00:47:17.681385 ignition[1021]: no config at "/usr/lib/ignition/user.ign" Jan 28 00:47:17.681400 ignition[1021]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 28 00:47:17.789026 ignition[1021]: GET result: OK Jan 28 00:47:17.789080 ignition[1021]: config has been read from IMDS userdata Jan 28 00:47:17.789107 ignition[1021]: parsing config with SHA512: 1ee0a6031cf85daace1064b5de9cc1d8b978edf06e34957dfd0c74da3f90522898f96e8fa4b027991f5e4b253273ac9c6ed4aa41c2bbf5064c41698422d3ccc6 Jan 28 00:47:17.791967 unknown[1021]: fetched base config from "system" Jan 28 00:47:17.792264 ignition[1021]: fetch: fetch complete Jan 28 00:47:17.791972 unknown[1021]: fetched base config from "system" Jan 28 00:47:17.792267 ignition[1021]: fetch: fetch passed Jan 28 00:47:17.791975 unknown[1021]: fetched user config from "azure" Jan 28 00:47:17.792302 ignition[1021]: Ignition finished successfully Jan 28 00:47:17.796970 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 28 00:47:17.805413 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 28 00:47:17.841541 ignition[1027]: Ignition 2.22.0 Jan 28 00:47:17.841552 ignition[1027]: Stage: kargs Jan 28 00:47:17.841723 ignition[1027]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:47:17.850702 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 28 00:47:17.841731 ignition[1027]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 00:47:17.844669 ignition[1027]: kargs: kargs passed Jan 28 00:47:17.861148 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 28 00:47:17.844721 ignition[1027]: Ignition finished successfully Jan 28 00:47:17.884918 ignition[1033]: Ignition 2.22.0 Jan 28 00:47:17.884934 ignition[1033]: Stage: disks Jan 28 00:47:17.885143 ignition[1033]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:47:17.890512 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 28 00:47:17.885150 ignition[1033]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 00:47:17.894676 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 28 00:47:17.885678 ignition[1033]: disks: disks passed Jan 28 00:47:17.902797 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 00:47:17.885717 ignition[1033]: Ignition finished successfully Jan 28 00:47:17.911064 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 28 00:47:17.919248 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 00:47:17.927384 systemd[1]: Reached target basic.target - Basic System. Jan 28 00:47:17.934681 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 28 00:47:18.024635 systemd-fsck[1042]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jan 28 00:47:18.032593 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 28 00:47:18.038404 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 28 00:47:18.302038 kernel: EXT4-fs (sda9): mounted filesystem e7dac9ee-22c5-4146-a097-e1ea6c8c1663 r/w with ordered data mode. Quota mode: none. Jan 28 00:47:18.303067 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 28 00:47:18.306705 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 28 00:47:18.329776 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 00:47:18.343495 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 28 00:47:18.364411 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1056) Jan 28 00:47:18.364453 kernel: BTRFS info (device sda6): first mount of filesystem cdd8ade3-84ac-4b21-9ebd-f498f4c3bfc9 Jan 28 00:47:18.368648 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 28 00:47:18.369614 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 28 00:47:18.382486 kernel: BTRFS info (device sda6): turning on async discard Jan 28 00:47:18.382504 kernel: BTRFS info (device sda6): enabling free space tree Jan 28 00:47:18.387534 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 28 00:47:18.387566 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 00:47:18.402280 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 00:47:18.412729 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 28 00:47:18.417641 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 28 00:47:19.374229 coreos-metadata[1062]: Jan 28 00:47:19.374 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 28 00:47:19.380675 coreos-metadata[1062]: Jan 28 00:47:19.380 INFO Fetch successful Jan 28 00:47:19.380675 coreos-metadata[1062]: Jan 28 00:47:19.380 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 28 00:47:19.393422 coreos-metadata[1062]: Jan 28 00:47:19.393 INFO Fetch successful Jan 28 00:47:19.397874 coreos-metadata[1062]: Jan 28 00:47:19.393 INFO wrote hostname ci-4459.2.3-n-ec09cdb4df to /sysroot/etc/hostname Jan 28 00:47:19.398508 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 28 00:47:19.768734 initrd-setup-root[1087]: cut: /sysroot/etc/passwd: No such file or directory Jan 28 00:47:19.811488 initrd-setup-root[1094]: cut: /sysroot/etc/group: No such file or directory Jan 28 00:47:19.830759 initrd-setup-root[1101]: cut: /sysroot/etc/shadow: No such file or directory Jan 28 00:47:19.836663 initrd-setup-root[1108]: cut: /sysroot/etc/gshadow: No such file or directory Jan 28 00:47:20.748587 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 28 00:47:20.758664 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jan 28 00:47:20.773216 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 28 00:47:20.782906 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 28 00:47:20.795046 kernel: BTRFS info (device sda6): last unmount of filesystem cdd8ade3-84ac-4b21-9ebd-f498f4c3bfc9 Jan 28 00:47:20.814951 ignition[1177]: INFO : Ignition 2.22.0 Jan 28 00:47:20.820341 ignition[1177]: INFO : Stage: mount Jan 28 00:47:20.820341 ignition[1177]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:47:20.820341 ignition[1177]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 00:47:20.820341 ignition[1177]: INFO : mount: mount passed Jan 28 00:47:20.820341 ignition[1177]: INFO : Ignition finished successfully Jan 28 00:47:20.821074 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 28 00:47:20.831169 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 28 00:47:20.840966 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 00:47:20.860128 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 00:47:20.882030 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1188) Jan 28 00:47:20.882062 kernel: BTRFS info (device sda6): first mount of filesystem cdd8ade3-84ac-4b21-9ebd-f498f4c3bfc9 Jan 28 00:47:20.891104 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 28 00:47:20.900516 kernel: BTRFS info (device sda6): turning on async discard Jan 28 00:47:20.900548 kernel: BTRFS info (device sda6): enabling free space tree Jan 28 00:47:20.902461 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 00:47:20.933543 ignition[1206]: INFO : Ignition 2.22.0 Jan 28 00:47:20.933543 ignition[1206]: INFO : Stage: files Jan 28 00:47:20.939654 ignition[1206]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:47:20.939654 ignition[1206]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 00:47:20.939654 ignition[1206]: DEBUG : files: compiled without relabeling support, skipping Jan 28 00:47:20.952896 ignition[1206]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 00:47:20.952896 ignition[1206]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 00:47:20.997786 ignition[1206]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 00:47:21.003592 ignition[1206]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 00:47:21.003592 ignition[1206]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 00:47:20.998179 unknown[1206]: wrote ssh authorized keys file for user: core Jan 28 00:47:21.141541 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 28 00:47:21.141541 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 28 00:47:21.183303 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 28 00:47:21.305106 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 28 00:47:21.305106 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 28 00:47:21.305106 ignition[1206]: INFO : 
files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 28 00:47:21.563324 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 28 00:47:21.695666 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 28 00:47:21.703239 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 28 00:47:21.703239 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 00:47:21.703239 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 00:47:21.703239 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 00:47:21.703239 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 00:47:21.703239 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 00:47:21.703239 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 00:47:21.703239 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 00:47:21.757701 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 00:47:21.757701 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 00:47:21.757701 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 28 00:47:21.757701 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 28 00:47:21.757701 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 28 00:47:21.757701 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Jan 28 00:47:22.272255 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 28 00:47:22.520176 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 28 00:47:22.520176 ignition[1206]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 28 00:47:22.760564 ignition[1206]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 00:47:22.769108 ignition[1206]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 00:47:22.769108 
ignition[1206]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 28 00:47:22.769108 ignition[1206]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 28 00:47:22.769108 ignition[1206]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 00:47:22.769108 ignition[1206]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 00:47:22.769108 ignition[1206]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 00:47:22.769108 ignition[1206]: INFO : files: files passed Jan 28 00:47:22.769108 ignition[1206]: INFO : Ignition finished successfully Jan 28 00:47:22.769863 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 28 00:47:22.781533 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 28 00:47:22.803634 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 00:47:22.852869 initrd-setup-root-after-ignition[1234]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:47:22.852869 initrd-setup-root-after-ignition[1234]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:47:22.823049 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 28 00:47:22.880949 initrd-setup-root-after-ignition[1238]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:47:22.823180 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 28 00:47:22.847995 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 00:47:22.858004 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 28 00:47:22.869196 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 28 00:47:22.908233 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 00:47:22.908348 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 28 00:47:22.917574 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 00:47:22.925210 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 00:47:22.934289 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 00:47:22.934987 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 00:47:22.968958 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 00:47:22.975009 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 00:47:22.995932 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 00:47:23.000589 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:47:23.009949 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 00:47:23.017971 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 00:47:23.018074 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 00:47:23.029548 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 00:47:23.033592 systemd[1]: Stopped target basic.target - Basic System. 
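
Aside (not part of the boot log): the entries above all share one journald export shape, a timestamp, an identifier with an optional PID, and a free-form message. A minimal Python sketch for splitting lines of this form into structured records might look like the following; the field names are my own and the pattern is only an illustration.

    import re

    # Matches e.g. "Jan 28 00:47:22.769863 systemd[1]: Finished ignition-files.service - Ignition (files)."
    # and kernel lines without a PID, e.g. "Jan 28 00:47:20.795046 kernel: BTRFS info ...".
    LINE = re.compile(
        r"^(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) "
        r"(?P<ident>[^\s:\[]+)(?:\[(?P<pid>\d+)\])?: "
        r"(?P<msg>.*)$"
    )

    def parse(line: str):
        """Return a dict with ts/ident/pid/msg, or None if the line does not match."""
        m = LINE.match(line)
        return m.groupdict() if m else None

    if __name__ == "__main__":
        sample = "Jan 28 00:47:22.769863 systemd[1]: Finished ignition-files.service - Ignition (files)."
        print(parse(sample))

Fed the lines above, this yields records such as ts="Jan 28 00:47:22.769863", ident="systemd", pid="1", which is enough to, for example, measure how long the Ignition files stage ran.
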
Jan 28 00:47:23.041899 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 00:47:23.049837 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 00:47:23.058233 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 00:47:23.066833 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 28 00:47:23.075627 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 00:47:23.083894 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 00:47:23.093186 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 00:47:23.101247 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 00:47:23.110001 systemd[1]: Stopped target swap.target - Swaps. Jan 28 00:47:23.117410 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 00:47:23.117515 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 00:47:23.128274 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:47:23.136842 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:47:23.145525 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 00:47:23.145580 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:47:23.154522 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 00:47:23.154615 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 00:47:23.167217 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 00:47:23.167298 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 00:47:23.172437 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 00:47:23.172507 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 00:47:23.181807 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 28 00:47:23.243071 ignition[1259]: INFO : Ignition 2.22.0 Jan 28 00:47:23.243071 ignition[1259]: INFO : Stage: umount Jan 28 00:47:23.243071 ignition[1259]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:47:23.243071 ignition[1259]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 00:47:23.243071 ignition[1259]: INFO : umount: umount passed Jan 28 00:47:23.243071 ignition[1259]: INFO : Ignition finished successfully Jan 28 00:47:23.181873 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 28 00:47:23.191243 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 00:47:23.205131 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 28 00:47:23.205239 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:47:23.220038 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 00:47:23.229253 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 00:47:23.229358 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:47:23.238958 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 00:47:23.239095 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 00:47:23.247961 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jan 28 00:47:23.249093 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 00:47:23.254469 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 00:47:23.254538 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 00:47:23.263304 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 00:47:23.263367 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 00:47:23.271799 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 00:47:23.271843 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 00:47:23.276070 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 28 00:47:23.276100 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 28 00:47:23.288744 systemd[1]: Stopped target network.target - Network. Jan 28 00:47:23.297312 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 00:47:23.297378 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 00:47:23.312379 systemd[1]: Stopped target paths.target - Path Units. Jan 28 00:47:23.319551 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 00:47:23.324032 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:47:23.329278 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 00:47:23.336372 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 00:47:23.345295 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 00:47:23.345345 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 00:47:23.353674 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 00:47:23.353701 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 00:47:23.357679 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 00:47:23.357725 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 00:47:23.365274 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 00:47:23.365301 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 00:47:23.373903 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 00:47:23.381318 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 28 00:47:23.401170 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 00:47:23.401634 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 00:47:23.401724 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 00:47:23.412757 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 28 00:47:23.412904 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 00:47:23.412993 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 00:47:23.610058 kernel: hv_netvsc 7ced8db6-9264-7ced-8db6-92647ced8db6 eth0: Data path switched from VF: enP28242s1 Jan 28 00:47:23.424275 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 28 00:47:23.425069 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 28 00:47:23.432300 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 00:47:23.432338 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Jan 28 00:47:23.441520 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 00:47:23.454555 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 00:47:23.454608 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 00:47:23.464227 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 00:47:23.464272 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:47:23.475487 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 00:47:23.475521 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 00:47:23.480131 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 00:47:23.480164 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:47:23.493871 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:47:23.502399 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 28 00:47:23.502448 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 28 00:47:23.525971 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 00:47:23.526111 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:47:23.537461 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 00:47:23.537489 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 00:47:23.546145 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 00:47:23.546168 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:47:23.555456 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 00:47:23.555500 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 00:47:23.567499 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 00:47:23.567541 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 00:47:23.579537 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 00:47:23.579572 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 00:47:23.599917 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 00:47:23.611055 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 28 00:47:23.611109 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 00:47:23.624240 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 00:47:23.624276 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:47:23.636855 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 00:47:23.636914 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:47:23.646886 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 28 00:47:23.646924 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 28 00:47:23.830357 systemd-journald[225]: Received SIGTERM from PID 1 (systemd). Jan 28 00:47:23.646950 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Jan 28 00:47:23.647180 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 00:47:23.647296 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 00:47:23.690638 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 00:47:23.690750 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 00:47:23.701249 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 00:47:23.701324 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 00:47:23.709914 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 00:47:23.719224 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 00:47:23.719295 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 00:47:23.727646 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 00:47:23.754070 systemd[1]: Switching root. Jan 28 00:47:23.877731 systemd-journald[225]: Journal stopped Jan 28 00:47:29.305826 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 00:47:29.305844 kernel: SELinux: policy capability open_perms=1 Jan 28 00:47:29.305852 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 00:47:29.305857 kernel: SELinux: policy capability always_check_network=0 Jan 28 00:47:29.305863 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 00:47:29.305869 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 00:47:29.305875 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 00:47:29.305880 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 00:47:29.305885 kernel: SELinux: policy capability userspace_initial_context=0 Jan 28 00:47:29.305892 systemd[1]: Successfully loaded SELinux policy in 232.767ms. Jan 28 00:47:29.305898 kernel: audit: type=1403 audit(1769561245.327:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 28 00:47:29.305905 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.295ms. Jan 28 00:47:29.305914 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 00:47:29.305920 systemd[1]: Detected virtualization microsoft. Jan 28 00:47:29.305926 systemd[1]: Detected architecture arm64. Jan 28 00:47:29.305932 systemd[1]: Detected first boot. Jan 28 00:47:29.305939 systemd[1]: Hostname set to . Jan 28 00:47:29.305945 systemd[1]: Initializing machine ID from random generator. Jan 28 00:47:29.305951 zram_generator::config[1301]: No configuration found. Jan 28 00:47:29.305957 kernel: NET: Registered PF_VSOCK protocol family Jan 28 00:47:29.305963 systemd[1]: Populated /etc with preset unit settings. Jan 28 00:47:29.305969 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 28 00:47:29.305975 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 28 00:47:29.305982 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 28 00:47:29.305987 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 28 00:47:29.305993 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
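
Aside (not part of the boot log): the policy-capability lines above are printed while the kernel loads the SELinux policy during the switch to the real root. The resulting mode can be read back at runtime from selinuxfs; a small sketch, assuming selinuxfs is mounted at the usual /sys/fs/selinux location:

    from pathlib import Path

    def selinux_mode(selinuxfs: str = "/sys/fs/selinux") -> str:
        """Report 'enforcing', 'permissive', or 'disabled' based on selinuxfs."""
        enforce = Path(selinuxfs) / "enforce"
        if not enforce.exists():
            return "disabled"          # selinuxfs not mounted: SELinux not active
        # /sys/fs/selinux/enforce contains "1" when enforcing, "0" when permissive
        return "enforcing" if enforce.read_text().strip() == "1" else "permissive"

    print(selinux_mode())
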
Jan 28 00:47:29.306000 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 28 00:47:29.306006 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 00:47:29.306025 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 00:47:29.306032 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 00:47:29.306039 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 00:47:29.306045 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 00:47:29.306051 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 00:47:29.306058 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:47:29.306064 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:47:29.306070 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 28 00:47:29.306076 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 00:47:29.306082 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 28 00:47:29.306089 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 00:47:29.306095 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 28 00:47:29.306103 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:47:29.306109 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:47:29.306115 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 28 00:47:29.306121 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 28 00:47:29.306127 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 28 00:47:29.306133 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 00:47:29.306140 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:47:29.306146 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 00:47:29.306152 systemd[1]: Reached target slices.target - Slice Units. Jan 28 00:47:29.306158 systemd[1]: Reached target swap.target - Swaps. Jan 28 00:47:29.306164 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 00:47:29.306170 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 00:47:29.306178 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 28 00:47:29.306185 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:47:29.306192 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 00:47:29.306198 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:47:29.306204 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 00:47:29.306210 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 28 00:47:29.306216 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 00:47:29.306223 systemd[1]: Mounting media.mount - External Media Directory... 
Jan 28 00:47:29.306229 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 00:47:29.306235 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 00:47:29.306241 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 00:47:29.306248 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 28 00:47:29.306254 systemd[1]: Reached target machines.target - Containers. Jan 28 00:47:29.306260 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 00:47:29.306266 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:47:29.306273 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 00:47:29.306279 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 00:47:29.306286 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:47:29.306292 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 00:47:29.306298 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:47:29.306304 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 28 00:47:29.306310 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:47:29.306316 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 00:47:29.306323 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 28 00:47:29.306330 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 28 00:47:29.306337 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 28 00:47:29.306343 systemd[1]: Stopped systemd-fsck-usr.service. Jan 28 00:47:29.306349 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 00:47:29.306355 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 00:47:29.306361 kernel: fuse: init (API version 7.41) Jan 28 00:47:29.306367 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 00:47:29.306373 kernel: ACPI: bus type drm_connector registered Jan 28 00:47:29.306379 kernel: loop: module loaded Jan 28 00:47:29.306399 systemd-journald[1384]: Collecting audit messages is disabled. Jan 28 00:47:29.306414 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 00:47:29.306422 systemd-journald[1384]: Journal started Jan 28 00:47:29.306437 systemd-journald[1384]: Runtime Journal (/run/log/journal/a5bce8cca8d64f0d90b0116f07aea983) is 8M, max 78.3M, 70.3M free. Jan 28 00:47:28.627235 systemd[1]: Queued start job for default target multi-user.target. Jan 28 00:47:28.638487 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 28 00:47:28.638870 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 28 00:47:28.639137 systemd[1]: systemd-journald.service: Consumed 2.345s CPU time. 
Jan 28 00:47:29.326589 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 00:47:29.341022 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 28 00:47:29.360243 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 00:47:29.366756 systemd[1]: verity-setup.service: Deactivated successfully. Jan 28 00:47:29.366809 systemd[1]: Stopped verity-setup.service. Jan 28 00:47:29.379877 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 00:47:29.380568 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 00:47:29.385005 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 00:47:29.389720 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 00:47:29.394065 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 00:47:29.398380 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 00:47:29.402904 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 28 00:47:29.409038 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 00:47:29.413949 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:47:29.419059 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 28 00:47:29.419207 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 00:47:29.424564 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:47:29.424696 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:47:29.429480 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 00:47:29.429628 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 00:47:29.434149 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:47:29.434283 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:47:29.439411 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 28 00:47:29.439537 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 00:47:29.444258 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 00:47:29.444397 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:47:29.451041 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 00:47:29.456310 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 00:47:29.461558 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 00:47:29.466923 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 28 00:47:29.481291 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 00:47:29.489201 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 28 00:47:29.497949 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 00:47:29.503028 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 00:47:29.503057 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 00:47:29.507981 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Jan 28 00:47:29.513746 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 00:47:29.517852 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:47:29.523704 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 00:47:29.528751 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 28 00:47:29.533473 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 00:47:29.535138 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 00:47:29.539784 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 00:47:29.542905 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 00:47:29.550679 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 00:47:29.558181 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 00:47:29.567173 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:47:29.575668 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 00:47:29.583027 systemd-journald[1384]: Time spent on flushing to /var/log/journal/a5bce8cca8d64f0d90b0116f07aea983 is 62.136ms for 938 entries. Jan 28 00:47:29.583027 systemd-journald[1384]: System Journal (/var/log/journal/a5bce8cca8d64f0d90b0116f07aea983) is 11.8M, max 2.6G, 2.6G free. Jan 28 00:47:29.758685 systemd-journald[1384]: Received client request to flush runtime journal. Jan 28 00:47:29.758736 systemd-journald[1384]: /var/log/journal/a5bce8cca8d64f0d90b0116f07aea983/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jan 28 00:47:29.758760 systemd-journald[1384]: Rotating system journal. Jan 28 00:47:29.758776 kernel: loop0: detected capacity change from 0 to 119840 Jan 28 00:47:29.582688 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 00:47:29.593133 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 00:47:29.602684 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 00:47:29.621211 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 28 00:47:29.672092 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:47:29.761067 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 00:47:29.761710 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 28 00:47:29.767523 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 28 00:47:29.810052 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 00:47:29.815505 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 00:47:29.869061 systemd-tmpfiles[1456]: ACLs are not supported, ignoring. Jan 28 00:47:29.869536 systemd-tmpfiles[1456]: ACLs are not supported, ignoring. Jan 28 00:47:29.873381 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
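
Aside: for scale, the flush statistics reported above work out to roughly 62.136 ms / 938 entries ≈ 0.066 ms, i.e. about 66 µs per journal entry written to persistent storage.
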
Jan 28 00:47:30.114045 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 00:47:30.145035 kernel: loop1: detected capacity change from 0 to 200800 Jan 28 00:47:30.189126 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 00:47:30.195669 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:47:30.210033 kernel: loop2: detected capacity change from 0 to 27936 Jan 28 00:47:30.222008 systemd-udevd[1464]: Using default interface naming scheme 'v255'. Jan 28 00:47:30.509436 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:47:30.520880 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 00:47:30.571431 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 00:47:30.585869 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 28 00:47:30.643053 kernel: loop3: detected capacity change from 0 to 100632 Jan 28 00:47:30.655863 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 28 00:47:30.668066 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 00:47:30.686075 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#208 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 28 00:47:30.700231 kernel: hv_vmbus: registering driver hv_balloon Jan 28 00:47:30.700323 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 28 00:47:30.709294 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 28 00:47:30.763925 kernel: hv_vmbus: registering driver hyperv_fb Jan 28 00:47:30.764024 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 28 00:47:30.772107 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 28 00:47:30.778828 kernel: Console: switching to colour dummy device 80x25 Jan 28 00:47:30.784903 kernel: Console: switching to colour frame buffer device 128x48 Jan 28 00:47:30.786881 systemd-networkd[1485]: lo: Link UP Jan 28 00:47:30.787385 systemd-networkd[1485]: lo: Gained carrier Jan 28 00:47:30.788605 systemd-networkd[1485]: Enumeration completed Jan 28 00:47:30.788772 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 00:47:30.789044 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:47:30.789105 systemd-networkd[1485]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 00:47:30.796977 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 28 00:47:30.806124 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 00:47:30.817431 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:47:30.831858 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 00:47:30.834046 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:47:30.839935 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 28 00:47:30.844194 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 28 00:47:30.852030 kernel: mlx5_core 6e52:00:02.0 enP28242s1: Link up Jan 28 00:47:30.878126 kernel: hv_netvsc 7ced8db6-9264-7ced-8db6-92647ced8db6 eth0: Data path switched to VF: enP28242s1 Jan 28 00:47:30.878464 systemd-networkd[1485]: enP28242s1: Link UP Jan 28 00:47:30.879540 systemd-networkd[1485]: eth0: Link UP Jan 28 00:47:30.879611 systemd-networkd[1485]: eth0: Gained carrier Jan 28 00:47:30.879634 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:47:30.881135 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 28 00:47:30.891379 systemd-networkd[1485]: enP28242s1: Gained carrier Jan 28 00:47:30.898169 systemd-networkd[1485]: eth0: DHCPv4 address 10.200.20.26/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 28 00:47:30.927680 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 28 00:47:30.933530 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 00:47:30.951042 kernel: MACsec IEEE 802.1AE Jan 28 00:47:31.012966 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 00:47:31.091037 kernel: loop4: detected capacity change from 0 to 119840 Jan 28 00:47:31.107088 kernel: loop5: detected capacity change from 0 to 200800 Jan 28 00:47:31.122037 kernel: loop6: detected capacity change from 0 to 27936 Jan 28 00:47:31.134031 kernel: loop7: detected capacity change from 0 to 100632 Jan 28 00:47:31.142867 (sd-merge)[1609]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 28 00:47:31.143306 (sd-merge)[1609]: Merged extensions into '/usr'. Jan 28 00:47:31.145921 systemd[1]: Reload requested from client PID 1440 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 00:47:31.146032 systemd[1]: Reloading... Jan 28 00:47:31.191042 zram_generator::config[1639]: No configuration found. Jan 28 00:47:31.361860 systemd[1]: Reloading finished in 215 ms. Jan 28 00:47:31.380426 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 00:47:31.385885 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:47:31.397974 systemd[1]: Starting ensure-sysext.service... Jan 28 00:47:31.404121 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 00:47:31.415588 systemd[1]: Reload requested from client PID 1697 ('systemctl') (unit ensure-sysext.service)... Jan 28 00:47:31.415602 systemd[1]: Reloading... Jan 28 00:47:31.446762 systemd-tmpfiles[1698]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 28 00:47:31.448316 systemd-tmpfiles[1698]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 28 00:47:31.448687 systemd-tmpfiles[1698]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 00:47:31.448932 systemd-tmpfiles[1698]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 28 00:47:31.449512 systemd-tmpfiles[1698]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 28 00:47:31.449809 systemd-tmpfiles[1698]: ACLs are not supported, ignoring. 
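
Aside (not part of the boot flow): the DHCPv4 lease logged above, 10.200.20.26/24 with gateway 10.200.20.1 handed out by Azure's 168.63.129.16, can be sanity-checked with nothing but the standard library. A throwaway sketch using the values from the systemd-networkd line:

    import ipaddress

    # Values copied from the systemd-networkd DHCP line above.
    iface = ipaddress.ip_interface("10.200.20.26/24")
    gateway = ipaddress.ip_address("10.200.20.1")
    wireserver = ipaddress.ip_address("168.63.129.16")

    print(iface.network)                 # 10.200.20.0/24
    print(gateway in iface.network)      # True: the gateway is on-link
    print(wireserver in iface.network)   # False: reached via the default route
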
Jan 28 00:47:31.449916 systemd-tmpfiles[1698]: ACLs are not supported, ignoring. Jan 28 00:47:31.476034 zram_generator::config[1735]: No configuration found. Jan 28 00:47:31.482188 systemd-tmpfiles[1698]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 00:47:31.482197 systemd-tmpfiles[1698]: Skipping /boot Jan 28 00:47:31.487760 systemd-tmpfiles[1698]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 00:47:31.487770 systemd-tmpfiles[1698]: Skipping /boot Jan 28 00:47:31.622172 systemd[1]: Reloading finished in 206 ms. Jan 28 00:47:31.637457 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:47:31.657149 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 00:47:31.671794 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 00:47:31.680207 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 00:47:31.686113 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 00:47:31.692208 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 00:47:31.699308 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:47:31.700158 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:47:31.706924 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:47:31.720208 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:47:31.724512 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:47:31.724601 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 00:47:31.725416 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:47:31.725564 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:47:31.731357 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:47:31.731487 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:47:31.743221 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:47:31.749711 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:47:31.757686 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:47:31.764362 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:47:31.764463 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 00:47:31.765207 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 00:47:31.775398 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 00:47:31.781372 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 28 00:47:31.781508 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:47:31.786880 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:47:31.787127 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:47:31.793678 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:47:31.793812 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:47:31.805703 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:47:31.806759 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:47:31.813930 systemd-resolved[1789]: Positive Trust Anchors: Jan 28 00:47:31.813943 systemd-resolved[1789]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 00:47:31.813963 systemd-resolved[1789]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 00:47:31.815440 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 00:47:31.821848 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:47:31.827338 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:47:31.832396 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:47:31.832490 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 00:47:31.832593 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 00:47:31.838669 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:47:31.838814 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:47:31.843694 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 00:47:31.843818 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 00:47:31.844736 systemd-resolved[1789]: Using system hostname 'ci-4459.2.3-n-ec09cdb4df'. Jan 28 00:47:31.848338 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 00:47:31.853507 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:47:31.853637 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:47:31.859168 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 00:47:31.859291 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:47:31.866390 systemd[1]: Finished ensure-sysext.service. Jan 28 00:47:31.872197 systemd[1]: Reached target network.target - Network. Jan 28 00:47:31.874322 augenrules[1832]: No rules Jan 28 00:47:31.875881 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 28 00:47:31.880442 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 00:47:31.880502 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 00:47:31.880713 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 00:47:31.880882 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 00:47:32.177171 systemd-networkd[1485]: eth0: Gained IPv6LL Jan 28 00:47:32.181867 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 00:47:32.188396 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 00:47:32.699154 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 00:47:32.704519 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 00:47:35.657693 ldconfig[1434]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 00:47:35.668520 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 00:47:35.674657 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 00:47:35.686227 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 00:47:35.690992 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 00:47:35.695440 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 00:47:35.700796 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 00:47:35.705949 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 00:47:35.710230 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 00:47:35.715430 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 00:47:35.720201 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 00:47:35.720228 systemd[1]: Reached target paths.target - Path Units. Jan 28 00:47:35.723684 systemd[1]: Reached target timers.target - Timer Units. Jan 28 00:47:35.743547 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 00:47:35.749250 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 00:47:35.754708 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 28 00:47:35.759838 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 28 00:47:35.765347 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 28 00:47:35.771179 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 00:47:35.775465 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 28 00:47:35.781143 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 00:47:35.785392 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 00:47:35.789164 systemd[1]: Reached target basic.target - Basic System. 
Jan 28 00:47:35.792884 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 00:47:35.792911 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 00:47:35.795116 systemd[1]: Starting chronyd.service - NTP client/server... Jan 28 00:47:35.806118 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 00:47:35.813061 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 28 00:47:35.821680 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 00:47:35.827858 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 00:47:35.835417 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 00:47:35.847305 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 00:47:35.848927 jq[1854]: false Jan 28 00:47:35.851519 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 00:47:35.854146 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 28 00:47:35.858966 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 28 00:47:35.859859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:47:35.866108 KVP[1856]: KVP starting; pid is:1856 Jan 28 00:47:35.867507 chronyd[1846]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 28 00:47:35.867868 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 00:47:35.877945 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 00:47:35.885164 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 00:47:35.892368 KVP[1856]: KVP LIC Version: 3.1 Jan 28 00:47:35.893086 kernel: hv_utils: KVP IC version 4.0 Jan 28 00:47:35.895179 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 28 00:47:35.903033 extend-filesystems[1855]: Found /dev/sda6 Jan 28 00:47:35.906210 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 00:47:35.911523 chronyd[1846]: Timezone right/UTC failed leap second check, ignoring Jan 28 00:47:35.911660 chronyd[1846]: Loaded seccomp filter (level 2) Jan 28 00:47:35.921521 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 00:47:35.926932 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 00:47:35.927496 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 00:47:35.928161 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 00:47:35.938298 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 00:47:35.944366 systemd[1]: Started chronyd.service - NTP client/server. Jan 28 00:47:35.948119 jq[1881]: true Jan 28 00:47:35.951629 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 28 00:47:35.958694 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 28 00:47:35.958849 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 00:47:35.959531 extend-filesystems[1855]: Found /dev/sda9 Jan 28 00:47:35.962475 extend-filesystems[1855]: Checking size of /dev/sda9 Jan 28 00:47:35.965950 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 00:47:35.969331 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 00:47:35.977117 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 00:47:35.985268 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 00:47:35.986238 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 00:47:36.014898 update_engine[1880]: I20260128 00:47:36.011842 1880 main.cc:92] Flatcar Update Engine starting Jan 28 00:47:36.015694 (ntainerd)[1892]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 28 00:47:36.017394 jq[1891]: true Jan 28 00:47:36.018842 extend-filesystems[1855]: Old size kept for /dev/sda9 Jan 28 00:47:36.024942 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 00:47:36.027055 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 00:47:36.037332 systemd-logind[1876]: New seat seat0. Jan 28 00:47:36.048555 systemd-logind[1876]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 28 00:47:36.048721 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 00:47:36.066448 tar[1890]: linux-arm64/LICENSE Jan 28 00:47:36.066448 tar[1890]: linux-arm64/helm Jan 28 00:47:36.151600 dbus-daemon[1849]: [system] SELinux support is enabled Jan 28 00:47:36.157450 update_engine[1880]: I20260128 00:47:36.154481 1880 update_check_scheduler.cc:74] Next update check in 11m32s Jan 28 00:47:36.151801 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 00:47:36.162388 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 00:47:36.163206 dbus-daemon[1849]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 28 00:47:36.162413 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 00:47:36.171412 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 00:47:36.171487 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 00:47:36.181127 systemd[1]: Started update-engine.service - Update Engine. Jan 28 00:47:36.195020 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 00:47:36.217459 bash[1935]: Updated "/home/core/.ssh/authorized_keys" Jan 28 00:47:36.219290 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 00:47:36.235555 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jan 28 00:47:36.249988 coreos-metadata[1848]: Jan 28 00:47:36.249 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 28 00:47:36.252792 coreos-metadata[1848]: Jan 28 00:47:36.252 INFO Fetch successful Jan 28 00:47:36.254225 coreos-metadata[1848]: Jan 28 00:47:36.252 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 28 00:47:36.260259 sshd_keygen[1879]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 00:47:36.260450 coreos-metadata[1848]: Jan 28 00:47:36.258 INFO Fetch successful Jan 28 00:47:36.260450 coreos-metadata[1848]: Jan 28 00:47:36.260 INFO Fetching http://168.63.129.16/machine/6b07fc83-7646-4707-8575-520b7c49b825/bc268f6b%2D1724%2D4df6%2D8770%2D179fbdfaab2c.%5Fci%2D4459.2.3%2Dn%2Dec09cdb4df?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 28 00:47:36.261066 coreos-metadata[1848]: Jan 28 00:47:36.261 INFO Fetch successful Jan 28 00:47:36.261205 coreos-metadata[1848]: Jan 28 00:47:36.261 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 28 00:47:36.271047 coreos-metadata[1848]: Jan 28 00:47:36.270 INFO Fetch successful Jan 28 00:47:36.293317 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 00:47:36.305748 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 00:47:36.314315 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 28 00:47:36.338384 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 28 00:47:36.345794 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 00:47:36.345974 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 00:47:36.354563 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 00:47:36.357130 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 00:47:36.379169 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 28 00:47:36.394702 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 00:47:36.404296 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 00:47:36.410780 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 28 00:47:36.420098 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 00:47:36.503967 locksmithd[1958]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 00:47:36.523039 tar[1890]: linux-arm64/README.md Jan 28 00:47:36.536982 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 00:47:36.810292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
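
Aside (illustration only, not part of the agent): the coreos-metadata fetches above hit two well-known Azure endpoints, the wireserver at 168.63.129.16 and the instance metadata service at 169.254.169.254. The vmSize query from the log could be reproduced as below; the "Metadata: true" header is the standard IMDS requirement, and the request only works from inside an Azure VM.

    import urllib.request

    # URL copied from the coreos-metadata log line above.
    URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")

    # IMDS rejects requests that do not carry the Metadata header.
    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode())  # the VM size string for this instance
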
Jan 28 00:47:36.875330 containerd[1892]: time="2026-01-28T00:47:36Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 28 00:47:36.876651 containerd[1892]: time="2026-01-28T00:47:36.876277976Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 28 00:47:36.882242 containerd[1892]: time="2026-01-28T00:47:36.882214320Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.616µs" Jan 28 00:47:36.882318 containerd[1892]: time="2026-01-28T00:47:36.882303744Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 28 00:47:36.882382 containerd[1892]: time="2026-01-28T00:47:36.882371392Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 28 00:47:36.882568 containerd[1892]: time="2026-01-28T00:47:36.882550824Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 28 00:47:36.882621 containerd[1892]: time="2026-01-28T00:47:36.882609832Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 28 00:47:36.882676 containerd[1892]: time="2026-01-28T00:47:36.882665816Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 00:47:36.882768 containerd[1892]: time="2026-01-28T00:47:36.882754032Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 00:47:36.882811 containerd[1892]: time="2026-01-28T00:47:36.882799440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 00:47:36.883073 containerd[1892]: time="2026-01-28T00:47:36.883052640Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 00:47:36.883128 containerd[1892]: time="2026-01-28T00:47:36.883116664Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 00:47:36.883799 containerd[1892]: time="2026-01-28T00:47:36.883166544Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 00:47:36.883799 containerd[1892]: time="2026-01-28T00:47:36.883178368Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 28 00:47:36.883799 containerd[1892]: time="2026-01-28T00:47:36.883258448Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 28 00:47:36.883799 containerd[1892]: time="2026-01-28T00:47:36.883419888Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 00:47:36.883799 containerd[1892]: time="2026-01-28T00:47:36.883439336Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jan 28 00:47:36.883799 containerd[1892]: time="2026-01-28T00:47:36.883445688Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 28 00:47:36.883799 containerd[1892]: time="2026-01-28T00:47:36.883480536Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 28 00:47:36.883799 containerd[1892]: time="2026-01-28T00:47:36.883624928Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 28 00:47:36.883799 containerd[1892]: time="2026-01-28T00:47:36.883679944Z" level=info msg="metadata content store policy set" policy=shared Jan 28 00:47:36.908964 containerd[1892]: time="2026-01-28T00:47:36.908919120Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 28 00:47:36.909148 containerd[1892]: time="2026-01-28T00:47:36.909134936Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 28 00:47:36.909254 containerd[1892]: time="2026-01-28T00:47:36.909242448Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 28 00:47:36.909314 containerd[1892]: time="2026-01-28T00:47:36.909303936Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 28 00:47:36.909354 containerd[1892]: time="2026-01-28T00:47:36.909346632Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 28 00:47:36.909388 containerd[1892]: time="2026-01-28T00:47:36.909380656Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 28 00:47:36.909424 containerd[1892]: time="2026-01-28T00:47:36.909417088Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 28 00:47:36.909459 containerd[1892]: time="2026-01-28T00:47:36.909452112Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 28 00:47:36.909495 containerd[1892]: time="2026-01-28T00:47:36.909487456Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 28 00:47:36.909532 containerd[1892]: time="2026-01-28T00:47:36.909523304Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 28 00:47:36.909564 containerd[1892]: time="2026-01-28T00:47:36.909556488Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 28 00:47:36.909610 containerd[1892]: time="2026-01-28T00:47:36.909600064Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 28 00:47:36.909793 containerd[1892]: time="2026-01-28T00:47:36.909776416Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 28 00:47:36.909859 containerd[1892]: time="2026-01-28T00:47:36.909848288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 28 00:47:36.909900 containerd[1892]: time="2026-01-28T00:47:36.909891576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 28 00:47:36.909948 containerd[1892]: time="2026-01-28T00:47:36.909937256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Jan 28 00:47:36.909992 containerd[1892]: time="2026-01-28T00:47:36.909982600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 28 00:47:36.910057 containerd[1892]: time="2026-01-28T00:47:36.910047072Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 28 00:47:36.910120 containerd[1892]: time="2026-01-28T00:47:36.910108368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 28 00:47:36.910170 containerd[1892]: time="2026-01-28T00:47:36.910158912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 28 00:47:36.910320 containerd[1892]: time="2026-01-28T00:47:36.910208080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 28 00:47:36.910320 containerd[1892]: time="2026-01-28T00:47:36.910223600Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 28 00:47:36.910320 containerd[1892]: time="2026-01-28T00:47:36.910232576Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 28 00:47:36.910320 containerd[1892]: time="2026-01-28T00:47:36.910288472Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 28 00:47:36.910320 containerd[1892]: time="2026-01-28T00:47:36.910298832Z" level=info msg="Start snapshots syncer" Jan 28 00:47:36.910426 containerd[1892]: time="2026-01-28T00:47:36.910416816Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 28 00:47:36.910807 containerd[1892]: time="2026-01-28T00:47:36.910720544Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 28 00:47:36.910807 containerd[1892]: time="2026-01-28T00:47:36.910763216Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 28 00:47:36.910987 containerd[1892]: time="2026-01-28T00:47:36.910928888Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 28 00:47:36.911294 containerd[1892]: time="2026-01-28T00:47:36.911163952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 28 00:47:36.911294 containerd[1892]: time="2026-01-28T00:47:36.911189224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 28 00:47:36.911294 containerd[1892]: time="2026-01-28T00:47:36.911196720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 28 00:47:36.911294 containerd[1892]: time="2026-01-28T00:47:36.911206984Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 28 00:47:36.911294 containerd[1892]: time="2026-01-28T00:47:36.911215912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 28 00:47:36.911294 containerd[1892]: time="2026-01-28T00:47:36.911222616Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 28 00:47:36.911294 containerd[1892]: time="2026-01-28T00:47:36.911229064Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 28 00:47:36.911294 containerd[1892]: time="2026-01-28T00:47:36.911247064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 28 00:47:36.911294 containerd[1892]: 
time="2026-01-28T00:47:36.911255544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 28 00:47:36.911294 containerd[1892]: time="2026-01-28T00:47:36.911263488Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 28 00:47:36.911525 containerd[1892]: time="2026-01-28T00:47:36.911469360Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 00:47:36.911525 containerd[1892]: time="2026-01-28T00:47:36.911491944Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 00:47:36.911525 containerd[1892]: time="2026-01-28T00:47:36.911499008Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 00:47:36.911525 containerd[1892]: time="2026-01-28T00:47:36.911505904Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 00:47:36.911639 containerd[1892]: time="2026-01-28T00:47:36.911510904Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 28 00:47:36.911686 containerd[1892]: time="2026-01-28T00:47:36.911674416Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 28 00:47:36.911809 containerd[1892]: time="2026-01-28T00:47:36.911714024Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 28 00:47:36.911809 containerd[1892]: time="2026-01-28T00:47:36.911732800Z" level=info msg="runtime interface created" Jan 28 00:47:36.911809 containerd[1892]: time="2026-01-28T00:47:36.911737624Z" level=info msg="created NRI interface" Jan 28 00:47:36.911809 containerd[1892]: time="2026-01-28T00:47:36.911749976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 28 00:47:36.911809 containerd[1892]: time="2026-01-28T00:47:36.911764208Z" level=info msg="Connect containerd service" Jan 28 00:47:36.911809 containerd[1892]: time="2026-01-28T00:47:36.911790472Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 00:47:36.912828 containerd[1892]: time="2026-01-28T00:47:36.912599424Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 00:47:36.990525 (kubelet)[2035]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:47:37.286496 kubelet[2035]: E0128 00:47:37.286383 2035 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:47:37.288375 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:47:37.288487 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 28 00:47:37.288944 systemd[1]: kubelet.service: Consumed 500ms CPU time, 248.6M memory peak. Jan 28 00:47:37.349816 containerd[1892]: time="2026-01-28T00:47:37.349345176Z" level=info msg="Start subscribing containerd event" Jan 28 00:47:37.349816 containerd[1892]: time="2026-01-28T00:47:37.349407472Z" level=info msg="Start recovering state" Jan 28 00:47:37.349816 containerd[1892]: time="2026-01-28T00:47:37.349485072Z" level=info msg="Start event monitor" Jan 28 00:47:37.349816 containerd[1892]: time="2026-01-28T00:47:37.349495128Z" level=info msg="Start cni network conf syncer for default" Jan 28 00:47:37.349816 containerd[1892]: time="2026-01-28T00:47:37.349499872Z" level=info msg="Start streaming server" Jan 28 00:47:37.349816 containerd[1892]: time="2026-01-28T00:47:37.349506120Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 28 00:47:37.349816 containerd[1892]: time="2026-01-28T00:47:37.349510752Z" level=info msg="runtime interface starting up..." Jan 28 00:47:37.349816 containerd[1892]: time="2026-01-28T00:47:37.349514424Z" level=info msg="starting plugins..." Jan 28 00:47:37.349816 containerd[1892]: time="2026-01-28T00:47:37.349525824Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 28 00:47:37.350146 containerd[1892]: time="2026-01-28T00:47:37.350117896Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 00:47:37.350169 containerd[1892]: time="2026-01-28T00:47:37.350160080Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 00:47:37.350224 containerd[1892]: time="2026-01-28T00:47:37.350204704Z" level=info msg="containerd successfully booted in 0.475231s" Jan 28 00:47:37.351150 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 00:47:37.356433 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 00:47:37.361965 systemd[1]: Startup finished in 1.699s (kernel) + 13.716s (initrd) + 12.265s (userspace) = 27.680s. Jan 28 00:47:37.744152 login[2014]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 28 00:47:37.745728 login[2015]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:47:37.754494 systemd-logind[1876]: New session 2 of user core. Jan 28 00:47:37.756419 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 00:47:37.759107 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 00:47:37.796093 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 00:47:37.798187 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 00:47:37.810337 (systemd)[2061]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 28 00:47:37.812376 systemd-logind[1876]: New session c1 of user core. Jan 28 00:47:37.929123 systemd[2061]: Queued start job for default target default.target. Jan 28 00:47:37.939750 systemd[2061]: Created slice app.slice - User Application Slice. Jan 28 00:47:37.939988 systemd[2061]: Reached target paths.target - Paths. Jan 28 00:47:37.940053 systemd[2061]: Reached target timers.target - Timers. Jan 28 00:47:37.941121 systemd[2061]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 00:47:37.948653 systemd[2061]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 00:47:37.948798 systemd[2061]: Reached target sockets.target - Sockets. 
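
Editor's note: the kubelet exit recorded above (and its scheduled restarts later in the log) is the usual pre-bootstrap failure: /var/lib/kubelet/config.yaml is only written once kubeadm init or kubeadm join runs, so until then the unit keeps failing with this error. A few commands, as a sketch, to confirm that this and nothing else is the cause:

    # The file kubelet is complaining about; absent until kubeadm generates it
    ls -l /var/lib/kubelet/config.yaml
    # Current unit state and the most recent kubelet log lines
    systemctl status kubelet --no-pager
    journalctl -u kubelet -n 20 --no-pager
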
Jan 28 00:47:37.948883 systemd[2061]: Reached target basic.target - Basic System. Jan 28 00:47:37.949158 systemd[2061]: Reached target default.target - Main User Target. Jan 28 00:47:37.949181 systemd[2061]: Startup finished in 132ms. Jan 28 00:47:37.949327 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 00:47:37.957140 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 28 00:47:38.489953 waagent[2012]: 2026-01-28T00:47:38.485992Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 28 00:47:38.490257 waagent[2012]: 2026-01-28T00:47:38.490106Z INFO Daemon Daemon OS: flatcar 4459.2.3 Jan 28 00:47:38.493265 waagent[2012]: 2026-01-28T00:47:38.493233Z INFO Daemon Daemon Python: 3.11.13 Jan 28 00:47:38.496536 waagent[2012]: 2026-01-28T00:47:38.496489Z INFO Daemon Daemon Run daemon Jan 28 00:47:38.499487 waagent[2012]: 2026-01-28T00:47:38.499340Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.3' Jan 28 00:47:38.505625 waagent[2012]: 2026-01-28T00:47:38.505588Z INFO Daemon Daemon Using waagent for provisioning Jan 28 00:47:38.509308 waagent[2012]: 2026-01-28T00:47:38.509279Z INFO Daemon Daemon Activate resource disk Jan 28 00:47:38.512723 waagent[2012]: 2026-01-28T00:47:38.512696Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 28 00:47:38.520496 waagent[2012]: 2026-01-28T00:47:38.520463Z INFO Daemon Daemon Found device: None Jan 28 00:47:38.523622 waagent[2012]: 2026-01-28T00:47:38.523594Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 28 00:47:38.529664 waagent[2012]: 2026-01-28T00:47:38.529637Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 28 00:47:38.537749 waagent[2012]: 2026-01-28T00:47:38.537712Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 28 00:47:38.541840 waagent[2012]: 2026-01-28T00:47:38.541810Z INFO Daemon Daemon Running default provisioning handler Jan 28 00:47:38.550933 waagent[2012]: 2026-01-28T00:47:38.550892Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 28 00:47:38.560786 waagent[2012]: 2026-01-28T00:47:38.560752Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 28 00:47:38.567424 waagent[2012]: 2026-01-28T00:47:38.567397Z INFO Daemon Daemon cloud-init is enabled: False Jan 28 00:47:38.570874 waagent[2012]: 2026-01-28T00:47:38.570853Z INFO Daemon Daemon Copying ovf-env.xml Jan 28 00:47:38.687580 waagent[2012]: 2026-01-28T00:47:38.687480Z INFO Daemon Daemon Successfully mounted dvd Jan 28 00:47:38.713656 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 28 00:47:38.715807 waagent[2012]: 2026-01-28T00:47:38.715757Z INFO Daemon Daemon Detect protocol endpoint Jan 28 00:47:38.719484 waagent[2012]: 2026-01-28T00:47:38.719449Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 28 00:47:38.723483 waagent[2012]: 2026-01-28T00:47:38.723454Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 28 00:47:38.728171 waagent[2012]: 2026-01-28T00:47:38.728143Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 28 00:47:38.732281 waagent[2012]: 2026-01-28T00:47:38.732248Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 28 00:47:38.736028 waagent[2012]: 2026-01-28T00:47:38.735992Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 28 00:47:38.744509 login[2014]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:47:38.748390 systemd-logind[1876]: New session 1 of user core. Jan 28 00:47:38.765126 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 00:47:38.780413 waagent[2012]: 2026-01-28T00:47:38.780374Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 28 00:47:38.785608 waagent[2012]: 2026-01-28T00:47:38.785580Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 28 00:47:38.789618 waagent[2012]: 2026-01-28T00:47:38.789586Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 28 00:47:38.956177 waagent[2012]: 2026-01-28T00:47:38.956096Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 28 00:47:38.960787 waagent[2012]: 2026-01-28T00:47:38.960754Z INFO Daemon Daemon Forcing an update of the goal state. Jan 28 00:47:38.968164 waagent[2012]: 2026-01-28T00:47:38.968128Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 28 00:47:38.986489 waagent[2012]: 2026-01-28T00:47:38.986458Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 28 00:47:38.990658 waagent[2012]: 2026-01-28T00:47:38.990624Z INFO Daemon Jan 28 00:47:38.992674 waagent[2012]: 2026-01-28T00:47:38.992646Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 9a37001b-8a0d-471b-ade6-9b2128a4dccb eTag: 12584527307283320518 source: Fabric] Jan 28 00:47:39.000615 waagent[2012]: 2026-01-28T00:47:39.000559Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 28 00:47:39.005459 waagent[2012]: 2026-01-28T00:47:39.005431Z INFO Daemon Jan 28 00:47:39.007501 waagent[2012]: 2026-01-28T00:47:39.007475Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 28 00:47:39.015694 waagent[2012]: 2026-01-28T00:47:39.015664Z INFO Daemon Daemon Downloading artifacts profile blob Jan 28 00:47:39.136964 waagent[2012]: 2026-01-28T00:47:39.136905Z INFO Daemon Downloaded certificate {'thumbprint': '88B2947EA1F9A09F81CD09730EA4B61A2ED523C7', 'hasPrivateKey': True} Jan 28 00:47:39.144089 waagent[2012]: 2026-01-28T00:47:39.144050Z INFO Daemon Fetch goal state completed Jan 28 00:47:39.179141 waagent[2012]: 2026-01-28T00:47:39.179108Z INFO Daemon Daemon Starting provisioning Jan 28 00:47:39.183059 waagent[2012]: 2026-01-28T00:47:39.183023Z INFO Daemon Daemon Handle ovf-env.xml. Jan 28 00:47:39.186478 waagent[2012]: 2026-01-28T00:47:39.186453Z INFO Daemon Daemon Set hostname [ci-4459.2.3-n-ec09cdb4df] Jan 28 00:47:39.192132 waagent[2012]: 2026-01-28T00:47:39.192095Z INFO Daemon Daemon Publish hostname [ci-4459.2.3-n-ec09cdb4df] Jan 28 00:47:39.196807 waagent[2012]: 2026-01-28T00:47:39.196772Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 28 00:47:39.201332 waagent[2012]: 2026-01-28T00:47:39.201302Z INFO Daemon Daemon Primary interface is [eth0] Jan 28 00:47:39.210546 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
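
Editor's note: waagent's "Test for route to 168.63.129.16" above is a reachability check against the Azure wire server. The equivalent manual probe is a one-liner (a sketch of the same check, not the agent's own code):

    # Show which interface and source address would be used to reach the wire server
    ip route get 168.63.129.16
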
Jan 28 00:47:39.210552 systemd-networkd[1485]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 00:47:39.210599 systemd-networkd[1485]: eth0: DHCP lease lost Jan 28 00:47:39.211876 waagent[2012]: 2026-01-28T00:47:39.211831Z INFO Daemon Daemon Create user account if not exists Jan 28 00:47:39.215888 waagent[2012]: 2026-01-28T00:47:39.215859Z INFO Daemon Daemon User core already exists, skip useradd Jan 28 00:47:39.219918 waagent[2012]: 2026-01-28T00:47:39.219894Z INFO Daemon Daemon Configure sudoer Jan 28 00:47:39.230490 waagent[2012]: 2026-01-28T00:47:39.227333Z INFO Daemon Daemon Configure sshd Jan 28 00:47:39.233428 waagent[2012]: 2026-01-28T00:47:39.233391Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 28 00:47:39.242823 waagent[2012]: 2026-01-28T00:47:39.242792Z INFO Daemon Daemon Deploy ssh public key. Jan 28 00:47:39.249067 systemd-networkd[1485]: eth0: DHCPv4 address 10.200.20.26/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 28 00:47:40.369628 waagent[2012]: 2026-01-28T00:47:40.369562Z INFO Daemon Daemon Provisioning complete Jan 28 00:47:40.383706 waagent[2012]: 2026-01-28T00:47:40.383668Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 28 00:47:40.389213 waagent[2012]: 2026-01-28T00:47:40.389169Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 28 00:47:40.398300 waagent[2012]: 2026-01-28T00:47:40.398259Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 28 00:47:40.496052 waagent[2111]: 2026-01-28T00:47:40.495143Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 28 00:47:40.496052 waagent[2111]: 2026-01-28T00:47:40.495263Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.3 Jan 28 00:47:40.496052 waagent[2111]: 2026-01-28T00:47:40.495299Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 28 00:47:40.496052 waagent[2111]: 2026-01-28T00:47:40.495332Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jan 28 00:47:40.565980 waagent[2111]: 2026-01-28T00:47:40.565917Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.3; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 28 00:47:40.566324 waagent[2111]: 2026-01-28T00:47:40.566295Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 28 00:47:40.566449 waagent[2111]: 2026-01-28T00:47:40.566425Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 28 00:47:40.572331 waagent[2111]: 2026-01-28T00:47:40.572290Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 28 00:47:40.580063 waagent[2111]: 2026-01-28T00:47:40.580036Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 28 00:47:40.580480 waagent[2111]: 2026-01-28T00:47:40.580451Z INFO ExtHandler Jan 28 00:47:40.580594 waagent[2111]: 2026-01-28T00:47:40.580573Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f99fea16-f0ae-4ff2-9fc8-df2cdd9b62dd eTag: 12584527307283320518 source: Fabric] Jan 28 00:47:40.580883 waagent[2111]: 2026-01-28T00:47:40.580856Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
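
Editor's note: the "Configure sshd" step above reports a drop-in snippet that disables password-based SSH authentication and enables keep-alive probing of clients. Rather than guessing the snippet's exact contents, the effective settings can be read back from sshd itself (run as root; the grep pattern is just a convenience):

    # Dump sshd's effective configuration and pick out the relevant keys
    sshd -T | grep -iE 'passwordauthentication|kbdinteractiveauthentication|clientalive'
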
Jan 28 00:47:40.581383 waagent[2111]: 2026-01-28T00:47:40.581353Z INFO ExtHandler Jan 28 00:47:40.581495 waagent[2111]: 2026-01-28T00:47:40.581473Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 28 00:47:40.586045 waagent[2111]: 2026-01-28T00:47:40.584805Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 28 00:47:40.656426 waagent[2111]: 2026-01-28T00:47:40.655160Z INFO ExtHandler Downloaded certificate {'thumbprint': '88B2947EA1F9A09F81CD09730EA4B61A2ED523C7', 'hasPrivateKey': True} Jan 28 00:47:40.656426 waagent[2111]: 2026-01-28T00:47:40.655702Z INFO ExtHandler Fetch goal state completed Jan 28 00:47:40.670476 waagent[2111]: 2026-01-28T00:47:40.670409Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Jan 28 00:47:40.674896 waagent[2111]: 2026-01-28T00:47:40.674808Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2111 Jan 28 00:47:40.675222 waagent[2111]: 2026-01-28T00:47:40.675186Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 28 00:47:40.675708 waagent[2111]: 2026-01-28T00:47:40.675670Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 28 00:47:40.677126 waagent[2111]: 2026-01-28T00:47:40.677087Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.3', '', 'Flatcar Container Linux by Kinvolk'] Jan 28 00:47:40.677542 waagent[2111]: 2026-01-28T00:47:40.677507Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.3', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 28 00:47:40.677755 waagent[2111]: 2026-01-28T00:47:40.677726Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 28 00:47:40.678308 waagent[2111]: 2026-01-28T00:47:40.678275Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 28 00:47:40.709858 waagent[2111]: 2026-01-28T00:47:40.709825Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 28 00:47:40.710182 waagent[2111]: 2026-01-28T00:47:40.710153Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 28 00:47:40.714572 waagent[2111]: 2026-01-28T00:47:40.714549Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 28 00:47:40.718827 systemd[1]: Reload requested from client PID 2126 ('systemctl') (unit waagent.service)... Jan 28 00:47:40.719009 systemd[1]: Reloading... Jan 28 00:47:40.781173 zram_generator::config[2165]: No configuration found. Jan 28 00:47:40.934157 systemd[1]: Reloading finished in 214 ms. Jan 28 00:47:40.947916 waagent[2111]: 2026-01-28T00:47:40.947803Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 28 00:47:40.947989 waagent[2111]: 2026-01-28T00:47:40.947942Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 28 00:47:42.111529 waagent[2111]: 2026-01-28T00:47:42.111451Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 28 00:47:42.111801 waagent[2111]: 2026-01-28T00:47:42.111753Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 28 00:47:42.112410 waagent[2111]: 2026-01-28T00:47:42.112370Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 28 00:47:42.112693 waagent[2111]: 2026-01-28T00:47:42.112629Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 28 00:47:42.113037 waagent[2111]: 2026-01-28T00:47:42.112853Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 28 00:47:42.113037 waagent[2111]: 2026-01-28T00:47:42.112921Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 28 00:47:42.113310 waagent[2111]: 2026-01-28T00:47:42.113228Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 28 00:47:42.113383 waagent[2111]: 2026-01-28T00:47:42.113286Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 28 00:47:42.113460 waagent[2111]: 2026-01-28T00:47:42.113378Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 28 00:47:42.114035 waagent[2111]: 2026-01-28T00:47:42.113596Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 28 00:47:42.114035 waagent[2111]: 2026-01-28T00:47:42.113651Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 28 00:47:42.114035 waagent[2111]: 2026-01-28T00:47:42.113734Z INFO EnvHandler ExtHandler Configure routes Jan 28 00:47:42.114035 waagent[2111]: 2026-01-28T00:47:42.113772Z INFO EnvHandler ExtHandler Gateway:None Jan 28 00:47:42.114035 waagent[2111]: 2026-01-28T00:47:42.113795Z INFO EnvHandler ExtHandler Routes:None Jan 28 00:47:42.114310 waagent[2111]: 2026-01-28T00:47:42.114256Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 28 00:47:42.114357 waagent[2111]: 2026-01-28T00:47:42.114308Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 28 00:47:42.114925 waagent[2111]: 2026-01-28T00:47:42.114901Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 28 00:47:42.115188 waagent[2111]: 2026-01-28T00:47:42.115166Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 28 00:47:42.115188 waagent[2111]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 28 00:47:42.115188 waagent[2111]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 28 00:47:42.115188 waagent[2111]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 28 00:47:42.115188 waagent[2111]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 28 00:47:42.115188 waagent[2111]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 28 00:47:42.115188 waagent[2111]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 28 00:47:42.121355 waagent[2111]: 2026-01-28T00:47:42.121321Z INFO ExtHandler ExtHandler Jan 28 00:47:42.121399 waagent[2111]: 2026-01-28T00:47:42.121385Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 43319b63-50d2-4ba6-af4f-00f80de8519e correlation c17bc9a1-9b1a-480e-a896-5ca922ea4126 created: 2026-01-28T00:46:37.321648Z] Jan 28 00:47:42.121658 waagent[2111]: 2026-01-28T00:47:42.121627Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
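
Editor's note: the /proc/net/route dump above stores addresses as little-endian hex, so the columns read more easily once decoded: 0114C80A is the default gateway 10.200.20.1, 0014C80A with mask 00FFFFFF is the local subnet 10.200.20.0/24, 10813FA8 is the Azure wire server 168.63.129.16, and FEA9FEA9 is the IMDS address 169.254.169.254. A tiny bash decoding sketch:

    # Reverse the byte order of one hex field and print it as a dotted quad (bash substring syntax)
    decode() { printf '%d.%d.%d.%d\n' "0x${1:6:2}" "0x${1:4:2}" "0x${1:2:2}" "0x${1:0:2}"; }
    decode 0114C80A   # -> 10.200.20.1
    decode 10813FA8   # -> 168.63.129.16
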
Jan 28 00:47:42.124324 waagent[2111]: 2026-01-28T00:47:42.124058Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jan 28 00:47:42.147553 waagent[2111]: 2026-01-28T00:47:42.147520Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 28 00:47:42.147553 waagent[2111]: Try `iptables -h' or 'iptables --help' for more information.) Jan 28 00:47:42.147954 waagent[2111]: 2026-01-28T00:47:42.147925Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C8DD776D-3ECC-4964-BD0E-B8C70BC49089;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 28 00:47:42.180898 waagent[2111]: 2026-01-28T00:47:42.180854Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 28 00:47:42.180898 waagent[2111]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 28 00:47:42.180898 waagent[2111]: pkts bytes target prot opt in out source destination Jan 28 00:47:42.180898 waagent[2111]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 28 00:47:42.180898 waagent[2111]: pkts bytes target prot opt in out source destination Jan 28 00:47:42.180898 waagent[2111]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 28 00:47:42.180898 waagent[2111]: pkts bytes target prot opt in out source destination Jan 28 00:47:42.180898 waagent[2111]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 28 00:47:42.180898 waagent[2111]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 28 00:47:42.180898 waagent[2111]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 28 00:47:42.183563 waagent[2111]: 2026-01-28T00:47:42.183288Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 28 00:47:42.183563 waagent[2111]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 28 00:47:42.183563 waagent[2111]: pkts bytes target prot opt in out source destination Jan 28 00:47:42.183563 waagent[2111]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 28 00:47:42.183563 waagent[2111]: pkts bytes target prot opt in out source destination Jan 28 00:47:42.183563 waagent[2111]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 28 00:47:42.183563 waagent[2111]: pkts bytes target prot opt in out source destination Jan 28 00:47:42.183563 waagent[2111]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 28 00:47:42.183563 waagent[2111]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 28 00:47:42.183563 waagent[2111]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 28 00:47:42.183563 waagent[2111]: 2026-01-28T00:47:42.183481Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 28 00:47:42.187924 waagent[2111]: 2026-01-28T00:47:42.187892Z INFO MonitorHandler ExtHandler Network interfaces: Jan 28 00:47:42.187924 waagent[2111]: Executing ['ip', '-a', '-o', 'link']: Jan 28 00:47:42.187924 waagent[2111]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 28 00:47:42.187924 waagent[2111]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:b6:92:64 brd ff:ff:ff:ff:ff:ff Jan 28 00:47:42.187924 waagent[2111]: 3: enP28242s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ 
link/ether 7c:ed:8d:b6:92:64 brd ff:ff:ff:ff:ff:ff\ altname enP28242p0s2 Jan 28 00:47:42.187924 waagent[2111]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 28 00:47:42.187924 waagent[2111]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 28 00:47:42.187924 waagent[2111]: 2: eth0 inet 10.200.20.26/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 28 00:47:42.187924 waagent[2111]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 28 00:47:42.187924 waagent[2111]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 28 00:47:42.187924 waagent[2111]: 2: eth0 inet6 fe80::7eed:8dff:feb6:9264/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 28 00:47:47.384694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 00:47:47.385987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:47:47.479996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:47:47.482999 (kubelet)[2260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:47:47.616300 kubelet[2260]: E0128 00:47:47.616255 2260 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:47:47.618904 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:47:47.619035 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:47:47.619463 systemd[1]: kubelet.service: Consumed 108ms CPU time, 107.5M memory peak. Jan 28 00:47:53.444180 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 00:47:53.446196 systemd[1]: Started sshd@0-10.200.20.26:22-10.200.16.10:41822.service - OpenSSH per-connection server daemon (10.200.16.10:41822). Jan 28 00:47:53.974447 sshd[2268]: Accepted publickey for core from 10.200.16.10 port 41822 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:47:53.975431 sshd-session[2268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:47:53.978793 systemd-logind[1876]: New session 3 of user core. Jan 28 00:47:53.986362 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 00:47:54.392198 systemd[1]: Started sshd@1-10.200.20.26:22-10.200.16.10:41830.service - OpenSSH per-connection server daemon (10.200.16.10:41830). Jan 28 00:47:54.888342 sshd[2274]: Accepted publickey for core from 10.200.16.10 port 41830 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:47:54.889339 sshd-session[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:47:54.892968 systemd-logind[1876]: New session 4 of user core. Jan 28 00:47:54.899304 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 00:47:55.239543 sshd[2277]: Connection closed by 10.200.16.10 port 41830 Jan 28 00:47:55.239088 sshd-session[2274]: pam_unix(sshd:session): session closed for user core Jan 28 00:47:55.242234 systemd-logind[1876]: Session 4 logged out. Waiting for processes to exit. 
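
Editor's note: the firewall rules waagent prints just above pin access to 168.63.129.16: DNS on tcp/53 and traffic owned by UID 0 are accepted, while new or invalid connections from other users to that address are dropped. The agent's own combined list-and-zero invocation fails under this nf_tables iptables build, but listing the security table on its own should show the same chains:

    # List the Azure-fabric rules waagent installed in the security table
    iptables -w -t security -L OUTPUT -nxv
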
Jan 28 00:47:55.242741 systemd[1]: sshd@1-10.200.20.26:22-10.200.16.10:41830.service: Deactivated successfully. Jan 28 00:47:55.244117 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 00:47:55.246255 systemd-logind[1876]: Removed session 4. Jan 28 00:47:55.325805 systemd[1]: Started sshd@2-10.200.20.26:22-10.200.16.10:41840.service - OpenSSH per-connection server daemon (10.200.16.10:41840). Jan 28 00:47:55.817065 sshd[2283]: Accepted publickey for core from 10.200.16.10 port 41840 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:47:55.817928 sshd-session[2283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:47:55.821237 systemd-logind[1876]: New session 5 of user core. Jan 28 00:47:55.833149 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 00:47:56.164709 sshd[2286]: Connection closed by 10.200.16.10 port 41840 Jan 28 00:47:56.164704 sshd-session[2283]: pam_unix(sshd:session): session closed for user core Jan 28 00:47:56.168681 systemd[1]: sshd@2-10.200.20.26:22-10.200.16.10:41840.service: Deactivated successfully. Jan 28 00:47:56.170330 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 00:47:56.171565 systemd-logind[1876]: Session 5 logged out. Waiting for processes to exit. Jan 28 00:47:56.172620 systemd-logind[1876]: Removed session 5. Jan 28 00:47:56.246210 systemd[1]: Started sshd@3-10.200.20.26:22-10.200.16.10:41844.service - OpenSSH per-connection server daemon (10.200.16.10:41844). Jan 28 00:47:56.706747 sshd[2292]: Accepted publickey for core from 10.200.16.10 port 41844 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:47:56.707428 sshd-session[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:47:56.711246 systemd-logind[1876]: New session 6 of user core. Jan 28 00:47:56.717131 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 00:47:57.038132 sshd[2295]: Connection closed by 10.200.16.10 port 41844 Jan 28 00:47:57.038610 sshd-session[2292]: pam_unix(sshd:session): session closed for user core Jan 28 00:47:57.042101 systemd-logind[1876]: Session 6 logged out. Waiting for processes to exit. Jan 28 00:47:57.042765 systemd[1]: sshd@3-10.200.20.26:22-10.200.16.10:41844.service: Deactivated successfully. Jan 28 00:47:57.044519 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 00:47:57.045921 systemd-logind[1876]: Removed session 6. Jan 28 00:47:57.120236 systemd[1]: Started sshd@4-10.200.20.26:22-10.200.16.10:41856.service - OpenSSH per-connection server daemon (10.200.16.10:41856). Jan 28 00:47:57.577399 sshd[2301]: Accepted publickey for core from 10.200.16.10 port 41856 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:47:57.578121 sshd-session[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:47:57.581414 systemd-logind[1876]: New session 7 of user core. Jan 28 00:47:57.591159 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 28 00:47:57.634663 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 00:47:57.635972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 28 00:47:59.520761 sudo[2308]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 00:47:59.520980 sudo[2308]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:47:59.682594 sudo[2308]: pam_unix(sudo:session): session closed for user root Jan 28 00:47:59.757458 chronyd[1846]: Selected source PHC0 Jan 28 00:47:59.760717 sshd[2304]: Connection closed by 10.200.16.10 port 41856 Jan 28 00:47:59.761142 sshd-session[2301]: pam_unix(sshd:session): session closed for user core Jan 28 00:47:59.764765 systemd[1]: sshd@4-10.200.20.26:22-10.200.16.10:41856.service: Deactivated successfully. Jan 28 00:47:59.766076 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 00:47:59.766650 systemd-logind[1876]: Session 7 logged out. Waiting for processes to exit. Jan 28 00:47:59.767653 systemd-logind[1876]: Removed session 7. Jan 28 00:47:59.845763 systemd[1]: Started sshd@5-10.200.20.26:22-10.200.16.10:47380.service - OpenSSH per-connection server daemon (10.200.16.10:47380). Jan 28 00:47:59.977848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:47:59.980608 (kubelet)[2322]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:48:00.008682 kubelet[2322]: E0128 00:48:00.008630 2322 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:48:00.010802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:48:00.010915 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:48:00.011227 systemd[1]: kubelet.service: Consumed 105ms CPU time, 107.2M memory peak. Jan 28 00:48:00.296071 sshd[2314]: Accepted publickey for core from 10.200.16.10 port 47380 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:48:00.296977 sshd-session[2314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:48:00.300675 systemd-logind[1876]: New session 8 of user core. Jan 28 00:48:00.308369 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 28 00:48:00.550486 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 00:48:00.551194 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:48:01.096917 sudo[2331]: pam_unix(sudo:session): session closed for user root Jan 28 00:48:01.100958 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 28 00:48:01.101494 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:48:01.109450 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 00:48:01.137986 augenrules[2353]: No rules Jan 28 00:48:01.139282 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 00:48:01.139569 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
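
Editor's note: chronyd's "Selected source PHC0" above means time is being taken from a PTP hardware clock rather than a network NTP source; on Azure this is normally the Hyper-V host clock exposed as a /dev/ptp* device (an assumption about this particular image, but the usual Flatcar-on-Azure setup). A short sketch to confirm:

    # Show the clock sources chrony knows about and which one is currently selected
    chronyc sources -v
    chronyc tracking
    # The PTP device(s) a PHC refclock would read from
    ls -l /dev/ptp*
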
Jan 28 00:48:01.140849 sudo[2330]: pam_unix(sudo:session): session closed for user root Jan 28 00:48:01.211433 sshd[2329]: Connection closed by 10.200.16.10 port 47380 Jan 28 00:48:01.212143 sshd-session[2314]: pam_unix(sshd:session): session closed for user core Jan 28 00:48:01.215983 systemd[1]: sshd@5-10.200.20.26:22-10.200.16.10:47380.service: Deactivated successfully. Jan 28 00:48:01.217516 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 00:48:01.218213 systemd-logind[1876]: Session 8 logged out. Waiting for processes to exit. Jan 28 00:48:01.219453 systemd-logind[1876]: Removed session 8. Jan 28 00:48:01.299692 systemd[1]: Started sshd@6-10.200.20.26:22-10.200.16.10:47388.service - OpenSSH per-connection server daemon (10.200.16.10:47388). Jan 28 00:48:01.792876 sshd[2362]: Accepted publickey for core from 10.200.16.10 port 47388 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:48:01.793622 sshd-session[2362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:48:01.796954 systemd-logind[1876]: New session 9 of user core. Jan 28 00:48:01.807154 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 28 00:48:02.066337 sudo[2366]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 00:48:02.066546 sudo[2366]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:48:03.804576 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 28 00:48:03.816269 (dockerd)[2385]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 00:48:05.059037 dockerd[2385]: time="2026-01-28T00:48:05.057185395Z" level=info msg="Starting up" Jan 28 00:48:05.060029 dockerd[2385]: time="2026-01-28T00:48:05.059932971Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 28 00:48:05.067850 dockerd[2385]: time="2026-01-28T00:48:05.067821355Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 28 00:48:05.150820 dockerd[2385]: time="2026-01-28T00:48:05.150616683Z" level=info msg="Loading containers: start." Jan 28 00:48:05.163032 kernel: Initializing XFRM netlink socket Jan 28 00:48:05.534281 systemd-networkd[1485]: docker0: Link UP Jan 28 00:48:05.550929 dockerd[2385]: time="2026-01-28T00:48:05.550460603Z" level=info msg="Loading containers: done." Jan 28 00:48:05.559872 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2245738767-merged.mount: Deactivated successfully. 
Jan 28 00:48:05.572815 dockerd[2385]: time="2026-01-28T00:48:05.572531475Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 00:48:05.572815 dockerd[2385]: time="2026-01-28T00:48:05.572620307Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 28 00:48:05.572815 dockerd[2385]: time="2026-01-28T00:48:05.572693179Z" level=info msg="Initializing buildkit" Jan 28 00:48:05.616917 dockerd[2385]: time="2026-01-28T00:48:05.616851563Z" level=info msg="Completed buildkit initialization" Jan 28 00:48:05.622205 dockerd[2385]: time="2026-01-28T00:48:05.622169859Z" level=info msg="Daemon has completed initialization" Jan 28 00:48:05.622369 dockerd[2385]: time="2026-01-28T00:48:05.622246707Z" level=info msg="API listen on /run/docker.sock" Jan 28 00:48:05.622557 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 00:48:06.328444 containerd[1892]: time="2026-01-28T00:48:06.328407795Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 28 00:48:07.051621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount111855527.mount: Deactivated successfully. Jan 28 00:48:08.464040 containerd[1892]: time="2026-01-28T00:48:08.463976185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:08.467986 containerd[1892]: time="2026-01-28T00:48:08.467952601Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24571040" Jan 28 00:48:08.471401 containerd[1892]: time="2026-01-28T00:48:08.471354372Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:08.476076 containerd[1892]: time="2026-01-28T00:48:08.476038836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:08.477187 containerd[1892]: time="2026-01-28T00:48:08.477150557Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 2.148705144s" Jan 28 00:48:08.477187 containerd[1892]: time="2026-01-28T00:48:08.477186374Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Jan 28 00:48:08.477990 containerd[1892]: time="2026-01-28T00:48:08.477811801Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 28 00:48:09.657558 containerd[1892]: time="2026-01-28T00:48:09.657320549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:09.661259 containerd[1892]: time="2026-01-28T00:48:09.661229344Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19135477" Jan 28 00:48:09.664542 containerd[1892]: time="2026-01-28T00:48:09.664509845Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:09.669188 containerd[1892]: time="2026-01-28T00:48:09.669109687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:09.669757 containerd[1892]: time="2026-01-28T00:48:09.669566422Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.191729633s" Jan 28 00:48:09.669757 containerd[1892]: time="2026-01-28T00:48:09.669592079Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Jan 28 00:48:09.669955 containerd[1892]: time="2026-01-28T00:48:09.669938091Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 28 00:48:10.134689 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 00:48:10.137542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:48:10.241373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:48:10.248300 (kubelet)[2662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:48:10.274103 kubelet[2662]: E0128 00:48:10.274059 2662 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:48:10.276130 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:48:10.276240 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:48:10.276535 systemd[1]: kubelet.service: Consumed 103ms CPU time, 106.6M memory peak. 
Jan 28 00:48:10.953733 containerd[1892]: time="2026-01-28T00:48:10.953629532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:10.958065 containerd[1892]: time="2026-01-28T00:48:10.958038215Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14191716" Jan 28 00:48:10.961236 containerd[1892]: time="2026-01-28T00:48:10.961201969Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:10.966087 containerd[1892]: time="2026-01-28T00:48:10.966045739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:10.967119 containerd[1892]: time="2026-01-28T00:48:10.966450401Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.296489509s" Jan 28 00:48:10.967119 containerd[1892]: time="2026-01-28T00:48:10.966479618Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Jan 28 00:48:10.967329 containerd[1892]: time="2026-01-28T00:48:10.967312550Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 28 00:48:12.428180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2376571099.mount: Deactivated successfully. 
Jan 28 00:48:12.638886 containerd[1892]: time="2026-01-28T00:48:12.638832002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:12.641998 containerd[1892]: time="2026-01-28T00:48:12.641970683Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22805253" Jan 28 00:48:12.645156 containerd[1892]: time="2026-01-28T00:48:12.645131885Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:12.649424 containerd[1892]: time="2026-01-28T00:48:12.649397924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:12.649792 containerd[1892]: time="2026-01-28T00:48:12.649771032Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.682312958s" Jan 28 00:48:12.649811 containerd[1892]: time="2026-01-28T00:48:12.649797945Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Jan 28 00:48:12.651360 containerd[1892]: time="2026-01-28T00:48:12.651334701Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 28 00:48:13.270002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3731271573.mount: Deactivated successfully. 
Jan 28 00:48:14.402983 containerd[1892]: time="2026-01-28T00:48:14.402895967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:14.405680 containerd[1892]: time="2026-01-28T00:48:14.405634666Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406" Jan 28 00:48:14.409268 containerd[1892]: time="2026-01-28T00:48:14.409244107Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:14.414192 containerd[1892]: time="2026-01-28T00:48:14.414148303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:14.414879 containerd[1892]: time="2026-01-28T00:48:14.414745739Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.76338127s" Jan 28 00:48:14.414879 containerd[1892]: time="2026-01-28T00:48:14.414771676Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Jan 28 00:48:14.415444 containerd[1892]: time="2026-01-28T00:48:14.415419370Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 28 00:48:14.939859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2438663047.mount: Deactivated successfully. 
Jan 28 00:48:14.959529 containerd[1892]: time="2026-01-28T00:48:14.959062153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:14.962354 containerd[1892]: time="2026-01-28T00:48:14.962326950Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Jan 28 00:48:14.965077 containerd[1892]: time="2026-01-28T00:48:14.965054297Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:14.969895 containerd[1892]: time="2026-01-28T00:48:14.969863066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:14.970323 containerd[1892]: time="2026-01-28T00:48:14.970300545Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 554.855679ms" Jan 28 00:48:14.970515 containerd[1892]: time="2026-01-28T00:48:14.970404644Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Jan 28 00:48:14.970925 containerd[1892]: time="2026-01-28T00:48:14.970848683Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 28 00:48:15.590357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3667477093.mount: Deactivated successfully. Jan 28 00:48:18.813028 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Jan 28 00:48:19.366058 containerd[1892]: time="2026-01-28T00:48:19.365716380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:19.369948 containerd[1892]: time="2026-01-28T00:48:19.369916814Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062987" Jan 28 00:48:19.372817 containerd[1892]: time="2026-01-28T00:48:19.372791307Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:19.382227 containerd[1892]: time="2026-01-28T00:48:19.382173480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:19.382719 containerd[1892]: time="2026-01-28T00:48:19.382577848Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 4.411501013s" Jan 28 00:48:19.382719 containerd[1892]: time="2026-01-28T00:48:19.382607633Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Jan 28 00:48:20.384706 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 28 00:48:20.388182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:48:20.577146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:48:20.584410 (kubelet)[2818]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:48:20.609324 kubelet[2818]: E0128 00:48:20.609278 2818 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:48:20.611191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:48:20.611396 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:48:20.611902 systemd[1]: kubelet.service: Consumed 98ms CPU time, 106.8M memory peak. Jan 28 00:48:21.697123 update_engine[1880]: I20260128 00:48:21.697043 1880 update_attempter.cc:509] Updating boot flags... Jan 28 00:48:22.055408 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:48:22.055783 systemd[1]: kubelet.service: Consumed 98ms CPU time, 106.8M memory peak. Jan 28 00:48:22.058133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:48:22.082152 systemd[1]: Reload requested from client PID 2948 ('systemctl') (unit session-9.scope)... Jan 28 00:48:22.082165 systemd[1]: Reloading... Jan 28 00:48:22.172058 zram_generator::config[2995]: No configuration found. Jan 28 00:48:22.321956 systemd[1]: Reloading finished in 239 ms. 
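
The pull records above include both the bytes read and the wall-clock pull duration for each image, so effective throughput falls out of simple arithmetic. A back-of-the-envelope sketch using three (bytes, seconds) pairs copied verbatim from the entries above; containerd itself does not log a throughput figure:

# Values copied from the "bytes read" and "Pulled image ... in <duration>" entries above.
PULLS = {
    "registry.k8s.io/kube-apiserver:v1.34.3": (24571040, 2.148705144),
    "registry.k8s.io/kube-proxy:v1.34.3": (22805253, 1.682312958),
    "registry.k8s.io/etcd:3.6.4-0": (98062987, 4.411501013),
}

for image, (nbytes, seconds) in PULLS.items():
    mib_per_s = nbytes / seconds / (1024 * 1024)
    print(f"{image}: {mib_per_s:.1f} MiB/s")
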
Jan 28 00:48:22.367476 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 00:48:22.367532 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 28 00:48:22.367724 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:48:22.367757 systemd[1]: kubelet.service: Consumed 72ms CPU time, 95M memory peak. Jan 28 00:48:22.368774 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:48:22.574073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:48:22.579236 (kubelet)[3062]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 00:48:22.604467 kubelet[3062]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 00:48:22.604467 kubelet[3062]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:48:22.706874 kubelet[3062]: I0128 00:48:22.706803 3062 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 00:48:23.282326 kubelet[3062]: I0128 00:48:23.282213 3062 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 28 00:48:23.282326 kubelet[3062]: I0128 00:48:23.282249 3062 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 00:48:23.283253 kubelet[3062]: I0128 00:48:23.283238 3062 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 28 00:48:23.284808 kubelet[3062]: I0128 00:48:23.283320 3062 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 00:48:23.284808 kubelet[3062]: I0128 00:48:23.283509 3062 server.go:956] "Client rotation is on, will bootstrap in background" Jan 28 00:48:23.291816 kubelet[3062]: E0128 00:48:23.291787 3062 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.26:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 00:48:23.292601 kubelet[3062]: I0128 00:48:23.292578 3062 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 00:48:23.297093 kubelet[3062]: I0128 00:48:23.297079 3062 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 00:48:23.299634 kubelet[3062]: I0128 00:48:23.299620 3062 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 28 00:48:23.299911 kubelet[3062]: I0128 00:48:23.299886 3062 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 00:48:23.300103 kubelet[3062]: I0128 00:48:23.299965 3062 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.3-n-ec09cdb4df","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 00:48:23.300233 kubelet[3062]: I0128 00:48:23.300222 3062 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 00:48:23.300285 kubelet[3062]: I0128 00:48:23.300277 3062 container_manager_linux.go:306] "Creating device plugin manager" Jan 28 00:48:23.300412 kubelet[3062]: I0128 00:48:23.300403 3062 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 28 00:48:23.305796 kubelet[3062]: I0128 00:48:23.305775 3062 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:48:23.306850 kubelet[3062]: I0128 00:48:23.306833 3062 kubelet.go:475] "Attempting to sync node with API server" Jan 28 00:48:23.306933 kubelet[3062]: I0128 00:48:23.306922 3062 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 00:48:23.307350 kubelet[3062]: E0128 00:48:23.307328 3062 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.3-n-ec09cdb4df&limit=500&resourceVersion=0\": dial tcp 10.200.20.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 00:48:23.308053 kubelet[3062]: I0128 00:48:23.307641 3062 kubelet.go:387] "Adding apiserver pod source" Jan 28 00:48:23.308108 kubelet[3062]: I0128 00:48:23.308058 3062 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 00:48:23.308630 kubelet[3062]: E0128 00:48:23.308597 3062 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.200.20.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 00:48:23.308866 kubelet[3062]: I0128 00:48:23.308849 3062 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 28 00:48:23.309224 kubelet[3062]: I0128 00:48:23.309207 3062 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 28 00:48:23.309276 kubelet[3062]: I0128 00:48:23.309230 3062 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 28 00:48:23.309276 kubelet[3062]: W0128 00:48:23.309262 3062 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 28 00:48:23.311797 kubelet[3062]: I0128 00:48:23.311779 3062 server.go:1262] "Started kubelet" Jan 28 00:48:23.311945 kubelet[3062]: I0128 00:48:23.311925 3062 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 00:48:23.312549 kubelet[3062]: I0128 00:48:23.312528 3062 server.go:310] "Adding debug handlers to kubelet server" Jan 28 00:48:23.313961 kubelet[3062]: I0128 00:48:23.313909 3062 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 00:48:23.314036 kubelet[3062]: I0128 00:48:23.313969 3062 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 28 00:48:23.314216 kubelet[3062]: I0128 00:48:23.314194 3062 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 00:48:23.315721 kubelet[3062]: E0128 00:48:23.314299 3062 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.26:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.26:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.3-n-ec09cdb4df.188ebea13c2cacf5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.3-n-ec09cdb4df,UID:ci-4459.2.3-n-ec09cdb4df,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.3-n-ec09cdb4df,},FirstTimestamp:2026-01-28 00:48:23.311756533 +0000 UTC m=+0.729876592,LastTimestamp:2026-01-28 00:48:23.311756533 +0000 UTC m=+0.729876592,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.3-n-ec09cdb4df,}" Jan 28 00:48:23.316501 kubelet[3062]: I0128 00:48:23.316480 3062 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 00:48:23.317042 kubelet[3062]: I0128 00:48:23.316927 3062 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 00:48:23.318769 kubelet[3062]: E0128 00:48:23.318749 3062 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 00:48:23.318966 kubelet[3062]: E0128 00:48:23.318948 3062 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-ec09cdb4df\" not found" Jan 28 00:48:23.319630 kubelet[3062]: I0128 00:48:23.318977 3062 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 28 00:48:23.319630 kubelet[3062]: I0128 00:48:23.319149 3062 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 28 00:48:23.319630 kubelet[3062]: I0128 00:48:23.319187 3062 reconciler.go:29] "Reconciler: start to sync state" Jan 28 00:48:23.319630 kubelet[3062]: E0128 00:48:23.319620 3062 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 00:48:23.321009 kubelet[3062]: E0128 00:48:23.320952 3062 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-n-ec09cdb4df?timeout=10s\": dial tcp 10.200.20.26:6443: connect: connection refused" interval="200ms" Jan 28 00:48:23.321390 kubelet[3062]: I0128 00:48:23.321371 3062 factory.go:223] Registration of the containerd container factory successfully Jan 28 00:48:23.321390 kubelet[3062]: I0128 00:48:23.321384 3062 factory.go:223] Registration of the systemd container factory successfully Jan 28 00:48:23.321460 kubelet[3062]: I0128 00:48:23.321444 3062 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 00:48:23.349402 kubelet[3062]: I0128 00:48:23.349379 3062 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 00:48:23.349619 kubelet[3062]: I0128 00:48:23.349609 3062 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 00:48:23.349889 kubelet[3062]: I0128 00:48:23.349688 3062 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:48:23.352267 kubelet[3062]: I0128 00:48:23.352238 3062 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 28 00:48:23.353350 kubelet[3062]: I0128 00:48:23.353323 3062 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 28 00:48:23.353350 kubelet[3062]: I0128 00:48:23.353348 3062 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 28 00:48:23.353416 kubelet[3062]: I0128 00:48:23.353386 3062 kubelet.go:2427] "Starting kubelet main sync loop" Jan 28 00:48:23.353433 kubelet[3062]: E0128 00:48:23.353418 3062 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 00:48:23.354752 kubelet[3062]: E0128 00:48:23.354716 3062 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 00:48:23.356055 kubelet[3062]: I0128 00:48:23.355837 3062 policy_none.go:49] "None policy: Start" Jan 28 00:48:23.356055 kubelet[3062]: I0128 00:48:23.355863 3062 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 28 00:48:23.356055 kubelet[3062]: I0128 00:48:23.355873 3062 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 28 00:48:23.361026 kubelet[3062]: I0128 00:48:23.360990 3062 policy_none.go:47] "Start" Jan 28 00:48:23.364875 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 28 00:48:23.372448 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 00:48:23.375482 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 28 00:48:23.383705 kubelet[3062]: E0128 00:48:23.383682 3062 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 28 00:48:23.383705 kubelet[3062]: I0128 00:48:23.383857 3062 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 00:48:23.383705 kubelet[3062]: I0128 00:48:23.383867 3062 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 00:48:23.384218 kubelet[3062]: I0128 00:48:23.384181 3062 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 00:48:23.386412 kubelet[3062]: E0128 00:48:23.386386 3062 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 28 00:48:23.386684 kubelet[3062]: E0128 00:48:23.386579 3062 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.3-n-ec09cdb4df\" not found" Jan 28 00:48:23.466653 systemd[1]: Created slice kubepods-burstable-pod21ad18f7cf66b38214571eced76d96c7.slice - libcontainer container kubepods-burstable-pod21ad18f7cf66b38214571eced76d96c7.slice. Jan 28 00:48:23.474048 kubelet[3062]: E0128 00:48:23.473878 3062 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-ec09cdb4df\" not found" node="ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:23.478708 systemd[1]: Created slice kubepods-burstable-pod131f42115a1ca716b8507b91cb01ad9d.slice - libcontainer container kubepods-burstable-pod131f42115a1ca716b8507b91cb01ad9d.slice. 
Jan 28 00:48:23.479973 kubelet[3062]: E0128 00:48:23.479951 3062 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-ec09cdb4df\" not found" node="ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:23.482089 systemd[1]: Created slice kubepods-burstable-pod56ac6d8078dc44826554f97b2e350918.slice - libcontainer container kubepods-burstable-pod56ac6d8078dc44826554f97b2e350918.slice. Jan 28 00:48:23.483308 kubelet[3062]: E0128 00:48:23.483165 3062 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-ec09cdb4df\" not found" node="ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:23.485471 kubelet[3062]: I0128 00:48:23.485451 3062 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:23.485981 kubelet[3062]: E0128 00:48:23.485946 3062 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.26:6443/api/v1/nodes\": dial tcp 10.200.20.26:6443: connect: connection refused" node="ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:23.521611 kubelet[3062]: E0128 00:48:23.521562 3062 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-n-ec09cdb4df?timeout=10s\": dial tcp 10.200.20.26:6443: connect: connection refused" interval="400ms" Jan 28 00:48:23.620992 kubelet[3062]: I0128 00:48:23.620810 3062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/131f42115a1ca716b8507b91cb01ad9d-ca-certs\") pod \"kube-controller-manager-ci-4459.2.3-n-ec09cdb4df\" (UID: \"131f42115a1ca716b8507b91cb01ad9d\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:23.620992 kubelet[3062]: I0128 00:48:23.620846 3062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/131f42115a1ca716b8507b91cb01ad9d-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.3-n-ec09cdb4df\" (UID: \"131f42115a1ca716b8507b91cb01ad9d\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:23.620992 kubelet[3062]: I0128 00:48:23.620861 3062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/131f42115a1ca716b8507b91cb01ad9d-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.3-n-ec09cdb4df\" (UID: \"131f42115a1ca716b8507b91cb01ad9d\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:23.620992 kubelet[3062]: I0128 00:48:23.620889 3062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/56ac6d8078dc44826554f97b2e350918-kubeconfig\") pod \"kube-scheduler-ci-4459.2.3-n-ec09cdb4df\" (UID: \"56ac6d8078dc44826554f97b2e350918\") " pod="kube-system/kube-scheduler-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:23.620992 kubelet[3062]: I0128 00:48:23.620899 3062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21ad18f7cf66b38214571eced76d96c7-ca-certs\") pod \"kube-apiserver-ci-4459.2.3-n-ec09cdb4df\" (UID: \"21ad18f7cf66b38214571eced76d96c7\") " 
pod="kube-system/kube-apiserver-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:23.621390 kubelet[3062]: I0128 00:48:23.620909 3062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/131f42115a1ca716b8507b91cb01ad9d-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.3-n-ec09cdb4df\" (UID: \"131f42115a1ca716b8507b91cb01ad9d\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:23.621390 kubelet[3062]: I0128 00:48:23.620920 3062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/131f42115a1ca716b8507b91cb01ad9d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.3-n-ec09cdb4df\" (UID: \"131f42115a1ca716b8507b91cb01ad9d\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:23.621390 kubelet[3062]: I0128 00:48:23.620931 3062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21ad18f7cf66b38214571eced76d96c7-k8s-certs\") pod \"kube-apiserver-ci-4459.2.3-n-ec09cdb4df\" (UID: \"21ad18f7cf66b38214571eced76d96c7\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:23.621390 kubelet[3062]: I0128 00:48:23.620941 3062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21ad18f7cf66b38214571eced76d96c7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.3-n-ec09cdb4df\" (UID: \"21ad18f7cf66b38214571eced76d96c7\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:23.688522 kubelet[3062]: I0128 00:48:23.688477 3062 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:23.689024 kubelet[3062]: E0128 00:48:23.688990 3062 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.26:6443/api/v1/nodes\": dial tcp 10.200.20.26:6443: connect: connection refused" node="ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:23.781074 containerd[1892]: time="2026-01-28T00:48:23.781006187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.3-n-ec09cdb4df,Uid:21ad18f7cf66b38214571eced76d96c7,Namespace:kube-system,Attempt:0,}" Jan 28 00:48:23.785695 containerd[1892]: time="2026-01-28T00:48:23.785543873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.3-n-ec09cdb4df,Uid:131f42115a1ca716b8507b91cb01ad9d,Namespace:kube-system,Attempt:0,}" Jan 28 00:48:23.790224 containerd[1892]: time="2026-01-28T00:48:23.790194309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.3-n-ec09cdb4df,Uid:56ac6d8078dc44826554f97b2e350918,Namespace:kube-system,Attempt:0,}" Jan 28 00:48:23.922921 kubelet[3062]: E0128 00:48:23.922872 3062 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-n-ec09cdb4df?timeout=10s\": dial tcp 10.200.20.26:6443: connect: connection refused" interval="800ms" Jan 28 00:48:24.091034 kubelet[3062]: I0128 00:48:24.090744 3062 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:24.091289 kubelet[3062]: E0128 00:48:24.091241 3062 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.26:6443/api/v1/nodes\": dial tcp 10.200.20.26:6443: connect: connection refused" node="ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:24.183089 kubelet[3062]: E0128 00:48:24.182953 3062 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 00:48:24.528215 kubelet[3062]: E0128 00:48:24.528085 3062 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 00:48:24.530411 kubelet[3062]: E0128 00:48:24.530370 3062 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.3-n-ec09cdb4df&limit=500&resourceVersion=0\": dial tcp 10.200.20.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 00:48:24.641331 kubelet[3062]: E0128 00:48:24.641281 3062 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 00:48:24.724429 kubelet[3062]: E0128 00:48:24.724392 3062 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-n-ec09cdb4df?timeout=10s\": dial tcp 10.200.20.26:6443: connect: connection refused" interval="1.6s" Jan 28 00:48:24.893486 kubelet[3062]: I0128 00:48:24.893124 3062 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:24.893486 kubelet[3062]: E0128 00:48:24.893450 3062 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.26:6443/api/v1/nodes\": dial tcp 10.200.20.26:6443: connect: connection refused" node="ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:25.295736 kubelet[3062]: E0128 00:48:25.295637 3062 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.26:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 00:48:25.627886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1621234301.mount: Deactivated successfully. 
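
The lease-controller retries above back off by doubling: interval="200ms", then "400ms", "800ms", and "1.6s". A tiny sketch that reproduces only that observed doubling; the actual retry policy is internal to the kubelet and is not specified here:

def doubling_intervals(start_s: float = 0.2, factor: float = 2.0, n: int = 4) -> list:
    """Return n intervals starting at start_s, doubling each step (matches the log: 0.2, 0.4, 0.8, 1.6)."""
    out, interval = [], start_s
    for _ in range(n):
        out.append(interval)
        interval *= factor
    return out

print(doubling_intervals())  # [0.2, 0.4, 0.8, 1.6]
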
Jan 28 00:48:25.662697 containerd[1892]: time="2026-01-28T00:48:25.662219353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:48:25.669235 containerd[1892]: time="2026-01-28T00:48:25.669195601Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 28 00:48:25.676272 containerd[1892]: time="2026-01-28T00:48:25.676237730Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:48:25.680972 containerd[1892]: time="2026-01-28T00:48:25.680526568Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:48:25.683158 containerd[1892]: time="2026-01-28T00:48:25.683125758Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:48:25.689908 containerd[1892]: time="2026-01-28T00:48:25.689874518Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 28 00:48:25.693766 containerd[1892]: time="2026-01-28T00:48:25.693728478Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 28 00:48:25.698785 containerd[1892]: time="2026-01-28T00:48:25.698737660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:48:25.700037 containerd[1892]: time="2026-01-28T00:48:25.699087551Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.913994926s" Jan 28 00:48:25.703489 containerd[1892]: time="2026-01-28T00:48:25.703459192Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.909951597s" Jan 28 00:48:25.712671 containerd[1892]: time="2026-01-28T00:48:25.712621280Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.916124114s" Jan 28 00:48:25.764842 containerd[1892]: time="2026-01-28T00:48:25.764795289Z" level=info msg="connecting to shim 35096d6edec32a52d883febf3ca5f235ddec75f16ae8bc61519b945f852f8486" address="unix:///run/containerd/s/7bf9f518f19bb16cec3f64a92b75d53d8a6757d75fbac2af149dc59118a6f616" namespace=k8s.io protocol=ttrpc version=3 Jan 
28 00:48:25.784593 containerd[1892]: time="2026-01-28T00:48:25.783679195Z" level=info msg="connecting to shim f3ad44fe048e3bbc757ee71ccf84aabd580b9eb19cd3283d0d532b4476da47d6" address="unix:///run/containerd/s/bf10271b3d8a642e8d7288e58517e83efffb5cc8c6201f0e6893d03e0d2ff174" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:48:25.785316 containerd[1892]: time="2026-01-28T00:48:25.785212309Z" level=info msg="connecting to shim 2338ea530647b28b7b68089c0700176f4236cb414adfb1309b1bd95756bf59aa" address="unix:///run/containerd/s/cb63940f87d1e9a7665c5ff2e8b0069d9eefec8d7330f06acdecc4f05172f795" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:48:25.798195 systemd[1]: Started cri-containerd-35096d6edec32a52d883febf3ca5f235ddec75f16ae8bc61519b945f852f8486.scope - libcontainer container 35096d6edec32a52d883febf3ca5f235ddec75f16ae8bc61519b945f852f8486. Jan 28 00:48:25.805832 systemd[1]: Started cri-containerd-f3ad44fe048e3bbc757ee71ccf84aabd580b9eb19cd3283d0d532b4476da47d6.scope - libcontainer container f3ad44fe048e3bbc757ee71ccf84aabd580b9eb19cd3283d0d532b4476da47d6. Jan 28 00:48:25.811412 systemd[1]: Started cri-containerd-2338ea530647b28b7b68089c0700176f4236cb414adfb1309b1bd95756bf59aa.scope - libcontainer container 2338ea530647b28b7b68089c0700176f4236cb414adfb1309b1bd95756bf59aa. Jan 28 00:48:25.859638 containerd[1892]: time="2026-01-28T00:48:25.859599198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.3-n-ec09cdb4df,Uid:21ad18f7cf66b38214571eced76d96c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"35096d6edec32a52d883febf3ca5f235ddec75f16ae8bc61519b945f852f8486\"" Jan 28 00:48:25.862958 containerd[1892]: time="2026-01-28T00:48:25.862922805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.3-n-ec09cdb4df,Uid:131f42115a1ca716b8507b91cb01ad9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2338ea530647b28b7b68089c0700176f4236cb414adfb1309b1bd95756bf59aa\"" Jan 28 00:48:25.866126 containerd[1892]: time="2026-01-28T00:48:25.866084493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.3-n-ec09cdb4df,Uid:56ac6d8078dc44826554f97b2e350918,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3ad44fe048e3bbc757ee71ccf84aabd580b9eb19cd3283d0d532b4476da47d6\"" Jan 28 00:48:25.870077 containerd[1892]: time="2026-01-28T00:48:25.870047097Z" level=info msg="CreateContainer within sandbox \"35096d6edec32a52d883febf3ca5f235ddec75f16ae8bc61519b945f852f8486\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 00:48:25.874933 containerd[1892]: time="2026-01-28T00:48:25.874834159Z" level=info msg="CreateContainer within sandbox \"2338ea530647b28b7b68089c0700176f4236cb414adfb1309b1bd95756bf59aa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 00:48:25.879614 containerd[1892]: time="2026-01-28T00:48:25.879514066Z" level=info msg="CreateContainer within sandbox \"f3ad44fe048e3bbc757ee71ccf84aabd580b9eb19cd3283d0d532b4476da47d6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 00:48:25.895783 containerd[1892]: time="2026-01-28T00:48:25.895048509Z" level=info msg="Container 30264b3c46167fe4b1ae21644392578518690c9985c135d87699394108b82e6e: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:48:25.928917 containerd[1892]: time="2026-01-28T00:48:25.928870886Z" level=info msg="Container 0a71c4008b1e8fd8746463cf0b4679a4f184fac2970dadb30bd70e2c75d65156: CDI devices from CRI Config.CDIDevices: []" Jan 28 
00:48:25.930410 containerd[1892]: time="2026-01-28T00:48:25.930385120Z" level=info msg="CreateContainer within sandbox \"35096d6edec32a52d883febf3ca5f235ddec75f16ae8bc61519b945f852f8486\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"30264b3c46167fe4b1ae21644392578518690c9985c135d87699394108b82e6e\"" Jan 28 00:48:25.931510 containerd[1892]: time="2026-01-28T00:48:25.931129689Z" level=info msg="StartContainer for \"30264b3c46167fe4b1ae21644392578518690c9985c135d87699394108b82e6e\"" Jan 28 00:48:25.932227 containerd[1892]: time="2026-01-28T00:48:25.932202956Z" level=info msg="connecting to shim 30264b3c46167fe4b1ae21644392578518690c9985c135d87699394108b82e6e" address="unix:///run/containerd/s/7bf9f518f19bb16cec3f64a92b75d53d8a6757d75fbac2af149dc59118a6f616" protocol=ttrpc version=3 Jan 28 00:48:25.935201 containerd[1892]: time="2026-01-28T00:48:25.935146230Z" level=info msg="Container 305cbaaf60554cd1d509a9624340417a3eb8dfb5f9a2646f9d5c8e7973db4f8e: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:48:25.949164 systemd[1]: Started cri-containerd-30264b3c46167fe4b1ae21644392578518690c9985c135d87699394108b82e6e.scope - libcontainer container 30264b3c46167fe4b1ae21644392578518690c9985c135d87699394108b82e6e. Jan 28 00:48:25.952683 containerd[1892]: time="2026-01-28T00:48:25.952474180Z" level=info msg="CreateContainer within sandbox \"2338ea530647b28b7b68089c0700176f4236cb414adfb1309b1bd95756bf59aa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0a71c4008b1e8fd8746463cf0b4679a4f184fac2970dadb30bd70e2c75d65156\"" Jan 28 00:48:25.953836 containerd[1892]: time="2026-01-28T00:48:25.953297399Z" level=info msg="StartContainer for \"0a71c4008b1e8fd8746463cf0b4679a4f184fac2970dadb30bd70e2c75d65156\"" Jan 28 00:48:25.954643 containerd[1892]: time="2026-01-28T00:48:25.954620059Z" level=info msg="connecting to shim 0a71c4008b1e8fd8746463cf0b4679a4f184fac2970dadb30bd70e2c75d65156" address="unix:///run/containerd/s/cb63940f87d1e9a7665c5ff2e8b0069d9eefec8d7330f06acdecc4f05172f795" protocol=ttrpc version=3 Jan 28 00:48:25.969883 containerd[1892]: time="2026-01-28T00:48:25.969849459Z" level=info msg="CreateContainer within sandbox \"f3ad44fe048e3bbc757ee71ccf84aabd580b9eb19cd3283d0d532b4476da47d6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"305cbaaf60554cd1d509a9624340417a3eb8dfb5f9a2646f9d5c8e7973db4f8e\"" Jan 28 00:48:25.970513 containerd[1892]: time="2026-01-28T00:48:25.970492521Z" level=info msg="StartContainer for \"305cbaaf60554cd1d509a9624340417a3eb8dfb5f9a2646f9d5c8e7973db4f8e\"" Jan 28 00:48:25.974860 containerd[1892]: time="2026-01-28T00:48:25.974816584Z" level=info msg="connecting to shim 305cbaaf60554cd1d509a9624340417a3eb8dfb5f9a2646f9d5c8e7973db4f8e" address="unix:///run/containerd/s/bf10271b3d8a642e8d7288e58517e83efffb5cc8c6201f0e6893d03e0d2ff174" protocol=ttrpc version=3 Jan 28 00:48:25.975226 systemd[1]: Started cri-containerd-0a71c4008b1e8fd8746463cf0b4679a4f184fac2970dadb30bd70e2c75d65156.scope - libcontainer container 0a71c4008b1e8fd8746463cf0b4679a4f184fac2970dadb30bd70e2c75d65156. Jan 28 00:48:26.005196 systemd[1]: Started cri-containerd-305cbaaf60554cd1d509a9624340417a3eb8dfb5f9a2646f9d5c8e7973db4f8e.scope - libcontainer container 305cbaaf60554cd1d509a9624340417a3eb8dfb5f9a2646f9d5c8e7973db4f8e. 
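
Each sandbox and container start above is preceded by a "connecting to shim <id>" entry carrying the shim's unix socket address. A small helper, written against exactly the entry format shown above and nothing more, that extracts (container id, socket address) pairs from such log text; it is not part of containerd or of any tool on this node:

import re

# Matches entries like: msg="connecting to shim <64-hex-id>" address="unix:///run/containerd/s/..."
SHIM_RE = re.compile(
    r'connecting to shim (?P<cid>[0-9a-f]{64}).*?address="(?P<addr>unix://[^"]+)"'
)

def shim_addresses(log_text: str) -> dict:
    """Map each container/sandbox id to the shim socket it was started through."""
    return {m.group("cid"): m.group("addr") for m in SHIM_RE.finditer(log_text)}
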
Jan 28 00:48:26.016030 containerd[1892]: time="2026-01-28T00:48:26.015957731Z" level=info msg="StartContainer for \"30264b3c46167fe4b1ae21644392578518690c9985c135d87699394108b82e6e\" returns successfully" Jan 28 00:48:26.029480 containerd[1892]: time="2026-01-28T00:48:26.029440418Z" level=info msg="StartContainer for \"0a71c4008b1e8fd8746463cf0b4679a4f184fac2970dadb30bd70e2c75d65156\" returns successfully" Jan 28 00:48:26.083886 containerd[1892]: time="2026-01-28T00:48:26.083852877Z" level=info msg="StartContainer for \"305cbaaf60554cd1d509a9624340417a3eb8dfb5f9a2646f9d5c8e7973db4f8e\" returns successfully" Jan 28 00:48:26.368100 kubelet[3062]: E0128 00:48:26.368027 3062 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-ec09cdb4df\" not found" node="ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:26.371872 kubelet[3062]: E0128 00:48:26.371853 3062 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-ec09cdb4df\" not found" node="ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:26.374470 kubelet[3062]: E0128 00:48:26.374400 3062 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-ec09cdb4df\" not found" node="ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:26.495550 kubelet[3062]: I0128 00:48:26.495527 3062 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:27.148028 kubelet[3062]: E0128 00:48:27.147185 3062 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.3-n-ec09cdb4df\" not found" node="ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:27.262825 kubelet[3062]: I0128 00:48:27.262649 3062 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:27.311091 kubelet[3062]: I0128 00:48:27.311053 3062 apiserver.go:52] "Watching apiserver" Jan 28 00:48:27.320030 kubelet[3062]: I0128 00:48:27.319990 3062 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 00:48:27.321089 kubelet[3062]: I0128 00:48:27.321047 3062 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:27.327018 kubelet[3062]: E0128 00:48:27.326975 3062 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.3-n-ec09cdb4df\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:27.327018 kubelet[3062]: I0128 00:48:27.327008 3062 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:27.328112 kubelet[3062]: E0128 00:48:27.328085 3062 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.3-n-ec09cdb4df\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:27.328112 kubelet[3062]: I0128 00:48:27.328109 3062 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:27.331052 kubelet[3062]: E0128 00:48:27.331028 3062 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.3-n-ec09cdb4df\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:27.373159 kubelet[3062]: I0128 00:48:27.373130 3062 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:27.373821 kubelet[3062]: I0128 00:48:27.373481 3062 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:27.377208 kubelet[3062]: E0128 00:48:27.376961 3062 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.3-n-ec09cdb4df\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:27.377208 kubelet[3062]: E0128 00:48:27.377107 3062 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.3-n-ec09cdb4df\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:29.142203 systemd[1]: Reload requested from client PID 3341 ('systemctl') (unit session-9.scope)... Jan 28 00:48:29.142218 systemd[1]: Reloading... Jan 28 00:48:29.224138 zram_generator::config[3388]: No configuration found. Jan 28 00:48:29.384279 systemd[1]: Reloading finished in 241 ms. Jan 28 00:48:29.408890 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:48:29.424317 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 00:48:29.424509 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:48:29.424556 systemd[1]: kubelet.service: Consumed 879ms CPU time, 119.6M memory peak. Jan 28 00:48:29.426481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:48:29.547207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:48:29.556414 (kubelet)[3452]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 00:48:29.581895 kubelet[3452]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 00:48:29.581895 kubelet[3452]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:48:29.581895 kubelet[3452]: I0128 00:48:29.580809 3452 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 00:48:29.586130 kubelet[3452]: I0128 00:48:29.586103 3452 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 28 00:48:29.586130 kubelet[3452]: I0128 00:48:29.586124 3452 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 00:48:29.586208 kubelet[3452]: I0128 00:48:29.586147 3452 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 28 00:48:29.586208 kubelet[3452]: I0128 00:48:29.586152 3452 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 28 00:48:29.586574 kubelet[3452]: I0128 00:48:29.586298 3452 server.go:956] "Client rotation is on, will bootstrap in background" Jan 28 00:48:29.587271 kubelet[3452]: I0128 00:48:29.587255 3452 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 28 00:48:29.592133 kubelet[3452]: I0128 00:48:29.592102 3452 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 00:48:29.600217 kubelet[3452]: I0128 00:48:29.600159 3452 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 00:48:29.602457 kubelet[3452]: I0128 00:48:29.602433 3452 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 28 00:48:29.602702 kubelet[3452]: I0128 00:48:29.602585 3452 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 00:48:29.602821 kubelet[3452]: I0128 00:48:29.602605 3452 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.3-n-ec09cdb4df","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 00:48:29.602821 kubelet[3452]: I0128 00:48:29.602724 3452 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 00:48:29.602821 kubelet[3452]: I0128 00:48:29.602731 3452 container_manager_linux.go:306] "Creating device plugin manager" Jan 28 00:48:29.602821 kubelet[3452]: I0128 00:48:29.602750 3452 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 28 00:48:29.603312 kubelet[3452]: I0128 00:48:29.603295 3452 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:48:29.603434 kubelet[3452]: I0128 00:48:29.603416 3452 kubelet.go:475] "Attempting to sync node with API server" Jan 28 00:48:29.603434 kubelet[3452]: I0128 00:48:29.603431 3452 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 00:48:29.603478 kubelet[3452]: I0128 00:48:29.603449 3452 kubelet.go:387] "Adding apiserver 
pod source" Jan 28 00:48:29.605054 kubelet[3452]: I0128 00:48:29.603457 3452 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 00:48:29.610790 kubelet[3452]: I0128 00:48:29.609356 3452 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 28 00:48:29.610790 kubelet[3452]: I0128 00:48:29.609711 3452 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 28 00:48:29.610790 kubelet[3452]: I0128 00:48:29.609728 3452 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 28 00:48:29.616661 kubelet[3452]: I0128 00:48:29.616224 3452 server.go:1262] "Started kubelet" Jan 28 00:48:29.617891 kubelet[3452]: I0128 00:48:29.617347 3452 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 00:48:29.618280 kubelet[3452]: I0128 00:48:29.618088 3452 server.go:310] "Adding debug handlers to kubelet server" Jan 28 00:48:29.619342 kubelet[3452]: I0128 00:48:29.619189 3452 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 00:48:29.619704 kubelet[3452]: I0128 00:48:29.619598 3452 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 28 00:48:29.620806 kubelet[3452]: I0128 00:48:29.620134 3452 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 00:48:29.622195 kubelet[3452]: I0128 00:48:29.621962 3452 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 00:48:29.623766 kubelet[3452]: I0128 00:48:29.623270 3452 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 00:48:29.628274 kubelet[3452]: I0128 00:48:29.628171 3452 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 28 00:48:29.630616 kubelet[3452]: I0128 00:48:29.630444 3452 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 28 00:48:29.630616 kubelet[3452]: I0128 00:48:29.630572 3452 reconciler.go:29] "Reconciler: start to sync state" Jan 28 00:48:29.632912 kubelet[3452]: I0128 00:48:29.632520 3452 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 28 00:48:29.633409 kubelet[3452]: I0128 00:48:29.633336 3452 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 28 00:48:29.633409 kubelet[3452]: I0128 00:48:29.633357 3452 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 28 00:48:29.633409 kubelet[3452]: I0128 00:48:29.633375 3452 kubelet.go:2427] "Starting kubelet main sync loop" Jan 28 00:48:29.633500 kubelet[3452]: E0128 00:48:29.633412 3452 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 00:48:29.643538 kubelet[3452]: E0128 00:48:29.643320 3452 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 00:48:29.644588 kubelet[3452]: I0128 00:48:29.644398 3452 factory.go:223] Registration of the containerd container factory successfully Jan 28 00:48:29.645417 kubelet[3452]: I0128 00:48:29.645355 3452 factory.go:223] Registration of the systemd container factory successfully Jan 28 00:48:29.647078 kubelet[3452]: I0128 00:48:29.647056 3452 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 00:48:29.691581 kubelet[3452]: I0128 00:48:29.691267 3452 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 00:48:29.692710 kubelet[3452]: I0128 00:48:29.692685 3452 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 00:48:29.693140 kubelet[3452]: I0128 00:48:29.693057 3452 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:48:29.693863 kubelet[3452]: I0128 00:48:29.693666 3452 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 00:48:29.694515 kubelet[3452]: I0128 00:48:29.694474 3452 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 00:48:29.694755 kubelet[3452]: I0128 00:48:29.694741 3452 policy_none.go:49] "None policy: Start" Jan 28 00:48:29.694827 kubelet[3452]: I0128 00:48:29.694818 3452 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 28 00:48:29.694873 kubelet[3452]: I0128 00:48:29.694864 3452 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 28 00:48:29.695054 kubelet[3452]: I0128 00:48:29.695042 3452 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 28 00:48:29.695114 kubelet[3452]: I0128 00:48:29.695107 3452 policy_none.go:47] "Start" Jan 28 00:48:29.699779 kubelet[3452]: E0128 00:48:29.699764 3452 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 28 00:48:29.700188 kubelet[3452]: I0128 00:48:29.700172 3452 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 00:48:29.700355 kubelet[3452]: I0128 00:48:29.700329 3452 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 00:48:29.700810 kubelet[3452]: I0128 00:48:29.700646 3452 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 00:48:29.701790 kubelet[3452]: E0128 00:48:29.701774 3452 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 00:48:29.734748 kubelet[3452]: I0128 00:48:29.734709 3452 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:29.735030 kubelet[3452]: I0128 00:48:29.734938 3452 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:29.735030 kubelet[3452]: I0128 00:48:29.735009 3452 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:29.740867 kubelet[3452]: I0128 00:48:29.740841 3452 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 00:48:29.745604 kubelet[3452]: I0128 00:48:29.745496 3452 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 00:48:29.745989 kubelet[3452]: I0128 00:48:29.745683 3452 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 00:48:29.813815 kubelet[3452]: I0128 00:48:29.811241 3452 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:29.820477 kubelet[3452]: I0128 00:48:29.820444 3452 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:29.820605 kubelet[3452]: I0128 00:48:29.820530 3452 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:29.831507 kubelet[3452]: I0128 00:48:29.831338 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/131f42115a1ca716b8507b91cb01ad9d-ca-certs\") pod \"kube-controller-manager-ci-4459.2.3-n-ec09cdb4df\" (UID: \"131f42115a1ca716b8507b91cb01ad9d\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:29.831507 kubelet[3452]: I0128 00:48:29.831366 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/131f42115a1ca716b8507b91cb01ad9d-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.3-n-ec09cdb4df\" (UID: \"131f42115a1ca716b8507b91cb01ad9d\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:29.831507 kubelet[3452]: I0128 00:48:29.831381 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/131f42115a1ca716b8507b91cb01ad9d-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.3-n-ec09cdb4df\" (UID: \"131f42115a1ca716b8507b91cb01ad9d\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:29.831507 kubelet[3452]: I0128 00:48:29.831392 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/131f42115a1ca716b8507b91cb01ad9d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.3-n-ec09cdb4df\" (UID: \"131f42115a1ca716b8507b91cb01ad9d\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:29.831507 
kubelet[3452]: I0128 00:48:29.831404 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/56ac6d8078dc44826554f97b2e350918-kubeconfig\") pod \"kube-scheduler-ci-4459.2.3-n-ec09cdb4df\" (UID: \"56ac6d8078dc44826554f97b2e350918\") " pod="kube-system/kube-scheduler-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:29.831699 kubelet[3452]: I0128 00:48:29.831417 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21ad18f7cf66b38214571eced76d96c7-ca-certs\") pod \"kube-apiserver-ci-4459.2.3-n-ec09cdb4df\" (UID: \"21ad18f7cf66b38214571eced76d96c7\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:29.831699 kubelet[3452]: I0128 00:48:29.831427 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21ad18f7cf66b38214571eced76d96c7-k8s-certs\") pod \"kube-apiserver-ci-4459.2.3-n-ec09cdb4df\" (UID: \"21ad18f7cf66b38214571eced76d96c7\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:29.831699 kubelet[3452]: I0128 00:48:29.831439 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/131f42115a1ca716b8507b91cb01ad9d-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.3-n-ec09cdb4df\" (UID: \"131f42115a1ca716b8507b91cb01ad9d\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:29.831699 kubelet[3452]: I0128 00:48:29.831451 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21ad18f7cf66b38214571eced76d96c7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.3-n-ec09cdb4df\" (UID: \"21ad18f7cf66b38214571eced76d96c7\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:30.605948 kubelet[3452]: I0128 00:48:30.605665 3452 apiserver.go:52] "Watching apiserver" Jan 28 00:48:30.631201 kubelet[3452]: I0128 00:48:30.631166 3452 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 00:48:30.673023 kubelet[3452]: I0128 00:48:30.672986 3452 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:30.673169 kubelet[3452]: I0128 00:48:30.673148 3452 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:30.673507 kubelet[3452]: I0128 00:48:30.673485 3452 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:30.684113 kubelet[3452]: I0128 00:48:30.683841 3452 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 00:48:30.684113 kubelet[3452]: E0128 00:48:30.683919 3452 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.3-n-ec09cdb4df\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:30.685829 kubelet[3452]: I0128 00:48:30.685782 3452 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a 
DNS label is recommended: [must not contain dots]" Jan 28 00:48:30.685829 kubelet[3452]: E0128 00:48:30.685819 3452 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.3-n-ec09cdb4df\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:30.686976 kubelet[3452]: I0128 00:48:30.686954 3452 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 00:48:30.687060 kubelet[3452]: E0128 00:48:30.686993 3452 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.3-n-ec09cdb4df\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ec09cdb4df" Jan 28 00:48:30.720374 kubelet[3452]: I0128 00:48:30.720183 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.3-n-ec09cdb4df" podStartSLOduration=1.720166437 podStartE2EDuration="1.720166437s" podCreationTimestamp="2026-01-28 00:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:48:30.720151941 +0000 UTC m=+1.160865958" watchObservedRunningTime="2026-01-28 00:48:30.720166437 +0000 UTC m=+1.160880462" Jan 28 00:48:30.730593 kubelet[3452]: I0128 00:48:30.730541 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ec09cdb4df" podStartSLOduration=1.73052494 podStartE2EDuration="1.73052494s" podCreationTimestamp="2026-01-28 00:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:48:30.729795268 +0000 UTC m=+1.170509285" watchObservedRunningTime="2026-01-28 00:48:30.73052494 +0000 UTC m=+1.171238957" Jan 28 00:48:30.740193 kubelet[3452]: I0128 00:48:30.740113 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.3-n-ec09cdb4df" podStartSLOduration=1.740077377 podStartE2EDuration="1.740077377s" podCreationTimestamp="2026-01-28 00:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:48:30.739821481 +0000 UTC m=+1.180535498" watchObservedRunningTime="2026-01-28 00:48:30.740077377 +0000 UTC m=+1.180791394" Jan 28 00:48:30.816883 sudo[3490]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 28 00:48:30.817752 sudo[3490]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 28 00:48:31.058362 sudo[3490]: pam_unix(sudo:session): session closed for user root Jan 28 00:48:32.346748 sudo[2366]: pam_unix(sudo:session): session closed for user root Jan 28 00:48:32.424632 sshd[2365]: Connection closed by 10.200.16.10 port 47388 Jan 28 00:48:32.424539 sshd-session[2362]: pam_unix(sshd:session): session closed for user core Jan 28 00:48:32.428867 systemd-logind[1876]: Session 9 logged out. Waiting for processes to exit. Jan 28 00:48:32.429459 systemd[1]: sshd@6-10.200.20.26:22-10.200.16.10:47388.service: Deactivated successfully. Jan 28 00:48:32.431531 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 00:48:32.431761 systemd[1]: session-9.scope: Consumed 3.532s CPU time, 262.1M memory peak. 
Jan 28 00:48:32.434054 systemd-logind[1876]: Removed session 9. Jan 28 00:48:35.789720 kubelet[3452]: I0128 00:48:35.789573 3452 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 00:48:35.790293 containerd[1892]: time="2026-01-28T00:48:35.790259676Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 28 00:48:35.790846 kubelet[3452]: I0128 00:48:35.790445 3452 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 00:48:36.650791 systemd[1]: Created slice kubepods-burstable-poda94ecc52_8097_45ec_977e_59e6e58bdce3.slice - libcontainer container kubepods-burstable-poda94ecc52_8097_45ec_977e_59e6e58bdce3.slice. Jan 28 00:48:36.660838 systemd[1]: Created slice kubepods-besteffort-pode234023d_1bd5_4cdb_ae78_e1934bfce7d1.slice - libcontainer container kubepods-besteffort-pode234023d_1bd5_4cdb_ae78_e1934bfce7d1.slice. Jan 28 00:48:36.668117 kubelet[3452]: I0128 00:48:36.668084 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-etc-cni-netd\") pod \"cilium-4dxwj\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " pod="kube-system/cilium-4dxwj" Jan 28 00:48:36.668295 kubelet[3452]: I0128 00:48:36.668221 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-xtables-lock\") pod \"cilium-4dxwj\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " pod="kube-system/cilium-4dxwj" Jan 28 00:48:36.668295 kubelet[3452]: I0128 00:48:36.668261 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e234023d-1bd5-4cdb-ae78-e1934bfce7d1-kube-proxy\") pod \"kube-proxy-k75gz\" (UID: \"e234023d-1bd5-4cdb-ae78-e1934bfce7d1\") " pod="kube-system/kube-proxy-k75gz" Jan 28 00:48:36.668295 kubelet[3452]: I0128 00:48:36.668272 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e234023d-1bd5-4cdb-ae78-e1934bfce7d1-xtables-lock\") pod \"kube-proxy-k75gz\" (UID: \"e234023d-1bd5-4cdb-ae78-e1934bfce7d1\") " pod="kube-system/kube-proxy-k75gz" Jan 28 00:48:36.668295 kubelet[3452]: I0128 00:48:36.668280 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e234023d-1bd5-4cdb-ae78-e1934bfce7d1-lib-modules\") pod \"kube-proxy-k75gz\" (UID: \"e234023d-1bd5-4cdb-ae78-e1934bfce7d1\") " pod="kube-system/kube-proxy-k75gz" Jan 28 00:48:36.668655 kubelet[3452]: I0128 00:48:36.668545 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-lib-modules\") pod \"cilium-4dxwj\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " pod="kube-system/cilium-4dxwj" Jan 28 00:48:36.668655 kubelet[3452]: I0128 00:48:36.668573 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-host-proc-sys-net\") pod \"cilium-4dxwj\" (UID: 
\"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " pod="kube-system/cilium-4dxwj" Jan 28 00:48:36.668655 kubelet[3452]: I0128 00:48:36.668608 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a94ecc52-8097-45ec-977e-59e6e58bdce3-hubble-tls\") pod \"cilium-4dxwj\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " pod="kube-system/cilium-4dxwj" Jan 28 00:48:36.668655 kubelet[3452]: I0128 00:48:36.668627 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5xbz\" (UniqueName: \"kubernetes.io/projected/a94ecc52-8097-45ec-977e-59e6e58bdce3-kube-api-access-c5xbz\") pod \"cilium-4dxwj\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " pod="kube-system/cilium-4dxwj" Jan 28 00:48:36.668655 kubelet[3452]: I0128 00:48:36.668638 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-cilium-run\") pod \"cilium-4dxwj\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " pod="kube-system/cilium-4dxwj" Jan 28 00:48:36.668851 kubelet[3452]: I0128 00:48:36.668645 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-bpf-maps\") pod \"cilium-4dxwj\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " pod="kube-system/cilium-4dxwj" Jan 28 00:48:36.668851 kubelet[3452]: I0128 00:48:36.668804 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-hostproc\") pod \"cilium-4dxwj\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " pod="kube-system/cilium-4dxwj" Jan 28 00:48:36.668851 kubelet[3452]: I0128 00:48:36.668817 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-cilium-cgroup\") pod \"cilium-4dxwj\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " pod="kube-system/cilium-4dxwj" Jan 28 00:48:36.668851 kubelet[3452]: I0128 00:48:36.668826 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a94ecc52-8097-45ec-977e-59e6e58bdce3-clustermesh-secrets\") pod \"cilium-4dxwj\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " pod="kube-system/cilium-4dxwj" Jan 28 00:48:36.668851 kubelet[3452]: I0128 00:48:36.668839 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-host-proc-sys-kernel\") pod \"cilium-4dxwj\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " pod="kube-system/cilium-4dxwj" Jan 28 00:48:36.669023 kubelet[3452]: I0128 00:48:36.668974 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-cni-path\") pod \"cilium-4dxwj\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " pod="kube-system/cilium-4dxwj" Jan 28 00:48:36.669023 kubelet[3452]: I0128 00:48:36.668995 3452 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a94ecc52-8097-45ec-977e-59e6e58bdce3-cilium-config-path\") pod \"cilium-4dxwj\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " pod="kube-system/cilium-4dxwj" Jan 28 00:48:36.669023 kubelet[3452]: I0128 00:48:36.669006 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccbg2\" (UniqueName: \"kubernetes.io/projected/e234023d-1bd5-4cdb-ae78-e1934bfce7d1-kube-api-access-ccbg2\") pod \"kube-proxy-k75gz\" (UID: \"e234023d-1bd5-4cdb-ae78-e1934bfce7d1\") " pod="kube-system/kube-proxy-k75gz" Jan 28 00:48:36.961589 containerd[1892]: time="2026-01-28T00:48:36.960969666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4dxwj,Uid:a94ecc52-8097-45ec-977e-59e6e58bdce3,Namespace:kube-system,Attempt:0,}" Jan 28 00:48:36.975741 containerd[1892]: time="2026-01-28T00:48:36.975712261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k75gz,Uid:e234023d-1bd5-4cdb-ae78-e1934bfce7d1,Namespace:kube-system,Attempt:0,}" Jan 28 00:48:37.022398 containerd[1892]: time="2026-01-28T00:48:37.022247553Z" level=info msg="connecting to shim 7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401" address="unix:///run/containerd/s/7bf41dfd6c17993f09727ce8d096953486e47589aec2658004b53362abcc560a" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:48:37.051352 containerd[1892]: time="2026-01-28T00:48:37.051072417Z" level=info msg="connecting to shim 3070933daec284320c865d6aae619cec0adafa3b30d691595aecc9415df7c8d2" address="unix:///run/containerd/s/c3fc7ebb17ce576811e7870a1d7724b35204c8d1f03a2eb3aa7ffb48397be187" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:48:37.052230 systemd[1]: Created slice kubepods-besteffort-pod7c32d953_d1dc_4175_9b3c_0ad23940fc49.slice - libcontainer container kubepods-besteffort-pod7c32d953_d1dc_4175_9b3c_0ad23940fc49.slice. Jan 28 00:48:37.072049 kubelet[3452]: I0128 00:48:37.071993 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbd6c\" (UniqueName: \"kubernetes.io/projected/7c32d953-d1dc-4175-9b3c-0ad23940fc49-kube-api-access-fbd6c\") pod \"cilium-operator-6f9c7c5859-42h42\" (UID: \"7c32d953-d1dc-4175-9b3c-0ad23940fc49\") " pod="kube-system/cilium-operator-6f9c7c5859-42h42" Jan 28 00:48:37.072314 kubelet[3452]: I0128 00:48:37.072062 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c32d953-d1dc-4175-9b3c-0ad23940fc49-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-42h42\" (UID: \"7c32d953-d1dc-4175-9b3c-0ad23940fc49\") " pod="kube-system/cilium-operator-6f9c7c5859-42h42" Jan 28 00:48:37.081160 systemd[1]: Started cri-containerd-3070933daec284320c865d6aae619cec0adafa3b30d691595aecc9415df7c8d2.scope - libcontainer container 3070933daec284320c865d6aae619cec0adafa3b30d691595aecc9415df7c8d2. Jan 28 00:48:37.083136 systemd[1]: Started cri-containerd-7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401.scope - libcontainer container 7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401. 
Jan 28 00:48:37.134144 containerd[1892]: time="2026-01-28T00:48:37.134110721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4dxwj,Uid:a94ecc52-8097-45ec-977e-59e6e58bdce3,Namespace:kube-system,Attempt:0,} returns sandbox id \"7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401\"" Jan 28 00:48:37.137958 containerd[1892]: time="2026-01-28T00:48:37.137926597Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 28 00:48:37.147507 containerd[1892]: time="2026-01-28T00:48:37.147480193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k75gz,Uid:e234023d-1bd5-4cdb-ae78-e1934bfce7d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"3070933daec284320c865d6aae619cec0adafa3b30d691595aecc9415df7c8d2\"" Jan 28 00:48:37.155389 containerd[1892]: time="2026-01-28T00:48:37.155362977Z" level=info msg="CreateContainer within sandbox \"3070933daec284320c865d6aae619cec0adafa3b30d691595aecc9415df7c8d2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 00:48:37.174489 containerd[1892]: time="2026-01-28T00:48:37.174375142Z" level=info msg="Container 9ca8d5e0f03c8a7f46395201692c775722710a5ff261a8f61d356c05e5562cf3: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:48:37.200358 containerd[1892]: time="2026-01-28T00:48:37.200312486Z" level=info msg="CreateContainer within sandbox \"3070933daec284320c865d6aae619cec0adafa3b30d691595aecc9415df7c8d2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9ca8d5e0f03c8a7f46395201692c775722710a5ff261a8f61d356c05e5562cf3\"" Jan 28 00:48:37.201831 containerd[1892]: time="2026-01-28T00:48:37.201141319Z" level=info msg="StartContainer for \"9ca8d5e0f03c8a7f46395201692c775722710a5ff261a8f61d356c05e5562cf3\"" Jan 28 00:48:37.202173 containerd[1892]: time="2026-01-28T00:48:37.202146998Z" level=info msg="connecting to shim 9ca8d5e0f03c8a7f46395201692c775722710a5ff261a8f61d356c05e5562cf3" address="unix:///run/containerd/s/c3fc7ebb17ce576811e7870a1d7724b35204c8d1f03a2eb3aa7ffb48397be187" protocol=ttrpc version=3 Jan 28 00:48:37.218267 systemd[1]: Started cri-containerd-9ca8d5e0f03c8a7f46395201692c775722710a5ff261a8f61d356c05e5562cf3.scope - libcontainer container 9ca8d5e0f03c8a7f46395201692c775722710a5ff261a8f61d356c05e5562cf3. Jan 28 00:48:37.271350 containerd[1892]: time="2026-01-28T00:48:37.271319014Z" level=info msg="StartContainer for \"9ca8d5e0f03c8a7f46395201692c775722710a5ff261a8f61d356c05e5562cf3\" returns successfully" Jan 28 00:48:37.403354 containerd[1892]: time="2026-01-28T00:48:37.403048012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-42h42,Uid:7c32d953-d1dc-4175-9b3c-0ad23940fc49,Namespace:kube-system,Attempt:0,}" Jan 28 00:48:37.441417 containerd[1892]: time="2026-01-28T00:48:37.441379182Z" level=info msg="connecting to shim 5a41ad7b3d600bea192b0c96fdfb725b2c1d84e0375e8acdf64cd982887ef069" address="unix:///run/containerd/s/24b9ef4e7c86220960bb772d342acb59f146470b987cc21cdc3a2438af582c90" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:48:37.459111 systemd[1]: Started cri-containerd-5a41ad7b3d600bea192b0c96fdfb725b2c1d84e0375e8acdf64cd982887ef069.scope - libcontainer container 5a41ad7b3d600bea192b0c96fdfb725b2c1d84e0375e8acdf64cd982887ef069. 
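The containerd entries above pair each RunPodSandbox call with the sandbox id it returns (cilium-4dxwj and kube-proxy-k75gz here). A throwaway sketch for pulling those pairs out of journal output; the regular expression is an assumption about the message layout shown in this log, not a containerd interface:

  import re
  import sys

  # Matches the "RunPodSandbox for &PodSandboxMetadata{...} returns sandbox id \"...\"" lines above.
  PATTERN = re.compile(
      r'RunPodSandbox for &PodSandboxMetadata\{Name:(?P<name>[^,]+),'
      r'.*?returns sandbox id \\?"(?P<sandbox>[0-9a-f]{64})\\?"'
  )

  for line in sys.stdin:
      match = PATTERN.search(line)
      if match:
          print(match.group("name"), match.group("sandbox"))

Fed with something like journalctl output for the containerd unit, it prints one pod-name/sandbox-id pair per returning call.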
Jan 28 00:48:37.495989 containerd[1892]: time="2026-01-28T00:48:37.495540035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-42h42,Uid:7c32d953-d1dc-4175-9b3c-0ad23940fc49,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a41ad7b3d600bea192b0c96fdfb725b2c1d84e0375e8acdf64cd982887ef069\"" Jan 28 00:48:37.701214 kubelet[3452]: I0128 00:48:37.700876 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k75gz" podStartSLOduration=1.700863528 podStartE2EDuration="1.700863528s" podCreationTimestamp="2026-01-28 00:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:48:37.696623038 +0000 UTC m=+8.137337063" watchObservedRunningTime="2026-01-28 00:48:37.700863528 +0000 UTC m=+8.141577545" Jan 28 00:48:41.773640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3881472558.mount: Deactivated successfully. Jan 28 00:48:43.131277 containerd[1892]: time="2026-01-28T00:48:43.131205495Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:43.149386 containerd[1892]: time="2026-01-28T00:48:43.149189677Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 28 00:48:43.152093 containerd[1892]: time="2026-01-28T00:48:43.152064734Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:43.153141 containerd[1892]: time="2026-01-28T00:48:43.153112913Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.015134883s" Jan 28 00:48:43.153205 containerd[1892]: time="2026-01-28T00:48:43.153145554Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 28 00:48:43.154583 containerd[1892]: time="2026-01-28T00:48:43.154562898Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 28 00:48:43.169548 containerd[1892]: time="2026-01-28T00:48:43.169434855Z" level=info msg="CreateContainer within sandbox \"7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 28 00:48:43.218777 containerd[1892]: time="2026-01-28T00:48:43.218129856Z" level=info msg="Container 0942fecf7ab90e50bb5d09dd424576cb99e612d277dd9a404e71fb43fd70b2ed: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:48:43.762193 containerd[1892]: time="2026-01-28T00:48:43.762124535Z" level=info msg="CreateContainer within sandbox \"7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"0942fecf7ab90e50bb5d09dd424576cb99e612d277dd9a404e71fb43fd70b2ed\"" Jan 28 00:48:43.762886 containerd[1892]: time="2026-01-28T00:48:43.762841863Z" level=info msg="StartContainer for \"0942fecf7ab90e50bb5d09dd424576cb99e612d277dd9a404e71fb43fd70b2ed\"" Jan 28 00:48:43.763948 containerd[1892]: time="2026-01-28T00:48:43.763883274Z" level=info msg="connecting to shim 0942fecf7ab90e50bb5d09dd424576cb99e612d277dd9a404e71fb43fd70b2ed" address="unix:///run/containerd/s/7bf41dfd6c17993f09727ce8d096953486e47589aec2658004b53362abcc560a" protocol=ttrpc version=3 Jan 28 00:48:43.781139 systemd[1]: Started cri-containerd-0942fecf7ab90e50bb5d09dd424576cb99e612d277dd9a404e71fb43fd70b2ed.scope - libcontainer container 0942fecf7ab90e50bb5d09dd424576cb99e612d277dd9a404e71fb43fd70b2ed. Jan 28 00:48:43.804554 containerd[1892]: time="2026-01-28T00:48:43.804512219Z" level=info msg="StartContainer for \"0942fecf7ab90e50bb5d09dd424576cb99e612d277dd9a404e71fb43fd70b2ed\" returns successfully" Jan 28 00:48:43.809749 systemd[1]: cri-containerd-0942fecf7ab90e50bb5d09dd424576cb99e612d277dd9a404e71fb43fd70b2ed.scope: Deactivated successfully. Jan 28 00:48:43.812658 containerd[1892]: time="2026-01-28T00:48:43.812496425Z" level=info msg="received container exit event container_id:\"0942fecf7ab90e50bb5d09dd424576cb99e612d277dd9a404e71fb43fd70b2ed\" id:\"0942fecf7ab90e50bb5d09dd424576cb99e612d277dd9a404e71fb43fd70b2ed\" pid:3875 exited_at:{seconds:1769561323 nanos:812122836}" Jan 28 00:48:44.215988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0942fecf7ab90e50bb5d09dd424576cb99e612d277dd9a404e71fb43fd70b2ed-rootfs.mount: Deactivated successfully. Jan 28 00:48:45.712879 containerd[1892]: time="2026-01-28T00:48:45.712840330Z" level=info msg="CreateContainer within sandbox \"7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 28 00:48:45.734982 containerd[1892]: time="2026-01-28T00:48:45.734559046Z" level=info msg="Container 48f707035212f6fe0ff09d72362f5a29afbf105514ddcb08aaa94b120e0e9af3: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:48:45.749809 containerd[1892]: time="2026-01-28T00:48:45.749769230Z" level=info msg="CreateContainer within sandbox \"7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"48f707035212f6fe0ff09d72362f5a29afbf105514ddcb08aaa94b120e0e9af3\"" Jan 28 00:48:45.751193 containerd[1892]: time="2026-01-28T00:48:45.751163269Z" level=info msg="StartContainer for \"48f707035212f6fe0ff09d72362f5a29afbf105514ddcb08aaa94b120e0e9af3\"" Jan 28 00:48:45.752022 containerd[1892]: time="2026-01-28T00:48:45.751994793Z" level=info msg="connecting to shim 48f707035212f6fe0ff09d72362f5a29afbf105514ddcb08aaa94b120e0e9af3" address="unix:///run/containerd/s/7bf41dfd6c17993f09727ce8d096953486e47589aec2658004b53362abcc560a" protocol=ttrpc version=3 Jan 28 00:48:45.770133 systemd[1]: Started cri-containerd-48f707035212f6fe0ff09d72362f5a29afbf105514ddcb08aaa94b120e0e9af3.scope - libcontainer container 48f707035212f6fe0ff09d72362f5a29afbf105514ddcb08aaa94b120e0e9af3. Jan 28 00:48:45.797078 containerd[1892]: time="2026-01-28T00:48:45.797050608Z" level=info msg="StartContainer for \"48f707035212f6fe0ff09d72362f5a29afbf105514ddcb08aaa94b120e0e9af3\" returns successfully" Jan 28 00:48:45.804766 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jan 28 00:48:45.806091 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:48:45.806292 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 28 00:48:45.808755 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 00:48:45.811388 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 28 00:48:45.813568 systemd[1]: cri-containerd-48f707035212f6fe0ff09d72362f5a29afbf105514ddcb08aaa94b120e0e9af3.scope: Deactivated successfully. Jan 28 00:48:45.816240 containerd[1892]: time="2026-01-28T00:48:45.816188813Z" level=info msg="received container exit event container_id:\"48f707035212f6fe0ff09d72362f5a29afbf105514ddcb08aaa94b120e0e9af3\" id:\"48f707035212f6fe0ff09d72362f5a29afbf105514ddcb08aaa94b120e0e9af3\" pid:3919 exited_at:{seconds:1769561325 nanos:815511046}" Jan 28 00:48:45.831269 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:48:46.531643 containerd[1892]: time="2026-01-28T00:48:46.531583908Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:46.534538 containerd[1892]: time="2026-01-28T00:48:46.534416396Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 28 00:48:46.538408 containerd[1892]: time="2026-01-28T00:48:46.538370841Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:46.539381 containerd[1892]: time="2026-01-28T00:48:46.539287536Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.384603266s" Jan 28 00:48:46.539381 containerd[1892]: time="2026-01-28T00:48:46.539314593Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 28 00:48:46.557644 containerd[1892]: time="2026-01-28T00:48:46.557595633Z" level=info msg="CreateContainer within sandbox \"5a41ad7b3d600bea192b0c96fdfb725b2c1d84e0375e8acdf64cd982887ef069\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 28 00:48:46.575849 containerd[1892]: time="2026-01-28T00:48:46.575489428Z" level=info msg="Container f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:48:46.592462 containerd[1892]: time="2026-01-28T00:48:46.592430119Z" level=info msg="CreateContainer within sandbox \"5a41ad7b3d600bea192b0c96fdfb725b2c1d84e0375e8acdf64cd982887ef069\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67\"" Jan 28 00:48:46.593076 containerd[1892]: time="2026-01-28T00:48:46.593052180Z" level=info msg="StartContainer for 
\"f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67\"" Jan 28 00:48:46.593622 containerd[1892]: time="2026-01-28T00:48:46.593587158Z" level=info msg="connecting to shim f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67" address="unix:///run/containerd/s/24b9ef4e7c86220960bb772d342acb59f146470b987cc21cdc3a2438af582c90" protocol=ttrpc version=3 Jan 28 00:48:46.613152 systemd[1]: Started cri-containerd-f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67.scope - libcontainer container f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67. Jan 28 00:48:46.638933 containerd[1892]: time="2026-01-28T00:48:46.638904869Z" level=info msg="StartContainer for \"f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67\" returns successfully" Jan 28 00:48:46.722051 containerd[1892]: time="2026-01-28T00:48:46.722015838Z" level=info msg="CreateContainer within sandbox \"7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 28 00:48:46.736478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48f707035212f6fe0ff09d72362f5a29afbf105514ddcb08aaa94b120e0e9af3-rootfs.mount: Deactivated successfully. Jan 28 00:48:46.750879 containerd[1892]: time="2026-01-28T00:48:46.750791008Z" level=info msg="Container ce73659a2e660a83c4e40f86e2eebf79b8fb3fffce44f3ff6cf5337797341a97: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:48:46.756312 kubelet[3452]: I0128 00:48:46.756264 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-42h42" podStartSLOduration=0.712988336 podStartE2EDuration="9.756248864s" podCreationTimestamp="2026-01-28 00:48:37 +0000 UTC" firstStartedPulling="2026-01-28 00:48:37.496728424 +0000 UTC m=+7.937442449" lastFinishedPulling="2026-01-28 00:48:46.53998896 +0000 UTC m=+16.980702977" observedRunningTime="2026-01-28 00:48:46.724153542 +0000 UTC m=+17.164867583" watchObservedRunningTime="2026-01-28 00:48:46.756248864 +0000 UTC m=+17.196962881" Jan 28 00:48:46.770430 containerd[1892]: time="2026-01-28T00:48:46.770393645Z" level=info msg="CreateContainer within sandbox \"7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ce73659a2e660a83c4e40f86e2eebf79b8fb3fffce44f3ff6cf5337797341a97\"" Jan 28 00:48:46.771110 containerd[1892]: time="2026-01-28T00:48:46.770996809Z" level=info msg="StartContainer for \"ce73659a2e660a83c4e40f86e2eebf79b8fb3fffce44f3ff6cf5337797341a97\"" Jan 28 00:48:46.772571 containerd[1892]: time="2026-01-28T00:48:46.772544629Z" level=info msg="connecting to shim ce73659a2e660a83c4e40f86e2eebf79b8fb3fffce44f3ff6cf5337797341a97" address="unix:///run/containerd/s/7bf41dfd6c17993f09727ce8d096953486e47589aec2658004b53362abcc560a" protocol=ttrpc version=3 Jan 28 00:48:46.795766 systemd[1]: Started cri-containerd-ce73659a2e660a83c4e40f86e2eebf79b8fb3fffce44f3ff6cf5337797341a97.scope - libcontainer container ce73659a2e660a83c4e40f86e2eebf79b8fb3fffce44f3ff6cf5337797341a97. Jan 28 00:48:46.861968 systemd[1]: cri-containerd-ce73659a2e660a83c4e40f86e2eebf79b8fb3fffce44f3ff6cf5337797341a97.scope: Deactivated successfully. 
Jan 28 00:48:46.868098 containerd[1892]: time="2026-01-28T00:48:46.868004007Z" level=info msg="received container exit event container_id:\"ce73659a2e660a83c4e40f86e2eebf79b8fb3fffce44f3ff6cf5337797341a97\" id:\"ce73659a2e660a83c4e40f86e2eebf79b8fb3fffce44f3ff6cf5337797341a97\" pid:4017 exited_at:{seconds:1769561326 nanos:867872338}" Jan 28 00:48:46.868630 containerd[1892]: time="2026-01-28T00:48:46.868451334Z" level=info msg="StartContainer for \"ce73659a2e660a83c4e40f86e2eebf79b8fb3fffce44f3ff6cf5337797341a97\" returns successfully" Jan 28 00:48:46.890007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce73659a2e660a83c4e40f86e2eebf79b8fb3fffce44f3ff6cf5337797341a97-rootfs.mount: Deactivated successfully. Jan 28 00:48:47.740817 containerd[1892]: time="2026-01-28T00:48:47.740766478Z" level=info msg="CreateContainer within sandbox \"7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 28 00:48:47.766932 containerd[1892]: time="2026-01-28T00:48:47.766849605Z" level=info msg="Container 1d451f27ec6e847f24e0a2e08a7a287dfa557f78d784b701ba892ae3a803c9c4: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:48:47.781527 containerd[1892]: time="2026-01-28T00:48:47.781492946Z" level=info msg="CreateContainer within sandbox \"7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1d451f27ec6e847f24e0a2e08a7a287dfa557f78d784b701ba892ae3a803c9c4\"" Jan 28 00:48:47.782313 containerd[1892]: time="2026-01-28T00:48:47.782266949Z" level=info msg="StartContainer for \"1d451f27ec6e847f24e0a2e08a7a287dfa557f78d784b701ba892ae3a803c9c4\"" Jan 28 00:48:47.783381 containerd[1892]: time="2026-01-28T00:48:47.783302352Z" level=info msg="connecting to shim 1d451f27ec6e847f24e0a2e08a7a287dfa557f78d784b701ba892ae3a803c9c4" address="unix:///run/containerd/s/7bf41dfd6c17993f09727ce8d096953486e47589aec2658004b53362abcc560a" protocol=ttrpc version=3 Jan 28 00:48:47.803146 systemd[1]: Started cri-containerd-1d451f27ec6e847f24e0a2e08a7a287dfa557f78d784b701ba892ae3a803c9c4.scope - libcontainer container 1d451f27ec6e847f24e0a2e08a7a287dfa557f78d784b701ba892ae3a803c9c4. Jan 28 00:48:47.821890 systemd[1]: cri-containerd-1d451f27ec6e847f24e0a2e08a7a287dfa557f78d784b701ba892ae3a803c9c4.scope: Deactivated successfully. Jan 28 00:48:47.828698 containerd[1892]: time="2026-01-28T00:48:47.828565755Z" level=info msg="received container exit event container_id:\"1d451f27ec6e847f24e0a2e08a7a287dfa557f78d784b701ba892ae3a803c9c4\" id:\"1d451f27ec6e847f24e0a2e08a7a287dfa557f78d784b701ba892ae3a803c9c4\" pid:4057 exited_at:{seconds:1769561327 nanos:823246467}" Jan 28 00:48:47.829790 containerd[1892]: time="2026-01-28T00:48:47.829771041Z" level=info msg="StartContainer for \"1d451f27ec6e847f24e0a2e08a7a287dfa557f78d784b701ba892ae3a803c9c4\" returns successfully" Jan 28 00:48:47.843377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d451f27ec6e847f24e0a2e08a7a287dfa557f78d784b701ba892ae3a803c9c4-rootfs.mount: Deactivated successfully. 
Jan 28 00:48:48.730054 containerd[1892]: time="2026-01-28T00:48:48.729355481Z" level=info msg="CreateContainer within sandbox \"7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 28 00:48:48.761344 containerd[1892]: time="2026-01-28T00:48:48.761281428Z" level=info msg="Container b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:48:48.766468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3188680239.mount: Deactivated successfully. Jan 28 00:48:48.783033 containerd[1892]: time="2026-01-28T00:48:48.782865431Z" level=info msg="CreateContainer within sandbox \"7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc\"" Jan 28 00:48:48.784185 containerd[1892]: time="2026-01-28T00:48:48.784162248Z" level=info msg="StartContainer for \"b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc\"" Jan 28 00:48:48.784835 containerd[1892]: time="2026-01-28T00:48:48.784809684Z" level=info msg="connecting to shim b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc" address="unix:///run/containerd/s/7bf41dfd6c17993f09727ce8d096953486e47589aec2658004b53362abcc560a" protocol=ttrpc version=3 Jan 28 00:48:48.802139 systemd[1]: Started cri-containerd-b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc.scope - libcontainer container b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc. Jan 28 00:48:48.838041 containerd[1892]: time="2026-01-28T00:48:48.837780937Z" level=info msg="StartContainer for \"b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc\" returns successfully" Jan 28 00:48:48.963435 kubelet[3452]: I0128 00:48:48.963400 3452 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 28 00:48:49.003271 systemd[1]: Created slice kubepods-burstable-pod8dba03e2_2de1_4d7f_8328_5c4780bc7a01.slice - libcontainer container kubepods-burstable-pod8dba03e2_2de1_4d7f_8328_5c4780bc7a01.slice. Jan 28 00:48:49.011055 systemd[1]: Created slice kubepods-burstable-pod57f18a65_c408_4cc3_9222_c5baadc43752.slice - libcontainer container kubepods-burstable-pod57f18a65_c408_4cc3_9222_c5baadc43752.slice. 
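With the cilium-agent container running, the kubelet reports the node ready and starts setting up slices for the CoreDNS replicas that follow. A sketch for confirming the same state from outside, assuming kubectl access to this cluster; the node name is taken from the log, and the k8s-app=kube-dns selector is the usual CoreDNS label rather than something shown here:

  import json
  import subprocess

  def kubectl_json(*args: str) -> dict:
      out = subprocess.run(["kubectl", *args, "-o", "json"],
                           capture_output=True, text=True, check=True)
      return json.loads(out.stdout)

  node = kubectl_json("get", "node", "ci-4459.2.3-n-ec09cdb4df")
  ready = next(c["status"] for c in node["status"]["conditions"] if c["type"] == "Ready")
  print("node Ready:", ready)

  pods = kubectl_json("-n", "kube-system", "get", "pods", "-l", "k8s-app=kube-dns")
  for pod in pods["items"]:
      print(pod["metadata"]["name"], pod["status"].get("phase"))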
Jan 28 00:48:49.044897 kubelet[3452]: I0128 00:48:49.044863 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5rfs\" (UniqueName: \"kubernetes.io/projected/8dba03e2-2de1-4d7f-8328-5c4780bc7a01-kube-api-access-g5rfs\") pod \"coredns-66bc5c9577-4mj7z\" (UID: \"8dba03e2-2de1-4d7f-8328-5c4780bc7a01\") " pod="kube-system/coredns-66bc5c9577-4mj7z" Jan 28 00:48:49.044897 kubelet[3452]: I0128 00:48:49.044907 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8dba03e2-2de1-4d7f-8328-5c4780bc7a01-config-volume\") pod \"coredns-66bc5c9577-4mj7z\" (UID: \"8dba03e2-2de1-4d7f-8328-5c4780bc7a01\") " pod="kube-system/coredns-66bc5c9577-4mj7z" Jan 28 00:48:49.045113 kubelet[3452]: I0128 00:48:49.044921 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57f18a65-c408-4cc3-9222-c5baadc43752-config-volume\") pod \"coredns-66bc5c9577-gtvng\" (UID: \"57f18a65-c408-4cc3-9222-c5baadc43752\") " pod="kube-system/coredns-66bc5c9577-gtvng" Jan 28 00:48:49.045113 kubelet[3452]: I0128 00:48:49.044944 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlnkq\" (UniqueName: \"kubernetes.io/projected/57f18a65-c408-4cc3-9222-c5baadc43752-kube-api-access-wlnkq\") pod \"coredns-66bc5c9577-gtvng\" (UID: \"57f18a65-c408-4cc3-9222-c5baadc43752\") " pod="kube-system/coredns-66bc5c9577-gtvng" Jan 28 00:48:49.312425 containerd[1892]: time="2026-01-28T00:48:49.312172720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4mj7z,Uid:8dba03e2-2de1-4d7f-8328-5c4780bc7a01,Namespace:kube-system,Attempt:0,}" Jan 28 00:48:49.326146 containerd[1892]: time="2026-01-28T00:48:49.326081248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gtvng,Uid:57f18a65-c408-4cc3-9222-c5baadc43752,Namespace:kube-system,Attempt:0,}" Jan 28 00:48:50.878469 systemd-networkd[1485]: cilium_host: Link UP Jan 28 00:48:50.879673 systemd-networkd[1485]: cilium_net: Link UP Jan 28 00:48:50.879769 systemd-networkd[1485]: cilium_host: Gained carrier Jan 28 00:48:50.879843 systemd-networkd[1485]: cilium_net: Gained carrier Jan 28 00:48:51.022632 systemd-networkd[1485]: cilium_vxlan: Link UP Jan 28 00:48:51.022897 systemd-networkd[1485]: cilium_vxlan: Gained carrier Jan 28 00:48:51.138181 systemd-networkd[1485]: cilium_net: Gained IPv6LL Jan 28 00:48:51.280048 kernel: NET: Registered PF_ALG protocol family Jan 28 00:48:51.473210 systemd-networkd[1485]: cilium_host: Gained IPv6LL Jan 28 00:48:51.836165 systemd-networkd[1485]: lxc_health: Link UP Jan 28 00:48:51.845865 systemd-networkd[1485]: lxc_health: Gained carrier Jan 28 00:48:52.356406 systemd-networkd[1485]: lxcd4df354cf9f2: Link UP Jan 28 00:48:52.367075 kernel: eth0: renamed from tmp61b30 Jan 28 00:48:52.371137 systemd-networkd[1485]: lxcd4df354cf9f2: Gained carrier Jan 28 00:48:52.383942 systemd-networkd[1485]: lxcb4fac58c9c66: Link UP Jan 28 00:48:52.390070 kernel: eth0: renamed from tmp42b21 Jan 28 00:48:52.391215 systemd-networkd[1485]: lxcb4fac58c9c66: Gained carrier Jan 28 00:48:52.753215 systemd-networkd[1485]: cilium_vxlan: Gained IPv6LL Jan 28 00:48:52.976855 kubelet[3452]: I0128 00:48:52.976262 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4dxwj" 
podStartSLOduration=10.959393981 podStartE2EDuration="16.976248367s" podCreationTimestamp="2026-01-28 00:48:36 +0000 UTC" firstStartedPulling="2026-01-28 00:48:37.13748908 +0000 UTC m=+7.578203097" lastFinishedPulling="2026-01-28 00:48:43.154343466 +0000 UTC m=+13.595057483" observedRunningTime="2026-01-28 00:48:49.739759765 +0000 UTC m=+20.180473782" watchObservedRunningTime="2026-01-28 00:48:52.976248367 +0000 UTC m=+23.416962384" Jan 28 00:48:53.201223 systemd-networkd[1485]: lxc_health: Gained IPv6LL Jan 28 00:48:53.842346 systemd-networkd[1485]: lxcb4fac58c9c66: Gained IPv6LL Jan 28 00:48:54.225263 systemd-networkd[1485]: lxcd4df354cf9f2: Gained IPv6LL Jan 28 00:48:54.913667 containerd[1892]: time="2026-01-28T00:48:54.913537619Z" level=info msg="connecting to shim 61b304c6601c85f79e0960f9297fa1fb6746893aeea9de5bfe9de67ded0be8ef" address="unix:///run/containerd/s/9b521a8f293bbdac780b1b5d790e519c06221dd8810cacc15decae425e2b6c4d" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:48:54.937278 containerd[1892]: time="2026-01-28T00:48:54.937183079Z" level=info msg="connecting to shim 42b21dbe25bf87cc877b3a3660d21069c3bb5037568887ea5d3615a20f2a9a50" address="unix:///run/containerd/s/c5db0d569d44b8c0f8a5e33b0900ef9adb780364292b99752eb8802071b26362" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:48:54.947385 systemd[1]: Started cri-containerd-61b304c6601c85f79e0960f9297fa1fb6746893aeea9de5bfe9de67ded0be8ef.scope - libcontainer container 61b304c6601c85f79e0960f9297fa1fb6746893aeea9de5bfe9de67ded0be8ef. Jan 28 00:48:54.961244 systemd[1]: Started cri-containerd-42b21dbe25bf87cc877b3a3660d21069c3bb5037568887ea5d3615a20f2a9a50.scope - libcontainer container 42b21dbe25bf87cc877b3a3660d21069c3bb5037568887ea5d3615a20f2a9a50. Jan 28 00:48:54.991087 containerd[1892]: time="2026-01-28T00:48:54.991035863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4mj7z,Uid:8dba03e2-2de1-4d7f-8328-5c4780bc7a01,Namespace:kube-system,Attempt:0,} returns sandbox id \"61b304c6601c85f79e0960f9297fa1fb6746893aeea9de5bfe9de67ded0be8ef\"" Jan 28 00:48:54.997025 containerd[1892]: time="2026-01-28T00:48:54.996960163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gtvng,Uid:57f18a65-c408-4cc3-9222-c5baadc43752,Namespace:kube-system,Attempt:0,} returns sandbox id \"42b21dbe25bf87cc877b3a3660d21069c3bb5037568887ea5d3615a20f2a9a50\"" Jan 28 00:48:55.003675 containerd[1892]: time="2026-01-28T00:48:55.003640270Z" level=info msg="CreateContainer within sandbox \"61b304c6601c85f79e0960f9297fa1fb6746893aeea9de5bfe9de67ded0be8ef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 00:48:55.009605 containerd[1892]: time="2026-01-28T00:48:55.009567266Z" level=info msg="CreateContainer within sandbox \"42b21dbe25bf87cc877b3a3660d21069c3bb5037568887ea5d3615a20f2a9a50\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 00:48:55.031034 containerd[1892]: time="2026-01-28T00:48:55.030619636Z" level=info msg="Container d56f440759ccba2d90f52fd62aba180304c9572141d25ceb372c950bb7519561: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:48:55.035070 containerd[1892]: time="2026-01-28T00:48:55.035040952Z" level=info msg="Container 580ba5326325b3749fdba409c9213aade3960450e6f1fe937031e322ee6e7211: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:48:55.053895 containerd[1892]: time="2026-01-28T00:48:55.053855363Z" level=info msg="CreateContainer within sandbox \"61b304c6601c85f79e0960f9297fa1fb6746893aeea9de5bfe9de67ded0be8ef\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d56f440759ccba2d90f52fd62aba180304c9572141d25ceb372c950bb7519561\"" Jan 28 00:48:55.054847 containerd[1892]: time="2026-01-28T00:48:55.054806490Z" level=info msg="StartContainer for \"d56f440759ccba2d90f52fd62aba180304c9572141d25ceb372c950bb7519561\"" Jan 28 00:48:55.058166 containerd[1892]: time="2026-01-28T00:48:55.058095850Z" level=info msg="connecting to shim d56f440759ccba2d90f52fd62aba180304c9572141d25ceb372c950bb7519561" address="unix:///run/containerd/s/9b521a8f293bbdac780b1b5d790e519c06221dd8810cacc15decae425e2b6c4d" protocol=ttrpc version=3 Jan 28 00:48:55.060495 containerd[1892]: time="2026-01-28T00:48:55.060448332Z" level=info msg="CreateContainer within sandbox \"42b21dbe25bf87cc877b3a3660d21069c3bb5037568887ea5d3615a20f2a9a50\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"580ba5326325b3749fdba409c9213aade3960450e6f1fe937031e322ee6e7211\"" Jan 28 00:48:55.062639 containerd[1892]: time="2026-01-28T00:48:55.062613073Z" level=info msg="StartContainer for \"580ba5326325b3749fdba409c9213aade3960450e6f1fe937031e322ee6e7211\"" Jan 28 00:48:55.064741 containerd[1892]: time="2026-01-28T00:48:55.064717011Z" level=info msg="connecting to shim 580ba5326325b3749fdba409c9213aade3960450e6f1fe937031e322ee6e7211" address="unix:///run/containerd/s/c5db0d569d44b8c0f8a5e33b0900ef9adb780364292b99752eb8802071b26362" protocol=ttrpc version=3 Jan 28 00:48:55.090179 systemd[1]: Started cri-containerd-d56f440759ccba2d90f52fd62aba180304c9572141d25ceb372c950bb7519561.scope - libcontainer container d56f440759ccba2d90f52fd62aba180304c9572141d25ceb372c950bb7519561. Jan 28 00:48:55.101393 systemd[1]: Started cri-containerd-580ba5326325b3749fdba409c9213aade3960450e6f1fe937031e322ee6e7211.scope - libcontainer container 580ba5326325b3749fdba409c9213aade3960450e6f1fe937031e322ee6e7211. Jan 28 00:48:55.154182 containerd[1892]: time="2026-01-28T00:48:55.154085288Z" level=info msg="StartContainer for \"d56f440759ccba2d90f52fd62aba180304c9572141d25ceb372c950bb7519561\" returns successfully" Jan 28 00:48:55.156338 containerd[1892]: time="2026-01-28T00:48:55.156307566Z" level=info msg="StartContainer for \"580ba5326325b3749fdba409c9213aade3960450e6f1fe937031e322ee6e7211\" returns successfully" Jan 28 00:48:55.757392 kubelet[3452]: I0128 00:48:55.757334 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gtvng" podStartSLOduration=18.757322346 podStartE2EDuration="18.757322346s" podCreationTimestamp="2026-01-28 00:48:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:48:55.757009111 +0000 UTC m=+26.197723136" watchObservedRunningTime="2026-01-28 00:48:55.757322346 +0000 UTC m=+26.198036363" Jan 28 00:48:55.793481 kubelet[3452]: I0128 00:48:55.793400 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4mj7z" podStartSLOduration=18.793385509 podStartE2EDuration="18.793385509s" podCreationTimestamp="2026-01-28 00:48:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:48:55.791803631 +0000 UTC m=+26.232517664" watchObservedRunningTime="2026-01-28 00:48:55.793385509 +0000 UTC m=+26.234099534" Jan 28 00:48:55.902600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2867046822.mount: Deactivated successfully. 
Jan 28 00:50:02.478870 systemd[1]: Started sshd@7-10.200.20.26:22-10.200.16.10:57946.service - OpenSSH per-connection server daemon (10.200.16.10:57946). Jan 28 00:50:02.969072 sshd[4779]: Accepted publickey for core from 10.200.16.10 port 57946 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:50:02.970140 sshd-session[4779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:50:02.973755 systemd-logind[1876]: New session 10 of user core. Jan 28 00:50:02.979139 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 28 00:50:03.370107 sshd[4782]: Connection closed by 10.200.16.10 port 57946 Jan 28 00:50:03.369906 sshd-session[4779]: pam_unix(sshd:session): session closed for user core Jan 28 00:50:03.374115 systemd[1]: sshd@7-10.200.20.26:22-10.200.16.10:57946.service: Deactivated successfully. Jan 28 00:50:03.375613 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 00:50:03.376329 systemd-logind[1876]: Session 10 logged out. Waiting for processes to exit. Jan 28 00:50:03.377645 systemd-logind[1876]: Removed session 10. Jan 28 00:50:08.460962 systemd[1]: Started sshd@8-10.200.20.26:22-10.200.16.10:57954.service - OpenSSH per-connection server daemon (10.200.16.10:57954). Jan 28 00:50:08.944241 sshd[4798]: Accepted publickey for core from 10.200.16.10 port 57954 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:50:08.945282 sshd-session[4798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:50:08.948857 systemd-logind[1876]: New session 11 of user core. Jan 28 00:50:08.954271 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 28 00:50:09.328929 sshd[4801]: Connection closed by 10.200.16.10 port 57954 Jan 28 00:50:09.329448 sshd-session[4798]: pam_unix(sshd:session): session closed for user core Jan 28 00:50:09.333108 systemd-logind[1876]: Session 11 logged out. Waiting for processes to exit. Jan 28 00:50:09.333794 systemd[1]: sshd@8-10.200.20.26:22-10.200.16.10:57954.service: Deactivated successfully. Jan 28 00:50:09.336224 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 00:50:09.337623 systemd-logind[1876]: Removed session 11. Jan 28 00:50:14.420630 systemd[1]: Started sshd@9-10.200.20.26:22-10.200.16.10:48512.service - OpenSSH per-connection server daemon (10.200.16.10:48512). Jan 28 00:50:14.910568 sshd[4813]: Accepted publickey for core from 10.200.16.10 port 48512 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:50:14.911721 sshd-session[4813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:50:14.915383 systemd-logind[1876]: New session 12 of user core. Jan 28 00:50:14.925142 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 28 00:50:15.303659 sshd[4816]: Connection closed by 10.200.16.10 port 48512 Jan 28 00:50:15.304342 sshd-session[4813]: pam_unix(sshd:session): session closed for user core Jan 28 00:50:15.307247 systemd[1]: sshd@9-10.200.20.26:22-10.200.16.10:48512.service: Deactivated successfully. Jan 28 00:50:15.310799 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 00:50:15.311959 systemd-logind[1876]: Session 12 logged out. Waiting for processes to exit. Jan 28 00:50:15.313598 systemd-logind[1876]: Removed session 12. Jan 28 00:50:20.384768 systemd[1]: Started sshd@10-10.200.20.26:22-10.200.16.10:48088.service - OpenSSH per-connection server daemon (10.200.16.10:48088). 
Jan 28 00:50:20.836808 sshd[4829]: Accepted publickey for core from 10.200.16.10 port 48088 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:50:20.837819 sshd-session[4829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:50:20.841516 systemd-logind[1876]: New session 13 of user core. Jan 28 00:50:20.850158 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 28 00:50:21.202671 sshd[4832]: Connection closed by 10.200.16.10 port 48088 Jan 28 00:50:21.203063 sshd-session[4829]: pam_unix(sshd:session): session closed for user core Jan 28 00:50:21.206731 systemd[1]: sshd@10-10.200.20.26:22-10.200.16.10:48088.service: Deactivated successfully. Jan 28 00:50:21.208643 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 00:50:21.210045 systemd-logind[1876]: Session 13 logged out. Waiting for processes to exit. Jan 28 00:50:21.211520 systemd-logind[1876]: Removed session 13. Jan 28 00:50:21.292951 systemd[1]: Started sshd@11-10.200.20.26:22-10.200.16.10:48098.service - OpenSSH per-connection server daemon (10.200.16.10:48098). Jan 28 00:50:21.781980 sshd[4844]: Accepted publickey for core from 10.200.16.10 port 48098 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:50:21.782688 sshd-session[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:50:21.786224 systemd-logind[1876]: New session 14 of user core. Jan 28 00:50:21.793153 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 28 00:50:22.197135 sshd[4847]: Connection closed by 10.200.16.10 port 48098 Jan 28 00:50:22.197697 sshd-session[4844]: pam_unix(sshd:session): session closed for user core Jan 28 00:50:22.200870 systemd-logind[1876]: Session 14 logged out. Waiting for processes to exit. Jan 28 00:50:22.201645 systemd[1]: sshd@11-10.200.20.26:22-10.200.16.10:48098.service: Deactivated successfully. Jan 28 00:50:22.203528 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 00:50:22.205924 systemd-logind[1876]: Removed session 14. Jan 28 00:50:22.290124 systemd[1]: Started sshd@12-10.200.20.26:22-10.200.16.10:48104.service - OpenSSH per-connection server daemon (10.200.16.10:48104). Jan 28 00:50:22.779038 sshd[4857]: Accepted publickey for core from 10.200.16.10 port 48104 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:50:22.780236 sshd-session[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:50:22.783609 systemd-logind[1876]: New session 15 of user core. Jan 28 00:50:22.789324 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 28 00:50:23.171304 sshd[4860]: Connection closed by 10.200.16.10 port 48104 Jan 28 00:50:23.171223 sshd-session[4857]: pam_unix(sshd:session): session closed for user core Jan 28 00:50:23.174447 systemd[1]: sshd@12-10.200.20.26:22-10.200.16.10:48104.service: Deactivated successfully. Jan 28 00:50:23.176432 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 00:50:23.178677 systemd-logind[1876]: Session 15 logged out. Waiting for processes to exit. Jan 28 00:50:23.179914 systemd-logind[1876]: Removed session 15. Jan 28 00:50:28.274381 systemd[1]: Started sshd@13-10.200.20.26:22-10.200.16.10:48118.service - OpenSSH per-connection server daemon (10.200.16.10:48118). 
Jan 28 00:50:28.720466 sshd[4873]: Accepted publickey for core from 10.200.16.10 port 48118 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:50:28.721480 sshd-session[4873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:50:28.724970 systemd-logind[1876]: New session 16 of user core. Jan 28 00:50:28.735302 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 28 00:50:29.088495 sshd[4876]: Connection closed by 10.200.16.10 port 48118 Jan 28 00:50:29.089162 sshd-session[4873]: pam_unix(sshd:session): session closed for user core Jan 28 00:50:29.092328 systemd[1]: sshd@13-10.200.20.26:22-10.200.16.10:48118.service: Deactivated successfully. Jan 28 00:50:29.094229 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 00:50:29.095489 systemd-logind[1876]: Session 16 logged out. Waiting for processes to exit. Jan 28 00:50:29.097125 systemd-logind[1876]: Removed session 16. Jan 28 00:50:29.170678 systemd[1]: Started sshd@14-10.200.20.26:22-10.200.16.10:48128.service - OpenSSH per-connection server daemon (10.200.16.10:48128). Jan 28 00:50:29.621871 sshd[4887]: Accepted publickey for core from 10.200.16.10 port 48128 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:50:29.622916 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:50:29.626915 systemd-logind[1876]: New session 17 of user core. Jan 28 00:50:29.631146 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 28 00:50:30.028279 sshd[4890]: Connection closed by 10.200.16.10 port 48128 Jan 28 00:50:30.029251 sshd-session[4887]: pam_unix(sshd:session): session closed for user core Jan 28 00:50:30.033309 systemd-logind[1876]: Session 17 logged out. Waiting for processes to exit. Jan 28 00:50:30.033497 systemd[1]: sshd@14-10.200.20.26:22-10.200.16.10:48128.service: Deactivated successfully. Jan 28 00:50:30.035412 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 00:50:30.037239 systemd-logind[1876]: Removed session 17. Jan 28 00:50:30.122806 systemd[1]: Started sshd@15-10.200.20.26:22-10.200.16.10:51800.service - OpenSSH per-connection server daemon (10.200.16.10:51800). Jan 28 00:50:30.613612 sshd[4902]: Accepted publickey for core from 10.200.16.10 port 51800 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:50:30.614670 sshd-session[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:50:30.618409 systemd-logind[1876]: New session 18 of user core. Jan 28 00:50:30.624302 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 28 00:50:31.419055 sshd[4905]: Connection closed by 10.200.16.10 port 51800 Jan 28 00:50:31.419686 sshd-session[4902]: pam_unix(sshd:session): session closed for user core Jan 28 00:50:31.422723 systemd-logind[1876]: Session 18 logged out. Waiting for processes to exit. Jan 28 00:50:31.422842 systemd[1]: sshd@15-10.200.20.26:22-10.200.16.10:51800.service: Deactivated successfully. Jan 28 00:50:31.424856 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 00:50:31.427756 systemd-logind[1876]: Removed session 18. Jan 28 00:50:31.514821 systemd[1]: Started sshd@16-10.200.20.26:22-10.200.16.10:51806.service - OpenSSH per-connection server daemon (10.200.16.10:51806). 
Jan 28 00:50:32.010895 sshd[4920]: Accepted publickey for core from 10.200.16.10 port 51806 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:50:32.011955 sshd-session[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:50:32.016051 systemd-logind[1876]: New session 19 of user core. Jan 28 00:50:32.021132 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 28 00:50:32.480328 sshd[4923]: Connection closed by 10.200.16.10 port 51806 Jan 28 00:50:32.479820 sshd-session[4920]: pam_unix(sshd:session): session closed for user core Jan 28 00:50:32.482627 systemd-logind[1876]: Session 19 logged out. Waiting for processes to exit. Jan 28 00:50:32.482979 systemd[1]: sshd@16-10.200.20.26:22-10.200.16.10:51806.service: Deactivated successfully. Jan 28 00:50:32.484848 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 00:50:32.487067 systemd-logind[1876]: Removed session 19. Jan 28 00:50:32.568933 systemd[1]: Started sshd@17-10.200.20.26:22-10.200.16.10:51814.service - OpenSSH per-connection server daemon (10.200.16.10:51814). Jan 28 00:50:33.021610 sshd[4935]: Accepted publickey for core from 10.200.16.10 port 51814 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:50:33.022338 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:50:33.025881 systemd-logind[1876]: New session 20 of user core. Jan 28 00:50:33.036449 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 28 00:50:33.386542 sshd[4938]: Connection closed by 10.200.16.10 port 51814 Jan 28 00:50:33.386450 sshd-session[4935]: pam_unix(sshd:session): session closed for user core Jan 28 00:50:33.390912 systemd[1]: sshd@17-10.200.20.26:22-10.200.16.10:51814.service: Deactivated successfully. Jan 28 00:50:33.392751 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 00:50:33.393721 systemd-logind[1876]: Session 20 logged out. Waiting for processes to exit. Jan 28 00:50:33.394938 systemd-logind[1876]: Removed session 20. Jan 28 00:50:38.484217 systemd[1]: Started sshd@18-10.200.20.26:22-10.200.16.10:51820.service - OpenSSH per-connection server daemon (10.200.16.10:51820). Jan 28 00:50:38.974070 sshd[4953]: Accepted publickey for core from 10.200.16.10 port 51820 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:50:38.974864 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:50:38.978278 systemd-logind[1876]: New session 21 of user core. Jan 28 00:50:38.985136 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 28 00:50:39.353133 sshd[4956]: Connection closed by 10.200.16.10 port 51820 Jan 28 00:50:39.353877 sshd-session[4953]: pam_unix(sshd:session): session closed for user core Jan 28 00:50:39.356793 systemd[1]: sshd@18-10.200.20.26:22-10.200.16.10:51820.service: Deactivated successfully. Jan 28 00:50:39.358778 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 00:50:39.361959 systemd-logind[1876]: Session 21 logged out. Waiting for processes to exit. Jan 28 00:50:39.362678 systemd-logind[1876]: Removed session 21. Jan 28 00:50:44.435834 systemd[1]: Started sshd@19-10.200.20.26:22-10.200.16.10:33790.service - OpenSSH per-connection server daemon (10.200.16.10:33790). 
Jan 28 00:50:44.886623 sshd[4968]: Accepted publickey for core from 10.200.16.10 port 33790 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:50:44.887524 sshd-session[4968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:50:44.891162 systemd-logind[1876]: New session 22 of user core. Jan 28 00:50:44.902177 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 28 00:50:45.253772 sshd[4971]: Connection closed by 10.200.16.10 port 33790 Jan 28 00:50:45.253585 sshd-session[4968]: pam_unix(sshd:session): session closed for user core Jan 28 00:50:45.257426 systemd[1]: sshd@19-10.200.20.26:22-10.200.16.10:33790.service: Deactivated successfully. Jan 28 00:50:45.261342 systemd[1]: session-22.scope: Deactivated successfully. Jan 28 00:50:45.262093 systemd-logind[1876]: Session 22 logged out. Waiting for processes to exit. Jan 28 00:50:45.263233 systemd-logind[1876]: Removed session 22. Jan 28 00:50:45.342207 systemd[1]: Started sshd@20-10.200.20.26:22-10.200.16.10:33806.service - OpenSSH per-connection server daemon (10.200.16.10:33806). Jan 28 00:50:45.840563 sshd[4983]: Accepted publickey for core from 10.200.16.10 port 33806 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:50:45.841666 sshd-session[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:50:45.845247 systemd-logind[1876]: New session 23 of user core. Jan 28 00:50:45.856175 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 28 00:50:47.400254 containerd[1892]: time="2026-01-28T00:50:47.400206556Z" level=info msg="StopContainer for \"f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67\" with timeout 30 (s)" Jan 28 00:50:47.401218 containerd[1892]: time="2026-01-28T00:50:47.401195925Z" level=info msg="Stop container \"f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67\" with signal terminated" Jan 28 00:50:47.420269 containerd[1892]: time="2026-01-28T00:50:47.420209028Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 00:50:47.427258 systemd[1]: cri-containerd-f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67.scope: Deactivated successfully. Jan 28 00:50:47.429487 containerd[1892]: time="2026-01-28T00:50:47.429368256Z" level=info msg="received container exit event container_id:\"f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67\" id:\"f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67\" pid:3982 exited_at:{seconds:1769561447 nanos:428263651}" Jan 28 00:50:47.439049 containerd[1892]: time="2026-01-28T00:50:47.438881672Z" level=info msg="StopContainer for \"b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc\" with timeout 2 (s)" Jan 28 00:50:47.439332 containerd[1892]: time="2026-01-28T00:50:47.439309270Z" level=info msg="Stop container \"b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc\" with signal terminated" Jan 28 00:50:47.448179 systemd-networkd[1485]: lxc_health: Link DOWN Jan 28 00:50:47.448186 systemd-networkd[1485]: lxc_health: Lost carrier Jan 28 00:50:47.462817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67-rootfs.mount: Deactivated successfully. 
Jan 28 00:50:47.466797 systemd[1]: cri-containerd-b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc.scope: Deactivated successfully. Jan 28 00:50:47.467383 systemd[1]: cri-containerd-b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc.scope: Consumed 4.365s CPU time, 122.9M memory peak, 128K read from disk, 12.9M written to disk. Jan 28 00:50:47.468478 containerd[1892]: time="2026-01-28T00:50:47.468439465Z" level=info msg="received container exit event container_id:\"b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc\" id:\"b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc\" pid:4094 exited_at:{seconds:1769561447 nanos:468041460}" Jan 28 00:50:47.485786 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc-rootfs.mount: Deactivated successfully. Jan 28 00:50:47.527320 containerd[1892]: time="2026-01-28T00:50:47.527280051Z" level=info msg="StopContainer for \"b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc\" returns successfully" Jan 28 00:50:47.527981 containerd[1892]: time="2026-01-28T00:50:47.527956842Z" level=info msg="StopPodSandbox for \"7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401\"" Jan 28 00:50:47.528161 containerd[1892]: time="2026-01-28T00:50:47.528141512Z" level=info msg="Container to stop \"0942fecf7ab90e50bb5d09dd424576cb99e612d277dd9a404e71fb43fd70b2ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 00:50:47.528219 containerd[1892]: time="2026-01-28T00:50:47.528209523Z" level=info msg="Container to stop \"ce73659a2e660a83c4e40f86e2eebf79b8fb3fffce44f3ff6cf5337797341a97\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 00:50:47.528255 containerd[1892]: time="2026-01-28T00:50:47.528246852Z" level=info msg="Container to stop \"1d451f27ec6e847f24e0a2e08a7a287dfa557f78d784b701ba892ae3a803c9c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 00:50:47.528430 containerd[1892]: time="2026-01-28T00:50:47.528280173Z" level=info msg="Container to stop \"b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 00:50:47.528430 containerd[1892]: time="2026-01-28T00:50:47.528290205Z" level=info msg="Container to stop \"48f707035212f6fe0ff09d72362f5a29afbf105514ddcb08aaa94b120e0e9af3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 00:50:47.531749 containerd[1892]: time="2026-01-28T00:50:47.531725225Z" level=info msg="StopContainer for \"f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67\" returns successfully" Jan 28 00:50:47.532630 containerd[1892]: time="2026-01-28T00:50:47.532596134Z" level=info msg="StopPodSandbox for \"5a41ad7b3d600bea192b0c96fdfb725b2c1d84e0375e8acdf64cd982887ef069\"" Jan 28 00:50:47.532787 containerd[1892]: time="2026-01-28T00:50:47.532650256Z" level=info msg="Container to stop \"f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 00:50:47.535561 systemd[1]: cri-containerd-7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401.scope: Deactivated successfully. 
Jan 28 00:50:47.539394 containerd[1892]: time="2026-01-28T00:50:47.539325864Z" level=info msg="received sandbox exit event container_id:\"7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401\" id:\"7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401\" exit_status:137 exited_at:{seconds:1769561447 nanos:538566295}" monitor_name=podsandbox Jan 28 00:50:47.543966 systemd[1]: cri-containerd-5a41ad7b3d600bea192b0c96fdfb725b2c1d84e0375e8acdf64cd982887ef069.scope: Deactivated successfully. Jan 28 00:50:47.546634 containerd[1892]: time="2026-01-28T00:50:47.545711919Z" level=info msg="received sandbox exit event container_id:\"5a41ad7b3d600bea192b0c96fdfb725b2c1d84e0375e8acdf64cd982887ef069\" id:\"5a41ad7b3d600bea192b0c96fdfb725b2c1d84e0375e8acdf64cd982887ef069\" exit_status:137 exited_at:{seconds:1769561447 nanos:543907802}" monitor_name=podsandbox Jan 28 00:50:47.566817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401-rootfs.mount: Deactivated successfully. Jan 28 00:50:47.572814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a41ad7b3d600bea192b0c96fdfb725b2c1d84e0375e8acdf64cd982887ef069-rootfs.mount: Deactivated successfully. Jan 28 00:50:47.584386 containerd[1892]: time="2026-01-28T00:50:47.584188652Z" level=info msg="shim disconnected" id=7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401 namespace=k8s.io Jan 28 00:50:47.584386 containerd[1892]: time="2026-01-28T00:50:47.584220645Z" level=warning msg="cleaning up after shim disconnected" id=7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401 namespace=k8s.io Jan 28 00:50:47.584386 containerd[1892]: time="2026-01-28T00:50:47.584248094Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 00:50:47.584386 containerd[1892]: time="2026-01-28T00:50:47.584286944Z" level=info msg="shim disconnected" id=5a41ad7b3d600bea192b0c96fdfb725b2c1d84e0375e8acdf64cd982887ef069 namespace=k8s.io Jan 28 00:50:47.584386 containerd[1892]: time="2026-01-28T00:50:47.584303392Z" level=warning msg="cleaning up after shim disconnected" id=5a41ad7b3d600bea192b0c96fdfb725b2c1d84e0375e8acdf64cd982887ef069 namespace=k8s.io Jan 28 00:50:47.584386 containerd[1892]: time="2026-01-28T00:50:47.584318329Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 00:50:47.599301 containerd[1892]: time="2026-01-28T00:50:47.597550797Z" level=info msg="received sandbox container exit event sandbox_id:\"7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401\" exit_status:137 exited_at:{seconds:1769561447 nanos:538566295}" monitor_name=criService Jan 28 00:50:47.599301 containerd[1892]: time="2026-01-28T00:50:47.598163210Z" level=info msg="TearDown network for sandbox \"7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401\" successfully" Jan 28 00:50:47.599301 containerd[1892]: time="2026-01-28T00:50:47.598183947Z" level=info msg="StopPodSandbox for \"7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401\" returns successfully" Jan 28 00:50:47.599368 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7de16bfd2830b6e4a668622eab1888999b8b635a1154f0425fa336af0a064401-shm.mount: Deactivated successfully. 
Jan 28 00:50:47.603667 containerd[1892]: time="2026-01-28T00:50:47.603631914Z" level=info msg="received sandbox container exit event sandbox_id:\"5a41ad7b3d600bea192b0c96fdfb725b2c1d84e0375e8acdf64cd982887ef069\" exit_status:137 exited_at:{seconds:1769561447 nanos:543907802}" monitor_name=criService Jan 28 00:50:47.603769 containerd[1892]: time="2026-01-28T00:50:47.603733309Z" level=info msg="TearDown network for sandbox \"5a41ad7b3d600bea192b0c96fdfb725b2c1d84e0375e8acdf64cd982887ef069\" successfully" Jan 28 00:50:47.603769 containerd[1892]: time="2026-01-28T00:50:47.603747150Z" level=info msg="StopPodSandbox for \"5a41ad7b3d600bea192b0c96fdfb725b2c1d84e0375e8acdf64cd982887ef069\" returns successfully" Jan 28 00:50:47.725203 kubelet[3452]: I0128 00:50:47.725074 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c32d953-d1dc-4175-9b3c-0ad23940fc49-cilium-config-path\") pod \"7c32d953-d1dc-4175-9b3c-0ad23940fc49\" (UID: \"7c32d953-d1dc-4175-9b3c-0ad23940fc49\") " Jan 28 00:50:47.726298 kubelet[3452]: I0128 00:50:47.726259 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5xbz\" (UniqueName: \"kubernetes.io/projected/a94ecc52-8097-45ec-977e-59e6e58bdce3-kube-api-access-c5xbz\") pod \"a94ecc52-8097-45ec-977e-59e6e58bdce3\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " Jan 28 00:50:47.726503 kubelet[3452]: I0128 00:50:47.726485 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-cilium-run\") pod \"a94ecc52-8097-45ec-977e-59e6e58bdce3\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " Jan 28 00:50:47.726503 kubelet[3452]: I0128 00:50:47.726504 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-xtables-lock\") pod \"a94ecc52-8097-45ec-977e-59e6e58bdce3\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " Jan 28 00:50:47.726565 kubelet[3452]: I0128 00:50:47.726514 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-host-proc-sys-net\") pod \"a94ecc52-8097-45ec-977e-59e6e58bdce3\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " Jan 28 00:50:47.726565 kubelet[3452]: I0128 00:50:47.726532 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a94ecc52-8097-45ec-977e-59e6e58bdce3-cilium-config-path\") pod \"a94ecc52-8097-45ec-977e-59e6e58bdce3\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " Jan 28 00:50:47.726565 kubelet[3452]: I0128 00:50:47.726543 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbd6c\" (UniqueName: \"kubernetes.io/projected/7c32d953-d1dc-4175-9b3c-0ad23940fc49-kube-api-access-fbd6c\") pod \"7c32d953-d1dc-4175-9b3c-0ad23940fc49\" (UID: \"7c32d953-d1dc-4175-9b3c-0ad23940fc49\") " Jan 28 00:50:47.726565 kubelet[3452]: I0128 00:50:47.726554 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-cilium-cgroup\") pod \"a94ecc52-8097-45ec-977e-59e6e58bdce3\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") 
" Jan 28 00:50:47.726565 kubelet[3452]: I0128 00:50:47.726564 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-hostproc\") pod \"a94ecc52-8097-45ec-977e-59e6e58bdce3\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " Jan 28 00:50:47.726648 kubelet[3452]: I0128 00:50:47.726574 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a94ecc52-8097-45ec-977e-59e6e58bdce3-clustermesh-secrets\") pod \"a94ecc52-8097-45ec-977e-59e6e58bdce3\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " Jan 28 00:50:47.726648 kubelet[3452]: I0128 00:50:47.726583 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-cni-path\") pod \"a94ecc52-8097-45ec-977e-59e6e58bdce3\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " Jan 28 00:50:47.726648 kubelet[3452]: I0128 00:50:47.726594 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-etc-cni-netd\") pod \"a94ecc52-8097-45ec-977e-59e6e58bdce3\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " Jan 28 00:50:47.726648 kubelet[3452]: I0128 00:50:47.726602 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-lib-modules\") pod \"a94ecc52-8097-45ec-977e-59e6e58bdce3\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " Jan 28 00:50:47.726648 kubelet[3452]: I0128 00:50:47.726611 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-bpf-maps\") pod \"a94ecc52-8097-45ec-977e-59e6e58bdce3\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " Jan 28 00:50:47.726648 kubelet[3452]: I0128 00:50:47.726620 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-host-proc-sys-kernel\") pod \"a94ecc52-8097-45ec-977e-59e6e58bdce3\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " Jan 28 00:50:47.726737 kubelet[3452]: I0128 00:50:47.726632 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a94ecc52-8097-45ec-977e-59e6e58bdce3-hubble-tls\") pod \"a94ecc52-8097-45ec-977e-59e6e58bdce3\" (UID: \"a94ecc52-8097-45ec-977e-59e6e58bdce3\") " Jan 28 00:50:47.727724 kubelet[3452]: I0128 00:50:47.727694 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c32d953-d1dc-4175-9b3c-0ad23940fc49-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7c32d953-d1dc-4175-9b3c-0ad23940fc49" (UID: "7c32d953-d1dc-4175-9b3c-0ad23940fc49"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 00:50:47.727840 kubelet[3452]: I0128 00:50:47.727827 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a94ecc52-8097-45ec-977e-59e6e58bdce3" (UID: "a94ecc52-8097-45ec-977e-59e6e58bdce3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 00:50:47.728227 kubelet[3452]: I0128 00:50:47.728197 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-hostproc" (OuterVolumeSpecName: "hostproc") pod "a94ecc52-8097-45ec-977e-59e6e58bdce3" (UID: "a94ecc52-8097-45ec-977e-59e6e58bdce3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 00:50:47.729701 kubelet[3452]: I0128 00:50:47.729676 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-cni-path" (OuterVolumeSpecName: "cni-path") pod "a94ecc52-8097-45ec-977e-59e6e58bdce3" (UID: "a94ecc52-8097-45ec-977e-59e6e58bdce3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 00:50:47.729772 kubelet[3452]: I0128 00:50:47.729706 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a94ecc52-8097-45ec-977e-59e6e58bdce3" (UID: "a94ecc52-8097-45ec-977e-59e6e58bdce3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 00:50:47.729772 kubelet[3452]: I0128 00:50:47.729726 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a94ecc52-8097-45ec-977e-59e6e58bdce3" (UID: "a94ecc52-8097-45ec-977e-59e6e58bdce3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 00:50:47.729772 kubelet[3452]: I0128 00:50:47.729753 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a94ecc52-8097-45ec-977e-59e6e58bdce3" (UID: "a94ecc52-8097-45ec-977e-59e6e58bdce3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 00:50:47.729772 kubelet[3452]: I0128 00:50:47.729762 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a94ecc52-8097-45ec-977e-59e6e58bdce3" (UID: "a94ecc52-8097-45ec-977e-59e6e58bdce3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 00:50:47.729838 kubelet[3452]: I0128 00:50:47.729783 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a94ecc52-8097-45ec-977e-59e6e58bdce3" (UID: "a94ecc52-8097-45ec-977e-59e6e58bdce3"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 00:50:47.729838 kubelet[3452]: I0128 00:50:47.729792 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a94ecc52-8097-45ec-977e-59e6e58bdce3" (UID: "a94ecc52-8097-45ec-977e-59e6e58bdce3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 00:50:47.729838 kubelet[3452]: I0128 00:50:47.729807 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a94ecc52-8097-45ec-977e-59e6e58bdce3" (UID: "a94ecc52-8097-45ec-977e-59e6e58bdce3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 00:50:47.730030 kubelet[3452]: I0128 00:50:47.729995 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a94ecc52-8097-45ec-977e-59e6e58bdce3-kube-api-access-c5xbz" (OuterVolumeSpecName: "kube-api-access-c5xbz") pod "a94ecc52-8097-45ec-977e-59e6e58bdce3" (UID: "a94ecc52-8097-45ec-977e-59e6e58bdce3"). InnerVolumeSpecName "kube-api-access-c5xbz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 00:50:47.731637 kubelet[3452]: I0128 00:50:47.731506 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a94ecc52-8097-45ec-977e-59e6e58bdce3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a94ecc52-8097-45ec-977e-59e6e58bdce3" (UID: "a94ecc52-8097-45ec-977e-59e6e58bdce3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 00:50:47.731707 kubelet[3452]: I0128 00:50:47.731644 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a94ecc52-8097-45ec-977e-59e6e58bdce3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a94ecc52-8097-45ec-977e-59e6e58bdce3" (UID: "a94ecc52-8097-45ec-977e-59e6e58bdce3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 00:50:47.731829 kubelet[3452]: I0128 00:50:47.731806 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c32d953-d1dc-4175-9b3c-0ad23940fc49-kube-api-access-fbd6c" (OuterVolumeSpecName: "kube-api-access-fbd6c") pod "7c32d953-d1dc-4175-9b3c-0ad23940fc49" (UID: "7c32d953-d1dc-4175-9b3c-0ad23940fc49"). InnerVolumeSpecName "kube-api-access-fbd6c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 00:50:47.732508 kubelet[3452]: I0128 00:50:47.732483 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a94ecc52-8097-45ec-977e-59e6e58bdce3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a94ecc52-8097-45ec-977e-59e6e58bdce3" (UID: "a94ecc52-8097-45ec-977e-59e6e58bdce3"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 00:50:47.827391 kubelet[3452]: I0128 00:50:47.827288 3452 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-xtables-lock\") on node \"ci-4459.2.3-n-ec09cdb4df\" DevicePath \"\"" Jan 28 00:50:47.827391 kubelet[3452]: I0128 00:50:47.827329 3452 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-host-proc-sys-net\") on node \"ci-4459.2.3-n-ec09cdb4df\" DevicePath \"\"" Jan 28 00:50:47.827391 kubelet[3452]: I0128 00:50:47.827338 3452 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a94ecc52-8097-45ec-977e-59e6e58bdce3-cilium-config-path\") on node \"ci-4459.2.3-n-ec09cdb4df\" DevicePath \"\"" Jan 28 00:50:47.827391 kubelet[3452]: I0128 00:50:47.827344 3452 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fbd6c\" (UniqueName: \"kubernetes.io/projected/7c32d953-d1dc-4175-9b3c-0ad23940fc49-kube-api-access-fbd6c\") on node \"ci-4459.2.3-n-ec09cdb4df\" DevicePath \"\"" Jan 28 00:50:47.827391 kubelet[3452]: I0128 00:50:47.827352 3452 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-cilium-cgroup\") on node \"ci-4459.2.3-n-ec09cdb4df\" DevicePath \"\"" Jan 28 00:50:47.827391 kubelet[3452]: I0128 00:50:47.827363 3452 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-hostproc\") on node \"ci-4459.2.3-n-ec09cdb4df\" DevicePath \"\"" Jan 28 00:50:47.827391 kubelet[3452]: I0128 00:50:47.827369 3452 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a94ecc52-8097-45ec-977e-59e6e58bdce3-clustermesh-secrets\") on node \"ci-4459.2.3-n-ec09cdb4df\" DevicePath \"\"" Jan 28 00:50:47.827391 kubelet[3452]: I0128 00:50:47.827374 3452 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-cni-path\") on node \"ci-4459.2.3-n-ec09cdb4df\" DevicePath \"\"" Jan 28 00:50:47.827670 kubelet[3452]: I0128 00:50:47.827393 3452 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-etc-cni-netd\") on node \"ci-4459.2.3-n-ec09cdb4df\" DevicePath \"\"" Jan 28 00:50:47.827670 kubelet[3452]: I0128 00:50:47.827401 3452 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-lib-modules\") on node \"ci-4459.2.3-n-ec09cdb4df\" DevicePath \"\"" Jan 28 00:50:47.827670 kubelet[3452]: I0128 00:50:47.827408 3452 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-bpf-maps\") on node \"ci-4459.2.3-n-ec09cdb4df\" DevicePath \"\"" Jan 28 00:50:47.827670 kubelet[3452]: I0128 00:50:47.827413 3452 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-host-proc-sys-kernel\") on node \"ci-4459.2.3-n-ec09cdb4df\" DevicePath \"\"" Jan 28 00:50:47.827670 kubelet[3452]: I0128 00:50:47.827418 3452 
reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a94ecc52-8097-45ec-977e-59e6e58bdce3-hubble-tls\") on node \"ci-4459.2.3-n-ec09cdb4df\" DevicePath \"\"" Jan 28 00:50:47.827670 kubelet[3452]: I0128 00:50:47.827424 3452 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c32d953-d1dc-4175-9b3c-0ad23940fc49-cilium-config-path\") on node \"ci-4459.2.3-n-ec09cdb4df\" DevicePath \"\"" Jan 28 00:50:47.827670 kubelet[3452]: I0128 00:50:47.827431 3452 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c5xbz\" (UniqueName: \"kubernetes.io/projected/a94ecc52-8097-45ec-977e-59e6e58bdce3-kube-api-access-c5xbz\") on node \"ci-4459.2.3-n-ec09cdb4df\" DevicePath \"\"" Jan 28 00:50:47.827670 kubelet[3452]: I0128 00:50:47.827437 3452 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a94ecc52-8097-45ec-977e-59e6e58bdce3-cilium-run\") on node \"ci-4459.2.3-n-ec09cdb4df\" DevicePath \"\"" Jan 28 00:50:47.943753 kubelet[3452]: I0128 00:50:47.943676 3452 scope.go:117] "RemoveContainer" containerID="f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67" Jan 28 00:50:47.946667 containerd[1892]: time="2026-01-28T00:50:47.946621163Z" level=info msg="RemoveContainer for \"f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67\"" Jan 28 00:50:47.948868 systemd[1]: Removed slice kubepods-besteffort-pod7c32d953_d1dc_4175_9b3c_0ad23940fc49.slice - libcontainer container kubepods-besteffort-pod7c32d953_d1dc_4175_9b3c_0ad23940fc49.slice. Jan 28 00:50:47.955577 containerd[1892]: time="2026-01-28T00:50:47.955510046Z" level=info msg="RemoveContainer for \"f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67\" returns successfully" Jan 28 00:50:47.957841 kubelet[3452]: I0128 00:50:47.957791 3452 scope.go:117] "RemoveContainer" containerID="f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67" Jan 28 00:50:47.958148 containerd[1892]: time="2026-01-28T00:50:47.958045715Z" level=error msg="ContainerStatus for \"f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67\": not found" Jan 28 00:50:47.959031 kubelet[3452]: E0128 00:50:47.958216 3452 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67\": not found" containerID="f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67" Jan 28 00:50:47.959031 kubelet[3452]: I0128 00:50:47.958244 3452 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67"} err="failed to get container status \"f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67\": rpc error: code = NotFound desc = an error occurred when try to find container \"f0048bc8fc74726f9840975234532f592ab17bf17df5716798340446aee11b67\": not found" Jan 28 00:50:47.959031 kubelet[3452]: I0128 00:50:47.958273 3452 scope.go:117] "RemoveContainer" containerID="b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc" Jan 28 00:50:47.959223 systemd[1]: Removed slice kubepods-burstable-poda94ecc52_8097_45ec_977e_59e6e58bdce3.slice - 
libcontainer container kubepods-burstable-poda94ecc52_8097_45ec_977e_59e6e58bdce3.slice. Jan 28 00:50:47.959918 systemd[1]: kubepods-burstable-poda94ecc52_8097_45ec_977e_59e6e58bdce3.slice: Consumed 4.426s CPU time, 123.3M memory peak, 128K read from disk, 12.9M written to disk. Jan 28 00:50:47.962958 containerd[1892]: time="2026-01-28T00:50:47.962550978Z" level=info msg="RemoveContainer for \"b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc\"" Jan 28 00:50:47.972875 containerd[1892]: time="2026-01-28T00:50:47.972837604Z" level=info msg="RemoveContainer for \"b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc\" returns successfully" Jan 28 00:50:47.973225 kubelet[3452]: I0128 00:50:47.973193 3452 scope.go:117] "RemoveContainer" containerID="1d451f27ec6e847f24e0a2e08a7a287dfa557f78d784b701ba892ae3a803c9c4" Jan 28 00:50:47.974449 containerd[1892]: time="2026-01-28T00:50:47.974398616Z" level=info msg="RemoveContainer for \"1d451f27ec6e847f24e0a2e08a7a287dfa557f78d784b701ba892ae3a803c9c4\"" Jan 28 00:50:47.983064 containerd[1892]: time="2026-01-28T00:50:47.982950655Z" level=info msg="RemoveContainer for \"1d451f27ec6e847f24e0a2e08a7a287dfa557f78d784b701ba892ae3a803c9c4\" returns successfully" Jan 28 00:50:47.983399 kubelet[3452]: I0128 00:50:47.983279 3452 scope.go:117] "RemoveContainer" containerID="ce73659a2e660a83c4e40f86e2eebf79b8fb3fffce44f3ff6cf5337797341a97" Jan 28 00:50:47.985839 containerd[1892]: time="2026-01-28T00:50:47.985809495Z" level=info msg="RemoveContainer for \"ce73659a2e660a83c4e40f86e2eebf79b8fb3fffce44f3ff6cf5337797341a97\"" Jan 28 00:50:47.996135 containerd[1892]: time="2026-01-28T00:50:47.996100705Z" level=info msg="RemoveContainer for \"ce73659a2e660a83c4e40f86e2eebf79b8fb3fffce44f3ff6cf5337797341a97\" returns successfully" Jan 28 00:50:47.996449 kubelet[3452]: I0128 00:50:47.996331 3452 scope.go:117] "RemoveContainer" containerID="48f707035212f6fe0ff09d72362f5a29afbf105514ddcb08aaa94b120e0e9af3" Jan 28 00:50:47.997760 containerd[1892]: time="2026-01-28T00:50:47.997735800Z" level=info msg="RemoveContainer for \"48f707035212f6fe0ff09d72362f5a29afbf105514ddcb08aaa94b120e0e9af3\"" Jan 28 00:50:48.005506 containerd[1892]: time="2026-01-28T00:50:48.005473843Z" level=info msg="RemoveContainer for \"48f707035212f6fe0ff09d72362f5a29afbf105514ddcb08aaa94b120e0e9af3\" returns successfully" Jan 28 00:50:48.005804 kubelet[3452]: I0128 00:50:48.005710 3452 scope.go:117] "RemoveContainer" containerID="0942fecf7ab90e50bb5d09dd424576cb99e612d277dd9a404e71fb43fd70b2ed" Jan 28 00:50:48.007059 containerd[1892]: time="2026-01-28T00:50:48.006992486Z" level=info msg="RemoveContainer for \"0942fecf7ab90e50bb5d09dd424576cb99e612d277dd9a404e71fb43fd70b2ed\"" Jan 28 00:50:48.020303 containerd[1892]: time="2026-01-28T00:50:48.020270228Z" level=info msg="RemoveContainer for \"0942fecf7ab90e50bb5d09dd424576cb99e612d277dd9a404e71fb43fd70b2ed\" returns successfully" Jan 28 00:50:48.020629 kubelet[3452]: I0128 00:50:48.020498 3452 scope.go:117] "RemoveContainer" containerID="b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc" Jan 28 00:50:48.020918 containerd[1892]: time="2026-01-28T00:50:48.020844127Z" level=error msg="ContainerStatus for \"b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc\": not found" Jan 28 00:50:48.021068 kubelet[3452]: E0128 00:50:48.021031 3452 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc\": not found" containerID="b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc" Jan 28 00:50:48.021149 kubelet[3452]: I0128 00:50:48.021062 3452 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc"} err="failed to get container status \"b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"b3b2dc7f6474cfee9a48776b7524f7cc06adfaf9f89e1bbe00b71b893d3643fc\": not found" Jan 28 00:50:48.021149 kubelet[3452]: I0128 00:50:48.021083 3452 scope.go:117] "RemoveContainer" containerID="1d451f27ec6e847f24e0a2e08a7a287dfa557f78d784b701ba892ae3a803c9c4" Jan 28 00:50:48.021270 containerd[1892]: time="2026-01-28T00:50:48.021240196Z" level=error msg="ContainerStatus for \"1d451f27ec6e847f24e0a2e08a7a287dfa557f78d784b701ba892ae3a803c9c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d451f27ec6e847f24e0a2e08a7a287dfa557f78d784b701ba892ae3a803c9c4\": not found" Jan 28 00:50:48.021362 kubelet[3452]: E0128 00:50:48.021341 3452 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d451f27ec6e847f24e0a2e08a7a287dfa557f78d784b701ba892ae3a803c9c4\": not found" containerID="1d451f27ec6e847f24e0a2e08a7a287dfa557f78d784b701ba892ae3a803c9c4" Jan 28 00:50:48.021401 kubelet[3452]: I0128 00:50:48.021362 3452 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d451f27ec6e847f24e0a2e08a7a287dfa557f78d784b701ba892ae3a803c9c4"} err="failed to get container status \"1d451f27ec6e847f24e0a2e08a7a287dfa557f78d784b701ba892ae3a803c9c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d451f27ec6e847f24e0a2e08a7a287dfa557f78d784b701ba892ae3a803c9c4\": not found" Jan 28 00:50:48.021401 kubelet[3452]: I0128 00:50:48.021373 3452 scope.go:117] "RemoveContainer" containerID="ce73659a2e660a83c4e40f86e2eebf79b8fb3fffce44f3ff6cf5337797341a97" Jan 28 00:50:48.021532 containerd[1892]: time="2026-01-28T00:50:48.021504549Z" level=error msg="ContainerStatus for \"ce73659a2e660a83c4e40f86e2eebf79b8fb3fffce44f3ff6cf5337797341a97\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce73659a2e660a83c4e40f86e2eebf79b8fb3fffce44f3ff6cf5337797341a97\": not found" Jan 28 00:50:48.021678 kubelet[3452]: E0128 00:50:48.021656 3452 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce73659a2e660a83c4e40f86e2eebf79b8fb3fffce44f3ff6cf5337797341a97\": not found" containerID="ce73659a2e660a83c4e40f86e2eebf79b8fb3fffce44f3ff6cf5337797341a97" Jan 28 00:50:48.021798 kubelet[3452]: I0128 00:50:48.021779 3452 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ce73659a2e660a83c4e40f86e2eebf79b8fb3fffce44f3ff6cf5337797341a97"} err="failed to get container status \"ce73659a2e660a83c4e40f86e2eebf79b8fb3fffce44f3ff6cf5337797341a97\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"ce73659a2e660a83c4e40f86e2eebf79b8fb3fffce44f3ff6cf5337797341a97\": not found" Jan 28 00:50:48.021937 kubelet[3452]: I0128 00:50:48.021853 3452 scope.go:117] "RemoveContainer" containerID="48f707035212f6fe0ff09d72362f5a29afbf105514ddcb08aaa94b120e0e9af3" Jan 28 00:50:48.022321 containerd[1892]: time="2026-01-28T00:50:48.022276743Z" level=error msg="ContainerStatus for \"48f707035212f6fe0ff09d72362f5a29afbf105514ddcb08aaa94b120e0e9af3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"48f707035212f6fe0ff09d72362f5a29afbf105514ddcb08aaa94b120e0e9af3\": not found" Jan 28 00:50:48.022513 kubelet[3452]: E0128 00:50:48.022487 3452 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"48f707035212f6fe0ff09d72362f5a29afbf105514ddcb08aaa94b120e0e9af3\": not found" containerID="48f707035212f6fe0ff09d72362f5a29afbf105514ddcb08aaa94b120e0e9af3" Jan 28 00:50:48.022559 kubelet[3452]: I0128 00:50:48.022510 3452 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"48f707035212f6fe0ff09d72362f5a29afbf105514ddcb08aaa94b120e0e9af3"} err="failed to get container status \"48f707035212f6fe0ff09d72362f5a29afbf105514ddcb08aaa94b120e0e9af3\": rpc error: code = NotFound desc = an error occurred when try to find container \"48f707035212f6fe0ff09d72362f5a29afbf105514ddcb08aaa94b120e0e9af3\": not found" Jan 28 00:50:48.022559 kubelet[3452]: I0128 00:50:48.022525 3452 scope.go:117] "RemoveContainer" containerID="0942fecf7ab90e50bb5d09dd424576cb99e612d277dd9a404e71fb43fd70b2ed" Jan 28 00:50:48.022764 containerd[1892]: time="2026-01-28T00:50:48.022665116Z" level=error msg="ContainerStatus for \"0942fecf7ab90e50bb5d09dd424576cb99e612d277dd9a404e71fb43fd70b2ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0942fecf7ab90e50bb5d09dd424576cb99e612d277dd9a404e71fb43fd70b2ed\": not found" Jan 28 00:50:48.022897 kubelet[3452]: E0128 00:50:48.022855 3452 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0942fecf7ab90e50bb5d09dd424576cb99e612d277dd9a404e71fb43fd70b2ed\": not found" containerID="0942fecf7ab90e50bb5d09dd424576cb99e612d277dd9a404e71fb43fd70b2ed" Jan 28 00:50:48.022897 kubelet[3452]: I0128 00:50:48.022877 3452 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0942fecf7ab90e50bb5d09dd424576cb99e612d277dd9a404e71fb43fd70b2ed"} err="failed to get container status \"0942fecf7ab90e50bb5d09dd424576cb99e612d277dd9a404e71fb43fd70b2ed\": rpc error: code = NotFound desc = an error occurred when try to find container \"0942fecf7ab90e50bb5d09dd424576cb99e612d277dd9a404e71fb43fd70b2ed\": not found" Jan 28 00:50:48.463289 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5a41ad7b3d600bea192b0c96fdfb725b2c1d84e0375e8acdf64cd982887ef069-shm.mount: Deactivated successfully. Jan 28 00:50:48.463382 systemd[1]: var-lib-kubelet-pods-7c32d953\x2dd1dc\x2d4175\x2d9b3c\x2d0ad23940fc49-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfbd6c.mount: Deactivated successfully. Jan 28 00:50:48.463428 systemd[1]: var-lib-kubelet-pods-a94ecc52\x2d8097\x2d45ec\x2d977e\x2d59e6e58bdce3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc5xbz.mount: Deactivated successfully. 
Jan 28 00:50:48.463468 systemd[1]: var-lib-kubelet-pods-a94ecc52\x2d8097\x2d45ec\x2d977e\x2d59e6e58bdce3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 28 00:50:48.463505 systemd[1]: var-lib-kubelet-pods-a94ecc52\x2d8097\x2d45ec\x2d977e\x2d59e6e58bdce3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 28 00:50:49.420376 sshd[4986]: Connection closed by 10.200.16.10 port 33806 Jan 28 00:50:49.420917 sshd-session[4983]: pam_unix(sshd:session): session closed for user core Jan 28 00:50:49.423856 systemd-logind[1876]: Session 23 logged out. Waiting for processes to exit. Jan 28 00:50:49.424550 systemd[1]: sshd@20-10.200.20.26:22-10.200.16.10:33806.service: Deactivated successfully. Jan 28 00:50:49.426003 systemd[1]: session-23.scope: Deactivated successfully. Jan 28 00:50:49.428524 systemd-logind[1876]: Removed session 23. Jan 28 00:50:49.504545 systemd[1]: Started sshd@21-10.200.20.26:22-10.200.16.10:49716.service - OpenSSH per-connection server daemon (10.200.16.10:49716). Jan 28 00:50:49.635638 kubelet[3452]: I0128 00:50:49.635591 3452 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c32d953-d1dc-4175-9b3c-0ad23940fc49" path="/var/lib/kubelet/pods/7c32d953-d1dc-4175-9b3c-0ad23940fc49/volumes" Jan 28 00:50:49.636001 kubelet[3452]: I0128 00:50:49.635890 3452 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a94ecc52-8097-45ec-977e-59e6e58bdce3" path="/var/lib/kubelet/pods/a94ecc52-8097-45ec-977e-59e6e58bdce3/volumes" Jan 28 00:50:49.740348 kubelet[3452]: E0128 00:50:49.740158 3452 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 00:50:49.961409 sshd[5131]: Accepted publickey for core from 10.200.16.10 port 49716 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:50:49.962451 sshd-session[5131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:50:49.966220 systemd-logind[1876]: New session 24 of user core. Jan 28 00:50:49.974137 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 28 00:50:50.911257 systemd[1]: Created slice kubepods-burstable-pod2c80cca5_d5f7_4c51_a40e_4a5aadedcdd8.slice - libcontainer container kubepods-burstable-pod2c80cca5_d5f7_4c51_a40e_4a5aadedcdd8.slice. 
Jan 28 00:50:50.940374 sshd[5134]: Connection closed by 10.200.16.10 port 49716
Jan 28 00:50:50.941207 sshd-session[5131]: pam_unix(sshd:session): session closed for user core
Jan 28 00:50:50.945669 kubelet[3452]: I0128 00:50:50.945439 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8-cilium-ipsec-secrets\") pod \"cilium-vzrkw\" (UID: \"2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8\") " pod="kube-system/cilium-vzrkw"
Jan 28 00:50:50.946270 kubelet[3452]: I0128 00:50:50.945891 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8-cilium-run\") pod \"cilium-vzrkw\" (UID: \"2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8\") " pod="kube-system/cilium-vzrkw"
Jan 28 00:50:50.946270 kubelet[3452]: I0128 00:50:50.945947 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8-hostproc\") pod \"cilium-vzrkw\" (UID: \"2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8\") " pod="kube-system/cilium-vzrkw"
Jan 28 00:50:50.946270 kubelet[3452]: I0128 00:50:50.945958 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8-lib-modules\") pod \"cilium-vzrkw\" (UID: \"2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8\") " pod="kube-system/cilium-vzrkw"
Jan 28 00:50:50.946270 kubelet[3452]: I0128 00:50:50.945972 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8-host-proc-sys-net\") pod \"cilium-vzrkw\" (UID: \"2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8\") " pod="kube-system/cilium-vzrkw"
Jan 28 00:50:50.946270 kubelet[3452]: I0128 00:50:50.945987 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gz6n\" (UniqueName: \"kubernetes.io/projected/2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8-kube-api-access-9gz6n\") pod \"cilium-vzrkw\" (UID: \"2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8\") " pod="kube-system/cilium-vzrkw"
Jan 28 00:50:50.946270 kubelet[3452]: I0128 00:50:50.945997 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8-cilium-cgroup\") pod \"cilium-vzrkw\" (UID: \"2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8\") " pod="kube-system/cilium-vzrkw"
Jan 28 00:50:50.945711 systemd[1]: sshd@21-10.200.20.26:22-10.200.16.10:49716.service: Deactivated successfully.
Jan 28 00:50:50.946433 kubelet[3452]: I0128 00:50:50.946032 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8-cni-path\") pod \"cilium-vzrkw\" (UID: \"2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8\") " pod="kube-system/cilium-vzrkw"
Jan 28 00:50:50.946433 kubelet[3452]: I0128 00:50:50.946042 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8-etc-cni-netd\") pod \"cilium-vzrkw\" (UID: \"2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8\") " pod="kube-system/cilium-vzrkw"
Jan 28 00:50:50.946433 kubelet[3452]: I0128 00:50:50.946051 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8-xtables-lock\") pod \"cilium-vzrkw\" (UID: \"2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8\") " pod="kube-system/cilium-vzrkw"
Jan 28 00:50:50.946433 kubelet[3452]: I0128 00:50:50.946059 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8-clustermesh-secrets\") pod \"cilium-vzrkw\" (UID: \"2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8\") " pod="kube-system/cilium-vzrkw"
Jan 28 00:50:50.946433 kubelet[3452]: I0128 00:50:50.946069 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8-host-proc-sys-kernel\") pod \"cilium-vzrkw\" (UID: \"2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8\") " pod="kube-system/cilium-vzrkw"
Jan 28 00:50:50.946433 kubelet[3452]: I0128 00:50:50.946078 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8-bpf-maps\") pod \"cilium-vzrkw\" (UID: \"2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8\") " pod="kube-system/cilium-vzrkw"
Jan 28 00:50:50.946522 kubelet[3452]: I0128 00:50:50.946088 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8-hubble-tls\") pod \"cilium-vzrkw\" (UID: \"2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8\") " pod="kube-system/cilium-vzrkw"
Jan 28 00:50:50.946522 kubelet[3452]: I0128 00:50:50.946097 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8-cilium-config-path\") pod \"cilium-vzrkw\" (UID: \"2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8\") " pod="kube-system/cilium-vzrkw"
Jan 28 00:50:50.946584 systemd-logind[1876]: Session 24 logged out. Waiting for processes to exit.
Jan 28 00:50:50.948583 systemd[1]: session-24.scope: Deactivated successfully.
Jan 28 00:50:50.949860 systemd-logind[1876]: Removed session 24.
Jan 28 00:50:51.029368 systemd[1]: Started sshd@22-10.200.20.26:22-10.200.16.10:49732.service - OpenSSH per-connection server daemon (10.200.16.10:49732).
Jan 28 00:50:51.222574 containerd[1892]: time="2026-01-28T00:50:51.222170237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vzrkw,Uid:2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8,Namespace:kube-system,Attempt:0,}"
Jan 28 00:50:51.257175 containerd[1892]: time="2026-01-28T00:50:51.257127762Z" level=info msg="connecting to shim aba8c6fc18618f5746c8f290d54c72aa15f367f7fade5690e482c68290f51c3a" address="unix:///run/containerd/s/d6918dd1479c98f3ff18a92ca1003c405f61df9b4d6e84b4f45be48f8e173d7e" namespace=k8s.io protocol=ttrpc version=3
Jan 28 00:50:51.274159 systemd[1]: Started cri-containerd-aba8c6fc18618f5746c8f290d54c72aa15f367f7fade5690e482c68290f51c3a.scope - libcontainer container aba8c6fc18618f5746c8f290d54c72aa15f367f7fade5690e482c68290f51c3a.
Jan 28 00:50:51.295644 containerd[1892]: time="2026-01-28T00:50:51.295608374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vzrkw,Uid:2c80cca5-d5f7-4c51-a40e-4a5aadedcdd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"aba8c6fc18618f5746c8f290d54c72aa15f367f7fade5690e482c68290f51c3a\""
Jan 28 00:50:51.303532 containerd[1892]: time="2026-01-28T00:50:51.303490534Z" level=info msg="CreateContainer within sandbox \"aba8c6fc18618f5746c8f290d54c72aa15f367f7fade5690e482c68290f51c3a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 28 00:50:51.320559 containerd[1892]: time="2026-01-28T00:50:51.320118740Z" level=info msg="Container 7becc9f61dc1e64309313b7a53f9e2594475fb213110ff84aae82a2a35506df9: CDI devices from CRI Config.CDIDevices: []"
Jan 28 00:50:51.335212 containerd[1892]: time="2026-01-28T00:50:51.335175182Z" level=info msg="CreateContainer within sandbox \"aba8c6fc18618f5746c8f290d54c72aa15f367f7fade5690e482c68290f51c3a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7becc9f61dc1e64309313b7a53f9e2594475fb213110ff84aae82a2a35506df9\""
Jan 28 00:50:51.336135 containerd[1892]: time="2026-01-28T00:50:51.336117885Z" level=info msg="StartContainer for \"7becc9f61dc1e64309313b7a53f9e2594475fb213110ff84aae82a2a35506df9\""
Jan 28 00:50:51.337106 containerd[1892]: time="2026-01-28T00:50:51.337080037Z" level=info msg="connecting to shim 7becc9f61dc1e64309313b7a53f9e2594475fb213110ff84aae82a2a35506df9" address="unix:///run/containerd/s/d6918dd1479c98f3ff18a92ca1003c405f61df9b4d6e84b4f45be48f8e173d7e" protocol=ttrpc version=3
Jan 28 00:50:51.355169 systemd[1]: Started cri-containerd-7becc9f61dc1e64309313b7a53f9e2594475fb213110ff84aae82a2a35506df9.scope - libcontainer container 7becc9f61dc1e64309313b7a53f9e2594475fb213110ff84aae82a2a35506df9.
Jan 28 00:50:51.381627 containerd[1892]: time="2026-01-28T00:50:51.381595139Z" level=info msg="StartContainer for \"7becc9f61dc1e64309313b7a53f9e2594475fb213110ff84aae82a2a35506df9\" returns successfully"
Jan 28 00:50:51.384383 systemd[1]: cri-containerd-7becc9f61dc1e64309313b7a53f9e2594475fb213110ff84aae82a2a35506df9.scope: Deactivated successfully.
Jan 28 00:50:51.388381 containerd[1892]: time="2026-01-28T00:50:51.388347334Z" level=info msg="received container exit event container_id:\"7becc9f61dc1e64309313b7a53f9e2594475fb213110ff84aae82a2a35506df9\" id:\"7becc9f61dc1e64309313b7a53f9e2594475fb213110ff84aae82a2a35506df9\" pid:5209 exited_at:{seconds:1769561451 nanos:387976602}"
Jan 28 00:50:51.516970 sshd[5144]: Accepted publickey for core from 10.200.16.10 port 49732 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:50:51.517559 sshd-session[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:50:51.522238 systemd-logind[1876]: New session 25 of user core.
Jan 28 00:50:51.529183 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 28 00:50:51.863563 sshd[5240]: Connection closed by 10.200.16.10 port 49732
Jan 28 00:50:51.864285 sshd-session[5144]: pam_unix(sshd:session): session closed for user core
Jan 28 00:50:51.867542 systemd[1]: sshd@22-10.200.20.26:22-10.200.16.10:49732.service: Deactivated successfully.
Jan 28 00:50:51.869168 systemd[1]: session-25.scope: Deactivated successfully.
Jan 28 00:50:51.869811 systemd-logind[1876]: Session 25 logged out. Waiting for processes to exit.
Jan 28 00:50:51.870926 systemd-logind[1876]: Removed session 25.
Jan 28 00:50:51.958085 systemd[1]: Started sshd@23-10.200.20.26:22-10.200.16.10:49748.service - OpenSSH per-connection server daemon (10.200.16.10:49748).
Jan 28 00:50:51.975341 containerd[1892]: time="2026-01-28T00:50:51.975148099Z" level=info msg="CreateContainer within sandbox \"aba8c6fc18618f5746c8f290d54c72aa15f367f7fade5690e482c68290f51c3a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 28 00:50:51.992333 containerd[1892]: time="2026-01-28T00:50:51.991867100Z" level=info msg="Container 6ad05634554f73e1d97e5c1d21de8b12b3ee45dd7743fd3f73674533d0d8ba42: CDI devices from CRI Config.CDIDevices: []"
Jan 28 00:50:52.005273 containerd[1892]: time="2026-01-28T00:50:52.005200404Z" level=info msg="CreateContainer within sandbox \"aba8c6fc18618f5746c8f290d54c72aa15f367f7fade5690e482c68290f51c3a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6ad05634554f73e1d97e5c1d21de8b12b3ee45dd7743fd3f73674533d0d8ba42\""
Jan 28 00:50:52.006063 containerd[1892]: time="2026-01-28T00:50:52.006042472Z" level=info msg="StartContainer for \"6ad05634554f73e1d97e5c1d21de8b12b3ee45dd7743fd3f73674533d0d8ba42\""
Jan 28 00:50:52.006820 containerd[1892]: time="2026-01-28T00:50:52.006792977Z" level=info msg="connecting to shim 6ad05634554f73e1d97e5c1d21de8b12b3ee45dd7743fd3f73674533d0d8ba42" address="unix:///run/containerd/s/d6918dd1479c98f3ff18a92ca1003c405f61df9b4d6e84b4f45be48f8e173d7e" protocol=ttrpc version=3
Jan 28 00:50:52.025172 systemd[1]: Started cri-containerd-6ad05634554f73e1d97e5c1d21de8b12b3ee45dd7743fd3f73674533d0d8ba42.scope - libcontainer container 6ad05634554f73e1d97e5c1d21de8b12b3ee45dd7743fd3f73674533d0d8ba42.
Jan 28 00:50:52.055956 containerd[1892]: time="2026-01-28T00:50:52.055748916Z" level=info msg="StartContainer for \"6ad05634554f73e1d97e5c1d21de8b12b3ee45dd7743fd3f73674533d0d8ba42\" returns successfully"
Jan 28 00:50:52.057168 systemd[1]: cri-containerd-6ad05634554f73e1d97e5c1d21de8b12b3ee45dd7743fd3f73674533d0d8ba42.scope: Deactivated successfully.
Jan 28 00:50:52.058135 containerd[1892]: time="2026-01-28T00:50:52.057761704Z" level=info msg="received container exit event container_id:\"6ad05634554f73e1d97e5c1d21de8b12b3ee45dd7743fd3f73674533d0d8ba42\" id:\"6ad05634554f73e1d97e5c1d21de8b12b3ee45dd7743fd3f73674533d0d8ba42\" pid:5264 exited_at:{seconds:1769561452 nanos:57465438}"
Jan 28 00:50:52.075184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ad05634554f73e1d97e5c1d21de8b12b3ee45dd7743fd3f73674533d0d8ba42-rootfs.mount: Deactivated successfully.
Jan 28 00:50:52.452681 sshd[5247]: Accepted publickey for core from 10.200.16.10 port 49748 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:50:52.453928 sshd-session[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:50:52.458005 systemd-logind[1876]: New session 26 of user core.
Jan 28 00:50:52.466157 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 28 00:50:52.622651 kubelet[3452]: I0128 00:50:52.622604 3452 setters.go:543] "Node became not ready" node="ci-4459.2.3-n-ec09cdb4df" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T00:50:52Z","lastTransitionTime":"2026-01-28T00:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 28 00:50:52.977617 containerd[1892]: time="2026-01-28T00:50:52.977572404Z" level=info msg="CreateContainer within sandbox \"aba8c6fc18618f5746c8f290d54c72aa15f367f7fade5690e482c68290f51c3a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 28 00:50:53.004044 containerd[1892]: time="2026-01-28T00:50:53.003109077Z" level=info msg="Container 8d79ff7181f7df30f7a2216e1c3ca334358bc5c39adbf76a64bb0443266587fa: CDI devices from CRI Config.CDIDevices: []"
Jan 28 00:50:53.003224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1173260679.mount: Deactivated successfully.
Jan 28 00:50:53.022552 containerd[1892]: time="2026-01-28T00:50:53.022505632Z" level=info msg="CreateContainer within sandbox \"aba8c6fc18618f5746c8f290d54c72aa15f367f7fade5690e482c68290f51c3a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8d79ff7181f7df30f7a2216e1c3ca334358bc5c39adbf76a64bb0443266587fa\""
Jan 28 00:50:53.023165 containerd[1892]: time="2026-01-28T00:50:53.023126397Z" level=info msg="StartContainer for \"8d79ff7181f7df30f7a2216e1c3ca334358bc5c39adbf76a64bb0443266587fa\""
Jan 28 00:50:53.024384 containerd[1892]: time="2026-01-28T00:50:53.024338510Z" level=info msg="connecting to shim 8d79ff7181f7df30f7a2216e1c3ca334358bc5c39adbf76a64bb0443266587fa" address="unix:///run/containerd/s/d6918dd1479c98f3ff18a92ca1003c405f61df9b4d6e84b4f45be48f8e173d7e" protocol=ttrpc version=3
Jan 28 00:50:53.045159 systemd[1]: Started cri-containerd-8d79ff7181f7df30f7a2216e1c3ca334358bc5c39adbf76a64bb0443266587fa.scope - libcontainer container 8d79ff7181f7df30f7a2216e1c3ca334358bc5c39adbf76a64bb0443266587fa.
Jan 28 00:50:53.108307 systemd[1]: cri-containerd-8d79ff7181f7df30f7a2216e1c3ca334358bc5c39adbf76a64bb0443266587fa.scope: Deactivated successfully.
Jan 28 00:50:53.111586 containerd[1892]: time="2026-01-28T00:50:53.111553197Z" level=info msg="received container exit event container_id:\"8d79ff7181f7df30f7a2216e1c3ca334358bc5c39adbf76a64bb0443266587fa\" id:\"8d79ff7181f7df30f7a2216e1c3ca334358bc5c39adbf76a64bb0443266587fa\" pid:5312 exited_at:{seconds:1769561453 nanos:111299916}"
Jan 28 00:50:53.113533 containerd[1892]: time="2026-01-28T00:50:53.113471109Z" level=info msg="StartContainer for \"8d79ff7181f7df30f7a2216e1c3ca334358bc5c39adbf76a64bb0443266587fa\" returns successfully"
Jan 28 00:50:53.132904 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d79ff7181f7df30f7a2216e1c3ca334358bc5c39adbf76a64bb0443266587fa-rootfs.mount: Deactivated successfully.
Jan 28 00:50:53.980453 containerd[1892]: time="2026-01-28T00:50:53.980414589Z" level=info msg="CreateContainer within sandbox \"aba8c6fc18618f5746c8f290d54c72aa15f367f7fade5690e482c68290f51c3a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 28 00:50:54.003813 containerd[1892]: time="2026-01-28T00:50:54.003703978Z" level=info msg="Container 4c9fc29e08af97700175389c9761839b636109f081243466b1217d6f5333aefb: CDI devices from CRI Config.CDIDevices: []"
Jan 28 00:50:54.021524 containerd[1892]: time="2026-01-28T00:50:54.021465751Z" level=info msg="CreateContainer within sandbox \"aba8c6fc18618f5746c8f290d54c72aa15f367f7fade5690e482c68290f51c3a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4c9fc29e08af97700175389c9761839b636109f081243466b1217d6f5333aefb\""
Jan 28 00:50:54.022303 containerd[1892]: time="2026-01-28T00:50:54.022227048Z" level=info msg="StartContainer for \"4c9fc29e08af97700175389c9761839b636109f081243466b1217d6f5333aefb\""
Jan 28 00:50:54.022856 containerd[1892]: time="2026-01-28T00:50:54.022834420Z" level=info msg="connecting to shim 4c9fc29e08af97700175389c9761839b636109f081243466b1217d6f5333aefb" address="unix:///run/containerd/s/d6918dd1479c98f3ff18a92ca1003c405f61df9b4d6e84b4f45be48f8e173d7e" protocol=ttrpc version=3
Jan 28 00:50:54.040211 systemd[1]: Started cri-containerd-4c9fc29e08af97700175389c9761839b636109f081243466b1217d6f5333aefb.scope - libcontainer container 4c9fc29e08af97700175389c9761839b636109f081243466b1217d6f5333aefb.
Jan 28 00:50:54.060168 systemd[1]: cri-containerd-4c9fc29e08af97700175389c9761839b636109f081243466b1217d6f5333aefb.scope: Deactivated successfully.
Jan 28 00:50:54.066238 containerd[1892]: time="2026-01-28T00:50:54.066144426Z" level=info msg="received container exit event container_id:\"4c9fc29e08af97700175389c9761839b636109f081243466b1217d6f5333aefb\" id:\"4c9fc29e08af97700175389c9761839b636109f081243466b1217d6f5333aefb\" pid:5350 exited_at:{seconds:1769561454 nanos:60997061}"
Jan 28 00:50:54.071933 containerd[1892]: time="2026-01-28T00:50:54.071837113Z" level=info msg="StartContainer for \"4c9fc29e08af97700175389c9761839b636109f081243466b1217d6f5333aefb\" returns successfully"
Jan 28 00:50:54.131942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c9fc29e08af97700175389c9761839b636109f081243466b1217d6f5333aefb-rootfs.mount: Deactivated successfully.
Jan 28 00:50:54.741685 kubelet[3452]: E0128 00:50:54.741652 3452 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 28 00:50:54.988027 containerd[1892]: time="2026-01-28T00:50:54.987973427Z" level=info msg="CreateContainer within sandbox \"aba8c6fc18618f5746c8f290d54c72aa15f367f7fade5690e482c68290f51c3a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 28 00:50:55.011708 containerd[1892]: time="2026-01-28T00:50:55.011160589Z" level=info msg="Container 4be04a74791a8fd15277b76648478a396d91baa09138d9aaf6c079fa38438262: CDI devices from CRI Config.CDIDevices: []"
Jan 28 00:50:55.043973 containerd[1892]: time="2026-01-28T00:50:55.043928080Z" level=info msg="CreateContainer within sandbox \"aba8c6fc18618f5746c8f290d54c72aa15f367f7fade5690e482c68290f51c3a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4be04a74791a8fd15277b76648478a396d91baa09138d9aaf6c079fa38438262\""
Jan 28 00:50:55.045803 containerd[1892]: time="2026-01-28T00:50:55.045768046Z" level=info msg="StartContainer for \"4be04a74791a8fd15277b76648478a396d91baa09138d9aaf6c079fa38438262\""
Jan 28 00:50:55.046579 containerd[1892]: time="2026-01-28T00:50:55.046536880Z" level=info msg="connecting to shim 4be04a74791a8fd15277b76648478a396d91baa09138d9aaf6c079fa38438262" address="unix:///run/containerd/s/d6918dd1479c98f3ff18a92ca1003c405f61df9b4d6e84b4f45be48f8e173d7e" protocol=ttrpc version=3
Jan 28 00:50:55.064154 systemd[1]: Started cri-containerd-4be04a74791a8fd15277b76648478a396d91baa09138d9aaf6c079fa38438262.scope - libcontainer container 4be04a74791a8fd15277b76648478a396d91baa09138d9aaf6c079fa38438262.
Jan 28 00:50:55.110148 containerd[1892]: time="2026-01-28T00:50:55.110103341Z" level=info msg="StartContainer for \"4be04a74791a8fd15277b76648478a396d91baa09138d9aaf6c079fa38438262\" returns successfully"
Jan 28 00:50:55.465051 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 28 00:50:56.003482 kubelet[3452]: I0128 00:50:56.003424 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vzrkw" podStartSLOduration=6.003409878 podStartE2EDuration="6.003409878s" podCreationTimestamp="2026-01-28 00:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:50:56.003379221 +0000 UTC m=+146.444093286" watchObservedRunningTime="2026-01-28 00:50:56.003409878 +0000 UTC m=+146.444123911"
Jan 28 00:50:57.890046 systemd-networkd[1485]: lxc_health: Link UP
Jan 28 00:50:57.892074 systemd-networkd[1485]: lxc_health: Gained carrier
Jan 28 00:50:58.990364 kubelet[3452]: E0128 00:50:58.990319 3452 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:56368->127.0.0.1:39037: write tcp 127.0.0.1:56368->127.0.0.1:39037: write: connection reset by peer
Jan 28 00:50:59.537309 systemd-networkd[1485]: lxc_health: Gained IPv6LL
Jan 28 00:51:03.233064 sshd[5294]: Connection closed by 10.200.16.10 port 49748
Jan 28 00:51:03.233801 sshd-session[5247]: pam_unix(sshd:session): session closed for user core
Jan 28 00:51:03.237295 systemd[1]: sshd@23-10.200.20.26:22-10.200.16.10:49748.service: Deactivated successfully.
Jan 28 00:51:03.239497 systemd[1]: session-26.scope: Deactivated successfully.
Jan 28 00:51:03.241391 systemd-logind[1876]: Session 26 logged out. Waiting for processes to exit.
Jan 28 00:51:03.243014 systemd-logind[1876]: Removed session 26.