Jan 28 00:50:17.088172 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Jan 28 00:50:17.088191 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Jan 27 22:35:34 -00 2026 Jan 28 00:50:17.088198 kernel: KASLR enabled Jan 28 00:50:17.088202 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 28 00:50:17.088206 kernel: printk: legacy bootconsole [pl11] enabled Jan 28 00:50:17.088211 kernel: efi: EFI v2.7 by EDK II Jan 28 00:50:17.088216 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db83598 Jan 28 00:50:17.088220 kernel: random: crng init done Jan 28 00:50:17.088224 kernel: secureboot: Secure boot disabled Jan 28 00:50:17.088228 kernel: ACPI: Early table checksum verification disabled Jan 28 00:50:17.088232 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL) Jan 28 00:50:17.088236 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:50:17.088240 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:50:17.088244 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 28 00:50:17.088250 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:50:17.088254 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:50:17.088258 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:50:17.088262 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:50:17.088267 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:50:17.088272 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL 
MICROSFT 00000001 MSFT 00000001) Jan 28 00:50:17.088276 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 28 00:50:17.088280 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:50:17.088285 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 28 00:50:17.088289 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 28 00:50:17.088293 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 28 00:50:17.088297 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Jan 28 00:50:17.088301 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Jan 28 00:50:17.088306 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 28 00:50:17.088310 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 28 00:50:17.088314 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 28 00:50:17.088319 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 28 00:50:17.088323 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 28 00:50:17.088328 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 28 00:50:17.088332 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 28 00:50:17.088336 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 28 00:50:17.088340 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 28 00:50:17.088344 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Jan 28 00:50:17.088349 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff] Jan 28 00:50:17.088353 kernel: Zone ranges: Jan 28 00:50:17.088357 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jan 28 00:50:17.088364 kernel: DMA32 empty Jan 28 00:50:17.088368 
kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 28 00:50:17.088373 kernel: Device empty Jan 28 00:50:17.088377 kernel: Movable zone start for each node Jan 28 00:50:17.088381 kernel: Early memory node ranges Jan 28 00:50:17.088386 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 28 00:50:17.088391 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff] Jan 28 00:50:17.088395 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff] Jan 28 00:50:17.088400 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff] Jan 28 00:50:17.088404 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff] Jan 28 00:50:17.088408 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff] Jan 28 00:50:17.088413 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 28 00:50:17.088417 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 28 00:50:17.088421 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 28 00:50:17.088426 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1 Jan 28 00:50:17.088430 kernel: psci: probing for conduit method from ACPI. Jan 28 00:50:17.088434 kernel: psci: PSCIv1.3 detected in firmware. Jan 28 00:50:17.088439 kernel: psci: Using standard PSCI v0.2 function IDs Jan 28 00:50:17.088444 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jan 28 00:50:17.088448 kernel: psci: SMC Calling Convention v1.4 Jan 28 00:50:17.088453 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 28 00:50:17.088457 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 28 00:50:17.088461 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 28 00:50:17.088466 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 28 00:50:17.088470 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 28 00:50:17.088475 kernel: Detected PIPT I-cache on CPU0 Jan 28 00:50:17.088479 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Jan 28 00:50:17.088483 kernel: CPU features: detected: GIC system register CPU interface Jan 28 00:50:17.088488 kernel: CPU features: detected: Spectre-v4 Jan 28 00:50:17.088492 kernel: CPU features: detected: Spectre-BHB Jan 28 00:50:17.088497 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 28 00:50:17.088502 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 28 00:50:17.088506 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Jan 28 00:50:17.088510 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 28 00:50:17.088515 kernel: alternatives: applying boot alternatives Jan 28 00:50:17.088520 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f94df361d6ccbf6d3bccdda215ef8c4de18f0915f7435d65b20126d9bf4aaef1 Jan 28 00:50:17.088525 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 28 00:50:17.088529 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 28 00:50:17.088533 kernel: Fallback order for Node 0: 0 Jan 28 
00:50:17.088538 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Jan 28 00:50:17.088543 kernel: Policy zone: Normal Jan 28 00:50:17.088547 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 28 00:50:17.088552 kernel: software IO TLB: area num 2. Jan 28 00:50:17.088556 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB) Jan 28 00:50:17.088560 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 28 00:50:17.088565 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 28 00:50:17.088570 kernel: rcu: RCU event tracing is enabled. Jan 28 00:50:17.088574 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 28 00:50:17.088579 kernel: Trampoline variant of Tasks RCU enabled. Jan 28 00:50:17.088583 kernel: Tracing variant of Tasks RCU enabled. Jan 28 00:50:17.088587 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 28 00:50:17.088592 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 28 00:50:17.088597 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 28 00:50:17.088602 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 28 00:50:17.088606 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 28 00:50:17.088610 kernel: GICv3: 960 SPIs implemented Jan 28 00:50:17.088615 kernel: GICv3: 0 Extended SPIs implemented Jan 28 00:50:17.088619 kernel: Root IRQ handler: gic_handle_irq Jan 28 00:50:17.088623 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 28 00:50:17.088628 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Jan 28 00:50:17.088632 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 28 00:50:17.088636 kernel: ITS: No ITS available, not enabling LPIs Jan 28 00:50:17.088641 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 28 00:50:17.088646 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Jan 28 00:50:17.088650 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 28 00:50:17.088655 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Jan 28 00:50:17.088659 kernel: Console: colour dummy device 80x25 Jan 28 00:50:17.088664 kernel: printk: legacy console [tty1] enabled Jan 28 00:50:17.088669 kernel: ACPI: Core revision 20240827 Jan 28 00:50:17.088673 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Jan 28 00:50:17.088678 kernel: pid_max: default: 32768 minimum: 301 Jan 28 00:50:17.088683 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 28 00:50:17.088687 kernel: landlock: Up and running. Jan 28 00:50:17.088692 kernel: SELinux: Initializing. Jan 28 00:50:17.088697 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 28 00:50:17.088702 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 28 00:50:17.088706 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1 Jan 28 00:50:17.088711 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0 Jan 28 00:50:17.088719 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 28 00:50:17.088725 kernel: rcu: Hierarchical SRCU implementation. Jan 28 00:50:17.088729 kernel: rcu: Max phase no-delay instances is 400. Jan 28 00:50:17.088734 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 28 00:50:17.088739 kernel: Remapping and enabling EFI services. Jan 28 00:50:17.088744 kernel: smp: Bringing up secondary CPUs ... 
Jan 28 00:50:17.088749 kernel: Detected PIPT I-cache on CPU1 Jan 28 00:50:17.088755 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 28 00:50:17.088759 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Jan 28 00:50:17.088764 kernel: smp: Brought up 1 node, 2 CPUs Jan 28 00:50:17.088769 kernel: SMP: Total of 2 processors activated. Jan 28 00:50:17.088774 kernel: CPU: All CPU(s) started at EL1 Jan 28 00:50:17.088779 kernel: CPU features: detected: 32-bit EL0 Support Jan 28 00:50:17.088784 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 28 00:50:17.088789 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 28 00:50:17.088794 kernel: CPU features: detected: Common not Private translations Jan 28 00:50:17.088798 kernel: CPU features: detected: CRC32 instructions Jan 28 00:50:17.088803 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Jan 28 00:50:17.088808 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 28 00:50:17.088813 kernel: CPU features: detected: LSE atomic instructions Jan 28 00:50:17.088818 kernel: CPU features: detected: Privileged Access Never Jan 28 00:50:17.088823 kernel: CPU features: detected: Speculation barrier (SB) Jan 28 00:50:17.088828 kernel: CPU features: detected: TLB range maintenance instructions Jan 28 00:50:17.088833 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 28 00:50:17.088838 kernel: CPU features: detected: Scalable Vector Extension Jan 28 00:50:17.088842 kernel: alternatives: applying system-wide alternatives Jan 28 00:50:17.088847 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Jan 28 00:50:17.088852 kernel: SVE: maximum available vector length 16 bytes per vector Jan 28 00:50:17.088869 kernel: SVE: default vector length 16 bytes per vector Jan 28 00:50:17.088875 kernel: Memory: 3952828K/4194160K 
available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved) Jan 28 00:50:17.088881 kernel: devtmpfs: initialized Jan 28 00:50:17.088886 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 28 00:50:17.088891 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 28 00:50:17.088895 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 28 00:50:17.088900 kernel: 0 pages in range for non-PLT usage Jan 28 00:50:17.088905 kernel: 508400 pages in range for PLT usage Jan 28 00:50:17.088909 kernel: pinctrl core: initialized pinctrl subsystem Jan 28 00:50:17.088914 kernel: SMBIOS 3.1.0 present. Jan 28 00:50:17.088919 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025 Jan 28 00:50:17.088925 kernel: DMI: Memory slots populated: 2/2 Jan 28 00:50:17.088929 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 28 00:50:17.088934 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 28 00:50:17.088939 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 28 00:50:17.088944 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 28 00:50:17.088949 kernel: audit: initializing netlink subsys (disabled) Jan 28 00:50:17.088954 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Jan 28 00:50:17.088959 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 28 00:50:17.088964 kernel: cpuidle: using governor menu Jan 28 00:50:17.088969 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 28 00:50:17.088974 kernel: ASID allocator initialised with 32768 entries Jan 28 00:50:17.088978 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 28 00:50:17.088983 kernel: Serial: AMBA PL011 UART driver Jan 28 00:50:17.088988 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 28 00:50:17.088993 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 28 00:50:17.088997 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 28 00:50:17.089002 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 28 00:50:17.089008 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 28 00:50:17.089013 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 28 00:50:17.089017 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 28 00:50:17.089022 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 28 00:50:17.089027 kernel: ACPI: Added _OSI(Module Device) Jan 28 00:50:17.089032 kernel: ACPI: Added _OSI(Processor Device) Jan 28 00:50:17.089036 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 28 00:50:17.089041 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 28 00:50:17.089046 kernel: ACPI: Interpreter enabled Jan 28 00:50:17.089051 kernel: ACPI: Using GIC for interrupt routing Jan 28 00:50:17.089056 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 28 00:50:17.089061 kernel: printk: legacy console [ttyAMA0] enabled Jan 28 00:50:17.089066 kernel: printk: legacy bootconsole [pl11] disabled Jan 28 00:50:17.089070 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 28 00:50:17.089075 kernel: ACPI: CPU0 has been hot-added Jan 28 00:50:17.089080 kernel: ACPI: CPU1 has been hot-added Jan 28 00:50:17.089085 kernel: iommu: Default domain type: Translated Jan 28 00:50:17.089089 kernel: iommu: DMA domain TLB invalidation policy: 
strict mode Jan 28 00:50:17.089094 kernel: efivars: Registered efivars operations Jan 28 00:50:17.089100 kernel: vgaarb: loaded Jan 28 00:50:17.089104 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 28 00:50:17.089109 kernel: VFS: Disk quotas dquot_6.6.0 Jan 28 00:50:17.089114 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 28 00:50:17.089118 kernel: pnp: PnP ACPI init Jan 28 00:50:17.089123 kernel: pnp: PnP ACPI: found 0 devices Jan 28 00:50:17.089128 kernel: NET: Registered PF_INET protocol family Jan 28 00:50:17.089133 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 28 00:50:17.089138 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 28 00:50:17.089143 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 28 00:50:17.089148 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 28 00:50:17.089153 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 28 00:50:17.089158 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 28 00:50:17.089162 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 28 00:50:17.089167 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 28 00:50:17.089172 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 28 00:50:17.089177 kernel: PCI: CLS 0 bytes, default 64 Jan 28 00:50:17.089181 kernel: kvm [1]: HYP mode not available Jan 28 00:50:17.089187 kernel: Initialise system trusted keyrings Jan 28 00:50:17.089192 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 28 00:50:17.089196 kernel: Key type asymmetric registered Jan 28 00:50:17.089201 kernel: Asymmetric key parser 'x509' registered Jan 28 00:50:17.089206 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 28 00:50:17.089211 kernel: io scheduler mq-deadline 
registered Jan 28 00:50:17.089215 kernel: io scheduler kyber registered Jan 28 00:50:17.089220 kernel: io scheduler bfq registered Jan 28 00:50:17.089225 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 28 00:50:17.089231 kernel: thunder_xcv, ver 1.0 Jan 28 00:50:17.089235 kernel: thunder_bgx, ver 1.0 Jan 28 00:50:17.089240 kernel: nicpf, ver 1.0 Jan 28 00:50:17.089245 kernel: nicvf, ver 1.0 Jan 28 00:50:17.089361 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 28 00:50:17.089413 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-28T00:50:16 UTC (1769561416) Jan 28 00:50:17.089419 kernel: efifb: probing for efifb Jan 28 00:50:17.089426 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 28 00:50:17.089431 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 28 00:50:17.089435 kernel: efifb: scrolling: redraw Jan 28 00:50:17.089440 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 28 00:50:17.089445 kernel: Console: switching to colour frame buffer device 128x48 Jan 28 00:50:17.089450 kernel: fb0: EFI VGA frame buffer device Jan 28 00:50:17.089455 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... 
Jan 28 00:50:17.089459 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 28 00:50:17.089464 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 28 00:50:17.089470 kernel: watchdog: NMI not fully supported Jan 28 00:50:17.089475 kernel: watchdog: Hard watchdog permanently disabled Jan 28 00:50:17.089480 kernel: NET: Registered PF_INET6 protocol family Jan 28 00:50:17.089484 kernel: Segment Routing with IPv6 Jan 28 00:50:17.089489 kernel: In-situ OAM (IOAM) with IPv6 Jan 28 00:50:17.089494 kernel: NET: Registered PF_PACKET protocol family Jan 28 00:50:17.089499 kernel: Key type dns_resolver registered Jan 28 00:50:17.089503 kernel: registered taskstats version 1 Jan 28 00:50:17.089508 kernel: Loading compiled-in X.509 certificates Jan 28 00:50:17.089513 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 79637fe16a8be85dde8ec0d00305a4ac90a53e25' Jan 28 00:50:17.089519 kernel: Demotion targets for Node 0: null Jan 28 00:50:17.089524 kernel: Key type .fscrypt registered Jan 28 00:50:17.089528 kernel: Key type fscrypt-provisioning registered Jan 28 00:50:17.089533 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 28 00:50:17.089538 kernel: ima: Allocated hash algorithm: sha1 Jan 28 00:50:17.089542 kernel: ima: No architecture policies found Jan 28 00:50:17.089547 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 28 00:50:17.089552 kernel: clk: Disabling unused clocks Jan 28 00:50:17.089557 kernel: PM: genpd: Disabling unused power domains Jan 28 00:50:17.089562 kernel: Warning: unable to open an initial console. 
Jan 28 00:50:17.089567 kernel: Freeing unused kernel memory: 39552K Jan 28 00:50:17.089572 kernel: Run /init as init process Jan 28 00:50:17.089576 kernel: with arguments: Jan 28 00:50:17.089581 kernel: /init Jan 28 00:50:17.089586 kernel: with environment: Jan 28 00:50:17.089590 kernel: HOME=/ Jan 28 00:50:17.089595 kernel: TERM=linux Jan 28 00:50:17.089601 systemd[1]: Successfully made /usr/ read-only. Jan 28 00:50:17.089609 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 00:50:17.089614 systemd[1]: Detected virtualization microsoft. Jan 28 00:50:17.089619 systemd[1]: Detected architecture arm64. Jan 28 00:50:17.089624 systemd[1]: Running in initrd. Jan 28 00:50:17.089629 systemd[1]: No hostname configured, using default hostname. Jan 28 00:50:17.089635 systemd[1]: Hostname set to <localhost>. Jan 28 00:50:17.089640 systemd[1]: Initializing machine ID from random generator. Jan 28 00:50:17.089646 systemd[1]: Queued start job for default target initrd.target. Jan 28 00:50:17.089651 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:50:17.089657 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:50:17.089662 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 28 00:50:17.089667 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 00:50:17.089672 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Jan 28 00:50:17.089678 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 28 00:50:17.089685 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 28 00:50:17.089690 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 28 00:50:17.089696 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:50:17.089701 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:50:17.089706 systemd[1]: Reached target paths.target - Path Units. Jan 28 00:50:17.089711 systemd[1]: Reached target slices.target - Slice Units. Jan 28 00:50:17.089716 systemd[1]: Reached target swap.target - Swaps. Jan 28 00:50:17.089722 systemd[1]: Reached target timers.target - Timer Units. Jan 28 00:50:17.089728 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 00:50:17.089733 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 00:50:17.089738 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 00:50:17.089743 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 28 00:50:17.089748 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:50:17.089754 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 00:50:17.089759 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:50:17.089764 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 00:50:17.089769 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 28 00:50:17.089775 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 00:50:17.089780 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Jan 28 00:50:17.089786 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 28 00:50:17.089791 systemd[1]: Starting systemd-fsck-usr.service... Jan 28 00:50:17.089796 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 00:50:17.089802 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 00:50:17.089818 systemd-journald[225]: Collecting audit messages is disabled. Jan 28 00:50:17.089833 systemd-journald[225]: Journal started Jan 28 00:50:17.089846 systemd-journald[225]: Runtime Journal (/run/log/journal/86cba22516d9476698ae8600e61a8a68) is 8M, max 78.3M, 70.3M free. Jan 28 00:50:17.111326 systemd-modules-load[227]: Inserted module 'overlay' Jan 28 00:50:17.117554 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:50:17.134874 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 00:50:17.134930 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 28 00:50:17.144532 kernel: Bridge firewalling registered Jan 28 00:50:17.146912 systemd-modules-load[227]: Inserted module 'br_netfilter' Jan 28 00:50:17.151380 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 28 00:50:17.161300 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:50:17.166960 systemd[1]: Finished systemd-fsck-usr.service. Jan 28 00:50:17.175279 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 00:50:17.182905 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:50:17.194045 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 28 00:50:17.209310 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 00:50:17.218989 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 00:50:17.233029 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 00:50:17.251238 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:50:17.256121 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 00:50:17.266151 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 00:50:17.273101 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 28 00:50:17.277423 systemd-tmpfiles[250]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 28 00:50:17.304021 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 00:50:17.310254 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:50:17.327664 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 00:50:17.339711 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f94df361d6ccbf6d3bccdda215ef8c4de18f0915f7435d65b20126d9bf4aaef1 Jan 28 00:50:17.371290 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:50:17.398347 systemd-resolved[270]: Positive Trust Anchors: Jan 28 00:50:17.400170 systemd-resolved[270]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 00:50:17.400194 systemd-resolved[270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 00:50:17.401983 systemd-resolved[270]: Defaulting to hostname 'linux'. Jan 28 00:50:17.402656 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 00:50:17.413920 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 00:50:17.484884 kernel: SCSI subsystem initialized Jan 28 00:50:17.491881 kernel: Loading iSCSI transport class v2.0-870. Jan 28 00:50:17.497886 kernel: iscsi: registered transport (tcp) Jan 28 00:50:17.510629 kernel: iscsi: registered transport (qla4xxx) Jan 28 00:50:17.510642 kernel: QLogic iSCSI HBA Driver Jan 28 00:50:17.525142 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 00:50:17.549817 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 00:50:17.556337 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 00:50:17.608984 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 28 00:50:17.614142 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 28 00:50:17.671878 kernel: raid6: neonx8 gen() 18530 MB/s Jan 28 00:50:17.690866 kernel: raid6: neonx4 gen() 18556 MB/s Jan 28 00:50:17.709866 kernel: raid6: neonx2 gen() 17062 MB/s Jan 28 00:50:17.729867 kernel: raid6: neonx1 gen() 14988 MB/s Jan 28 00:50:17.748868 kernel: raid6: int64x8 gen() 10516 MB/s Jan 28 00:50:17.768868 kernel: raid6: int64x4 gen() 10605 MB/s Jan 28 00:50:17.789882 kernel: raid6: int64x2 gen() 8975 MB/s Jan 28 00:50:17.808340 kernel: raid6: int64x1 gen() 6783 MB/s Jan 28 00:50:17.808354 kernel: raid6: using algorithm neonx4 gen() 18556 MB/s Jan 28 00:50:17.830417 kernel: raid6: .... xor() 15134 MB/s, rmw enabled Jan 28 00:50:17.830426 kernel: raid6: using neon recovery algorithm Jan 28 00:50:17.839812 kernel: xor: measuring software checksum speed Jan 28 00:50:17.839821 kernel: 8regs : 28647 MB/sec Jan 28 00:50:17.842718 kernel: 32regs : 28804 MB/sec Jan 28 00:50:17.848721 kernel: arm64_neon : 35363 MB/sec Jan 28 00:50:17.848728 kernel: xor: using function: arm64_neon (35363 MB/sec) Jan 28 00:50:17.886879 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 28 00:50:17.892496 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 28 00:50:17.901324 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:50:17.927981 systemd-udevd[474]: Using default interface naming scheme 'v255'. Jan 28 00:50:17.931913 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:50:17.944046 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 28 00:50:17.976442 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation Jan 28 00:50:17.997094 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 00:50:18.002426 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 00:50:18.047577 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 28 00:50:18.053884 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 28 00:50:18.128043 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 00:50:18.135943 kernel: hv_vmbus: Vmbus version:5.3 Jan 28 00:50:18.132212 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:50:18.144439 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:50:18.184391 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 28 00:50:18.184408 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 28 00:50:18.184415 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 28 00:50:18.184422 kernel: hv_vmbus: registering driver hv_netvsc Jan 28 00:50:18.184430 kernel: hv_vmbus: registering driver hid_hyperv Jan 28 00:50:18.184436 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 28 00:50:18.184443 kernel: hv_vmbus: registering driver hv_storvsc Jan 28 00:50:18.171675 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:50:18.213938 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 28 00:50:18.213956 kernel: PTP clock support registered Jan 28 00:50:18.213963 kernel: scsi host0: storvsc_host_t Jan 28 00:50:18.214100 kernel: scsi host1: storvsc_host_t Jan 28 00:50:18.214117 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 28 00:50:18.191928 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 28 00:50:18.231152 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 28 00:50:18.210031 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 28 00:50:18.245487 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 28 00:50:18.210102 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:50:18.230992 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:50:18.264771 kernel: hv_utils: Registering HyperV Utility Driver Jan 28 00:50:18.264797 kernel: hv_vmbus: registering driver hv_utils Jan 28 00:50:18.273919 kernel: hv_utils: Heartbeat IC version 3.0 Jan 28 00:50:18.273945 kernel: hv_utils: Shutdown IC version 3.2 Jan 28 00:50:18.273953 kernel: hv_utils: TimeSync IC version 4.0 Jan 28 00:50:18.090757 systemd-resolved[270]: Clock change detected. Flushing caches. Jan 28 00:50:18.113250 kernel: hv_netvsc 7ced8d88-3d12-7ced-8d88-3d127ced8d88 eth0: VF slot 1 added Jan 28 00:50:18.113353 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 28 00:50:18.113430 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 28 00:50:18.113435 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 28 00:50:18.113518 systemd-journald[225]: Time jumped backwards, rotating. Jan 28 00:50:18.113544 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 28 00:50:18.098644 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 28 00:50:18.127440 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 28 00:50:18.127577 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 28 00:50:18.135064 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 28 00:50:18.135206 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 28 00:50:18.141139 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#259 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 28 00:50:18.147600 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 28 00:50:18.159510 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 28 00:50:18.162621 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 28 00:50:18.167531 kernel: hv_vmbus: registering driver hv_pci Jan 28 00:50:18.173781 kernel: hv_pci d46e08cf-a72c-4304-9243-c51c677770ab: PCI VMBus probing: Using version 0x10004 Jan 28 00:50:18.173927 kernel: hv_pci d46e08cf-a72c-4304-9243-c51c677770ab: PCI host bridge to bus a72c:00 Jan 28 00:50:18.182350 kernel: pci_bus a72c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 28 00:50:18.187269 kernel: pci_bus a72c:00: No busn resource found for root bus, will use [bus 00-ff] Jan 28 00:50:18.193849 kernel: pci a72c:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Jan 28 00:50:18.198522 kernel: pci a72c:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 28 00:50:18.202574 kernel: pci a72c:00:02.0: enabling Extended Tags Jan 28 00:50:18.216585 kernel: pci a72c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at a72c:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Jan 28 00:50:18.225863 kernel: pci_bus a72c:00: busn_res: [bus 00-ff] end is updated to 00 Jan 28 00:50:18.226009 kernel: pci a72c:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Jan 28 00:50:18.242562 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#308 cmd 0x85 status: scsi 0x2 
srb 0x6 hv 0xc0000001 Jan 28 00:50:18.264741 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 28 00:50:18.298217 kernel: mlx5_core a72c:00:02.0: enabling device (0000 -> 0002) Jan 28 00:50:18.306611 kernel: mlx5_core a72c:00:02.0: PTM is not supported by PCIe Jan 28 00:50:18.306750 kernel: mlx5_core a72c:00:02.0: firmware version: 16.30.5026 Jan 28 00:50:18.477440 kernel: hv_netvsc 7ced8d88-3d12-7ced-8d88-3d127ced8d88 eth0: VF registering: eth1 Jan 28 00:50:18.477660 kernel: mlx5_core a72c:00:02.0 eth1: joined to eth0 Jan 28 00:50:18.484306 kernel: mlx5_core a72c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 28 00:50:18.493514 kernel: mlx5_core a72c:00:02.0 enP42796s1: renamed from eth1 Jan 28 00:50:18.678504 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 28 00:50:18.716695 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 28 00:50:18.783622 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 28 00:50:18.788849 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 28 00:50:18.802067 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 28 00:50:18.869190 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 28 00:50:18.995850 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 28 00:50:19.004800 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 00:50:19.009874 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:50:19.019658 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 00:50:19.032669 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
Jan 28 00:50:19.058134 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 28 00:50:19.844603 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#107 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 28 00:50:19.860567 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 28 00:50:19.860617 disk-uuid[645]: The operation has completed successfully. Jan 28 00:50:19.925162 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 28 00:50:19.927316 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 28 00:50:19.958604 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 28 00:50:19.975900 sh[822]: Success Jan 28 00:50:20.010956 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 28 00:50:20.010998 kernel: device-mapper: uevent: version 1.0.3 Jan 28 00:50:20.015940 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 28 00:50:20.024615 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 28 00:50:20.318927 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 28 00:50:20.329922 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 28 00:50:20.338558 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 28 00:50:20.362516 kernel: BTRFS: device fsid a5f8185f-aa1a-4e36-bd3e-ad4fa971117f devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (840) Jan 28 00:50:20.372619 kernel: BTRFS info (device dm-0): first mount of filesystem a5f8185f-aa1a-4e36-bd3e-ad4fa971117f Jan 28 00:50:20.372764 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 28 00:50:20.678442 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 28 00:50:20.678526 kernel: BTRFS info (device dm-0): enabling free space tree Jan 28 00:50:20.719932 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 28 00:50:20.724090 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 28 00:50:20.731481 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 28 00:50:20.732193 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 28 00:50:20.757194 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 28 00:50:20.793315 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (871) Jan 28 00:50:20.793367 kernel: BTRFS info (device sda6): first mount of filesystem cdd8ade3-84ac-4b21-9ebd-f498f4c3bfc9 Jan 28 00:50:20.798244 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 28 00:50:20.837779 kernel: BTRFS info (device sda6): turning on async discard Jan 28 00:50:20.837843 kernel: BTRFS info (device sda6): enabling free space tree Jan 28 00:50:20.847572 kernel: BTRFS info (device sda6): last unmount of filesystem cdd8ade3-84ac-4b21-9ebd-f498f4c3bfc9 Jan 28 00:50:20.848850 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 28 00:50:20.854987 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 28 00:50:20.880480 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 00:50:20.892040 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 00:50:20.929486 systemd-networkd[1009]: lo: Link UP Jan 28 00:50:20.929510 systemd-networkd[1009]: lo: Gained carrier Jan 28 00:50:20.930230 systemd-networkd[1009]: Enumeration completed Jan 28 00:50:20.932314 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 00:50:20.932479 systemd-networkd[1009]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:50:20.932482 systemd-networkd[1009]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 00:50:20.940389 systemd[1]: Reached target network.target - Network. Jan 28 00:50:21.009517 kernel: mlx5_core a72c:00:02.0 enP42796s1: Link up Jan 28 00:50:21.043511 kernel: hv_netvsc 7ced8d88-3d12-7ced-8d88-3d127ced8d88 eth0: Data path switched to VF: enP42796s1 Jan 28 00:50:21.044166 systemd-networkd[1009]: enP42796s1: Link UP Jan 28 00:50:21.044228 systemd-networkd[1009]: eth0: Link UP Jan 28 00:50:21.044294 systemd-networkd[1009]: eth0: Gained carrier Jan 28 00:50:21.044308 systemd-networkd[1009]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:50:21.051686 systemd-networkd[1009]: enP42796s1: Gained carrier Jan 28 00:50:21.077548 systemd-networkd[1009]: eth0: DHCPv4 address 10.200.20.13/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 28 00:50:22.015341 ignition[988]: Ignition 2.22.0 Jan 28 00:50:22.015356 ignition[988]: Stage: fetch-offline Jan 28 00:50:22.017970 ignition[988]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:50:22.019426 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 28 00:50:22.017978 ignition[988]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 00:50:22.027211 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 28 00:50:22.018062 ignition[988]: parsed url from cmdline: "" Jan 28 00:50:22.018065 ignition[988]: no config URL provided Jan 28 00:50:22.018068 ignition[988]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 00:50:22.018073 ignition[988]: no config at "/usr/lib/ignition/user.ign" Jan 28 00:50:22.018076 ignition[988]: failed to fetch config: resource requires networking Jan 28 00:50:22.018200 ignition[988]: Ignition finished successfully Jan 28 00:50:22.064120 ignition[1018]: Ignition 2.22.0 Jan 28 00:50:22.064125 ignition[1018]: Stage: fetch Jan 28 00:50:22.064316 ignition[1018]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:50:22.064323 ignition[1018]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 00:50:22.064396 ignition[1018]: parsed url from cmdline: "" Jan 28 00:50:22.064398 ignition[1018]: no config URL provided Jan 28 00:50:22.064401 ignition[1018]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 00:50:22.064406 ignition[1018]: no config at "/usr/lib/ignition/user.ign" Jan 28 00:50:22.064421 ignition[1018]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 28 00:50:22.127915 ignition[1018]: GET result: OK Jan 28 00:50:22.127991 ignition[1018]: config has been read from IMDS userdata Jan 28 00:50:22.128013 ignition[1018]: parsing config with SHA512: 3e29f08a3bfb93c317abeb6d90dc4f7a031591bd37c0c8ef19d60a2666dc9e24742f55293afa5ce6879cd523c111f0d9c0e92caad5866882a93ddbad85aa7bdd Jan 28 00:50:22.131716 unknown[1018]: fetched base config from "system" Jan 28 00:50:22.131987 ignition[1018]: fetch: fetch complete Jan 28 00:50:22.131721 unknown[1018]: fetched base config from "system" Jan 28 00:50:22.131990 ignition[1018]: fetch: fetch passed Jan 28 
00:50:22.131724 unknown[1018]: fetched user config from "azure" Jan 28 00:50:22.132033 ignition[1018]: Ignition finished successfully Jan 28 00:50:22.135760 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 28 00:50:22.144641 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 28 00:50:22.182679 ignition[1025]: Ignition 2.22.0 Jan 28 00:50:22.182690 ignition[1025]: Stage: kargs Jan 28 00:50:22.186822 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 28 00:50:22.182858 ignition[1025]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:50:22.193790 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 28 00:50:22.182866 ignition[1025]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 00:50:22.183421 ignition[1025]: kargs: kargs passed Jan 28 00:50:22.183467 ignition[1025]: Ignition finished successfully Jan 28 00:50:22.227283 ignition[1031]: Ignition 2.22.0 Jan 28 00:50:22.227299 ignition[1031]: Stage: disks Jan 28 00:50:22.231255 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 28 00:50:22.227482 ignition[1031]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:50:22.238247 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 28 00:50:22.227489 ignition[1031]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 00:50:22.246423 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 00:50:22.228007 ignition[1031]: disks: disks passed Jan 28 00:50:22.255044 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 00:50:22.228046 ignition[1031]: Ignition finished successfully Jan 28 00:50:22.263535 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 00:50:22.272179 systemd[1]: Reached target basic.target - Basic System. Jan 28 00:50:22.281390 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 28 00:50:22.368363 systemd-fsck[1040]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jan 28 00:50:22.377672 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 28 00:50:22.384343 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 28 00:50:22.616530 kernel: EXT4-fs (sda9): mounted filesystem e7dac9ee-22c5-4146-a097-e1ea6c8c1663 r/w with ordered data mode. Quota mode: none. Jan 28 00:50:22.616645 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 28 00:50:22.620558 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 28 00:50:22.644705 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 00:50:22.649268 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 28 00:50:22.668787 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 28 00:50:22.679274 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 28 00:50:22.679307 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 00:50:22.686073 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 28 00:50:22.700751 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 28 00:50:22.720567 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1054) Jan 28 00:50:22.707420 systemd-networkd[1009]: eth0: Gained IPv6LL Jan 28 00:50:22.732969 kernel: BTRFS info (device sda6): first mount of filesystem cdd8ade3-84ac-4b21-9ebd-f498f4c3bfc9 Jan 28 00:50:22.732995 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 28 00:50:22.742435 kernel: BTRFS info (device sda6): turning on async discard Jan 28 00:50:22.742452 kernel: BTRFS info (device sda6): enabling free space tree Jan 28 00:50:22.744649 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 28 00:50:23.289681 coreos-metadata[1056]: Jan 28 00:50:23.289 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 28 00:50:23.297482 coreos-metadata[1056]: Jan 28 00:50:23.297 INFO Fetch successful Jan 28 00:50:23.301890 coreos-metadata[1056]: Jan 28 00:50:23.301 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 28 00:50:23.310350 coreos-metadata[1056]: Jan 28 00:50:23.310 INFO Fetch successful Jan 28 00:50:23.330560 coreos-metadata[1056]: Jan 28 00:50:23.330 INFO wrote hostname ci-4459.2.3-n-42917f0d29 to /sysroot/etc/hostname Jan 28 00:50:23.338118 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 28 00:50:23.606862 initrd-setup-root[1084]: cut: /sysroot/etc/passwd: No such file or directory Jan 28 00:50:23.663833 initrd-setup-root[1091]: cut: /sysroot/etc/group: No such file or directory Jan 28 00:50:23.701307 initrd-setup-root[1098]: cut: /sysroot/etc/shadow: No such file or directory Jan 28 00:50:23.709204 initrd-setup-root[1105]: cut: /sysroot/etc/gshadow: No such file or directory Jan 28 00:50:24.827213 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 28 00:50:24.832812 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 28 00:50:24.852237 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 28 00:50:24.860439 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 28 00:50:24.874209 kernel: BTRFS info (device sda6): last unmount of filesystem cdd8ade3-84ac-4b21-9ebd-f498f4c3bfc9 Jan 28 00:50:24.891012 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 28 00:50:24.902908 ignition[1174]: INFO : Ignition 2.22.0 Jan 28 00:50:24.902908 ignition[1174]: INFO : Stage: mount Jan 28 00:50:24.909708 ignition[1174]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:50:24.909708 ignition[1174]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 00:50:24.909708 ignition[1174]: INFO : mount: mount passed Jan 28 00:50:24.909708 ignition[1174]: INFO : Ignition finished successfully Jan 28 00:50:24.907556 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 28 00:50:24.915449 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 00:50:24.941590 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 00:50:24.970510 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1185) Jan 28 00:50:24.980351 kernel: BTRFS info (device sda6): first mount of filesystem cdd8ade3-84ac-4b21-9ebd-f498f4c3bfc9 Jan 28 00:50:24.980374 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 28 00:50:24.989519 kernel: BTRFS info (device sda6): turning on async discard Jan 28 00:50:24.989566 kernel: BTRFS info (device sda6): enabling free space tree Jan 28 00:50:24.991074 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 28 00:50:25.020521 ignition[1202]: INFO : Ignition 2.22.0 Jan 28 00:50:25.020521 ignition[1202]: INFO : Stage: files Jan 28 00:50:25.020521 ignition[1202]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:50:25.020521 ignition[1202]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 00:50:25.036006 ignition[1202]: DEBUG : files: compiled without relabeling support, skipping Jan 28 00:50:25.041027 ignition[1202]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 00:50:25.041027 ignition[1202]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 00:50:25.103620 ignition[1202]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 00:50:25.109347 ignition[1202]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 00:50:25.109347 ignition[1202]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 00:50:25.104564 unknown[1202]: wrote ssh authorized keys file for user: core Jan 28 00:50:25.124299 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 28 00:50:25.132301 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 28 00:50:25.156300 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 28 00:50:25.276245 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 28 00:50:25.276245 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 28 00:50:25.291275 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 28 00:50:25.471103 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 28 00:50:25.575886 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 28 00:50:25.575886 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 28 00:50:25.575886 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 00:50:25.575886 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 00:50:25.604513 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 00:50:25.604513 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 00:50:25.604513 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 00:50:25.604513 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 00:50:25.604513 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 00:50:25.642287 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 00:50:25.642287 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 00:50:25.642287 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 28 00:50:25.642287 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 28 00:50:25.642287 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 28 00:50:25.642287 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 28 00:50:26.018128 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 28 00:50:26.312596 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 28 00:50:26.312596 ignition[1202]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 28 00:50:26.353096 ignition[1202]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 00:50:26.361612 ignition[1202]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 00:50:26.361612 ignition[1202]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 28 00:50:26.361612 ignition[1202]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 28 00:50:26.361612 ignition[1202]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 00:50:26.361612 ignition[1202]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 00:50:26.361612 
ignition[1202]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 00:50:26.361612 ignition[1202]: INFO : files: files passed Jan 28 00:50:26.361612 ignition[1202]: INFO : Ignition finished successfully Jan 28 00:50:26.361380 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 28 00:50:26.375172 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 28 00:50:26.405430 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 00:50:26.417827 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 28 00:50:26.417902 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 28 00:50:26.452480 initrd-setup-root-after-ignition[1234]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:50:26.458887 initrd-setup-root-after-ignition[1231]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:50:26.458887 initrd-setup-root-after-ignition[1231]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:50:26.453545 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 00:50:26.464390 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 28 00:50:26.475746 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 28 00:50:26.524231 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 00:50:26.524328 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 28 00:50:26.533577 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 00:50:26.542726 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jan 28 00:50:26.550870 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 00:50:26.552655 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 00:50:26.588076 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 00:50:26.594642 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 00:50:26.618854 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 00:50:26.623721 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:50:26.632712 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 00:50:26.641398 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 00:50:26.641513 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 00:50:26.653667 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 00:50:26.658218 systemd[1]: Stopped target basic.target - Basic System. Jan 28 00:50:26.666551 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 00:50:26.675111 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 00:50:26.683291 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 00:50:26.691905 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 28 00:50:26.700838 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 00:50:26.709082 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 00:50:26.718599 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 00:50:26.726985 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 00:50:26.736401 systemd[1]: Stopped target swap.target - Swaps. 
Jan 28 00:50:26.743929 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 00:50:26.744070 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 00:50:26.755440 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:50:26.760499 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:50:26.769454 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 00:50:26.773447 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:50:26.779081 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 00:50:26.779214 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 00:50:26.792251 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 00:50:26.792389 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 00:50:26.801152 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 00:50:26.801261 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 00:50:26.810826 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 28 00:50:26.876829 ignition[1256]: INFO : Ignition 2.22.0 Jan 28 00:50:26.876829 ignition[1256]: INFO : Stage: umount Jan 28 00:50:26.876829 ignition[1256]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:50:26.876829 ignition[1256]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 00:50:26.876829 ignition[1256]: INFO : umount: umount passed Jan 28 00:50:26.876829 ignition[1256]: INFO : Ignition finished successfully Jan 28 00:50:26.810931 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 28 00:50:26.821610 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 00:50:26.835726 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jan 28 00:50:26.845896 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 00:50:26.846061 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:50:26.866937 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 00:50:26.867035 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 00:50:26.877181 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 00:50:26.877274 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 00:50:26.886017 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 00:50:26.886262 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 00:50:26.897827 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 00:50:26.897890 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 00:50:26.907596 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 28 00:50:26.907650 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 28 00:50:26.916288 systemd[1]: Stopped target network.target - Network. Jan 28 00:50:26.923322 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 00:50:26.923380 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 00:50:26.933473 systemd[1]: Stopped target paths.target - Path Units. Jan 28 00:50:26.941135 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 00:50:26.944512 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:50:26.950157 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 00:50:26.957904 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 00:50:26.967896 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 00:50:26.967941 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 28 00:50:26.977803 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 00:50:26.977844 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 00:50:26.986574 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 00:50:26.986632 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 00:50:26.994356 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 00:50:26.994382 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 00:50:27.003147 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 00:50:27.010841 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 28 00:50:27.021000 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 00:50:27.024136 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 00:50:27.024232 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 00:50:27.034327 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 00:50:27.034415 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 00:50:27.227624 kernel: hv_netvsc 7ced8d88-3d12-7ced-8d88-3d127ced8d88 eth0: Data path switched from VF: enP42796s1 Jan 28 00:50:27.050532 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 28 00:50:27.050743 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 00:50:27.050840 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 00:50:27.066635 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 28 00:50:27.068896 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 28 00:50:27.075331 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 00:50:27.075376 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Jan 28 00:50:27.085696 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 00:50:27.100418 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 00:50:27.104013 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 00:50:27.113485 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 00:50:27.113553 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:50:27.125071 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 00:50:27.125114 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 00:50:27.129755 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 00:50:27.129785 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:50:27.142707 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:50:27.150365 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 28 00:50:27.150424 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 28 00:50:27.177992 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 00:50:27.178122 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:50:27.184394 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 00:50:27.184464 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 00:50:27.193267 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 00:50:27.193309 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:50:27.203160 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 00:50:27.203217 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jan 28 00:50:27.216698 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 00:50:27.216752 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 00:50:27.233236 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 00:50:27.233285 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 00:50:27.243394 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 00:50:27.254455 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 28 00:50:27.254532 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 00:50:27.269023 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 00:50:27.269074 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:50:27.279165 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 28 00:50:27.279215 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 00:50:27.293828 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 28 00:50:27.293878 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:50:27.299605 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 00:50:27.299646 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:50:27.314724 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 28 00:50:27.314770 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jan 28 00:50:27.314793 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
Jan 28 00:50:27.314820 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 28 00:50:27.315123 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 00:50:27.315220 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 00:50:27.327105 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 00:50:27.327183 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 00:50:27.410878 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 00:50:27.411021 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 00:50:27.416687 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 00:50:27.426706 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 00:50:27.426767 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 00:50:27.440615 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 00:50:27.559560 systemd-journald[225]: Received SIGTERM from PID 1 (systemd). Jan 28 00:50:27.463151 systemd[1]: Switching root. 
Jan 28 00:50:27.563047 systemd-journald[225]: Journal stopped Jan 28 00:50:32.856367 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 00:50:32.856388 kernel: SELinux: policy capability open_perms=1 Jan 28 00:50:32.856396 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 00:50:32.856401 kernel: SELinux: policy capability always_check_network=0 Jan 28 00:50:32.856406 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 00:50:32.856413 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 00:50:32.856419 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 00:50:32.856425 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 00:50:32.856430 kernel: SELinux: policy capability userspace_initial_context=0 Jan 28 00:50:32.856436 kernel: audit: type=1403 audit(1769561428.878:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 28 00:50:32.856443 systemd[1]: Successfully loaded SELinux policy in 274.545ms. Jan 28 00:50:32.856450 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.387ms. Jan 28 00:50:32.856457 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 00:50:32.856463 systemd[1]: Detected virtualization microsoft. Jan 28 00:50:32.856470 systemd[1]: Detected architecture arm64. Jan 28 00:50:32.856476 systemd[1]: Detected first boot. Jan 28 00:50:32.856483 systemd[1]: Hostname set to . Jan 28 00:50:32.856490 systemd[1]: Initializing machine ID from random generator. Jan 28 00:50:32.859551 zram_generator::config[1298]: No configuration found. 
Jan 28 00:50:32.859562 kernel: NET: Registered PF_VSOCK protocol family Jan 28 00:50:32.859569 systemd[1]: Populated /etc with preset unit settings. Jan 28 00:50:32.859577 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 28 00:50:32.859583 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 28 00:50:32.859595 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 28 00:50:32.859601 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 28 00:50:32.859608 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 00:50:32.859615 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 28 00:50:32.859621 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 00:50:32.859627 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 00:50:32.859634 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 00:50:32.859641 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 00:50:32.859648 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 00:50:32.859658 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 00:50:32.859665 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:50:32.859671 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:50:32.859677 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 28 00:50:32.859683 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 00:50:32.859690 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 28 00:50:32.859698 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 00:50:32.859704 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 28 00:50:32.859713 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:50:32.859719 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:50:32.859725 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 28 00:50:32.859732 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 28 00:50:32.859738 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 28 00:50:32.859745 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 00:50:32.859752 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:50:32.859758 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 00:50:32.859764 systemd[1]: Reached target slices.target - Slice Units. Jan 28 00:50:32.859771 systemd[1]: Reached target swap.target - Swaps. Jan 28 00:50:32.859777 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 00:50:32.859783 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 00:50:32.859791 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 28 00:50:32.859799 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:50:32.859805 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 00:50:32.859811 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:50:32.859818 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 00:50:32.859824 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 28 00:50:32.859831 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 00:50:32.859839 systemd[1]: Mounting media.mount - External Media Directory... Jan 28 00:50:32.859845 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 00:50:32.859851 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 00:50:32.859858 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 00:50:32.859865 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 28 00:50:32.859871 systemd[1]: Reached target machines.target - Containers. Jan 28 00:50:32.859878 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 00:50:32.859884 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:50:32.859891 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 00:50:32.859898 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 00:50:32.859904 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:50:32.859910 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 00:50:32.859917 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:50:32.859923 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 28 00:50:32.859929 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:50:32.859937 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 00:50:32.859943 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jan 28 00:50:32.859951 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 28 00:50:32.859957 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 28 00:50:32.859963 systemd[1]: Stopped systemd-fsck-usr.service. Jan 28 00:50:32.859970 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 00:50:32.859977 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 00:50:32.859983 kernel: fuse: init (API version 7.41) Jan 28 00:50:32.859989 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 00:50:32.859995 kernel: ACPI: bus type drm_connector registered Jan 28 00:50:32.860002 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 00:50:32.860038 systemd-journald[1388]: Collecting audit messages is disabled. Jan 28 00:50:32.860056 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 00:50:32.860062 kernel: loop: module loaded Jan 28 00:50:32.860070 systemd-journald[1388]: Journal started Jan 28 00:50:32.860086 systemd-journald[1388]: Runtime Journal (/run/log/journal/854d62e1593144fab8308847d5f94111) is 8M, max 78.3M, 70.3M free. Jan 28 00:50:32.137231 systemd[1]: Queued start job for default target multi-user.target. Jan 28 00:50:32.143008 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 28 00:50:32.143400 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 28 00:50:32.143690 systemd[1]: systemd-journald.service: Consumed 2.494s CPU time. Jan 28 00:50:32.876679 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Jan 28 00:50:32.888887 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 00:50:32.895878 systemd[1]: verity-setup.service: Deactivated successfully. Jan 28 00:50:32.895912 systemd[1]: Stopped verity-setup.service. Jan 28 00:50:32.911844 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 00:50:32.912521 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 00:50:32.916994 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 00:50:32.922532 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 00:50:32.927486 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 00:50:32.932091 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 00:50:32.936751 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 28 00:50:32.941071 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 00:50:32.946184 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:50:32.951922 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 28 00:50:32.952069 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 00:50:32.957354 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:50:32.957472 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:50:32.962467 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 00:50:32.962608 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 00:50:32.967301 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:50:32.967424 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:50:32.973629 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jan 28 00:50:32.973776 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 00:50:32.978545 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 00:50:32.978672 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:50:32.983573 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 00:50:32.988751 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 00:50:32.997900 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 00:50:33.003525 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 28 00:50:33.016817 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 00:50:33.024615 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 28 00:50:33.036581 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 00:50:33.041302 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 00:50:33.041331 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 00:50:33.046298 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 28 00:50:33.053174 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 00:50:33.058915 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:50:33.060604 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 00:50:33.066950 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 28 00:50:33.072265 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 00:50:33.073207 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 00:50:33.078169 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 00:50:33.080651 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 00:50:33.090711 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 00:50:33.099109 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 00:50:33.106375 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:50:33.112023 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 00:50:33.117785 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 00:50:33.125092 systemd-journald[1388]: Time spent on flushing to /var/log/journal/854d62e1593144fab8308847d5f94111 is 41.946ms for 940 entries. Jan 28 00:50:33.125092 systemd-journald[1388]: System Journal (/var/log/journal/854d62e1593144fab8308847d5f94111) is 11.8M, max 2.6G, 2.6G free. Jan 28 00:50:33.225840 systemd-journald[1388]: Received client request to flush runtime journal. Jan 28 00:50:33.225887 systemd-journald[1388]: /var/log/journal/854d62e1593144fab8308847d5f94111/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jan 28 00:50:33.225911 systemd-journald[1388]: Rotating system journal. Jan 28 00:50:33.225930 kernel: loop0: detected capacity change from 0 to 27936 Jan 28 00:50:33.130273 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jan 28 00:50:33.137917 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 00:50:33.155661 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 28 00:50:33.209518 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:50:33.227389 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 28 00:50:33.236965 systemd-tmpfiles[1438]: ACLs are not supported, ignoring. Jan 28 00:50:33.236975 systemd-tmpfiles[1438]: ACLs are not supported, ignoring. Jan 28 00:50:33.239979 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 00:50:33.247382 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 00:50:33.259659 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 00:50:33.263004 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 28 00:50:33.413873 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 00:50:33.422800 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 00:50:33.441070 systemd-tmpfiles[1456]: ACLs are not supported, ignoring. Jan 28 00:50:33.441089 systemd-tmpfiles[1456]: ACLs are not supported, ignoring. Jan 28 00:50:33.444522 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:50:33.620533 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 00:50:33.718269 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 00:50:33.724727 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:50:33.749523 kernel: loop1: detected capacity change from 0 to 119840 Jan 28 00:50:33.760530 systemd-udevd[1462]: Using default interface naming scheme 'v255'. 
Jan 28 00:50:33.995118 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:50:34.004655 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 00:50:34.045981 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 28 00:50:34.061265 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 00:50:34.162518 kernel: hv_vmbus: registering driver hv_balloon Jan 28 00:50:34.162617 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 00:50:34.162633 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#105 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 28 00:50:34.173441 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 28 00:50:34.180948 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 28 00:50:34.184525 kernel: hv_vmbus: registering driver hyperv_fb Jan 28 00:50:34.203316 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 28 00:50:34.209865 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 28 00:50:34.215563 kernel: Console: switching to colour dummy device 80x25 Jan 28 00:50:34.224608 kernel: Console: switching to colour frame buffer device 128x48 Jan 28 00:50:34.243146 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 28 00:50:34.260925 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:50:34.272056 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 00:50:34.272572 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:50:34.281547 kernel: loop2: detected capacity change from 0 to 100632 Jan 28 00:50:34.288773 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:50:34.296183 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 28 00:50:34.297537 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:50:34.306674 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:50:34.357512 kernel: MACsec IEEE 802.1AE Jan 28 00:50:34.390253 systemd-networkd[1477]: lo: Link UP Jan 28 00:50:34.390567 systemd-networkd[1477]: lo: Gained carrier Jan 28 00:50:34.391611 systemd-networkd[1477]: Enumeration completed Jan 28 00:50:34.391775 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 00:50:34.392035 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:50:34.392088 systemd-networkd[1477]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 00:50:34.399607 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 28 00:50:34.409045 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 00:50:34.466685 kernel: mlx5_core a72c:00:02.0 enP42796s1: Link up Jan 28 00:50:34.479084 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 28 00:50:34.490975 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 00:50:34.495940 kernel: hv_netvsc 7ced8d88-3d12-7ced-8d88-3d127ced8d88 eth0: Data path switched to VF: enP42796s1 Jan 28 00:50:34.496839 systemd-networkd[1477]: enP42796s1: Link UP Jan 28 00:50:34.497025 systemd-networkd[1477]: eth0: Link UP Jan 28 00:50:34.497030 systemd-networkd[1477]: eth0: Gained carrier Jan 28 00:50:34.497050 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 28 00:50:34.501759 systemd-networkd[1477]: enP42796s1: Gained carrier Jan 28 00:50:34.502129 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 28 00:50:34.509553 systemd-networkd[1477]: eth0: DHCPv4 address 10.200.20.13/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 28 00:50:34.565743 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 00:50:34.799388 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:50:34.817514 kernel: loop3: detected capacity change from 0 to 207008 Jan 28 00:50:34.862515 kernel: loop4: detected capacity change from 0 to 27936 Jan 28 00:50:34.879559 kernel: loop5: detected capacity change from 0 to 119840 Jan 28 00:50:34.896518 kernel: loop6: detected capacity change from 0 to 100632 Jan 28 00:50:34.908532 kernel: loop7: detected capacity change from 0 to 207008 Jan 28 00:50:34.918245 (sd-merge)[1611]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 28 00:50:34.918661 (sd-merge)[1611]: Merged extensions into '/usr'. Jan 28 00:50:34.921097 systemd[1]: Reload requested from client PID 1436 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 00:50:34.921113 systemd[1]: Reloading... Jan 28 00:50:34.968523 zram_generator::config[1637]: No configuration found. Jan 28 00:50:35.146919 systemd[1]: Reloading finished in 225 ms. Jan 28 00:50:35.168725 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 00:50:35.178525 systemd[1]: Starting ensure-sysext.service... Jan 28 00:50:35.184625 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 00:50:35.196443 systemd[1]: Reload requested from client PID 1695 ('systemctl') (unit ensure-sysext.service)... Jan 28 00:50:35.196571 systemd[1]: Reloading... 
Jan 28 00:50:35.216020 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 28 00:50:35.216216 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 28 00:50:35.216433 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 28 00:50:35.216594 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 28 00:50:35.217014 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 28 00:50:35.217148 systemd-tmpfiles[1696]: ACLs are not supported, ignoring.
Jan 28 00:50:35.217175 systemd-tmpfiles[1696]: ACLs are not supported, ignoring.
Jan 28 00:50:35.245206 zram_generator::config[1723]: No configuration found.
Jan 28 00:50:35.257297 systemd-tmpfiles[1696]: Detected autofs mount point /boot during canonicalization of boot.
Jan 28 00:50:35.257311 systemd-tmpfiles[1696]: Skipping /boot
Jan 28 00:50:35.262797 systemd-tmpfiles[1696]: Detected autofs mount point /boot during canonicalization of boot.
Jan 28 00:50:35.262808 systemd-tmpfiles[1696]: Skipping /boot
Jan 28 00:50:35.412743 systemd[1]: Reloading finished in 215 ms.
Jan 28 00:50:35.437562 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 00:50:35.458179 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 28 00:50:35.469196 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 28 00:50:35.477714 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 28 00:50:35.485698 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 28 00:50:35.495583 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 28 00:50:35.502808 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 28 00:50:35.507079 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 28 00:50:35.514110 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 28 00:50:35.523147 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 28 00:50:35.529080 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 28 00:50:35.529181 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 28 00:50:35.532121 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 28 00:50:35.534637 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 28 00:50:35.540371 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 28 00:50:35.540603 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 28 00:50:35.548100 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 28 00:50:35.549904 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 28 00:50:35.563006 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 28 00:50:35.571777 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 28 00:50:35.573261 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 28 00:50:35.590701 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 28 00:50:35.597735 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 28 00:50:35.609107 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 28 00:50:35.614844 systemd-resolved[1787]: Positive Trust Anchors:
Jan 28 00:50:35.614856 systemd-resolved[1787]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 28 00:50:35.614876 systemd-resolved[1787]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 28 00:50:35.616563 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 28 00:50:35.616680 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 28 00:50:35.616802 systemd[1]: Reached target time-set.target - System Time Set.
Jan 28 00:50:35.622621 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 28 00:50:35.630035 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 28 00:50:35.630182 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 28 00:50:35.636978 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 28 00:50:35.637119 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 28 00:50:35.642051 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 28 00:50:35.642187 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 28 00:50:35.647889 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 28 00:50:35.648013 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 28 00:50:35.655979 systemd[1]: Finished ensure-sysext.service.
Jan 28 00:50:35.661699 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 28 00:50:35.661771 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 28 00:50:35.705650 systemd-resolved[1787]: Using system hostname 'ci-4459.2.3-n-42917f0d29'.
Jan 28 00:50:35.707204 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 28 00:50:35.712315 systemd[1]: Reached target network.target - Network.
Jan 28 00:50:35.716718 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 28 00:50:35.750223 augenrules[1826]: No rules
Jan 28 00:50:35.751761 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 28 00:50:35.753528 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 28 00:50:36.402659 systemd-networkd[1477]: eth0: Gained IPv6LL
Jan 28 00:50:36.404759 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 28 00:50:36.410366 systemd[1]: Reached target network-online.target - Network is Online.
Jan 28 00:50:36.787611 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 28 00:50:36.793188 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 28 00:50:41.848545 ldconfig[1431]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 28 00:50:41.860691 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 28 00:50:41.866965 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 28 00:50:41.883883 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 28 00:50:41.888919 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 28 00:50:41.893702 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 28 00:50:41.898840 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 28 00:50:41.904140 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 28 00:50:41.908772 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 28 00:50:41.915094 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 28 00:50:41.920538 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 28 00:50:41.920564 systemd[1]: Reached target paths.target - Path Units.
Jan 28 00:50:41.924830 systemd[1]: Reached target timers.target - Timer Units.
Jan 28 00:50:41.946515 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 28 00:50:41.952232 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 28 00:50:41.957860 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 28 00:50:41.963086 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 28 00:50:41.968302 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 28 00:50:41.974339 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 28 00:50:41.978783 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 28 00:50:41.984104 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 28 00:50:41.988649 systemd[1]: Reached target sockets.target - Socket Units.
Jan 28 00:50:41.992424 systemd[1]: Reached target basic.target - Basic System.
Jan 28 00:50:41.996248 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 28 00:50:41.996272 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 28 00:50:41.998584 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 28 00:50:42.010184 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 28 00:50:42.017662 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 28 00:50:42.030647 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 28 00:50:42.035601 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 28 00:50:42.049193 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 28 00:50:42.054315 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 28 00:50:42.057520 jq[1847]: false
Jan 28 00:50:42.058519 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 28 00:50:42.061610 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 28 00:50:42.067582 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 28 00:50:42.068403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 28 00:50:42.074645 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 28 00:50:42.081660 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 28 00:50:42.095383 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 28 00:50:42.100639 KVP[1849]: KVP starting; pid is:1849
Jan 28 00:50:42.103139 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 28 00:50:42.112905 KVP[1849]: KVP LIC Version: 3.1
Jan 28 00:50:42.113528 kernel: hv_utils: KVP IC version 4.0
Jan 28 00:50:42.115085 chronyd[1839]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Jan 28 00:50:42.115576 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 28 00:50:42.129552 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 28 00:50:42.134332 extend-filesystems[1848]: Found /dev/sda6
Jan 28 00:50:42.138263 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 28 00:50:42.139324 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 28 00:50:42.140784 systemd[1]: Starting update-engine.service - Update Engine...
Jan 28 00:50:42.148182 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 28 00:50:42.163297 jq[1874]: true
Jan 28 00:50:42.157710 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 28 00:50:42.165893 chronyd[1839]: Timezone right/UTC failed leap second check, ignoring
Jan 28 00:50:42.166031 chronyd[1839]: Loaded seccomp filter (level 2)
Jan 28 00:50:42.167141 systemd[1]: Started chronyd.service - NTP client/server.
Jan 28 00:50:42.172025 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 28 00:50:42.173714 extend-filesystems[1848]: Found /dev/sda9
Jan 28 00:50:42.174603 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 28 00:50:42.176941 systemd[1]: motdgen.service: Deactivated successfully.
Jan 28 00:50:42.177080 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 28 00:50:42.183039 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 28 00:50:42.187370 extend-filesystems[1848]: Checking size of /dev/sda9
Jan 28 00:50:42.199857 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 28 00:50:42.201574 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 28 00:50:42.226250 systemd-logind[1868]: New seat seat0.
Jan 28 00:50:42.230534 systemd-logind[1868]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 28 00:50:42.230675 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 28 00:50:42.237294 (ntainerd)[1885]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 28 00:50:42.237538 jq[1884]: true
Jan 28 00:50:42.239550 update_engine[1873]: I20260128 00:50:42.239468 1873 main.cc:92] Flatcar Update Engine starting
Jan 28 00:50:42.240534 extend-filesystems[1848]: Old size kept for /dev/sda9
Jan 28 00:50:42.243715 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 28 00:50:42.243904 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 28 00:50:42.270118 tar[1882]: linux-arm64/LICENSE
Jan 28 00:50:42.270118 tar[1882]: linux-arm64/helm
Jan 28 00:50:42.376838 dbus-daemon[1842]: [system] SELinux support is enabled
Jan 28 00:50:42.377719 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 28 00:50:42.382236 update_engine[1873]: I20260128 00:50:42.380271 1873 update_check_scheduler.cc:74] Next update check in 11m51s
Jan 28 00:50:42.388817 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 28 00:50:42.388847 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 28 00:50:42.397840 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 28 00:50:42.389448 dbus-daemon[1842]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 28 00:50:42.397860 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 28 00:50:42.406830 systemd[1]: Started update-engine.service - Update Engine.
Jan 28 00:50:42.426109 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 28 00:50:42.435932 bash[1925]: Updated "/home/core/.ssh/authorized_keys"
Jan 28 00:50:42.437906 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 28 00:50:42.449090 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 28 00:50:42.452129 sshd_keygen[1872]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 28 00:50:42.483129 coreos-metadata[1841]: Jan 28 00:50:42.481 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 28 00:50:42.487185 coreos-metadata[1841]: Jan 28 00:50:42.487 INFO Fetch successful
Jan 28 00:50:42.487185 coreos-metadata[1841]: Jan 28 00:50:42.487 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 28 00:50:42.494502 coreos-metadata[1841]: Jan 28 00:50:42.493 INFO Fetch successful
Jan 28 00:50:42.494502 coreos-metadata[1841]: Jan 28 00:50:42.493 INFO Fetching http://168.63.129.16/machine/d01e358a-546b-4419-bb59-3fd78170b125/26dda0be%2Dcb3c%2D49f7%2D8ee7%2D71faab174cd8.%5Fci%2D4459.2.3%2Dn%2D42917f0d29?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 28 00:50:42.497504 coreos-metadata[1841]: Jan 28 00:50:42.496 INFO Fetch successful
Jan 28 00:50:42.497504 coreos-metadata[1841]: Jan 28 00:50:42.496 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 28 00:50:42.505940 coreos-metadata[1841]: Jan 28 00:50:42.505 INFO Fetch successful
Jan 28 00:50:42.529399 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 28 00:50:42.542220 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 28 00:50:42.549895 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 28 00:50:42.569368 systemd[1]: issuegen.service: Deactivated successfully.
Jan 28 00:50:42.570576 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 28 00:50:42.583475 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 28 00:50:42.592542 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 28 00:50:42.605101 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 28 00:50:42.619628 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 28 00:50:42.627185 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 28 00:50:42.642739 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 28 00:50:42.651702 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 28 00:50:42.658908 systemd[1]: Reached target getty.target - Login Prompts.
Jan 28 00:50:42.792515 tar[1882]: linux-arm64/README.md
Jan 28 00:50:42.801395 locksmithd[1954]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 28 00:50:42.806536 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 28 00:50:42.998832 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 28 00:50:43.117478 containerd[1885]: time="2026-01-28T00:50:43Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 28 00:50:43.119511 containerd[1885]: time="2026-01-28T00:50:43.118309760Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Jan 28 00:50:43.123335 containerd[1885]: time="2026-01-28T00:50:43.123299592Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.736µs"
Jan 28 00:50:43.123335 containerd[1885]: time="2026-01-28T00:50:43.123328000Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 28 00:50:43.123335 containerd[1885]: time="2026-01-28T00:50:43.123343272Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 28 00:50:43.123528 containerd[1885]: time="2026-01-28T00:50:43.123490504Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 28 00:50:43.123528 containerd[1885]: time="2026-01-28T00:50:43.123524992Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 28 00:50:43.123555 containerd[1885]: time="2026-01-28T00:50:43.123544272Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 28 00:50:43.123601 containerd[1885]: time="2026-01-28T00:50:43.123586840Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 28 00:50:43.123601 containerd[1885]: time="2026-01-28T00:50:43.123598440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 28 00:50:43.123782 containerd[1885]: time="2026-01-28T00:50:43.123763680Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 28 00:50:43.123795 containerd[1885]: time="2026-01-28T00:50:43.123780600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 28 00:50:43.123795 containerd[1885]: time="2026-01-28T00:50:43.123788424Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 28 00:50:43.123795 containerd[1885]: time="2026-01-28T00:50:43.123793896Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 28 00:50:43.123866 containerd[1885]: time="2026-01-28T00:50:43.123853912Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 28 00:50:43.124028 containerd[1885]: time="2026-01-28T00:50:43.124011768Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 28 00:50:43.124052 containerd[1885]: time="2026-01-28T00:50:43.124038008Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 28 00:50:43.124052 containerd[1885]: time="2026-01-28T00:50:43.124049512Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 28 00:50:43.124082 containerd[1885]: time="2026-01-28T00:50:43.124075696Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 28 00:50:43.124239 containerd[1885]: time="2026-01-28T00:50:43.124221976Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 28 00:50:43.124294 containerd[1885]: time="2026-01-28T00:50:43.124278336Z" level=info msg="metadata content store policy set" policy=shared
Jan 28 00:50:43.134964 containerd[1885]: time="2026-01-28T00:50:43.134928880Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 28 00:50:43.135035 containerd[1885]: time="2026-01-28T00:50:43.134984640Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 28 00:50:43.135035 containerd[1885]: time="2026-01-28T00:50:43.134994560Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 28 00:50:43.135035 containerd[1885]: time="2026-01-28T00:50:43.135002672Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 28 00:50:43.135035 containerd[1885]: time="2026-01-28T00:50:43.135017776Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 28 00:50:43.135035 containerd[1885]: time="2026-01-28T00:50:43.135027752Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 28 00:50:43.135152 containerd[1885]: time="2026-01-28T00:50:43.135038376Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 28 00:50:43.135152 containerd[1885]: time="2026-01-28T00:50:43.135047704Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 28 00:50:43.135152 containerd[1885]: time="2026-01-28T00:50:43.135054408Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 28 00:50:43.135152 containerd[1885]: time="2026-01-28T00:50:43.135060512Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 28 00:50:43.135152 containerd[1885]: time="2026-01-28T00:50:43.135067168Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 28 00:50:43.135152 containerd[1885]: time="2026-01-28T00:50:43.135077464Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 28 00:50:43.135221 containerd[1885]: time="2026-01-28T00:50:43.135194920Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 28 00:50:43.135221 containerd[1885]: time="2026-01-28T00:50:43.135209944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 28 00:50:43.135221 containerd[1885]: time="2026-01-28T00:50:43.135218800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 28 00:50:43.135260 containerd[1885]: time="2026-01-28T00:50:43.135226280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 28 00:50:43.135260 containerd[1885]: time="2026-01-28T00:50:43.135234888Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 28 00:50:43.135260 containerd[1885]: time="2026-01-28T00:50:43.135242496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 28 00:50:43.135260 containerd[1885]: time="2026-01-28T00:50:43.135249376Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 28 00:50:43.135260 containerd[1885]: time="2026-01-28T00:50:43.135255944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 28 00:50:43.135318 containerd[1885]: time="2026-01-28T00:50:43.135262640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 28 00:50:43.135318 containerd[1885]: time="2026-01-28T00:50:43.135269392Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 28 00:50:43.135318 containerd[1885]: time="2026-01-28T00:50:43.135275872Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 28 00:50:43.135354 containerd[1885]: time="2026-01-28T00:50:43.135322400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 28 00:50:43.135354 containerd[1885]: time="2026-01-28T00:50:43.135332648Z" level=info msg="Start snapshots syncer"
Jan 28 00:50:43.135354 containerd[1885]: time="2026-01-28T00:50:43.135350264Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 28 00:50:43.135588 containerd[1885]: time="2026-01-28T00:50:43.135554368Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 28 00:50:43.135740 containerd[1885]: time="2026-01-28T00:50:43.135599352Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 28 00:50:43.135740 containerd[1885]: time="2026-01-28T00:50:43.135636368Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 28 00:50:43.135740 containerd[1885]: time="2026-01-28T00:50:43.135733536Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 28 00:50:43.135790 containerd[1885]: time="2026-01-28T00:50:43.135747944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 28 00:50:43.135790 containerd[1885]: time="2026-01-28T00:50:43.135755192Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 28 00:50:43.135790 containerd[1885]: time="2026-01-28T00:50:43.135763672Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 28 00:50:43.135790 containerd[1885]: time="2026-01-28T00:50:43.135771280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 28 00:50:43.135790 containerd[1885]: time="2026-01-28T00:50:43.135777760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 28 00:50:43.135790 containerd[1885]: time="2026-01-28T00:50:43.135784368Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 28 00:50:43.135877 containerd[1885]: time="2026-01-28T00:50:43.135801888Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 28 00:50:43.135877 containerd[1885]: time="2026-01-28T00:50:43.135809944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 28 00:50:43.135877 containerd[1885]: time="2026-01-28T00:50:43.135816816Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 28 00:50:43.135877 containerd[1885]: time="2026-01-28T00:50:43.135848416Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 28 00:50:43.135877 containerd[1885]: time="2026-01-28T00:50:43.135859080Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 28 00:50:43.135877 containerd[1885]: time="2026-01-28T00:50:43.135864896Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 28 00:50:43.135877 containerd[1885]: time="2026-01-28T00:50:43.135870352Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 28 00:50:43.135877 containerd[1885]: time="2026-01-28T00:50:43.135874808Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 28 00:50:43.136043 containerd[1885]: time="2026-01-28T00:50:43.135881280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 28 00:50:43.136043 containerd[1885]: time="2026-01-28T00:50:43.135889240Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 28 00:50:43.136043 containerd[1885]: time="2026-01-28T00:50:43.135901048Z" level=info msg="runtime interface created"
Jan 28 00:50:43.136043 containerd[1885]: time="2026-01-28T00:50:43.135904872Z" level=info msg="created NRI interface"
Jan 28 00:50:43.136043 containerd[1885]: time="2026-01-28T00:50:43.135910016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 28 00:50:43.136043 containerd[1885]: time="2026-01-28T00:50:43.135918344Z" level=info msg="Connect containerd service"
Jan 28 00:50:43.136043 containerd[1885]: time="2026-01-28T00:50:43.135932920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 28 00:50:43.136557 containerd[1885]: time="2026-01-28T00:50:43.136531368Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 28 00:50:43.170872 (kubelet)[2028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 28 00:50:43.502522 kubelet[2028]: E0128 00:50:43.502002 2028 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 28 00:50:43.505680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 28 00:50:43.505788 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 28 00:50:43.506588 systemd[1]: kubelet.service: Consumed 541ms CPU time, 253.9M memory peak.
Jan 28 00:50:43.582721 containerd[1885]: time="2026-01-28T00:50:43.582657360Z" level=info msg="Start subscribing containerd event"
Jan 28 00:50:43.583024 containerd[1885]: time="2026-01-28T00:50:43.582919848Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 28 00:50:43.583024 containerd[1885]: time="2026-01-28T00:50:43.582973960Z" level=info msg=serving...
address=/run/containerd/containerd.sock Jan 28 00:50:43.583161 containerd[1885]: time="2026-01-28T00:50:43.583008872Z" level=info msg="Start recovering state" Jan 28 00:50:43.583226 containerd[1885]: time="2026-01-28T00:50:43.583214424Z" level=info msg="Start event monitor" Jan 28 00:50:43.583346 containerd[1885]: time="2026-01-28T00:50:43.583259184Z" level=info msg="Start cni network conf syncer for default" Jan 28 00:50:43.583346 containerd[1885]: time="2026-01-28T00:50:43.583280264Z" level=info msg="Start streaming server" Jan 28 00:50:43.583346 containerd[1885]: time="2026-01-28T00:50:43.583290408Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 28 00:50:43.583346 containerd[1885]: time="2026-01-28T00:50:43.583302344Z" level=info msg="runtime interface starting up..." Jan 28 00:50:43.583346 containerd[1885]: time="2026-01-28T00:50:43.583307080Z" level=info msg="starting plugins..." Jan 28 00:50:43.583346 containerd[1885]: time="2026-01-28T00:50:43.583321240Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 28 00:50:43.583960 containerd[1885]: time="2026-01-28T00:50:43.583869560Z" level=info msg="containerd successfully booted in 0.466726s" Jan 28 00:50:43.583982 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 00:50:43.591487 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 00:50:43.600571 systemd[1]: Startup finished in 1.640s (kernel) + 12.162s (initrd) + 14.994s (userspace) = 28.797s. 
Jan 28 00:50:44.913532 waagent[2007]: 2026-01-28T00:50:44.913354Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
Jan 28 00:50:44.921703 waagent[2007]: 2026-01-28T00:50:44.918320Z INFO Daemon Daemon OS: flatcar 4459.2.3
Jan 28 00:50:44.921910 waagent[2007]: 2026-01-28T00:50:44.921872Z INFO Daemon Daemon Python: 3.11.13
Jan 28 00:50:44.925558 waagent[2007]: 2026-01-28T00:50:44.925512Z INFO Daemon Daemon Run daemon
Jan 28 00:50:44.929335 waagent[2007]: 2026-01-28T00:50:44.929290Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.3'
Jan 28 00:50:44.936377 waagent[2007]: 2026-01-28T00:50:44.936343Z INFO Daemon Daemon Using waagent for provisioning
Jan 28 00:50:44.940462 waagent[2007]: 2026-01-28T00:50:44.940425Z INFO Daemon Daemon Activate resource disk
Jan 28 00:50:44.944646 waagent[2007]: 2026-01-28T00:50:44.944613Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jan 28 00:50:44.953012 waagent[2007]: 2026-01-28T00:50:44.952971Z INFO Daemon Daemon Found device: None
Jan 28 00:50:44.956715 waagent[2007]: 2026-01-28T00:50:44.956674Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jan 28 00:50:44.963871 waagent[2007]: 2026-01-28T00:50:44.963831Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jan 28 00:50:44.972726 waagent[2007]: 2026-01-28T00:50:44.972685Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 28 00:50:44.977481 waagent[2007]: 2026-01-28T00:50:44.977445Z INFO Daemon Daemon Running default provisioning handler
Jan 28 00:50:44.987188 waagent[2007]: 2026-01-28T00:50:44.987141Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jan 28 00:50:44.997975 waagent[2007]: 2026-01-28T00:50:44.997933Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jan 28 00:50:45.005735 waagent[2007]: 2026-01-28T00:50:45.005703Z INFO Daemon Daemon cloud-init is enabled: False
Jan 28 00:50:45.009670 waagent[2007]: 2026-01-28T00:50:45.009646Z INFO Daemon Daemon Copying ovf-env.xml
Jan 28 00:50:45.187958 waagent[2007]: 2026-01-28T00:50:45.187817Z INFO Daemon Daemon Successfully mounted dvd
Jan 28 00:50:45.216870 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jan 28 00:50:45.219413 waagent[2007]: 2026-01-28T00:50:45.219349Z INFO Daemon Daemon Detect protocol endpoint
Jan 28 00:50:45.223132 waagent[2007]: 2026-01-28T00:50:45.223093Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 28 00:50:45.227419 waagent[2007]: 2026-01-28T00:50:45.227387Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jan 28 00:50:45.232580 waagent[2007]: 2026-01-28T00:50:45.232550Z INFO Daemon Daemon Test for route to 168.63.129.16
Jan 28 00:50:45.236739 waagent[2007]: 2026-01-28T00:50:45.236708Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jan 28 00:50:45.240583 waagent[2007]: 2026-01-28T00:50:45.240553Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jan 28 00:50:45.288591 waagent[2007]: 2026-01-28T00:50:45.288537Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jan 28 00:50:45.293828 waagent[2007]: 2026-01-28T00:50:45.293804Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jan 28 00:50:45.298065 waagent[2007]: 2026-01-28T00:50:45.298033Z INFO Daemon Daemon Server preferred version:2015-04-05
Jan 28 00:50:45.463193 waagent[2007]: 2026-01-28T00:50:45.458485Z INFO Daemon Daemon Initializing goal state during protocol detection
Jan 28 00:50:45.463530 waagent[2007]: 2026-01-28T00:50:45.463469Z INFO Daemon Daemon Forcing an update of the goal state.
Jan 28 00:50:45.479522 waagent[2007]: 2026-01-28T00:50:45.479449Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 28 00:50:45.497079 waagent[2007]: 2026-01-28T00:50:45.497037Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177
Jan 28 00:50:45.501530 waagent[2007]: 2026-01-28T00:50:45.501471Z INFO Daemon
Jan 28 00:50:45.503755 waagent[2007]: 2026-01-28T00:50:45.503720Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 5c10640c-5b1b-4ad8-baf4-48385666a5b3 eTag: 17124063775625957536 source: Fabric]
Jan 28 00:50:45.512644 waagent[2007]: 2026-01-28T00:50:45.512603Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jan 28 00:50:45.517705 waagent[2007]: 2026-01-28T00:50:45.517672Z INFO Daemon
Jan 28 00:50:45.519898 waagent[2007]: 2026-01-28T00:50:45.519864Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jan 28 00:50:45.528953 waagent[2007]: 2026-01-28T00:50:45.528923Z INFO Daemon Daemon Downloading artifacts profile blob
Jan 28 00:50:45.647571 waagent[2007]: 2026-01-28T00:50:45.647511Z INFO Daemon Downloaded certificate {'thumbprint': 'AC89BC50D8A2FBEFA10B51C326619E3AD9C1ED1C', 'hasPrivateKey': True}
Jan 28 00:50:45.655466 waagent[2007]: 2026-01-28T00:50:45.655416Z INFO Daemon Fetch goal state completed
Jan 28 00:50:45.665508 waagent[2007]: 2026-01-28T00:50:45.665451Z INFO Daemon Daemon Starting provisioning
Jan 28 00:50:45.669797 waagent[2007]: 2026-01-28T00:50:45.669756Z INFO Daemon Daemon Handle ovf-env.xml.
Jan 28 00:50:45.673370 waagent[2007]: 2026-01-28T00:50:45.673335Z INFO Daemon Daemon Set hostname [ci-4459.2.3-n-42917f0d29]
Jan 28 00:50:45.680514 waagent[2007]: 2026-01-28T00:50:45.679553Z INFO Daemon Daemon Publish hostname [ci-4459.2.3-n-42917f0d29]
Jan 28 00:50:45.684460 waagent[2007]: 2026-01-28T00:50:45.684420Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jan 28 00:50:45.689194 waagent[2007]: 2026-01-28T00:50:45.689159Z INFO Daemon Daemon Primary interface is [eth0]
Jan 28 00:50:45.699061 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 00:50:45.699068 systemd-networkd[1477]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 28 00:50:45.699101 systemd-networkd[1477]: eth0: DHCP lease lost
Jan 28 00:50:45.700212 waagent[2007]: 2026-01-28T00:50:45.700165Z INFO Daemon Daemon Create user account if not exists
Jan 28 00:50:45.704531 waagent[2007]: 2026-01-28T00:50:45.704468Z INFO Daemon Daemon User core already exists, skip useradd
Jan 28 00:50:45.708812 waagent[2007]: 2026-01-28T00:50:45.708746Z INFO Daemon Daemon Configure sudoer
Jan 28 00:50:45.716374 waagent[2007]: 2026-01-28T00:50:45.716284Z INFO Daemon Daemon Configure sshd
Jan 28 00:50:45.722104 waagent[2007]: 2026-01-28T00:50:45.722059Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jan 28 00:50:45.731259 waagent[2007]: 2026-01-28T00:50:45.731223Z INFO Daemon Daemon Deploy ssh public key.
Jan 28 00:50:45.731548 systemd-networkd[1477]: eth0: DHCPv4 address 10.200.20.13/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 28 00:50:46.365234 login[2009]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Jan 28 00:50:46.366160 login[2008]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:50:46.375403 systemd-logind[1868]: New session 1 of user core.
Jan 28 00:50:46.376201 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 28 00:50:46.377112 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 28 00:50:46.413528 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 28 00:50:46.415280 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 28 00:50:46.429887 (systemd)[2075]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 28 00:50:46.432011 systemd-logind[1868]: New session c1 of user core.
Jan 28 00:50:46.566889 systemd[2075]: Queued start job for default target default.target.
Jan 28 00:50:46.576322 systemd[2075]: Created slice app.slice - User Application Slice.
Jan 28 00:50:46.576430 systemd[2075]: Reached target paths.target - Paths.
Jan 28 00:50:46.576473 systemd[2075]: Reached target timers.target - Timers.
Jan 28 00:50:46.577541 systemd[2075]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 28 00:50:46.586151 systemd[2075]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 28 00:50:46.586212 systemd[2075]: Reached target sockets.target - Sockets.
Jan 28 00:50:46.586248 systemd[2075]: Reached target basic.target - Basic System.
Jan 28 00:50:46.586270 systemd[2075]: Reached target default.target - Main User Target.
Jan 28 00:50:46.586291 systemd[2075]: Startup finished in 149ms.
Jan 28 00:50:46.586612 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 28 00:50:46.589046 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 28 00:50:46.885242 waagent[2007]: 2026-01-28T00:50:46.885192Z INFO Daemon Daemon Provisioning complete
Jan 28 00:50:46.898750 waagent[2007]: 2026-01-28T00:50:46.898711Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jan 28 00:50:46.903455 waagent[2007]: 2026-01-28T00:50:46.903419Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jan 28 00:50:46.910556 waagent[2007]: 2026-01-28T00:50:46.910525Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
Jan 28 00:50:47.011535 waagent[2096]: 2026-01-28T00:50:47.011215Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
Jan 28 00:50:47.011535 waagent[2096]: 2026-01-28T00:50:47.011347Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.3
Jan 28 00:50:47.011535 waagent[2096]: 2026-01-28T00:50:47.011385Z INFO ExtHandler ExtHandler Python: 3.11.13
Jan 28 00:50:47.011535 waagent[2096]: 2026-01-28T00:50:47.011421Z INFO ExtHandler ExtHandler CPU Arch: aarch64
Jan 28 00:50:47.099526 waagent[2096]: 2026-01-28T00:50:47.099073Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.3; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0;
Jan 28 00:50:47.099526 waagent[2096]: 2026-01-28T00:50:47.099289Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 28 00:50:47.099526 waagent[2096]: 2026-01-28T00:50:47.099334Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 28 00:50:47.106339 waagent[2096]: 2026-01-28T00:50:47.106287Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 28 00:50:47.111670 waagent[2096]: 2026-01-28T00:50:47.111636Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177
Jan 28 00:50:47.112138 waagent[2096]: 2026-01-28T00:50:47.112103Z INFO ExtHandler
Jan 28 00:50:47.112265 waagent[2096]: 2026-01-28T00:50:47.112241Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 4f022c00-f783-48b9-aa85-63bcc8e927fe eTag: 17124063775625957536 source: Fabric]
Jan 28 00:50:47.112609 waagent[2096]: 2026-01-28T00:50:47.112578Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jan 28 00:50:47.113131 waagent[2096]: 2026-01-28T00:50:47.113098Z INFO ExtHandler
Jan 28 00:50:47.113243 waagent[2096]: 2026-01-28T00:50:47.113221Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Jan 28 00:50:47.116802 waagent[2096]: 2026-01-28T00:50:47.116777Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jan 28 00:50:47.169528 waagent[2096]: 2026-01-28T00:50:47.169219Z INFO ExtHandler Downloaded certificate {'thumbprint': 'AC89BC50D8A2FBEFA10B51C326619E3AD9C1ED1C', 'hasPrivateKey': True}
Jan 28 00:50:47.169751 waagent[2096]: 2026-01-28T00:50:47.169712Z INFO ExtHandler Fetch goal state completed
Jan 28 00:50:47.182380 waagent[2096]: 2026-01-28T00:50:47.182323Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025)
Jan 28 00:50:47.185890 waagent[2096]: 2026-01-28T00:50:47.185839Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2096
Jan 28 00:50:47.186000 waagent[2096]: 2026-01-28T00:50:47.185974Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Jan 28 00:50:47.186260 waagent[2096]: 2026-01-28T00:50:47.186229Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
Jan 28 00:50:47.187409 waagent[2096]: 2026-01-28T00:50:47.187372Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.3', '', 'Flatcar Container Linux by Kinvolk']
Jan 28 00:50:47.187780 waagent[2096]: 2026-01-28T00:50:47.187744Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.3', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Jan 28 00:50:47.187907 waagent[2096]: 2026-01-28T00:50:47.187882Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Jan 28 00:50:47.188329 waagent[2096]: 2026-01-28T00:50:47.188299Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jan 28 00:50:47.231520 waagent[2096]: 2026-01-28T00:50:47.231297Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jan 28 00:50:47.231617 waagent[2096]: 2026-01-28T00:50:47.231526Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jan 28 00:50:47.236080 waagent[2096]: 2026-01-28T00:50:47.236054Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jan 28 00:50:47.240653 systemd[1]: Reload requested from client PID 2111 ('systemctl') (unit waagent.service)...
Jan 28 00:50:47.240738 systemd[1]: Reloading...
Jan 28 00:50:47.315574 zram_generator::config[2153]: No configuration found.
Jan 28 00:50:47.366352 login[2009]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:50:47.463773 systemd[1]: Reloading finished in 222 ms.
Jan 28 00:50:47.483023 waagent[2096]: 2026-01-28T00:50:47.482344Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Jan 28 00:50:47.483023 waagent[2096]: 2026-01-28T00:50:47.482488Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Jan 28 00:50:47.488048 systemd-logind[1868]: New session 2 of user core.
Jan 28 00:50:47.498624 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 28 00:50:48.362272 waagent[2096]: 2026-01-28T00:50:48.362192Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jan 28 00:50:48.362578 waagent[2096]: 2026-01-28T00:50:48.362536Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Jan 28 00:50:48.363224 waagent[2096]: 2026-01-28T00:50:48.363178Z INFO ExtHandler ExtHandler Starting env monitor service.
Jan 28 00:50:48.363516 waagent[2096]: 2026-01-28T00:50:48.363469Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jan 28 00:50:48.364305 waagent[2096]: 2026-01-28T00:50:48.363712Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 28 00:50:48.364305 waagent[2096]: 2026-01-28T00:50:48.363785Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 28 00:50:48.364305 waagent[2096]: 2026-01-28T00:50:48.363945Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jan 28 00:50:48.364305 waagent[2096]: 2026-01-28T00:50:48.364076Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jan 28 00:50:48.364305 waagent[2096]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jan 28 00:50:48.364305 waagent[2096]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Jan 28 00:50:48.364305 waagent[2096]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jan 28 00:50:48.364305 waagent[2096]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jan 28 00:50:48.364305 waagent[2096]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 28 00:50:48.364305 waagent[2096]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 28 00:50:48.364594 waagent[2096]: 2026-01-28T00:50:48.364555Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jan 28 00:50:48.364642 waagent[2096]: 2026-01-28T00:50:48.364606Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jan 28 00:50:48.364891 waagent[2096]: 2026-01-28T00:50:48.364863Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 28 00:50:48.365000 waagent[2096]: 2026-01-28T00:50:48.364975Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 28 00:50:48.365168 waagent[2096]: 2026-01-28T00:50:48.365140Z INFO EnvHandler ExtHandler Configure routes
Jan 28 00:50:48.365274 waagent[2096]: 2026-01-28T00:50:48.365253Z INFO EnvHandler ExtHandler Gateway:None
Jan 28 00:50:48.365359 waagent[2096]: 2026-01-28T00:50:48.365342Z INFO EnvHandler ExtHandler Routes:None
Jan 28 00:50:48.365719 waagent[2096]: 2026-01-28T00:50:48.365681Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jan 28 00:50:48.365887 waagent[2096]: 2026-01-28T00:50:48.365814Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jan 28 00:50:48.366073 waagent[2096]: 2026-01-28T00:50:48.366045Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jan 28 00:50:48.373339 waagent[2096]: 2026-01-28T00:50:48.372074Z INFO ExtHandler ExtHandler
Jan 28 00:50:48.373339 waagent[2096]: 2026-01-28T00:50:48.372136Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 1990d587-f991-4f0b-a5a9-73b89a45eb35 correlation eae77f38-fdbf-45c1-b234-814918339a6d created: 2026-01-28T00:49:41.618904Z]
Jan 28 00:50:48.373339 waagent[2096]: 2026-01-28T00:50:48.372395Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jan 28 00:50:48.373339 waagent[2096]: 2026-01-28T00:50:48.372810Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
Jan 28 00:50:48.393909 waagent[2096]: 2026-01-28T00:50:48.393868Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Jan 28 00:50:48.393909 waagent[2096]: Try `iptables -h' or 'iptables --help' for more information.)
Jan 28 00:50:48.394381 waagent[2096]: 2026-01-28T00:50:48.394350Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 5454F8B9-5B9E-491E-806C-FB59812F6099;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Jan 28 00:50:48.435385 waagent[2096]: 2026-01-28T00:50:48.435321Z INFO MonitorHandler ExtHandler Network interfaces:
Jan 28 00:50:48.435385 waagent[2096]: Executing ['ip', '-a', '-o', 'link']:
Jan 28 00:50:48.435385 waagent[2096]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jan 28 00:50:48.435385 waagent[2096]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:88:3d:12 brd ff:ff:ff:ff:ff:ff
Jan 28 00:50:48.435385 waagent[2096]: 3: enP42796s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:88:3d:12 brd ff:ff:ff:ff:ff:ff\ altname enP42796p0s2
Jan 28 00:50:48.435385 waagent[2096]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jan 28 00:50:48.435385 waagent[2096]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jan 28 00:50:48.435385 waagent[2096]: 2: eth0 inet 10.200.20.13/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Jan 28 00:50:48.435385 waagent[2096]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jan 28 00:50:48.435385 waagent[2096]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jan 28 00:50:48.435385 waagent[2096]: 2: eth0 inet6 fe80::7eed:8dff:fe88:3d12/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 28 00:50:48.467693 waagent[2096]: 2026-01-28T00:50:48.467642Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Jan 28 00:50:48.467693 waagent[2096]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 28 00:50:48.467693 waagent[2096]: pkts bytes target prot opt in out source destination
Jan 28 00:50:48.467693 waagent[2096]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 28 00:50:48.467693 waagent[2096]: pkts bytes target prot opt in out source destination
Jan 28 00:50:48.467693 waagent[2096]: Chain OUTPUT (policy ACCEPT 4 packets, 406 bytes)
Jan 28 00:50:48.467693 waagent[2096]: pkts bytes target prot opt in out source destination
Jan 28 00:50:48.467693 waagent[2096]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 28 00:50:48.467693 waagent[2096]: 3 534 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 28 00:50:48.467693 waagent[2096]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 28 00:50:48.470699 waagent[2096]: 2026-01-28T00:50:48.470655Z INFO EnvHandler ExtHandler Current Firewall rules:
Jan 28 00:50:48.470699 waagent[2096]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 28 00:50:48.470699 waagent[2096]: pkts bytes target prot opt in out source destination
Jan 28 00:50:48.470699 waagent[2096]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 28 00:50:48.470699 waagent[2096]: pkts bytes target prot opt in out source destination
Jan 28 00:50:48.470699 waagent[2096]: Chain OUTPUT (policy ACCEPT 4 packets, 406 bytes)
Jan 28 00:50:48.470699 waagent[2096]: pkts bytes target prot opt in out source destination
Jan 28 00:50:48.470699 waagent[2096]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 28 00:50:48.470699 waagent[2096]: 8 1002 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 28 00:50:48.470699 waagent[2096]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 28 00:50:48.471605 waagent[2096]: 2026-01-28T00:50:48.471576Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Jan 28 00:50:53.648388 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 28 00:50:53.649744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:50:53.754568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 28 00:50:53.757674 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 28 00:50:53.883169 kubelet[2254]: E0128 00:50:53.883116 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 28 00:50:53.885908 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 28 00:50:53.886024 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 28 00:50:53.886584 systemd[1]: kubelet.service: Consumed 116ms CPU time, 106.4M memory peak.
Jan 28 00:50:57.778383 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 28 00:50:57.779861 systemd[1]: Started sshd@0-10.200.20.13:22-10.200.16.10:46468.service - OpenSSH per-connection server daemon (10.200.16.10:46468).
Jan 28 00:50:58.410664 sshd[2261]: Accepted publickey for core from 10.200.16.10 port 46468 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:50:58.412184 sshd-session[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:50:58.415812 systemd-logind[1868]: New session 3 of user core.
Jan 28 00:50:58.419617 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 28 00:50:58.850150 systemd[1]: Started sshd@1-10.200.20.13:22-10.200.16.10:46480.service - OpenSSH per-connection server daemon (10.200.16.10:46480).
Jan 28 00:50:59.347916 sshd[2267]: Accepted publickey for core from 10.200.16.10 port 46480 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:50:59.349078 sshd-session[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:50:59.352740 systemd-logind[1868]: New session 4 of user core.
Jan 28 00:50:59.359799 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 28 00:50:59.699299 sshd[2270]: Connection closed by 10.200.16.10 port 46480
Jan 28 00:50:59.699901 sshd-session[2267]: pam_unix(sshd:session): session closed for user core
Jan 28 00:50:59.703637 systemd[1]: sshd@1-10.200.20.13:22-10.200.16.10:46480.service: Deactivated successfully.
Jan 28 00:50:59.705046 systemd[1]: session-4.scope: Deactivated successfully.
Jan 28 00:50:59.705633 systemd-logind[1868]: Session 4 logged out. Waiting for processes to exit.
Jan 28 00:50:59.706877 systemd-logind[1868]: Removed session 4.
Jan 28 00:50:59.788043 systemd[1]: Started sshd@2-10.200.20.13:22-10.200.16.10:51208.service - OpenSSH per-connection server daemon (10.200.16.10:51208).
Jan 28 00:51:00.286784 sshd[2276]: Accepted publickey for core from 10.200.16.10 port 51208 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:51:00.287911 sshd-session[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:51:00.291573 systemd-logind[1868]: New session 5 of user core.
Jan 28 00:51:00.299714 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 28 00:51:00.630541 sshd[2279]: Connection closed by 10.200.16.10 port 51208
Jan 28 00:51:00.631173 sshd-session[2276]: pam_unix(sshd:session): session closed for user core
Jan 28 00:51:00.634460 systemd[1]: sshd@2-10.200.20.13:22-10.200.16.10:51208.service: Deactivated successfully.
Jan 28 00:51:00.635958 systemd[1]: session-5.scope: Deactivated successfully.
Jan 28 00:51:00.636593 systemd-logind[1868]: Session 5 logged out. Waiting for processes to exit.
Jan 28 00:51:00.637946 systemd-logind[1868]: Removed session 5.
Jan 28 00:51:00.718097 systemd[1]: Started sshd@3-10.200.20.13:22-10.200.16.10:51218.service - OpenSSH per-connection server daemon (10.200.16.10:51218).
Jan 28 00:51:01.212269 sshd[2285]: Accepted publickey for core from 10.200.16.10 port 51218 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:51:01.213391 sshd-session[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:51:01.217031 systemd-logind[1868]: New session 6 of user core.
Jan 28 00:51:01.224624 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 28 00:51:01.562439 sshd[2288]: Connection closed by 10.200.16.10 port 51218
Jan 28 00:51:01.562951 sshd-session[2285]: pam_unix(sshd:session): session closed for user core
Jan 28 00:51:01.566302 systemd[1]: sshd@3-10.200.20.13:22-10.200.16.10:51218.service: Deactivated successfully.
Jan 28 00:51:01.567702 systemd[1]: session-6.scope: Deactivated successfully.
Jan 28 00:51:01.568808 systemd-logind[1868]: Session 6 logged out. Waiting for processes to exit.
Jan 28 00:51:01.569876 systemd-logind[1868]: Removed session 6.
Jan 28 00:51:01.658245 systemd[1]: Started sshd@4-10.200.20.13:22-10.200.16.10:51228.service - OpenSSH per-connection server daemon (10.200.16.10:51228).
Jan 28 00:51:02.151847 sshd[2294]: Accepted publickey for core from 10.200.16.10 port 51228 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:51:02.153893 sshd-session[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:51:02.157418 systemd-logind[1868]: New session 7 of user core.
Jan 28 00:51:02.164648 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 28 00:51:02.541571 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 00:51:02.541797 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:51:02.573103 sudo[2298]: pam_unix(sudo:session): session closed for user root Jan 28 00:51:02.644775 sshd[2297]: Connection closed by 10.200.16.10 port 51228 Jan 28 00:51:02.644649 sshd-session[2294]: pam_unix(sshd:session): session closed for user core Jan 28 00:51:02.649263 systemd-logind[1868]: Session 7 logged out. Waiting for processes to exit. Jan 28 00:51:02.649461 systemd[1]: sshd@4-10.200.20.13:22-10.200.16.10:51228.service: Deactivated successfully. Jan 28 00:51:02.652731 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 00:51:02.653961 systemd-logind[1868]: Removed session 7. Jan 28 00:51:02.734323 systemd[1]: Started sshd@5-10.200.20.13:22-10.200.16.10:51238.service - OpenSSH per-connection server daemon (10.200.16.10:51238). Jan 28 00:51:03.227532 sshd[2304]: Accepted publickey for core from 10.200.16.10 port 51238 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:51:03.228400 sshd-session[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:51:03.231984 systemd-logind[1868]: New session 8 of user core. Jan 28 00:51:03.242822 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 28 00:51:03.500739 sudo[2309]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 00:51:03.500962 sudo[2309]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:51:03.506916 sudo[2309]: pam_unix(sudo:session): session closed for user root Jan 28 00:51:03.510784 sudo[2308]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 28 00:51:03.510986 sudo[2308]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:51:03.518646 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 00:51:03.544660 augenrules[2331]: No rules Jan 28 00:51:03.545833 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 00:51:03.546585 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 00:51:03.548680 sudo[2308]: pam_unix(sudo:session): session closed for user root Jan 28 00:51:03.625160 sshd[2307]: Connection closed by 10.200.16.10 port 51238 Jan 28 00:51:03.625741 sshd-session[2304]: pam_unix(sshd:session): session closed for user core Jan 28 00:51:03.629620 systemd-logind[1868]: Session 8 logged out. Waiting for processes to exit. Jan 28 00:51:03.630003 systemd[1]: sshd@5-10.200.20.13:22-10.200.16.10:51238.service: Deactivated successfully. Jan 28 00:51:03.631349 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 00:51:03.632870 systemd-logind[1868]: Removed session 8. Jan 28 00:51:03.719072 systemd[1]: Started sshd@6-10.200.20.13:22-10.200.16.10:51246.service - OpenSSH per-connection server daemon (10.200.16.10:51246). Jan 28 00:51:03.898383 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 00:51:03.901671 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:51:04.034626 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 00:51:04.040813 (kubelet)[2351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:51:04.114330 kubelet[2351]: E0128 00:51:04.114253 2351 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:51:04.116406 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:51:04.116539 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:51:04.116984 systemd[1]: kubelet.service: Consumed 108ms CPU time, 107.2M memory peak. Jan 28 00:51:04.222058 sshd[2340]: Accepted publickey for core from 10.200.16.10 port 51246 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:51:04.223050 sshd-session[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:51:04.226713 systemd-logind[1868]: New session 9 of user core. Jan 28 00:51:04.236812 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 28 00:51:04.497786 sudo[2360]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 00:51:04.498004 sudo[2360]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:51:05.959107 chronyd[1839]: Selected source PHC0 Jan 28 00:51:06.049174 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 28 00:51:06.058898 (dockerd)[2378]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 00:51:07.201519 dockerd[2378]: time="2026-01-28T00:51:07.201126816Z" level=info msg="Starting up" Jan 28 00:51:07.202601 dockerd[2378]: time="2026-01-28T00:51:07.202579336Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 28 00:51:07.211232 dockerd[2378]: time="2026-01-28T00:51:07.211203766Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 28 00:51:07.291109 dockerd[2378]: time="2026-01-28T00:51:07.291060565Z" level=info msg="Loading containers: start." Jan 28 00:51:07.303671 kernel: Initializing XFRM netlink socket Jan 28 00:51:07.646322 systemd-networkd[1477]: docker0: Link UP Jan 28 00:51:07.658641 dockerd[2378]: time="2026-01-28T00:51:07.658596178Z" level=info msg="Loading containers: done." Jan 28 00:51:07.676141 dockerd[2378]: time="2026-01-28T00:51:07.676092758Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 00:51:07.676292 dockerd[2378]: time="2026-01-28T00:51:07.676184086Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 28 00:51:07.676292 dockerd[2378]: time="2026-01-28T00:51:07.676264614Z" level=info msg="Initializing buildkit" Jan 28 00:51:07.709361 dockerd[2378]: time="2026-01-28T00:51:07.709315767Z" level=info msg="Completed buildkit initialization" Jan 28 00:51:07.715312 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 28 00:51:07.716382 dockerd[2378]: time="2026-01-28T00:51:07.715809037Z" level=info msg="Daemon has completed initialization" Jan 28 00:51:07.716382 dockerd[2378]: time="2026-01-28T00:51:07.716080573Z" level=info msg="API listen on /run/docker.sock" Jan 28 00:51:08.229773 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck891960846-merged.mount: Deactivated successfully. Jan 28 00:51:08.394306 containerd[1885]: time="2026-01-28T00:51:08.394258639Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 28 00:51:09.271091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3308049934.mount: Deactivated successfully. Jan 28 00:51:10.497528 containerd[1885]: time="2026-01-28T00:51:10.497216583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:51:10.499565 containerd[1885]: time="2026-01-28T00:51:10.499528175Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982" Jan 28 00:51:10.502552 containerd[1885]: time="2026-01-28T00:51:10.502291047Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:51:10.505569 containerd[1885]: time="2026-01-28T00:51:10.505542487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:51:10.506025 containerd[1885]: time="2026-01-28T00:51:10.505998879Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest 
\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.111698944s" Jan 28 00:51:10.506074 containerd[1885]: time="2026-01-28T00:51:10.506027887Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 28 00:51:10.506724 containerd[1885]: time="2026-01-28T00:51:10.506592623Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 28 00:51:11.944536 containerd[1885]: time="2026-01-28T00:51:11.944220079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:51:11.946597 containerd[1885]: time="2026-01-28T00:51:11.946412719Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086" Jan 28 00:51:11.948981 containerd[1885]: time="2026-01-28T00:51:11.948956399Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:51:11.952871 containerd[1885]: time="2026-01-28T00:51:11.952836031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:51:11.953651 containerd[1885]: time="2026-01-28T00:51:11.953502535Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 
1.44687384s" Jan 28 00:51:11.953651 containerd[1885]: time="2026-01-28T00:51:11.953530439Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 28 00:51:11.954316 containerd[1885]: time="2026-01-28T00:51:11.954295311Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 28 00:51:13.108241 containerd[1885]: time="2026-01-28T00:51:13.108181783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:51:13.110944 containerd[1885]: time="2026-01-28T00:51:13.110913679Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747" Jan 28 00:51:13.112886 containerd[1885]: time="2026-01-28T00:51:13.112860327Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:51:13.118050 containerd[1885]: time="2026-01-28T00:51:13.117546887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:51:13.118050 containerd[1885]: time="2026-01-28T00:51:13.117934327Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.163547288s" Jan 28 00:51:13.118050 containerd[1885]: time="2026-01-28T00:51:13.117961247Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image 
reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 28 00:51:13.118748 containerd[1885]: time="2026-01-28T00:51:13.118723975Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 28 00:51:14.140513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4264479404.mount: Deactivated successfully. Jan 28 00:51:14.142157 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 00:51:14.144663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:51:14.581837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:51:14.590743 (kubelet)[2661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:51:14.617893 kubelet[2661]: E0128 00:51:14.617815 2661 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:51:14.619977 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:51:14.620202 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:51:14.620758 systemd[1]: kubelet.service: Consumed 108ms CPU time, 107.2M memory peak. 
Jan 28 00:51:15.002869 containerd[1885]: time="2026-01-28T00:51:15.002789900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:51:15.005224 containerd[1885]: time="2026-01-28T00:51:15.005197498Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724" Jan 28 00:51:15.007181 containerd[1885]: time="2026-01-28T00:51:15.007140963Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:51:15.010668 containerd[1885]: time="2026-01-28T00:51:15.010631735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:51:15.011051 containerd[1885]: time="2026-01-28T00:51:15.011024276Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.892269677s" Jan 28 00:51:15.011137 containerd[1885]: time="2026-01-28T00:51:15.011122832Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 28 00:51:15.011642 containerd[1885]: time="2026-01-28T00:51:15.011627498Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 28 00:51:15.672620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3493574862.mount: Deactivated successfully. 
Jan 28 00:51:17.283009 containerd[1885]: time="2026-01-28T00:51:17.282942482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:51:17.285888 containerd[1885]: time="2026-01-28T00:51:17.285693919Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 28 00:51:17.288390 containerd[1885]: time="2026-01-28T00:51:17.288364586Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:51:17.291928 containerd[1885]: time="2026-01-28T00:51:17.291893909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:51:17.292577 containerd[1885]: time="2026-01-28T00:51:17.292550369Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.280814883s" Jan 28 00:51:17.292751 containerd[1885]: time="2026-01-28T00:51:17.292657971Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 28 00:51:17.293232 containerd[1885]: time="2026-01-28T00:51:17.293205718Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 28 00:51:17.833884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2429966234.mount: Deactivated successfully. 
Jan 28 00:51:17.854795 containerd[1885]: time="2026-01-28T00:51:17.854740508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:51:17.858851 containerd[1885]: time="2026-01-28T00:51:17.858686791Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 28 00:51:17.864912 containerd[1885]: time="2026-01-28T00:51:17.864876149Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:51:17.868454 containerd[1885]: time="2026-01-28T00:51:17.868409232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:51:17.868999 containerd[1885]: time="2026-01-28T00:51:17.868820072Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 575.58389ms" Jan 28 00:51:17.868999 containerd[1885]: time="2026-01-28T00:51:17.868844704Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 28 00:51:17.869352 containerd[1885]: time="2026-01-28T00:51:17.869327898Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 28 00:51:18.491366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1833449395.mount: Deactivated 
successfully. Jan 28 00:51:21.347231 containerd[1885]: time="2026-01-28T00:51:21.346584976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:51:21.349865 containerd[1885]: time="2026-01-28T00:51:21.349838046Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Jan 28 00:51:21.352797 containerd[1885]: time="2026-01-28T00:51:21.352772374Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:51:21.357319 containerd[1885]: time="2026-01-28T00:51:21.357284035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:51:21.358058 containerd[1885]: time="2026-01-28T00:51:21.358031802Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.488671712s" Jan 28 00:51:21.358256 containerd[1885]: time="2026-01-28T00:51:21.358160892Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 28 00:51:22.271513 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 28 00:51:24.648456 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 28 00:51:24.651673 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 28 00:51:24.672741 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 00:51:24.672805 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 28 00:51:24.673020 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:51:24.685205 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:51:24.697149 systemd[1]: Reload requested from client PID 2810 ('systemctl') (unit session-9.scope)... Jan 28 00:51:24.697158 systemd[1]: Reloading... Jan 28 00:51:24.770526 zram_generator::config[2856]: No configuration found. Jan 28 00:51:24.927741 systemd[1]: Reloading finished in 230 ms. Jan 28 00:51:24.979326 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 00:51:24.979523 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 28 00:51:24.979894 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:51:24.982091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:51:25.100705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:51:25.112753 (kubelet)[2922]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 00:51:25.136977 kubelet[2922]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:51:25.136977 kubelet[2922]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 00:51:25.136977 kubelet[2922]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:51:25.203703 kubelet[2922]: I0128 00:51:25.203563 2922 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 00:51:25.506652 kubelet[2922]: I0128 00:51:25.505708 2922 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 00:51:25.506652 kubelet[2922]: I0128 00:51:25.505743 2922 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 00:51:25.506652 kubelet[2922]: I0128 00:51:25.505948 2922 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 00:51:25.525637 kubelet[2922]: E0128 00:51:25.525596 2922 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.13:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:51:25.527437 kubelet[2922]: I0128 00:51:25.527395 2922 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 00:51:25.532780 kubelet[2922]: I0128 00:51:25.532765 2922 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 00:51:25.535288 kubelet[2922]: I0128 00:51:25.535265 2922 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 00:51:25.536147 kubelet[2922]: I0128 00:51:25.536113 2922 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 00:51:25.536372 kubelet[2922]: I0128 00:51:25.536231 2922 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.3-n-42917f0d29","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 00:51:25.536513 kubelet[2922]: I0128 00:51:25.536486 2922 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 28 00:51:25.536572 kubelet[2922]: I0128 00:51:25.536564 2922 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 00:51:25.536759 kubelet[2922]: I0128 00:51:25.536744 2922 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:51:25.539310 kubelet[2922]: I0128 00:51:25.539292 2922 kubelet.go:446] "Attempting to sync node with API server" Jan 28 00:51:25.539409 kubelet[2922]: I0128 00:51:25.539398 2922 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 00:51:25.539473 kubelet[2922]: I0128 00:51:25.539466 2922 kubelet.go:352] "Adding apiserver pod source" Jan 28 00:51:25.539546 kubelet[2922]: I0128 00:51:25.539536 2922 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 00:51:25.543592 kubelet[2922]: W0128 00:51:25.543246 2922 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.3-n-42917f0d29&limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 28 00:51:25.543592 kubelet[2922]: E0128 00:51:25.543290 2922 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.3-n-42917f0d29&limit=500&resourceVersion=0\": dial tcp 10.200.20.13:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:51:25.544755 kubelet[2922]: W0128 00:51:25.544713 2922 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 28 00:51:25.544755 kubelet[2922]: E0128 00:51:25.544752 2922 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.13:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:51:25.544860 kubelet[2922]: I0128 00:51:25.544845 2922 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 28 00:51:25.545166 kubelet[2922]: I0128 00:51:25.545143 2922 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 00:51:25.545213 kubelet[2922]: W0128 00:51:25.545193 2922 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 28 00:51:25.545689 kubelet[2922]: I0128 00:51:25.545662 2922 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 00:51:25.545748 kubelet[2922]: I0128 00:51:25.545697 2922 server.go:1287] "Started kubelet" Jan 28 00:51:25.547362 kubelet[2922]: I0128 00:51:25.546670 2922 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 00:51:25.547362 kubelet[2922]: I0128 00:51:25.547205 2922 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 00:51:25.547362 kubelet[2922]: I0128 00:51:25.547269 2922 server.go:479] "Adding debug handlers to kubelet server" Jan 28 00:51:25.547933 kubelet[2922]: I0128 00:51:25.547877 2922 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 00:51:25.548088 kubelet[2922]: I0128 00:51:25.548067 2922 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 00:51:25.551107 kubelet[2922]: I0128 00:51:25.551065 2922 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 00:51:25.554757 kubelet[2922]: E0128 00:51:25.554661 2922 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.13:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.3-n-42917f0d29.188ebecbaa29b8c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.3-n-42917f0d29,UID:ci-4459.2.3-n-42917f0d29,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.3-n-42917f0d29,},FirstTimestamp:2026-01-28 00:51:25.54568314 +0000 UTC m=+0.430528606,LastTimestamp:2026-01-28 00:51:25.54568314 +0000 UTC m=+0.430528606,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.3-n-42917f0d29,}" Jan 28 00:51:25.555527 kubelet[2922]: I0128 00:51:25.555127 2922 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 00:51:25.555527 kubelet[2922]: E0128 00:51:25.555284 2922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-42917f0d29\" not found" Jan 28 00:51:25.555527 kubelet[2922]: I0128 00:51:25.555317 2922 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 00:51:25.555527 kubelet[2922]: I0128 00:51:25.555368 2922 reconciler.go:26] "Reconciler: start to sync state" Jan 28 00:51:25.555823 kubelet[2922]: W0128 00:51:25.555793 2922 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 28 00:51:25.555913 kubelet[2922]: E0128 00:51:25.555897 
2922 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.13:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:51:25.557346 kubelet[2922]: E0128 00:51:25.557314 2922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-n-42917f0d29?timeout=10s\": dial tcp 10.200.20.13:6443: connect: connection refused" interval="200ms" Jan 28 00:51:25.557925 kubelet[2922]: I0128 00:51:25.557910 2922 factory.go:221] Registration of the containerd container factory successfully Jan 28 00:51:25.558002 kubelet[2922]: I0128 00:51:25.557994 2922 factory.go:221] Registration of the systemd container factory successfully Jan 28 00:51:25.558114 kubelet[2922]: I0128 00:51:25.558100 2922 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 00:51:25.560572 kubelet[2922]: E0128 00:51:25.560542 2922 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 00:51:25.582275 kubelet[2922]: I0128 00:51:25.582250 2922 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 00:51:25.582418 kubelet[2922]: I0128 00:51:25.582410 2922 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 00:51:25.582513 kubelet[2922]: I0128 00:51:25.582489 2922 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:51:25.620546 kubelet[2922]: I0128 00:51:25.620517 2922 policy_none.go:49] "None policy: Start" Jan 28 00:51:25.620746 kubelet[2922]: I0128 00:51:25.620686 2922 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 00:51:25.620746 kubelet[2922]: I0128 00:51:25.620704 2922 state_mem.go:35] "Initializing new in-memory state store" Jan 28 00:51:25.627793 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 28 00:51:25.641712 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 00:51:25.644426 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 28 00:51:25.652548 kubelet[2922]: I0128 00:51:25.652252 2922 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 00:51:25.652548 kubelet[2922]: I0128 00:51:25.652428 2922 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 00:51:25.652548 kubelet[2922]: I0128 00:51:25.652438 2922 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 00:51:25.655204 kubelet[2922]: I0128 00:51:25.655185 2922 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 00:51:25.656183 kubelet[2922]: E0128 00:51:25.656118 2922 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 00:51:25.656635 kubelet[2922]: E0128 00:51:25.656597 2922 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.3-n-42917f0d29\" not found" Jan 28 00:51:25.657726 kubelet[2922]: I0128 00:51:25.657692 2922 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 00:51:25.658604 kubelet[2922]: I0128 00:51:25.658571 2922 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 00:51:25.658604 kubelet[2922]: I0128 00:51:25.658600 2922 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 00:51:25.658665 kubelet[2922]: I0128 00:51:25.658620 2922 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 00:51:25.658665 kubelet[2922]: I0128 00:51:25.658625 2922 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 00:51:25.658665 kubelet[2922]: E0128 00:51:25.658658 2922 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 28 00:51:25.659332 kubelet[2922]: W0128 00:51:25.659245 2922 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 28 00:51:25.659332 kubelet[2922]: E0128 00:51:25.659294 2922 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.13:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:51:25.755470 kubelet[2922]: I0128 00:51:25.755436 2922 kubelet_node_status.go:75] "Attempting to register node" 
node="ci-4459.2.3-n-42917f0d29" Jan 28 00:51:25.755875 kubelet[2922]: E0128 00:51:25.755848 2922 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.13:6443/api/v1/nodes\": dial tcp 10.200.20.13:6443: connect: connection refused" node="ci-4459.2.3-n-42917f0d29" Jan 28 00:51:25.758120 kubelet[2922]: E0128 00:51:25.758046 2922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-n-42917f0d29?timeout=10s\": dial tcp 10.200.20.13:6443: connect: connection refused" interval="400ms" Jan 28 00:51:25.768094 systemd[1]: Created slice kubepods-burstable-pod19a77c222482993b52b7b06d611a8e7a.slice - libcontainer container kubepods-burstable-pod19a77c222482993b52b7b06d611a8e7a.slice. Jan 28 00:51:25.783382 kubelet[2922]: E0128 00:51:25.783173 2922 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-42917f0d29\" not found" node="ci-4459.2.3-n-42917f0d29" Jan 28 00:51:25.785774 systemd[1]: Created slice kubepods-burstable-pod2f257dc5f642513b15209441ae2337d6.slice - libcontainer container kubepods-burstable-pod2f257dc5f642513b15209441ae2337d6.slice. Jan 28 00:51:25.787739 kubelet[2922]: E0128 00:51:25.787709 2922 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-42917f0d29\" not found" node="ci-4459.2.3-n-42917f0d29" Jan 28 00:51:25.789629 systemd[1]: Created slice kubepods-burstable-pod4da435f820c7c51678900150f81f3613.slice - libcontainer container kubepods-burstable-pod4da435f820c7c51678900150f81f3613.slice. 
Jan 28 00:51:25.791443 kubelet[2922]: E0128 00:51:25.791412 2922 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-42917f0d29\" not found" node="ci-4459.2.3-n-42917f0d29" Jan 28 00:51:25.856003 kubelet[2922]: I0128 00:51:25.855966 2922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4da435f820c7c51678900150f81f3613-k8s-certs\") pod \"kube-apiserver-ci-4459.2.3-n-42917f0d29\" (UID: \"4da435f820c7c51678900150f81f3613\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:25.856113 kubelet[2922]: I0128 00:51:25.856013 2922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4da435f820c7c51678900150f81f3613-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.3-n-42917f0d29\" (UID: \"4da435f820c7c51678900150f81f3613\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:25.856113 kubelet[2922]: I0128 00:51:25.856030 2922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19a77c222482993b52b7b06d611a8e7a-ca-certs\") pod \"kube-controller-manager-ci-4459.2.3-n-42917f0d29\" (UID: \"19a77c222482993b52b7b06d611a8e7a\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:25.856113 kubelet[2922]: I0128 00:51:25.856040 2922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19a77c222482993b52b7b06d611a8e7a-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.3-n-42917f0d29\" (UID: \"19a77c222482993b52b7b06d611a8e7a\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:25.856113 kubelet[2922]: 
I0128 00:51:25.856049 2922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4da435f820c7c51678900150f81f3613-ca-certs\") pod \"kube-apiserver-ci-4459.2.3-n-42917f0d29\" (UID: \"4da435f820c7c51678900150f81f3613\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:25.856113 kubelet[2922]: I0128 00:51:25.856060 2922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/19a77c222482993b52b7b06d611a8e7a-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.3-n-42917f0d29\" (UID: \"19a77c222482993b52b7b06d611a8e7a\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:25.856203 kubelet[2922]: I0128 00:51:25.856071 2922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19a77c222482993b52b7b06d611a8e7a-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.3-n-42917f0d29\" (UID: \"19a77c222482993b52b7b06d611a8e7a\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:25.856203 kubelet[2922]: I0128 00:51:25.856087 2922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19a77c222482993b52b7b06d611a8e7a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.3-n-42917f0d29\" (UID: \"19a77c222482993b52b7b06d611a8e7a\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:25.856203 kubelet[2922]: I0128 00:51:25.856102 2922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f257dc5f642513b15209441ae2337d6-kubeconfig\") pod 
\"kube-scheduler-ci-4459.2.3-n-42917f0d29\" (UID: \"2f257dc5f642513b15209441ae2337d6\") " pod="kube-system/kube-scheduler-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:25.957841 kubelet[2922]: I0128 00:51:25.957811 2922 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-n-42917f0d29" Jan 28 00:51:25.958178 kubelet[2922]: E0128 00:51:25.958152 2922 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.13:6443/api/v1/nodes\": dial tcp 10.200.20.13:6443: connect: connection refused" node="ci-4459.2.3-n-42917f0d29" Jan 28 00:51:26.084761 containerd[1885]: time="2026-01-28T00:51:26.084649693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.3-n-42917f0d29,Uid:19a77c222482993b52b7b06d611a8e7a,Namespace:kube-system,Attempt:0,}" Jan 28 00:51:26.089025 containerd[1885]: time="2026-01-28T00:51:26.088987611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.3-n-42917f0d29,Uid:2f257dc5f642513b15209441ae2337d6,Namespace:kube-system,Attempt:0,}" Jan 28 00:51:26.092880 containerd[1885]: time="2026-01-28T00:51:26.092851991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.3-n-42917f0d29,Uid:4da435f820c7c51678900150f81f3613,Namespace:kube-system,Attempt:0,}" Jan 28 00:51:26.158679 kubelet[2922]: E0128 00:51:26.158626 2922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-n-42917f0d29?timeout=10s\": dial tcp 10.200.20.13:6443: connect: connection refused" interval="800ms" Jan 28 00:51:26.359864 kubelet[2922]: I0128 00:51:26.359767 2922 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-n-42917f0d29" Jan 28 00:51:26.360167 kubelet[2922]: E0128 00:51:26.360111 2922 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.200.20.13:6443/api/v1/nodes\": dial tcp 10.200.20.13:6443: connect: connection refused" node="ci-4459.2.3-n-42917f0d29" Jan 28 00:51:26.487470 kubelet[2922]: W0128 00:51:26.487323 2922 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.3-n-42917f0d29&limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 28 00:51:26.487470 kubelet[2922]: E0128 00:51:26.487391 2922 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.3-n-42917f0d29&limit=500&resourceVersion=0\": dial tcp 10.200.20.13:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:51:26.692687 kubelet[2922]: W0128 00:51:26.692595 2922 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 28 00:51:26.692687 kubelet[2922]: E0128 00:51:26.692657 2922 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.13:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:51:26.824774 kubelet[2922]: W0128 00:51:26.824679 2922 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 28 00:51:26.824774 kubelet[2922]: E0128 00:51:26.824741 2922 reflector.go:166] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.13:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:51:26.833351 kubelet[2922]: W0128 00:51:26.833294 2922 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 28 00:51:26.833351 kubelet[2922]: E0128 00:51:26.833328 2922 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.13:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:51:26.959732 kubelet[2922]: E0128 00:51:26.959610 2922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-n-42917f0d29?timeout=10s\": dial tcp 10.200.20.13:6443: connect: connection refused" interval="1.6s" Jan 28 00:51:27.030939 containerd[1885]: time="2026-01-28T00:51:27.030894009Z" level=info msg="connecting to shim be3b1e53671d0488a3a35dc48bbc4fe00c529c3c53376a1ce03f016b4876b977" address="unix:///run/containerd/s/461fb35b93b44635401872dc874ed5c646d4346f2d49f9f4ef4d71ea78f835a5" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:51:27.049525 containerd[1885]: time="2026-01-28T00:51:27.049469165Z" level=info msg="connecting to shim f4fac0783b6009e318c87b36b883a26bb21192022833bfc453bed114d658bef0" address="unix:///run/containerd/s/feddd48b1dd274ac8c6ba25e944fb9545593d51b4670ed10ac8aa89eb5f33814" namespace=k8s.io protocol=ttrpc 
version=3 Jan 28 00:51:27.057213 containerd[1885]: time="2026-01-28T00:51:27.055564745Z" level=info msg="connecting to shim 67e8496da9ae290e6b45275f461248de60b79682a4831a8e42f184d1cfc63caf" address="unix:///run/containerd/s/58e6d1cc58c66df9a96d5076ab4f39577ac8fbe926ac6ea392ec5fbc0d4611f2" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:51:27.058680 systemd[1]: Started cri-containerd-be3b1e53671d0488a3a35dc48bbc4fe00c529c3c53376a1ce03f016b4876b977.scope - libcontainer container be3b1e53671d0488a3a35dc48bbc4fe00c529c3c53376a1ce03f016b4876b977. Jan 28 00:51:27.086123 systemd[1]: Started cri-containerd-f4fac0783b6009e318c87b36b883a26bb21192022833bfc453bed114d658bef0.scope - libcontainer container f4fac0783b6009e318c87b36b883a26bb21192022833bfc453bed114d658bef0. Jan 28 00:51:27.090030 systemd[1]: Started cri-containerd-67e8496da9ae290e6b45275f461248de60b79682a4831a8e42f184d1cfc63caf.scope - libcontainer container 67e8496da9ae290e6b45275f461248de60b79682a4831a8e42f184d1cfc63caf. Jan 28 00:51:27.106973 containerd[1885]: time="2026-01-28T00:51:27.106935230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.3-n-42917f0d29,Uid:19a77c222482993b52b7b06d611a8e7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"be3b1e53671d0488a3a35dc48bbc4fe00c529c3c53376a1ce03f016b4876b977\"" Jan 28 00:51:27.112249 containerd[1885]: time="2026-01-28T00:51:27.112213968Z" level=info msg="CreateContainer within sandbox \"be3b1e53671d0488a3a35dc48bbc4fe00c529c3c53376a1ce03f016b4876b977\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 00:51:27.130302 containerd[1885]: time="2026-01-28T00:51:27.130269225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.3-n-42917f0d29,Uid:4da435f820c7c51678900150f81f3613,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4fac0783b6009e318c87b36b883a26bb21192022833bfc453bed114d658bef0\"" Jan 28 00:51:27.133526 containerd[1885]: 
time="2026-01-28T00:51:27.133500743Z" level=info msg="CreateContainer within sandbox \"f4fac0783b6009e318c87b36b883a26bb21192022833bfc453bed114d658bef0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 00:51:27.138408 containerd[1885]: time="2026-01-28T00:51:27.138380137Z" level=info msg="Container 3e6766b0135da289a47b163d7835ffb22662eaf4a360c1f0a27e9cf091e00ba1: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:51:27.145612 containerd[1885]: time="2026-01-28T00:51:27.145590622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.3-n-42917f0d29,Uid:2f257dc5f642513b15209441ae2337d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"67e8496da9ae290e6b45275f461248de60b79682a4831a8e42f184d1cfc63caf\"" Jan 28 00:51:27.149196 containerd[1885]: time="2026-01-28T00:51:27.149132587Z" level=info msg="CreateContainer within sandbox \"67e8496da9ae290e6b45275f461248de60b79682a4831a8e42f184d1cfc63caf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 00:51:27.161343 containerd[1885]: time="2026-01-28T00:51:27.161296435Z" level=info msg="CreateContainer within sandbox \"be3b1e53671d0488a3a35dc48bbc4fe00c529c3c53376a1ce03f016b4876b977\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3e6766b0135da289a47b163d7835ffb22662eaf4a360c1f0a27e9cf091e00ba1\"" Jan 28 00:51:27.162048 kubelet[2922]: I0128 00:51:27.161948 2922 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-n-42917f0d29" Jan 28 00:51:27.162510 containerd[1885]: time="2026-01-28T00:51:27.162456844Z" level=info msg="StartContainer for \"3e6766b0135da289a47b163d7835ffb22662eaf4a360c1f0a27e9cf091e00ba1\"" Jan 28 00:51:27.163165 kubelet[2922]: E0128 00:51:27.163140 2922 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.13:6443/api/v1/nodes\": dial tcp 10.200.20.13:6443: connect: connection refused" node="ci-4459.2.3-n-42917f0d29" 
Jan 28 00:51:27.163449 containerd[1885]: time="2026-01-28T00:51:27.163425657Z" level=info msg="connecting to shim 3e6766b0135da289a47b163d7835ffb22662eaf4a360c1f0a27e9cf091e00ba1" address="unix:///run/containerd/s/461fb35b93b44635401872dc874ed5c646d4346f2d49f9f4ef4d71ea78f835a5" protocol=ttrpc version=3 Jan 28 00:51:27.166584 containerd[1885]: time="2026-01-28T00:51:27.166553405Z" level=info msg="Container 1bf35380302e19c51d35ecd84f8eb3b6ac06c82fa3f2ceb78cec32fdb77f79ff: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:51:27.180746 systemd[1]: Started cri-containerd-3e6766b0135da289a47b163d7835ffb22662eaf4a360c1f0a27e9cf091e00ba1.scope - libcontainer container 3e6766b0135da289a47b163d7835ffb22662eaf4a360c1f0a27e9cf091e00ba1. Jan 28 00:51:27.194963 containerd[1885]: time="2026-01-28T00:51:27.194925254Z" level=info msg="Container 3a08b5fa2e2545167c84bfe520832dfc1d74097143f5270b85eb2fde9941b517: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:51:27.202811 containerd[1885]: time="2026-01-28T00:51:27.202672326Z" level=info msg="CreateContainer within sandbox \"f4fac0783b6009e318c87b36b883a26bb21192022833bfc453bed114d658bef0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1bf35380302e19c51d35ecd84f8eb3b6ac06c82fa3f2ceb78cec32fdb77f79ff\"" Jan 28 00:51:27.203540 containerd[1885]: time="2026-01-28T00:51:27.203302156Z" level=info msg="StartContainer for \"1bf35380302e19c51d35ecd84f8eb3b6ac06c82fa3f2ceb78cec32fdb77f79ff\"" Jan 28 00:51:27.204323 containerd[1885]: time="2026-01-28T00:51:27.204299186Z" level=info msg="connecting to shim 1bf35380302e19c51d35ecd84f8eb3b6ac06c82fa3f2ceb78cec32fdb77f79ff" address="unix:///run/containerd/s/feddd48b1dd274ac8c6ba25e944fb9545593d51b4670ed10ac8aa89eb5f33814" protocol=ttrpc version=3 Jan 28 00:51:27.210627 containerd[1885]: time="2026-01-28T00:51:27.210152737Z" level=info msg="CreateContainer within sandbox \"67e8496da9ae290e6b45275f461248de60b79682a4831a8e42f184d1cfc63caf\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3a08b5fa2e2545167c84bfe520832dfc1d74097143f5270b85eb2fde9941b517\"" Jan 28 00:51:27.210826 containerd[1885]: time="2026-01-28T00:51:27.210804655Z" level=info msg="StartContainer for \"3a08b5fa2e2545167c84bfe520832dfc1d74097143f5270b85eb2fde9941b517\"" Jan 28 00:51:27.214366 containerd[1885]: time="2026-01-28T00:51:27.214331388Z" level=info msg="connecting to shim 3a08b5fa2e2545167c84bfe520832dfc1d74097143f5270b85eb2fde9941b517" address="unix:///run/containerd/s/58e6d1cc58c66df9a96d5076ab4f39577ac8fbe926ac6ea392ec5fbc0d4611f2" protocol=ttrpc version=3 Jan 28 00:51:27.228793 containerd[1885]: time="2026-01-28T00:51:27.228760909Z" level=info msg="StartContainer for \"3e6766b0135da289a47b163d7835ffb22662eaf4a360c1f0a27e9cf091e00ba1\" returns successfully" Jan 28 00:51:27.233787 systemd[1]: Started cri-containerd-1bf35380302e19c51d35ecd84f8eb3b6ac06c82fa3f2ceb78cec32fdb77f79ff.scope - libcontainer container 1bf35380302e19c51d35ecd84f8eb3b6ac06c82fa3f2ceb78cec32fdb77f79ff. Jan 28 00:51:27.244658 systemd[1]: Started cri-containerd-3a08b5fa2e2545167c84bfe520832dfc1d74097143f5270b85eb2fde9941b517.scope - libcontainer container 3a08b5fa2e2545167c84bfe520832dfc1d74097143f5270b85eb2fde9941b517. 
Jan 28 00:51:27.284064 containerd[1885]: time="2026-01-28T00:51:27.284028438Z" level=info msg="StartContainer for \"1bf35380302e19c51d35ecd84f8eb3b6ac06c82fa3f2ceb78cec32fdb77f79ff\" returns successfully" Jan 28 00:51:27.306504 containerd[1885]: time="2026-01-28T00:51:27.306459910Z" level=info msg="StartContainer for \"3a08b5fa2e2545167c84bfe520832dfc1d74097143f5270b85eb2fde9941b517\" returns successfully" Jan 28 00:51:27.667800 kubelet[2922]: E0128 00:51:27.667766 2922 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-42917f0d29\" not found" node="ci-4459.2.3-n-42917f0d29" Jan 28 00:51:27.671227 kubelet[2922]: E0128 00:51:27.671171 2922 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-42917f0d29\" not found" node="ci-4459.2.3-n-42917f0d29" Jan 28 00:51:27.673073 kubelet[2922]: E0128 00:51:27.672923 2922 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-42917f0d29\" not found" node="ci-4459.2.3-n-42917f0d29" Jan 28 00:51:27.803590 update_engine[1873]: I20260128 00:51:27.803524 1873 update_attempter.cc:509] Updating boot flags... 
Jan 28 00:51:28.675611 kubelet[2922]: E0128 00:51:28.674949 2922 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-42917f0d29\" not found" node="ci-4459.2.3-n-42917f0d29" Jan 28 00:51:28.675914 kubelet[2922]: E0128 00:51:28.675616 2922 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-42917f0d29\" not found" node="ci-4459.2.3-n-42917f0d29" Jan 28 00:51:28.691762 kubelet[2922]: E0128 00:51:28.691721 2922 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.3-n-42917f0d29\" not found" node="ci-4459.2.3-n-42917f0d29" Jan 28 00:51:28.765890 kubelet[2922]: I0128 00:51:28.765844 2922 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-n-42917f0d29" Jan 28 00:51:28.880664 kubelet[2922]: I0128 00:51:28.880625 2922 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.3-n-42917f0d29" Jan 28 00:51:28.880664 kubelet[2922]: E0128 00:51:28.880665 2922 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.2.3-n-42917f0d29\": node \"ci-4459.2.3-n-42917f0d29\" not found" Jan 28 00:51:28.893739 kubelet[2922]: E0128 00:51:28.893629 2922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-42917f0d29\" not found" Jan 28 00:51:28.993892 kubelet[2922]: E0128 00:51:28.993765 2922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-42917f0d29\" not found" Jan 28 00:51:29.094875 kubelet[2922]: E0128 00:51:29.094827 2922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-42917f0d29\" not found" Jan 28 00:51:29.195406 kubelet[2922]: E0128 00:51:29.195356 2922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"ci-4459.2.3-n-42917f0d29\" not found" Jan 28 00:51:29.295955 kubelet[2922]: E0128 00:51:29.295821 2922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-42917f0d29\" not found" Jan 28 00:51:29.396439 kubelet[2922]: E0128 00:51:29.396398 2922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-42917f0d29\" not found" Jan 28 00:51:29.497222 kubelet[2922]: E0128 00:51:29.497184 2922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-42917f0d29\" not found" Jan 28 00:51:29.598179 kubelet[2922]: E0128 00:51:29.598062 2922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-42917f0d29\" not found" Jan 28 00:51:29.757456 kubelet[2922]: I0128 00:51:29.757218 2922 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:29.768825 kubelet[2922]: W0128 00:51:29.768791 2922 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 00:51:29.769236 kubelet[2922]: I0128 00:51:29.769146 2922 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:29.776458 kubelet[2922]: W0128 00:51:29.776217 2922 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 00:51:29.776458 kubelet[2922]: I0128 00:51:29.776291 2922 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:29.784294 kubelet[2922]: W0128 00:51:29.784264 2922 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not 
contain dots] Jan 28 00:51:30.361589 kubelet[2922]: I0128 00:51:30.361510 2922 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:30.374842 kubelet[2922]: W0128 00:51:30.374715 2922 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 00:51:30.376671 kubelet[2922]: E0128 00:51:30.376601 2922 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.3-n-42917f0d29\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:30.546276 kubelet[2922]: I0128 00:51:30.546229 2922 apiserver.go:52] "Watching apiserver" Jan 28 00:51:30.556306 kubelet[2922]: I0128 00:51:30.556266 2922 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 00:51:31.198746 systemd[1]: Reload requested from client PID 3258 ('systemctl') (unit session-9.scope)... Jan 28 00:51:31.198759 systemd[1]: Reloading... Jan 28 00:51:31.299643 zram_generator::config[3301]: No configuration found. Jan 28 00:51:31.476907 systemd[1]: Reloading finished in 277 ms. Jan 28 00:51:31.502655 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:51:31.513809 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 00:51:31.513992 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:51:31.514035 systemd[1]: kubelet.service: Consumed 626ms CPU time, 127.2M memory peak. Jan 28 00:51:31.516312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:51:32.976798 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 00:51:32.984916 (kubelet)[3369]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 00:51:33.029881 kubelet[3369]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:51:33.029881 kubelet[3369]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 00:51:33.029881 kubelet[3369]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:51:33.030385 kubelet[3369]: I0128 00:51:33.029914 3369 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 00:51:33.039640 kubelet[3369]: I0128 00:51:33.039605 3369 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 00:51:33.039640 kubelet[3369]: I0128 00:51:33.039634 3369 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 00:51:33.039913 kubelet[3369]: I0128 00:51:33.039891 3369 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 00:51:33.041903 kubelet[3369]: I0128 00:51:33.041881 3369 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 28 00:51:33.043507 kubelet[3369]: I0128 00:51:33.043426 3369 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 00:51:33.047581 kubelet[3369]: I0128 00:51:33.047555 3369 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 00:51:33.053406 kubelet[3369]: I0128 00:51:33.053382 3369 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 28 00:51:33.053583 kubelet[3369]: I0128 00:51:33.053555 3369 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 00:51:33.053705 kubelet[3369]: I0128 00:51:33.053580 3369 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.3-n-42917f0d29","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"
none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 00:51:33.053767 kubelet[3369]: I0128 00:51:33.053712 3369 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 00:51:33.053767 kubelet[3369]: I0128 00:51:33.053719 3369 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 00:51:33.053767 kubelet[3369]: I0128 00:51:33.053756 3369 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:51:33.054194 kubelet[3369]: I0128 00:51:33.053851 3369 kubelet.go:446] "Attempting to sync node with API server" Jan 28 00:51:33.054194 kubelet[3369]: I0128 00:51:33.053861 3369 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 00:51:33.054194 kubelet[3369]: I0128 00:51:33.053880 3369 kubelet.go:352] "Adding apiserver pod source" Jan 28 00:51:33.054194 kubelet[3369]: I0128 00:51:33.053899 3369 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 00:51:33.055040 kubelet[3369]: I0128 00:51:33.055012 3369 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 28 00:51:33.056015 kubelet[3369]: I0128 00:51:33.055798 3369 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 00:51:33.056550 kubelet[3369]: I0128 00:51:33.056536 3369 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 00:51:33.056682 kubelet[3369]: I0128 00:51:33.056673 3369 server.go:1287] "Started kubelet" Jan 28 00:51:33.058889 kubelet[3369]: I0128 00:51:33.058848 3369 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 00:51:33.064076 kubelet[3369]: I0128 
00:51:33.064027 3369 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 00:51:33.065899 kubelet[3369]: I0128 00:51:33.065878 3369 server.go:479] "Adding debug handlers to kubelet server" Jan 28 00:51:33.069686 kubelet[3369]: I0128 00:51:33.069632 3369 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 00:51:33.070519 kubelet[3369]: I0128 00:51:33.069854 3369 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 00:51:33.070519 kubelet[3369]: I0128 00:51:33.070022 3369 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 00:51:33.073526 kubelet[3369]: I0128 00:51:33.072845 3369 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 00:51:33.073736 kubelet[3369]: E0128 00:51:33.073715 3369 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-42917f0d29\" not found" Jan 28 00:51:33.074075 kubelet[3369]: I0128 00:51:33.074050 3369 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 00:51:33.074172 kubelet[3369]: I0128 00:51:33.074160 3369 reconciler.go:26] "Reconciler: start to sync state" Jan 28 00:51:33.080375 kubelet[3369]: I0128 00:51:33.080344 3369 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 00:51:33.081201 kubelet[3369]: I0128 00:51:33.081183 3369 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 00:51:33.081532 kubelet[3369]: I0128 00:51:33.081510 3369 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 00:51:33.081633 kubelet[3369]: I0128 00:51:33.081623 3369 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 28 00:51:33.081693 kubelet[3369]: I0128 00:51:33.081686 3369 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 00:51:33.081792 kubelet[3369]: E0128 00:51:33.081772 3369 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 00:51:33.082874 kubelet[3369]: I0128 00:51:33.082846 3369 factory.go:221] Registration of the systemd container factory successfully Jan 28 00:51:33.082966 kubelet[3369]: I0128 00:51:33.082947 3369 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 00:51:33.087349 kubelet[3369]: E0128 00:51:33.087270 3369 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 00:51:33.088348 kubelet[3369]: I0128 00:51:33.088326 3369 factory.go:221] Registration of the containerd container factory successfully Jan 28 00:51:33.135311 kubelet[3369]: I0128 00:51:33.135283 3369 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 00:51:33.135311 kubelet[3369]: I0128 00:51:33.135303 3369 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 00:51:33.135311 kubelet[3369]: I0128 00:51:33.135324 3369 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:51:33.135787 kubelet[3369]: I0128 00:51:33.135465 3369 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 00:51:33.135787 kubelet[3369]: I0128 00:51:33.135472 3369 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 00:51:33.135787 kubelet[3369]: I0128 00:51:33.135487 3369 policy_none.go:49] "None policy: Start" Jan 28 00:51:33.135787 kubelet[3369]: I0128 00:51:33.135513 3369 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 00:51:33.135787 kubelet[3369]: I0128 00:51:33.135521 
3369 state_mem.go:35] "Initializing new in-memory state store" Jan 28 00:51:33.135787 kubelet[3369]: I0128 00:51:33.135604 3369 state_mem.go:75] "Updated machine memory state" Jan 28 00:51:33.139954 kubelet[3369]: I0128 00:51:33.139936 3369 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 00:51:33.140506 kubelet[3369]: I0128 00:51:33.140477 3369 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 00:51:33.140618 kubelet[3369]: I0128 00:51:33.140584 3369 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 00:51:33.140948 kubelet[3369]: I0128 00:51:33.140937 3369 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 00:51:33.145942 kubelet[3369]: E0128 00:51:33.145923 3369 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 28 00:51:33.150829 sudo[3400]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 28 00:51:33.151058 sudo[3400]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 28 00:51:33.182911 kubelet[3369]: I0128 00:51:33.182871 3369 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:33.183671 kubelet[3369]: I0128 00:51:33.183654 3369 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:33.183850 kubelet[3369]: I0128 00:51:33.183831 3369 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:33.205875 kubelet[3369]: W0128 00:51:33.205845 3369 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is 
recommended: [must not contain dots] Jan 28 00:51:33.206040 kubelet[3369]: E0128 00:51:33.205906 3369 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.3-n-42917f0d29\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:33.207239 kubelet[3369]: W0128 00:51:33.207152 3369 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 00:51:33.207303 kubelet[3369]: E0128 00:51:33.207290 3369 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.3-n-42917f0d29\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:33.207562 kubelet[3369]: W0128 00:51:33.207544 3369 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 00:51:33.207590 kubelet[3369]: E0128 00:51:33.207579 3369 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.3-n-42917f0d29\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:33.244920 kubelet[3369]: I0128 00:51:33.244833 3369 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-n-42917f0d29" Jan 28 00:51:33.261771 kubelet[3369]: I0128 00:51:33.261731 3369 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.3-n-42917f0d29" Jan 28 00:51:33.262542 kubelet[3369]: I0128 00:51:33.262337 3369 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.3-n-42917f0d29" Jan 28 00:51:33.276051 kubelet[3369]: I0128 00:51:33.275987 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19a77c222482993b52b7b06d611a8e7a-ca-certs\") pod 
\"kube-controller-manager-ci-4459.2.3-n-42917f0d29\" (UID: \"19a77c222482993b52b7b06d611a8e7a\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:33.276288 kubelet[3369]: I0128 00:51:33.276207 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19a77c222482993b52b7b06d611a8e7a-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.3-n-42917f0d29\" (UID: \"19a77c222482993b52b7b06d611a8e7a\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:33.276288 kubelet[3369]: I0128 00:51:33.276233 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19a77c222482993b52b7b06d611a8e7a-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.3-n-42917f0d29\" (UID: \"19a77c222482993b52b7b06d611a8e7a\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:33.276555 kubelet[3369]: I0128 00:51:33.276392 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4da435f820c7c51678900150f81f3613-ca-certs\") pod \"kube-apiserver-ci-4459.2.3-n-42917f0d29\" (UID: \"4da435f820c7c51678900150f81f3613\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:33.276555 kubelet[3369]: I0128 00:51:33.276415 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/19a77c222482993b52b7b06d611a8e7a-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.3-n-42917f0d29\" (UID: \"19a77c222482993b52b7b06d611a8e7a\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:33.276555 kubelet[3369]: I0128 00:51:33.276429 3369 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19a77c222482993b52b7b06d611a8e7a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.3-n-42917f0d29\" (UID: \"19a77c222482993b52b7b06d611a8e7a\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:33.276740 kubelet[3369]: I0128 00:51:33.276693 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f257dc5f642513b15209441ae2337d6-kubeconfig\") pod \"kube-scheduler-ci-4459.2.3-n-42917f0d29\" (UID: \"2f257dc5f642513b15209441ae2337d6\") " pod="kube-system/kube-scheduler-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:33.276740 kubelet[3369]: I0128 00:51:33.276717 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4da435f820c7c51678900150f81f3613-k8s-certs\") pod \"kube-apiserver-ci-4459.2.3-n-42917f0d29\" (UID: \"4da435f820c7c51678900150f81f3613\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:33.276740 kubelet[3369]: I0128 00:51:33.276728 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4da435f820c7c51678900150f81f3613-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.3-n-42917f0d29\" (UID: \"4da435f820c7c51678900150f81f3613\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:33.400177 sudo[3400]: pam_unix(sudo:session): session closed for user root Jan 28 00:51:34.061519 kubelet[3369]: I0128 00:51:34.061453 3369 apiserver.go:52] "Watching apiserver" Jan 28 00:51:34.075144 kubelet[3369]: I0128 00:51:34.075109 3369 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 
00:51:34.114886 kubelet[3369]: I0128 00:51:34.114148 3369 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:34.114886 kubelet[3369]: I0128 00:51:34.114454 3369 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:34.114886 kubelet[3369]: I0128 00:51:34.114661 3369 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:34.137368 kubelet[3369]: W0128 00:51:34.137344 3369 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 00:51:34.137655 kubelet[3369]: W0128 00:51:34.137632 3369 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 00:51:34.137783 kubelet[3369]: E0128 00:51:34.137685 3369 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.3-n-42917f0d29\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:34.137783 kubelet[3369]: E0128 00:51:34.137722 3369 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.3-n-42917f0d29\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:34.137931 kubelet[3369]: W0128 00:51:34.137867 3369 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 00:51:34.137931 kubelet[3369]: E0128 00:51:34.137892 3369 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.3-n-42917f0d29\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.3-n-42917f0d29" Jan 28 00:51:34.145857 kubelet[3369]: I0128 
00:51:34.145812 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-42917f0d29" podStartSLOduration=5.14580153 podStartE2EDuration="5.14580153s" podCreationTimestamp="2026-01-28 00:51:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:51:34.145619422 +0000 UTC m=+1.157349415" watchObservedRunningTime="2026-01-28 00:51:34.14580153 +0000 UTC m=+1.157531523" Jan 28 00:51:34.168289 kubelet[3369]: I0128 00:51:34.168238 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.3-n-42917f0d29" podStartSLOduration=5.16822123 podStartE2EDuration="5.16822123s" podCreationTimestamp="2026-01-28 00:51:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:51:34.157879379 +0000 UTC m=+1.169609372" watchObservedRunningTime="2026-01-28 00:51:34.16822123 +0000 UTC m=+1.179951223" Jan 28 00:51:34.522466 sudo[2360]: pam_unix(sudo:session): session closed for user root Jan 28 00:51:34.599829 sshd[2359]: Connection closed by 10.200.16.10 port 51246 Jan 28 00:51:34.600707 sshd-session[2340]: pam_unix(sshd:session): session closed for user core Jan 28 00:51:34.604196 systemd-logind[1868]: Session 9 logged out. Waiting for processes to exit. Jan 28 00:51:34.605162 systemd[1]: sshd@6-10.200.20.13:22-10.200.16.10:51246.service: Deactivated successfully. Jan 28 00:51:34.607337 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 00:51:34.607680 systemd[1]: session-9.scope: Consumed 4.253s CPU time, 259.3M memory peak. Jan 28 00:51:34.609956 systemd-logind[1868]: Removed session 9. 
Jan 28 00:51:35.855154 kubelet[3369]: I0128 00:51:35.855080 3369 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 00:51:35.855893 containerd[1885]: time="2026-01-28T00:51:35.855858320Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 28 00:51:35.856456 kubelet[3369]: I0128 00:51:35.856078 3369 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 00:51:36.562548 kubelet[3369]: I0128 00:51:36.562469 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.3-n-42917f0d29" podStartSLOduration=7.562453346 podStartE2EDuration="7.562453346s" podCreationTimestamp="2026-01-28 00:51:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:51:34.169050992 +0000 UTC m=+1.180780993" watchObservedRunningTime="2026-01-28 00:51:36.562453346 +0000 UTC m=+3.574183339" Jan 28 00:51:36.573735 systemd[1]: Created slice kubepods-burstable-pod04d6614e_f303_460e_87de_5306fea760c4.slice - libcontainer container kubepods-burstable-pod04d6614e_f303_460e_87de_5306fea760c4.slice. Jan 28 00:51:36.585880 systemd[1]: Created slice kubepods-besteffort-pode0fa5d6e_78d3_45d5_9d23_426245ac736a.slice - libcontainer container kubepods-besteffort-pode0fa5d6e_78d3_45d5_9d23_426245ac736a.slice. 
Jan 28 00:51:36.596532 kubelet[3369]: I0128 00:51:36.596471 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-cilium-cgroup\") pod \"cilium-dchc2\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") " pod="kube-system/cilium-dchc2" Jan 28 00:51:36.596669 kubelet[3369]: I0128 00:51:36.596551 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/04d6614e-f303-460e-87de-5306fea760c4-clustermesh-secrets\") pod \"cilium-dchc2\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") " pod="kube-system/cilium-dchc2" Jan 28 00:51:36.596669 kubelet[3369]: I0128 00:51:36.596570 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/04d6614e-f303-460e-87de-5306fea760c4-hubble-tls\") pod \"cilium-dchc2\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") " pod="kube-system/cilium-dchc2" Jan 28 00:51:36.596669 kubelet[3369]: I0128 00:51:36.596605 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvplb\" (UniqueName: \"kubernetes.io/projected/04d6614e-f303-460e-87de-5306fea760c4-kube-api-access-zvplb\") pod \"cilium-dchc2\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") " pod="kube-system/cilium-dchc2" Jan 28 00:51:36.596669 kubelet[3369]: I0128 00:51:36.596619 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-host-proc-sys-net\") pod \"cilium-dchc2\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") " pod="kube-system/cilium-dchc2" Jan 28 00:51:36.596669 kubelet[3369]: I0128 00:51:36.596629 3369 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04d6614e-f303-460e-87de-5306fea760c4-cilium-config-path\") pod \"cilium-dchc2\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") " pod="kube-system/cilium-dchc2" Jan 28 00:51:36.596669 kubelet[3369]: I0128 00:51:36.596640 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-bpf-maps\") pod \"cilium-dchc2\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") " pod="kube-system/cilium-dchc2" Jan 28 00:51:36.596776 kubelet[3369]: I0128 00:51:36.596648 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-cni-path\") pod \"cilium-dchc2\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") " pod="kube-system/cilium-dchc2" Jan 28 00:51:36.596776 kubelet[3369]: I0128 00:51:36.596682 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-etc-cni-netd\") pod \"cilium-dchc2\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") " pod="kube-system/cilium-dchc2" Jan 28 00:51:36.596776 kubelet[3369]: I0128 00:51:36.596692 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-lib-modules\") pod \"cilium-dchc2\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") " pod="kube-system/cilium-dchc2" Jan 28 00:51:36.596776 kubelet[3369]: I0128 00:51:36.596702 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-cilium-run\") pod \"cilium-dchc2\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") " pod="kube-system/cilium-dchc2" Jan 28 00:51:36.596776 kubelet[3369]: I0128 00:51:36.596711 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4n7j\" (UniqueName: \"kubernetes.io/projected/e0fa5d6e-78d3-45d5-9d23-426245ac736a-kube-api-access-c4n7j\") pod \"kube-proxy-t4r2x\" (UID: \"e0fa5d6e-78d3-45d5-9d23-426245ac736a\") " pod="kube-system/kube-proxy-t4r2x" Jan 28 00:51:36.596776 kubelet[3369]: I0128 00:51:36.596721 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-hostproc\") pod \"cilium-dchc2\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") " pod="kube-system/cilium-dchc2" Jan 28 00:51:36.596893 kubelet[3369]: I0128 00:51:36.596743 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0fa5d6e-78d3-45d5-9d23-426245ac736a-xtables-lock\") pod \"kube-proxy-t4r2x\" (UID: \"e0fa5d6e-78d3-45d5-9d23-426245ac736a\") " pod="kube-system/kube-proxy-t4r2x" Jan 28 00:51:36.596893 kubelet[3369]: I0128 00:51:36.596751 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0fa5d6e-78d3-45d5-9d23-426245ac736a-lib-modules\") pod \"kube-proxy-t4r2x\" (UID: \"e0fa5d6e-78d3-45d5-9d23-426245ac736a\") " pod="kube-system/kube-proxy-t4r2x" Jan 28 00:51:36.596893 kubelet[3369]: I0128 00:51:36.596761 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-xtables-lock\") pod \"cilium-dchc2\" (UID: 
\"04d6614e-f303-460e-87de-5306fea760c4\") " pod="kube-system/cilium-dchc2" Jan 28 00:51:36.596893 kubelet[3369]: I0128 00:51:36.596773 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e0fa5d6e-78d3-45d5-9d23-426245ac736a-kube-proxy\") pod \"kube-proxy-t4r2x\" (UID: \"e0fa5d6e-78d3-45d5-9d23-426245ac736a\") " pod="kube-system/kube-proxy-t4r2x" Jan 28 00:51:36.596893 kubelet[3369]: I0128 00:51:36.596784 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-host-proc-sys-kernel\") pod \"cilium-dchc2\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") " pod="kube-system/cilium-dchc2" Jan 28 00:51:36.879574 containerd[1885]: time="2026-01-28T00:51:36.879465650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dchc2,Uid:04d6614e-f303-460e-87de-5306fea760c4,Namespace:kube-system,Attempt:0,}" Jan 28 00:51:36.893258 containerd[1885]: time="2026-01-28T00:51:36.893093685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t4r2x,Uid:e0fa5d6e-78d3-45d5-9d23-426245ac736a,Namespace:kube-system,Attempt:0,}" Jan 28 00:51:36.899432 kubelet[3369]: I0128 00:51:36.899401 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56c2ff2c-1d50-4015-8dd8-d2afd87e75dd-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-64q4b\" (UID: \"56c2ff2c-1d50-4015-8dd8-d2afd87e75dd\") " pod="kube-system/cilium-operator-6c4d7847fc-64q4b" Jan 28 00:51:36.899753 kubelet[3369]: I0128 00:51:36.899438 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs78v\" (UniqueName: 
\"kubernetes.io/projected/56c2ff2c-1d50-4015-8dd8-d2afd87e75dd-kube-api-access-vs78v\") pod \"cilium-operator-6c4d7847fc-64q4b\" (UID: \"56c2ff2c-1d50-4015-8dd8-d2afd87e75dd\") " pod="kube-system/cilium-operator-6c4d7847fc-64q4b" Jan 28 00:51:36.900386 systemd[1]: Created slice kubepods-besteffort-pod56c2ff2c_1d50_4015_8dd8_d2afd87e75dd.slice - libcontainer container kubepods-besteffort-pod56c2ff2c_1d50_4015_8dd8_d2afd87e75dd.slice. Jan 28 00:51:37.155281 containerd[1885]: time="2026-01-28T00:51:37.155240914Z" level=info msg="connecting to shim 67740c840ce2ca23c2fa5d01127ca7525dfd36f1873c15f3b944c2776e35c144" address="unix:///run/containerd/s/4677c1bb79664aa0fec1392bc752f168e56fdadb9b98470bcf2c39a1cdff497f" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:51:37.155763 containerd[1885]: time="2026-01-28T00:51:37.155743517Z" level=info msg="connecting to shim c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174" address="unix:///run/containerd/s/0e587909bd997250e34c6218c64f725020cd41cdda7a15175dc77acc464cbded" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:51:37.177638 systemd[1]: Started cri-containerd-67740c840ce2ca23c2fa5d01127ca7525dfd36f1873c15f3b944c2776e35c144.scope - libcontainer container 67740c840ce2ca23c2fa5d01127ca7525dfd36f1873c15f3b944c2776e35c144. Jan 28 00:51:37.178550 systemd[1]: Started cri-containerd-c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174.scope - libcontainer container c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174. 
Jan 28 00:51:37.203528 containerd[1885]: time="2026-01-28T00:51:37.202654786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-64q4b,Uid:56c2ff2c-1d50-4015-8dd8-d2afd87e75dd,Namespace:kube-system,Attempt:0,}"
Jan 28 00:51:37.212205 containerd[1885]: time="2026-01-28T00:51:37.212153138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dchc2,Uid:04d6614e-f303-460e-87de-5306fea760c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\""
Jan 28 00:51:37.215957 containerd[1885]: time="2026-01-28T00:51:37.215928701Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 28 00:51:37.221897 containerd[1885]: time="2026-01-28T00:51:37.221858343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t4r2x,Uid:e0fa5d6e-78d3-45d5-9d23-426245ac736a,Namespace:kube-system,Attempt:0,} returns sandbox id \"67740c840ce2ca23c2fa5d01127ca7525dfd36f1873c15f3b944c2776e35c144\""
Jan 28 00:51:37.225361 containerd[1885]: time="2026-01-28T00:51:37.225328131Z" level=info msg="CreateContainer within sandbox \"67740c840ce2ca23c2fa5d01127ca7525dfd36f1873c15f3b944c2776e35c144\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 28 00:51:37.258417 containerd[1885]: time="2026-01-28T00:51:37.258290134Z" level=info msg="connecting to shim d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258" address="unix:///run/containerd/s/b4c67fcc1c62335256c0ff528edd07c4d0ef5bd3009834d22c557bfb849d5ad3" namespace=k8s.io protocol=ttrpc version=3
Jan 28 00:51:37.260299 containerd[1885]: time="2026-01-28T00:51:37.260267978Z" level=info msg="Container 07be1f5bcbadc965b87ea55c698f41a8c3c0a1a0db970504a36b63b13165e91a: CDI devices from CRI Config.CDIDevices: []"
Jan 28 00:51:37.279030 containerd[1885]: time="2026-01-28T00:51:37.278947963Z" level=info msg="CreateContainer within sandbox \"67740c840ce2ca23c2fa5d01127ca7525dfd36f1873c15f3b944c2776e35c144\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"07be1f5bcbadc965b87ea55c698f41a8c3c0a1a0db970504a36b63b13165e91a\""
Jan 28 00:51:37.280157 systemd[1]: Started cri-containerd-d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258.scope - libcontainer container d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258.
Jan 28 00:51:37.281502 containerd[1885]: time="2026-01-28T00:51:37.281473443Z" level=info msg="StartContainer for \"07be1f5bcbadc965b87ea55c698f41a8c3c0a1a0db970504a36b63b13165e91a\""
Jan 28 00:51:37.282849 containerd[1885]: time="2026-01-28T00:51:37.282792728Z" level=info msg="connecting to shim 07be1f5bcbadc965b87ea55c698f41a8c3c0a1a0db970504a36b63b13165e91a" address="unix:///run/containerd/s/4677c1bb79664aa0fec1392bc752f168e56fdadb9b98470bcf2c39a1cdff497f" protocol=ttrpc version=3
Jan 28 00:51:37.300615 systemd[1]: Started cri-containerd-07be1f5bcbadc965b87ea55c698f41a8c3c0a1a0db970504a36b63b13165e91a.scope - libcontainer container 07be1f5bcbadc965b87ea55c698f41a8c3c0a1a0db970504a36b63b13165e91a.
Jan 28 00:51:37.323163 containerd[1885]: time="2026-01-28T00:51:37.323126812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-64q4b,Uid:56c2ff2c-1d50-4015-8dd8-d2afd87e75dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258\""
Jan 28 00:51:37.366851 containerd[1885]: time="2026-01-28T00:51:37.366811074Z" level=info msg="StartContainer for \"07be1f5bcbadc965b87ea55c698f41a8c3c0a1a0db970504a36b63b13165e91a\" returns successfully"
Jan 28 00:51:38.809770 kubelet[3369]: I0128 00:51:38.809067 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t4r2x" podStartSLOduration=2.809050592 podStartE2EDuration="2.809050592s" podCreationTimestamp="2026-01-28 00:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:51:38.13711215 +0000 UTC m=+5.148842151" watchObservedRunningTime="2026-01-28 00:51:38.809050592 +0000 UTC m=+5.820780593"
Jan 28 00:51:42.901697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount746005784.mount: Deactivated successfully.
Jan 28 00:51:44.410608 containerd[1885]: time="2026-01-28T00:51:44.410487541Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:51:44.412545 containerd[1885]: time="2026-01-28T00:51:44.412508926Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jan 28 00:51:44.414730 containerd[1885]: time="2026-01-28T00:51:44.414698131Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:51:44.416049 containerd[1885]: time="2026-01-28T00:51:44.416017238Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.20005324s"
Jan 28 00:51:44.416133 containerd[1885]: time="2026-01-28T00:51:44.416053783Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jan 28 00:51:44.419346 containerd[1885]: time="2026-01-28T00:51:44.418852320Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 28 00:51:44.420289 containerd[1885]: time="2026-01-28T00:51:44.420250709Z" level=info msg="CreateContainer within sandbox \"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 28 00:51:44.570907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1308698141.mount: Deactivated successfully.
Jan 28 00:51:44.571913 containerd[1885]: time="2026-01-28T00:51:44.571792123Z" level=info msg="Container 0e5262d2267fd683fdfa7b02f21186b557a083e3d6ed3048cffd67b34d58a2e6: CDI devices from CRI Config.CDIDevices: []"
Jan 28 00:51:44.573523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2655665509.mount: Deactivated successfully.
Jan 28 00:51:45.115033 containerd[1885]: time="2026-01-28T00:51:45.114991912Z" level=info msg="CreateContainer within sandbox \"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0e5262d2267fd683fdfa7b02f21186b557a083e3d6ed3048cffd67b34d58a2e6\""
Jan 28 00:51:45.116048 containerd[1885]: time="2026-01-28T00:51:45.116018701Z" level=info msg="StartContainer for \"0e5262d2267fd683fdfa7b02f21186b557a083e3d6ed3048cffd67b34d58a2e6\""
Jan 28 00:51:45.117390 containerd[1885]: time="2026-01-28T00:51:45.117364968Z" level=info msg="connecting to shim 0e5262d2267fd683fdfa7b02f21186b557a083e3d6ed3048cffd67b34d58a2e6" address="unix:///run/containerd/s/0e587909bd997250e34c6218c64f725020cd41cdda7a15175dc77acc464cbded" protocol=ttrpc version=3
Jan 28 00:51:45.136840 systemd[1]: Started cri-containerd-0e5262d2267fd683fdfa7b02f21186b557a083e3d6ed3048cffd67b34d58a2e6.scope - libcontainer container 0e5262d2267fd683fdfa7b02f21186b557a083e3d6ed3048cffd67b34d58a2e6.
Jan 28 00:51:45.165669 containerd[1885]: time="2026-01-28T00:51:45.165611282Z" level=info msg="StartContainer for \"0e5262d2267fd683fdfa7b02f21186b557a083e3d6ed3048cffd67b34d58a2e6\" returns successfully"
Jan 28 00:51:45.170975 systemd[1]: cri-containerd-0e5262d2267fd683fdfa7b02f21186b557a083e3d6ed3048cffd67b34d58a2e6.scope: Deactivated successfully.
Jan 28 00:51:45.174155 containerd[1885]: time="2026-01-28T00:51:45.173652174Z" level=info msg="received container exit event container_id:\"0e5262d2267fd683fdfa7b02f21186b557a083e3d6ed3048cffd67b34d58a2e6\" id:\"0e5262d2267fd683fdfa7b02f21186b557a083e3d6ed3048cffd67b34d58a2e6\" pid:3782 exited_at:{seconds:1769561505 nanos:172710083}"
Jan 28 00:51:45.567758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e5262d2267fd683fdfa7b02f21186b557a083e3d6ed3048cffd67b34d58a2e6-rootfs.mount: Deactivated successfully.
Jan 28 00:51:47.150982 containerd[1885]: time="2026-01-28T00:51:47.150780888Z" level=info msg="CreateContainer within sandbox \"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 28 00:51:47.173518 containerd[1885]: time="2026-01-28T00:51:47.171110830Z" level=info msg="Container a4444b574f74bb938beae68044221303dc70bb4202d34f7d6bb8255805f95a75: CDI devices from CRI Config.CDIDevices: []"
Jan 28 00:51:47.183810 containerd[1885]: time="2026-01-28T00:51:47.183772952Z" level=info msg="CreateContainer within sandbox \"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a4444b574f74bb938beae68044221303dc70bb4202d34f7d6bb8255805f95a75\""
Jan 28 00:51:47.184372 containerd[1885]: time="2026-01-28T00:51:47.184347851Z" level=info msg="StartContainer for \"a4444b574f74bb938beae68044221303dc70bb4202d34f7d6bb8255805f95a75\""
Jan 28 00:51:47.185102 containerd[1885]: time="2026-01-28T00:51:47.185015713Z" level=info msg="connecting to shim a4444b574f74bb938beae68044221303dc70bb4202d34f7d6bb8255805f95a75" address="unix:///run/containerd/s/0e587909bd997250e34c6218c64f725020cd41cdda7a15175dc77acc464cbded" protocol=ttrpc version=3
Jan 28 00:51:47.210075 systemd[1]: Started cri-containerd-a4444b574f74bb938beae68044221303dc70bb4202d34f7d6bb8255805f95a75.scope - libcontainer container a4444b574f74bb938beae68044221303dc70bb4202d34f7d6bb8255805f95a75.
Jan 28 00:51:47.218305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3531605783.mount: Deactivated successfully.
Jan 28 00:51:47.248689 containerd[1885]: time="2026-01-28T00:51:47.248651496Z" level=info msg="StartContainer for \"a4444b574f74bb938beae68044221303dc70bb4202d34f7d6bb8255805f95a75\" returns successfully"
Jan 28 00:51:47.256843 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 28 00:51:47.257599 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 28 00:51:47.257977 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 28 00:51:47.260239 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 28 00:51:47.261874 systemd[1]: cri-containerd-a4444b574f74bb938beae68044221303dc70bb4202d34f7d6bb8255805f95a75.scope: Deactivated successfully.
Jan 28 00:51:47.263332 containerd[1885]: time="2026-01-28T00:51:47.263244169Z" level=info msg="received container exit event container_id:\"a4444b574f74bb938beae68044221303dc70bb4202d34f7d6bb8255805f95a75\" id:\"a4444b574f74bb938beae68044221303dc70bb4202d34f7d6bb8255805f95a75\" pid:3832 exited_at:{seconds:1769561507 nanos:262944643}"
Jan 28 00:51:47.280523 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 28 00:51:47.703533 containerd[1885]: time="2026-01-28T00:51:47.703277330Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:51:47.705850 containerd[1885]: time="2026-01-28T00:51:47.705819286Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jan 28 00:51:47.707669 containerd[1885]: time="2026-01-28T00:51:47.707640515Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:51:47.708659 containerd[1885]: time="2026-01-28T00:51:47.708627527Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.289747758s"
Jan 28 00:51:47.708694 containerd[1885]: time="2026-01-28T00:51:47.708663071Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jan 28 00:51:47.711889 containerd[1885]: time="2026-01-28T00:51:47.711853752Z" level=info msg="CreateContainer within sandbox \"d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 28 00:51:47.725291 containerd[1885]: time="2026-01-28T00:51:47.724877345Z" level=info msg="Container cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca: CDI devices from CRI Config.CDIDevices: []"
Jan 28 00:51:47.736828 containerd[1885]: time="2026-01-28T00:51:47.736799300Z" level=info msg="CreateContainer within sandbox \"d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca\""
Jan 28 00:51:47.737379 containerd[1885]: time="2026-01-28T00:51:47.737354583Z" level=info msg="StartContainer for \"cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca\""
Jan 28 00:51:47.740118 containerd[1885]: time="2026-01-28T00:51:47.740086527Z" level=info msg="connecting to shim cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca" address="unix:///run/containerd/s/b4c67fcc1c62335256c0ff528edd07c4d0ef5bd3009834d22c557bfb849d5ad3" protocol=ttrpc version=3
Jan 28 00:51:47.758628 systemd[1]: Started cri-containerd-cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca.scope - libcontainer container cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca.
Jan 28 00:51:47.788699 containerd[1885]: time="2026-01-28T00:51:47.788667787Z" level=info msg="StartContainer for \"cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca\" returns successfully"
Jan 28 00:51:48.157187 containerd[1885]: time="2026-01-28T00:51:48.157147941Z" level=info msg="CreateContainer within sandbox \"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 28 00:51:48.171877 kubelet[3369]: I0128 00:51:48.171800 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-64q4b" podStartSLOduration=1.787591801 podStartE2EDuration="12.171786966s" podCreationTimestamp="2026-01-28 00:51:36 +0000 UTC" firstStartedPulling="2026-01-28 00:51:37.325215282 +0000 UTC m=+4.336945283" lastFinishedPulling="2026-01-28 00:51:47.709410455 +0000 UTC m=+14.721140448" observedRunningTime="2026-01-28 00:51:48.169762261 +0000 UTC m=+15.181492262" watchObservedRunningTime="2026-01-28 00:51:48.171786966 +0000 UTC m=+15.183516959"
Jan 28 00:51:48.175092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4444b574f74bb938beae68044221303dc70bb4202d34f7d6bb8255805f95a75-rootfs.mount: Deactivated successfully.
Jan 28 00:51:48.183866 containerd[1885]: time="2026-01-28T00:51:48.183835660Z" level=info msg="Container 9c2950c67a4a6b1b609777099b685a03d6089f99d89c3fdf0b1deb01c573cc5c: CDI devices from CRI Config.CDIDevices: []"
Jan 28 00:51:48.200322 containerd[1885]: time="2026-01-28T00:51:48.200264530Z" level=info msg="CreateContainer within sandbox \"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9c2950c67a4a6b1b609777099b685a03d6089f99d89c3fdf0b1deb01c573cc5c\""
Jan 28 00:51:48.201187 containerd[1885]: time="2026-01-28T00:51:48.201158564Z" level=info msg="StartContainer for \"9c2950c67a4a6b1b609777099b685a03d6089f99d89c3fdf0b1deb01c573cc5c\""
Jan 28 00:51:48.203127 containerd[1885]: time="2026-01-28T00:51:48.203062107Z" level=info msg="connecting to shim 9c2950c67a4a6b1b609777099b685a03d6089f99d89c3fdf0b1deb01c573cc5c" address="unix:///run/containerd/s/0e587909bd997250e34c6218c64f725020cd41cdda7a15175dc77acc464cbded" protocol=ttrpc version=3
Jan 28 00:51:48.229640 systemd[1]: Started cri-containerd-9c2950c67a4a6b1b609777099b685a03d6089f99d89c3fdf0b1deb01c573cc5c.scope - libcontainer container 9c2950c67a4a6b1b609777099b685a03d6089f99d89c3fdf0b1deb01c573cc5c.
Jan 28 00:51:48.308143 containerd[1885]: time="2026-01-28T00:51:48.308105452Z" level=info msg="StartContainer for \"9c2950c67a4a6b1b609777099b685a03d6089f99d89c3fdf0b1deb01c573cc5c\" returns successfully"
Jan 28 00:51:48.309144 systemd[1]: cri-containerd-9c2950c67a4a6b1b609777099b685a03d6089f99d89c3fdf0b1deb01c573cc5c.scope: Deactivated successfully.
Jan 28 00:51:48.311715 containerd[1885]: time="2026-01-28T00:51:48.311621124Z" level=info msg="received container exit event container_id:\"9c2950c67a4a6b1b609777099b685a03d6089f99d89c3fdf0b1deb01c573cc5c\" id:\"9c2950c67a4a6b1b609777099b685a03d6089f99d89c3fdf0b1deb01c573cc5c\" pid:3918 exited_at:{seconds:1769561508 nanos:310747810}"
Jan 28 00:51:48.343719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c2950c67a4a6b1b609777099b685a03d6089f99d89c3fdf0b1deb01c573cc5c-rootfs.mount: Deactivated successfully.
Jan 28 00:51:49.167696 containerd[1885]: time="2026-01-28T00:51:49.167651172Z" level=info msg="CreateContainer within sandbox \"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 28 00:51:49.192009 containerd[1885]: time="2026-01-28T00:51:49.190147694Z" level=info msg="Container 63cb7e64c800794986db54d7d0929afa63669350f85bdf780eea662718f8f5d9: CDI devices from CRI Config.CDIDevices: []"
Jan 28 00:51:49.203670 containerd[1885]: time="2026-01-28T00:51:49.203633512Z" level=info msg="CreateContainer within sandbox \"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"63cb7e64c800794986db54d7d0929afa63669350f85bdf780eea662718f8f5d9\""
Jan 28 00:51:49.204658 containerd[1885]: time="2026-01-28T00:51:49.204632485Z" level=info msg="StartContainer for \"63cb7e64c800794986db54d7d0929afa63669350f85bdf780eea662718f8f5d9\""
Jan 28 00:51:49.205788 containerd[1885]: time="2026-01-28T00:51:49.205590016Z" level=info msg="connecting to shim 63cb7e64c800794986db54d7d0929afa63669350f85bdf780eea662718f8f5d9" address="unix:///run/containerd/s/0e587909bd997250e34c6218c64f725020cd41cdda7a15175dc77acc464cbded" protocol=ttrpc version=3
Jan 28 00:51:49.222628 systemd[1]: Started cri-containerd-63cb7e64c800794986db54d7d0929afa63669350f85bdf780eea662718f8f5d9.scope - libcontainer container 63cb7e64c800794986db54d7d0929afa63669350f85bdf780eea662718f8f5d9.
Jan 28 00:51:49.249419 systemd[1]: cri-containerd-63cb7e64c800794986db54d7d0929afa63669350f85bdf780eea662718f8f5d9.scope: Deactivated successfully.
Jan 28 00:51:49.253130 containerd[1885]: time="2026-01-28T00:51:49.253060230Z" level=info msg="received container exit event container_id:\"63cb7e64c800794986db54d7d0929afa63669350f85bdf780eea662718f8f5d9\" id:\"63cb7e64c800794986db54d7d0929afa63669350f85bdf780eea662718f8f5d9\" pid:3961 exited_at:{seconds:1769561509 nanos:249371035}"
Jan 28 00:51:49.254507 containerd[1885]: time="2026-01-28T00:51:49.254465611Z" level=info msg="StartContainer for \"63cb7e64c800794986db54d7d0929afa63669350f85bdf780eea662718f8f5d9\" returns successfully"
Jan 28 00:51:49.272093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63cb7e64c800794986db54d7d0929afa63669350f85bdf780eea662718f8f5d9-rootfs.mount: Deactivated successfully.
Jan 28 00:51:50.169647 containerd[1885]: time="2026-01-28T00:51:50.169602430Z" level=info msg="CreateContainer within sandbox \"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 28 00:51:50.191444 containerd[1885]: time="2026-01-28T00:51:50.191206965Z" level=info msg="Container 278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c: CDI devices from CRI Config.CDIDevices: []"
Jan 28 00:51:50.224011 containerd[1885]: time="2026-01-28T00:51:50.223966320Z" level=info msg="CreateContainer within sandbox \"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c\""
Jan 28 00:51:50.224920 containerd[1885]: time="2026-01-28T00:51:50.224862514Z" level=info msg="StartContainer for \"278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c\""
Jan 28 00:51:50.225941 containerd[1885]: time="2026-01-28T00:51:50.225910311Z" level=info msg="connecting to shim 278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c" address="unix:///run/containerd/s/0e587909bd997250e34c6218c64f725020cd41cdda7a15175dc77acc464cbded" protocol=ttrpc version=3
Jan 28 00:51:50.244630 systemd[1]: Started cri-containerd-278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c.scope - libcontainer container 278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c.
Jan 28 00:51:50.280879 containerd[1885]: time="2026-01-28T00:51:50.280842141Z" level=info msg="StartContainer for \"278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c\" returns successfully"
Jan 28 00:51:50.403879 kubelet[3369]: I0128 00:51:50.403852 3369 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 28 00:51:50.450138 systemd[1]: Created slice kubepods-burstable-pod59170f2d_784c_4d84_87c8_913c0fa648d3.slice - libcontainer container kubepods-burstable-pod59170f2d_784c_4d84_87c8_913c0fa648d3.slice.
Jan 28 00:51:50.458136 systemd[1]: Created slice kubepods-burstable-pode4fd47f8_2013_44fe_9139_476805fbf9b9.slice - libcontainer container kubepods-burstable-pode4fd47f8_2013_44fe_9139_476805fbf9b9.slice.
Jan 28 00:51:50.482038 kubelet[3369]: I0128 00:51:50.481912 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j68s\" (UniqueName: \"kubernetes.io/projected/e4fd47f8-2013-44fe-9139-476805fbf9b9-kube-api-access-4j68s\") pod \"coredns-668d6bf9bc-nsx7f\" (UID: \"e4fd47f8-2013-44fe-9139-476805fbf9b9\") " pod="kube-system/coredns-668d6bf9bc-nsx7f"
Jan 28 00:51:50.482161 kubelet[3369]: I0128 00:51:50.482060 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4fd47f8-2013-44fe-9139-476805fbf9b9-config-volume\") pod \"coredns-668d6bf9bc-nsx7f\" (UID: \"e4fd47f8-2013-44fe-9139-476805fbf9b9\") " pod="kube-system/coredns-668d6bf9bc-nsx7f"
Jan 28 00:51:50.482161 kubelet[3369]: I0128 00:51:50.482078 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59170f2d-784c-4d84-87c8-913c0fa648d3-config-volume\") pod \"coredns-668d6bf9bc-mh7hw\" (UID: \"59170f2d-784c-4d84-87c8-913c0fa648d3\") " pod="kube-system/coredns-668d6bf9bc-mh7hw"
Jan 28 00:51:50.482161 kubelet[3369]: I0128 00:51:50.482141 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9csv\" (UniqueName: \"kubernetes.io/projected/59170f2d-784c-4d84-87c8-913c0fa648d3-kube-api-access-w9csv\") pod \"coredns-668d6bf9bc-mh7hw\" (UID: \"59170f2d-784c-4d84-87c8-913c0fa648d3\") " pod="kube-system/coredns-668d6bf9bc-mh7hw"
Jan 28 00:51:50.753988 containerd[1885]: time="2026-01-28T00:51:50.753878069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mh7hw,Uid:59170f2d-784c-4d84-87c8-913c0fa648d3,Namespace:kube-system,Attempt:0,}"
Jan 28 00:51:50.763052 containerd[1885]: time="2026-01-28T00:51:50.762894709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nsx7f,Uid:e4fd47f8-2013-44fe-9139-476805fbf9b9,Namespace:kube-system,Attempt:0,}"
Jan 28 00:51:51.196448 kubelet[3369]: I0128 00:51:51.195962 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dchc2" podStartSLOduration=7.993612128 podStartE2EDuration="15.195727012s" podCreationTimestamp="2026-01-28 00:51:36 +0000 UTC" firstStartedPulling="2026-01-28 00:51:37.214767652 +0000 UTC m=+4.226497653" lastFinishedPulling="2026-01-28 00:51:44.416882544 +0000 UTC m=+11.428612537" observedRunningTime="2026-01-28 00:51:51.195045574 +0000 UTC m=+18.206775567" watchObservedRunningTime="2026-01-28 00:51:51.195727012 +0000 UTC m=+18.207457005"
Jan 28 00:51:52.320049 systemd-networkd[1477]: cilium_host: Link UP
Jan 28 00:51:52.320947 systemd-networkd[1477]: cilium_net: Link UP
Jan 28 00:51:52.322191 systemd-networkd[1477]: cilium_net: Gained carrier
Jan 28 00:51:52.322375 systemd-networkd[1477]: cilium_host: Gained carrier
Jan 28 00:51:52.488891 systemd-networkd[1477]: cilium_vxlan: Link UP
Jan 28 00:51:52.488896 systemd-networkd[1477]: cilium_vxlan: Gained carrier
Jan 28 00:51:52.698624 systemd-networkd[1477]: cilium_host: Gained IPv6LL
Jan 28 00:51:52.712550 kernel: NET: Registered PF_ALG protocol family
Jan 28 00:51:53.074738 systemd-networkd[1477]: cilium_net: Gained IPv6LL
Jan 28 00:51:53.249410 systemd-networkd[1477]: lxc_health: Link UP
Jan 28 00:51:53.254636 systemd-networkd[1477]: lxc_health: Gained carrier
Jan 28 00:51:53.788519 kernel: eth0: renamed from tmp704ff
Jan 28 00:51:53.789359 systemd-networkd[1477]: lxc74fe2578609a: Link UP
Jan 28 00:51:53.790121 systemd-networkd[1477]: lxc74fe2578609a: Gained carrier
Jan 28 00:51:53.812605 kernel: eth0: renamed from tmp099c2
Jan 28 00:51:53.814774 systemd-networkd[1477]: lxc5e4f55f85a63: Link UP
Jan 28 00:51:53.815977 systemd-networkd[1477]: lxc5e4f55f85a63: Gained carrier
Jan 28 00:51:53.970659 systemd-networkd[1477]: cilium_vxlan: Gained IPv6LL
Jan 28 00:51:54.930660 systemd-networkd[1477]: lxc_health: Gained IPv6LL
Jan 28 00:51:55.123631 systemd-networkd[1477]: lxc74fe2578609a: Gained IPv6LL
Jan 28 00:51:55.508650 systemd-networkd[1477]: lxc5e4f55f85a63: Gained IPv6LL
Jan 28 00:51:56.375993 containerd[1885]: time="2026-01-28T00:51:56.375934476Z" level=info msg="connecting to shim 099c222e175eea0ff6a659b513b150af618649901c0fed28c5d9ec5beba8de6b" address="unix:///run/containerd/s/ff37a6c82b6ae027710666f21a9cd10013eadc36cd95a804fb7a9016b6a55fb2" namespace=k8s.io protocol=ttrpc version=3
Jan 28 00:51:56.394813 containerd[1885]: time="2026-01-28T00:51:56.394757200Z" level=info msg="connecting to shim 704ff47dc7d663ff50fc394ab8dffb3accb4b30d0783e3a732baffd16647bc00" address="unix:///run/containerd/s/730f0618398d5e9b7900692f681e9af6d48ee1d5dfa3b3831c039bf1553ce433" namespace=k8s.io protocol=ttrpc version=3
Jan 28 00:51:56.412696 systemd[1]: Started cri-containerd-099c222e175eea0ff6a659b513b150af618649901c0fed28c5d9ec5beba8de6b.scope - libcontainer container 099c222e175eea0ff6a659b513b150af618649901c0fed28c5d9ec5beba8de6b.
Jan 28 00:51:56.415710 systemd[1]: Started cri-containerd-704ff47dc7d663ff50fc394ab8dffb3accb4b30d0783e3a732baffd16647bc00.scope - libcontainer container 704ff47dc7d663ff50fc394ab8dffb3accb4b30d0783e3a732baffd16647bc00.
Jan 28 00:51:56.447293 containerd[1885]: time="2026-01-28T00:51:56.447204027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nsx7f,Uid:e4fd47f8-2013-44fe-9139-476805fbf9b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"099c222e175eea0ff6a659b513b150af618649901c0fed28c5d9ec5beba8de6b\""
Jan 28 00:51:56.451855 containerd[1885]: time="2026-01-28T00:51:56.451819048Z" level=info msg="CreateContainer within sandbox \"099c222e175eea0ff6a659b513b150af618649901c0fed28c5d9ec5beba8de6b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 28 00:51:56.460727 containerd[1885]: time="2026-01-28T00:51:56.460696163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mh7hw,Uid:59170f2d-784c-4d84-87c8-913c0fa648d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"704ff47dc7d663ff50fc394ab8dffb3accb4b30d0783e3a732baffd16647bc00\""
Jan 28 00:51:56.464475 containerd[1885]: time="2026-01-28T00:51:56.464408430Z" level=info msg="CreateContainer within sandbox \"704ff47dc7d663ff50fc394ab8dffb3accb4b30d0783e3a732baffd16647bc00\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 28 00:51:56.480467 containerd[1885]: time="2026-01-28T00:51:56.480434297Z" level=info msg="Container 8a6f6fb908a5e91bb596ffae08b0b3e5b13e263f5c165240e0a4c77b692b18bc: CDI devices from CRI Config.CDIDevices: []"
Jan 28 00:51:56.482980 containerd[1885]: time="2026-01-28T00:51:56.482937244Z" level=info msg="Container 75cdd96d624bc543d38cac47cae9fc1e31d4084833a41879e8dd560588f690aa: CDI devices from CRI Config.CDIDevices: []"
Jan 28 00:51:56.497894 containerd[1885]: time="2026-01-28T00:51:56.497857913Z" level=info msg="CreateContainer within sandbox \"099c222e175eea0ff6a659b513b150af618649901c0fed28c5d9ec5beba8de6b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8a6f6fb908a5e91bb596ffae08b0b3e5b13e263f5c165240e0a4c77b692b18bc\""
Jan 28 00:51:56.499145 containerd[1885]: time="2026-01-28T00:51:56.499117306Z" level=info msg="StartContainer for \"8a6f6fb908a5e91bb596ffae08b0b3e5b13e263f5c165240e0a4c77b692b18bc\""
Jan 28 00:51:56.500155 containerd[1885]: time="2026-01-28T00:51:56.500123871Z" level=info msg="connecting to shim 8a6f6fb908a5e91bb596ffae08b0b3e5b13e263f5c165240e0a4c77b692b18bc" address="unix:///run/containerd/s/ff37a6c82b6ae027710666f21a9cd10013eadc36cd95a804fb7a9016b6a55fb2" protocol=ttrpc version=3
Jan 28 00:51:56.504225 containerd[1885]: time="2026-01-28T00:51:56.504189681Z" level=info msg="CreateContainer within sandbox \"704ff47dc7d663ff50fc394ab8dffb3accb4b30d0783e3a732baffd16647bc00\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"75cdd96d624bc543d38cac47cae9fc1e31d4084833a41879e8dd560588f690aa\""
Jan 28 00:51:56.505603 containerd[1885]: time="2026-01-28T00:51:56.504879854Z" level=info msg="StartContainer for \"75cdd96d624bc543d38cac47cae9fc1e31d4084833a41879e8dd560588f690aa\""
Jan 28 00:51:56.505603 containerd[1885]: time="2026-01-28T00:51:56.505480147Z" level=info msg="connecting to shim 75cdd96d624bc543d38cac47cae9fc1e31d4084833a41879e8dd560588f690aa" address="unix:///run/containerd/s/730f0618398d5e9b7900692f681e9af6d48ee1d5dfa3b3831c039bf1553ce433" protocol=ttrpc version=3
Jan 28 00:51:56.523640 systemd[1]: Started cri-containerd-8a6f6fb908a5e91bb596ffae08b0b3e5b13e263f5c165240e0a4c77b692b18bc.scope - libcontainer container 8a6f6fb908a5e91bb596ffae08b0b3e5b13e263f5c165240e0a4c77b692b18bc.
Jan 28 00:51:56.527255 systemd[1]: Started cri-containerd-75cdd96d624bc543d38cac47cae9fc1e31d4084833a41879e8dd560588f690aa.scope - libcontainer container 75cdd96d624bc543d38cac47cae9fc1e31d4084833a41879e8dd560588f690aa.
Jan 28 00:51:56.562510 containerd[1885]: time="2026-01-28T00:51:56.562371223Z" level=info msg="StartContainer for \"8a6f6fb908a5e91bb596ffae08b0b3e5b13e263f5c165240e0a4c77b692b18bc\" returns successfully"
Jan 28 00:51:56.563570 containerd[1885]: time="2026-01-28T00:51:56.563411356Z" level=info msg="StartContainer for \"75cdd96d624bc543d38cac47cae9fc1e31d4084833a41879e8dd560588f690aa\" returns successfully"
Jan 28 00:51:57.208527 kubelet[3369]: I0128 00:51:57.208168 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mh7hw" podStartSLOduration=21.208152367 podStartE2EDuration="21.208152367s" podCreationTimestamp="2026-01-28 00:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:51:57.206928839 +0000 UTC m=+24.218658832" watchObservedRunningTime="2026-01-28 00:51:57.208152367 +0000 UTC m=+24.219882400"
Jan 28 00:51:57.238713 kubelet[3369]: I0128 00:51:57.238536 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nsx7f" podStartSLOduration=21.23851814 podStartE2EDuration="21.23851814s" podCreationTimestamp="2026-01-28 00:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:51:57.238272263 +0000 UTC m=+24.250002256" watchObservedRunningTime="2026-01-28 00:51:57.23851814 +0000 UTC m=+24.250248133"
Jan 28 00:52:00.597943 kubelet[3369]: I0128 00:52:00.597898 3369 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 28 00:52:33.691694 systemd[1]: Started sshd@7-10.200.20.13:22-10.200.16.10:59982.service - OpenSSH per-connection server daemon (10.200.16.10:59982).
Jan 28 00:52:34.181780 sshd[4678]: Accepted publickey for core from 10.200.16.10 port 59982 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:52:34.182976 sshd-session[4678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:52:34.186546 systemd-logind[1868]: New session 10 of user core.
Jan 28 00:52:34.194625 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 28 00:52:34.582761 sshd[4681]: Connection closed by 10.200.16.10 port 59982
Jan 28 00:52:34.583261 sshd-session[4678]: pam_unix(sshd:session): session closed for user core
Jan 28 00:52:34.587448 systemd-logind[1868]: Session 10 logged out. Waiting for processes to exit.
Jan 28 00:52:34.587696 systemd[1]: sshd@7-10.200.20.13:22-10.200.16.10:59982.service: Deactivated successfully.
Jan 28 00:52:34.590049 systemd[1]: session-10.scope: Deactivated successfully.
Jan 28 00:52:34.591744 systemd-logind[1868]: Removed session 10.
Jan 28 00:52:39.678452 systemd[1]: Started sshd@8-10.200.20.13:22-10.200.16.10:52960.service - OpenSSH per-connection server daemon (10.200.16.10:52960).
Jan 28 00:52:40.170943 sshd[4698]: Accepted publickey for core from 10.200.16.10 port 52960 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:52:40.171825 sshd-session[4698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:52:40.175399 systemd-logind[1868]: New session 11 of user core.
Jan 28 00:52:40.181611 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 28 00:52:40.559671 sshd[4701]: Connection closed by 10.200.16.10 port 52960
Jan 28 00:52:40.560326 sshd-session[4698]: pam_unix(sshd:session): session closed for user core
Jan 28 00:52:40.564160 systemd[1]: sshd@8-10.200.20.13:22-10.200.16.10:52960.service: Deactivated successfully.
Jan 28 00:52:40.565712 systemd[1]: session-11.scope: Deactivated successfully.
Jan 28 00:52:40.566334 systemd-logind[1868]: Session 11 logged out. Waiting for processes to exit.
Jan 28 00:52:40.567982 systemd-logind[1868]: Removed session 11.
Jan 28 00:52:45.641700 systemd[1]: Started sshd@9-10.200.20.13:22-10.200.16.10:52962.service - OpenSSH per-connection server daemon (10.200.16.10:52962).
Jan 28 00:52:46.098351 sshd[4713]: Accepted publickey for core from 10.200.16.10 port 52962 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:52:46.099195 sshd-session[4713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:52:46.103391 systemd-logind[1868]: New session 12 of user core.
Jan 28 00:52:46.110857 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 28 00:52:46.472488 sshd[4716]: Connection closed by 10.200.16.10 port 52962
Jan 28 00:52:46.472387 sshd-session[4713]: pam_unix(sshd:session): session closed for user core
Jan 28 00:52:46.476960 systemd-logind[1868]: Session 12 logged out. Waiting for processes to exit.
Jan 28 00:52:46.477024 systemd[1]: sshd@9-10.200.20.13:22-10.200.16.10:52962.service: Deactivated successfully.
Jan 28 00:52:46.478618 systemd[1]: session-12.scope: Deactivated successfully.
Jan 28 00:52:46.480113 systemd-logind[1868]: Removed session 12.
Jan 28 00:52:51.565558 systemd[1]: Started sshd@10-10.200.20.13:22-10.200.16.10:47332.service - OpenSSH per-connection server daemon (10.200.16.10:47332).
Jan 28 00:52:52.059984 sshd[4729]: Accepted publickey for core from 10.200.16.10 port 47332 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:52:52.061182 sshd-session[4729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:52:52.065011 systemd-logind[1868]: New session 13 of user core.
Jan 28 00:52:52.071630 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 28 00:52:52.451140 sshd[4732]: Connection closed by 10.200.16.10 port 47332
Jan 28 00:52:52.451754 sshd-session[4729]: pam_unix(sshd:session): session closed for user core
Jan 28 00:52:52.455764 systemd-logind[1868]: Session 13 logged out. Waiting for processes to exit.
Jan 28 00:52:52.456122 systemd[1]: sshd@10-10.200.20.13:22-10.200.16.10:47332.service: Deactivated successfully.
Jan 28 00:52:52.459246 systemd[1]: session-13.scope: Deactivated successfully.
Jan 28 00:52:52.461065 systemd-logind[1868]: Removed session 13.
Jan 28 00:52:52.539195 systemd[1]: Started sshd@11-10.200.20.13:22-10.200.16.10:47338.service - OpenSSH per-connection server daemon (10.200.16.10:47338).
Jan 28 00:52:53.030644 sshd[4745]: Accepted publickey for core from 10.200.16.10 port 47338 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:52:53.031084 sshd-session[4745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:52:53.034864 systemd-logind[1868]: New session 14 of user core.
Jan 28 00:52:53.043619 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 28 00:52:53.453884 sshd[4748]: Connection closed by 10.200.16.10 port 47338
Jan 28 00:52:53.454712 sshd-session[4745]: pam_unix(sshd:session): session closed for user core
Jan 28 00:52:53.458335 systemd-logind[1868]: Session 14 logged out. Waiting for processes to exit.
Jan 28 00:52:53.459109 systemd[1]: sshd@11-10.200.20.13:22-10.200.16.10:47338.service: Deactivated successfully.
Jan 28 00:52:53.460895 systemd[1]: session-14.scope: Deactivated successfully.
Jan 28 00:52:53.462331 systemd-logind[1868]: Removed session 14.
Jan 28 00:52:53.541270 systemd[1]: Started sshd@12-10.200.20.13:22-10.200.16.10:47340.service - OpenSSH per-connection server daemon (10.200.16.10:47340).
Jan 28 00:52:54.030378 sshd[4757]: Accepted publickey for core from 10.200.16.10 port 47340 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:52:54.031132 sshd-session[4757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:52:54.034510 systemd-logind[1868]: New session 15 of user core.
Jan 28 00:52:54.039629 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 28 00:52:54.419536 sshd[4760]: Connection closed by 10.200.16.10 port 47340
Jan 28 00:52:54.420127 sshd-session[4757]: pam_unix(sshd:session): session closed for user core
Jan 28 00:52:54.424127 systemd[1]: sshd@12-10.200.20.13:22-10.200.16.10:47340.service: Deactivated successfully.
Jan 28 00:52:54.426030 systemd[1]: session-15.scope: Deactivated successfully.
Jan 28 00:52:54.427306 systemd-logind[1868]: Session 15 logged out. Waiting for processes to exit.
Jan 28 00:52:54.428571 systemd-logind[1868]: Removed session 15.
Jan 28 00:52:59.508278 systemd[1]: Started sshd@13-10.200.20.13:22-10.200.16.10:59594.service - OpenSSH per-connection server daemon (10.200.16.10:59594).
Jan 28 00:53:00.001528 sshd[4772]: Accepted publickey for core from 10.200.16.10 port 59594 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:53:00.002352 sshd-session[4772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:53:00.006122 systemd-logind[1868]: New session 16 of user core.
Jan 28 00:53:00.011627 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 28 00:53:00.391830 sshd[4775]: Connection closed by 10.200.16.10 port 59594
Jan 28 00:53:00.392368 sshd-session[4772]: pam_unix(sshd:session): session closed for user core
Jan 28 00:53:00.395554 systemd[1]: sshd@13-10.200.20.13:22-10.200.16.10:59594.service: Deactivated successfully.
Jan 28 00:53:00.397188 systemd[1]: session-16.scope: Deactivated successfully.
Jan 28 00:53:00.397883 systemd-logind[1868]: Session 16 logged out. Waiting for processes to exit.
Jan 28 00:53:00.399060 systemd-logind[1868]: Removed session 16.
Jan 28 00:53:00.471460 systemd[1]: Started sshd@14-10.200.20.13:22-10.200.16.10:59598.service - OpenSSH per-connection server daemon (10.200.16.10:59598).
Jan 28 00:53:00.929460 sshd[4787]: Accepted publickey for core from 10.200.16.10 port 59598 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:53:00.930365 sshd-session[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:53:00.933981 systemd-logind[1868]: New session 17 of user core.
Jan 28 00:53:00.939628 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 28 00:53:01.317703 sshd[4790]: Connection closed by 10.200.16.10 port 59598
Jan 28 00:53:01.317541 sshd-session[4787]: pam_unix(sshd:session): session closed for user core
Jan 28 00:53:01.321086 systemd[1]: sshd@14-10.200.20.13:22-10.200.16.10:59598.service: Deactivated successfully.
Jan 28 00:53:01.323814 systemd[1]: session-17.scope: Deactivated successfully.
Jan 28 00:53:01.325169 systemd-logind[1868]: Session 17 logged out. Waiting for processes to exit.
Jan 28 00:53:01.327827 systemd-logind[1868]: Removed session 17.
Jan 28 00:53:01.405043 systemd[1]: Started sshd@15-10.200.20.13:22-10.200.16.10:59612.service - OpenSSH per-connection server daemon (10.200.16.10:59612).
Jan 28 00:53:01.858537 sshd[4799]: Accepted publickey for core from 10.200.16.10 port 59612 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:53:01.859352 sshd-session[4799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:53:01.862980 systemd-logind[1868]: New session 18 of user core.
Jan 28 00:53:01.871639 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 28 00:53:02.512616 sshd[4802]: Connection closed by 10.200.16.10 port 59612
Jan 28 00:53:02.513685 sshd-session[4799]: pam_unix(sshd:session): session closed for user core
Jan 28 00:53:02.517166 systemd[1]: sshd@15-10.200.20.13:22-10.200.16.10:59612.service: Deactivated successfully.
Jan 28 00:53:02.519443 systemd[1]: session-18.scope: Deactivated successfully.
Jan 28 00:53:02.522734 systemd-logind[1868]: Session 18 logged out. Waiting for processes to exit.
Jan 28 00:53:02.523919 systemd-logind[1868]: Removed session 18.
Jan 28 00:53:02.597821 systemd[1]: Started sshd@16-10.200.20.13:22-10.200.16.10:59620.service - OpenSSH per-connection server daemon (10.200.16.10:59620).
Jan 28 00:53:03.094592 sshd[4820]: Accepted publickey for core from 10.200.16.10 port 59620 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:53:03.095713 sshd-session[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:53:03.099679 systemd-logind[1868]: New session 19 of user core.
Jan 28 00:53:03.106769 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 28 00:53:03.573644 sshd[4823]: Connection closed by 10.200.16.10 port 59620
Jan 28 00:53:03.574002 sshd-session[4820]: pam_unix(sshd:session): session closed for user core
Jan 28 00:53:03.578067 systemd-logind[1868]: Session 19 logged out. Waiting for processes to exit.
Jan 28 00:53:03.578633 systemd[1]: sshd@16-10.200.20.13:22-10.200.16.10:59620.service: Deactivated successfully.
Jan 28 00:53:03.581080 systemd[1]: session-19.scope: Deactivated successfully.
Jan 28 00:53:03.583548 systemd-logind[1868]: Removed session 19.
Jan 28 00:53:03.659790 systemd[1]: Started sshd@17-10.200.20.13:22-10.200.16.10:59624.service - OpenSSH per-connection server daemon (10.200.16.10:59624).
Jan 28 00:53:04.119737 sshd[4832]: Accepted publickey for core from 10.200.16.10 port 59624 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:53:04.122012 sshd-session[4832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:53:04.125923 systemd-logind[1868]: New session 20 of user core.
Jan 28 00:53:04.136646 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 28 00:53:04.484017 sshd[4835]: Connection closed by 10.200.16.10 port 59624
Jan 28 00:53:04.483919 sshd-session[4832]: pam_unix(sshd:session): session closed for user core
Jan 28 00:53:04.487769 systemd[1]: sshd@17-10.200.20.13:22-10.200.16.10:59624.service: Deactivated successfully.
Jan 28 00:53:04.490132 systemd[1]: session-20.scope: Deactivated successfully.
Jan 28 00:53:04.492982 systemd-logind[1868]: Session 20 logged out. Waiting for processes to exit.
Jan 28 00:53:04.494100 systemd-logind[1868]: Removed session 20.
Jan 28 00:53:09.564221 systemd[1]: Started sshd@18-10.200.20.13:22-10.200.16.10:59286.service - OpenSSH per-connection server daemon (10.200.16.10:59286).
Jan 28 00:53:10.021336 sshd[4853]: Accepted publickey for core from 10.200.16.10 port 59286 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:53:10.022195 sshd-session[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:53:10.025720 systemd-logind[1868]: New session 21 of user core.
Jan 28 00:53:10.034649 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 28 00:53:10.384343 sshd[4856]: Connection closed by 10.200.16.10 port 59286
Jan 28 00:53:10.385184 sshd-session[4853]: pam_unix(sshd:session): session closed for user core
Jan 28 00:53:10.388920 systemd-logind[1868]: Session 21 logged out. Waiting for processes to exit.
Jan 28 00:53:10.389180 systemd[1]: sshd@18-10.200.20.13:22-10.200.16.10:59286.service: Deactivated successfully.
Jan 28 00:53:10.391781 systemd[1]: session-21.scope: Deactivated successfully.
Jan 28 00:53:10.393408 systemd-logind[1868]: Removed session 21.
Jan 28 00:53:15.483089 systemd[1]: Started sshd@19-10.200.20.13:22-10.200.16.10:59288.service - OpenSSH per-connection server daemon (10.200.16.10:59288).
Jan 28 00:53:15.972790 sshd[4868]: Accepted publickey for core from 10.200.16.10 port 59288 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:53:15.974060 sshd-session[4868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:53:15.977989 systemd-logind[1868]: New session 22 of user core.
Jan 28 00:53:15.984631 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 28 00:53:16.361437 sshd[4871]: Connection closed by 10.200.16.10 port 59288
Jan 28 00:53:16.361883 sshd-session[4868]: pam_unix(sshd:session): session closed for user core
Jan 28 00:53:16.365541 systemd[1]: sshd@19-10.200.20.13:22-10.200.16.10:59288.service: Deactivated successfully.
Jan 28 00:53:16.367663 systemd[1]: session-22.scope: Deactivated successfully.
Jan 28 00:53:16.368398 systemd-logind[1868]: Session 22 logged out. Waiting for processes to exit.
Jan 28 00:53:16.370104 systemd-logind[1868]: Removed session 22.
Jan 28 00:53:21.457710 systemd[1]: Started sshd@20-10.200.20.13:22-10.200.16.10:40812.service - OpenSSH per-connection server daemon (10.200.16.10:40812).
Jan 28 00:53:21.951797 sshd[4883]: Accepted publickey for core from 10.200.16.10 port 40812 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:53:21.952890 sshd-session[4883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:53:21.956508 systemd-logind[1868]: New session 23 of user core.
Jan 28 00:53:21.965642 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 28 00:53:22.342370 sshd[4886]: Connection closed by 10.200.16.10 port 40812
Jan 28 00:53:22.342794 sshd-session[4883]: pam_unix(sshd:session): session closed for user core
Jan 28 00:53:22.346286 systemd[1]: sshd@20-10.200.20.13:22-10.200.16.10:40812.service: Deactivated successfully.
Jan 28 00:53:22.347892 systemd[1]: session-23.scope: Deactivated successfully.
Jan 28 00:53:22.349992 systemd-logind[1868]: Session 23 logged out. Waiting for processes to exit.
Jan 28 00:53:22.351336 systemd-logind[1868]: Removed session 23.
Jan 28 00:53:22.433266 systemd[1]: Started sshd@21-10.200.20.13:22-10.200.16.10:40822.service - OpenSSH per-connection server daemon (10.200.16.10:40822).
Jan 28 00:53:22.927479 sshd[4897]: Accepted publickey for core from 10.200.16.10 port 40822 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:53:22.928605 sshd-session[4897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:53:22.932225 systemd-logind[1868]: New session 24 of user core.
Jan 28 00:53:22.938797 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 28 00:53:24.498322 containerd[1885]: time="2026-01-28T00:53:24.498276612Z" level=info msg="StopContainer for \"cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca\" with timeout 30 (s)"
Jan 28 00:53:24.499815 containerd[1885]: time="2026-01-28T00:53:24.499789375Z" level=info msg="Stop container \"cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca\" with signal terminated"
Jan 28 00:53:24.508095 containerd[1885]: time="2026-01-28T00:53:24.508058415Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 28 00:53:24.514407 containerd[1885]: time="2026-01-28T00:53:24.514381544Z" level=info msg="StopContainer for \"278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c\" with timeout 2 (s)"
Jan 28 00:53:24.515114 containerd[1885]: time="2026-01-28T00:53:24.514787212Z" level=info msg="Stop container \"278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c\" with signal terminated"
Jan 28 00:53:24.515283 systemd[1]: cri-containerd-cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca.scope: Deactivated successfully.
Jan 28 00:53:24.520842 containerd[1885]: time="2026-01-28T00:53:24.520489300Z" level=info msg="received container exit event container_id:\"cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca\" id:\"cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca\" pid:3888 exited_at:{seconds:1769561604 nanos:520232692}"
Jan 28 00:53:24.527364 systemd-networkd[1477]: lxc_health: Link DOWN
Jan 28 00:53:24.528242 systemd-networkd[1477]: lxc_health: Lost carrier
Jan 28 00:53:24.542018 systemd[1]: cri-containerd-278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c.scope: Deactivated successfully.
Jan 28 00:53:24.542675 systemd[1]: cri-containerd-278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c.scope: Consumed 4.389s CPU time, 120.9M memory peak, 128K read from disk, 12.9M written to disk.
Jan 28 00:53:24.543539 containerd[1885]: time="2026-01-28T00:53:24.543464649Z" level=info msg="received container exit event container_id:\"278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c\" id:\"278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c\" pid:3999 exited_at:{seconds:1769561604 nanos:543185865}"
Jan 28 00:53:24.546422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca-rootfs.mount: Deactivated successfully.
Jan 28 00:53:24.563829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c-rootfs.mount: Deactivated successfully.
Jan 28 00:53:24.593636 containerd[1885]: time="2026-01-28T00:53:24.593593080Z" level=info msg="StopContainer for \"278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c\" returns successfully"
Jan 28 00:53:24.594340 containerd[1885]: time="2026-01-28T00:53:24.594317476Z" level=info msg="StopPodSandbox for \"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\""
Jan 28 00:53:24.594531 containerd[1885]: time="2026-01-28T00:53:24.594468184Z" level=info msg="Container to stop \"0e5262d2267fd683fdfa7b02f21186b557a083e3d6ed3048cffd67b34d58a2e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 28 00:53:24.594748 containerd[1885]: time="2026-01-28T00:53:24.594675958Z" level=info msg="Container to stop \"a4444b574f74bb938beae68044221303dc70bb4202d34f7d6bb8255805f95a75\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 28 00:53:24.594748 containerd[1885]: time="2026-01-28T00:53:24.594691110Z" level=info msg="Container to stop \"9c2950c67a4a6b1b609777099b685a03d6089f99d89c3fdf0b1deb01c573cc5c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 28 00:53:24.594748 containerd[1885]: time="2026-01-28T00:53:24.594697862Z" level=info msg="Container to stop \"63cb7e64c800794986db54d7d0929afa63669350f85bdf780eea662718f8f5d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 28 00:53:24.594748 containerd[1885]: time="2026-01-28T00:53:24.594703063Z" level=info msg="Container to stop \"278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 28 00:53:24.598576 containerd[1885]: time="2026-01-28T00:53:24.598523114Z" level=info msg="StopContainer for \"cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca\" returns successfully"
Jan 28 00:53:24.600463 containerd[1885]: time="2026-01-28T00:53:24.600375934Z" level=info msg="StopPodSandbox for \"d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258\""
Jan 28 00:53:24.600463 containerd[1885]: time="2026-01-28T00:53:24.600437640Z" level=info msg="Container to stop \"cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 28 00:53:24.604292 systemd[1]: cri-containerd-c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174.scope: Deactivated successfully.
Jan 28 00:53:24.606183 containerd[1885]: time="2026-01-28T00:53:24.606156552Z" level=info msg="received sandbox exit event container_id:\"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\" id:\"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\" exit_status:137 exited_at:{seconds:1769561604 nanos:605998972}" monitor_name=podsandbox
Jan 28 00:53:24.610682 systemd[1]: cri-containerd-d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258.scope: Deactivated successfully.
Jan 28 00:53:24.612425 containerd[1885]: time="2026-01-28T00:53:24.612391823Z" level=info msg="received sandbox exit event container_id:\"d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258\" id:\"d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258\" exit_status:137 exited_at:{seconds:1769561604 nanos:612239267}" monitor_name=podsandbox
Jan 28 00:53:24.633447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174-rootfs.mount: Deactivated successfully.
Jan 28 00:53:24.639527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258-rootfs.mount: Deactivated successfully.
Jan 28 00:53:24.652884 containerd[1885]: time="2026-01-28T00:53:24.652766324Z" level=info msg="shim disconnected" id=d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258 namespace=k8s.io
Jan 28 00:53:24.653284 containerd[1885]: time="2026-01-28T00:53:24.652858023Z" level=warning msg="cleaning up after shim disconnected" id=d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258 namespace=k8s.io
Jan 28 00:53:24.653506 containerd[1885]: time="2026-01-28T00:53:24.653461168Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 00:53:24.653555 containerd[1885]: time="2026-01-28T00:53:24.653225385Z" level=info msg="shim disconnected" id=c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174 namespace=k8s.io
Jan 28 00:53:24.653555 containerd[1885]: time="2026-01-28T00:53:24.653525538Z" level=warning msg="cleaning up after shim disconnected" id=c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174 namespace=k8s.io
Jan 28 00:53:24.653555 containerd[1885]: time="2026-01-28T00:53:24.653545018Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 00:53:24.664515 containerd[1885]: time="2026-01-28T00:53:24.664466333Z" level=info msg="received sandbox container exit event sandbox_id:\"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\" exit_status:137 exited_at:{seconds:1769561604 nanos:605998972}" monitor_name=criService
Jan 28 00:53:24.666228 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174-shm.mount: Deactivated successfully.
Jan 28 00:53:24.667221 containerd[1885]: time="2026-01-28T00:53:24.667010668Z" level=info msg="TearDown network for sandbox \"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\" successfully"
Jan 28 00:53:24.667326 containerd[1885]: time="2026-01-28T00:53:24.667308628Z" level=info msg="StopPodSandbox for \"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\" returns successfully"
Jan 28 00:53:24.670970 containerd[1885]: time="2026-01-28T00:53:24.670861672Z" level=info msg="received sandbox container exit event sandbox_id:\"d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258\" exit_status:137 exited_at:{seconds:1769561604 nanos:612239267}" monitor_name=criService
Jan 28 00:53:24.671214 containerd[1885]: time="2026-01-28T00:53:24.670948475Z" level=info msg="TearDown network for sandbox \"d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258\" successfully"
Jan 28 00:53:24.671286 containerd[1885]: time="2026-01-28T00:53:24.671273268Z" level=info msg="StopPodSandbox for \"d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258\" returns successfully"
Jan 28 00:53:24.724533 kubelet[3369]: I0128 00:53:24.723341 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvplb\" (UniqueName: \"kubernetes.io/projected/04d6614e-f303-460e-87de-5306fea760c4-kube-api-access-zvplb\") pod \"04d6614e-f303-460e-87de-5306fea760c4\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") "
Jan 28 00:53:24.724533 kubelet[3369]: I0128 00:53:24.723383 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-host-proc-sys-net\") pod \"04d6614e-f303-460e-87de-5306fea760c4\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") "
Jan 28 00:53:24.724533 kubelet[3369]: I0128 00:53:24.723397 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-hostproc\") pod \"04d6614e-f303-460e-87de-5306fea760c4\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") "
Jan 28 00:53:24.724533 kubelet[3369]: I0128 00:53:24.723411 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04d6614e-f303-460e-87de-5306fea760c4-cilium-config-path\") pod \"04d6614e-f303-460e-87de-5306fea760c4\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") "
Jan 28 00:53:24.724533 kubelet[3369]: I0128 00:53:24.723423 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-cilium-run\") pod \"04d6614e-f303-460e-87de-5306fea760c4\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") "
Jan 28 00:53:24.724533 kubelet[3369]: I0128 00:53:24.723434 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/04d6614e-f303-460e-87de-5306fea760c4-hubble-tls\") pod \"04d6614e-f303-460e-87de-5306fea760c4\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") "
Jan 28 00:53:24.724971 kubelet[3369]: I0128 00:53:24.723444 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-host-proc-sys-kernel\") pod \"04d6614e-f303-460e-87de-5306fea760c4\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") "
Jan 28 00:53:24.724971 kubelet[3369]: I0128 00:53:24.723456 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-lib-modules\") pod \"04d6614e-f303-460e-87de-5306fea760c4\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") "
Jan 28 00:53:24.724971 kubelet[3369]: I0128 00:53:24.723469 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-bpf-maps\") pod \"04d6614e-f303-460e-87de-5306fea760c4\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") "
Jan 28 00:53:24.724971 kubelet[3369]: I0128 00:53:24.723483 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-etc-cni-netd\") pod \"04d6614e-f303-460e-87de-5306fea760c4\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") "
Jan 28 00:53:24.724971 kubelet[3369]: I0128 00:53:24.723508 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs78v\" (UniqueName: \"kubernetes.io/projected/56c2ff2c-1d50-4015-8dd8-d2afd87e75dd-kube-api-access-vs78v\") pod \"56c2ff2c-1d50-4015-8dd8-d2afd87e75dd\" (UID: \"56c2ff2c-1d50-4015-8dd8-d2afd87e75dd\") "
Jan 28 00:53:24.724971 kubelet[3369]: I0128 00:53:24.723520 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56c2ff2c-1d50-4015-8dd8-d2afd87e75dd-cilium-config-path\") pod \"56c2ff2c-1d50-4015-8dd8-d2afd87e75dd\" (UID: \"56c2ff2c-1d50-4015-8dd8-d2afd87e75dd\") "
Jan 28 00:53:24.725065 kubelet[3369]: I0128 00:53:24.723532 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/04d6614e-f303-460e-87de-5306fea760c4-clustermesh-secrets\") pod \"04d6614e-f303-460e-87de-5306fea760c4\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") "
Jan 28 00:53:24.725065 kubelet[3369]: I0128 00:53:24.723540 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-xtables-lock\") pod \"04d6614e-f303-460e-87de-5306fea760c4\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") "
Jan 28 00:53:24.725065 kubelet[3369]: I0128 00:53:24.723549 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-cilium-cgroup\") pod \"04d6614e-f303-460e-87de-5306fea760c4\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") "
Jan 28 00:53:24.725065 kubelet[3369]: I0128 00:53:24.723560 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-cni-path\") pod \"04d6614e-f303-460e-87de-5306fea760c4\" (UID: \"04d6614e-f303-460e-87de-5306fea760c4\") "
Jan 28 00:53:24.725065 kubelet[3369]: I0128 00:53:24.723610 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-cni-path" (OuterVolumeSpecName: "cni-path") pod "04d6614e-f303-460e-87de-5306fea760c4" (UID: "04d6614e-f303-460e-87de-5306fea760c4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 00:53:24.725065 kubelet[3369]: I0128 00:53:24.723641 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "04d6614e-f303-460e-87de-5306fea760c4" (UID: "04d6614e-f303-460e-87de-5306fea760c4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 00:53:24.725157 kubelet[3369]: I0128 00:53:24.723651 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-hostproc" (OuterVolumeSpecName: "hostproc") pod "04d6614e-f303-460e-87de-5306fea760c4" (UID: "04d6614e-f303-460e-87de-5306fea760c4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 00:53:24.725157 kubelet[3369]: I0128 00:53:24.724716 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "04d6614e-f303-460e-87de-5306fea760c4" (UID: "04d6614e-f303-460e-87de-5306fea760c4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 00:53:24.725157 kubelet[3369]: I0128 00:53:24.724758 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "04d6614e-f303-460e-87de-5306fea760c4" (UID: "04d6614e-f303-460e-87de-5306fea760c4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 00:53:24.725555 kubelet[3369]: I0128 00:53:24.725535 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "04d6614e-f303-460e-87de-5306fea760c4" (UID: "04d6614e-f303-460e-87de-5306fea760c4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 00:53:24.725648 kubelet[3369]: I0128 00:53:24.725636 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "04d6614e-f303-460e-87de-5306fea760c4" (UID: "04d6614e-f303-460e-87de-5306fea760c4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 00:53:24.725716 kubelet[3369]: I0128 00:53:24.725706 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "04d6614e-f303-460e-87de-5306fea760c4" (UID: "04d6614e-f303-460e-87de-5306fea760c4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 00:53:24.726000 kubelet[3369]: I0128 00:53:24.725983 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "04d6614e-f303-460e-87de-5306fea760c4" (UID: "04d6614e-f303-460e-87de-5306fea760c4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 00:53:24.726086 kubelet[3369]: I0128 00:53:24.726075 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "04d6614e-f303-460e-87de-5306fea760c4" (UID: "04d6614e-f303-460e-87de-5306fea760c4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 00:53:24.728349 kubelet[3369]: I0128 00:53:24.727890 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d6614e-f303-460e-87de-5306fea760c4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "04d6614e-f303-460e-87de-5306fea760c4" (UID: "04d6614e-f303-460e-87de-5306fea760c4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 28 00:53:24.728644 kubelet[3369]: I0128 00:53:24.728625 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04d6614e-f303-460e-87de-5306fea760c4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "04d6614e-f303-460e-87de-5306fea760c4" (UID: "04d6614e-f303-460e-87de-5306fea760c4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 28 00:53:24.728811 kubelet[3369]: I0128 00:53:24.728782 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04d6614e-f303-460e-87de-5306fea760c4-kube-api-access-zvplb" (OuterVolumeSpecName: "kube-api-access-zvplb") pod "04d6614e-f303-460e-87de-5306fea760c4" (UID: "04d6614e-f303-460e-87de-5306fea760c4"). InnerVolumeSpecName "kube-api-access-zvplb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 28 00:53:24.729348 kubelet[3369]: I0128 00:53:24.729323 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04d6614e-f303-460e-87de-5306fea760c4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "04d6614e-f303-460e-87de-5306fea760c4" (UID: "04d6614e-f303-460e-87de-5306fea760c4"). InnerVolumeSpecName "hubble-tls".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 00:53:24.729861 kubelet[3369]: I0128 00:53:24.729841 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56c2ff2c-1d50-4015-8dd8-d2afd87e75dd-kube-api-access-vs78v" (OuterVolumeSpecName: "kube-api-access-vs78v") pod "56c2ff2c-1d50-4015-8dd8-d2afd87e75dd" (UID: "56c2ff2c-1d50-4015-8dd8-d2afd87e75dd"). InnerVolumeSpecName "kube-api-access-vs78v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 00:53:24.730334 kubelet[3369]: I0128 00:53:24.730314 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c2ff2c-1d50-4015-8dd8-d2afd87e75dd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "56c2ff2c-1d50-4015-8dd8-d2afd87e75dd" (UID: "56c2ff2c-1d50-4015-8dd8-d2afd87e75dd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 00:53:24.824161 kubelet[3369]: I0128 00:53:24.824000 3369 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-etc-cni-netd\") on node \"ci-4459.2.3-n-42917f0d29\" DevicePath \"\"" Jan 28 00:53:24.824336 kubelet[3369]: I0128 00:53:24.824324 3369 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/04d6614e-f303-460e-87de-5306fea760c4-clustermesh-secrets\") on node \"ci-4459.2.3-n-42917f0d29\" DevicePath \"\"" Jan 28 00:53:24.824409 kubelet[3369]: I0128 00:53:24.824399 3369 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vs78v\" (UniqueName: \"kubernetes.io/projected/56c2ff2c-1d50-4015-8dd8-d2afd87e75dd-kube-api-access-vs78v\") on node \"ci-4459.2.3-n-42917f0d29\" DevicePath \"\"" Jan 28 00:53:24.824470 kubelet[3369]: I0128 00:53:24.824461 3369 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/56c2ff2c-1d50-4015-8dd8-d2afd87e75dd-cilium-config-path\") on node \"ci-4459.2.3-n-42917f0d29\" DevicePath \"\"" Jan 28 00:53:24.824544 kubelet[3369]: I0128 00:53:24.824534 3369 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-cilium-cgroup\") on node \"ci-4459.2.3-n-42917f0d29\" DevicePath \"\"" Jan 28 00:53:24.824687 kubelet[3369]: I0128 00:53:24.824596 3369 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-cni-path\") on node \"ci-4459.2.3-n-42917f0d29\" DevicePath \"\"" Jan 28 00:53:24.824687 kubelet[3369]: I0128 00:53:24.824608 3369 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-xtables-lock\") on node \"ci-4459.2.3-n-42917f0d29\" DevicePath \"\"" Jan 28 00:53:24.824687 kubelet[3369]: I0128 00:53:24.824619 3369 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zvplb\" (UniqueName: \"kubernetes.io/projected/04d6614e-f303-460e-87de-5306fea760c4-kube-api-access-zvplb\") on node \"ci-4459.2.3-n-42917f0d29\" DevicePath \"\"" Jan 28 00:53:24.824687 kubelet[3369]: I0128 00:53:24.824628 3369 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-host-proc-sys-net\") on node \"ci-4459.2.3-n-42917f0d29\" DevicePath \"\"" Jan 28 00:53:24.824687 kubelet[3369]: I0128 00:53:24.824635 3369 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04d6614e-f303-460e-87de-5306fea760c4-cilium-config-path\") on node \"ci-4459.2.3-n-42917f0d29\" DevicePath \"\"" Jan 28 00:53:24.824687 kubelet[3369]: I0128 00:53:24.824642 3369 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-cilium-run\") on node \"ci-4459.2.3-n-42917f0d29\" DevicePath \"\"" Jan 28 00:53:24.824687 kubelet[3369]: I0128 00:53:24.824648 3369 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-hostproc\") on node \"ci-4459.2.3-n-42917f0d29\" DevicePath \"\"" Jan 28 00:53:24.824687 kubelet[3369]: I0128 00:53:24.824655 3369 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/04d6614e-f303-460e-87de-5306fea760c4-hubble-tls\") on node \"ci-4459.2.3-n-42917f0d29\" DevicePath \"\"" Jan 28 00:53:24.824823 kubelet[3369]: I0128 00:53:24.824661 3369 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-host-proc-sys-kernel\") on node \"ci-4459.2.3-n-42917f0d29\" DevicePath \"\"" Jan 28 00:53:24.824823 kubelet[3369]: I0128 00:53:24.824667 3369 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-lib-modules\") on node \"ci-4459.2.3-n-42917f0d29\" DevicePath \"\"" Jan 28 00:53:24.824823 kubelet[3369]: I0128 00:53:24.824672 3369 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/04d6614e-f303-460e-87de-5306fea760c4-bpf-maps\") on node \"ci-4459.2.3-n-42917f0d29\" DevicePath \"\"" Jan 28 00:53:25.088455 systemd[1]: Removed slice kubepods-burstable-pod04d6614e_f303_460e_87de_5306fea760c4.slice - libcontainer container kubepods-burstable-pod04d6614e_f303_460e_87de_5306fea760c4.slice. Jan 28 00:53:25.088567 systemd[1]: kubepods-burstable-pod04d6614e_f303_460e_87de_5306fea760c4.slice: Consumed 4.456s CPU time, 121.3M memory peak, 128K read from disk, 12.9M written to disk. 
Jan 28 00:53:25.090050 systemd[1]: Removed slice kubepods-besteffort-pod56c2ff2c_1d50_4015_8dd8_d2afd87e75dd.slice - libcontainer container kubepods-besteffort-pod56c2ff2c_1d50_4015_8dd8_d2afd87e75dd.slice. Jan 28 00:53:25.348302 kubelet[3369]: I0128 00:53:25.348159 3369 scope.go:117] "RemoveContainer" containerID="278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c" Jan 28 00:53:25.350620 containerd[1885]: time="2026-01-28T00:53:25.350530548Z" level=info msg="RemoveContainer for \"278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c\"" Jan 28 00:53:25.358922 containerd[1885]: time="2026-01-28T00:53:25.358842590Z" level=info msg="RemoveContainer for \"278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c\" returns successfully" Jan 28 00:53:25.359199 kubelet[3369]: I0128 00:53:25.359180 3369 scope.go:117] "RemoveContainer" containerID="63cb7e64c800794986db54d7d0929afa63669350f85bdf780eea662718f8f5d9" Jan 28 00:53:25.361188 containerd[1885]: time="2026-01-28T00:53:25.361158119Z" level=info msg="RemoveContainer for \"63cb7e64c800794986db54d7d0929afa63669350f85bdf780eea662718f8f5d9\"" Jan 28 00:53:25.368197 containerd[1885]: time="2026-01-28T00:53:25.368166179Z" level=info msg="RemoveContainer for \"63cb7e64c800794986db54d7d0929afa63669350f85bdf780eea662718f8f5d9\" returns successfully" Jan 28 00:53:25.368720 kubelet[3369]: I0128 00:53:25.368654 3369 scope.go:117] "RemoveContainer" containerID="9c2950c67a4a6b1b609777099b685a03d6089f99d89c3fdf0b1deb01c573cc5c" Jan 28 00:53:25.371138 containerd[1885]: time="2026-01-28T00:53:25.371115934Z" level=info msg="RemoveContainer for \"9c2950c67a4a6b1b609777099b685a03d6089f99d89c3fdf0b1deb01c573cc5c\"" Jan 28 00:53:25.379513 containerd[1885]: time="2026-01-28T00:53:25.379442616Z" level=info msg="RemoveContainer for \"9c2950c67a4a6b1b609777099b685a03d6089f99d89c3fdf0b1deb01c573cc5c\" returns successfully" Jan 28 00:53:25.379763 kubelet[3369]: I0128 00:53:25.379732 3369 scope.go:117] 
"RemoveContainer" containerID="a4444b574f74bb938beae68044221303dc70bb4202d34f7d6bb8255805f95a75" Jan 28 00:53:25.381361 containerd[1885]: time="2026-01-28T00:53:25.381028708Z" level=info msg="RemoveContainer for \"a4444b574f74bb938beae68044221303dc70bb4202d34f7d6bb8255805f95a75\"" Jan 28 00:53:25.386862 containerd[1885]: time="2026-01-28T00:53:25.386835831Z" level=info msg="RemoveContainer for \"a4444b574f74bb938beae68044221303dc70bb4202d34f7d6bb8255805f95a75\" returns successfully" Jan 28 00:53:25.387123 kubelet[3369]: I0128 00:53:25.387099 3369 scope.go:117] "RemoveContainer" containerID="0e5262d2267fd683fdfa7b02f21186b557a083e3d6ed3048cffd67b34d58a2e6" Jan 28 00:53:25.389665 containerd[1885]: time="2026-01-28T00:53:25.389427640Z" level=info msg="RemoveContainer for \"0e5262d2267fd683fdfa7b02f21186b557a083e3d6ed3048cffd67b34d58a2e6\"" Jan 28 00:53:25.396842 containerd[1885]: time="2026-01-28T00:53:25.396781951Z" level=info msg="RemoveContainer for \"0e5262d2267fd683fdfa7b02f21186b557a083e3d6ed3048cffd67b34d58a2e6\" returns successfully" Jan 28 00:53:25.397237 kubelet[3369]: I0128 00:53:25.397185 3369 scope.go:117] "RemoveContainer" containerID="278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c" Jan 28 00:53:25.397564 containerd[1885]: time="2026-01-28T00:53:25.397535868Z" level=error msg="ContainerStatus for \"278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c\": not found" Jan 28 00:53:25.397867 kubelet[3369]: E0128 00:53:25.397831 3369 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c\": not found" containerID="278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c" Jan 28 00:53:25.397923 kubelet[3369]: 
I0128 00:53:25.397871 3369 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c"} err="failed to get container status \"278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c\": rpc error: code = NotFound desc = an error occurred when try to find container \"278e39fdd0794f15f76cdc225ffc6937a0dc7c515b1ffc5509f27f87f359b54c\": not found" Jan 28 00:53:25.397952 kubelet[3369]: I0128 00:53:25.397928 3369 scope.go:117] "RemoveContainer" containerID="63cb7e64c800794986db54d7d0929afa63669350f85bdf780eea662718f8f5d9" Jan 28 00:53:25.398190 containerd[1885]: time="2026-01-28T00:53:25.398169166Z" level=error msg="ContainerStatus for \"63cb7e64c800794986db54d7d0929afa63669350f85bdf780eea662718f8f5d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"63cb7e64c800794986db54d7d0929afa63669350f85bdf780eea662718f8f5d9\": not found" Jan 28 00:53:25.398419 kubelet[3369]: E0128 00:53:25.398402 3369 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"63cb7e64c800794986db54d7d0929afa63669350f85bdf780eea662718f8f5d9\": not found" containerID="63cb7e64c800794986db54d7d0929afa63669350f85bdf780eea662718f8f5d9" Jan 28 00:53:25.398536 kubelet[3369]: I0128 00:53:25.398514 3369 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"63cb7e64c800794986db54d7d0929afa63669350f85bdf780eea662718f8f5d9"} err="failed to get container status \"63cb7e64c800794986db54d7d0929afa63669350f85bdf780eea662718f8f5d9\": rpc error: code = NotFound desc = an error occurred when try to find container \"63cb7e64c800794986db54d7d0929afa63669350f85bdf780eea662718f8f5d9\": not found" Jan 28 00:53:25.398609 kubelet[3369]: I0128 00:53:25.398598 3369 scope.go:117] "RemoveContainer" 
containerID="9c2950c67a4a6b1b609777099b685a03d6089f99d89c3fdf0b1deb01c573cc5c" Jan 28 00:53:25.398968 containerd[1885]: time="2026-01-28T00:53:25.398816224Z" level=error msg="ContainerStatus for \"9c2950c67a4a6b1b609777099b685a03d6089f99d89c3fdf0b1deb01c573cc5c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c2950c67a4a6b1b609777099b685a03d6089f99d89c3fdf0b1deb01c573cc5c\": not found" Jan 28 00:53:25.399107 kubelet[3369]: E0128 00:53:25.399091 3369 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c2950c67a4a6b1b609777099b685a03d6089f99d89c3fdf0b1deb01c573cc5c\": not found" containerID="9c2950c67a4a6b1b609777099b685a03d6089f99d89c3fdf0b1deb01c573cc5c" Jan 28 00:53:25.399238 kubelet[3369]: I0128 00:53:25.399182 3369 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c2950c67a4a6b1b609777099b685a03d6089f99d89c3fdf0b1deb01c573cc5c"} err="failed to get container status \"9c2950c67a4a6b1b609777099b685a03d6089f99d89c3fdf0b1deb01c573cc5c\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c2950c67a4a6b1b609777099b685a03d6089f99d89c3fdf0b1deb01c573cc5c\": not found" Jan 28 00:53:25.399238 kubelet[3369]: I0128 00:53:25.399199 3369 scope.go:117] "RemoveContainer" containerID="a4444b574f74bb938beae68044221303dc70bb4202d34f7d6bb8255805f95a75" Jan 28 00:53:25.399459 containerd[1885]: time="2026-01-28T00:53:25.399434281Z" level=error msg="ContainerStatus for \"a4444b574f74bb938beae68044221303dc70bb4202d34f7d6bb8255805f95a75\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a4444b574f74bb938beae68044221303dc70bb4202d34f7d6bb8255805f95a75\": not found" Jan 28 00:53:25.400586 kubelet[3369]: E0128 00:53:25.400454 3369 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"a4444b574f74bb938beae68044221303dc70bb4202d34f7d6bb8255805f95a75\": not found" containerID="a4444b574f74bb938beae68044221303dc70bb4202d34f7d6bb8255805f95a75" Jan 28 00:53:25.400586 kubelet[3369]: I0128 00:53:25.400474 3369 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a4444b574f74bb938beae68044221303dc70bb4202d34f7d6bb8255805f95a75"} err="failed to get container status \"a4444b574f74bb938beae68044221303dc70bb4202d34f7d6bb8255805f95a75\": rpc error: code = NotFound desc = an error occurred when try to find container \"a4444b574f74bb938beae68044221303dc70bb4202d34f7d6bb8255805f95a75\": not found" Jan 28 00:53:25.400586 kubelet[3369]: I0128 00:53:25.400487 3369 scope.go:117] "RemoveContainer" containerID="0e5262d2267fd683fdfa7b02f21186b557a083e3d6ed3048cffd67b34d58a2e6" Jan 28 00:53:25.401011 containerd[1885]: time="2026-01-28T00:53:25.400965228Z" level=error msg="ContainerStatus for \"0e5262d2267fd683fdfa7b02f21186b557a083e3d6ed3048cffd67b34d58a2e6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e5262d2267fd683fdfa7b02f21186b557a083e3d6ed3048cffd67b34d58a2e6\": not found" Jan 28 00:53:25.401288 kubelet[3369]: E0128 00:53:25.401080 3369 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e5262d2267fd683fdfa7b02f21186b557a083e3d6ed3048cffd67b34d58a2e6\": not found" containerID="0e5262d2267fd683fdfa7b02f21186b557a083e3d6ed3048cffd67b34d58a2e6" Jan 28 00:53:25.401288 kubelet[3369]: I0128 00:53:25.401098 3369 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0e5262d2267fd683fdfa7b02f21186b557a083e3d6ed3048cffd67b34d58a2e6"} err="failed to get container status \"0e5262d2267fd683fdfa7b02f21186b557a083e3d6ed3048cffd67b34d58a2e6\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"0e5262d2267fd683fdfa7b02f21186b557a083e3d6ed3048cffd67b34d58a2e6\": not found" Jan 28 00:53:25.401288 kubelet[3369]: I0128 00:53:25.401113 3369 scope.go:117] "RemoveContainer" containerID="cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca" Jan 28 00:53:25.402934 containerd[1885]: time="2026-01-28T00:53:25.402906410Z" level=info msg="RemoveContainer for \"cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca\"" Jan 28 00:53:25.412409 containerd[1885]: time="2026-01-28T00:53:25.412379156Z" level=info msg="RemoveContainer for \"cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca\" returns successfully" Jan 28 00:53:25.412728 kubelet[3369]: I0128 00:53:25.412632 3369 scope.go:117] "RemoveContainer" containerID="cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca" Jan 28 00:53:25.412930 containerd[1885]: time="2026-01-28T00:53:25.412901131Z" level=error msg="ContainerStatus for \"cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca\": not found" Jan 28 00:53:25.413178 kubelet[3369]: E0128 00:53:25.413151 3369 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca\": not found" containerID="cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca" Jan 28 00:53:25.413230 kubelet[3369]: I0128 00:53:25.413183 3369 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca"} err="failed to get container status \"cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"cebb4b6eaf95945b66f05eafb902a76d8c424718daed0dac64ba20c2d78430ca\": not found" Jan 28 00:53:25.545605 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258-shm.mount: Deactivated successfully. Jan 28 00:53:25.545710 systemd[1]: var-lib-kubelet-pods-56c2ff2c\x2d1d50\x2d4015\x2d8dd8\x2dd2afd87e75dd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvs78v.mount: Deactivated successfully. Jan 28 00:53:25.545755 systemd[1]: var-lib-kubelet-pods-04d6614e\x2df303\x2d460e\x2d87de\x2d5306fea760c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzvplb.mount: Deactivated successfully. Jan 28 00:53:25.545798 systemd[1]: var-lib-kubelet-pods-04d6614e\x2df303\x2d460e\x2d87de\x2d5306fea760c4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 28 00:53:25.545833 systemd[1]: var-lib-kubelet-pods-04d6614e\x2df303\x2d460e\x2d87de\x2d5306fea760c4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 28 00:53:26.503355 sshd[4900]: Connection closed by 10.200.16.10 port 40822 Jan 28 00:53:26.503975 sshd-session[4897]: pam_unix(sshd:session): session closed for user core Jan 28 00:53:26.508153 systemd[1]: sshd@21-10.200.20.13:22-10.200.16.10:40822.service: Deactivated successfully. Jan 28 00:53:26.509985 systemd[1]: session-24.scope: Deactivated successfully. Jan 28 00:53:26.510832 systemd-logind[1868]: Session 24 logged out. Waiting for processes to exit. Jan 28 00:53:26.512295 systemd-logind[1868]: Removed session 24. Jan 28 00:53:26.587273 systemd[1]: Started sshd@22-10.200.20.13:22-10.200.16.10:40832.service - OpenSSH per-connection server daemon (10.200.16.10:40832). 
Jan 28 00:53:27.045296 sshd[5046]: Accepted publickey for core from 10.200.16.10 port 40832 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:53:27.046420 sshd-session[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:53:27.050080 systemd-logind[1868]: New session 25 of user core. Jan 28 00:53:27.052675 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 28 00:53:27.084140 kubelet[3369]: I0128 00:53:27.084085 3369 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04d6614e-f303-460e-87de-5306fea760c4" path="/var/lib/kubelet/pods/04d6614e-f303-460e-87de-5306fea760c4/volumes" Jan 28 00:53:27.084989 kubelet[3369]: I0128 00:53:27.084961 3369 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56c2ff2c-1d50-4015-8dd8-d2afd87e75dd" path="/var/lib/kubelet/pods/56c2ff2c-1d50-4015-8dd8-d2afd87e75dd/volumes" Jan 28 00:53:27.703994 kubelet[3369]: I0128 00:53:27.703951 3369 memory_manager.go:355] "RemoveStaleState removing state" podUID="56c2ff2c-1d50-4015-8dd8-d2afd87e75dd" containerName="cilium-operator" Jan 28 00:53:27.703994 kubelet[3369]: I0128 00:53:27.703982 3369 memory_manager.go:355] "RemoveStaleState removing state" podUID="04d6614e-f303-460e-87de-5306fea760c4" containerName="cilium-agent" Jan 28 00:53:27.712177 systemd[1]: Created slice kubepods-burstable-pod6c4117fa_b2ef_44b0_ba92_fce7f41f09f5.slice - libcontainer container kubepods-burstable-pod6c4117fa_b2ef_44b0_ba92_fce7f41f09f5.slice. Jan 28 00:53:27.719601 sshd[5049]: Connection closed by 10.200.16.10 port 40832 Jan 28 00:53:27.720685 sshd-session[5046]: pam_unix(sshd:session): session closed for user core Jan 28 00:53:27.724565 systemd[1]: sshd@22-10.200.20.13:22-10.200.16.10:40832.service: Deactivated successfully. Jan 28 00:53:27.728569 systemd[1]: session-25.scope: Deactivated successfully. Jan 28 00:53:27.730332 systemd-logind[1868]: Session 25 logged out. Waiting for processes to exit. 
Jan 28 00:53:27.732407 systemd-logind[1868]: Removed session 25. Jan 28 00:53:27.739365 kubelet[3369]: I0128 00:53:27.739325 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c4117fa-b2ef-44b0-ba92-fce7f41f09f5-hostproc\") pod \"cilium-4n6q5\" (UID: \"6c4117fa-b2ef-44b0-ba92-fce7f41f09f5\") " pod="kube-system/cilium-4n6q5" Jan 28 00:53:27.739455 kubelet[3369]: I0128 00:53:27.739370 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c4117fa-b2ef-44b0-ba92-fce7f41f09f5-host-proc-sys-net\") pod \"cilium-4n6q5\" (UID: \"6c4117fa-b2ef-44b0-ba92-fce7f41f09f5\") " pod="kube-system/cilium-4n6q5" Jan 28 00:53:27.739455 kubelet[3369]: I0128 00:53:27.739383 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c4117fa-b2ef-44b0-ba92-fce7f41f09f5-bpf-maps\") pod \"cilium-4n6q5\" (UID: \"6c4117fa-b2ef-44b0-ba92-fce7f41f09f5\") " pod="kube-system/cilium-4n6q5" Jan 28 00:53:27.739455 kubelet[3369]: I0128 00:53:27.739395 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c4117fa-b2ef-44b0-ba92-fce7f41f09f5-cni-path\") pod \"cilium-4n6q5\" (UID: \"6c4117fa-b2ef-44b0-ba92-fce7f41f09f5\") " pod="kube-system/cilium-4n6q5" Jan 28 00:53:27.739455 kubelet[3369]: I0128 00:53:27.739404 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c4117fa-b2ef-44b0-ba92-fce7f41f09f5-hubble-tls\") pod \"cilium-4n6q5\" (UID: \"6c4117fa-b2ef-44b0-ba92-fce7f41f09f5\") " pod="kube-system/cilium-4n6q5" Jan 28 00:53:27.739455 kubelet[3369]: I0128 00:53:27.739414 3369 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c4117fa-b2ef-44b0-ba92-fce7f41f09f5-cilium-run\") pod \"cilium-4n6q5\" (UID: \"6c4117fa-b2ef-44b0-ba92-fce7f41f09f5\") " pod="kube-system/cilium-4n6q5" Jan 28 00:53:27.739455 kubelet[3369]: I0128 00:53:27.739423 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c4117fa-b2ef-44b0-ba92-fce7f41f09f5-etc-cni-netd\") pod \"cilium-4n6q5\" (UID: \"6c4117fa-b2ef-44b0-ba92-fce7f41f09f5\") " pod="kube-system/cilium-4n6q5" Jan 28 00:53:27.739582 kubelet[3369]: I0128 00:53:27.739432 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c4117fa-b2ef-44b0-ba92-fce7f41f09f5-lib-modules\") pod \"cilium-4n6q5\" (UID: \"6c4117fa-b2ef-44b0-ba92-fce7f41f09f5\") " pod="kube-system/cilium-4n6q5" Jan 28 00:53:27.739582 kubelet[3369]: I0128 00:53:27.739442 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c4117fa-b2ef-44b0-ba92-fce7f41f09f5-clustermesh-secrets\") pod \"cilium-4n6q5\" (UID: \"6c4117fa-b2ef-44b0-ba92-fce7f41f09f5\") " pod="kube-system/cilium-4n6q5" Jan 28 00:53:27.739582 kubelet[3369]: I0128 00:53:27.739452 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6c4117fa-b2ef-44b0-ba92-fce7f41f09f5-cilium-ipsec-secrets\") pod \"cilium-4n6q5\" (UID: \"6c4117fa-b2ef-44b0-ba92-fce7f41f09f5\") " pod="kube-system/cilium-4n6q5" Jan 28 00:53:27.739582 kubelet[3369]: I0128 00:53:27.739467 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/6c4117fa-b2ef-44b0-ba92-fce7f41f09f5-host-proc-sys-kernel\") pod \"cilium-4n6q5\" (UID: \"6c4117fa-b2ef-44b0-ba92-fce7f41f09f5\") " pod="kube-system/cilium-4n6q5" Jan 28 00:53:27.739582 kubelet[3369]: I0128 00:53:27.739479 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c4117fa-b2ef-44b0-ba92-fce7f41f09f5-cilium-cgroup\") pod \"cilium-4n6q5\" (UID: \"6c4117fa-b2ef-44b0-ba92-fce7f41f09f5\") " pod="kube-system/cilium-4n6q5" Jan 28 00:53:27.740345 kubelet[3369]: I0128 00:53:27.740293 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c4117fa-b2ef-44b0-ba92-fce7f41f09f5-xtables-lock\") pod \"cilium-4n6q5\" (UID: \"6c4117fa-b2ef-44b0-ba92-fce7f41f09f5\") " pod="kube-system/cilium-4n6q5" Jan 28 00:53:27.740408 kubelet[3369]: I0128 00:53:27.740351 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blmb7\" (UniqueName: \"kubernetes.io/projected/6c4117fa-b2ef-44b0-ba92-fce7f41f09f5-kube-api-access-blmb7\") pod \"cilium-4n6q5\" (UID: \"6c4117fa-b2ef-44b0-ba92-fce7f41f09f5\") " pod="kube-system/cilium-4n6q5" Jan 28 00:53:27.740408 kubelet[3369]: I0128 00:53:27.740368 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c4117fa-b2ef-44b0-ba92-fce7f41f09f5-cilium-config-path\") pod \"cilium-4n6q5\" (UID: \"6c4117fa-b2ef-44b0-ba92-fce7f41f09f5\") " pod="kube-system/cilium-4n6q5" Jan 28 00:53:27.804781 systemd[1]: Started sshd@23-10.200.20.13:22-10.200.16.10:40834.service - OpenSSH per-connection server daemon (10.200.16.10:40834). 
Jan 28 00:53:28.016920 containerd[1885]: time="2026-01-28T00:53:28.016793607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4n6q5,Uid:6c4117fa-b2ef-44b0-ba92-fce7f41f09f5,Namespace:kube-system,Attempt:0,}"
Jan 28 00:53:28.043212 containerd[1885]: time="2026-01-28T00:53:28.043172363Z" level=info msg="connecting to shim 96da44c397314a9910b04ea81f30a1fed4a0e21728a750adab725c2f2b30d204" address="unix:///run/containerd/s/7ce3b6dad97535163871eb63559168ee1eeb2833e753056b417235266f73dcd7" namespace=k8s.io protocol=ttrpc version=3
Jan 28 00:53:28.064649 systemd[1]: Started cri-containerd-96da44c397314a9910b04ea81f30a1fed4a0e21728a750adab725c2f2b30d204.scope - libcontainer container 96da44c397314a9910b04ea81f30a1fed4a0e21728a750adab725c2f2b30d204.
Jan 28 00:53:28.086864 containerd[1885]: time="2026-01-28T00:53:28.086830541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4n6q5,Uid:6c4117fa-b2ef-44b0-ba92-fce7f41f09f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"96da44c397314a9910b04ea81f30a1fed4a0e21728a750adab725c2f2b30d204\""
Jan 28 00:53:28.089665 containerd[1885]: time="2026-01-28T00:53:28.089585898Z" level=info msg="CreateContainer within sandbox \"96da44c397314a9910b04ea81f30a1fed4a0e21728a750adab725c2f2b30d204\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 28 00:53:28.104569 containerd[1885]: time="2026-01-28T00:53:28.104285583Z" level=info msg="Container 5e91baf7e0c1032e18163952103acd13f7f5301734ade3b14987447c3eb12f76: CDI devices from CRI Config.CDIDevices: []"
Jan 28 00:53:28.116019 containerd[1885]: time="2026-01-28T00:53:28.115989127Z" level=info msg="CreateContainer within sandbox \"96da44c397314a9910b04ea81f30a1fed4a0e21728a750adab725c2f2b30d204\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5e91baf7e0c1032e18163952103acd13f7f5301734ade3b14987447c3eb12f76\""
Jan 28 00:53:28.117578 containerd[1885]: time="2026-01-28T00:53:28.117188385Z" level=info msg="StartContainer for \"5e91baf7e0c1032e18163952103acd13f7f5301734ade3b14987447c3eb12f76\""
Jan 28 00:53:28.118305 containerd[1885]: time="2026-01-28T00:53:28.118280144Z" level=info msg="connecting to shim 5e91baf7e0c1032e18163952103acd13f7f5301734ade3b14987447c3eb12f76" address="unix:///run/containerd/s/7ce3b6dad97535163871eb63559168ee1eeb2833e753056b417235266f73dcd7" protocol=ttrpc version=3
Jan 28 00:53:28.134630 systemd[1]: Started cri-containerd-5e91baf7e0c1032e18163952103acd13f7f5301734ade3b14987447c3eb12f76.scope - libcontainer container 5e91baf7e0c1032e18163952103acd13f7f5301734ade3b14987447c3eb12f76.
Jan 28 00:53:28.164762 containerd[1885]: time="2026-01-28T00:53:28.164724567Z" level=info msg="StartContainer for \"5e91baf7e0c1032e18163952103acd13f7f5301734ade3b14987447c3eb12f76\" returns successfully"
Jan 28 00:53:28.168163 kubelet[3369]: E0128 00:53:28.168128 3369 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 28 00:53:28.169136 systemd[1]: cri-containerd-5e91baf7e0c1032e18163952103acd13f7f5301734ade3b14987447c3eb12f76.scope: Deactivated successfully.
Jan 28 00:53:28.170459 containerd[1885]: time="2026-01-28T00:53:28.170414791Z" level=info msg="received container exit event container_id:\"5e91baf7e0c1032e18163952103acd13f7f5301734ade3b14987447c3eb12f76\" id:\"5e91baf7e0c1032e18163952103acd13f7f5301734ade3b14987447c3eb12f76\" pid:5126 exited_at:{seconds:1769561608 nanos:169792829}"
Jan 28 00:53:28.258053 sshd[5061]: Accepted publickey for core from 10.200.16.10 port 40834 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:53:28.259212 sshd-session[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:53:28.263294 systemd-logind[1868]: New session 26 of user core.
Jan 28 00:53:28.270636 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 28 00:53:28.364969 containerd[1885]: time="2026-01-28T00:53:28.364929498Z" level=info msg="CreateContainer within sandbox \"96da44c397314a9910b04ea81f30a1fed4a0e21728a750adab725c2f2b30d204\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 28 00:53:28.377045 containerd[1885]: time="2026-01-28T00:53:28.377001333Z" level=info msg="Container 1ebcdc575eb3f60cde819f9186573c5ace41283a28bfb5a8a0f176244b35172f: CDI devices from CRI Config.CDIDevices: []"
Jan 28 00:53:28.390191 containerd[1885]: time="2026-01-28T00:53:28.390078916Z" level=info msg="CreateContainer within sandbox \"96da44c397314a9910b04ea81f30a1fed4a0e21728a750adab725c2f2b30d204\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1ebcdc575eb3f60cde819f9186573c5ace41283a28bfb5a8a0f176244b35172f\""
Jan 28 00:53:28.390743 containerd[1885]: time="2026-01-28T00:53:28.390708406Z" level=info msg="StartContainer for \"1ebcdc575eb3f60cde819f9186573c5ace41283a28bfb5a8a0f176244b35172f\""
Jan 28 00:53:28.391362 containerd[1885]: time="2026-01-28T00:53:28.391328431Z" level=info msg="connecting to shim 1ebcdc575eb3f60cde819f9186573c5ace41283a28bfb5a8a0f176244b35172f" address="unix:///run/containerd/s/7ce3b6dad97535163871eb63559168ee1eeb2833e753056b417235266f73dcd7" protocol=ttrpc version=3
Jan 28 00:53:28.409651 systemd[1]: Started cri-containerd-1ebcdc575eb3f60cde819f9186573c5ace41283a28bfb5a8a0f176244b35172f.scope - libcontainer container 1ebcdc575eb3f60cde819f9186573c5ace41283a28bfb5a8a0f176244b35172f.
Jan 28 00:53:28.435113 containerd[1885]: time="2026-01-28T00:53:28.435079579Z" level=info msg="StartContainer for \"1ebcdc575eb3f60cde819f9186573c5ace41283a28bfb5a8a0f176244b35172f\" returns successfully"
Jan 28 00:53:28.437478 systemd[1]: cri-containerd-1ebcdc575eb3f60cde819f9186573c5ace41283a28bfb5a8a0f176244b35172f.scope: Deactivated successfully.
Jan 28 00:53:28.438980 containerd[1885]: time="2026-01-28T00:53:28.438940448Z" level=info msg="received container exit event container_id:\"1ebcdc575eb3f60cde819f9186573c5ace41283a28bfb5a8a0f176244b35172f\" id:\"1ebcdc575eb3f60cde819f9186573c5ace41283a28bfb5a8a0f176244b35172f\" pid:5175 exited_at:{seconds:1769561608 nanos:438683208}"
Jan 28 00:53:28.582544 sshd[5160]: Connection closed by 10.200.16.10 port 40834
Jan 28 00:53:28.581439 sshd-session[5061]: pam_unix(sshd:session): session closed for user core
Jan 28 00:53:28.584988 systemd-logind[1868]: Session 26 logged out. Waiting for processes to exit.
Jan 28 00:53:28.585171 systemd[1]: sshd@23-10.200.20.13:22-10.200.16.10:40834.service: Deactivated successfully.
Jan 28 00:53:28.587056 systemd[1]: session-26.scope: Deactivated successfully.
Jan 28 00:53:28.589075 systemd-logind[1868]: Removed session 26.
Jan 28 00:53:28.664455 systemd[1]: Started sshd@24-10.200.20.13:22-10.200.16.10:40848.service - OpenSSH per-connection server daemon (10.200.16.10:40848).
Jan 28 00:53:29.120515 sshd[5213]: Accepted publickey for core from 10.200.16.10 port 40848 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:53:29.121692 sshd-session[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:53:29.128213 systemd-logind[1868]: New session 27 of user core.
Jan 28 00:53:29.132269 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 28 00:53:29.367923 containerd[1885]: time="2026-01-28T00:53:29.367619802Z" level=info msg="CreateContainer within sandbox \"96da44c397314a9910b04ea81f30a1fed4a0e21728a750adab725c2f2b30d204\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 28 00:53:29.390181 containerd[1885]: time="2026-01-28T00:53:29.389441502Z" level=info msg="Container 2a6534d3bb463a8ce2e841dd6ce836f175c973543b6596bf8cf02a76ce68c974: CDI devices from CRI Config.CDIDevices: []"
Jan 28 00:53:29.392132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount919139630.mount: Deactivated successfully.
Jan 28 00:53:29.412166 containerd[1885]: time="2026-01-28T00:53:29.412022336Z" level=info msg="CreateContainer within sandbox \"96da44c397314a9910b04ea81f30a1fed4a0e21728a750adab725c2f2b30d204\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2a6534d3bb463a8ce2e841dd6ce836f175c973543b6596bf8cf02a76ce68c974\""
Jan 28 00:53:29.413264 containerd[1885]: time="2026-01-28T00:53:29.413237914Z" level=info msg="StartContainer for \"2a6534d3bb463a8ce2e841dd6ce836f175c973543b6596bf8cf02a76ce68c974\""
Jan 28 00:53:29.415073 containerd[1885]: time="2026-01-28T00:53:29.414981195Z" level=info msg="connecting to shim 2a6534d3bb463a8ce2e841dd6ce836f175c973543b6596bf8cf02a76ce68c974" address="unix:///run/containerd/s/7ce3b6dad97535163871eb63559168ee1eeb2833e753056b417235266f73dcd7" protocol=ttrpc version=3
Jan 28 00:53:29.432656 systemd[1]: Started cri-containerd-2a6534d3bb463a8ce2e841dd6ce836f175c973543b6596bf8cf02a76ce68c974.scope - libcontainer container 2a6534d3bb463a8ce2e841dd6ce836f175c973543b6596bf8cf02a76ce68c974.
Jan 28 00:53:29.487932 systemd[1]: cri-containerd-2a6534d3bb463a8ce2e841dd6ce836f175c973543b6596bf8cf02a76ce68c974.scope: Deactivated successfully.
Jan 28 00:53:29.490873 containerd[1885]: time="2026-01-28T00:53:29.490798603Z" level=info msg="received container exit event container_id:\"2a6534d3bb463a8ce2e841dd6ce836f175c973543b6596bf8cf02a76ce68c974\" id:\"2a6534d3bb463a8ce2e841dd6ce836f175c973543b6596bf8cf02a76ce68c974\" pid:5235 exited_at:{seconds:1769561609 nanos:490398016}"
Jan 28 00:53:29.491205 containerd[1885]: time="2026-01-28T00:53:29.491137460Z" level=info msg="StartContainer for \"2a6534d3bb463a8ce2e841dd6ce836f175c973543b6596bf8cf02a76ce68c974\" returns successfully"
Jan 28 00:53:29.510051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a6534d3bb463a8ce2e841dd6ce836f175c973543b6596bf8cf02a76ce68c974-rootfs.mount: Deactivated successfully.
Jan 28 00:53:30.371592 containerd[1885]: time="2026-01-28T00:53:30.371539141Z" level=info msg="CreateContainer within sandbox \"96da44c397314a9910b04ea81f30a1fed4a0e21728a750adab725c2f2b30d204\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 28 00:53:30.391357 containerd[1885]: time="2026-01-28T00:53:30.390683558Z" level=info msg="Container 1b49ca8df0ecbab83e8f0ea40db50383c2b675a15e6883363eb55a8dc43b67a6: CDI devices from CRI Config.CDIDevices: []"
Jan 28 00:53:30.402573 containerd[1885]: time="2026-01-28T00:53:30.402536130Z" level=info msg="CreateContainer within sandbox \"96da44c397314a9910b04ea81f30a1fed4a0e21728a750adab725c2f2b30d204\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1b49ca8df0ecbab83e8f0ea40db50383c2b675a15e6883363eb55a8dc43b67a6\""
Jan 28 00:53:30.403165 containerd[1885]: time="2026-01-28T00:53:30.403120379Z" level=info msg="StartContainer for \"1b49ca8df0ecbab83e8f0ea40db50383c2b675a15e6883363eb55a8dc43b67a6\""
Jan 28 00:53:30.403979 containerd[1885]: time="2026-01-28T00:53:30.403943146Z" level=info msg="connecting to shim 1b49ca8df0ecbab83e8f0ea40db50383c2b675a15e6883363eb55a8dc43b67a6" address="unix:///run/containerd/s/7ce3b6dad97535163871eb63559168ee1eeb2833e753056b417235266f73dcd7" protocol=ttrpc version=3
Jan 28 00:53:30.426992 systemd[1]: Started cri-containerd-1b49ca8df0ecbab83e8f0ea40db50383c2b675a15e6883363eb55a8dc43b67a6.scope - libcontainer container 1b49ca8df0ecbab83e8f0ea40db50383c2b675a15e6883363eb55a8dc43b67a6.
Jan 28 00:53:30.448993 systemd[1]: cri-containerd-1b49ca8df0ecbab83e8f0ea40db50383c2b675a15e6883363eb55a8dc43b67a6.scope: Deactivated successfully.
Jan 28 00:53:30.452728 containerd[1885]: time="2026-01-28T00:53:30.452610782Z" level=info msg="received container exit event container_id:\"1b49ca8df0ecbab83e8f0ea40db50383c2b675a15e6883363eb55a8dc43b67a6\" id:\"1b49ca8df0ecbab83e8f0ea40db50383c2b675a15e6883363eb55a8dc43b67a6\" pid:5275 exited_at:{seconds:1769561610 nanos:450699721}"
Jan 28 00:53:30.465152 containerd[1885]: time="2026-01-28T00:53:30.465020802Z" level=info msg="StartContainer for \"1b49ca8df0ecbab83e8f0ea40db50383c2b675a15e6883363eb55a8dc43b67a6\" returns successfully"
Jan 28 00:53:30.476599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b49ca8df0ecbab83e8f0ea40db50383c2b675a15e6883363eb55a8dc43b67a6-rootfs.mount: Deactivated successfully.
Jan 28 00:53:31.378162 containerd[1885]: time="2026-01-28T00:53:31.378120079Z" level=info msg="CreateContainer within sandbox \"96da44c397314a9910b04ea81f30a1fed4a0e21728a750adab725c2f2b30d204\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 28 00:53:31.400122 containerd[1885]: time="2026-01-28T00:53:31.399555304Z" level=info msg="Container aca2d2b32ccfca2df2fd4bef27364fcb7fbb01465d3c8d4062e83c43338e9fa0: CDI devices from CRI Config.CDIDevices: []"
Jan 28 00:53:31.413353 containerd[1885]: time="2026-01-28T00:53:31.413307313Z" level=info msg="CreateContainer within sandbox \"96da44c397314a9910b04ea81f30a1fed4a0e21728a750adab725c2f2b30d204\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"aca2d2b32ccfca2df2fd4bef27364fcb7fbb01465d3c8d4062e83c43338e9fa0\""
Jan 28 00:53:31.414126 containerd[1885]: time="2026-01-28T00:53:31.414014213Z" level=info msg="StartContainer for \"aca2d2b32ccfca2df2fd4bef27364fcb7fbb01465d3c8d4062e83c43338e9fa0\""
Jan 28 00:53:31.415502 containerd[1885]: time="2026-01-28T00:53:31.415469774Z" level=info msg="connecting to shim aca2d2b32ccfca2df2fd4bef27364fcb7fbb01465d3c8d4062e83c43338e9fa0" address="unix:///run/containerd/s/7ce3b6dad97535163871eb63559168ee1eeb2833e753056b417235266f73dcd7" protocol=ttrpc version=3
Jan 28 00:53:31.433632 systemd[1]: Started cri-containerd-aca2d2b32ccfca2df2fd4bef27364fcb7fbb01465d3c8d4062e83c43338e9fa0.scope - libcontainer container aca2d2b32ccfca2df2fd4bef27364fcb7fbb01465d3c8d4062e83c43338e9fa0.
Jan 28 00:53:31.478440 containerd[1885]: time="2026-01-28T00:53:31.478401843Z" level=info msg="StartContainer for \"aca2d2b32ccfca2df2fd4bef27364fcb7fbb01465d3c8d4062e83c43338e9fa0\" returns successfully"
Jan 28 00:53:31.824520 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 28 00:53:32.397306 kubelet[3369]: I0128 00:53:32.397230 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4n6q5" podStartSLOduration=5.397197438 podStartE2EDuration="5.397197438s" podCreationTimestamp="2026-01-28 00:53:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:53:32.395678252 +0000 UTC m=+119.407408245" watchObservedRunningTime="2026-01-28 00:53:32.397197438 +0000 UTC m=+119.408927432"
Jan 28 00:53:33.089011 containerd[1885]: time="2026-01-28T00:53:33.088801880Z" level=info msg="StopPodSandbox for \"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\""
Jan 28 00:53:33.089011 containerd[1885]: time="2026-01-28T00:53:33.088953340Z" level=info msg="TearDown network for sandbox \"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\" successfully"
Jan 28 00:53:33.089011 containerd[1885]: time="2026-01-28T00:53:33.088965212Z" level=info msg="StopPodSandbox for \"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\" returns successfully"
Jan 28 00:53:33.089772 containerd[1885]: time="2026-01-28T00:53:33.089557533Z" level=info msg="RemovePodSandbox for \"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\""
Jan 28 00:53:33.089772 containerd[1885]: time="2026-01-28T00:53:33.089593286Z" level=info msg="Forcibly stopping sandbox \"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\""
Jan 28 00:53:33.089772 containerd[1885]: time="2026-01-28T00:53:33.089682320Z" level=info msg="TearDown network for sandbox \"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\" successfully"
Jan 28 00:53:33.090717 containerd[1885]: time="2026-01-28T00:53:33.090691789Z" level=info msg="Ensure that sandbox c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174 in task-service has been cleanup successfully"
Jan 28 00:53:33.110962 containerd[1885]: time="2026-01-28T00:53:33.110914812Z" level=info msg="RemovePodSandbox \"c352838e9bf2cd0643a54704289eae6379a496f173529f75734a00dc972cf174\" returns successfully"
Jan 28 00:53:33.111458 containerd[1885]: time="2026-01-28T00:53:33.111427146Z" level=info msg="StopPodSandbox for \"d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258\""
Jan 28 00:53:33.111672 containerd[1885]: time="2026-01-28T00:53:33.111626760Z" level=info msg="TearDown network for sandbox \"d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258\" successfully"
Jan 28 00:53:33.111672 containerd[1885]: time="2026-01-28T00:53:33.111642448Z" level=info msg="StopPodSandbox for \"d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258\" returns successfully"
Jan 28 00:53:33.112368 containerd[1885]: time="2026-01-28T00:53:33.112084749Z" level=info msg="RemovePodSandbox for \"d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258\""
Jan 28 00:53:33.112368 containerd[1885]: time="2026-01-28T00:53:33.112126510Z" level=info msg="Forcibly stopping sandbox \"d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258\""
Jan 28 00:53:33.112368 containerd[1885]: time="2026-01-28T00:53:33.112203128Z" level=info msg="TearDown network for sandbox \"d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258\" successfully"
Jan 28 00:53:33.113068 containerd[1885]: time="2026-01-28T00:53:33.113034703Z" level=info msg="Ensure that sandbox d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258 in task-service has been cleanup successfully"
Jan 28 00:53:33.312282 containerd[1885]: time="2026-01-28T00:53:33.312189104Z" level=info msg="RemovePodSandbox \"d105ff39aca765455f07905afa60f08cf19f668215bc4f9686010c2110073258\" returns successfully"
Jan 28 00:53:33.688141 kubelet[3369]: E0128 00:53:33.687814 3369 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:44890->127.0.0.1:36243: write tcp 127.0.0.1:44890->127.0.0.1:36243: write: broken pipe
Jan 28 00:53:34.310904 systemd-networkd[1477]: lxc_health: Link UP
Jan 28 00:53:34.318985 systemd-networkd[1477]: lxc_health: Gained carrier
Jan 28 00:53:35.667691 systemd-networkd[1477]: lxc_health: Gained IPv6LL
Jan 28 00:53:37.866411 kubelet[3369]: E0128 00:53:37.866372 3369 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40768->127.0.0.1:36243: write tcp 127.0.0.1:40768->127.0.0.1:36243: write: broken pipe
Jan 28 00:53:42.116100 sshd[5216]: Connection closed by 10.200.16.10 port 40848
Jan 28 00:53:42.115328 sshd-session[5213]: pam_unix(sshd:session): session closed for user core
Jan 28 00:53:42.119052 systemd[1]: sshd@24-10.200.20.13:22-10.200.16.10:40848.service: Deactivated successfully.
Jan 28 00:53:42.121215 systemd[1]: session-27.scope: Deactivated successfully.
Jan 28 00:53:42.122644 systemd-logind[1868]: Session 27 logged out. Waiting for processes to exit.
Jan 28 00:53:42.124382 systemd-logind[1868]: Removed session 27.