Jan 15 23:43:57.066051 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Jan 15 23:43:57.066069 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Jan 15 22:06:59 -00 2026
Jan 15 23:43:57.066075 kernel: KASLR enabled
Jan 15 23:43:57.066079 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 15 23:43:57.066083 kernel: printk: legacy bootconsole [pl11] enabled
Jan 15 23:43:57.066088 kernel: efi: EFI v2.7 by EDK II
Jan 15 23:43:57.066093 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db83598
Jan 15 23:43:57.066097 kernel: random: crng init done
Jan 15 23:43:57.066101 kernel: secureboot: Secure boot disabled
Jan 15 23:43:57.066105 kernel: ACPI: Early table checksum verification disabled
Jan 15 23:43:57.066109 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Jan 15 23:43:57.066113 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:43:57.066117 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:43:57.066121 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 15 23:43:57.066127 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:43:57.066131 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:43:57.066148 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:43:57.066153 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:43:57.066157 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:43:57.066162 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:43:57.066166 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 15 23:43:57.066170 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:43:57.066175 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 15 23:43:57.066179 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 15 23:43:57.066183 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 15 23:43:57.066187 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Jan 15 23:43:57.066191 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Jan 15 23:43:57.066195 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 15 23:43:57.066199 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 15 23:43:57.066204 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 15 23:43:57.066209 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 15 23:43:57.066213 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 15 23:43:57.066217 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 15 23:43:57.066221 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 15 23:43:57.066225 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 15 23:43:57.066229 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 15 23:43:57.066233 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Jan 15 23:43:57.066237 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Jan 15 23:43:57.066241 kernel: Zone ranges:
Jan 15 23:43:57.066246 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 15 23:43:57.066252 kernel: DMA32 empty
Jan 15 23:43:57.066257 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 15 23:43:57.066261 kernel: Device empty
Jan 15 23:43:57.066265 kernel: Movable zone start for each node
Jan 15 23:43:57.066270 kernel: Early memory node ranges
Jan 15 23:43:57.066274 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 15 23:43:57.066279 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Jan 15 23:43:57.066284 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Jan 15 23:43:57.066288 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Jan 15 23:43:57.066292 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Jan 15 23:43:57.066296 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Jan 15 23:43:57.066301 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 15 23:43:57.066305 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 15 23:43:57.066309 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 15 23:43:57.066314 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Jan 15 23:43:57.066318 kernel: psci: probing for conduit method from ACPI.
Jan 15 23:43:57.066322 kernel: psci: PSCIv1.3 detected in firmware.
Jan 15 23:43:57.066327 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 15 23:43:57.066332 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 15 23:43:57.066336 kernel: psci: SMC Calling Convention v1.4
Jan 15 23:43:57.066340 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 15 23:43:57.066345 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 15 23:43:57.066349 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 15 23:43:57.066353 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 15 23:43:57.066358 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 15 23:43:57.066362 kernel: Detected PIPT I-cache on CPU0
Jan 15 23:43:57.066367 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Jan 15 23:43:57.066371 kernel: CPU features: detected: GIC system register CPU interface
Jan 15 23:43:57.066375 kernel: CPU features: detected: Spectre-v4
Jan 15 23:43:57.066380 kernel: CPU features: detected: Spectre-BHB
Jan 15 23:43:57.066385 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 15 23:43:57.066389 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 15 23:43:57.066394 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Jan 15 23:43:57.066398 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 15 23:43:57.066402 kernel: alternatives: applying boot alternatives
Jan 15 23:43:57.066408 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=83f7d443283b2e87b6283ab8b3252eb2d2356b218981a63efeb3e370fba6f971
Jan 15 23:43:57.066412 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 15 23:43:57.066417 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 15 23:43:57.066421 kernel: Fallback order for Node 0: 0
Jan 15 23:43:57.066426 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Jan 15 23:43:57.066431 kernel: Policy zone: Normal
Jan 15 23:43:57.066435 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 15 23:43:57.066439 kernel: software IO TLB: area num 2.
Jan 15 23:43:57.066444 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB)
Jan 15 23:43:57.066448 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 15 23:43:57.066452 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 15 23:43:57.066457 kernel: rcu: RCU event tracing is enabled.
Jan 15 23:43:57.066462 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 15 23:43:57.066466 kernel: Trampoline variant of Tasks RCU enabled.
Jan 15 23:43:57.066471 kernel: Tracing variant of Tasks RCU enabled.
Jan 15 23:43:57.066475 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 15 23:43:57.066479 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 15 23:43:57.066485 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 15 23:43:57.066489 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 15 23:43:57.066494 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 15 23:43:57.066498 kernel: GICv3: 960 SPIs implemented
Jan 15 23:43:57.066502 kernel: GICv3: 0 Extended SPIs implemented
Jan 15 23:43:57.066506 kernel: Root IRQ handler: gic_handle_irq
Jan 15 23:43:57.066511 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jan 15 23:43:57.066515 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Jan 15 23:43:57.066519 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 15 23:43:57.066524 kernel: ITS: No ITS available, not enabling LPIs
Jan 15 23:43:57.066528 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 15 23:43:57.066533 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Jan 15 23:43:57.066538 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 15 23:43:57.066542 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Jan 15 23:43:57.066547 kernel: Console: colour dummy device 80x25
Jan 15 23:43:57.066552 kernel: printk: legacy console [tty1] enabled
Jan 15 23:43:57.066556 kernel: ACPI: Core revision 20240827
Jan 15 23:43:57.066561 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Jan 15 23:43:57.066565 kernel: pid_max: default: 32768 minimum: 301
Jan 15 23:43:57.066570 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 15 23:43:57.066574 kernel: landlock: Up and running.
Jan 15 23:43:57.066579 kernel: SELinux: Initializing.
Jan 15 23:43:57.066584 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 15 23:43:57.066588 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 15 23:43:57.066593 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Jan 15 23:43:57.066598 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0
Jan 15 23:43:57.066605 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 15 23:43:57.066611 kernel: rcu: Hierarchical SRCU implementation.
Jan 15 23:43:57.066616 kernel: rcu: Max phase no-delay instances is 400.
Jan 15 23:43:57.066620 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 15 23:43:57.066625 kernel: Remapping and enabling EFI services.
Jan 15 23:43:57.066630 kernel: smp: Bringing up secondary CPUs ...
Jan 15 23:43:57.066635 kernel: Detected PIPT I-cache on CPU1
Jan 15 23:43:57.066640 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 15 23:43:57.066645 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Jan 15 23:43:57.066650 kernel: smp: Brought up 1 node, 2 CPUs
Jan 15 23:43:57.066654 kernel: SMP: Total of 2 processors activated.
Jan 15 23:43:57.066659 kernel: CPU: All CPU(s) started at EL1
Jan 15 23:43:57.066665 kernel: CPU features: detected: 32-bit EL0 Support
Jan 15 23:43:57.066670 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 15 23:43:57.066675 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 15 23:43:57.066679 kernel: CPU features: detected: Common not Private translations
Jan 15 23:43:57.066684 kernel: CPU features: detected: CRC32 instructions
Jan 15 23:43:57.066689 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Jan 15 23:43:57.066694 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 15 23:43:57.066698 kernel: CPU features: detected: LSE atomic instructions
Jan 15 23:43:57.066703 kernel: CPU features: detected: Privileged Access Never
Jan 15 23:43:57.066708 kernel: CPU features: detected: Speculation barrier (SB)
Jan 15 23:43:57.066713 kernel: CPU features: detected: TLB range maintenance instructions
Jan 15 23:43:57.066718 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 15 23:43:57.066723 kernel: CPU features: detected: Scalable Vector Extension
Jan 15 23:43:57.066727 kernel: alternatives: applying system-wide alternatives
Jan 15 23:43:57.066732 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jan 15 23:43:57.066737 kernel: SVE: maximum available vector length 16 bytes per vector
Jan 15 23:43:57.066741 kernel: SVE: default vector length 16 bytes per vector
Jan 15 23:43:57.066746 kernel: Memory: 3952828K/4194160K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved)
Jan 15 23:43:57.066752 kernel: devtmpfs: initialized
Jan 15 23:43:57.066757 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 15 23:43:57.066761 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 15 23:43:57.066766 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 15 23:43:57.066771 kernel: 0 pages in range for non-PLT usage
Jan 15 23:43:57.066775 kernel: 508400 pages in range for PLT usage
Jan 15 23:43:57.066780 kernel: pinctrl core: initialized pinctrl subsystem
Jan 15 23:43:57.066785 kernel: SMBIOS 3.1.0 present.
Jan 15 23:43:57.066791 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Jan 15 23:43:57.066795 kernel: DMI: Memory slots populated: 2/2
Jan 15 23:43:57.066800 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 15 23:43:57.066805 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 15 23:43:57.066809 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 15 23:43:57.066814 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 15 23:43:57.066819 kernel: audit: initializing netlink subsys (disabled)
Jan 15 23:43:57.066824 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Jan 15 23:43:57.066828 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 15 23:43:57.066834 kernel: cpuidle: using governor menu
Jan 15 23:43:57.066839 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 15 23:43:57.066843 kernel: ASID allocator initialised with 32768 entries
Jan 15 23:43:57.066848 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 15 23:43:57.066853 kernel: Serial: AMBA PL011 UART driver
Jan 15 23:43:57.066858 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 15 23:43:57.066862 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 15 23:43:57.066867 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 15 23:43:57.066872 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 15 23:43:57.066877 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 15 23:43:57.066882 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 15 23:43:57.066887 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 15 23:43:57.066891 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 15 23:43:57.066896 kernel: ACPI: Added _OSI(Module Device)
Jan 15 23:43:57.066901 kernel: ACPI: Added _OSI(Processor Device)
Jan 15 23:43:57.066905 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 15 23:43:57.066910 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 15 23:43:57.066915 kernel: ACPI: Interpreter enabled
Jan 15 23:43:57.066920 kernel: ACPI: Using GIC for interrupt routing
Jan 15 23:43:57.066925 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 15 23:43:57.066930 kernel: printk: legacy console [ttyAMA0] enabled
Jan 15 23:43:57.066935 kernel: printk: legacy bootconsole [pl11] disabled
Jan 15 23:43:57.066939 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 15 23:43:57.066944 kernel: ACPI: CPU0 has been hot-added
Jan 15 23:43:57.066949 kernel: ACPI: CPU1 has been hot-added
Jan 15 23:43:57.066953 kernel: iommu: Default domain type: Translated
Jan 15 23:43:57.066958 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 15 23:43:57.066964 kernel: efivars: Registered efivars operations
Jan 15 23:43:57.066968 kernel: vgaarb: loaded
Jan 15 23:43:57.066973 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 15 23:43:57.066978 kernel: VFS: Disk quotas dquot_6.6.0
Jan 15 23:43:57.066983 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 15 23:43:57.066987 kernel: pnp: PnP ACPI init
Jan 15 23:43:57.066992 kernel: pnp: PnP ACPI: found 0 devices
Jan 15 23:43:57.066997 kernel: NET: Registered PF_INET protocol family
Jan 15 23:43:57.067001 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 15 23:43:57.067006 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 15 23:43:57.067012 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 15 23:43:57.067017 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 15 23:43:57.067021 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 15 23:43:57.067026 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 15 23:43:57.067031 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 15 23:43:57.067036 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 15 23:43:57.067040 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 15 23:43:57.067045 kernel: PCI: CLS 0 bytes, default 64
Jan 15 23:43:57.067050 kernel: kvm [1]: HYP mode not available
Jan 15 23:43:57.067056 kernel: Initialise system trusted keyrings
Jan 15 23:43:57.067060 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 15 23:43:57.067065 kernel: Key type asymmetric registered
Jan 15 23:43:57.067070 kernel: Asymmetric key parser 'x509' registered
Jan 15 23:43:57.067075 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jan 15 23:43:57.067079 kernel: io scheduler mq-deadline registered
Jan 15 23:43:57.067084 kernel: io scheduler kyber registered
Jan 15 23:43:57.067089 kernel: io scheduler bfq registered
Jan 15 23:43:57.067094 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 15 23:43:57.067099 kernel: thunder_xcv, ver 1.0
Jan 15 23:43:57.067104 kernel: thunder_bgx, ver 1.0
Jan 15 23:43:57.067108 kernel: nicpf, ver 1.0
Jan 15 23:43:57.067113 kernel: nicvf, ver 1.0
Jan 15 23:43:57.067223 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 15 23:43:57.067275 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-15T23:43:56 UTC (1768520636)
Jan 15 23:43:57.067281 kernel: efifb: probing for efifb
Jan 15 23:43:57.067288 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 15 23:43:57.067293 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 15 23:43:57.067297 kernel: efifb: scrolling: redraw
Jan 15 23:43:57.067302 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 15 23:43:57.067307 kernel: Console: switching to colour frame buffer device 128x48
Jan 15 23:43:57.067312 kernel: fb0: EFI VGA frame buffer device
Jan 15 23:43:57.067317 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 15 23:43:57.067321 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 15 23:43:57.067326 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jan 15 23:43:57.067332 kernel: watchdog: NMI not fully supported
Jan 15 23:43:57.067336 kernel: NET: Registered PF_INET6 protocol family
Jan 15 23:43:57.067341 kernel: watchdog: Hard watchdog permanently disabled
Jan 15 23:43:57.067346 kernel: Segment Routing with IPv6
Jan 15 23:43:57.067351 kernel: In-situ OAM (IOAM) with IPv6
Jan 15 23:43:57.067355 kernel: NET: Registered PF_PACKET protocol family
Jan 15 23:43:57.067360 kernel: Key type dns_resolver registered
Jan 15 23:43:57.067365 kernel: registered taskstats version 1
Jan 15 23:43:57.067369 kernel: Loading compiled-in X.509 certificates
Jan 15 23:43:57.067374 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: b110dfc7e70ecac41e34f52a0c530f0543b60d51'
Jan 15 23:43:57.067380 kernel: Demotion targets for Node 0: null
Jan 15 23:43:57.067385 kernel: Key type .fscrypt registered
Jan 15 23:43:57.067390 kernel: Key type fscrypt-provisioning registered
Jan 15 23:43:57.067394 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 15 23:43:57.067399 kernel: ima: Allocated hash algorithm: sha1
Jan 15 23:43:57.067404 kernel: ima: No architecture policies found
Jan 15 23:43:57.067409 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 15 23:43:57.067413 kernel: clk: Disabling unused clocks
Jan 15 23:43:57.067418 kernel: PM: genpd: Disabling unused power domains
Jan 15 23:43:57.067424 kernel: Warning: unable to open an initial console.
Jan 15 23:43:57.067428 kernel: Freeing unused kernel memory: 39552K
Jan 15 23:43:57.067433 kernel: Run /init as init process
Jan 15 23:43:57.067438 kernel: with arguments:
Jan 15 23:43:57.067442 kernel: /init
Jan 15 23:43:57.067447 kernel: with environment:
Jan 15 23:43:57.067451 kernel: HOME=/
Jan 15 23:43:57.067456 kernel: TERM=linux
Jan 15 23:43:57.067462 systemd[1]: Successfully made /usr/ read-only.
Jan 15 23:43:57.067469 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 15 23:43:57.067482 systemd[1]: Detected virtualization microsoft.
Jan 15 23:43:57.067487 systemd[1]: Detected architecture arm64.
Jan 15 23:43:57.067492 systemd[1]: Running in initrd.
Jan 15 23:43:57.067497 systemd[1]: No hostname configured, using default hostname.
Jan 15 23:43:57.067502 systemd[1]: Hostname set to .
Jan 15 23:43:57.067507 systemd[1]: Initializing machine ID from random generator.
Jan 15 23:43:57.067513 systemd[1]: Queued start job for default target initrd.target.
Jan 15 23:43:57.067519 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 15 23:43:57.067524 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 15 23:43:57.067530 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 15 23:43:57.067535 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 15 23:43:57.067540 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 15 23:43:57.067546 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 15 23:43:57.067553 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 15 23:43:57.067558 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 15 23:43:57.067563 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 15 23:43:57.067569 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 15 23:43:57.067574 systemd[1]: Reached target paths.target - Path Units.
Jan 15 23:43:57.067579 systemd[1]: Reached target slices.target - Slice Units.
Jan 15 23:43:57.067584 systemd[1]: Reached target swap.target - Swaps.
Jan 15 23:43:57.067589 systemd[1]: Reached target timers.target - Timer Units.
Jan 15 23:43:57.067595 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 15 23:43:57.067601 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 15 23:43:57.067606 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 15 23:43:57.067611 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 15 23:43:57.067616 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 15 23:43:57.067622 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 15 23:43:57.067627 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 15 23:43:57.067632 systemd[1]: Reached target sockets.target - Socket Units.
Jan 15 23:43:57.067637 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 15 23:43:57.067643 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 15 23:43:57.067648 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 15 23:43:57.067654 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 15 23:43:57.067659 systemd[1]: Starting systemd-fsck-usr.service...
Jan 15 23:43:57.067664 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 15 23:43:57.067669 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 15 23:43:57.067685 systemd-journald[226]: Collecting audit messages is disabled.
Jan 15 23:43:57.067699 systemd-journald[226]: Journal started
Jan 15 23:43:57.067713 systemd-journald[226]: Runtime Journal (/run/log/journal/1caa1fc8b3a94c0b9eecb45c958f56b5) is 8M, max 78.3M, 70.3M free.
Jan 15 23:43:57.075167 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 23:43:57.080056 systemd-modules-load[228]: Inserted module 'overlay'
Jan 15 23:43:57.096157 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 15 23:43:57.096185 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 15 23:43:57.106428 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 15 23:43:57.117437 kernel: Bridge firewalling registered
Jan 15 23:43:57.114861 systemd-modules-load[228]: Inserted module 'br_netfilter'
Jan 15 23:43:57.118374 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 15 23:43:57.129158 systemd[1]: Finished systemd-fsck-usr.service.
Jan 15 23:43:57.134614 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 15 23:43:57.142521 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 23:43:57.148809 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 15 23:43:57.170770 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 15 23:43:57.184742 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 15 23:43:57.196598 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 15 23:43:57.207463 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 15 23:43:57.212917 systemd-tmpfiles[259]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 15 23:43:57.217520 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 15 23:43:57.224595 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 15 23:43:57.236880 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 15 23:43:57.247774 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 15 23:43:57.270133 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 15 23:43:57.280856 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 15 23:43:57.298624 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 15 23:43:57.310075 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=83f7d443283b2e87b6283ab8b3252eb2d2356b218981a63efeb3e370fba6f971
Jan 15 23:43:57.340917 systemd-resolved[266]: Positive Trust Anchors:
Jan 15 23:43:57.340929 systemd-resolved[266]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 15 23:43:57.340948 systemd-resolved[266]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 15 23:43:57.343003 systemd-resolved[266]: Defaulting to hostname 'linux'.
Jan 15 23:43:57.344654 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 15 23:43:57.351484 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 15 23:43:57.456161 kernel: SCSI subsystem initialized
Jan 15 23:43:57.462150 kernel: Loading iSCSI transport class v2.0-870.
Jan 15 23:43:57.469194 kernel: iscsi: registered transport (tcp)
Jan 15 23:43:57.481970 kernel: iscsi: registered transport (qla4xxx)
Jan 15 23:43:57.481981 kernel: QLogic iSCSI HBA Driver
Jan 15 23:43:57.496234 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 15 23:43:57.519784 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 15 23:43:57.526147 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 15 23:43:57.576711 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 15 23:43:57.582225 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 15 23:43:57.644158 kernel: raid6: neonx8 gen() 18562 MB/s
Jan 15 23:43:57.661145 kernel: raid6: neonx4 gen() 18543 MB/s
Jan 15 23:43:57.680145 kernel: raid6: neonx2 gen() 17055 MB/s
Jan 15 23:43:57.699146 kernel: raid6: neonx1 gen() 15013 MB/s
Jan 15 23:43:57.718144 kernel: raid6: int64x8 gen() 10552 MB/s
Jan 15 23:43:57.737145 kernel: raid6: int64x4 gen() 10596 MB/s
Jan 15 23:43:57.757144 kernel: raid6: int64x2 gen() 8980 MB/s
Jan 15 23:43:57.778650 kernel: raid6: int64x1 gen() 7004 MB/s
Jan 15 23:43:57.778658 kernel: raid6: using algorithm neonx8 gen() 18562 MB/s
Jan 15 23:43:57.800428 kernel: raid6: .... xor() 14915 MB/s, rmw enabled
Jan 15 23:43:57.800435 kernel: raid6: using neon recovery algorithm
Jan 15 23:43:57.809270 kernel: xor: measuring software checksum speed
Jan 15 23:43:57.809279 kernel: 8regs : 28609 MB/sec
Jan 15 23:43:57.811778 kernel: 32regs : 28763 MB/sec
Jan 15 23:43:57.814279 kernel: arm64_neon : 37418 MB/sec
Jan 15 23:43:57.817239 kernel: xor: using function: arm64_neon (37418 MB/sec)
Jan 15 23:43:57.855155 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 15 23:43:57.860697 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 15 23:43:57.869673 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 15 23:43:57.902683 systemd-udevd[477]: Using default interface naming scheme 'v255'.
Jan 15 23:43:57.906966 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 15 23:43:57.920022 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 15 23:43:57.956448 dracut-pre-trigger[489]: rd.md=0: removing MD RAID activation
Jan 15 23:43:58.004829 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 15 23:43:58.014915 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 15 23:43:58.053018 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 15 23:43:58.064521 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 15 23:43:58.119164 kernel: hv_vmbus: Vmbus version:5.3 Jan 15 23:43:58.141335 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 23:43:58.174697 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 15 23:43:58.174719 kernel: hv_vmbus: registering driver hid_hyperv Jan 15 23:43:58.174727 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 15 23:43:58.174734 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 15 23:43:58.174740 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 15 23:43:58.179252 kernel: PTP clock support registered Jan 15 23:43:58.179265 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 15 23:43:58.141457 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 23:43:58.199181 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 15 23:43:58.199199 kernel: hv_vmbus: registering driver hv_netvsc Jan 15 23:43:58.168238 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 23:43:58.212263 kernel: hv_utils: Registering HyperV Utility Driver Jan 15 23:43:58.184607 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 23:43:58.227582 kernel: hv_vmbus: registering driver hv_storvsc Jan 15 23:43:58.227598 kernel: hv_vmbus: registering driver hv_utils Jan 15 23:43:58.227605 kernel: hv_utils: Heartbeat IC version 3.0 Jan 15 23:43:58.220107 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Jan 15 23:43:58.239514 kernel: hv_utils: Shutdown IC version 3.2 Jan 15 23:43:58.239530 kernel: scsi host1: storvsc_host_t Jan 15 23:43:58.222578 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 23:43:58.250765 kernel: hv_utils: TimeSync IC version 4.0 Jan 15 23:43:58.222649 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 23:43:58.580228 kernel: scsi host0: storvsc_host_t Jan 15 23:43:58.580382 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 15 23:43:58.246193 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 23:43:58.589240 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 15 23:43:58.570210 systemd-resolved[266]: Clock change detected. Flushing caches. Jan 15 23:43:58.604033 kernel: hv_netvsc 7ced8db6-fb72-7ced-8db6-fb727ced8db6 eth0: VF slot 1 added Jan 15 23:43:58.612814 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 15 23:43:58.612988 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 15 23:43:58.612996 kernel: hv_vmbus: registering driver hv_pci Jan 15 23:43:58.619740 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 15 23:43:58.619900 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 15 23:43:58.623359 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 15 23:43:58.623488 kernel: hv_pci 50a711ff-c626-4e67-ac83-b7167eb33c28: PCI VMBus probing: Using version 0x10004 Jan 15 23:43:58.630924 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 15 23:43:58.636002 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 15 23:43:58.636131 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 15 23:43:58.636198 kernel: hv_pci 50a711ff-c626-4e67-ac83-b7167eb33c28: PCI host bridge to bus c626:00 Jan 15 23:43:58.646396 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 15 
23:43:58.646520 kernel: pci_bus c626:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 15 23:43:58.657579 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#79 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 15 23:43:58.657722 kernel: pci_bus c626:00: No busn resource found for root bus, will use [bus 00-ff] Jan 15 23:43:58.651700 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 23:43:58.671513 kernel: pci c626:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Jan 15 23:43:58.679077 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 15 23:43:58.681688 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 15 23:43:58.681824 kernel: pci c626:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 15 23:43:58.699675 kernel: pci c626:00:02.0: enabling Extended Tags Jan 15 23:43:58.720374 kernel: pci c626:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c626:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Jan 15 23:43:58.720534 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#118 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 15 23:43:58.720608 kernel: pci_bus c626:00: busn_res: [bus 00-ff] end is updated to 00 Jan 15 23:43:58.729812 kernel: pci c626:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Jan 15 23:43:58.746642 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#91 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 15 23:43:58.794947 kernel: mlx5_core c626:00:02.0: enabling device (0000 -> 0002) Jan 15 23:43:58.803550 kernel: mlx5_core c626:00:02.0: PTM is not supported by PCIe Jan 15 23:43:58.803644 kernel: mlx5_core c626:00:02.0: firmware version: 16.30.5026 Jan 15 23:43:58.979332 kernel: hv_netvsc 7ced8db6-fb72-7ced-8db6-fb727ced8db6 eth0: VF registering: eth1 Jan 15 23:43:58.979516 kernel: mlx5_core c626:00:02.0 eth1: joined to eth0 Jan 15 23:43:58.984662 kernel: mlx5_core c626:00:02.0: MLX5E: 
StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 15 23:43:58.996496 kernel: mlx5_core c626:00:02.0 enP50726s1: renamed from eth1 Jan 15 23:43:59.149280 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 15 23:43:59.218640 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 15 23:43:59.266277 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 15 23:43:59.300546 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 15 23:43:59.305527 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 15 23:43:59.315828 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 15 23:43:59.337960 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 15 23:43:59.343065 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 23:43:59.364170 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 23:43:59.369413 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 15 23:43:59.375753 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 23:43:59.382748 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 15 23:43:59.399056 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 15 23:43:59.414665 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 15 23:44:00.412676 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#75 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 15 23:44:00.426121 disk-uuid[660]: The operation has completed successfully. Jan 15 23:44:00.431923 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 15 23:44:00.505550 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jan 15 23:44:00.506776 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 15 23:44:00.529270 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 15 23:44:00.549857 sh[825]: Success Jan 15 23:44:00.597806 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 15 23:44:00.597852 kernel: device-mapper: uevent: version 1.0.3 Jan 15 23:44:00.603024 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 15 23:44:00.612640 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 15 23:44:00.855036 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 15 23:44:00.870030 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 15 23:44:00.876186 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 15 23:44:00.904653 kernel: BTRFS: device fsid 4e574c26-9d5a-48bc-a727-ae12db8ee9fc devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (843) Jan 15 23:44:00.914400 kernel: BTRFS info (device dm-0): first mount of filesystem 4e574c26-9d5a-48bc-a727-ae12db8ee9fc Jan 15 23:44:00.914418 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 15 23:44:01.208886 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 15 23:44:01.208980 kernel: BTRFS info (device dm-0): enabling free space tree Jan 15 23:44:01.245605 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 15 23:44:01.249490 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 15 23:44:01.257785 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 15 23:44:01.258399 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 15 23:44:01.282583 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 15 23:44:01.313638 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (876) Jan 15 23:44:01.324309 kernel: BTRFS info (device sda6): first mount of filesystem c6a95867-5704-41e1-8beb-48e00b50aef1 Jan 15 23:44:01.324342 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 15 23:44:01.351341 kernel: BTRFS info (device sda6): turning on async discard Jan 15 23:44:01.351388 kernel: BTRFS info (device sda6): enabling free space tree Jan 15 23:44:01.360678 kernel: BTRFS info (device sda6): last unmount of filesystem c6a95867-5704-41e1-8beb-48e00b50aef1 Jan 15 23:44:01.360708 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 15 23:44:01.367807 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 15 23:44:01.393843 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 23:44:01.404969 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 23:44:01.437408 systemd-networkd[1012]: lo: Link UP Jan 15 23:44:01.437420 systemd-networkd[1012]: lo: Gained carrier Jan 15 23:44:01.438561 systemd-networkd[1012]: Enumeration completed Jan 15 23:44:01.440728 systemd-networkd[1012]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 23:44:01.440731 systemd-networkd[1012]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 23:44:01.443445 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 23:44:01.448860 systemd[1]: Reached target network.target - Network. 
Jan 15 23:44:01.518660 kernel: mlx5_core c626:00:02.0 enP50726s1: Link up Jan 15 23:44:01.551647 kernel: hv_netvsc 7ced8db6-fb72-7ced-8db6-fb727ced8db6 eth0: Data path switched to VF: enP50726s1 Jan 15 23:44:01.551918 systemd-networkd[1012]: enP50726s1: Link UP Jan 15 23:44:01.551984 systemd-networkd[1012]: eth0: Link UP Jan 15 23:44:01.552398 systemd-networkd[1012]: eth0: Gained carrier Jan 15 23:44:01.552412 systemd-networkd[1012]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 23:44:01.569790 systemd-networkd[1012]: enP50726s1: Gained carrier Jan 15 23:44:01.584657 systemd-networkd[1012]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 15 23:44:02.484227 ignition[989]: Ignition 2.22.0 Jan 15 23:44:02.484239 ignition[989]: Stage: fetch-offline Jan 15 23:44:02.486986 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 23:44:02.484338 ignition[989]: no configs at "/usr/lib/ignition/base.d" Jan 15 23:44:02.495818 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 15 23:44:02.484346 ignition[989]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 23:44:02.484410 ignition[989]: parsed url from cmdline: "" Jan 15 23:44:02.484413 ignition[989]: no config URL provided Jan 15 23:44:02.484416 ignition[989]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 23:44:02.484421 ignition[989]: no config at "/usr/lib/ignition/user.ign" Jan 15 23:44:02.484424 ignition[989]: failed to fetch config: resource requires networking Jan 15 23:44:02.484541 ignition[989]: Ignition finished successfully Jan 15 23:44:02.530635 ignition[1036]: Ignition 2.22.0 Jan 15 23:44:02.532487 ignition[1036]: Stage: fetch Jan 15 23:44:02.532699 ignition[1036]: no configs at "/usr/lib/ignition/base.d" Jan 15 23:44:02.532706 ignition[1036]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 23:44:02.532772 ignition[1036]: parsed url from cmdline: "" Jan 15 23:44:02.532774 ignition[1036]: no config URL provided Jan 15 23:44:02.532777 ignition[1036]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 23:44:02.532782 ignition[1036]: no config at "/usr/lib/ignition/user.ign" Jan 15 23:44:02.532797 ignition[1036]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 15 23:44:02.599325 ignition[1036]: GET result: OK Jan 15 23:44:02.599404 ignition[1036]: config has been read from IMDS userdata Jan 15 23:44:02.602134 unknown[1036]: fetched base config from "system" Jan 15 23:44:02.599436 ignition[1036]: parsing config with SHA512: e907e497b7ecd4227a1b20808f1493b90df8fb43c9dec13224b1afdbf650ba216a57c89ae16241ef0aa747be52a5127cffe0a24ee0e758904e87504dfa80f9e1 Jan 15 23:44:02.602139 unknown[1036]: fetched base config from "system" Jan 15 23:44:02.602385 ignition[1036]: fetch: fetch complete Jan 15 23:44:02.602142 unknown[1036]: fetched user config from "azure" Jan 15 23:44:02.602389 ignition[1036]: fetch: fetch passed Jan 15 23:44:02.607347 systemd[1]: 
Finished ignition-fetch.service - Ignition (fetch). Jan 15 23:44:02.602427 ignition[1036]: Ignition finished successfully Jan 15 23:44:02.615548 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 15 23:44:02.653207 ignition[1042]: Ignition 2.22.0 Jan 15 23:44:02.653216 ignition[1042]: Stage: kargs Jan 15 23:44:02.653378 ignition[1042]: no configs at "/usr/lib/ignition/base.d" Jan 15 23:44:02.659661 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 15 23:44:02.653385 ignition[1042]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 23:44:02.665742 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 15 23:44:02.656377 ignition[1042]: kargs: kargs passed Jan 15 23:44:02.656420 ignition[1042]: Ignition finished successfully Jan 15 23:44:02.696055 ignition[1048]: Ignition 2.22.0 Jan 15 23:44:02.696066 ignition[1048]: Stage: disks Jan 15 23:44:02.699982 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 15 23:44:02.696263 ignition[1048]: no configs at "/usr/lib/ignition/base.d" Jan 15 23:44:02.706615 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 15 23:44:02.696273 ignition[1048]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 23:44:02.714991 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 15 23:44:02.696929 ignition[1048]: disks: disks passed Jan 15 23:44:02.723114 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 23:44:02.696969 ignition[1048]: Ignition finished successfully Jan 15 23:44:02.731830 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 23:44:02.740399 systemd[1]: Reached target basic.target - Basic System. Jan 15 23:44:02.749580 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 15 23:44:02.828799 systemd-fsck[1056]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jan 15 23:44:02.833848 systemd-networkd[1012]: eth0: Gained IPv6LL Jan 15 23:44:02.839280 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 15 23:44:02.845115 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 15 23:44:03.062652 kernel: EXT4-fs (sda9): mounted filesystem e775b4a8-7fa9-4c45-80b7-b5e0f0a5e4b9 r/w with ordered data mode. Quota mode: none. Jan 15 23:44:03.062895 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 15 23:44:03.066963 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 15 23:44:03.089192 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 23:44:03.102899 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 15 23:44:03.111749 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 15 23:44:03.122648 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 15 23:44:03.126691 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 23:44:03.138499 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 15 23:44:03.149759 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 15 23:44:03.171125 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1070) Jan 15 23:44:03.171152 kernel: BTRFS info (device sda6): first mount of filesystem c6a95867-5704-41e1-8beb-48e00b50aef1 Jan 15 23:44:03.175825 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 15 23:44:03.185692 kernel: BTRFS info (device sda6): turning on async discard Jan 15 23:44:03.185743 kernel: BTRFS info (device sda6): enabling free space tree Jan 15 23:44:03.186848 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 15 23:44:03.865610 coreos-metadata[1072]: Jan 15 23:44:03.865 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 15 23:44:03.872512 coreos-metadata[1072]: Jan 15 23:44:03.872 INFO Fetch successful Jan 15 23:44:03.872512 coreos-metadata[1072]: Jan 15 23:44:03.872 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 15 23:44:03.884769 coreos-metadata[1072]: Jan 15 23:44:03.884 INFO Fetch successful Jan 15 23:44:03.896844 coreos-metadata[1072]: Jan 15 23:44:03.896 INFO wrote hostname ci-4459.2.2-n-6dfb6e6787 to /sysroot/etc/hostname Jan 15 23:44:03.904141 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 15 23:44:04.139489 initrd-setup-root[1100]: cut: /sysroot/etc/passwd: No such file or directory Jan 15 23:44:04.193345 initrd-setup-root[1107]: cut: /sysroot/etc/group: No such file or directory Jan 15 23:44:04.215354 initrd-setup-root[1114]: cut: /sysroot/etc/shadow: No such file or directory Jan 15 23:44:04.222170 initrd-setup-root[1121]: cut: /sysroot/etc/gshadow: No such file or directory Jan 15 23:44:05.171091 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 15 23:44:05.176941 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 15 23:44:05.191282 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 15 23:44:05.202411 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 15 23:44:05.212847 kernel: BTRFS info (device sda6): last unmount of filesystem c6a95867-5704-41e1-8beb-48e00b50aef1 Jan 15 23:44:05.229253 ignition[1189]: INFO : Ignition 2.22.0 Jan 15 23:44:05.233721 ignition[1189]: INFO : Stage: mount Jan 15 23:44:05.233721 ignition[1189]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 23:44:05.233721 ignition[1189]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 23:44:05.233721 ignition[1189]: INFO : mount: mount passed Jan 15 23:44:05.233721 ignition[1189]: INFO : Ignition finished successfully Jan 15 23:44:05.234318 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 15 23:44:05.243119 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 15 23:44:05.250924 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 15 23:44:05.271816 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 23:44:05.305147 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1201) Jan 15 23:44:05.305183 kernel: BTRFS info (device sda6): first mount of filesystem c6a95867-5704-41e1-8beb-48e00b50aef1 Jan 15 23:44:05.309659 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 15 23:44:05.319399 kernel: BTRFS info (device sda6): turning on async discard Jan 15 23:44:05.319412 kernel: BTRFS info (device sda6): enabling free space tree Jan 15 23:44:05.321163 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 15 23:44:05.348865 ignition[1219]: INFO : Ignition 2.22.0 Jan 15 23:44:05.348865 ignition[1219]: INFO : Stage: files Jan 15 23:44:05.354771 ignition[1219]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 23:44:05.354771 ignition[1219]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 23:44:05.354771 ignition[1219]: DEBUG : files: compiled without relabeling support, skipping Jan 15 23:44:05.368440 ignition[1219]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 15 23:44:05.368440 ignition[1219]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 15 23:44:05.409000 ignition[1219]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 15 23:44:05.414681 ignition[1219]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 15 23:44:05.414681 ignition[1219]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 15 23:44:05.409356 unknown[1219]: wrote ssh authorized keys file for user: core Jan 15 23:44:05.438325 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 15 23:44:05.446326 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 15 23:44:05.465807 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 15 23:44:05.568954 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 15 23:44:05.576919 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 15 23:44:05.576919 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 15 23:44:07.094356 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 15 23:44:07.313689 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 15 23:44:07.320821 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 15 23:44:07.320821 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 15 23:44:07.320821 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 15 23:44:07.320821 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 15 23:44:07.320821 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 23:44:07.320821 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 23:44:07.320821 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 23:44:07.320821 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 23:44:07.377745 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 23:44:07.377745 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 23:44:07.377745 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 15 23:44:07.377745 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 15 23:44:07.377745 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 15 23:44:07.377745 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jan 15 23:44:07.897633 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 15 23:44:09.341650 ignition[1219]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 15 23:44:09.341650 ignition[1219]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 15 23:44:09.371095 ignition[1219]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 23:44:09.386458 ignition[1219]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 23:44:09.386458 ignition[1219]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 15 23:44:09.386458 ignition[1219]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 15 23:44:09.417061 ignition[1219]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 15 23:44:09.417061 ignition[1219]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 15 23:44:09.417061 
ignition[1219]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 15 23:44:09.417061 ignition[1219]: INFO : files: files passed Jan 15 23:44:09.417061 ignition[1219]: INFO : Ignition finished successfully Jan 15 23:44:09.395881 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 15 23:44:09.405881 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 15 23:44:09.437121 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 15 23:44:09.449868 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 15 23:44:09.452966 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 15 23:44:09.482645 initrd-setup-root-after-ignition[1247]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 23:44:09.482645 initrd-setup-root-after-ignition[1247]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 15 23:44:09.497512 initrd-setup-root-after-ignition[1251]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 23:44:09.498113 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 23:44:09.509280 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 15 23:44:09.518941 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 15 23:44:09.558046 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 15 23:44:09.558154 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 15 23:44:09.567241 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 15 23:44:09.577030 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jan 15 23:44:09.585321 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 15 23:44:09.586070 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 15 23:44:09.622376 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 23:44:09.629325 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 15 23:44:09.654915 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 15 23:44:09.659895 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 23:44:09.669453 systemd[1]: Stopped target timers.target - Timer Units. Jan 15 23:44:09.677499 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 15 23:44:09.677594 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 23:44:09.689901 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 15 23:44:09.694219 systemd[1]: Stopped target basic.target - Basic System. Jan 15 23:44:09.702658 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 15 23:44:09.710900 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 23:44:09.719360 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 15 23:44:09.728627 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 15 23:44:09.737997 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 15 23:44:09.746837 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 23:44:09.756177 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 15 23:44:09.764460 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 15 23:44:09.773016 systemd[1]: Stopped target swap.target - Swaps. 
Jan 15 23:44:09.780841 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 15 23:44:09.780950 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 15 23:44:09.791863 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 15 23:44:09.796372 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 23:44:09.804738 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 15 23:44:09.804807 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 23:44:09.813591 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 15 23:44:09.813685 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 15 23:44:09.826189 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 15 23:44:09.826275 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 23:44:09.831496 systemd[1]: ignition-files.service: Deactivated successfully. Jan 15 23:44:09.831564 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 15 23:44:09.839577 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 15 23:44:09.839649 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 15 23:44:09.915125 ignition[1271]: INFO : Ignition 2.22.0 Jan 15 23:44:09.915125 ignition[1271]: INFO : Stage: umount Jan 15 23:44:09.915125 ignition[1271]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 23:44:09.915125 ignition[1271]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 23:44:09.915125 ignition[1271]: INFO : umount: umount passed Jan 15 23:44:09.915125 ignition[1271]: INFO : Ignition finished successfully Jan 15 23:44:09.850710 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 15 23:44:09.878798 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jan 15 23:44:09.889714 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 15 23:44:09.889845 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 23:44:09.897844 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 15 23:44:09.897923 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 23:44:09.916653 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 15 23:44:09.916758 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 15 23:44:09.927460 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 15 23:44:09.927543 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 15 23:44:09.937289 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 15 23:44:09.937347 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 15 23:44:09.945400 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 15 23:44:09.945441 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 15 23:44:09.953974 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 15 23:44:09.954022 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 15 23:44:09.961539 systemd[1]: Stopped target network.target - Network. Jan 15 23:44:09.969721 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 15 23:44:09.969771 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 23:44:09.979769 systemd[1]: Stopped target paths.target - Path Units. Jan 15 23:44:09.987142 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 15 23:44:09.990642 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 23:44:09.996773 systemd[1]: Stopped target slices.target - Slice Units. Jan 15 23:44:10.004210 systemd[1]: Stopped target sockets.target - Socket Units. 
Jan 15 23:44:10.012260 systemd[1]: iscsid.socket: Deactivated successfully. Jan 15 23:44:10.012306 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 23:44:10.019817 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 15 23:44:10.019850 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 23:44:10.028394 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 15 23:44:10.028446 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 15 23:44:10.036278 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 15 23:44:10.036307 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 15 23:44:10.045529 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 15 23:44:10.053152 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 15 23:44:10.062645 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 15 23:44:10.063108 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 15 23:44:10.063211 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 15 23:44:10.071072 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 15 23:44:10.265613 kernel: hv_netvsc 7ced8db6-fb72-7ced-8db6-fb727ced8db6 eth0: Data path switched from VF: enP50726s1 Jan 15 23:44:10.071185 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 15 23:44:10.080354 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 15 23:44:10.080475 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 15 23:44:10.093562 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 15 23:44:10.093779 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 15 23:44:10.093895 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Jan 15 23:44:10.106150 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 15 23:44:10.106716 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 15 23:44:10.115039 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 15 23:44:10.115076 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 15 23:44:10.123679 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 15 23:44:10.139932 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 15 23:44:10.139994 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 23:44:10.145302 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 15 23:44:10.145340 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 15 23:44:10.157735 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 15 23:44:10.157770 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 15 23:44:10.163215 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 15 23:44:10.163247 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 23:44:10.176103 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 23:44:10.181483 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 15 23:44:10.181532 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 15 23:44:10.197844 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 15 23:44:10.198392 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 23:44:10.208107 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 15 23:44:10.208143 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 15 23:44:10.216845 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 15 23:44:10.216869 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 23:44:10.226003 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 15 23:44:10.226052 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 15 23:44:10.238869 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 15 23:44:10.238913 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 15 23:44:10.250286 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 23:44:10.250320 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 23:44:10.266158 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 15 23:44:10.275680 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 15 23:44:10.275726 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 23:44:10.288899 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 15 23:44:10.288939 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 23:44:10.305729 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 15 23:44:10.508286 systemd-journald[226]: Received SIGTERM from PID 1 (systemd). Jan 15 23:44:10.305794 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 23:44:10.315215 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 15 23:44:10.315252 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 23:44:10.320925 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 23:44:10.320964 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 15 23:44:10.336156 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 15 23:44:10.336200 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jan 15 23:44:10.336227 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 15 23:44:10.336250 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 15 23:44:10.336509 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 15 23:44:10.336646 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 15 23:44:10.352927 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 15 23:44:10.353043 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 15 23:44:10.361688 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 15 23:44:10.372117 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 15 23:44:10.405255 systemd[1]: Switching root. Jan 15 23:44:10.586520 systemd-journald[226]: Journal stopped Jan 15 23:44:14.882040 kernel: SELinux: policy capability network_peer_controls=1 Jan 15 23:44:14.882057 kernel: SELinux: policy capability open_perms=1 Jan 15 23:44:14.882065 kernel: SELinux: policy capability extended_socket_class=1 Jan 15 23:44:14.882070 kernel: SELinux: policy capability always_check_network=0 Jan 15 23:44:14.882077 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 15 23:44:14.882083 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 15 23:44:14.882089 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 15 23:44:14.882095 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 15 23:44:14.882100 kernel: SELinux: policy capability userspace_initial_context=0 Jan 15 23:44:14.882107 systemd[1]: Successfully loaded SELinux policy in 211.720ms. 
Jan 15 23:44:14.882113 kernel: audit: type=1403 audit(1768520651.766:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 15 23:44:14.882120 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.319ms. Jan 15 23:44:14.882126 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 15 23:44:14.882133 systemd[1]: Detected virtualization microsoft. Jan 15 23:44:14.882139 systemd[1]: Detected architecture arm64. Jan 15 23:44:14.882145 systemd[1]: Detected first boot. Jan 15 23:44:14.882152 systemd[1]: Hostname set to . Jan 15 23:44:14.882158 systemd[1]: Initializing machine ID from random generator. Jan 15 23:44:14.882165 zram_generator::config[1314]: No configuration found. Jan 15 23:44:14.882171 kernel: NET: Registered PF_VSOCK protocol family Jan 15 23:44:14.882177 systemd[1]: Populated /etc with preset unit settings. Jan 15 23:44:14.882184 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 15 23:44:14.882189 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 15 23:44:14.882196 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 15 23:44:14.882202 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 15 23:44:14.882208 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 15 23:44:14.882214 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 15 23:44:14.882220 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 15 23:44:14.882226 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Jan 15 23:44:14.882232 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 15 23:44:14.882239 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 15 23:44:14.882245 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 15 23:44:14.882251 systemd[1]: Created slice user.slice - User and Session Slice. Jan 15 23:44:14.882257 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 23:44:14.882263 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 23:44:14.882269 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 15 23:44:14.882275 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 15 23:44:14.882281 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 15 23:44:14.882288 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 23:44:14.882294 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 15 23:44:14.882301 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 23:44:14.882308 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 23:44:14.882314 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 15 23:44:14.882320 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 15 23:44:14.882326 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 15 23:44:14.882332 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 15 23:44:14.882339 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 15 23:44:14.882345 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 23:44:14.882351 systemd[1]: Reached target slices.target - Slice Units. Jan 15 23:44:14.882357 systemd[1]: Reached target swap.target - Swaps. Jan 15 23:44:14.882363 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 15 23:44:14.882369 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 15 23:44:14.882377 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 15 23:44:14.882383 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 23:44:14.882389 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 23:44:14.882395 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 23:44:14.882402 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 15 23:44:14.882408 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 15 23:44:14.882414 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 15 23:44:14.882421 systemd[1]: Mounting media.mount - External Media Directory... Jan 15 23:44:14.882428 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 15 23:44:14.882434 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 15 23:44:14.882440 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 15 23:44:14.882446 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 15 23:44:14.882453 systemd[1]: Reached target machines.target - Containers. Jan 15 23:44:14.882460 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jan 15 23:44:14.882466 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 23:44:14.882473 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 23:44:14.882480 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 15 23:44:14.882486 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 23:44:14.882492 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 23:44:14.882499 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 23:44:14.882505 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 15 23:44:14.882511 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 23:44:14.882518 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 15 23:44:14.882524 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 15 23:44:14.882531 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 15 23:44:14.882537 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 15 23:44:14.882543 systemd[1]: Stopped systemd-fsck-usr.service. Jan 15 23:44:14.882550 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 23:44:14.882556 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 23:44:14.882562 kernel: fuse: init (API version 7.41) Jan 15 23:44:14.882568 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 15 23:44:14.882574 kernel: loop: module loaded Jan 15 23:44:14.882581 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 15 23:44:14.882587 kernel: ACPI: bus type drm_connector registered Jan 15 23:44:14.882593 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 15 23:44:14.882599 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 15 23:44:14.882606 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 23:44:14.882612 systemd[1]: verity-setup.service: Deactivated successfully. Jan 15 23:44:14.882618 systemd[1]: Stopped verity-setup.service. Jan 15 23:44:14.882647 systemd-journald[1404]: Collecting audit messages is disabled. Jan 15 23:44:14.882662 systemd-journald[1404]: Journal started Jan 15 23:44:14.882677 systemd-journald[1404]: Runtime Journal (/run/log/journal/2a9099b9f40d43f08a729b79a503c1e6) is 8M, max 78.3M, 70.3M free. Jan 15 23:44:14.136199 systemd[1]: Queued start job for default target multi-user.target. Jan 15 23:44:14.144056 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 15 23:44:14.144434 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 15 23:44:14.144704 systemd[1]: systemd-journald.service: Consumed 2.516s CPU time. Jan 15 23:44:14.893633 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 23:44:14.894223 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 15 23:44:14.898464 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 15 23:44:14.902963 systemd[1]: Mounted media.mount - External Media Directory. Jan 15 23:44:14.907356 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 15 23:44:14.911983 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Jan 15 23:44:14.916555 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 15 23:44:14.922661 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 15 23:44:14.927959 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 23:44:14.933420 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 15 23:44:14.933547 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 15 23:44:14.938821 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 23:44:14.938951 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 23:44:14.943898 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 23:44:14.944015 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 23:44:14.948763 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 23:44:14.948907 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 23:44:14.954516 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 15 23:44:14.957680 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 15 23:44:14.962200 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 23:44:14.963657 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 23:44:14.968393 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 23:44:14.973234 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 23:44:14.978480 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 15 23:44:14.984067 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 15 23:44:14.998876 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jan 15 23:44:15.004725 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 15 23:44:15.020701 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 15 23:44:15.025609 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 15 23:44:15.025712 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 23:44:15.030756 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 15 23:44:15.038726 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 15 23:44:15.042862 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 23:44:15.049146 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 15 23:44:15.054200 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 15 23:44:15.058790 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 23:44:15.059441 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 15 23:44:15.064764 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 23:44:15.066740 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 23:44:15.072702 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 15 23:44:15.079471 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 15 23:44:15.090653 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 15 23:44:15.098155 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 15 23:44:15.102982 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 15 23:44:15.109642 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 15 23:44:15.116566 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 15 23:44:15.122788 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 15 23:44:15.131757 systemd-journald[1404]: Time spent on flushing to /var/log/journal/2a9099b9f40d43f08a729b79a503c1e6 is 8.627ms for 944 entries. Jan 15 23:44:15.131757 systemd-journald[1404]: System Journal (/var/log/journal/2a9099b9f40d43f08a729b79a503c1e6) is 8M, max 2.6G, 2.6G free. Jan 15 23:44:15.152823 systemd-journald[1404]: Received client request to flush runtime journal. Jan 15 23:44:15.155324 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 15 23:44:15.158632 kernel: loop0: detected capacity change from 0 to 100632 Jan 15 23:44:15.166533 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 23:44:15.187344 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 15 23:44:15.187936 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 15 23:44:15.207712 systemd-tmpfiles[1454]: ACLs are not supported, ignoring. Jan 15 23:44:15.207726 systemd-tmpfiles[1454]: ACLs are not supported, ignoring. Jan 15 23:44:15.210181 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 23:44:15.216562 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 15 23:44:15.350195 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 15 23:44:15.356759 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 15 23:44:15.378481 systemd-tmpfiles[1471]: ACLs are not supported, ignoring. Jan 15 23:44:15.378501 systemd-tmpfiles[1471]: ACLs are not supported, ignoring. Jan 15 23:44:15.380870 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 23:44:15.601650 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 15 23:44:15.656649 kernel: loop1: detected capacity change from 0 to 27936 Jan 15 23:44:15.742040 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 15 23:44:15.748256 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 23:44:15.773766 systemd-udevd[1478]: Using default interface naming scheme 'v255'. Jan 15 23:44:16.004636 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 23:44:16.013722 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 23:44:16.033647 kernel: loop2: detected capacity change from 0 to 119840 Jan 15 23:44:16.061071 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 15 23:44:16.092373 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 15 23:44:16.120646 kernel: mousedev: PS/2 mouse device common for all mice Jan 15 23:44:16.159902 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 15 23:44:16.165648 kernel: hv_vmbus: registering driver hv_balloon Jan 15 23:44:16.165698 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 15 23:44:16.174789 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 15 23:44:16.207841 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 15 23:44:16.218644 kernel: hv_vmbus: registering driver hyperv_fb Jan 15 23:44:16.218702 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 15 23:44:16.227962 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 15 23:44:16.234243 kernel: Console: switching to colour dummy device 80x25 Jan 15 23:44:16.237677 kernel: Console: switching to colour frame buffer device 128x48 Jan 15 23:44:16.301687 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 23:44:16.328402 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 23:44:16.329595 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 23:44:16.338134 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 15 23:44:16.341375 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 23:44:16.380358 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 15 23:44:16.382503 systemd-networkd[1483]: lo: Link UP Jan 15 23:44:16.382721 systemd-networkd[1483]: lo: Gained carrier Jan 15 23:44:16.383683 systemd-networkd[1483]: Enumeration completed Jan 15 23:44:16.383998 systemd-networkd[1483]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 23:44:16.384066 systemd-networkd[1483]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 23:44:16.385392 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 23:44:16.392643 kernel: MACsec IEEE 802.1AE Jan 15 23:44:16.393751 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 15 23:44:16.400296 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Jan 15 23:44:16.415289 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 15 23:44:16.428661 kernel: loop3: detected capacity change from 0 to 211168 Jan 15 23:44:16.449053 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 15 23:44:16.458645 kernel: mlx5_core c626:00:02.0 enP50726s1: Link up Jan 15 23:44:16.480892 kernel: hv_netvsc 7ced8db6-fb72-7ced-8db6-fb727ced8db6 eth0: Data path switched to VF: enP50726s1 Jan 15 23:44:16.481723 systemd-networkd[1483]: enP50726s1: Link UP Jan 15 23:44:16.482078 systemd-networkd[1483]: eth0: Link UP Jan 15 23:44:16.482084 systemd-networkd[1483]: eth0: Gained carrier Jan 15 23:44:16.482100 systemd-networkd[1483]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 23:44:16.483285 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 15 23:44:16.491640 kernel: loop4: detected capacity change from 0 to 100632 Jan 15 23:44:16.492898 systemd-networkd[1483]: enP50726s1: Gained carrier Jan 15 23:44:16.505637 kernel: loop5: detected capacity change from 0 to 27936 Jan 15 23:44:16.505694 systemd-networkd[1483]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 15 23:44:16.520696 kernel: loop6: detected capacity change from 0 to 119840 Jan 15 23:44:16.534650 kernel: loop7: detected capacity change from 0 to 211168 Jan 15 23:44:16.548270 (sd-merge)[1623]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 15 23:44:16.549467 (sd-merge)[1623]: Merged extensions into '/usr'. Jan 15 23:44:16.551886 systemd[1]: Reload requested from client PID 1452 ('systemd-sysext') (unit systemd-sysext.service)... Jan 15 23:44:16.551900 systemd[1]: Reloading... 
Jan 15 23:44:16.605639 zram_generator::config[1660]: No configuration found. Jan 15 23:44:16.770790 systemd[1]: Reloading finished in 218 ms. Jan 15 23:44:16.783749 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 15 23:44:16.795433 systemd[1]: Starting ensure-sysext.service... Jan 15 23:44:16.801755 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 23:44:16.814770 systemd[1]: Reload requested from client PID 1710 ('systemctl') (unit ensure-sysext.service)... Jan 15 23:44:16.814782 systemd[1]: Reloading... Jan 15 23:44:16.817847 systemd-tmpfiles[1711]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 15 23:44:16.818141 systemd-tmpfiles[1711]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 15 23:44:16.818506 systemd-tmpfiles[1711]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 15 23:44:16.818794 systemd-tmpfiles[1711]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 15 23:44:16.819824 systemd-tmpfiles[1711]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 15 23:44:16.820289 systemd-tmpfiles[1711]: ACLs are not supported, ignoring. Jan 15 23:44:16.820488 systemd-tmpfiles[1711]: ACLs are not supported, ignoring. Jan 15 23:44:16.859473 systemd-tmpfiles[1711]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 23:44:16.859581 systemd-tmpfiles[1711]: Skipping /boot Jan 15 23:44:16.871342 systemd-tmpfiles[1711]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 23:44:16.871457 systemd-tmpfiles[1711]: Skipping /boot Jan 15 23:44:16.873650 zram_generator::config[1744]: No configuration found. Jan 15 23:44:17.025732 systemd[1]: Reloading finished in 210 ms. 
Jan 15 23:44:17.036645 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 23:44:17.056610 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 15 23:44:17.067924 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 15 23:44:17.093081 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 15 23:44:17.108806 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 15 23:44:17.122802 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 15 23:44:17.127805 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 15 23:44:17.134548 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 15 23:44:17.141848 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 15 23:44:17.149887 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 15 23:44:17.157017 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 15 23:44:17.162952 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 15 23:44:17.163048 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 15 23:44:17.166389 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 15 23:44:17.166536 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 15 23:44:17.166610 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 15 23:44:17.168320 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 15 23:44:17.175159 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 15 23:44:17.175282 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 15 23:44:17.181349 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 15 23:44:17.181470 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 15 23:44:17.188032 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 15 23:44:17.188168 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 15 23:44:17.201614 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 15 23:44:17.202486 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 15 23:44:17.210307 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 15 23:44:17.217647 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 15 23:44:17.226358 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 15 23:44:17.234153 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 15 23:44:17.234303 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 15 23:44:17.234766 systemd[1]: Reached target time-set.target - System Time Set.
Jan 15 23:44:17.240258 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 15 23:44:17.240394 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 15 23:44:17.245906 systemd-resolved[1806]: Positive Trust Anchors:
Jan 15 23:44:17.245921 systemd-resolved[1806]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 15 23:44:17.245941 systemd-resolved[1806]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 15 23:44:17.248876 systemd-resolved[1806]: Using system hostname 'ci-4459.2.2-n-6dfb6e6787'.
Jan 15 23:44:17.249112 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 15 23:44:17.255746 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 15 23:44:17.260838 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 15 23:44:17.261084 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 15 23:44:17.267411 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 15 23:44:17.267636 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 15 23:44:17.273017 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 15 23:44:17.273204 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 15 23:44:17.281201 systemd[1]: Finished ensure-sysext.service.
Jan 15 23:44:17.287078 systemd[1]: Reached target network.target - Network.
Jan 15 23:44:17.291131 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 15 23:44:17.296015 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 15 23:44:17.296281 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 15 23:44:17.301633 augenrules[1844]: No rules
Jan 15 23:44:17.302766 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 15 23:44:17.303041 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 15 23:44:17.703451 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 15 23:44:17.710011 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 15 23:44:18.065757 systemd-networkd[1483]: eth0: Gained IPv6LL
Jan 15 23:44:18.070972 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 15 23:44:18.076184 systemd[1]: Reached target network-online.target - Network is Online.
Jan 15 23:44:20.146245 ldconfig[1447]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 15 23:44:20.159144 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 15 23:44:20.165319 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 15 23:44:20.183652 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 15 23:44:20.188299 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 15 23:44:20.192553 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 15 23:44:20.197552 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 15 23:44:20.203116 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 15 23:44:20.207493 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 15 23:44:20.212547 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 15 23:44:20.217703 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 15 23:44:20.217730 systemd[1]: Reached target paths.target - Path Units.
Jan 15 23:44:20.221581 systemd[1]: Reached target timers.target - Timer Units.
Jan 15 23:44:20.240262 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 15 23:44:20.245875 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 15 23:44:20.251172 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 15 23:44:20.256496 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 15 23:44:20.261606 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 15 23:44:20.267472 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 15 23:44:20.271938 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 15 23:44:20.277169 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 15 23:44:20.281676 systemd[1]: Reached target sockets.target - Socket Units.
Jan 15 23:44:20.285449 systemd[1]: Reached target basic.target - Basic System.
Jan 15 23:44:20.289087 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 15 23:44:20.289108 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 15 23:44:20.291065 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 15 23:44:20.302711 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 15 23:44:20.308781 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 15 23:44:20.313984 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 15 23:44:20.320738 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 15 23:44:20.335501 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 15 23:44:20.340613 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 15 23:44:20.344648 jq[1866]: false
Jan 15 23:44:20.345299 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 15 23:44:20.346027 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 15 23:44:20.351963 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 15 23:44:20.352920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 23:44:20.359792 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 15 23:44:20.364520 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 15 23:44:20.369708 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 15 23:44:20.375773 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 15 23:44:20.383735 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 15 23:44:20.395381 chronyd[1858]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Jan 15 23:44:20.395401 KVP[1868]: KVP starting; pid is:1868
Jan 15 23:44:20.396104 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 15 23:44:20.401802 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 15 23:44:20.402127 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 15 23:44:20.402890 extend-filesystems[1867]: Found /dev/sda6
Jan 15 23:44:20.404740 systemd[1]: Starting update-engine.service - Update Engine...
Jan 15 23:44:20.415536 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 15 23:44:20.420411 KVP[1868]: KVP LIC Version: 3.1
Jan 15 23:44:20.425639 kernel: hv_utils: KVP IC version 4.0
Jan 15 23:44:20.427963 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 15 23:44:20.433306 extend-filesystems[1867]: Found /dev/sda9
Jan 15 23:44:20.447919 extend-filesystems[1867]: Checking size of /dev/sda9
Jan 15 23:44:20.436415 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 15 23:44:20.435819 chronyd[1858]: Timezone right/UTC failed leap second check, ignoring
Jan 15 23:44:20.457248 jq[1890]: true
Jan 15 23:44:20.439789 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 15 23:44:20.440265 chronyd[1858]: Loaded seccomp filter (level 2)
Jan 15 23:44:20.440482 systemd[1]: motdgen.service: Deactivated successfully.
Jan 15 23:44:20.440757 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 15 23:44:20.451557 systemd[1]: Started chronyd.service - NTP client/server.
Jan 15 23:44:20.457995 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 15 23:44:20.459382 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 15 23:44:20.476082 extend-filesystems[1867]: Old size kept for /dev/sda9
Jan 15 23:44:20.489400 update_engine[1884]: I20260115 23:44:20.486393 1884 main.cc:92] Flatcar Update Engine starting
Jan 15 23:44:20.480918 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 15 23:44:20.484179 (ntainerd)[1903]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 15 23:44:20.484325 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 15 23:44:20.491259 jq[1901]: true
Jan 15 23:44:20.506828 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 15 23:44:20.546172 systemd-logind[1880]: New seat seat0.
Jan 15 23:44:20.547845 systemd-logind[1880]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 15 23:44:20.548005 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 15 23:44:20.567179 tar[1899]: linux-arm64/LICENSE
Jan 15 23:44:20.567378 tar[1899]: linux-arm64/helm
Jan 15 23:44:20.601734 bash[1935]: Updated "/home/core/.ssh/authorized_keys"
Jan 15 23:44:20.602332 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 15 23:44:20.611753 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 15 23:44:20.615655 dbus-daemon[1861]: [system] SELinux support is enabled
Jan 15 23:44:20.615990 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 15 23:44:20.620190 sshd_keygen[1895]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 15 23:44:20.625150 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 15 23:44:20.626605 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 15 23:44:20.635504 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 15 23:44:20.643235 update_engine[1884]: I20260115 23:44:20.642254 1884 update_check_scheduler.cc:74] Next update check in 7m53s
Jan 15 23:44:20.635530 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 15 23:44:20.645676 systemd[1]: Started update-engine.service - Update Engine.
Jan 15 23:44:20.650122 dbus-daemon[1861]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 15 23:44:20.665969 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 15 23:44:20.700086 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 15 23:44:20.705160 coreos-metadata[1860]: Jan 15 23:44:20.705 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 15 23:44:20.707921 coreos-metadata[1860]: Jan 15 23:44:20.707 INFO Fetch successful
Jan 15 23:44:20.708018 coreos-metadata[1860]: Jan 15 23:44:20.707 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 15 23:44:20.721823 coreos-metadata[1860]: Jan 15 23:44:20.715 INFO Fetch successful
Jan 15 23:44:20.721823 coreos-metadata[1860]: Jan 15 23:44:20.715 INFO Fetching http://168.63.129.16/machine/476991c5-c241-42a9-be18-9cf00f501d2e/b48611b4%2D6521%2D4b4d%2D91b4%2D0de2c7bd279f.%5Fci%2D4459.2.2%2Dn%2D6dfb6e6787?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 15 23:44:20.717698 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 15 23:44:20.722984 coreos-metadata[1860]: Jan 15 23:44:20.722 INFO Fetch successful
Jan 15 23:44:20.722984 coreos-metadata[1860]: Jan 15 23:44:20.722 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 15 23:44:20.727914 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 15 23:44:20.738961 coreos-metadata[1860]: Jan 15 23:44:20.734 INFO Fetch successful
Jan 15 23:44:20.791017 systemd[1]: issuegen.service: Deactivated successfully.
Jan 15 23:44:20.791179 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 15 23:44:20.801706 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 15 23:44:20.817275 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 15 23:44:20.828161 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 15 23:44:20.835083 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 15 23:44:20.865985 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 15 23:44:20.873851 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 15 23:44:20.884750 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 15 23:44:20.890845 systemd[1]: Reached target getty.target - Login Prompts.
Jan 15 23:44:21.069290 tar[1899]: linux-arm64/README.md
Jan 15 23:44:21.081218 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 15 23:44:21.115172 locksmithd[1976]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 15 23:44:21.197563 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 23:44:21.239947 containerd[1903]: time="2026-01-15T23:44:21Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 15 23:44:21.240492 containerd[1903]: time="2026-01-15T23:44:21.240457668Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Jan 15 23:44:21.245897 containerd[1903]: time="2026-01-15T23:44:21.245868084Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.952µs"
Jan 15 23:44:21.246616 containerd[1903]: time="2026-01-15T23:44:21.245976340Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 15 23:44:21.246616 containerd[1903]: time="2026-01-15T23:44:21.245998236Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 15 23:44:21.246616 containerd[1903]: time="2026-01-15T23:44:21.246117948Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 15 23:44:21.246616 containerd[1903]: time="2026-01-15T23:44:21.246128908Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 15 23:44:21.246616 containerd[1903]: time="2026-01-15T23:44:21.246145572Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 15 23:44:21.246616 containerd[1903]: time="2026-01-15T23:44:21.246177756Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 15 23:44:21.246616 containerd[1903]: time="2026-01-15T23:44:21.246184692Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 15 23:44:21.246616 containerd[1903]: time="2026-01-15T23:44:21.246334268Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 15 23:44:21.246616 containerd[1903]: time="2026-01-15T23:44:21.246344396Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 15 23:44:21.246616 containerd[1903]: time="2026-01-15T23:44:21.246351756Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 15 23:44:21.246616 containerd[1903]: time="2026-01-15T23:44:21.246356820Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 15 23:44:21.246616 containerd[1903]: time="2026-01-15T23:44:21.246408628Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 15 23:44:21.246954 containerd[1903]: time="2026-01-15T23:44:21.246551644Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 15 23:44:21.246954 containerd[1903]: time="2026-01-15T23:44:21.246569468Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 15 23:44:21.246954 containerd[1903]: time="2026-01-15T23:44:21.246576116Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 15 23:44:21.246954 containerd[1903]: time="2026-01-15T23:44:21.246605156Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 15 23:44:21.246954 containerd[1903]: time="2026-01-15T23:44:21.246776052Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 15 23:44:21.246954 containerd[1903]: time="2026-01-15T23:44:21.246836388Z" level=info msg="metadata content store policy set" policy=shared
Jan 15 23:44:21.262098 containerd[1903]: time="2026-01-15T23:44:21.262067988Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 15 23:44:21.262162 containerd[1903]: time="2026-01-15T23:44:21.262118124Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 15 23:44:21.262162 containerd[1903]: time="2026-01-15T23:44:21.262134636Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 15 23:44:21.262162 containerd[1903]: time="2026-01-15T23:44:21.262144772Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 15 23:44:21.262162 containerd[1903]: time="2026-01-15T23:44:21.262154380Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 15 23:44:21.262162 containerd[1903]: time="2026-01-15T23:44:21.262162300Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 15 23:44:21.262229 containerd[1903]: time="2026-01-15T23:44:21.262172836Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 15 23:44:21.262229 containerd[1903]: time="2026-01-15T23:44:21.262183844Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 15 23:44:21.262229 containerd[1903]: time="2026-01-15T23:44:21.262191188Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 15 23:44:21.262229 containerd[1903]: time="2026-01-15T23:44:21.262198364Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 15 23:44:21.262229 containerd[1903]: time="2026-01-15T23:44:21.262204564Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 15 23:44:21.262229 containerd[1903]: time="2026-01-15T23:44:21.262213076Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 15 23:44:21.262340 containerd[1903]: time="2026-01-15T23:44:21.262322524Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 15 23:44:21.262370 containerd[1903]: time="2026-01-15T23:44:21.262342788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 15 23:44:21.262370 containerd[1903]: time="2026-01-15T23:44:21.262352324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 15 23:44:21.262370 containerd[1903]: time="2026-01-15T23:44:21.262360396Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 15 23:44:21.262370 containerd[1903]: time="2026-01-15T23:44:21.262367796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 15 23:44:21.262422 containerd[1903]: time="2026-01-15T23:44:21.262374836Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 15 23:44:21.262422 containerd[1903]: time="2026-01-15T23:44:21.262382372Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 15 23:44:21.262422 containerd[1903]: time="2026-01-15T23:44:21.262388788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 15 23:44:21.262422 containerd[1903]: time="2026-01-15T23:44:21.262401516Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 15 23:44:21.262422 containerd[1903]: time="2026-01-15T23:44:21.262408620Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 15 23:44:21.262422 containerd[1903]: time="2026-01-15T23:44:21.262415356Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 15 23:44:21.262580 containerd[1903]: time="2026-01-15T23:44:21.262453724Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 15 23:44:21.262580 containerd[1903]: time="2026-01-15T23:44:21.262466396Z" level=info msg="Start snapshots syncer"
Jan 15 23:44:21.262580 containerd[1903]: time="2026-01-15T23:44:21.262486844Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 15 23:44:21.262719 containerd[1903]: time="2026-01-15T23:44:21.262685692Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 15 23:44:21.262803 containerd[1903]: time="2026-01-15T23:44:21.262727652Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 15 23:44:21.262803 containerd[1903]: time="2026-01-15T23:44:21.262764420Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 15 23:44:21.262885 containerd[1903]: time="2026-01-15T23:44:21.262856292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 15 23:44:21.262885 containerd[1903]: time="2026-01-15T23:44:21.262871572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 15 23:44:21.262885 containerd[1903]: time="2026-01-15T23:44:21.262878860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 15 23:44:21.262927 containerd[1903]: time="2026-01-15T23:44:21.262887332Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 15 23:44:21.262927 containerd[1903]: time="2026-01-15T23:44:21.262895524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 15 23:44:21.262927 containerd[1903]: time="2026-01-15T23:44:21.262902668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 15 23:44:21.262927 containerd[1903]: time="2026-01-15T23:44:21.262909588Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 15 23:44:21.262927 containerd[1903]: time="2026-01-15T23:44:21.262927036Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 15 23:44:21.263037 containerd[1903]: time="2026-01-15T23:44:21.262935796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 15 23:44:21.263037 containerd[1903]: time="2026-01-15T23:44:21.262944148Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 15 23:44:21.263037 containerd[1903]: time="2026-01-15T23:44:21.262971532Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 15 23:44:21.263037 containerd[1903]: time="2026-01-15T23:44:21.262987924Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 15 23:44:21.263037 containerd[1903]: time="2026-01-15T23:44:21.262993972Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 15 23:44:21.263037 containerd[1903]: time="2026-01-15T23:44:21.263000396Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 15 23:44:21.263037 containerd[1903]: time="2026-01-15T23:44:21.263005508Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 15 23:44:21.263037 containerd[1903]: time="2026-01-15T23:44:21.263013996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 15 23:44:21.263037 containerd[1903]: time="2026-01-15T23:44:21.263020956Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 15 23:44:21.263037 containerd[1903]: time="2026-01-15T23:44:21.263032532Z" level=info msg="runtime interface created"
Jan 15 23:44:21.263037 containerd[1903]: time="2026-01-15T23:44:21.263035796Z" level=info msg="created NRI interface"
Jan 15 23:44:21.263037 containerd[1903]: time="2026-01-15T23:44:21.263041180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 15 23:44:21.263246 containerd[1903]: time="2026-01-15T23:44:21.263050148Z" level=info msg="Connect containerd service"
Jan 15 23:44:21.263246 containerd[1903]: time="2026-01-15T23:44:21.263064884Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 15 23:44:21.264538 containerd[1903]: time="2026-01-15T23:44:21.264506908Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 15 23:44:21.323407 (kubelet)[2050]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 15 23:44:21.674450 containerd[1903]: time="2026-01-15T23:44:21.674347852Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 15 23:44:21.674450 containerd[1903]: time="2026-01-15T23:44:21.674420180Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 15 23:44:21.674450 containerd[1903]: time="2026-01-15T23:44:21.674435244Z" level=info msg="Start subscribing containerd event"
Jan 15 23:44:21.674583 containerd[1903]: time="2026-01-15T23:44:21.674458828Z" level=info msg="Start recovering state"
Jan 15 23:44:21.674583 containerd[1903]: time="2026-01-15T23:44:21.674522604Z" level=info msg="Start event monitor"
Jan 15 23:44:21.674583 containerd[1903]: time="2026-01-15T23:44:21.674531724Z" level=info msg="Start cni network conf syncer for default"
Jan 15 23:44:21.674583 containerd[1903]: time="2026-01-15T23:44:21.674537732Z" level=info msg="Start streaming server"
Jan 15 23:44:21.674583 containerd[1903]: time="2026-01-15T23:44:21.674544380Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 15 23:44:21.674583 containerd[1903]: time="2026-01-15T23:44:21.674554988Z" level=info msg="runtime interface starting up..."
Jan 15 23:44:21.674583 containerd[1903]: time="2026-01-15T23:44:21.674558916Z" level=info msg="starting plugins..."
Jan 15 23:44:21.674583 containerd[1903]: time="2026-01-15T23:44:21.674569508Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 15 23:44:21.680346 containerd[1903]: time="2026-01-15T23:44:21.674694340Z" level=info msg="containerd successfully booted in 0.435080s" Jan 15 23:44:21.674796 systemd[1]: Started containerd.service - containerd container runtime. Jan 15 23:44:21.681203 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 15 23:44:21.685665 systemd[1]: Startup finished in 1.629s (kernel) + 14.612s (initrd) + 10.129s (userspace) = 26.371s. Jan 15 23:44:21.699689 kubelet[2050]: E0115 23:44:21.699652 2050 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 23:44:21.701688 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 23:44:21.701786 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 23:44:21.702031 systemd[1]: kubelet.service: Consumed 551ms CPU time, 259.6M memory peak. Jan 15 23:44:21.908099 login[2031]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 15 23:44:21.909074 login[2032]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:44:21.916129 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 15 23:44:21.917056 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 15 23:44:21.921195 systemd-logind[1880]: New session 2 of user core. Jan 15 23:44:21.942420 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 15 23:44:21.945772 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 15 23:44:21.972561 (systemd)[2078]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 15 23:44:21.974365 systemd-logind[1880]: New session c1 of user core. Jan 15 23:44:22.074383 systemd[2078]: Queued start job for default target default.target. Jan 15 23:44:22.079349 systemd[2078]: Created slice app.slice - User Application Slice. Jan 15 23:44:22.079372 systemd[2078]: Reached target paths.target - Paths. Jan 15 23:44:22.079401 systemd[2078]: Reached target timers.target - Timers. Jan 15 23:44:22.082732 systemd[2078]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 15 23:44:22.088535 systemd[2078]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 15 23:44:22.088577 systemd[2078]: Reached target sockets.target - Sockets. Jan 15 23:44:22.088607 systemd[2078]: Reached target basic.target - Basic System. Jan 15 23:44:22.088714 systemd[2078]: Reached target default.target - Main User Target. Jan 15 23:44:22.088736 systemd[2078]: Startup finished in 109ms. Jan 15 23:44:22.088904 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 15 23:44:22.090990 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 15 23:44:22.646265 waagent[2029]: 2026-01-15T23:44:22.646191Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 15 23:44:22.654450 waagent[2029]: 2026-01-15T23:44:22.651107Z INFO Daemon Daemon OS: flatcar 4459.2.2 Jan 15 23:44:22.654667 waagent[2029]: 2026-01-15T23:44:22.654631Z INFO Daemon Daemon Python: 3.11.13 Jan 15 23:44:22.659714 waagent[2029]: 2026-01-15T23:44:22.659672Z INFO Daemon Daemon Run daemon Jan 15 23:44:22.663168 waagent[2029]: 2026-01-15T23:44:22.663135Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.2' Jan 15 23:44:22.670745 waagent[2029]: 2026-01-15T23:44:22.670703Z INFO Daemon Daemon Using waagent for provisioning Jan 15 23:44:22.675055 waagent[2029]: 2026-01-15T23:44:22.675011Z INFO Daemon Daemon Activate resource disk Jan 15 23:44:22.678993 waagent[2029]: 2026-01-15T23:44:22.678961Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 15 23:44:22.687794 waagent[2029]: 2026-01-15T23:44:22.687755Z INFO Daemon Daemon Found device: None Jan 15 23:44:22.691496 waagent[2029]: 2026-01-15T23:44:22.691464Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 15 23:44:22.697942 waagent[2029]: 2026-01-15T23:44:22.697916Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 15 23:44:22.706877 waagent[2029]: 2026-01-15T23:44:22.706840Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 15 23:44:22.711612 waagent[2029]: 2026-01-15T23:44:22.711581Z INFO Daemon Daemon Running default provisioning handler Jan 15 23:44:22.720368 waagent[2029]: 2026-01-15T23:44:22.720313Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jan 15 23:44:22.730973 waagent[2029]: 2026-01-15T23:44:22.730934Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 15 23:44:22.738525 waagent[2029]: 2026-01-15T23:44:22.738493Z INFO Daemon Daemon cloud-init is enabled: False Jan 15 23:44:22.742587 waagent[2029]: 2026-01-15T23:44:22.742557Z INFO Daemon Daemon Copying ovf-env.xml Jan 15 23:44:22.822018 waagent[2029]: 2026-01-15T23:44:22.821934Z INFO Daemon Daemon Successfully mounted dvd Jan 15 23:44:22.848387 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 15 23:44:22.853401 waagent[2029]: 2026-01-15T23:44:22.849749Z INFO Daemon Daemon Detect protocol endpoint Jan 15 23:44:22.853734 waagent[2029]: 2026-01-15T23:44:22.853695Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 15 23:44:22.858277 waagent[2029]: 2026-01-15T23:44:22.858234Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 15 23:44:22.863286 waagent[2029]: 2026-01-15T23:44:22.863241Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 15 23:44:22.867521 waagent[2029]: 2026-01-15T23:44:22.867475Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 15 23:44:22.871600 waagent[2029]: 2026-01-15T23:44:22.871557Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 15 23:44:22.908456 login[2031]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:44:22.912616 waagent[2029]: 2026-01-15T23:44:22.912582Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 15 23:44:22.913188 systemd-logind[1880]: New session 1 of user core. Jan 15 23:44:22.918800 waagent[2029]: 2026-01-15T23:44:22.918778Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 15 23:44:22.919725 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jan 15 23:44:22.923891 waagent[2029]: 2026-01-15T23:44:22.923840Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 15 23:44:23.035444 waagent[2029]: 2026-01-15T23:44:23.035349Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 15 23:44:23.040422 waagent[2029]: 2026-01-15T23:44:23.040379Z INFO Daemon Daemon Forcing an update of the goal state. Jan 15 23:44:23.047242 waagent[2029]: 2026-01-15T23:44:23.047202Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 15 23:44:23.064197 waagent[2029]: 2026-01-15T23:44:23.064163Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 15 23:44:23.068451 waagent[2029]: 2026-01-15T23:44:23.068417Z INFO Daemon Jan 15 23:44:23.070560 waagent[2029]: 2026-01-15T23:44:23.070531Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 41c9b7a5-71b9-4994-b0fd-d7bb49203080 eTag: 14782472596138869646 source: Fabric] Jan 15 23:44:23.079162 waagent[2029]: 2026-01-15T23:44:23.079130Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 15 23:44:23.083911 waagent[2029]: 2026-01-15T23:44:23.083878Z INFO Daemon Jan 15 23:44:23.085975 waagent[2029]: 2026-01-15T23:44:23.085943Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 15 23:44:23.093839 waagent[2029]: 2026-01-15T23:44:23.093810Z INFO Daemon Daemon Downloading artifacts profile blob Jan 15 23:44:23.155450 waagent[2029]: 2026-01-15T23:44:23.155398Z INFO Daemon Downloaded certificate {'thumbprint': 'EE48F914C0A14376873A05DD3CADAC3FDFB0AA9C', 'hasPrivateKey': True} Jan 15 23:44:23.162667 waagent[2029]: 2026-01-15T23:44:23.162569Z INFO Daemon Fetch goal state completed Jan 15 23:44:23.171073 waagent[2029]: 2026-01-15T23:44:23.171043Z INFO Daemon Daemon Starting provisioning Jan 15 23:44:23.174980 waagent[2029]: 2026-01-15T23:44:23.174949Z INFO Daemon Daemon Handle ovf-env.xml. 
Jan 15 23:44:23.178501 waagent[2029]: 2026-01-15T23:44:23.178477Z INFO Daemon Daemon Set hostname [ci-4459.2.2-n-6dfb6e6787] Jan 15 23:44:23.205460 waagent[2029]: 2026-01-15T23:44:23.205416Z INFO Daemon Daemon Publish hostname [ci-4459.2.2-n-6dfb6e6787] Jan 15 23:44:23.210134 waagent[2029]: 2026-01-15T23:44:23.210100Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 15 23:44:23.214842 waagent[2029]: 2026-01-15T23:44:23.214809Z INFO Daemon Daemon Primary interface is [eth0] Jan 15 23:44:23.224684 systemd-networkd[1483]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 23:44:23.224691 systemd-networkd[1483]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 23:44:23.224717 systemd-networkd[1483]: eth0: DHCP lease lost Jan 15 23:44:23.225658 waagent[2029]: 2026-01-15T23:44:23.225606Z INFO Daemon Daemon Create user account if not exists Jan 15 23:44:23.229794 waagent[2029]: 2026-01-15T23:44:23.229760Z INFO Daemon Daemon User core already exists, skip useradd Jan 15 23:44:23.234582 waagent[2029]: 2026-01-15T23:44:23.234543Z INFO Daemon Daemon Configure sudoer Jan 15 23:44:23.247760 waagent[2029]: 2026-01-15T23:44:23.247718Z INFO Daemon Daemon Configure sshd Jan 15 23:44:23.252679 systemd-networkd[1483]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 15 23:44:23.315842 waagent[2029]: 2026-01-15T23:44:23.315780Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 15 23:44:23.324915 waagent[2029]: 2026-01-15T23:44:23.324882Z INFO Daemon Daemon Deploy ssh public key. 
Jan 15 23:44:24.422538 waagent[2029]: 2026-01-15T23:44:24.422494Z INFO Daemon Daemon Provisioning complete Jan 15 23:44:24.438837 waagent[2029]: 2026-01-15T23:44:24.438800Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 15 23:44:24.443494 waagent[2029]: 2026-01-15T23:44:24.443463Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 15 23:44:24.450860 waagent[2029]: 2026-01-15T23:44:24.450833Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 15 23:44:24.550116 waagent[2128]: 2026-01-15T23:44:24.550055Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 15 23:44:24.551551 waagent[2128]: 2026-01-15T23:44:24.550485Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.2 Jan 15 23:44:24.551551 waagent[2128]: 2026-01-15T23:44:24.550539Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 15 23:44:24.551551 waagent[2128]: 2026-01-15T23:44:24.550575Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jan 15 23:44:24.600333 waagent[2128]: 2026-01-15T23:44:24.600282Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 15 23:44:24.600602 waagent[2128]: 2026-01-15T23:44:24.600573Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 15 23:44:24.600761 waagent[2128]: 2026-01-15T23:44:24.600733Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 15 23:44:24.606079 waagent[2128]: 2026-01-15T23:44:24.606024Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 15 23:44:24.610302 waagent[2128]: 2026-01-15T23:44:24.610268Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 15 23:44:24.610777 waagent[2128]: 2026-01-15T23:44:24.610740Z INFO ExtHandler Jan 15 23:44:24.610907 waagent[2128]: 
2026-01-15T23:44:24.610882Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 60a26c65-2e70-4843-bf17-731e70bb4ac1 eTag: 14782472596138869646 source: Fabric] Jan 15 23:44:24.611212 waagent[2128]: 2026-01-15T23:44:24.611179Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 15 23:44:24.611754 waagent[2128]: 2026-01-15T23:44:24.611717Z INFO ExtHandler Jan 15 23:44:24.611870 waagent[2128]: 2026-01-15T23:44:24.611849Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 15 23:44:24.614522 waagent[2128]: 2026-01-15T23:44:24.614491Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 15 23:44:24.669426 waagent[2128]: 2026-01-15T23:44:24.669381Z INFO ExtHandler Downloaded certificate {'thumbprint': 'EE48F914C0A14376873A05DD3CADAC3FDFB0AA9C', 'hasPrivateKey': True} Jan 15 23:44:24.669940 waagent[2128]: 2026-01-15T23:44:24.669905Z INFO ExtHandler Fetch goal state completed Jan 15 23:44:24.680695 waagent[2128]: 2026-01-15T23:44:24.680461Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Jan 15 23:44:24.683828 waagent[2128]: 2026-01-15T23:44:24.683784Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2128 Jan 15 23:44:24.683928 waagent[2128]: 2026-01-15T23:44:24.683900Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 15 23:44:24.684168 waagent[2128]: 2026-01-15T23:44:24.684139Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 15 23:44:24.685256 waagent[2128]: 2026-01-15T23:44:24.685220Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] Jan 15 23:44:24.685577 waagent[2128]: 2026-01-15T23:44:24.685546Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', 
'4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 15 23:44:24.685720 waagent[2128]: 2026-01-15T23:44:24.685693Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 15 23:44:24.686142 waagent[2128]: 2026-01-15T23:44:24.686110Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 15 23:44:24.745540 waagent[2128]: 2026-01-15T23:44:24.745503Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 15 23:44:24.745712 waagent[2128]: 2026-01-15T23:44:24.745683Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 15 23:44:24.750396 waagent[2128]: 2026-01-15T23:44:24.750023Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 15 23:44:24.754549 systemd[1]: Reload requested from client PID 2143 ('systemctl') (unit waagent.service)... Jan 15 23:44:24.754778 systemd[1]: Reloading... Jan 15 23:44:24.825667 zram_generator::config[2188]: No configuration found. Jan 15 23:44:24.872322 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#27 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 15 23:44:24.872956 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#28 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jan 15 23:44:24.987537 systemd[1]: Reloading finished in 232 ms. Jan 15 23:44:24.998554 waagent[2128]: 2026-01-15T23:44:24.998487Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 15 23:44:24.998666 waagent[2128]: 2026-01-15T23:44:24.998640Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 15 23:44:26.085781 waagent[2128]: 2026-01-15T23:44:26.085701Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Jan 15 23:44:26.086070 waagent[2128]: 2026-01-15T23:44:26.086021Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 15 23:44:26.086725 waagent[2128]: 2026-01-15T23:44:26.086681Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 15 23:44:26.087010 waagent[2128]: 2026-01-15T23:44:26.086972Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 15 23:44:26.087648 waagent[2128]: 2026-01-15T23:44:26.087223Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 15 23:44:26.087648 waagent[2128]: 2026-01-15T23:44:26.087291Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 15 23:44:26.087648 waagent[2128]: 2026-01-15T23:44:26.087447Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jan 15 23:44:26.087648 waagent[2128]: 2026-01-15T23:44:26.087574Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 15 23:44:26.087648 waagent[2128]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 15 23:44:26.087648 waagent[2128]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 15 23:44:26.087648 waagent[2128]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 15 23:44:26.087648 waagent[2128]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 15 23:44:26.087648 waagent[2128]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 15 23:44:26.087648 waagent[2128]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 15 23:44:26.088009 waagent[2128]: 2026-01-15T23:44:26.087950Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 15 23:44:26.088061 waagent[2128]: 2026-01-15T23:44:26.088011Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 15 23:44:26.088278 waagent[2128]: 2026-01-15T23:44:26.088251Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 15 23:44:26.088390 waagent[2128]: 2026-01-15T23:44:26.088370Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 15 23:44:26.088574 waagent[2128]: 2026-01-15T23:44:26.088543Z INFO EnvHandler ExtHandler Configure routes Jan 15 23:44:26.088938 waagent[2128]: 2026-01-15T23:44:26.088911Z INFO EnvHandler ExtHandler Gateway:None Jan 15 23:44:26.089047 waagent[2128]: 2026-01-15T23:44:26.089027Z INFO EnvHandler ExtHandler Routes:None Jan 15 23:44:26.089196 waagent[2128]: 2026-01-15T23:44:26.089164Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 15 23:44:26.089323 waagent[2128]: 2026-01-15T23:44:26.089215Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jan 15 23:44:26.089836 waagent[2128]: 2026-01-15T23:44:26.089809Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 15 23:44:26.095289 waagent[2128]: 2026-01-15T23:44:26.095206Z INFO ExtHandler ExtHandler Jan 15 23:44:26.095372 waagent[2128]: 2026-01-15T23:44:26.095327Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 443c7167-9e09-48cb-8f4b-cadd5db64290 correlation b0630c52-2a16-42f1-a68f-9928be7c250e created: 2026-01-15T23:43:26.919037Z] Jan 15 23:44:26.096039 waagent[2128]: 2026-01-15T23:44:26.095947Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 15 23:44:26.096533 waagent[2128]: 2026-01-15T23:44:26.096499Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 15 23:44:26.118435 waagent[2128]: 2026-01-15T23:44:26.118389Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 15 23:44:26.118435 waagent[2128]: Try `iptables -h' or 'iptables --help' for more information.) 
Jan 15 23:44:26.119067 waagent[2128]: 2026-01-15T23:44:26.118988Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: CC046E0D-7770-43B0-BF4C-793BFBDD5DBC;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 15 23:44:26.141958 waagent[2128]: 2026-01-15T23:44:26.141902Z INFO MonitorHandler ExtHandler Network interfaces: Jan 15 23:44:26.141958 waagent[2128]: Executing ['ip', '-a', '-o', 'link']: Jan 15 23:44:26.141958 waagent[2128]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 15 23:44:26.141958 waagent[2128]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:b6:fb:72 brd ff:ff:ff:ff:ff:ff Jan 15 23:44:26.141958 waagent[2128]: 3: enP50726s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:b6:fb:72 brd ff:ff:ff:ff:ff:ff\ altname enP50726p0s2 Jan 15 23:44:26.141958 waagent[2128]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 15 23:44:26.141958 waagent[2128]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 15 23:44:26.141958 waagent[2128]: 2: eth0 inet 10.200.20.15/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 15 23:44:26.141958 waagent[2128]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 15 23:44:26.141958 waagent[2128]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 15 23:44:26.141958 waagent[2128]: 2: eth0 inet6 fe80::7eed:8dff:feb6:fb72/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 15 23:44:26.193522 waagent[2128]: 2026-01-15T23:44:26.193469Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 15 23:44:26.193522 waagent[2128]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 
15 23:44:26.193522 waagent[2128]: pkts bytes target prot opt in out source destination Jan 15 23:44:26.193522 waagent[2128]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 15 23:44:26.193522 waagent[2128]: pkts bytes target prot opt in out source destination Jan 15 23:44:26.193522 waagent[2128]: Chain OUTPUT (policy ACCEPT 5 packets, 646 bytes) Jan 15 23:44:26.193522 waagent[2128]: pkts bytes target prot opt in out source destination Jan 15 23:44:26.193522 waagent[2128]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 15 23:44:26.193522 waagent[2128]: 4 416 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 15 23:44:26.193522 waagent[2128]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 15 23:44:26.197130 waagent[2128]: 2026-01-15T23:44:26.197085Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 15 23:44:26.197130 waagent[2128]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 15 23:44:26.197130 waagent[2128]: pkts bytes target prot opt in out source destination Jan 15 23:44:26.197130 waagent[2128]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 15 23:44:26.197130 waagent[2128]: pkts bytes target prot opt in out source destination Jan 15 23:44:26.197130 waagent[2128]: Chain OUTPUT (policy ACCEPT 5 packets, 646 bytes) Jan 15 23:44:26.197130 waagent[2128]: pkts bytes target prot opt in out source destination Jan 15 23:44:26.197130 waagent[2128]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 15 23:44:26.197130 waagent[2128]: 9 816 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 15 23:44:26.197130 waagent[2128]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 15 23:44:26.197310 waagent[2128]: 2026-01-15T23:44:26.197287Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 15 23:44:31.952485 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jan 15 23:44:31.953788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 23:44:32.058685 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 23:44:32.061591 (kubelet)[2283]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 23:44:32.183076 kubelet[2283]: E0115 23:44:32.183030 2283 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 23:44:32.186020 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 23:44:32.186129 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 23:44:32.186567 systemd[1]: kubelet.service: Consumed 109ms CPU time, 107.6M memory peak. Jan 15 23:44:42.436689 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 15 23:44:42.439784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 23:44:42.763757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 15 23:44:42.771835 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 23:44:42.796568 kubelet[2298]: E0115 23:44:42.796520 2298 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 23:44:42.798739 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 23:44:42.798930 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 23:44:42.799252 systemd[1]: kubelet.service: Consumed 104ms CPU time, 104.8M memory peak. Jan 15 23:44:44.250747 chronyd[1858]: Selected source PHC0 Jan 15 23:44:50.209758 waagent[2128]: 2026-01-15T23:44:50.209115Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 15 23:44:50.215675 waagent[2128]: 2026-01-15T23:44:50.215645Z INFO ExtHandler Jan 15 23:44:50.215816 waagent[2128]: 2026-01-15T23:44:50.215797Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 807a2813-89af-4efd-a10e-53f2f24612ba eTag: 8689906094196317294 source: Fabric] Jan 15 23:44:50.216130 waagent[2128]: 2026-01-15T23:44:50.216104Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 15 23:44:50.216713 waagent[2128]: 2026-01-15T23:44:50.216679Z INFO ExtHandler Jan 15 23:44:50.216850 waagent[2128]: 2026-01-15T23:44:50.216827Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 15 23:44:50.267244 waagent[2128]: 2026-01-15T23:44:50.267212Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 15 23:44:50.313655 waagent[2128]: 2026-01-15T23:44:50.313537Z INFO ExtHandler Downloaded certificate {'thumbprint': 'EE48F914C0A14376873A05DD3CADAC3FDFB0AA9C', 'hasPrivateKey': True} Jan 15 23:44:50.313939 waagent[2128]: 2026-01-15T23:44:50.313902Z INFO ExtHandler Fetch goal state completed Jan 15 23:44:50.314215 waagent[2128]: 2026-01-15T23:44:50.314186Z INFO ExtHandler ExtHandler Jan 15 23:44:50.314260 waagent[2128]: 2026-01-15T23:44:50.314242Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 31db35ee-898a-4212-845f-7653030e1da3 correlation b0630c52-2a16-42f1-a68f-9928be7c250e created: 2026-01-15T23:44:40.872498Z] Jan 15 23:44:50.314474 waagent[2128]: 2026-01-15T23:44:50.314447Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 15 23:44:50.314880 waagent[2128]: 2026-01-15T23:44:50.314847Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 15 23:44:52.834964 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 15 23:44:52.836208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 23:44:53.082792 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 15 23:44:53.085325 (kubelet)[2318]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 15 23:44:53.110050 kubelet[2318]: E0115 23:44:53.109998 2318 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 15 23:44:53.111836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 15 23:44:53.112038 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 15 23:44:53.112504 systemd[1]: kubelet.service: Consumed 103ms CPU time, 106.9M memory peak.
Jan 15 23:44:53.532861 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 15 23:44:53.534046 systemd[1]: Started sshd@0-10.200.20.15:22-10.200.16.10:55490.service - OpenSSH per-connection server daemon (10.200.16.10:55490).
Jan 15 23:44:54.158736 sshd[2326]: Accepted publickey for core from 10.200.16.10 port 55490 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:44:54.159260 sshd-session[2326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:44:54.162845 systemd-logind[1880]: New session 3 of user core.
Jan 15 23:44:54.173908 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 15 23:44:54.588512 systemd[1]: Started sshd@1-10.200.20.15:22-10.200.16.10:55492.service - OpenSSH per-connection server daemon (10.200.16.10:55492).
Jan 15 23:44:55.043749 sshd[2332]: Accepted publickey for core from 10.200.16.10 port 55492 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:44:55.044888 sshd-session[2332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:44:55.048455 systemd-logind[1880]: New session 4 of user core.
Jan 15 23:44:55.059737 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 15 23:44:55.376340 sshd[2335]: Connection closed by 10.200.16.10 port 55492
Jan 15 23:44:55.376256 sshd-session[2332]: pam_unix(sshd:session): session closed for user core
Jan 15 23:44:55.379581 systemd[1]: sshd@1-10.200.20.15:22-10.200.16.10:55492.service: Deactivated successfully.
Jan 15 23:44:55.381547 systemd[1]: session-4.scope: Deactivated successfully.
Jan 15 23:44:55.382490 systemd-logind[1880]: Session 4 logged out. Waiting for processes to exit.
Jan 15 23:44:55.384043 systemd-logind[1880]: Removed session 4.
Jan 15 23:44:55.444564 systemd[1]: Started sshd@2-10.200.20.15:22-10.200.16.10:55496.service - OpenSSH per-connection server daemon (10.200.16.10:55496).
Jan 15 23:44:55.859691 sshd[2341]: Accepted publickey for core from 10.200.16.10 port 55496 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:44:55.860741 sshd-session[2341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:44:55.864287 systemd-logind[1880]: New session 5 of user core.
Jan 15 23:44:55.871739 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 15 23:44:56.174804 sshd[2344]: Connection closed by 10.200.16.10 port 55496
Jan 15 23:44:56.174661 sshd-session[2341]: pam_unix(sshd:session): session closed for user core
Jan 15 23:44:56.179003 systemd[1]: sshd@2-10.200.20.15:22-10.200.16.10:55496.service: Deactivated successfully.
Jan 15 23:44:56.180716 systemd[1]: session-5.scope: Deactivated successfully.
Jan 15 23:44:56.182112 systemd-logind[1880]: Session 5 logged out. Waiting for processes to exit.
Jan 15 23:44:56.183026 systemd-logind[1880]: Removed session 5.
Jan 15 23:44:56.260827 systemd[1]: Started sshd@3-10.200.20.15:22-10.200.16.10:55498.service - OpenSSH per-connection server daemon (10.200.16.10:55498).
Jan 15 23:44:56.712480 sshd[2350]: Accepted publickey for core from 10.200.16.10 port 55498 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:44:56.713538 sshd-session[2350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:44:56.717450 systemd-logind[1880]: New session 6 of user core.
Jan 15 23:44:56.723736 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 15 23:44:57.038992 sshd[2353]: Connection closed by 10.200.16.10 port 55498
Jan 15 23:44:57.039615 sshd-session[2350]: pam_unix(sshd:session): session closed for user core
Jan 15 23:44:57.042869 systemd-logind[1880]: Session 6 logged out. Waiting for processes to exit.
Jan 15 23:44:57.043503 systemd[1]: sshd@3-10.200.20.15:22-10.200.16.10:55498.service: Deactivated successfully.
Jan 15 23:44:57.045155 systemd[1]: session-6.scope: Deactivated successfully.
Jan 15 23:44:57.046899 systemd-logind[1880]: Removed session 6.
Jan 15 23:44:57.138816 systemd[1]: Started sshd@4-10.200.20.15:22-10.200.16.10:55504.service - OpenSSH per-connection server daemon (10.200.16.10:55504).
Jan 15 23:44:57.628459 sshd[2359]: Accepted publickey for core from 10.200.16.10 port 55504 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:44:57.629518 sshd-session[2359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:44:57.633180 systemd-logind[1880]: New session 7 of user core.
Jan 15 23:44:57.643918 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 15 23:44:58.061315 sudo[2363]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 15 23:44:58.061531 sudo[2363]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 15 23:44:58.091149 sudo[2363]: pam_unix(sudo:session): session closed for user root
Jan 15 23:44:58.169951 sshd[2362]: Connection closed by 10.200.16.10 port 55504
Jan 15 23:44:58.169099 sshd-session[2359]: pam_unix(sshd:session): session closed for user core
Jan 15 23:44:58.172811 systemd[1]: sshd@4-10.200.20.15:22-10.200.16.10:55504.service: Deactivated successfully.
Jan 15 23:44:58.174984 systemd[1]: session-7.scope: Deactivated successfully.
Jan 15 23:44:58.175821 systemd-logind[1880]: Session 7 logged out. Waiting for processes to exit.
Jan 15 23:44:58.177465 systemd-logind[1880]: Removed session 7.
Jan 15 23:44:58.237421 systemd[1]: Started sshd@5-10.200.20.15:22-10.200.16.10:55512.service - OpenSSH per-connection server daemon (10.200.16.10:55512).
Jan 15 23:44:58.653258 sshd[2369]: Accepted publickey for core from 10.200.16.10 port 55512 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:44:58.654012 sshd-session[2369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:44:58.657777 systemd-logind[1880]: New session 8 of user core.
Jan 15 23:44:58.665762 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 15 23:44:58.888882 sudo[2374]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 15 23:44:58.889092 sudo[2374]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 15 23:44:58.895686 sudo[2374]: pam_unix(sudo:session): session closed for user root
Jan 15 23:44:58.899119 sudo[2373]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 15 23:44:58.899308 sudo[2373]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 15 23:44:58.906975 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 15 23:44:58.931978 augenrules[2396]: No rules
Jan 15 23:44:58.933099 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 15 23:44:58.933348 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 15 23:44:58.936741 sudo[2373]: pam_unix(sudo:session): session closed for user root
Jan 15 23:44:59.022349 sshd[2372]: Connection closed by 10.200.16.10 port 55512
Jan 15 23:44:59.022250 sshd-session[2369]: pam_unix(sshd:session): session closed for user core
Jan 15 23:44:59.025134 systemd[1]: sshd@5-10.200.20.15:22-10.200.16.10:55512.service: Deactivated successfully.
Jan 15 23:44:59.026751 systemd[1]: session-8.scope: Deactivated successfully.
Jan 15 23:44:59.027907 systemd-logind[1880]: Session 8 logged out. Waiting for processes to exit.
Jan 15 23:44:59.029303 systemd-logind[1880]: Removed session 8.
Jan 15 23:44:59.101041 systemd[1]: Started sshd@6-10.200.20.15:22-10.200.16.10:55524.service - OpenSSH per-connection server daemon (10.200.16.10:55524).
Jan 15 23:44:59.520696 sshd[2405]: Accepted publickey for core from 10.200.16.10 port 55524 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:44:59.521550 sshd-session[2405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:44:59.525050 systemd-logind[1880]: New session 9 of user core.
Jan 15 23:44:59.536751 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 15 23:44:59.755994 sudo[2409]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 15 23:44:59.756208 sudo[2409]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 15 23:45:01.223406 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 15 23:45:01.233873 (dockerd)[2427]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 15 23:45:02.167396 dockerd[2427]: time="2026-01-15T23:45:02.167340781Z" level=info msg="Starting up"
Jan 15 23:45:02.169202 dockerd[2427]: time="2026-01-15T23:45:02.169174207Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 15 23:45:02.177199 dockerd[2427]: time="2026-01-15T23:45:02.177164606Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 15 23:45:02.283859 dockerd[2427]: time="2026-01-15T23:45:02.283819584Z" level=info msg="Loading containers: start."
Jan 15 23:45:02.312709 kernel: Initializing XFRM netlink socket
Jan 15 23:45:02.617731 systemd-networkd[1483]: docker0: Link UP
Jan 15 23:45:02.634004 dockerd[2427]: time="2026-01-15T23:45:02.633966034Z" level=info msg="Loading containers: done."
Jan 15 23:45:02.652222 dockerd[2427]: time="2026-01-15T23:45:02.652181088Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 15 23:45:02.652360 dockerd[2427]: time="2026-01-15T23:45:02.652251185Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 15 23:45:02.652360 dockerd[2427]: time="2026-01-15T23:45:02.652329858Z" level=info msg="Initializing buildkit"
Jan 15 23:45:02.701106 dockerd[2427]: time="2026-01-15T23:45:02.701064251Z" level=info msg="Completed buildkit initialization"
Jan 15 23:45:02.706142 dockerd[2427]: time="2026-01-15T23:45:02.706106481Z" level=info msg="Daemon has completed initialization"
Jan 15 23:45:02.706638 dockerd[2427]: time="2026-01-15T23:45:02.706418806Z" level=info msg="API listen on /run/docker.sock"
Jan 15 23:45:02.706547 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 15 23:45:03.334950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 15 23:45:03.336521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 23:45:03.395446 containerd[1903]: time="2026-01-15T23:45:03.395037294Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Jan 15 23:45:03.622744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 23:45:03.628964 (kubelet)[2642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 15 23:45:03.655298 kubelet[2642]: E0115 23:45:03.655228 2642 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 15 23:45:03.657282 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 15 23:45:03.657389 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 15 23:45:03.657710 systemd[1]: kubelet.service: Consumed 105ms CPU time, 104.7M memory peak.
Jan 15 23:45:04.329440 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jan 15 23:45:04.788153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2013204879.mount: Deactivated successfully.
Jan 15 23:45:05.992583 containerd[1903]: time="2026-01-15T23:45:05.991988929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:45:05.997954 containerd[1903]: time="2026-01-15T23:45:05.997930692Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=27387281"
Jan 15 23:45:06.001579 containerd[1903]: time="2026-01-15T23:45:06.001554566Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:45:06.005516 containerd[1903]: time="2026-01-15T23:45:06.005485005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:45:06.006193 containerd[1903]: time="2026-01-15T23:45:06.006169927Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 2.611084344s"
Jan 15 23:45:06.006245 containerd[1903]: time="2026-01-15T23:45:06.006196319Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\""
Jan 15 23:45:06.007728 containerd[1903]: time="2026-01-15T23:45:06.007701036Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Jan 15 23:45:06.319201 update_engine[1884]: I20260115 23:45:06.318656 1884 update_attempter.cc:509] Updating boot flags...
Jan 15 23:45:07.794965 containerd[1903]: time="2026-01-15T23:45:07.794913915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:45:07.798693 containerd[1903]: time="2026-01-15T23:45:07.798666007Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23553081"
Jan 15 23:45:07.801662 containerd[1903]: time="2026-01-15T23:45:07.801639705Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:45:07.806328 containerd[1903]: time="2026-01-15T23:45:07.806301322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:45:07.809136 containerd[1903]: time="2026-01-15T23:45:07.809106737Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.801381397s"
Jan 15 23:45:07.809167 containerd[1903]: time="2026-01-15T23:45:07.809141314Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\""
Jan 15 23:45:07.811607 containerd[1903]: time="2026-01-15T23:45:07.811584364Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Jan 15 23:45:09.213030 containerd[1903]: time="2026-01-15T23:45:09.212975333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:45:09.215922 containerd[1903]: time="2026-01-15T23:45:09.215893347Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18298067"
Jan 15 23:45:09.219110 containerd[1903]: time="2026-01-15T23:45:09.219075067Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:45:09.223784 containerd[1903]: time="2026-01-15T23:45:09.223752827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:45:09.224869 containerd[1903]: time="2026-01-15T23:45:09.224327489Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.412715173s"
Jan 15 23:45:09.224869 containerd[1903]: time="2026-01-15T23:45:09.224353225Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\""
Jan 15 23:45:09.224951 containerd[1903]: time="2026-01-15T23:45:09.224936607Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Jan 15 23:45:10.367563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4275576032.mount: Deactivated successfully.
Jan 15 23:45:10.639763 containerd[1903]: time="2026-01-15T23:45:10.639640613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:45:10.642654 containerd[1903]: time="2026-01-15T23:45:10.642630403Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258673"
Jan 15 23:45:10.645912 containerd[1903]: time="2026-01-15T23:45:10.645887581Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:45:10.649834 containerd[1903]: time="2026-01-15T23:45:10.649809828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:45:10.650100 containerd[1903]: time="2026-01-15T23:45:10.650070007Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.42511464s"
Jan 15 23:45:10.650100 containerd[1903]: time="2026-01-15T23:45:10.650101343Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\""
Jan 15 23:45:10.650592 containerd[1903]: time="2026-01-15T23:45:10.650524892Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jan 15 23:45:11.344387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3455368569.mount: Deactivated successfully.
Jan 15 23:45:12.650905 containerd[1903]: time="2026-01-15T23:45:12.650851116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:45:12.655505 containerd[1903]: time="2026-01-15T23:45:12.655474251Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Jan 15 23:45:12.658847 containerd[1903]: time="2026-01-15T23:45:12.658822117Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:45:12.663533 containerd[1903]: time="2026-01-15T23:45:12.663503389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:45:12.664591 containerd[1903]: time="2026-01-15T23:45:12.664563136Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.014012732s"
Jan 15 23:45:12.664615 containerd[1903]: time="2026-01-15T23:45:12.664596904Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Jan 15 23:45:12.665241 containerd[1903]: time="2026-01-15T23:45:12.665008117Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 15 23:45:13.195130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3286971601.mount: Deactivated successfully.
Jan 15 23:45:13.267963 containerd[1903]: time="2026-01-15T23:45:13.267910448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 15 23:45:13.271190 containerd[1903]: time="2026-01-15T23:45:13.271161081Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jan 15 23:45:13.273937 containerd[1903]: time="2026-01-15T23:45:13.273911710Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 15 23:45:13.277997 containerd[1903]: time="2026-01-15T23:45:13.277970023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 15 23:45:13.278411 containerd[1903]: time="2026-01-15T23:45:13.278390515Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 613.357742ms"
Jan 15 23:45:13.278430 containerd[1903]: time="2026-01-15T23:45:13.278417508Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 15 23:45:13.279060 containerd[1903]: time="2026-01-15T23:45:13.279035098Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jan 15 23:45:13.834944 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 15 23:45:13.836438 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 23:45:14.082507 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 23:45:14.085163 (kubelet)[2853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 15 23:45:14.110005 kubelet[2853]: E0115 23:45:14.109958 2853 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 15 23:45:14.112024 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 15 23:45:14.112133 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 15 23:45:14.112728 systemd[1]: kubelet.service: Consumed 104ms CPU time, 104.5M memory peak.
Jan 15 23:45:14.342435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount484480235.mount: Deactivated successfully.
Jan 15 23:45:16.849148 containerd[1903]: time="2026-01-15T23:45:16.849098283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:45:16.851983 containerd[1903]: time="2026-01-15T23:45:16.851957325Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013651"
Jan 15 23:45:16.855479 containerd[1903]: time="2026-01-15T23:45:16.855384317Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:45:16.859361 containerd[1903]: time="2026-01-15T23:45:16.859322964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:45:16.860368 containerd[1903]: time="2026-01-15T23:45:16.859940043Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.580878265s"
Jan 15 23:45:16.860368 containerd[1903]: time="2026-01-15T23:45:16.859970171Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Jan 15 23:45:20.546397 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 23:45:20.546502 systemd[1]: kubelet.service: Consumed 104ms CPU time, 104.5M memory peak.
Jan 15 23:45:20.548053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 23:45:20.565916 systemd[1]: Reload requested from client PID 2943 ('systemctl') (unit session-9.scope)...
Jan 15 23:45:20.565929 systemd[1]: Reloading...
Jan 15 23:45:20.643642 zram_generator::config[2985]: No configuration found.
Jan 15 23:45:20.808038 systemd[1]: Reloading finished in 241 ms.
Jan 15 23:45:20.848975 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 15 23:45:20.849033 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 15 23:45:20.849282 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 23:45:20.849320 systemd[1]: kubelet.service: Consumed 74ms CPU time, 94.9M memory peak.
Jan 15 23:45:20.850481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 23:45:21.101760 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 23:45:21.107839 (kubelet)[3056]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 15 23:45:21.130424 kubelet[3056]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 15 23:45:21.130678 kubelet[3056]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 15 23:45:21.130717 kubelet[3056]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 15 23:45:21.130821 kubelet[3056]: I0115 23:45:21.130797 3056 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 15 23:45:21.491803 kubelet[3056]: I0115 23:45:21.491699 3056 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jan 15 23:45:21.493528 kubelet[3056]: I0115 23:45:21.492035 3056 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 15 23:45:21.493528 kubelet[3056]: I0115 23:45:21.492226 3056 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 15 23:45:21.509460 kubelet[3056]: E0115 23:45:21.509406 3056 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 15 23:45:21.509893 kubelet[3056]: I0115 23:45:21.509872 3056 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 15 23:45:21.519393 kubelet[3056]: I0115 23:45:21.519378 3056 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 15 23:45:21.521903 kubelet[3056]: I0115 23:45:21.521885 3056 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 15 23:45:21.523383 kubelet[3056]: I0115 23:45:21.523348 3056 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 15 23:45:21.523593 kubelet[3056]: I0115 23:45:21.523463 3056 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-n-6dfb6e6787","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 15 23:45:21.523750 kubelet[3056]: I0115 23:45:21.523738 3056 topology_manager.go:138] "Creating topology manager with none policy"
Jan 15 23:45:21.523792 kubelet[3056]: I0115 23:45:21.523786 3056 container_manager_linux.go:303] "Creating device plugin manager"
Jan 15 23:45:21.524617 kubelet[3056]: I0115 23:45:21.524604 3056 state_mem.go:36] "Initialized new in-memory state store"
Jan 15 23:45:21.527188 kubelet[3056]: I0115 23:45:21.527171 3056 kubelet.go:480] "Attempting to sync node with API server"
Jan 15 23:45:21.527274 kubelet[3056]: I0115 23:45:21.527264 3056 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 15 23:45:21.527339 kubelet[3056]: I0115 23:45:21.527332 3056 kubelet.go:386] "Adding apiserver pod source"
Jan 15 23:45:21.527394 kubelet[3056]: I0115 23:45:21.527387 3056 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 15 23:45:21.530640 kubelet[3056]: E0115 23:45:21.530608 3056 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-n-6dfb6e6787&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 15 23:45:21.530940 kubelet[3056]: E0115 23:45:21.530915 3056 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 15 23:45:21.531238 kubelet[3056]: I0115 23:45:21.531218 3056 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 15 23:45:21.532704 kubelet[3056]: I0115 23:45:21.531579 3056 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 15 23:45:21.532704 kubelet[3056]: W0115 23:45:21.531636 3056 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 15 23:45:21.534000 kubelet[3056]: I0115 23:45:21.533981 3056 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 15 23:45:21.534057 kubelet[3056]: I0115 23:45:21.534033 3056 server.go:1289] "Started kubelet"
Jan 15 23:45:21.534216 kubelet[3056]: I0115 23:45:21.534185 3056 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 15 23:45:21.535572 kubelet[3056]: I0115 23:45:21.535532 3056 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 15 23:45:21.537115 kubelet[3056]: I0115 23:45:21.535770 3056 server.go:317] "Adding debug handlers to kubelet server"
Jan 15 23:45:21.537115 kubelet[3056]: I0115 23:45:21.535821 3056 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 15 23:45:21.537115 kubelet[3056]: I0115 23:45:21.537001 3056 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 15 23:45:21.539717 kubelet[3056]: E0115 23:45:21.538150 3056 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.15:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.15:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-n-6dfb6e6787.188b0c39e8f38be5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-n-6dfb6e6787,UID:ci-4459.2.2-n-6dfb6e6787,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-n-6dfb6e6787,},FirstTimestamp:2026-01-15 23:45:21.533996005 +0000 UTC m=+0.423039368,LastTimestamp:2026-01-15 23:45:21.533996005 +0000 UTC m=+0.423039368,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-n-6dfb6e6787,}" Jan 15 23:45:21.540508 kubelet[3056]: I0115 23:45:21.540107 3056 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 23:45:21.542491 kubelet[3056]: E0115 23:45:21.542464 3056 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-6dfb6e6787\" not found" Jan 15 23:45:21.542551 kubelet[3056]: I0115 23:45:21.542498 3056 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 15 23:45:21.542677 kubelet[3056]: I0115 23:45:21.542661 3056 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 15 23:45:21.542724 kubelet[3056]: I0115 23:45:21.542708 3056 reconciler.go:26] "Reconciler: start to sync state" Jan 15 23:45:21.543189 kubelet[3056]: E0115 23:45:21.543167 3056 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 15 23:45:21.543359 kubelet[3056]: I0115 23:45:21.543337 3056 factory.go:223] Registration of the systemd container factory successfully Jan 15 23:45:21.543416 kubelet[3056]: I0115 23:45:21.543398 3056 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 23:45:21.543725 kubelet[3056]: E0115 23:45:21.543706 3056 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 23:45:21.545055 kubelet[3056]: E0115 23:45:21.545025 3056 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-6dfb6e6787?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="200ms" Jan 15 23:45:21.545287 kubelet[3056]: I0115 23:45:21.545267 3056 factory.go:223] Registration of the containerd container factory successfully Jan 15 23:45:21.555802 kubelet[3056]: I0115 23:45:21.555789 3056 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 15 23:45:21.556017 kubelet[3056]: I0115 23:45:21.555878 3056 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 15 23:45:21.556017 kubelet[3056]: I0115 23:45:21.555895 3056 state_mem.go:36] "Initialized new in-memory state store" Jan 15 23:45:21.629832 kubelet[3056]: I0115 23:45:21.629805 3056 policy_none.go:49] "None policy: Start" Jan 15 23:45:21.629960 kubelet[3056]: I0115 23:45:21.629952 3056 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 15 23:45:21.630048 kubelet[3056]: I0115 23:45:21.630037 3056 state_mem.go:35] "Initializing new in-memory state store" Jan 15 23:45:21.642739 kubelet[3056]: E0115 23:45:21.642706 3056 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-6dfb6e6787\" not found" Jan 15 23:45:21.645052 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 15 23:45:21.653189 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 15 23:45:21.655954 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 15 23:45:21.666189 kubelet[3056]: E0115 23:45:21.666163 3056 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 15 23:45:21.666331 kubelet[3056]: I0115 23:45:21.666314 3056 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 23:45:21.666355 kubelet[3056]: I0115 23:45:21.666330 3056 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 23:45:21.668090 kubelet[3056]: I0115 23:45:21.667999 3056 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 23:45:21.668489 kubelet[3056]: E0115 23:45:21.668399 3056 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 15 23:45:21.668489 kubelet[3056]: E0115 23:45:21.668429 3056 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-n-6dfb6e6787\" not found" Jan 15 23:45:21.722731 kubelet[3056]: I0115 23:45:21.722703 3056 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 15 23:45:21.723865 kubelet[3056]: I0115 23:45:21.723698 3056 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 15 23:45:21.723865 kubelet[3056]: I0115 23:45:21.723720 3056 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 15 23:45:21.723865 kubelet[3056]: I0115 23:45:21.723739 3056 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 15 23:45:21.723865 kubelet[3056]: I0115 23:45:21.723744 3056 kubelet.go:2436] "Starting kubelet main sync loop" Jan 15 23:45:21.723865 kubelet[3056]: E0115 23:45:21.723778 3056 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 15 23:45:21.724882 kubelet[3056]: E0115 23:45:21.724863 3056 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 15 23:45:21.747128 kubelet[3056]: E0115 23:45:21.746325 3056 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-6dfb6e6787?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="400ms" Jan 15 23:45:21.768119 kubelet[3056]: I0115 23:45:21.767776 3056 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:21.768119 kubelet[3056]: E0115 23:45:21.768040 3056 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:21.943687 kubelet[3056]: I0115 23:45:21.943649 3056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4623537fe5ce29429bb45253e43d707c-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-n-6dfb6e6787\" (UID: \"4623537fe5ce29429bb45253e43d707c\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:21.943810 kubelet[3056]: I0115 23:45:21.943708 3056 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4623537fe5ce29429bb45253e43d707c-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-n-6dfb6e6787\" (UID: \"4623537fe5ce29429bb45253e43d707c\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:21.943810 kubelet[3056]: I0115 23:45:21.943726 3056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4623537fe5ce29429bb45253e43d707c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-n-6dfb6e6787\" (UID: \"4623537fe5ce29429bb45253e43d707c\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:21.970620 kubelet[3056]: I0115 23:45:21.970229 3056 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:21.970620 kubelet[3056]: E0115 23:45:21.970525 3056 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:22.147006 kubelet[3056]: E0115 23:45:22.146970 3056 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-6dfb6e6787?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="800ms" Jan 15 23:45:22.372429 kubelet[3056]: I0115 23:45:22.372055 3056 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:22.372429 kubelet[3056]: E0115 23:45:22.372349 3056 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:22.676083 
kubelet[3056]: E0115 23:45:22.486352 3056 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 15 23:45:22.676083 kubelet[3056]: E0115 23:45:22.532988 3056 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-n-6dfb6e6787&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 15 23:45:22.686543 systemd[1]: Created slice kubepods-burstable-pod4623537fe5ce29429bb45253e43d707c.slice - libcontainer container kubepods-burstable-pod4623537fe5ce29429bb45253e43d707c.slice. Jan 15 23:45:22.695305 kubelet[3056]: E0115 23:45:22.695246 3056 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-6dfb6e6787\" not found" node="ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:22.696515 containerd[1903]: time="2026-01-15T23:45:22.696447384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-n-6dfb6e6787,Uid:4623537fe5ce29429bb45253e43d707c,Namespace:kube-system,Attempt:0,}" Jan 15 23:45:22.699559 systemd[1]: Created slice kubepods-burstable-pod3e06ce6326a678568698a906fd5b1227.slice - libcontainer container kubepods-burstable-pod3e06ce6326a678568698a906fd5b1227.slice. 
Jan 15 23:45:22.706507 kubelet[3056]: E0115 23:45:22.706485 3056 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-6dfb6e6787\" not found" node="ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:22.708974 systemd[1]: Created slice kubepods-burstable-pod15215819bb9e4a5d43bc00c843f3a620.slice - libcontainer container kubepods-burstable-pod15215819bb9e4a5d43bc00c843f3a620.slice. Jan 15 23:45:22.710534 kubelet[3056]: E0115 23:45:22.710511 3056 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-6dfb6e6787\" not found" node="ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:22.732122 containerd[1903]: time="2026-01-15T23:45:22.732060005Z" level=info msg="connecting to shim 5eda157ea40d1b4b1d2c83d31e10a2a6e2af7cb276a0a6af07d293aed51570fa" address="unix:///run/containerd/s/3718dcd67fb6b06c385f9592f5382a2c3227eb67c0207506bfba1e7d60c1d0ae" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:45:22.747525 kubelet[3056]: I0115 23:45:22.747506 3056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e06ce6326a678568698a906fd5b1227-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-n-6dfb6e6787\" (UID: \"3e06ce6326a678568698a906fd5b1227\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:22.748054 kubelet[3056]: I0115 23:45:22.747825 3056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e06ce6326a678568698a906fd5b1227-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-6dfb6e6787\" (UID: \"3e06ce6326a678568698a906fd5b1227\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:22.748054 kubelet[3056]: I0115 23:45:22.747848 3056 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3e06ce6326a678568698a906fd5b1227-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-n-6dfb6e6787\" (UID: \"3e06ce6326a678568698a906fd5b1227\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:22.748054 kubelet[3056]: I0115 23:45:22.747859 3056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e06ce6326a678568698a906fd5b1227-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-6dfb6e6787\" (UID: \"3e06ce6326a678568698a906fd5b1227\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:22.747888 systemd[1]: Started cri-containerd-5eda157ea40d1b4b1d2c83d31e10a2a6e2af7cb276a0a6af07d293aed51570fa.scope - libcontainer container 5eda157ea40d1b4b1d2c83d31e10a2a6e2af7cb276a0a6af07d293aed51570fa. Jan 15 23:45:22.748388 kubelet[3056]: I0115 23:45:22.747868 3056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e06ce6326a678568698a906fd5b1227-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-n-6dfb6e6787\" (UID: \"3e06ce6326a678568698a906fd5b1227\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:22.748388 kubelet[3056]: I0115 23:45:22.748286 3056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/15215819bb9e4a5d43bc00c843f3a620-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-n-6dfb6e6787\" (UID: \"15215819bb9e4a5d43bc00c843f3a620\") " pod="kube-system/kube-scheduler-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:22.770961 kubelet[3056]: E0115 23:45:22.770927 3056 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.200.20.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 15 23:45:22.775616 containerd[1903]: time="2026-01-15T23:45:22.775580030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-n-6dfb6e6787,Uid:4623537fe5ce29429bb45253e43d707c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5eda157ea40d1b4b1d2c83d31e10a2a6e2af7cb276a0a6af07d293aed51570fa\"" Jan 15 23:45:22.782904 containerd[1903]: time="2026-01-15T23:45:22.782873724Z" level=info msg="CreateContainer within sandbox \"5eda157ea40d1b4b1d2c83d31e10a2a6e2af7cb276a0a6af07d293aed51570fa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 15 23:45:22.804646 containerd[1903]: time="2026-01-15T23:45:22.803022466Z" level=info msg="Container daa6e4666edc6c8096744a8b61eb2e7839199de032863681b00d0f1930665bcd: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:45:22.817472 containerd[1903]: time="2026-01-15T23:45:22.817408348Z" level=info msg="CreateContainer within sandbox \"5eda157ea40d1b4b1d2c83d31e10a2a6e2af7cb276a0a6af07d293aed51570fa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"daa6e4666edc6c8096744a8b61eb2e7839199de032863681b00d0f1930665bcd\"" Jan 15 23:45:22.818708 containerd[1903]: time="2026-01-15T23:45:22.818106244Z" level=info msg="StartContainer for \"daa6e4666edc6c8096744a8b61eb2e7839199de032863681b00d0f1930665bcd\"" Jan 15 23:45:22.818905 containerd[1903]: time="2026-01-15T23:45:22.818881685Z" level=info msg="connecting to shim daa6e4666edc6c8096744a8b61eb2e7839199de032863681b00d0f1930665bcd" address="unix:///run/containerd/s/3718dcd67fb6b06c385f9592f5382a2c3227eb67c0207506bfba1e7d60c1d0ae" protocol=ttrpc version=3 Jan 15 23:45:22.835737 systemd[1]: Started 
cri-containerd-daa6e4666edc6c8096744a8b61eb2e7839199de032863681b00d0f1930665bcd.scope - libcontainer container daa6e4666edc6c8096744a8b61eb2e7839199de032863681b00d0f1930665bcd. Jan 15 23:45:22.867223 containerd[1903]: time="2026-01-15T23:45:22.867192423Z" level=info msg="StartContainer for \"daa6e4666edc6c8096744a8b61eb2e7839199de032863681b00d0f1930665bcd\" returns successfully" Jan 15 23:45:23.007899 containerd[1903]: time="2026-01-15T23:45:23.007796042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-n-6dfb6e6787,Uid:3e06ce6326a678568698a906fd5b1227,Namespace:kube-system,Attempt:0,}" Jan 15 23:45:23.011354 containerd[1903]: time="2026-01-15T23:45:23.011328131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-n-6dfb6e6787,Uid:15215819bb9e4a5d43bc00c843f3a620,Namespace:kube-system,Attempt:0,}" Jan 15 23:45:23.079695 containerd[1903]: time="2026-01-15T23:45:23.079656522Z" level=info msg="connecting to shim a18dbfc55b2269aee479f53cbd4dc8962d7fb7fe5aedf139ca4d7a0a0f2ab260" address="unix:///run/containerd/s/f49c20ce87e649a9c607b33ce9287814d41cc6595e1e498debab3ad4a4a80443" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:45:23.082083 containerd[1903]: time="2026-01-15T23:45:23.082056694Z" level=info msg="connecting to shim 1dddd6256cb5152cf361c8389f4ec85a8ed1d14e7adeb9c59ac25d5f4cd22602" address="unix:///run/containerd/s/c2e161abe389d99b75dae5c63ba06f76ab412563fa22fe3263dbf6749aa48410" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:45:23.103761 systemd[1]: Started cri-containerd-1dddd6256cb5152cf361c8389f4ec85a8ed1d14e7adeb9c59ac25d5f4cd22602.scope - libcontainer container 1dddd6256cb5152cf361c8389f4ec85a8ed1d14e7adeb9c59ac25d5f4cd22602. Jan 15 23:45:23.108206 systemd[1]: Started cri-containerd-a18dbfc55b2269aee479f53cbd4dc8962d7fb7fe5aedf139ca4d7a0a0f2ab260.scope - libcontainer container a18dbfc55b2269aee479f53cbd4dc8962d7fb7fe5aedf139ca4d7a0a0f2ab260. 
Jan 15 23:45:23.155790 containerd[1903]: time="2026-01-15T23:45:23.155750675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-n-6dfb6e6787,Uid:15215819bb9e4a5d43bc00c843f3a620,Namespace:kube-system,Attempt:0,} returns sandbox id \"a18dbfc55b2269aee479f53cbd4dc8962d7fb7fe5aedf139ca4d7a0a0f2ab260\"" Jan 15 23:45:23.159714 containerd[1903]: time="2026-01-15T23:45:23.159457423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-n-6dfb6e6787,Uid:3e06ce6326a678568698a906fd5b1227,Namespace:kube-system,Attempt:0,} returns sandbox id \"1dddd6256cb5152cf361c8389f4ec85a8ed1d14e7adeb9c59ac25d5f4cd22602\"" Jan 15 23:45:23.166180 containerd[1903]: time="2026-01-15T23:45:23.166135646Z" level=info msg="CreateContainer within sandbox \"a18dbfc55b2269aee479f53cbd4dc8962d7fb7fe5aedf139ca4d7a0a0f2ab260\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 15 23:45:23.171537 containerd[1903]: time="2026-01-15T23:45:23.171501005Z" level=info msg="CreateContainer within sandbox \"1dddd6256cb5152cf361c8389f4ec85a8ed1d14e7adeb9c59ac25d5f4cd22602\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 15 23:45:23.175268 kubelet[3056]: I0115 23:45:23.175238 3056 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:23.192278 containerd[1903]: time="2026-01-15T23:45:23.192247258Z" level=info msg="Container b916f7f46ffc9b1051d947f7f804cce9f46d954c3cbdccd56a4bcf1183c6ed58: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:45:23.200696 containerd[1903]: time="2026-01-15T23:45:23.200664757Z" level=info msg="Container e47c41afd3ba7e01f46cc10572452c17787c7d16ca53c7b4937319e675d21fdd: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:45:23.282734 containerd[1903]: time="2026-01-15T23:45:23.282350185Z" level=info msg="CreateContainer within sandbox \"a18dbfc55b2269aee479f53cbd4dc8962d7fb7fe5aedf139ca4d7a0a0f2ab260\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b916f7f46ffc9b1051d947f7f804cce9f46d954c3cbdccd56a4bcf1183c6ed58\"" Jan 15 23:45:23.283637 containerd[1903]: time="2026-01-15T23:45:23.283474342Z" level=info msg="StartContainer for \"b916f7f46ffc9b1051d947f7f804cce9f46d954c3cbdccd56a4bcf1183c6ed58\"" Jan 15 23:45:23.284426 containerd[1903]: time="2026-01-15T23:45:23.284404577Z" level=info msg="connecting to shim b916f7f46ffc9b1051d947f7f804cce9f46d954c3cbdccd56a4bcf1183c6ed58" address="unix:///run/containerd/s/f49c20ce87e649a9c607b33ce9287814d41cc6595e1e498debab3ad4a4a80443" protocol=ttrpc version=3 Jan 15 23:45:23.297190 containerd[1903]: time="2026-01-15T23:45:23.297147560Z" level=info msg="CreateContainer within sandbox \"1dddd6256cb5152cf361c8389f4ec85a8ed1d14e7adeb9c59ac25d5f4cd22602\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e47c41afd3ba7e01f46cc10572452c17787c7d16ca53c7b4937319e675d21fdd\"" Jan 15 23:45:23.297938 containerd[1903]: time="2026-01-15T23:45:23.297819655Z" level=info msg="StartContainer for \"e47c41afd3ba7e01f46cc10572452c17787c7d16ca53c7b4937319e675d21fdd\"" Jan 15 23:45:23.302527 containerd[1903]: time="2026-01-15T23:45:23.302508159Z" level=info msg="connecting to shim e47c41afd3ba7e01f46cc10572452c17787c7d16ca53c7b4937319e675d21fdd" address="unix:///run/containerd/s/c2e161abe389d99b75dae5c63ba06f76ab412563fa22fe3263dbf6749aa48410" protocol=ttrpc version=3 Jan 15 23:45:23.304771 systemd[1]: Started cri-containerd-b916f7f46ffc9b1051d947f7f804cce9f46d954c3cbdccd56a4bcf1183c6ed58.scope - libcontainer container b916f7f46ffc9b1051d947f7f804cce9f46d954c3cbdccd56a4bcf1183c6ed58. Jan 15 23:45:23.331803 systemd[1]: Started cri-containerd-e47c41afd3ba7e01f46cc10572452c17787c7d16ca53c7b4937319e675d21fdd.scope - libcontainer container e47c41afd3ba7e01f46cc10572452c17787c7d16ca53c7b4937319e675d21fdd. 
Jan 15 23:45:23.365721 containerd[1903]: time="2026-01-15T23:45:23.365686024Z" level=info msg="StartContainer for \"b916f7f46ffc9b1051d947f7f804cce9f46d954c3cbdccd56a4bcf1183c6ed58\" returns successfully" Jan 15 23:45:23.381863 containerd[1903]: time="2026-01-15T23:45:23.381826383Z" level=info msg="StartContainer for \"e47c41afd3ba7e01f46cc10572452c17787c7d16ca53c7b4937319e675d21fdd\" returns successfully" Jan 15 23:45:23.738170 kubelet[3056]: E0115 23:45:23.738138 3056 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-6dfb6e6787\" not found" node="ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:23.739852 kubelet[3056]: E0115 23:45:23.739829 3056 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-6dfb6e6787\" not found" node="ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:23.743311 kubelet[3056]: E0115 23:45:23.743290 3056 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-6dfb6e6787\" not found" node="ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:23.832847 kubelet[3056]: E0115 23:45:23.832811 3056 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.2-n-6dfb6e6787\" not found" node="ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:23.928252 kubelet[3056]: I0115 23:45:23.928218 3056 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:23.928252 kubelet[3056]: E0115 23:45:23.928253 3056 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.2.2-n-6dfb6e6787\": node \"ci-4459.2.2-n-6dfb6e6787\" not found" Jan 15 23:45:23.953856 kubelet[3056]: E0115 23:45:23.953824 3056 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-6dfb6e6787\" not found" Jan 15 23:45:24.055273 kubelet[3056]: E0115 
23:45:24.054907 3056 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-6dfb6e6787\" not found" Jan 15 23:45:24.155527 kubelet[3056]: E0115 23:45:24.155485 3056 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-6dfb6e6787\" not found" Jan 15 23:45:24.256110 kubelet[3056]: E0115 23:45:24.256072 3056 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-6dfb6e6787\" not found" Jan 15 23:45:24.357135 kubelet[3056]: E0115 23:45:24.357103 3056 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-6dfb6e6787\" not found" Jan 15 23:45:24.445726 kubelet[3056]: I0115 23:45:24.445412 3056 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:24.451137 kubelet[3056]: E0115 23:45:24.451114 3056 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-n-6dfb6e6787\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:24.451137 kubelet[3056]: I0115 23:45:24.451137 3056 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:24.452294 kubelet[3056]: E0115 23:45:24.452273 3056 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-n-6dfb6e6787\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:24.452294 kubelet[3056]: I0115 23:45:24.452292 3056 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:24.453469 kubelet[3056]: E0115 23:45:24.453450 3056 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ci-4459.2.2-n-6dfb6e6787\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:24.533102 kubelet[3056]: I0115 23:45:24.533077 3056 apiserver.go:52] "Watching apiserver" Jan 15 23:45:24.543636 kubelet[3056]: I0115 23:45:24.543612 3056 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 15 23:45:24.744241 kubelet[3056]: I0115 23:45:24.743609 3056 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:24.744241 kubelet[3056]: I0115 23:45:24.743705 3056 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:24.744241 kubelet[3056]: I0115 23:45:24.743895 3056 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:24.746470 kubelet[3056]: E0115 23:45:24.746306 3056 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-n-6dfb6e6787\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:24.746470 kubelet[3056]: E0115 23:45:24.746313 3056 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-n-6dfb6e6787\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:24.747481 kubelet[3056]: E0115 23:45:24.747463 3056 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-n-6dfb6e6787\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:26.159152 systemd[1]: Reload requested from client PID 3329 ('systemctl') (unit 
session-9.scope)... Jan 15 23:45:26.159169 systemd[1]: Reloading... Jan 15 23:45:26.247704 zram_generator::config[3379]: No configuration found. Jan 15 23:45:26.408134 systemd[1]: Reloading finished in 248 ms. Jan 15 23:45:26.431507 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 23:45:26.447002 systemd[1]: kubelet.service: Deactivated successfully. Jan 15 23:45:26.447207 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 23:45:26.447247 systemd[1]: kubelet.service: Consumed 673ms CPU time, 126.7M memory peak. Jan 15 23:45:26.451067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 23:45:26.560746 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 23:45:26.568874 (kubelet)[3440]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 23:45:26.597010 kubelet[3440]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 23:45:26.598378 kubelet[3440]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 15 23:45:26.598378 kubelet[3440]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 15 23:45:26.598378 kubelet[3440]: I0115 23:45:26.597369 3440 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 23:45:26.601917 kubelet[3440]: I0115 23:45:26.601888 3440 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 15 23:45:26.601917 kubelet[3440]: I0115 23:45:26.601912 3440 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 23:45:26.602063 kubelet[3440]: I0115 23:45:26.602044 3440 server.go:956] "Client rotation is on, will bootstrap in background" Jan 15 23:45:26.602942 kubelet[3440]: I0115 23:45:26.602924 3440 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 15 23:45:26.604650 kubelet[3440]: I0115 23:45:26.604493 3440 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 23:45:26.609115 kubelet[3440]: I0115 23:45:26.609096 3440 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 15 23:45:26.611631 kubelet[3440]: I0115 23:45:26.611610 3440 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 15 23:45:26.611855 kubelet[3440]: I0115 23:45:26.611828 3440 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 23:45:26.611957 kubelet[3440]: I0115 23:45:26.611852 3440 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-n-6dfb6e6787","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 15 23:45:26.612019 kubelet[3440]: I0115 23:45:26.611963 3440 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 
23:45:26.612019 kubelet[3440]: I0115 23:45:26.611969 3440 container_manager_linux.go:303] "Creating device plugin manager" Jan 15 23:45:26.612019 kubelet[3440]: I0115 23:45:26.612002 3440 state_mem.go:36] "Initialized new in-memory state store" Jan 15 23:45:26.612145 kubelet[3440]: I0115 23:45:26.612131 3440 kubelet.go:480] "Attempting to sync node with API server" Jan 15 23:45:26.612145 kubelet[3440]: I0115 23:45:26.612144 3440 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 23:45:26.612196 kubelet[3440]: I0115 23:45:26.612179 3440 kubelet.go:386] "Adding apiserver pod source" Jan 15 23:45:26.612196 kubelet[3440]: I0115 23:45:26.612187 3440 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 23:45:26.613650 kubelet[3440]: I0115 23:45:26.613210 3440 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 15 23:45:26.613650 kubelet[3440]: I0115 23:45:26.613544 3440 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 15 23:45:26.615028 kubelet[3440]: I0115 23:45:26.615009 3440 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 15 23:45:26.615086 kubelet[3440]: I0115 23:45:26.615041 3440 server.go:1289] "Started kubelet" Jan 15 23:45:26.616756 kubelet[3440]: I0115 23:45:26.616736 3440 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 23:45:26.620264 kubelet[3440]: I0115 23:45:26.619337 3440 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 23:45:26.622139 kubelet[3440]: I0115 23:45:26.621921 3440 server.go:317] "Adding debug handlers to kubelet server" Jan 15 23:45:26.625604 kubelet[3440]: I0115 23:45:26.625566 3440 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 23:45:26.625865 kubelet[3440]: I0115 23:45:26.625852 3440 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 23:45:26.626343 kubelet[3440]: I0115 23:45:26.626324 3440 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 23:45:26.629653 kubelet[3440]: I0115 23:45:26.629484 3440 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 15 23:45:26.630429 kubelet[3440]: E0115 23:45:26.629765 3440 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-6dfb6e6787\" not found" Jan 15 23:45:26.631532 kubelet[3440]: I0115 23:45:26.631278 3440 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 15 23:45:26.631643 kubelet[3440]: I0115 23:45:26.631526 3440 reconciler.go:26] "Reconciler: start to sync state" Jan 15 23:45:26.633449 kubelet[3440]: I0115 23:45:26.633417 3440 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 15 23:45:26.635812 kubelet[3440]: I0115 23:45:26.635525 3440 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 15 23:45:26.635812 kubelet[3440]: I0115 23:45:26.635545 3440 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 15 23:45:26.635812 kubelet[3440]: I0115 23:45:26.635559 3440 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 15 23:45:26.635812 kubelet[3440]: I0115 23:45:26.635564 3440 kubelet.go:2436] "Starting kubelet main sync loop" Jan 15 23:45:26.635812 kubelet[3440]: E0115 23:45:26.635605 3440 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 23:45:26.641973 kubelet[3440]: E0115 23:45:26.641853 3440 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 23:45:26.641973 kubelet[3440]: I0115 23:45:26.641920 3440 factory.go:223] Registration of the systemd container factory successfully Jan 15 23:45:26.642594 kubelet[3440]: I0115 23:45:26.642557 3440 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 23:45:26.645672 kubelet[3440]: I0115 23:45:26.645596 3440 factory.go:223] Registration of the containerd container factory successfully Jan 15 23:45:26.697571 kubelet[3440]: I0115 23:45:26.696767 3440 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 15 23:45:26.698397 kubelet[3440]: I0115 23:45:26.697636 3440 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 15 23:45:26.698397 kubelet[3440]: I0115 23:45:26.697659 3440 state_mem.go:36] "Initialized new in-memory state store" Jan 15 23:45:26.698397 kubelet[3440]: I0115 23:45:26.697766 3440 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 15 23:45:26.698397 kubelet[3440]: I0115 23:45:26.697773 3440 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 15 23:45:26.698397 kubelet[3440]: I0115 23:45:26.697787 3440 policy_none.go:49] "None policy: Start" Jan 15 23:45:26.698397 kubelet[3440]: I0115 23:45:26.697795 3440 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 15 23:45:26.698397 kubelet[3440]: I0115 23:45:26.697802 3440 state_mem.go:35] "Initializing new in-memory state store" Jan 15 23:45:26.698397 kubelet[3440]: I0115 23:45:26.697859 3440 state_mem.go:75] "Updated machine memory state" Jan 15 23:45:26.702194 kubelet[3440]: E0115 23:45:26.702179 3440 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 15 23:45:26.702644 kubelet[3440]: I0115 23:45:26.702612 3440 eviction_manager.go:189] "Eviction 
manager: starting control loop" Jan 15 23:45:26.702785 kubelet[3440]: I0115 23:45:26.702759 3440 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 23:45:26.702994 kubelet[3440]: I0115 23:45:26.702981 3440 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 23:45:26.704845 kubelet[3440]: E0115 23:45:26.704831 3440 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 15 23:45:26.737130 kubelet[3440]: I0115 23:45:26.737105 3440 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:26.737575 kubelet[3440]: I0115 23:45:26.737347 3440 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:26.737711 kubelet[3440]: I0115 23:45:26.737434 3440 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:26.746710 kubelet[3440]: I0115 23:45:26.746693 3440 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 15 23:45:26.750490 kubelet[3440]: I0115 23:45:26.750476 3440 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 15 23:45:26.750989 kubelet[3440]: I0115 23:45:26.750973 3440 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 15 23:45:26.805087 kubelet[3440]: I0115 23:45:26.805042 3440 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:26.817295 kubelet[3440]: 
I0115 23:45:26.817266 3440 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:26.817409 kubelet[3440]: I0115 23:45:26.817338 3440 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:26.832933 kubelet[3440]: I0115 23:45:26.832822 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4623537fe5ce29429bb45253e43d707c-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-n-6dfb6e6787\" (UID: \"4623537fe5ce29429bb45253e43d707c\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:26.832933 kubelet[3440]: I0115 23:45:26.832852 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4623537fe5ce29429bb45253e43d707c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-n-6dfb6e6787\" (UID: \"4623537fe5ce29429bb45253e43d707c\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:26.832933 kubelet[3440]: I0115 23:45:26.832866 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e06ce6326a678568698a906fd5b1227-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-6dfb6e6787\" (UID: \"3e06ce6326a678568698a906fd5b1227\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:26.832933 kubelet[3440]: I0115 23:45:26.832895 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e06ce6326a678568698a906fd5b1227-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-n-6dfb6e6787\" (UID: \"3e06ce6326a678568698a906fd5b1227\") " 
pod="kube-system/kube-controller-manager-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:26.833147 kubelet[3440]: I0115 23:45:26.832923 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/15215819bb9e4a5d43bc00c843f3a620-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-n-6dfb6e6787\" (UID: \"15215819bb9e4a5d43bc00c843f3a620\") " pod="kube-system/kube-scheduler-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:26.833147 kubelet[3440]: I0115 23:45:26.833104 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3e06ce6326a678568698a906fd5b1227-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-n-6dfb6e6787\" (UID: \"3e06ce6326a678568698a906fd5b1227\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:26.833147 kubelet[3440]: I0115 23:45:26.833119 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e06ce6326a678568698a906fd5b1227-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-6dfb6e6787\" (UID: \"3e06ce6326a678568698a906fd5b1227\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:26.833147 kubelet[3440]: I0115 23:45:26.833128 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e06ce6326a678568698a906fd5b1227-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-n-6dfb6e6787\" (UID: \"3e06ce6326a678568698a906fd5b1227\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:26.833300 kubelet[3440]: I0115 23:45:26.833137 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/4623537fe5ce29429bb45253e43d707c-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-n-6dfb6e6787\" (UID: \"4623537fe5ce29429bb45253e43d707c\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:27.192433 sudo[3476]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 15 23:45:27.192684 sudo[3476]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 15 23:45:27.423532 sudo[3476]: pam_unix(sudo:session): session closed for user root Jan 15 23:45:27.613294 kubelet[3440]: I0115 23:45:27.613170 3440 apiserver.go:52] "Watching apiserver" Jan 15 23:45:27.632053 kubelet[3440]: I0115 23:45:27.632015 3440 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 15 23:45:27.673644 kubelet[3440]: I0115 23:45:27.673485 3440 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:27.681779 kubelet[3440]: I0115 23:45:27.681758 3440 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 15 23:45:27.681860 kubelet[3440]: E0115 23:45:27.681803 3440 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-n-6dfb6e6787\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-n-6dfb6e6787" Jan 15 23:45:27.702581 kubelet[3440]: I0115 23:45:27.702457 3440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.2-n-6dfb6e6787" podStartSLOduration=1.702444633 podStartE2EDuration="1.702444633s" podCreationTimestamp="2026-01-15 23:45:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:45:27.69092671 +0000 UTC m=+1.118034885" watchObservedRunningTime="2026-01-15 
23:45:27.702444633 +0000 UTC m=+1.129552808" Jan 15 23:45:27.712478 kubelet[3440]: I0115 23:45:27.712053 3440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.2-n-6dfb6e6787" podStartSLOduration=1.712045603 podStartE2EDuration="1.712045603s" podCreationTimestamp="2026-01-15 23:45:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:45:27.702748708 +0000 UTC m=+1.129856883" watchObservedRunningTime="2026-01-15 23:45:27.712045603 +0000 UTC m=+1.139153778" Jan 15 23:45:27.712478 kubelet[3440]: I0115 23:45:27.712110 3440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-6dfb6e6787" podStartSLOduration=1.7121074680000001 podStartE2EDuration="1.712107468s" podCreationTimestamp="2026-01-15 23:45:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:45:27.711721272 +0000 UTC m=+1.138829471" watchObservedRunningTime="2026-01-15 23:45:27.712107468 +0000 UTC m=+1.139215651" Jan 15 23:45:28.994141 sudo[2409]: pam_unix(sudo:session): session closed for user root Jan 15 23:45:29.074683 sshd[2408]: Connection closed by 10.200.16.10 port 55524 Jan 15 23:45:29.075184 sshd-session[2405]: pam_unix(sshd:session): session closed for user core Jan 15 23:45:29.079021 systemd[1]: sshd@6-10.200.20.15:22-10.200.16.10:55524.service: Deactivated successfully. Jan 15 23:45:29.080919 systemd[1]: session-9.scope: Deactivated successfully. Jan 15 23:45:29.081139 systemd[1]: session-9.scope: Consumed 4.840s CPU time, 265M memory peak. Jan 15 23:45:29.082452 systemd-logind[1880]: Session 9 logged out. Waiting for processes to exit. Jan 15 23:45:29.084068 systemd-logind[1880]: Removed session 9. 
Jan 15 23:45:33.589386 kubelet[3440]: I0115 23:45:33.589322 3440 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 15 23:45:33.590641 containerd[1903]: time="2026-01-15T23:45:33.589985822Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 15 23:45:33.590850 kubelet[3440]: I0115 23:45:33.590152 3440 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 15 23:45:34.701606 systemd[1]: Created slice kubepods-besteffort-pod9d7ed3d3_b7c1_4b5c_bb7a_ddecae270e26.slice - libcontainer container kubepods-besteffort-pod9d7ed3d3_b7c1_4b5c_bb7a_ddecae270e26.slice. Jan 15 23:45:34.716942 systemd[1]: Created slice kubepods-burstable-pod37aa14db_936c_4a40_928c_4e00cd92b33f.slice - libcontainer container kubepods-burstable-pod37aa14db_936c_4a40_928c_4e00cd92b33f.slice. Jan 15 23:45:34.776918 kubelet[3440]: I0115 23:45:34.776865 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9d7ed3d3-b7c1-4b5c-bb7a-ddecae270e26-kube-proxy\") pod \"kube-proxy-xzhgq\" (UID: \"9d7ed3d3-b7c1-4b5c-bb7a-ddecae270e26\") " pod="kube-system/kube-proxy-xzhgq" Jan 15 23:45:34.776918 kubelet[3440]: I0115 23:45:34.776899 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d7ed3d3-b7c1-4b5c-bb7a-ddecae270e26-lib-modules\") pod \"kube-proxy-xzhgq\" (UID: \"9d7ed3d3-b7c1-4b5c-bb7a-ddecae270e26\") " pod="kube-system/kube-proxy-xzhgq" Jan 15 23:45:34.776918 kubelet[3440]: I0115 23:45:34.776912 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-cni-path\") pod \"cilium-m6m5h\" (UID: 
\"37aa14db-936c-4a40-928c-4e00cd92b33f\") " pod="kube-system/cilium-m6m5h" Jan 15 23:45:34.777540 kubelet[3440]: I0115 23:45:34.776930 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d7ed3d3-b7c1-4b5c-bb7a-ddecae270e26-xtables-lock\") pod \"kube-proxy-xzhgq\" (UID: \"9d7ed3d3-b7c1-4b5c-bb7a-ddecae270e26\") " pod="kube-system/kube-proxy-xzhgq" Jan 15 23:45:34.777540 kubelet[3440]: I0115 23:45:34.776940 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-cilium-run\") pod \"cilium-m6m5h\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") " pod="kube-system/cilium-m6m5h" Jan 15 23:45:34.777540 kubelet[3440]: I0115 23:45:34.776948 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-hostproc\") pod \"cilium-m6m5h\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") " pod="kube-system/cilium-m6m5h" Jan 15 23:45:34.777540 kubelet[3440]: I0115 23:45:34.776956 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-lib-modules\") pod \"cilium-m6m5h\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") " pod="kube-system/cilium-m6m5h" Jan 15 23:45:34.777540 kubelet[3440]: I0115 23:45:34.776965 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-xtables-lock\") pod \"cilium-m6m5h\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") " pod="kube-system/cilium-m6m5h" Jan 15 23:45:34.777540 kubelet[3440]: I0115 23:45:34.776975 3440 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-host-proc-sys-kernel\") pod \"cilium-m6m5h\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") " pod="kube-system/cilium-m6m5h" Jan 15 23:45:34.777670 kubelet[3440]: I0115 23:45:34.776984 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/37aa14db-936c-4a40-928c-4e00cd92b33f-hubble-tls\") pod \"cilium-m6m5h\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") " pod="kube-system/cilium-m6m5h" Jan 15 23:45:34.777670 kubelet[3440]: I0115 23:45:34.776996 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9cms\" (UniqueName: \"kubernetes.io/projected/37aa14db-936c-4a40-928c-4e00cd92b33f-kube-api-access-w9cms\") pod \"cilium-m6m5h\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") " pod="kube-system/cilium-m6m5h" Jan 15 23:45:34.777670 kubelet[3440]: I0115 23:45:34.777015 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfvvg\" (UniqueName: \"kubernetes.io/projected/9d7ed3d3-b7c1-4b5c-bb7a-ddecae270e26-kube-api-access-jfvvg\") pod \"kube-proxy-xzhgq\" (UID: \"9d7ed3d3-b7c1-4b5c-bb7a-ddecae270e26\") " pod="kube-system/kube-proxy-xzhgq" Jan 15 23:45:34.777670 kubelet[3440]: I0115 23:45:34.777024 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-cilium-cgroup\") pod \"cilium-m6m5h\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") " pod="kube-system/cilium-m6m5h" Jan 15 23:45:34.777670 kubelet[3440]: I0115 23:45:34.777055 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37aa14db-936c-4a40-928c-4e00cd92b33f-cilium-config-path\") pod \"cilium-m6m5h\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") " pod="kube-system/cilium-m6m5h" Jan 15 23:45:34.777743 kubelet[3440]: I0115 23:45:34.777066 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-bpf-maps\") pod \"cilium-m6m5h\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") " pod="kube-system/cilium-m6m5h" Jan 15 23:45:34.777743 kubelet[3440]: I0115 23:45:34.777084 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-etc-cni-netd\") pod \"cilium-m6m5h\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") " pod="kube-system/cilium-m6m5h" Jan 15 23:45:34.777743 kubelet[3440]: I0115 23:45:34.777116 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/37aa14db-936c-4a40-928c-4e00cd92b33f-clustermesh-secrets\") pod \"cilium-m6m5h\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") " pod="kube-system/cilium-m6m5h" Jan 15 23:45:34.777743 kubelet[3440]: I0115 23:45:34.777151 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-host-proc-sys-net\") pod \"cilium-m6m5h\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") " pod="kube-system/cilium-m6m5h" Jan 15 23:45:34.798541 systemd[1]: Created slice kubepods-besteffort-podd513465d_02c8_4ae8_9542_a66946be2e56.slice - libcontainer container kubepods-besteffort-podd513465d_02c8_4ae8_9542_a66946be2e56.slice. 
Jan 15 23:45:34.879681 kubelet[3440]: I0115 23:45:34.877903 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k87z\" (UniqueName: \"kubernetes.io/projected/d513465d-02c8-4ae8-9542-a66946be2e56-kube-api-access-5k87z\") pod \"cilium-operator-6c4d7847fc-xrtdw\" (UID: \"d513465d-02c8-4ae8-9542-a66946be2e56\") " pod="kube-system/cilium-operator-6c4d7847fc-xrtdw" Jan 15 23:45:34.879859 kubelet[3440]: I0115 23:45:34.879840 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d513465d-02c8-4ae8-9542-a66946be2e56-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xrtdw\" (UID: \"d513465d-02c8-4ae8-9542-a66946be2e56\") " pod="kube-system/cilium-operator-6c4d7847fc-xrtdw" Jan 15 23:45:35.012578 containerd[1903]: time="2026-01-15T23:45:35.012485920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xzhgq,Uid:9d7ed3d3-b7c1-4b5c-bb7a-ddecae270e26,Namespace:kube-system,Attempt:0,}" Jan 15 23:45:35.020923 containerd[1903]: time="2026-01-15T23:45:35.020895564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m6m5h,Uid:37aa14db-936c-4a40-928c-4e00cd92b33f,Namespace:kube-system,Attempt:0,}" Jan 15 23:45:35.069379 containerd[1903]: time="2026-01-15T23:45:35.069346185Z" level=info msg="connecting to shim e1f73c1c776ab66cba26c2bd527221c171d6f4f1c7603a192bffdddf6b94e08d" address="unix:///run/containerd/s/2143b7999e36042999d40467d060d2125e2f2f163968a10ed56960443448577d" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:45:35.084778 systemd[1]: Started cri-containerd-e1f73c1c776ab66cba26c2bd527221c171d6f4f1c7603a192bffdddf6b94e08d.scope - libcontainer container e1f73c1c776ab66cba26c2bd527221c171d6f4f1c7603a192bffdddf6b94e08d. 
Jan 15 23:45:35.089847 containerd[1903]: time="2026-01-15T23:45:35.089597871Z" level=info msg="connecting to shim 2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd" address="unix:///run/containerd/s/8c599edff5391f9d07d9f6887f21b0789814f745b83f2bad9e57ce2adc00383a" namespace=k8s.io protocol=ttrpc version=3
Jan 15 23:45:35.102953 containerd[1903]: time="2026-01-15T23:45:35.102931922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xrtdw,Uid:d513465d-02c8-4ae8-9542-a66946be2e56,Namespace:kube-system,Attempt:0,}"
Jan 15 23:45:35.108847 systemd[1]: Started cri-containerd-2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd.scope - libcontainer container 2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd.
Jan 15 23:45:35.119902 containerd[1903]: time="2026-01-15T23:45:35.119838043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xzhgq,Uid:9d7ed3d3-b7c1-4b5c-bb7a-ddecae270e26,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1f73c1c776ab66cba26c2bd527221c171d6f4f1c7603a192bffdddf6b94e08d\""
Jan 15 23:45:35.128671 containerd[1903]: time="2026-01-15T23:45:35.128114190Z" level=info msg="CreateContainer within sandbox \"e1f73c1c776ab66cba26c2bd527221c171d6f4f1c7603a192bffdddf6b94e08d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 15 23:45:35.143024 containerd[1903]: time="2026-01-15T23:45:35.142999994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m6m5h,Uid:37aa14db-936c-4a40-928c-4e00cd92b33f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd\""
Jan 15 23:45:35.144596 containerd[1903]: time="2026-01-15T23:45:35.144565843Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 15 23:45:35.166913 containerd[1903]: time="2026-01-15T23:45:35.166890056Z" level=info msg="Container 1b2966cf1eb937da73b49958cbf8b340b2f95a043cab2ba545c611a873866876: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:45:35.175638 containerd[1903]: time="2026-01-15T23:45:35.175596216Z" level=info msg="connecting to shim e607d402df4ab9b0953730ce03082364fca56d1fbadcdee882d30adbcceb2ecc" address="unix:///run/containerd/s/4a8a5f09282fe579c68ad4509ac833f1809010ce6d3f28143b97daba61a7c2be" namespace=k8s.io protocol=ttrpc version=3
Jan 15 23:45:35.188785 containerd[1903]: time="2026-01-15T23:45:35.188758129Z" level=info msg="CreateContainer within sandbox \"e1f73c1c776ab66cba26c2bd527221c171d6f4f1c7603a192bffdddf6b94e08d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1b2966cf1eb937da73b49958cbf8b340b2f95a043cab2ba545c611a873866876\""
Jan 15 23:45:35.193276 containerd[1903]: time="2026-01-15T23:45:35.193091376Z" level=info msg="StartContainer for \"1b2966cf1eb937da73b49958cbf8b340b2f95a043cab2ba545c611a873866876\""
Jan 15 23:45:35.193868 systemd[1]: Started cri-containerd-e607d402df4ab9b0953730ce03082364fca56d1fbadcdee882d30adbcceb2ecc.scope - libcontainer container e607d402df4ab9b0953730ce03082364fca56d1fbadcdee882d30adbcceb2ecc.
Jan 15 23:45:35.196334 containerd[1903]: time="2026-01-15T23:45:35.196298803Z" level=info msg="connecting to shim 1b2966cf1eb937da73b49958cbf8b340b2f95a043cab2ba545c611a873866876" address="unix:///run/containerd/s/2143b7999e36042999d40467d060d2125e2f2f163968a10ed56960443448577d" protocol=ttrpc version=3
Jan 15 23:45:35.217761 systemd[1]: Started cri-containerd-1b2966cf1eb937da73b49958cbf8b340b2f95a043cab2ba545c611a873866876.scope - libcontainer container 1b2966cf1eb937da73b49958cbf8b340b2f95a043cab2ba545c611a873866876.
Jan 15 23:45:35.228008 containerd[1903]: time="2026-01-15T23:45:35.227977687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xrtdw,Uid:d513465d-02c8-4ae8-9542-a66946be2e56,Namespace:kube-system,Attempt:0,} returns sandbox id \"e607d402df4ab9b0953730ce03082364fca56d1fbadcdee882d30adbcceb2ecc\""
Jan 15 23:45:35.282112 containerd[1903]: time="2026-01-15T23:45:35.282018113Z" level=info msg="StartContainer for \"1b2966cf1eb937da73b49958cbf8b340b2f95a043cab2ba545c611a873866876\" returns successfully"
Jan 15 23:45:35.700366 kubelet[3440]: I0115 23:45:35.699402 3440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xzhgq" podStartSLOduration=1.6993886900000001 podStartE2EDuration="1.69938869s" podCreationTimestamp="2026-01-15 23:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:45:35.699225416 +0000 UTC m=+9.126333599" watchObservedRunningTime="2026-01-15 23:45:35.69938869 +0000 UTC m=+9.126496865"
Jan 15 23:45:47.194954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1980229634.mount: Deactivated successfully.
Jan 15 23:45:48.503653 containerd[1903]: time="2026-01-15T23:45:48.503337515Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:45:48.512543 containerd[1903]: time="2026-01-15T23:45:48.512509188Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jan 15 23:45:48.561651 containerd[1903]: time="2026-01-15T23:45:48.561563728Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:45:48.565311 containerd[1903]: time="2026-01-15T23:45:48.565228787Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 13.420463918s"
Jan 15 23:45:48.565311 containerd[1903]: time="2026-01-15T23:45:48.565256116Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jan 15 23:45:48.566558 containerd[1903]: time="2026-01-15T23:45:48.566379007Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 15 23:45:48.611767 containerd[1903]: time="2026-01-15T23:45:48.611744687Z" level=info msg="CreateContainer within sandbox \"2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 15 23:45:48.634903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2678132909.mount: Deactivated successfully.
Jan 15 23:45:48.638230 containerd[1903]: time="2026-01-15T23:45:48.638160584Z" level=info msg="Container 774fb093e064a9fa61a88896a3c4050a956df197f108fe81bcaede29a64dec2e: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:45:48.654307 containerd[1903]: time="2026-01-15T23:45:48.654240140Z" level=info msg="CreateContainer within sandbox \"2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"774fb093e064a9fa61a88896a3c4050a956df197f108fe81bcaede29a64dec2e\""
Jan 15 23:45:48.655508 containerd[1903]: time="2026-01-15T23:45:48.654988603Z" level=info msg="StartContainer for \"774fb093e064a9fa61a88896a3c4050a956df197f108fe81bcaede29a64dec2e\""
Jan 15 23:45:48.656699 containerd[1903]: time="2026-01-15T23:45:48.656617219Z" level=info msg="connecting to shim 774fb093e064a9fa61a88896a3c4050a956df197f108fe81bcaede29a64dec2e" address="unix:///run/containerd/s/8c599edff5391f9d07d9f6887f21b0789814f745b83f2bad9e57ce2adc00383a" protocol=ttrpc version=3
Jan 15 23:45:48.677808 systemd[1]: Started cri-containerd-774fb093e064a9fa61a88896a3c4050a956df197f108fe81bcaede29a64dec2e.scope - libcontainer container 774fb093e064a9fa61a88896a3c4050a956df197f108fe81bcaede29a64dec2e.
Jan 15 23:45:48.706011 systemd[1]: cri-containerd-774fb093e064a9fa61a88896a3c4050a956df197f108fe81bcaede29a64dec2e.scope: Deactivated successfully.
Jan 15 23:45:48.708605 containerd[1903]: time="2026-01-15T23:45:48.706457847Z" level=info msg="StartContainer for \"774fb093e064a9fa61a88896a3c4050a956df197f108fe81bcaede29a64dec2e\" returns successfully"
Jan 15 23:45:48.710297 containerd[1903]: time="2026-01-15T23:45:48.710121706Z" level=info msg="received container exit event container_id:\"774fb093e064a9fa61a88896a3c4050a956df197f108fe81bcaede29a64dec2e\" id:\"774fb093e064a9fa61a88896a3c4050a956df197f108fe81bcaede29a64dec2e\" pid:3862 exited_at:{seconds:1768520748 nanos:709519028}"
Jan 15 23:45:49.633422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-774fb093e064a9fa61a88896a3c4050a956df197f108fe81bcaede29a64dec2e-rootfs.mount: Deactivated successfully.
Jan 15 23:45:50.723880 containerd[1903]: time="2026-01-15T23:45:50.723838242Z" level=info msg="CreateContainer within sandbox \"2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 15 23:45:50.744928 containerd[1903]: time="2026-01-15T23:45:50.743816172Z" level=info msg="Container 8c95d50088e6cb7b79beeebf1b8fdc592f3bd8ec1322400b195ccddd052201fc: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:45:50.745869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1935511057.mount: Deactivated successfully.
Jan 15 23:45:50.756652 containerd[1903]: time="2026-01-15T23:45:50.756561416Z" level=info msg="CreateContainer within sandbox \"2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8c95d50088e6cb7b79beeebf1b8fdc592f3bd8ec1322400b195ccddd052201fc\""
Jan 15 23:45:50.757307 containerd[1903]: time="2026-01-15T23:45:50.757170662Z" level=info msg="StartContainer for \"8c95d50088e6cb7b79beeebf1b8fdc592f3bd8ec1322400b195ccddd052201fc\""
Jan 15 23:45:50.757921 containerd[1903]: time="2026-01-15T23:45:50.757890637Z" level=info msg="connecting to shim 8c95d50088e6cb7b79beeebf1b8fdc592f3bd8ec1322400b195ccddd052201fc" address="unix:///run/containerd/s/8c599edff5391f9d07d9f6887f21b0789814f745b83f2bad9e57ce2adc00383a" protocol=ttrpc version=3
Jan 15 23:45:50.775742 systemd[1]: Started cri-containerd-8c95d50088e6cb7b79beeebf1b8fdc592f3bd8ec1322400b195ccddd052201fc.scope - libcontainer container 8c95d50088e6cb7b79beeebf1b8fdc592f3bd8ec1322400b195ccddd052201fc.
Jan 15 23:45:50.799694 containerd[1903]: time="2026-01-15T23:45:50.799653090Z" level=info msg="StartContainer for \"8c95d50088e6cb7b79beeebf1b8fdc592f3bd8ec1322400b195ccddd052201fc\" returns successfully"
Jan 15 23:45:50.808053 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 15 23:45:50.808385 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 15 23:45:50.809769 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 15 23:45:50.810941 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 15 23:45:50.812521 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 15 23:45:50.815945 systemd[1]: cri-containerd-8c95d50088e6cb7b79beeebf1b8fdc592f3bd8ec1322400b195ccddd052201fc.scope: Deactivated successfully.
Jan 15 23:45:50.817195 containerd[1903]: time="2026-01-15T23:45:50.817104971Z" level=info msg="received container exit event container_id:\"8c95d50088e6cb7b79beeebf1b8fdc592f3bd8ec1322400b195ccddd052201fc\" id:\"8c95d50088e6cb7b79beeebf1b8fdc592f3bd8ec1322400b195ccddd052201fc\" pid:3908 exited_at:{seconds:1768520750 nanos:816974098}"
Jan 15 23:45:50.833780 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 15 23:45:51.725704 containerd[1903]: time="2026-01-15T23:45:51.724317220Z" level=info msg="CreateContainer within sandbox \"2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 15 23:45:51.743976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c95d50088e6cb7b79beeebf1b8fdc592f3bd8ec1322400b195ccddd052201fc-rootfs.mount: Deactivated successfully.
Jan 15 23:45:51.764752 containerd[1903]: time="2026-01-15T23:45:51.762598592Z" level=info msg="Container 486fe47066f127166f6fbdc896331d25206efc6fe52d2326867c415fba728646: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:45:51.779442 containerd[1903]: time="2026-01-15T23:45:51.779402635Z" level=info msg="CreateContainer within sandbox \"2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"486fe47066f127166f6fbdc896331d25206efc6fe52d2326867c415fba728646\""
Jan 15 23:45:51.780413 containerd[1903]: time="2026-01-15T23:45:51.780380916Z" level=info msg="StartContainer for \"486fe47066f127166f6fbdc896331d25206efc6fe52d2326867c415fba728646\""
Jan 15 23:45:51.781492 containerd[1903]: time="2026-01-15T23:45:51.781454943Z" level=info msg="connecting to shim 486fe47066f127166f6fbdc896331d25206efc6fe52d2326867c415fba728646" address="unix:///run/containerd/s/8c599edff5391f9d07d9f6887f21b0789814f745b83f2bad9e57ce2adc00383a" protocol=ttrpc version=3
Jan 15 23:45:51.802757 systemd[1]: Started cri-containerd-486fe47066f127166f6fbdc896331d25206efc6fe52d2326867c415fba728646.scope - libcontainer container 486fe47066f127166f6fbdc896331d25206efc6fe52d2326867c415fba728646.
Jan 15 23:45:51.860729 systemd[1]: cri-containerd-486fe47066f127166f6fbdc896331d25206efc6fe52d2326867c415fba728646.scope: Deactivated successfully.
Jan 15 23:45:51.863794 containerd[1903]: time="2026-01-15T23:45:51.863757470Z" level=info msg="received container exit event container_id:\"486fe47066f127166f6fbdc896331d25206efc6fe52d2326867c415fba728646\" id:\"486fe47066f127166f6fbdc896331d25206efc6fe52d2326867c415fba728646\" pid:3955 exited_at:{seconds:1768520751 nanos:861543816}"
Jan 15 23:45:51.870988 containerd[1903]: time="2026-01-15T23:45:51.870958148Z" level=info msg="StartContainer for \"486fe47066f127166f6fbdc896331d25206efc6fe52d2326867c415fba728646\" returns successfully"
Jan 15 23:45:51.881762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-486fe47066f127166f6fbdc896331d25206efc6fe52d2326867c415fba728646-rootfs.mount: Deactivated successfully.
Jan 15 23:45:52.729118 containerd[1903]: time="2026-01-15T23:45:52.729069655Z" level=info msg="CreateContainer within sandbox \"2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 15 23:45:52.743827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount72024670.mount: Deactivated successfully.
Jan 15 23:45:52.754037 containerd[1903]: time="2026-01-15T23:45:52.753450668Z" level=info msg="Container fe601936c3f11246267e3a51a52c55104c25d6909f45f88a723b4371f2a0d95b: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:45:52.770128 containerd[1903]: time="2026-01-15T23:45:52.770077573Z" level=info msg="CreateContainer within sandbox \"2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fe601936c3f11246267e3a51a52c55104c25d6909f45f88a723b4371f2a0d95b\""
Jan 15 23:45:52.770813 containerd[1903]: time="2026-01-15T23:45:52.770761828Z" level=info msg="StartContainer for \"fe601936c3f11246267e3a51a52c55104c25d6909f45f88a723b4371f2a0d95b\""
Jan 15 23:45:52.772611 containerd[1903]: time="2026-01-15T23:45:52.772541181Z" level=info msg="connecting to shim fe601936c3f11246267e3a51a52c55104c25d6909f45f88a723b4371f2a0d95b" address="unix:///run/containerd/s/8c599edff5391f9d07d9f6887f21b0789814f745b83f2bad9e57ce2adc00383a" protocol=ttrpc version=3
Jan 15 23:45:52.795757 systemd[1]: Started cri-containerd-fe601936c3f11246267e3a51a52c55104c25d6909f45f88a723b4371f2a0d95b.scope - libcontainer container fe601936c3f11246267e3a51a52c55104c25d6909f45f88a723b4371f2a0d95b.
Jan 15 23:45:52.817524 systemd[1]: cri-containerd-fe601936c3f11246267e3a51a52c55104c25d6909f45f88a723b4371f2a0d95b.scope: Deactivated successfully.
Jan 15 23:45:52.823337 containerd[1903]: time="2026-01-15T23:45:52.823237849Z" level=info msg="received container exit event container_id:\"fe601936c3f11246267e3a51a52c55104c25d6909f45f88a723b4371f2a0d95b\" id:\"fe601936c3f11246267e3a51a52c55104c25d6909f45f88a723b4371f2a0d95b\" pid:4002 exited_at:{seconds:1768520752 nanos:817722388}"
Jan 15 23:45:52.830137 containerd[1903]: time="2026-01-15T23:45:52.830107988Z" level=info msg="StartContainer for \"fe601936c3f11246267e3a51a52c55104c25d6909f45f88a723b4371f2a0d95b\" returns successfully"
Jan 15 23:45:53.733409 containerd[1903]: time="2026-01-15T23:45:53.733364205Z" level=info msg="CreateContainer within sandbox \"2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 15 23:45:53.743923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe601936c3f11246267e3a51a52c55104c25d6909f45f88a723b4371f2a0d95b-rootfs.mount: Deactivated successfully.
Jan 15 23:45:53.755595 containerd[1903]: time="2026-01-15T23:45:53.754609947Z" level=info msg="Container 7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:45:53.769390 containerd[1903]: time="2026-01-15T23:45:53.769358539Z" level=info msg="CreateContainer within sandbox \"2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1\""
Jan 15 23:45:53.769970 containerd[1903]: time="2026-01-15T23:45:53.769779335Z" level=info msg="StartContainer for \"7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1\""
Jan 15 23:45:53.770816 containerd[1903]: time="2026-01-15T23:45:53.770796761Z" level=info msg="connecting to shim 7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1" address="unix:///run/containerd/s/8c599edff5391f9d07d9f6887f21b0789814f745b83f2bad9e57ce2adc00383a" protocol=ttrpc version=3
Jan 15 23:45:53.788839 systemd[1]: Started cri-containerd-7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1.scope - libcontainer container 7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1.
Jan 15 23:45:53.827677 containerd[1903]: time="2026-01-15T23:45:53.827650889Z" level=info msg="StartContainer for \"7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1\" returns successfully"
Jan 15 23:45:53.906010 kubelet[3440]: I0115 23:45:53.905942 3440 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 15 23:45:53.953880 systemd[1]: Created slice kubepods-burstable-poded2a3b68_028f_4b21_a319_bad4118a5cbb.slice - libcontainer container kubepods-burstable-poded2a3b68_028f_4b21_a319_bad4118a5cbb.slice.
Jan 15 23:45:53.961169 systemd[1]: Created slice kubepods-burstable-pod366f31a9_f256_4a1c_ab2f_6638780ea94c.slice - libcontainer container kubepods-burstable-pod366f31a9_f256_4a1c_ab2f_6638780ea94c.slice.
Jan 15 23:45:53.989633 kubelet[3440]: I0115 23:45:53.989455 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp9wm\" (UniqueName: \"kubernetes.io/projected/ed2a3b68-028f-4b21-a319-bad4118a5cbb-kube-api-access-gp9wm\") pod \"coredns-674b8bbfcf-575kw\" (UID: \"ed2a3b68-028f-4b21-a319-bad4118a5cbb\") " pod="kube-system/coredns-674b8bbfcf-575kw"
Jan 15 23:45:53.989633 kubelet[3440]: I0115 23:45:53.989496 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed2a3b68-028f-4b21-a319-bad4118a5cbb-config-volume\") pod \"coredns-674b8bbfcf-575kw\" (UID: \"ed2a3b68-028f-4b21-a319-bad4118a5cbb\") " pod="kube-system/coredns-674b8bbfcf-575kw"
Jan 15 23:45:53.989633 kubelet[3440]: I0115 23:45:53.989510 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/366f31a9-f256-4a1c-ab2f-6638780ea94c-config-volume\") pod \"coredns-674b8bbfcf-pswb5\" (UID: \"366f31a9-f256-4a1c-ab2f-6638780ea94c\") " pod="kube-system/coredns-674b8bbfcf-pswb5"
Jan 15 23:45:53.989633 kubelet[3440]: I0115 23:45:53.989520 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47q7s\" (UniqueName: \"kubernetes.io/projected/366f31a9-f256-4a1c-ab2f-6638780ea94c-kube-api-access-47q7s\") pod \"coredns-674b8bbfcf-pswb5\" (UID: \"366f31a9-f256-4a1c-ab2f-6638780ea94c\") " pod="kube-system/coredns-674b8bbfcf-pswb5"
Jan 15 23:45:54.259730 containerd[1903]: time="2026-01-15T23:45:54.259592714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-575kw,Uid:ed2a3b68-028f-4b21-a319-bad4118a5cbb,Namespace:kube-system,Attempt:0,}"
Jan 15 23:45:54.264780 containerd[1903]: time="2026-01-15T23:45:54.264744700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pswb5,Uid:366f31a9-f256-4a1c-ab2f-6638780ea94c,Namespace:kube-system,Attempt:0,}"
Jan 15 23:45:54.541727 containerd[1903]: time="2026-01-15T23:45:54.541406115Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:45:54.545338 containerd[1903]: time="2026-01-15T23:45:54.545307864Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jan 15 23:45:54.548235 containerd[1903]: time="2026-01-15T23:45:54.548205381Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:45:54.549346 containerd[1903]: time="2026-01-15T23:45:54.549317711Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 5.98291396s"
Jan 15 23:45:54.549375 containerd[1903]: time="2026-01-15T23:45:54.549351216Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jan 15 23:45:54.556373 containerd[1903]: time="2026-01-15T23:45:54.556347516Z" level=info msg="CreateContainer within sandbox \"e607d402df4ab9b0953730ce03082364fca56d1fbadcdee882d30adbcceb2ecc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 15 23:45:54.570683 containerd[1903]: time="2026-01-15T23:45:54.570656687Z" level=info msg="Container 2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:45:54.592728 containerd[1903]: time="2026-01-15T23:45:54.592698093Z" level=info msg="CreateContainer within sandbox \"e607d402df4ab9b0953730ce03082364fca56d1fbadcdee882d30adbcceb2ecc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9\""
Jan 15 23:45:54.593987 containerd[1903]: time="2026-01-15T23:45:54.593299170Z" level=info msg="StartContainer for \"2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9\""
Jan 15 23:45:54.594124 containerd[1903]: time="2026-01-15T23:45:54.594101186Z" level=info msg="connecting to shim 2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9" address="unix:///run/containerd/s/4a8a5f09282fe579c68ad4509ac833f1809010ce6d3f28143b97daba61a7c2be" protocol=ttrpc version=3
Jan 15 23:45:54.609754 systemd[1]: Started cri-containerd-2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9.scope - libcontainer container 2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9.
Jan 15 23:45:54.635540 containerd[1903]: time="2026-01-15T23:45:54.635509908Z" level=info msg="StartContainer for \"2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9\" returns successfully"
Jan 15 23:45:54.808426 kubelet[3440]: I0115 23:45:54.808215 3440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xrtdw" podStartSLOduration=1.487335079 podStartE2EDuration="20.808191961s" podCreationTimestamp="2026-01-15 23:45:34 +0000 UTC" firstStartedPulling="2026-01-15 23:45:35.229149156 +0000 UTC m=+8.656257331" lastFinishedPulling="2026-01-15 23:45:54.55000603 +0000 UTC m=+27.977114213" observedRunningTime="2026-01-15 23:45:54.774814021 +0000 UTC m=+28.201922196" watchObservedRunningTime="2026-01-15 23:45:54.808191961 +0000 UTC m=+28.235300136"
Jan 15 23:45:58.689740 systemd-networkd[1483]: cilium_host: Link UP
Jan 15 23:45:58.690935 systemd-networkd[1483]: cilium_net: Link UP
Jan 15 23:45:58.691830 systemd-networkd[1483]: cilium_net: Gained carrier
Jan 15 23:45:58.692513 systemd-networkd[1483]: cilium_host: Gained carrier
Jan 15 23:45:58.831675 systemd-networkd[1483]: cilium_vxlan: Link UP
Jan 15 23:45:58.831849 systemd-networkd[1483]: cilium_vxlan: Gained carrier
Jan 15 23:45:59.081769 systemd-networkd[1483]: cilium_net: Gained IPv6LL
Jan 15 23:45:59.093649 kernel: NET: Registered PF_ALG protocol family
Jan 15 23:45:59.442752 systemd-networkd[1483]: cilium_host: Gained IPv6LL
Jan 15 23:45:59.640085 systemd-networkd[1483]: lxc_health: Link UP
Jan 15 23:45:59.648135 systemd-networkd[1483]: lxc_health: Gained carrier
Jan 15 23:45:59.802754 systemd-networkd[1483]: lxc6f961feef98f: Link UP
Jan 15 23:45:59.812863 kernel: eth0: renamed from tmp92dd3
Jan 15 23:45:59.812444 systemd-networkd[1483]: lxc6f961feef98f: Gained carrier
Jan 15 23:45:59.823871 systemd-networkd[1483]: lxc6f91b4902547: Link UP
Jan 15 23:45:59.831846 kernel: eth0: renamed from tmp00582
Jan 15 23:45:59.834472 systemd-networkd[1483]: lxc6f91b4902547: Gained carrier
Jan 15 23:46:00.082839 systemd-networkd[1483]: cilium_vxlan: Gained IPv6LL
Jan 15 23:46:01.041066 kubelet[3440]: I0115 23:46:01.040998 3440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-m6m5h" podStartSLOduration=13.618904968 podStartE2EDuration="27.040986871s" podCreationTimestamp="2026-01-15 23:45:34 +0000 UTC" firstStartedPulling="2026-01-15 23:45:35.143845675 +0000 UTC m=+8.570953850" lastFinishedPulling="2026-01-15 23:45:48.565927578 +0000 UTC m=+21.993035753" observedRunningTime="2026-01-15 23:45:54.809348924 +0000 UTC m=+28.236457099" watchObservedRunningTime="2026-01-15 23:46:01.040986871 +0000 UTC m=+34.468095046"
Jan 15 23:46:01.042815 systemd-networkd[1483]: lxc6f91b4902547: Gained IPv6LL
Jan 15 23:46:01.169786 systemd-networkd[1483]: lxc6f961feef98f: Gained IPv6LL
Jan 15 23:46:01.618807 systemd-networkd[1483]: lxc_health: Gained IPv6LL
Jan 15 23:46:02.389019 containerd[1903]: time="2026-01-15T23:46:02.388688877Z" level=info msg="connecting to shim 92dd3713e95c7668548587c15f7f06c42d9b3cc50429371f6546965923679024" address="unix:///run/containerd/s/04798eb0224e09deda7611f663b7ad74f3eb9e6a7114880491e22be9ac30de69" namespace=k8s.io protocol=ttrpc version=3
Jan 15 23:46:02.400828 containerd[1903]: time="2026-01-15T23:46:02.400794841Z" level=info msg="connecting to shim 005827a37adeba66f9609770983e4ced0557d87b726acdb319a819c5bd628964" address="unix:///run/containerd/s/db6414e8069ccd8433ac3411b5a8657174a0ed6fad8c1cd97e5d102699f1e96d" namespace=k8s.io protocol=ttrpc version=3
Jan 15 23:46:02.416795 systemd[1]: Started cri-containerd-92dd3713e95c7668548587c15f7f06c42d9b3cc50429371f6546965923679024.scope - libcontainer container 92dd3713e95c7668548587c15f7f06c42d9b3cc50429371f6546965923679024.
Jan 15 23:46:02.432740 systemd[1]: Started cri-containerd-005827a37adeba66f9609770983e4ced0557d87b726acdb319a819c5bd628964.scope - libcontainer container 005827a37adeba66f9609770983e4ced0557d87b726acdb319a819c5bd628964.
Jan 15 23:46:02.462570 containerd[1903]: time="2026-01-15T23:46:02.462484087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-575kw,Uid:ed2a3b68-028f-4b21-a319-bad4118a5cbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"92dd3713e95c7668548587c15f7f06c42d9b3cc50429371f6546965923679024\""
Jan 15 23:46:02.471916 containerd[1903]: time="2026-01-15T23:46:02.471812317Z" level=info msg="CreateContainer within sandbox \"92dd3713e95c7668548587c15f7f06c42d9b3cc50429371f6546965923679024\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 15 23:46:02.480222 containerd[1903]: time="2026-01-15T23:46:02.480148959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pswb5,Uid:366f31a9-f256-4a1c-ab2f-6638780ea94c,Namespace:kube-system,Attempt:0,} returns sandbox id \"005827a37adeba66f9609770983e4ced0557d87b726acdb319a819c5bd628964\""
Jan 15 23:46:02.489222 containerd[1903]: time="2026-01-15T23:46:02.488958207Z" level=info msg="CreateContainer within sandbox \"005827a37adeba66f9609770983e4ced0557d87b726acdb319a819c5bd628964\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 15 23:46:02.499013 containerd[1903]: time="2026-01-15T23:46:02.498986316Z" level=info msg="Container 6af8ce6eea3ed4704e6c8c054806ea639b5cf1e2df69cd75727f98bf2512b70e: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:46:02.518771 containerd[1903]: time="2026-01-15T23:46:02.518737731Z" level=info msg="CreateContainer within sandbox \"92dd3713e95c7668548587c15f7f06c42d9b3cc50429371f6546965923679024\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6af8ce6eea3ed4704e6c8c054806ea639b5cf1e2df69cd75727f98bf2512b70e\""
Jan 15 23:46:02.519684 containerd[1903]: time="2026-01-15T23:46:02.519656573Z" level=info msg="StartContainer for \"6af8ce6eea3ed4704e6c8c054806ea639b5cf1e2df69cd75727f98bf2512b70e\""
Jan 15 23:46:02.520256 containerd[1903]: time="2026-01-15T23:46:02.520227779Z" level=info msg="connecting to shim 6af8ce6eea3ed4704e6c8c054806ea639b5cf1e2df69cd75727f98bf2512b70e" address="unix:///run/containerd/s/04798eb0224e09deda7611f663b7ad74f3eb9e6a7114880491e22be9ac30de69" protocol=ttrpc version=3
Jan 15 23:46:02.523772 containerd[1903]: time="2026-01-15T23:46:02.523744953Z" level=info msg="Container e61d5e16c35dce2f38bfc350b144a18293c6bcc78cb92b7c838a4724d14ec35a: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:46:02.534868 systemd[1]: Started cri-containerd-6af8ce6eea3ed4704e6c8c054806ea639b5cf1e2df69cd75727f98bf2512b70e.scope - libcontainer container 6af8ce6eea3ed4704e6c8c054806ea639b5cf1e2df69cd75727f98bf2512b70e.
Jan 15 23:46:02.539991 containerd[1903]: time="2026-01-15T23:46:02.539954361Z" level=info msg="CreateContainer within sandbox \"005827a37adeba66f9609770983e4ced0557d87b726acdb319a819c5bd628964\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e61d5e16c35dce2f38bfc350b144a18293c6bcc78cb92b7c838a4724d14ec35a\""
Jan 15 23:46:02.540649 containerd[1903]: time="2026-01-15T23:46:02.540565120Z" level=info msg="StartContainer for \"e61d5e16c35dce2f38bfc350b144a18293c6bcc78cb92b7c838a4724d14ec35a\""
Jan 15 23:46:02.542512 containerd[1903]: time="2026-01-15T23:46:02.542068520Z" level=info msg="connecting to shim e61d5e16c35dce2f38bfc350b144a18293c6bcc78cb92b7c838a4724d14ec35a" address="unix:///run/containerd/s/db6414e8069ccd8433ac3411b5a8657174a0ed6fad8c1cd97e5d102699f1e96d" protocol=ttrpc version=3
Jan 15 23:46:02.560743 systemd[1]: Started cri-containerd-e61d5e16c35dce2f38bfc350b144a18293c6bcc78cb92b7c838a4724d14ec35a.scope - libcontainer container e61d5e16c35dce2f38bfc350b144a18293c6bcc78cb92b7c838a4724d14ec35a.
Jan 15 23:46:02.575420 containerd[1903]: time="2026-01-15T23:46:02.575306938Z" level=info msg="StartContainer for \"6af8ce6eea3ed4704e6c8c054806ea639b5cf1e2df69cd75727f98bf2512b70e\" returns successfully"
Jan 15 23:46:02.597946 containerd[1903]: time="2026-01-15T23:46:02.597886287Z" level=info msg="StartContainer for \"e61d5e16c35dce2f38bfc350b144a18293c6bcc78cb92b7c838a4724d14ec35a\" returns successfully"
Jan 15 23:46:02.768282 kubelet[3440]: I0115 23:46:02.767757 3440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pswb5" podStartSLOduration=28.767745229 podStartE2EDuration="28.767745229s" podCreationTimestamp="2026-01-15 23:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:46:02.766027987 +0000 UTC m=+36.193136170" watchObservedRunningTime="2026-01-15 23:46:02.767745229 +0000 UTC m=+36.194853404"
Jan 15 23:46:03.379456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3105525575.mount: Deactivated successfully.
Jan 15 23:47:10.654814 systemd[1]: Started sshd@7-10.200.20.15:22-10.200.16.10:60744.service - OpenSSH per-connection server daemon (10.200.16.10:60744).
Jan 15 23:47:11.113587 sshd[4775]: Accepted publickey for core from 10.200.16.10 port 60744 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:47:11.114766 sshd-session[4775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:47:11.118589 systemd-logind[1880]: New session 10 of user core.
Jan 15 23:47:11.124751 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 15 23:47:11.493305 sshd[4778]: Connection closed by 10.200.16.10 port 60744
Jan 15 23:47:11.493996 sshd-session[4775]: pam_unix(sshd:session): session closed for user core
Jan 15 23:47:11.497671 systemd-logind[1880]: Session 10 logged out. Waiting for processes to exit.
Jan 15 23:47:11.497980 systemd[1]: sshd@7-10.200.20.15:22-10.200.16.10:60744.service: Deactivated successfully.
Jan 15 23:47:11.500948 systemd[1]: session-10.scope: Deactivated successfully.
Jan 15 23:47:11.503153 systemd-logind[1880]: Removed session 10.
Jan 15 23:47:16.592454 systemd[1]: Started sshd@8-10.200.20.15:22-10.200.16.10:60752.service - OpenSSH per-connection server daemon (10.200.16.10:60752).
Jan 15 23:47:17.072738 sshd[4790]: Accepted publickey for core from 10.200.16.10 port 60752 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:47:17.073473 sshd-session[4790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:47:17.077065 systemd-logind[1880]: New session 11 of user core.
Jan 15 23:47:17.085749 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 15 23:47:17.482266 sshd[4793]: Connection closed by 10.200.16.10 port 60752
Jan 15 23:47:17.482172 sshd-session[4790]: pam_unix(sshd:session): session closed for user core
Jan 15 23:47:17.485536 systemd[1]: sshd@8-10.200.20.15:22-10.200.16.10:60752.service: Deactivated successfully.
Jan 15 23:47:17.489148 systemd[1]: session-11.scope: Deactivated successfully.
Jan 15 23:47:17.490044 systemd-logind[1880]: Session 11 logged out. Waiting for processes to exit.
Jan 15 23:47:17.491297 systemd-logind[1880]: Removed session 11.
Jan 15 23:47:22.560564 systemd[1]: Started sshd@9-10.200.20.15:22-10.200.16.10:46194.service - OpenSSH per-connection server daemon (10.200.16.10:46194).
Jan 15 23:47:23.021021 sshd[4805]: Accepted publickey for core from 10.200.16.10 port 46194 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:47:23.021859 sshd-session[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:47:23.025359 systemd-logind[1880]: New session 12 of user core.
Jan 15 23:47:23.032752 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 15 23:47:23.404953 sshd[4808]: Connection closed by 10.200.16.10 port 46194
Jan 15 23:47:23.406185 sshd-session[4805]: pam_unix(sshd:session): session closed for user core
Jan 15 23:47:23.409949 systemd-logind[1880]: Session 12 logged out. Waiting for processes to exit.
Jan 15 23:47:23.410516 systemd[1]: sshd@9-10.200.20.15:22-10.200.16.10:46194.service: Deactivated successfully.
Jan 15 23:47:23.412820 systemd[1]: session-12.scope: Deactivated successfully.
Jan 15 23:47:23.414473 systemd-logind[1880]: Removed session 12.
Jan 15 23:47:28.473827 systemd[1]: Started sshd@10-10.200.20.15:22-10.200.16.10:46210.service - OpenSSH per-connection server daemon (10.200.16.10:46210).
Jan 15 23:47:28.893475 sshd[4823]: Accepted publickey for core from 10.200.16.10 port 46210 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:47:28.894941 sshd-session[4823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:47:28.899723 systemd-logind[1880]: New session 13 of user core.
Jan 15 23:47:28.906850 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 15 23:47:29.240499 sshd[4826]: Connection closed by 10.200.16.10 port 46210
Jan 15 23:47:29.241136 sshd-session[4823]: pam_unix(sshd:session): session closed for user core
Jan 15 23:47:29.244726 systemd-logind[1880]: Session 13 logged out. Waiting for processes to exit.
Jan 15 23:47:29.245046 systemd[1]: sshd@10-10.200.20.15:22-10.200.16.10:46210.service: Deactivated successfully.
Jan 15 23:47:29.248176 systemd[1]: session-13.scope: Deactivated successfully.
Jan 15 23:47:29.249957 systemd-logind[1880]: Removed session 13.
Jan 15 23:47:29.332777 systemd[1]: Started sshd@11-10.200.20.15:22-10.200.16.10:46224.service - OpenSSH per-connection server daemon (10.200.16.10:46224).
Jan 15 23:47:29.780560 sshd[4838]: Accepted publickey for core from 10.200.16.10 port 46224 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:47:29.781806 sshd-session[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:47:29.785301 systemd-logind[1880]: New session 14 of user core.
Jan 15 23:47:29.793751 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 15 23:47:30.190583 sshd[4841]: Connection closed by 10.200.16.10 port 46224
Jan 15 23:47:30.190984 sshd-session[4838]: pam_unix(sshd:session): session closed for user core
Jan 15 23:47:30.194911 systemd[1]: sshd@11-10.200.20.15:22-10.200.16.10:46224.service: Deactivated successfully.
Jan 15 23:47:30.196911 systemd[1]: session-14.scope: Deactivated successfully.
Jan 15 23:47:30.197848 systemd-logind[1880]: Session 14 logged out. Waiting for processes to exit.
Jan 15 23:47:30.199587 systemd-logind[1880]: Removed session 14.
Jan 15 23:47:30.268924 systemd[1]: Started sshd@12-10.200.20.15:22-10.200.16.10:53706.service - OpenSSH per-connection server daemon (10.200.16.10:53706).
Jan 15 23:47:30.688147 sshd[4851]: Accepted publickey for core from 10.200.16.10 port 53706 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:47:30.689296 sshd-session[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:47:30.692955 systemd-logind[1880]: New session 15 of user core.
Jan 15 23:47:30.700760 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 15 23:47:31.046685 sshd[4854]: Connection closed by 10.200.16.10 port 53706
Jan 15 23:47:31.047312 sshd-session[4851]: pam_unix(sshd:session): session closed for user core
Jan 15 23:47:31.050979 systemd[1]: sshd@12-10.200.20.15:22-10.200.16.10:53706.service: Deactivated successfully.
Jan 15 23:47:31.053284 systemd[1]: session-15.scope: Deactivated successfully.
Jan 15 23:47:31.054299 systemd-logind[1880]: Session 15 logged out. Waiting for processes to exit.
Jan 15 23:47:31.055886 systemd-logind[1880]: Removed session 15.
Jan 15 23:47:36.128440 systemd[1]: Started sshd@13-10.200.20.15:22-10.200.16.10:53714.service - OpenSSH per-connection server daemon (10.200.16.10:53714).
Jan 15 23:47:36.581367 sshd[4869]: Accepted publickey for core from 10.200.16.10 port 53714 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:47:36.582097 sshd-session[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:47:36.586131 systemd-logind[1880]: New session 16 of user core.
Jan 15 23:47:36.594748 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 15 23:47:36.944855 sshd[4872]: Connection closed by 10.200.16.10 port 53714
Jan 15 23:47:36.945417 sshd-session[4869]: pam_unix(sshd:session): session closed for user core
Jan 15 23:47:36.948747 systemd[1]: sshd@13-10.200.20.15:22-10.200.16.10:53714.service: Deactivated successfully.
Jan 15 23:47:36.950281 systemd[1]: session-16.scope: Deactivated successfully.
Jan 15 23:47:36.951090 systemd-logind[1880]: Session 16 logged out. Waiting for processes to exit.
Jan 15 23:47:36.952452 systemd-logind[1880]: Removed session 16.
Jan 15 23:47:37.024938 systemd[1]: Started sshd@14-10.200.20.15:22-10.200.16.10:53718.service - OpenSSH per-connection server daemon (10.200.16.10:53718).
Jan 15 23:47:37.439084 sshd[4884]: Accepted publickey for core from 10.200.16.10 port 53718 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:47:37.440200 sshd-session[4884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:47:37.443885 systemd-logind[1880]: New session 17 of user core.
Jan 15 23:47:37.457948 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 15 23:47:37.828816 sshd[4887]: Connection closed by 10.200.16.10 port 53718
Jan 15 23:47:37.829473 sshd-session[4884]: pam_unix(sshd:session): session closed for user core
Jan 15 23:47:37.833299 systemd[1]: sshd@14-10.200.20.15:22-10.200.16.10:53718.service: Deactivated successfully.
Jan 15 23:47:37.835322 systemd[1]: session-17.scope: Deactivated successfully.
Jan 15 23:47:37.836306 systemd-logind[1880]: Session 17 logged out. Waiting for processes to exit.
Jan 15 23:47:37.838291 systemd-logind[1880]: Removed session 17.
Jan 15 23:47:37.909476 systemd[1]: Started sshd@15-10.200.20.15:22-10.200.16.10:53720.service - OpenSSH per-connection server daemon (10.200.16.10:53720).
Jan 15 23:47:38.362973 sshd[4896]: Accepted publickey for core from 10.200.16.10 port 53720 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:47:38.364144 sshd-session[4896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:47:38.367851 systemd-logind[1880]: New session 18 of user core.
Jan 15 23:47:38.374772 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 15 23:47:39.152884 sshd[4899]: Connection closed by 10.200.16.10 port 53720
Jan 15 23:47:39.152790 sshd-session[4896]: pam_unix(sshd:session): session closed for user core
Jan 15 23:47:39.156674 systemd[1]: sshd@15-10.200.20.15:22-10.200.16.10:53720.service: Deactivated successfully.
Jan 15 23:47:39.158421 systemd[1]: session-18.scope: Deactivated successfully.
Jan 15 23:47:39.160413 systemd-logind[1880]: Session 18 logged out. Waiting for processes to exit.
Jan 15 23:47:39.161733 systemd-logind[1880]: Removed session 18.
Jan 15 23:47:39.267557 systemd[1]: Started sshd@16-10.200.20.15:22-10.200.16.10:53722.service - OpenSSH per-connection server daemon (10.200.16.10:53722).
Jan 15 23:47:39.752683 sshd[4916]: Accepted publickey for core from 10.200.16.10 port 53722 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:47:39.753539 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:47:39.757139 systemd-logind[1880]: New session 19 of user core.
Jan 15 23:47:39.769771 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 15 23:47:40.238964 sshd[4919]: Connection closed by 10.200.16.10 port 53722
Jan 15 23:47:40.239519 sshd-session[4916]: pam_unix(sshd:session): session closed for user core
Jan 15 23:47:40.243097 systemd[1]: sshd@16-10.200.20.15:22-10.200.16.10:53722.service: Deactivated successfully.
Jan 15 23:47:40.244928 systemd[1]: session-19.scope: Deactivated successfully.
Jan 15 23:47:40.246183 systemd-logind[1880]: Session 19 logged out. Waiting for processes to exit.
Jan 15 23:47:40.248925 systemd-logind[1880]: Removed session 19.
Jan 15 23:47:40.331424 systemd[1]: Started sshd@17-10.200.20.15:22-10.200.16.10:38220.service - OpenSSH per-connection server daemon (10.200.16.10:38220).
Jan 15 23:47:40.814682 sshd[4928]: Accepted publickey for core from 10.200.16.10 port 38220 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:47:40.815684 sshd-session[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:47:40.820674 systemd-logind[1880]: New session 20 of user core.
Jan 15 23:47:40.824757 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 15 23:47:41.216296 sshd[4931]: Connection closed by 10.200.16.10 port 38220
Jan 15 23:47:41.216210 sshd-session[4928]: pam_unix(sshd:session): session closed for user core
Jan 15 23:47:41.219463 systemd[1]: sshd@17-10.200.20.15:22-10.200.16.10:38220.service: Deactivated successfully.
Jan 15 23:47:41.221252 systemd[1]: session-20.scope: Deactivated successfully.
Jan 15 23:47:41.222037 systemd-logind[1880]: Session 20 logged out. Waiting for processes to exit.
Jan 15 23:47:41.223230 systemd-logind[1880]: Removed session 20.
Jan 15 23:47:46.297468 systemd[1]: Started sshd@18-10.200.20.15:22-10.200.16.10:38222.service - OpenSSH per-connection server daemon (10.200.16.10:38222).
Jan 15 23:47:46.748677 sshd[4945]: Accepted publickey for core from 10.200.16.10 port 38222 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:47:46.749470 sshd-session[4945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:47:46.753292 systemd-logind[1880]: New session 21 of user core.
Jan 15 23:47:46.760759 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 15 23:47:47.115408 sshd[4948]: Connection closed by 10.200.16.10 port 38222
Jan 15 23:47:47.116023 sshd-session[4945]: pam_unix(sshd:session): session closed for user core
Jan 15 23:47:47.118693 systemd[1]: sshd@18-10.200.20.15:22-10.200.16.10:38222.service: Deactivated successfully.
Jan 15 23:47:47.120589 systemd[1]: session-21.scope: Deactivated successfully.
Jan 15 23:47:47.121560 systemd-logind[1880]: Session 21 logged out. Waiting for processes to exit.
Jan 15 23:47:47.123459 systemd-logind[1880]: Removed session 21.
Jan 15 23:47:52.186938 systemd[1]: Started sshd@19-10.200.20.15:22-10.200.16.10:56780.service - OpenSSH per-connection server daemon (10.200.16.10:56780).
Jan 15 23:47:52.598934 sshd[4960]: Accepted publickey for core from 10.200.16.10 port 56780 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:47:52.600001 sshd-session[4960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:47:52.603558 systemd-logind[1880]: New session 22 of user core.
Jan 15 23:47:52.611744 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 15 23:47:52.943479 sshd[4963]: Connection closed by 10.200.16.10 port 56780
Jan 15 23:47:52.942550 sshd-session[4960]: pam_unix(sshd:session): session closed for user core
Jan 15 23:47:52.945482 systemd[1]: sshd@19-10.200.20.15:22-10.200.16.10:56780.service: Deactivated successfully.
Jan 15 23:47:52.947079 systemd[1]: session-22.scope: Deactivated successfully.
Jan 15 23:47:52.947839 systemd-logind[1880]: Session 22 logged out. Waiting for processes to exit.
Jan 15 23:47:52.949136 systemd-logind[1880]: Removed session 22.
Jan 15 23:47:53.041345 systemd[1]: Started sshd@20-10.200.20.15:22-10.200.16.10:56790.service - OpenSSH per-connection server daemon (10.200.16.10:56790).
Jan 15 23:47:53.537832 sshd[4974]: Accepted publickey for core from 10.200.16.10 port 56790 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:47:53.538908 sshd-session[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:47:53.542396 systemd-logind[1880]: New session 23 of user core.
Jan 15 23:47:53.549918 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 15 23:47:55.085657 kubelet[3440]: I0115 23:47:55.084662 3440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-575kw" podStartSLOduration=141.084637997 podStartE2EDuration="2m21.084637997s" podCreationTimestamp="2026-01-15 23:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:46:02.794504272 +0000 UTC m=+36.221612447" watchObservedRunningTime="2026-01-15 23:47:55.084637997 +0000 UTC m=+148.511746172"
Jan 15 23:47:55.102986 containerd[1903]: time="2026-01-15T23:47:55.102943363Z" level=info msg="StopContainer for \"2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9\" with timeout 30 (s)"
Jan 15 23:47:55.105955 containerd[1903]: time="2026-01-15T23:47:55.105874998Z" level=info msg="Stop container \"2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9\" with signal terminated"
Jan 15 23:47:55.121789 systemd[1]: cri-containerd-2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9.scope: Deactivated successfully.
Jan 15 23:47:55.123770 containerd[1903]: time="2026-01-15T23:47:55.123731127Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 15 23:47:55.127347 containerd[1903]: time="2026-01-15T23:47:55.127158864Z" level=info msg="received container exit event container_id:\"2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9\" id:\"2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9\" pid:4191 exited_at:{seconds:1768520875 nanos:125858987}"
Jan 15 23:47:55.131499 containerd[1903]: time="2026-01-15T23:47:55.131457832Z" level=info msg="StopContainer for \"7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1\" with timeout 2 (s)"
Jan 15 23:47:55.132915 containerd[1903]: time="2026-01-15T23:47:55.132781357Z" level=info msg="Stop container \"7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1\" with signal terminated"
Jan 15 23:47:55.143167 systemd-networkd[1483]: lxc_health: Link DOWN
Jan 15 23:47:55.143517 systemd-networkd[1483]: lxc_health: Lost carrier
Jan 15 23:47:55.154280 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9-rootfs.mount: Deactivated successfully.
Jan 15 23:47:55.160375 systemd[1]: cri-containerd-7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1.scope: Deactivated successfully.
Jan 15 23:47:55.161013 systemd[1]: cri-containerd-7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1.scope: Consumed 4.371s CPU time, 125.7M memory peak, 136K read from disk, 12.9M written to disk.
Jan 15 23:47:55.161923 containerd[1903]: time="2026-01-15T23:47:55.161858736Z" level=info msg="received container exit event container_id:\"7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1\" id:\"7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1\" pid:4044 exited_at:{seconds:1768520875 nanos:161318899}"
Jan 15 23:47:55.180502 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1-rootfs.mount: Deactivated successfully.
Jan 15 23:47:55.222164 containerd[1903]: time="2026-01-15T23:47:55.222035641Z" level=info msg="StopContainer for \"7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1\" returns successfully"
Jan 15 23:47:55.222800 containerd[1903]: time="2026-01-15T23:47:55.222707032Z" level=info msg="StopPodSandbox for \"2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd\""
Jan 15 23:47:55.222800 containerd[1903]: time="2026-01-15T23:47:55.222772848Z" level=info msg="Container to stop \"774fb093e064a9fa61a88896a3c4050a956df197f108fe81bcaede29a64dec2e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 15 23:47:55.222800 containerd[1903]: time="2026-01-15T23:47:55.222781728Z" level=info msg="Container to stop \"8c95d50088e6cb7b79beeebf1b8fdc592f3bd8ec1322400b195ccddd052201fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 15 23:47:55.223059 containerd[1903]: time="2026-01-15T23:47:55.222787832Z" level=info msg="Container to stop \"7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 15 23:47:55.223210 containerd[1903]: time="2026-01-15T23:47:55.223175828Z" level=info msg="Container to stop \"486fe47066f127166f6fbdc896331d25206efc6fe52d2326867c415fba728646\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 15 23:47:55.223210 containerd[1903]: time="2026-01-15T23:47:55.223199380Z" level=info msg="Container to stop \"fe601936c3f11246267e3a51a52c55104c25d6909f45f88a723b4371f2a0d95b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 15 23:47:55.225054 containerd[1903]: time="2026-01-15T23:47:55.225014381Z" level=info msg="StopContainer for \"2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9\" returns successfully"
Jan 15 23:47:55.225639 containerd[1903]: time="2026-01-15T23:47:55.225550715Z" level=info msg="StopPodSandbox for \"e607d402df4ab9b0953730ce03082364fca56d1fbadcdee882d30adbcceb2ecc\""
Jan 15 23:47:55.225794 containerd[1903]: time="2026-01-15T23:47:55.225727332Z" level=info msg="Container to stop \"2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 15 23:47:55.229596 systemd[1]: cri-containerd-2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd.scope: Deactivated successfully.
Jan 15 23:47:55.232039 containerd[1903]: time="2026-01-15T23:47:55.231951783Z" level=info msg="received sandbox exit event container_id:\"2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd\" id:\"2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd\" exit_status:137 exited_at:{seconds:1768520875 nanos:231608348}" monitor_name=podsandbox
Jan 15 23:47:55.232587 systemd[1]: cri-containerd-e607d402df4ab9b0953730ce03082364fca56d1fbadcdee882d30adbcceb2ecc.scope: Deactivated successfully.
Jan 15 23:47:55.242922 containerd[1903]: time="2026-01-15T23:47:55.242546107Z" level=info msg="received sandbox exit event container_id:\"e607d402df4ab9b0953730ce03082364fca56d1fbadcdee882d30adbcceb2ecc\" id:\"e607d402df4ab9b0953730ce03082364fca56d1fbadcdee882d30adbcceb2ecc\" exit_status:137 exited_at:{seconds:1768520875 nanos:242077335}" monitor_name=podsandbox
Jan 15 23:47:55.255454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd-rootfs.mount: Deactivated successfully.
Jan 15 23:47:55.268432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e607d402df4ab9b0953730ce03082364fca56d1fbadcdee882d30adbcceb2ecc-rootfs.mount: Deactivated successfully.
Jan 15 23:47:55.271566 containerd[1903]: time="2026-01-15T23:47:55.271292395Z" level=info msg="shim disconnected" id=e607d402df4ab9b0953730ce03082364fca56d1fbadcdee882d30adbcceb2ecc namespace=k8s.io
Jan 15 23:47:55.271566 containerd[1903]: time="2026-01-15T23:47:55.271504853Z" level=warning msg="cleaning up after shim disconnected" id=e607d402df4ab9b0953730ce03082364fca56d1fbadcdee882d30adbcceb2ecc namespace=k8s.io
Jan 15 23:47:55.271868 containerd[1903]: time="2026-01-15T23:47:55.271538190Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 15 23:47:55.272008 containerd[1903]: time="2026-01-15T23:47:55.271320644Z" level=info msg="shim disconnected" id=2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd namespace=k8s.io
Jan 15 23:47:55.272008 containerd[1903]: time="2026-01-15T23:47:55.271956778Z" level=warning msg="cleaning up after shim disconnected" id=2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd namespace=k8s.io
Jan 15 23:47:55.272008 containerd[1903]: time="2026-01-15T23:47:55.271976306Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 15 23:47:55.283705 containerd[1903]: time="2026-01-15T23:47:55.283598832Z" level=info msg="received sandbox container exit event sandbox_id:\"2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd\" exit_status:137 exited_at:{seconds:1768520875 nanos:231608348}" monitor_name=criService
Jan 15 23:47:55.286085 containerd[1903]: time="2026-01-15T23:47:55.286046519Z" level=info msg="TearDown network for sandbox \"2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd\" successfully"
Jan 15 23:47:55.286222 containerd[1903]: time="2026-01-15T23:47:55.286209120Z" level=info msg="StopPodSandbox for \"2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd\" returns successfully"
Jan 15 23:47:55.286261 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f6982c7d84485fb4763d6ab0be06a6e2c3a0f5065c788947192336a843432cd-shm.mount: Deactivated successfully.
Jan 15 23:47:55.289503 containerd[1903]: time="2026-01-15T23:47:55.289098116Z" level=info msg="received sandbox container exit event sandbox_id:\"e607d402df4ab9b0953730ce03082364fca56d1fbadcdee882d30adbcceb2ecc\" exit_status:137 exited_at:{seconds:1768520875 nanos:242077335}" monitor_name=criService
Jan 15 23:47:55.289598 containerd[1903]: time="2026-01-15T23:47:55.289354534Z" level=info msg="TearDown network for sandbox \"e607d402df4ab9b0953730ce03082364fca56d1fbadcdee882d30adbcceb2ecc\" successfully"
Jan 15 23:47:55.289598 containerd[1903]: time="2026-01-15T23:47:55.289541288Z" level=info msg="StopPodSandbox for \"e607d402df4ab9b0953730ce03082364fca56d1fbadcdee882d30adbcceb2ecc\" returns successfully"
Jan 15 23:47:55.435818 kubelet[3440]: I0115 23:47:55.435684 3440 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-hostproc\") pod \"37aa14db-936c-4a40-928c-4e00cd92b33f\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") "
Jan 15 23:47:55.435818 kubelet[3440]: I0115 23:47:55.435734 3440 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-host-proc-sys-kernel\") pod \"37aa14db-936c-4a40-928c-4e00cd92b33f\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") "
Jan 15 23:47:55.435818 kubelet[3440]: I0115 23:47:55.435756 3440 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d513465d-02c8-4ae8-9542-a66946be2e56-cilium-config-path\") pod \"d513465d-02c8-4ae8-9542-a66946be2e56\" (UID: \"d513465d-02c8-4ae8-9542-a66946be2e56\") "
Jan 15 23:47:55.435818 kubelet[3440]: I0115 23:47:55.435770 3440 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k87z\" (UniqueName: \"kubernetes.io/projected/d513465d-02c8-4ae8-9542-a66946be2e56-kube-api-access-5k87z\") pod \"d513465d-02c8-4ae8-9542-a66946be2e56\" (UID: \"d513465d-02c8-4ae8-9542-a66946be2e56\") "
Jan 15 23:47:55.435818 kubelet[3440]: I0115 23:47:55.435779 3440 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-lib-modules\") pod \"37aa14db-936c-4a40-928c-4e00cd92b33f\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") "
Jan 15 23:47:55.435818 kubelet[3440]: I0115 23:47:55.435788 3440 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-bpf-maps\") pod \"37aa14db-936c-4a40-928c-4e00cd92b33f\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") "
Jan 15 23:47:55.436042 kubelet[3440]: I0115 23:47:55.435801 3440 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/37aa14db-936c-4a40-928c-4e00cd92b33f-clustermesh-secrets\") pod \"37aa14db-936c-4a40-928c-4e00cd92b33f\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") "
Jan 15 23:47:55.436042 kubelet[3440]: I0115 23:47:55.435823 3440 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-etc-cni-netd\") pod \"37aa14db-936c-4a40-928c-4e00cd92b33f\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") "
Jan 15 23:47:55.436042 kubelet[3440]: I0115 23:47:55.435831 3440 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-cilium-cgroup\") pod \"37aa14db-936c-4a40-928c-4e00cd92b33f\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") "
Jan 15 23:47:55.436042 kubelet[3440]: I0115 23:47:55.435841 3440 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-cilium-run\") pod \"37aa14db-936c-4a40-928c-4e00cd92b33f\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") "
Jan 15 23:47:55.436042 kubelet[3440]: I0115 23:47:55.435850 3440 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/37aa14db-936c-4a40-928c-4e00cd92b33f-hubble-tls\") pod \"37aa14db-936c-4a40-928c-4e00cd92b33f\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") "
Jan 15 23:47:55.436042 kubelet[3440]: I0115 23:47:55.435862 3440 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-xtables-lock\") pod \"37aa14db-936c-4a40-928c-4e00cd92b33f\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") "
Jan 15 23:47:55.436129 kubelet[3440]: I0115 23:47:55.435870 3440 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-host-proc-sys-net\") pod \"37aa14db-936c-4a40-928c-4e00cd92b33f\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") "
Jan 15 23:47:55.436129 kubelet[3440]: I0115 23:47:55.435881 3440 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37aa14db-936c-4a40-928c-4e00cd92b33f-cilium-config-path\") pod \"37aa14db-936c-4a40-928c-4e00cd92b33f\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") "
Jan 15 23:47:55.436129 kubelet[3440]: I0115 23:47:55.435891 3440 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-cni-path\") pod \"37aa14db-936c-4a40-928c-4e00cd92b33f\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") "
Jan 15 23:47:55.436129 kubelet[3440]: I0115 23:47:55.435903 3440 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9cms\" (UniqueName: \"kubernetes.io/projected/37aa14db-936c-4a40-928c-4e00cd92b33f-kube-api-access-w9cms\") pod \"37aa14db-936c-4a40-928c-4e00cd92b33f\" (UID: \"37aa14db-936c-4a40-928c-4e00cd92b33f\") "
Jan 15 23:47:55.436636 kubelet[3440]: I0115 23:47:55.435699 3440 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-hostproc" (OuterVolumeSpecName: "hostproc") pod "37aa14db-936c-4a40-928c-4e00cd92b33f" (UID: "37aa14db-936c-4a40-928c-4e00cd92b33f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 15 23:47:55.436636 kubelet[3440]: I0115 23:47:55.436230 3440 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "37aa14db-936c-4a40-928c-4e00cd92b33f" (UID: "37aa14db-936c-4a40-928c-4e00cd92b33f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 15 23:47:55.436636 kubelet[3440]: I0115 23:47:55.436294 3440 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "37aa14db-936c-4a40-928c-4e00cd92b33f" (UID: "37aa14db-936c-4a40-928c-4e00cd92b33f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 15 23:47:55.437188 kubelet[3440]: I0115 23:47:55.437133 3440 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "37aa14db-936c-4a40-928c-4e00cd92b33f" (UID: "37aa14db-936c-4a40-928c-4e00cd92b33f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 15 23:47:55.437188 kubelet[3440]: I0115 23:47:55.437164 3440 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "37aa14db-936c-4a40-928c-4e00cd92b33f" (UID: "37aa14db-936c-4a40-928c-4e00cd92b33f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 15 23:47:55.437962 kubelet[3440]: I0115 23:47:55.437935 3440 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d513465d-02c8-4ae8-9542-a66946be2e56-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d513465d-02c8-4ae8-9542-a66946be2e56" (UID: "d513465d-02c8-4ae8-9542-a66946be2e56"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 15 23:47:55.439702 kubelet[3440]: I0115 23:47:55.439672 3440 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "37aa14db-936c-4a40-928c-4e00cd92b33f" (UID: "37aa14db-936c-4a40-928c-4e00cd92b33f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 15 23:47:55.439781 kubelet[3440]: I0115 23:47:55.439708 3440 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "37aa14db-936c-4a40-928c-4e00cd92b33f" (UID: "37aa14db-936c-4a40-928c-4e00cd92b33f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 15 23:47:55.440369 kubelet[3440]: I0115 23:47:55.439928 3440 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "37aa14db-936c-4a40-928c-4e00cd92b33f" (UID: "37aa14db-936c-4a40-928c-4e00cd92b33f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 15 23:47:55.440499 kubelet[3440]: I0115 23:47:55.440483 3440 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "37aa14db-936c-4a40-928c-4e00cd92b33f" (UID: "37aa14db-936c-4a40-928c-4e00cd92b33f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 15 23:47:55.440871 kubelet[3440]: I0115 23:47:55.440852 3440 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-cni-path" (OuterVolumeSpecName: "cni-path") pod "37aa14db-936c-4a40-928c-4e00cd92b33f" (UID: "37aa14db-936c-4a40-928c-4e00cd92b33f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 15 23:47:55.442655 kubelet[3440]: I0115 23:47:55.442613 3440 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37aa14db-936c-4a40-928c-4e00cd92b33f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "37aa14db-936c-4a40-928c-4e00cd92b33f" (UID: "37aa14db-936c-4a40-928c-4e00cd92b33f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 15 23:47:55.442819 kubelet[3440]: I0115 23:47:55.442710 3440 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37aa14db-936c-4a40-928c-4e00cd92b33f-kube-api-access-w9cms" (OuterVolumeSpecName: "kube-api-access-w9cms") pod "37aa14db-936c-4a40-928c-4e00cd92b33f" (UID: "37aa14db-936c-4a40-928c-4e00cd92b33f"). InnerVolumeSpecName "kube-api-access-w9cms". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 15 23:47:55.444219 kubelet[3440]: I0115 23:47:55.444184 3440 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d513465d-02c8-4ae8-9542-a66946be2e56-kube-api-access-5k87z" (OuterVolumeSpecName: "kube-api-access-5k87z") pod "d513465d-02c8-4ae8-9542-a66946be2e56" (UID: "d513465d-02c8-4ae8-9542-a66946be2e56"). InnerVolumeSpecName "kube-api-access-5k87z".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 15 23:47:55.445180 kubelet[3440]: I0115 23:47:55.445147 3440 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37aa14db-936c-4a40-928c-4e00cd92b33f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "37aa14db-936c-4a40-928c-4e00cd92b33f" (UID: "37aa14db-936c-4a40-928c-4e00cd92b33f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 15 23:47:55.445336 kubelet[3440]: I0115 23:47:55.445322 3440 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37aa14db-936c-4a40-928c-4e00cd92b33f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "37aa14db-936c-4a40-928c-4e00cd92b33f" (UID: "37aa14db-936c-4a40-928c-4e00cd92b33f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 15 23:47:55.536582 kubelet[3440]: I0115 23:47:55.536441 3440 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-xtables-lock\") on node \"ci-4459.2.2-n-6dfb6e6787\" DevicePath \"\"" Jan 15 23:47:55.536582 kubelet[3440]: I0115 23:47:55.536475 3440 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-host-proc-sys-net\") on node \"ci-4459.2.2-n-6dfb6e6787\" DevicePath \"\"" Jan 15 23:47:55.536582 kubelet[3440]: I0115 23:47:55.536482 3440 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37aa14db-936c-4a40-928c-4e00cd92b33f-cilium-config-path\") on node \"ci-4459.2.2-n-6dfb6e6787\" DevicePath \"\"" Jan 15 23:47:55.536582 kubelet[3440]: I0115 23:47:55.536487 3440 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-cni-path\") on node \"ci-4459.2.2-n-6dfb6e6787\" DevicePath \"\"" Jan 15 23:47:55.536582 kubelet[3440]: I0115 23:47:55.536497 3440 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w9cms\" (UniqueName: \"kubernetes.io/projected/37aa14db-936c-4a40-928c-4e00cd92b33f-kube-api-access-w9cms\") on node \"ci-4459.2.2-n-6dfb6e6787\" DevicePath \"\"" Jan 15 23:47:55.536582 kubelet[3440]: I0115 23:47:55.536505 3440 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-hostproc\") on node \"ci-4459.2.2-n-6dfb6e6787\" DevicePath \"\"" Jan 15 23:47:55.536582 kubelet[3440]: I0115 23:47:55.536510 3440 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-host-proc-sys-kernel\") on node \"ci-4459.2.2-n-6dfb6e6787\" DevicePath \"\"" Jan 15 23:47:55.536582 kubelet[3440]: I0115 23:47:55.536517 3440 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d513465d-02c8-4ae8-9542-a66946be2e56-cilium-config-path\") on node \"ci-4459.2.2-n-6dfb6e6787\" DevicePath \"\"" Jan 15 23:47:55.536863 kubelet[3440]: I0115 23:47:55.536523 3440 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5k87z\" (UniqueName: \"kubernetes.io/projected/d513465d-02c8-4ae8-9542-a66946be2e56-kube-api-access-5k87z\") on node \"ci-4459.2.2-n-6dfb6e6787\" DevicePath \"\"" Jan 15 23:47:55.536863 kubelet[3440]: I0115 23:47:55.536528 3440 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-lib-modules\") on node \"ci-4459.2.2-n-6dfb6e6787\" DevicePath \"\"" Jan 15 23:47:55.536863 kubelet[3440]: I0115 23:47:55.536534 3440 reconciler_common.go:299] "Volume detached for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-bpf-maps\") on node \"ci-4459.2.2-n-6dfb6e6787\" DevicePath \"\"" Jan 15 23:47:55.536863 kubelet[3440]: I0115 23:47:55.536539 3440 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/37aa14db-936c-4a40-928c-4e00cd92b33f-clustermesh-secrets\") on node \"ci-4459.2.2-n-6dfb6e6787\" DevicePath \"\"" Jan 15 23:47:55.536863 kubelet[3440]: I0115 23:47:55.536544 3440 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-etc-cni-netd\") on node \"ci-4459.2.2-n-6dfb6e6787\" DevicePath \"\"" Jan 15 23:47:55.536863 kubelet[3440]: I0115 23:47:55.536550 3440 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-cilium-cgroup\") on node \"ci-4459.2.2-n-6dfb6e6787\" DevicePath \"\"" Jan 15 23:47:55.536863 kubelet[3440]: I0115 23:47:55.536555 3440 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/37aa14db-936c-4a40-928c-4e00cd92b33f-cilium-run\") on node \"ci-4459.2.2-n-6dfb6e6787\" DevicePath \"\"" Jan 15 23:47:55.536863 kubelet[3440]: I0115 23:47:55.536560 3440 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/37aa14db-936c-4a40-928c-4e00cd92b33f-hubble-tls\") on node \"ci-4459.2.2-n-6dfb6e6787\" DevicePath \"\"" Jan 15 23:47:55.948231 kubelet[3440]: I0115 23:47:55.948197 3440 scope.go:117] "RemoveContainer" containerID="2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9" Jan 15 23:47:55.952349 containerd[1903]: time="2026-01-15T23:47:55.952295358Z" level=info msg="RemoveContainer for \"2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9\"" Jan 15 23:47:55.953489 systemd[1]: Removed slice 
kubepods-besteffort-podd513465d_02c8_4ae8_9542_a66946be2e56.slice - libcontainer container kubepods-besteffort-podd513465d_02c8_4ae8_9542_a66946be2e56.slice. Jan 15 23:47:55.961568 systemd[1]: Removed slice kubepods-burstable-pod37aa14db_936c_4a40_928c_4e00cd92b33f.slice - libcontainer container kubepods-burstable-pod37aa14db_936c_4a40_928c_4e00cd92b33f.slice. Jan 15 23:47:55.961722 systemd[1]: kubepods-burstable-pod37aa14db_936c_4a40_928c_4e00cd92b33f.slice: Consumed 4.432s CPU time, 126.1M memory peak, 136K read from disk, 12.9M written to disk. Jan 15 23:47:55.963474 containerd[1903]: time="2026-01-15T23:47:55.963443175Z" level=info msg="RemoveContainer for \"2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9\" returns successfully" Jan 15 23:47:55.963872 kubelet[3440]: I0115 23:47:55.963798 3440 scope.go:117] "RemoveContainer" containerID="2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9" Jan 15 23:47:55.964665 containerd[1903]: time="2026-01-15T23:47:55.964436265Z" level=error msg="ContainerStatus for \"2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9\": not found" Jan 15 23:47:55.965005 kubelet[3440]: E0115 23:47:55.964778 3440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9\": not found" containerID="2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9" Jan 15 23:47:55.965005 kubelet[3440]: I0115 23:47:55.964915 3440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9"} err="failed to get container status 
\"2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"2119775e58938eb2ec0e16eb93e5670810d4ee29eb83f441449051033b3396a9\": not found" Jan 15 23:47:55.965005 kubelet[3440]: I0115 23:47:55.964948 3440 scope.go:117] "RemoveContainer" containerID="7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1" Jan 15 23:47:55.966579 containerd[1903]: time="2026-01-15T23:47:55.966556733Z" level=info msg="RemoveContainer for \"7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1\"" Jan 15 23:47:55.975960 containerd[1903]: time="2026-01-15T23:47:55.975860357Z" level=info msg="RemoveContainer for \"7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1\" returns successfully" Jan 15 23:47:55.976591 kubelet[3440]: I0115 23:47:55.976459 3440 scope.go:117] "RemoveContainer" containerID="fe601936c3f11246267e3a51a52c55104c25d6909f45f88a723b4371f2a0d95b" Jan 15 23:47:55.979815 containerd[1903]: time="2026-01-15T23:47:55.978927626Z" level=info msg="RemoveContainer for \"fe601936c3f11246267e3a51a52c55104c25d6909f45f88a723b4371f2a0d95b\"" Jan 15 23:47:55.987472 containerd[1903]: time="2026-01-15T23:47:55.987429770Z" level=info msg="RemoveContainer for \"fe601936c3f11246267e3a51a52c55104c25d6909f45f88a723b4371f2a0d95b\" returns successfully" Jan 15 23:47:55.987749 kubelet[3440]: I0115 23:47:55.987608 3440 scope.go:117] "RemoveContainer" containerID="486fe47066f127166f6fbdc896331d25206efc6fe52d2326867c415fba728646" Jan 15 23:47:55.989651 containerd[1903]: time="2026-01-15T23:47:55.989513374Z" level=info msg="RemoveContainer for \"486fe47066f127166f6fbdc896331d25206efc6fe52d2326867c415fba728646\"" Jan 15 23:47:55.998492 containerd[1903]: time="2026-01-15T23:47:55.998454531Z" level=info msg="RemoveContainer for \"486fe47066f127166f6fbdc896331d25206efc6fe52d2326867c415fba728646\" returns successfully" Jan 15 23:47:55.998873 kubelet[3440]: I0115 23:47:55.998770 3440 
scope.go:117] "RemoveContainer" containerID="8c95d50088e6cb7b79beeebf1b8fdc592f3bd8ec1322400b195ccddd052201fc" Jan 15 23:47:56.000327 containerd[1903]: time="2026-01-15T23:47:56.000295556Z" level=info msg="RemoveContainer for \"8c95d50088e6cb7b79beeebf1b8fdc592f3bd8ec1322400b195ccddd052201fc\"" Jan 15 23:47:56.008133 containerd[1903]: time="2026-01-15T23:47:56.008067462Z" level=info msg="RemoveContainer for \"8c95d50088e6cb7b79beeebf1b8fdc592f3bd8ec1322400b195ccddd052201fc\" returns successfully" Jan 15 23:47:56.008412 kubelet[3440]: I0115 23:47:56.008377 3440 scope.go:117] "RemoveContainer" containerID="774fb093e064a9fa61a88896a3c4050a956df197f108fe81bcaede29a64dec2e" Jan 15 23:47:56.009713 containerd[1903]: time="2026-01-15T23:47:56.009684213Z" level=info msg="RemoveContainer for \"774fb093e064a9fa61a88896a3c4050a956df197f108fe81bcaede29a64dec2e\"" Jan 15 23:47:56.017610 containerd[1903]: time="2026-01-15T23:47:56.017576328Z" level=info msg="RemoveContainer for \"774fb093e064a9fa61a88896a3c4050a956df197f108fe81bcaede29a64dec2e\" returns successfully" Jan 15 23:47:56.018203 kubelet[3440]: I0115 23:47:56.018176 3440 scope.go:117] "RemoveContainer" containerID="7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1" Jan 15 23:47:56.018462 containerd[1903]: time="2026-01-15T23:47:56.018390119Z" level=error msg="ContainerStatus for \"7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1\": not found" Jan 15 23:47:56.018663 kubelet[3440]: E0115 23:47:56.018642 3440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1\": not found" containerID="7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1" Jan 15 23:47:56.018734 
kubelet[3440]: I0115 23:47:56.018667 3440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1"} err="failed to get container status \"7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1\": rpc error: code = NotFound desc = an error occurred when try to find container \"7298b85c6027d9ecae6df103634939de1cb889d9ba59add6af4abad0286a63b1\": not found" Jan 15 23:47:56.018734 kubelet[3440]: I0115 23:47:56.018686 3440 scope.go:117] "RemoveContainer" containerID="fe601936c3f11246267e3a51a52c55104c25d6909f45f88a723b4371f2a0d95b" Jan 15 23:47:56.018889 containerd[1903]: time="2026-01-15T23:47:56.018865132Z" level=error msg="ContainerStatus for \"fe601936c3f11246267e3a51a52c55104c25d6909f45f88a723b4371f2a0d95b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe601936c3f11246267e3a51a52c55104c25d6909f45f88a723b4371f2a0d95b\": not found" Jan 15 23:47:56.019101 kubelet[3440]: E0115 23:47:56.019082 3440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe601936c3f11246267e3a51a52c55104c25d6909f45f88a723b4371f2a0d95b\": not found" containerID="fe601936c3f11246267e3a51a52c55104c25d6909f45f88a723b4371f2a0d95b" Jan 15 23:47:56.019159 kubelet[3440]: I0115 23:47:56.019105 3440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe601936c3f11246267e3a51a52c55104c25d6909f45f88a723b4371f2a0d95b"} err="failed to get container status \"fe601936c3f11246267e3a51a52c55104c25d6909f45f88a723b4371f2a0d95b\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe601936c3f11246267e3a51a52c55104c25d6909f45f88a723b4371f2a0d95b\": not found" Jan 15 23:47:56.019159 kubelet[3440]: I0115 23:47:56.019116 3440 scope.go:117] "RemoveContainer" 
containerID="486fe47066f127166f6fbdc896331d25206efc6fe52d2326867c415fba728646" Jan 15 23:47:56.019310 containerd[1903]: time="2026-01-15T23:47:56.019285584Z" level=error msg="ContainerStatus for \"486fe47066f127166f6fbdc896331d25206efc6fe52d2326867c415fba728646\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"486fe47066f127166f6fbdc896331d25206efc6fe52d2326867c415fba728646\": not found" Jan 15 23:47:56.019474 kubelet[3440]: E0115 23:47:56.019448 3440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"486fe47066f127166f6fbdc896331d25206efc6fe52d2326867c415fba728646\": not found" containerID="486fe47066f127166f6fbdc896331d25206efc6fe52d2326867c415fba728646" Jan 15 23:47:56.019474 kubelet[3440]: I0115 23:47:56.019469 3440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"486fe47066f127166f6fbdc896331d25206efc6fe52d2326867c415fba728646"} err="failed to get container status \"486fe47066f127166f6fbdc896331d25206efc6fe52d2326867c415fba728646\": rpc error: code = NotFound desc = an error occurred when try to find container \"486fe47066f127166f6fbdc896331d25206efc6fe52d2326867c415fba728646\": not found" Jan 15 23:47:56.019540 kubelet[3440]: I0115 23:47:56.019479 3440 scope.go:117] "RemoveContainer" containerID="8c95d50088e6cb7b79beeebf1b8fdc592f3bd8ec1322400b195ccddd052201fc" Jan 15 23:47:56.019675 containerd[1903]: time="2026-01-15T23:47:56.019617427Z" level=error msg="ContainerStatus for \"8c95d50088e6cb7b79beeebf1b8fdc592f3bd8ec1322400b195ccddd052201fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c95d50088e6cb7b79beeebf1b8fdc592f3bd8ec1322400b195ccddd052201fc\": not found" Jan 15 23:47:56.019779 kubelet[3440]: E0115 23:47:56.019758 3440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"8c95d50088e6cb7b79beeebf1b8fdc592f3bd8ec1322400b195ccddd052201fc\": not found" containerID="8c95d50088e6cb7b79beeebf1b8fdc592f3bd8ec1322400b195ccddd052201fc" Jan 15 23:47:56.019818 kubelet[3440]: I0115 23:47:56.019780 3440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c95d50088e6cb7b79beeebf1b8fdc592f3bd8ec1322400b195ccddd052201fc"} err="failed to get container status \"8c95d50088e6cb7b79beeebf1b8fdc592f3bd8ec1322400b195ccddd052201fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c95d50088e6cb7b79beeebf1b8fdc592f3bd8ec1322400b195ccddd052201fc\": not found" Jan 15 23:47:56.019818 kubelet[3440]: I0115 23:47:56.019792 3440 scope.go:117] "RemoveContainer" containerID="774fb093e064a9fa61a88896a3c4050a956df197f108fe81bcaede29a64dec2e" Jan 15 23:47:56.020016 containerd[1903]: time="2026-01-15T23:47:56.019938718Z" level=error msg="ContainerStatus for \"774fb093e064a9fa61a88896a3c4050a956df197f108fe81bcaede29a64dec2e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"774fb093e064a9fa61a88896a3c4050a956df197f108fe81bcaede29a64dec2e\": not found" Jan 15 23:47:56.020067 kubelet[3440]: E0115 23:47:56.020019 3440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"774fb093e064a9fa61a88896a3c4050a956df197f108fe81bcaede29a64dec2e\": not found" containerID="774fb093e064a9fa61a88896a3c4050a956df197f108fe81bcaede29a64dec2e" Jan 15 23:47:56.020067 kubelet[3440]: I0115 23:47:56.020032 3440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"774fb093e064a9fa61a88896a3c4050a956df197f108fe81bcaede29a64dec2e"} err="failed to get container status \"774fb093e064a9fa61a88896a3c4050a956df197f108fe81bcaede29a64dec2e\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"774fb093e064a9fa61a88896a3c4050a956df197f108fe81bcaede29a64dec2e\": not found" Jan 15 23:47:56.154276 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e607d402df4ab9b0953730ce03082364fca56d1fbadcdee882d30adbcceb2ecc-shm.mount: Deactivated successfully. Jan 15 23:47:56.154367 systemd[1]: var-lib-kubelet-pods-d513465d\x2d02c8\x2d4ae8\x2d9542\x2da66946be2e56-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5k87z.mount: Deactivated successfully. Jan 15 23:47:56.154412 systemd[1]: var-lib-kubelet-pods-37aa14db\x2d936c\x2d4a40\x2d928c\x2d4e00cd92b33f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw9cms.mount: Deactivated successfully. Jan 15 23:47:56.154448 systemd[1]: var-lib-kubelet-pods-37aa14db\x2d936c\x2d4a40\x2d928c\x2d4e00cd92b33f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 15 23:47:56.154492 systemd[1]: var-lib-kubelet-pods-37aa14db\x2d936c\x2d4a40\x2d928c\x2d4e00cd92b33f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 15 23:47:56.638671 kubelet[3440]: I0115 23:47:56.638325 3440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37aa14db-936c-4a40-928c-4e00cd92b33f" path="/var/lib/kubelet/pods/37aa14db-936c-4a40-928c-4e00cd92b33f/volumes" Jan 15 23:47:56.639036 kubelet[3440]: I0115 23:47:56.638930 3440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d513465d-02c8-4ae8-9542-a66946be2e56" path="/var/lib/kubelet/pods/d513465d-02c8-4ae8-9542-a66946be2e56/volumes" Jan 15 23:47:56.730108 kubelet[3440]: E0115 23:47:56.730073 3440 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 15 23:47:57.124745 sshd[4977]: Connection closed by 10.200.16.10 port 56790 Jan 15 23:47:57.125446 sshd-session[4974]: pam_unix(sshd:session): session closed for user core Jan 15 23:47:57.129351 systemd[1]: sshd@20-10.200.20.15:22-10.200.16.10:56790.service: Deactivated successfully. Jan 15 23:47:57.131302 systemd[1]: session-23.scope: Deactivated successfully. Jan 15 23:47:57.132095 systemd-logind[1880]: Session 23 logged out. Waiting for processes to exit. Jan 15 23:47:57.133277 systemd-logind[1880]: Removed session 23. Jan 15 23:47:57.192705 systemd[1]: Started sshd@21-10.200.20.15:22-10.200.16.10:56806.service - OpenSSH per-connection server daemon (10.200.16.10:56806). Jan 15 23:47:57.608926 sshd[5123]: Accepted publickey for core from 10.200.16.10 port 56806 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:47:57.610171 sshd-session[5123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:47:57.615151 systemd-logind[1880]: New session 24 of user core. Jan 15 23:47:57.621787 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 15 23:47:58.484753 systemd[1]: Created slice kubepods-burstable-pod949704ff_e1c2_48e4_9704_661ee50cfb04.slice - libcontainer container kubepods-burstable-pod949704ff_e1c2_48e4_9704_661ee50cfb04.slice. Jan 15 23:47:58.487105 sshd[5126]: Connection closed by 10.200.16.10 port 56806 Jan 15 23:47:58.487645 sshd-session[5123]: pam_unix(sshd:session): session closed for user core Jan 15 23:47:58.491439 systemd[1]: sshd@21-10.200.20.15:22-10.200.16.10:56806.service: Deactivated successfully. Jan 15 23:47:58.494212 systemd[1]: session-24.scope: Deactivated successfully. Jan 15 23:47:58.495105 systemd-logind[1880]: Session 24 logged out. Waiting for processes to exit. Jan 15 23:47:58.496847 systemd-logind[1880]: Removed session 24. Jan 15 23:47:58.551277 kubelet[3440]: I0115 23:47:58.551233 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/949704ff-e1c2-48e4-9704-661ee50cfb04-cni-path\") pod \"cilium-8zm5l\" (UID: \"949704ff-e1c2-48e4-9704-661ee50cfb04\") " pod="kube-system/cilium-8zm5l" Jan 15 23:47:58.552002 kubelet[3440]: I0115 23:47:58.551592 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/949704ff-e1c2-48e4-9704-661ee50cfb04-etc-cni-netd\") pod \"cilium-8zm5l\" (UID: \"949704ff-e1c2-48e4-9704-661ee50cfb04\") " pod="kube-system/cilium-8zm5l" Jan 15 23:47:58.552002 kubelet[3440]: I0115 23:47:58.551613 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/949704ff-e1c2-48e4-9704-661ee50cfb04-lib-modules\") pod \"cilium-8zm5l\" (UID: \"949704ff-e1c2-48e4-9704-661ee50cfb04\") " pod="kube-system/cilium-8zm5l" Jan 15 23:47:58.552002 kubelet[3440]: I0115 23:47:58.551645 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/949704ff-e1c2-48e4-9704-661ee50cfb04-clustermesh-secrets\") pod \"cilium-8zm5l\" (UID: \"949704ff-e1c2-48e4-9704-661ee50cfb04\") " pod="kube-system/cilium-8zm5l" Jan 15 23:47:58.552002 kubelet[3440]: I0115 23:47:58.551659 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/949704ff-e1c2-48e4-9704-661ee50cfb04-cilium-run\") pod \"cilium-8zm5l\" (UID: \"949704ff-e1c2-48e4-9704-661ee50cfb04\") " pod="kube-system/cilium-8zm5l" Jan 15 23:47:58.552002 kubelet[3440]: I0115 23:47:58.551668 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/949704ff-e1c2-48e4-9704-661ee50cfb04-xtables-lock\") pod \"cilium-8zm5l\" (UID: \"949704ff-e1c2-48e4-9704-661ee50cfb04\") " pod="kube-system/cilium-8zm5l" Jan 15 23:47:58.552002 kubelet[3440]: I0115 23:47:58.551680 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/949704ff-e1c2-48e4-9704-661ee50cfb04-cilium-ipsec-secrets\") pod \"cilium-8zm5l\" (UID: \"949704ff-e1c2-48e4-9704-661ee50cfb04\") " pod="kube-system/cilium-8zm5l" Jan 15 23:47:58.552205 kubelet[3440]: I0115 23:47:58.551690 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/949704ff-e1c2-48e4-9704-661ee50cfb04-host-proc-sys-net\") pod \"cilium-8zm5l\" (UID: \"949704ff-e1c2-48e4-9704-661ee50cfb04\") " pod="kube-system/cilium-8zm5l" Jan 15 23:47:58.552205 kubelet[3440]: I0115 23:47:58.551699 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/949704ff-e1c2-48e4-9704-661ee50cfb04-host-proc-sys-kernel\") pod \"cilium-8zm5l\" (UID: \"949704ff-e1c2-48e4-9704-661ee50cfb04\") " pod="kube-system/cilium-8zm5l" Jan 15 23:47:58.552205 kubelet[3440]: I0115 23:47:58.551722 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqtfw\" (UniqueName: \"kubernetes.io/projected/949704ff-e1c2-48e4-9704-661ee50cfb04-kube-api-access-fqtfw\") pod \"cilium-8zm5l\" (UID: \"949704ff-e1c2-48e4-9704-661ee50cfb04\") " pod="kube-system/cilium-8zm5l" Jan 15 23:47:58.552205 kubelet[3440]: I0115 23:47:58.551786 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/949704ff-e1c2-48e4-9704-661ee50cfb04-cilium-config-path\") pod \"cilium-8zm5l\" (UID: \"949704ff-e1c2-48e4-9704-661ee50cfb04\") " pod="kube-system/cilium-8zm5l" Jan 15 23:47:58.552205 kubelet[3440]: I0115 23:47:58.551832 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/949704ff-e1c2-48e4-9704-661ee50cfb04-bpf-maps\") pod \"cilium-8zm5l\" (UID: \"949704ff-e1c2-48e4-9704-661ee50cfb04\") " pod="kube-system/cilium-8zm5l" Jan 15 23:47:58.552277 kubelet[3440]: I0115 23:47:58.551848 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/949704ff-e1c2-48e4-9704-661ee50cfb04-hostproc\") pod \"cilium-8zm5l\" (UID: \"949704ff-e1c2-48e4-9704-661ee50cfb04\") " pod="kube-system/cilium-8zm5l" Jan 15 23:47:58.552277 kubelet[3440]: I0115 23:47:58.551860 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/949704ff-e1c2-48e4-9704-661ee50cfb04-cilium-cgroup\") pod \"cilium-8zm5l\" (UID: 
\"949704ff-e1c2-48e4-9704-661ee50cfb04\") " pod="kube-system/cilium-8zm5l" Jan 15 23:47:58.552277 kubelet[3440]: I0115 23:47:58.551879 3440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/949704ff-e1c2-48e4-9704-661ee50cfb04-hubble-tls\") pod \"cilium-8zm5l\" (UID: \"949704ff-e1c2-48e4-9704-661ee50cfb04\") " pod="kube-system/cilium-8zm5l" Jan 15 23:47:58.575870 systemd[1]: Started sshd@22-10.200.20.15:22-10.200.16.10:56814.service - OpenSSH per-connection server daemon (10.200.16.10:56814). Jan 15 23:47:58.790181 containerd[1903]: time="2026-01-15T23:47:58.790060526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8zm5l,Uid:949704ff-e1c2-48e4-9704-661ee50cfb04,Namespace:kube-system,Attempt:0,}" Jan 15 23:47:58.826511 containerd[1903]: time="2026-01-15T23:47:58.826471768Z" level=info msg="connecting to shim b912617e80996cb7a485dd286bb9e81b842401fad506588abf59102b6750fbb5" address="unix:///run/containerd/s/6078a9b7eb2c7ed12de4a34a2bc53aaf081ce2f56d061c9872954bc05f691896" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:47:58.846787 systemd[1]: Started cri-containerd-b912617e80996cb7a485dd286bb9e81b842401fad506588abf59102b6750fbb5.scope - libcontainer container b912617e80996cb7a485dd286bb9e81b842401fad506588abf59102b6750fbb5. 
Jan 15 23:47:58.870988 containerd[1903]: time="2026-01-15T23:47:58.870947674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8zm5l,Uid:949704ff-e1c2-48e4-9704-661ee50cfb04,Namespace:kube-system,Attempt:0,} returns sandbox id \"b912617e80996cb7a485dd286bb9e81b842401fad506588abf59102b6750fbb5\"" Jan 15 23:47:58.880814 containerd[1903]: time="2026-01-15T23:47:58.880773121Z" level=info msg="CreateContainer within sandbox \"b912617e80996cb7a485dd286bb9e81b842401fad506588abf59102b6750fbb5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 15 23:47:58.895670 containerd[1903]: time="2026-01-15T23:47:58.895260724Z" level=info msg="Container e6a2ed700a019b7aecc8f233ee5e5f6071ef37a52c9b2911c4fa429ecdd81b1c: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:47:58.908842 containerd[1903]: time="2026-01-15T23:47:58.908797947Z" level=info msg="CreateContainer within sandbox \"b912617e80996cb7a485dd286bb9e81b842401fad506588abf59102b6750fbb5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e6a2ed700a019b7aecc8f233ee5e5f6071ef37a52c9b2911c4fa429ecdd81b1c\"" Jan 15 23:47:58.909791 containerd[1903]: time="2026-01-15T23:47:58.909762288Z" level=info msg="StartContainer for \"e6a2ed700a019b7aecc8f233ee5e5f6071ef37a52c9b2911c4fa429ecdd81b1c\"" Jan 15 23:47:58.910450 containerd[1903]: time="2026-01-15T23:47:58.910421183Z" level=info msg="connecting to shim e6a2ed700a019b7aecc8f233ee5e5f6071ef37a52c9b2911c4fa429ecdd81b1c" address="unix:///run/containerd/s/6078a9b7eb2c7ed12de4a34a2bc53aaf081ce2f56d061c9872954bc05f691896" protocol=ttrpc version=3 Jan 15 23:47:58.940807 systemd[1]: Started cri-containerd-e6a2ed700a019b7aecc8f233ee5e5f6071ef37a52c9b2911c4fa429ecdd81b1c.scope - libcontainer container e6a2ed700a019b7aecc8f233ee5e5f6071ef37a52c9b2911c4fa429ecdd81b1c. 
Jan 15 23:47:58.969349 containerd[1903]: time="2026-01-15T23:47:58.969314096Z" level=info msg="StartContainer for \"e6a2ed700a019b7aecc8f233ee5e5f6071ef37a52c9b2911c4fa429ecdd81b1c\" returns successfully"
Jan 15 23:47:58.972586 systemd[1]: cri-containerd-e6a2ed700a019b7aecc8f233ee5e5f6071ef37a52c9b2911c4fa429ecdd81b1c.scope: Deactivated successfully.
Jan 15 23:47:58.977512 containerd[1903]: time="2026-01-15T23:47:58.977462670Z" level=info msg="received container exit event container_id:\"e6a2ed700a019b7aecc8f233ee5e5f6071ef37a52c9b2911c4fa429ecdd81b1c\" id:\"e6a2ed700a019b7aecc8f233ee5e5f6071ef37a52c9b2911c4fa429ecdd81b1c\" pid:5201 exited_at:{seconds:1768520878 nanos:977142375}"
Jan 15 23:47:59.041934 sshd[5136]: Accepted publickey for core from 10.200.16.10 port 56814 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:47:59.043067 sshd-session[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:47:59.046900 systemd-logind[1880]: New session 25 of user core.
Jan 15 23:47:59.053764 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 15 23:47:59.368588 sshd[5232]: Connection closed by 10.200.16.10 port 56814
Jan 15 23:47:59.369213 sshd-session[5136]: pam_unix(sshd:session): session closed for user core
Jan 15 23:47:59.372400 systemd[1]: sshd@22-10.200.20.15:22-10.200.16.10:56814.service: Deactivated successfully.
Jan 15 23:47:59.374772 systemd[1]: session-25.scope: Deactivated successfully.
Jan 15 23:47:59.375892 systemd-logind[1880]: Session 25 logged out. Waiting for processes to exit.
Jan 15 23:47:59.377943 systemd-logind[1880]: Removed session 25.
Jan 15 23:47:59.439835 systemd[1]: Started sshd@23-10.200.20.15:22-10.200.16.10:56828.service - OpenSSH per-connection server daemon (10.200.16.10:56828).
Jan 15 23:47:59.853426 sshd[5239]: Accepted publickey for core from 10.200.16.10 port 56828 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:47:59.854990 sshd-session[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:47:59.860216 systemd-logind[1880]: New session 26 of user core.
Jan 15 23:47:59.864757 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 15 23:47:59.986759 containerd[1903]: time="2026-01-15T23:47:59.986353680Z" level=info msg="CreateContainer within sandbox \"b912617e80996cb7a485dd286bb9e81b842401fad506588abf59102b6750fbb5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 15 23:48:00.004523 containerd[1903]: time="2026-01-15T23:48:00.004063413Z" level=info msg="Container 32f1339a0d2bd1da3badfd9b5663c97bef73b4c39d042192f34e74fbc4d22e19: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:48:00.006208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1910693504.mount: Deactivated successfully.
Jan 15 23:48:00.019531 containerd[1903]: time="2026-01-15T23:48:00.019488896Z" level=info msg="CreateContainer within sandbox \"b912617e80996cb7a485dd286bb9e81b842401fad506588abf59102b6750fbb5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"32f1339a0d2bd1da3badfd9b5663c97bef73b4c39d042192f34e74fbc4d22e19\""
Jan 15 23:48:00.020532 containerd[1903]: time="2026-01-15T23:48:00.020299532Z" level=info msg="StartContainer for \"32f1339a0d2bd1da3badfd9b5663c97bef73b4c39d042192f34e74fbc4d22e19\""
Jan 15 23:48:00.021271 containerd[1903]: time="2026-01-15T23:48:00.021249395Z" level=info msg="connecting to shim 32f1339a0d2bd1da3badfd9b5663c97bef73b4c39d042192f34e74fbc4d22e19" address="unix:///run/containerd/s/6078a9b7eb2c7ed12de4a34a2bc53aaf081ce2f56d061c9872954bc05f691896" protocol=ttrpc version=3
Jan 15 23:48:00.037778 systemd[1]: Started cri-containerd-32f1339a0d2bd1da3badfd9b5663c97bef73b4c39d042192f34e74fbc4d22e19.scope - libcontainer container 32f1339a0d2bd1da3badfd9b5663c97bef73b4c39d042192f34e74fbc4d22e19.
Jan 15 23:48:00.066862 containerd[1903]: time="2026-01-15T23:48:00.066809944Z" level=info msg="StartContainer for \"32f1339a0d2bd1da3badfd9b5663c97bef73b4c39d042192f34e74fbc4d22e19\" returns successfully"
Jan 15 23:48:00.069400 systemd[1]: cri-containerd-32f1339a0d2bd1da3badfd9b5663c97bef73b4c39d042192f34e74fbc4d22e19.scope: Deactivated successfully.
Jan 15 23:48:00.070349 containerd[1903]: time="2026-01-15T23:48:00.070212580Z" level=info msg="received container exit event container_id:\"32f1339a0d2bd1da3badfd9b5663c97bef73b4c39d042192f34e74fbc4d22e19\" id:\"32f1339a0d2bd1da3badfd9b5663c97bef73b4c39d042192f34e74fbc4d22e19\" pid:5257 exited_at:{seconds:1768520880 nanos:69938560}"
Jan 15 23:48:00.088928 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32f1339a0d2bd1da3badfd9b5663c97bef73b4c39d042192f34e74fbc4d22e19-rootfs.mount: Deactivated successfully.
Jan 15 23:48:00.537827 kubelet[3440]: I0115 23:48:00.537771 3440 setters.go:618] "Node became not ready" node="ci-4459.2.2-n-6dfb6e6787" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-15T23:48:00Z","lastTransitionTime":"2026-01-15T23:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 15 23:48:00.987782 containerd[1903]: time="2026-01-15T23:48:00.987723005Z" level=info msg="CreateContainer within sandbox \"b912617e80996cb7a485dd286bb9e81b842401fad506588abf59102b6750fbb5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 15 23:48:01.006523 containerd[1903]: time="2026-01-15T23:48:01.005610973Z" level=info msg="Container ca08fcebbad74342689ab0336028d3bde450f94040942a866c973a22b5d1a4f1: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:48:01.021345 containerd[1903]: time="2026-01-15T23:48:01.021292812Z" level=info msg="CreateContainer within sandbox \"b912617e80996cb7a485dd286bb9e81b842401fad506588abf59102b6750fbb5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ca08fcebbad74342689ab0336028d3bde450f94040942a866c973a22b5d1a4f1\""
Jan 15 23:48:01.022730 containerd[1903]: time="2026-01-15T23:48:01.022181937Z" level=info msg="StartContainer for \"ca08fcebbad74342689ab0336028d3bde450f94040942a866c973a22b5d1a4f1\""
Jan 15 23:48:01.024338 containerd[1903]: time="2026-01-15T23:48:01.024306530Z" level=info msg="connecting to shim ca08fcebbad74342689ab0336028d3bde450f94040942a866c973a22b5d1a4f1" address="unix:///run/containerd/s/6078a9b7eb2c7ed12de4a34a2bc53aaf081ce2f56d061c9872954bc05f691896" protocol=ttrpc version=3
Jan 15 23:48:01.043784 systemd[1]: Started cri-containerd-ca08fcebbad74342689ab0336028d3bde450f94040942a866c973a22b5d1a4f1.scope - libcontainer container ca08fcebbad74342689ab0336028d3bde450f94040942a866c973a22b5d1a4f1.
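The "Node became not ready" entry above embeds the node's Ready condition as a JSON object. A minimal decoding sketch (Python stdlib only; the payload is copied verbatim from that log line, and the readiness check is an illustrative assumption, not kubelet code):

```python
import json

# Condition payload copied verbatim from the kubelet "Node became not ready" entry.
condition = json.loads(
    '{"type":"Ready","status":"False",'
    '"lastHeartbeatTime":"2026-01-15T23:48:00Z",'
    '"lastTransitionTime":"2026-01-15T23:48:00Z",'
    '"reason":"KubeletNotReady",'
    '"message":"container runtime network not ready: NetworkReady=false '
    'reason:NetworkPluginNotReady message:Network plugin returns error: '
    'cni plugin not initialized"}'
)

# A node is considered Ready only when the Ready condition's status is "True";
# here the CNI plugin is not yet initialized, so status is "False".
ready = condition["type"] == "Ready" and condition["status"] == "True"
```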
Jan 15 23:48:01.100796 systemd[1]: cri-containerd-ca08fcebbad74342689ab0336028d3bde450f94040942a866c973a22b5d1a4f1.scope: Deactivated successfully.
Jan 15 23:48:01.104263 containerd[1903]: time="2026-01-15T23:48:01.104180889Z" level=info msg="received container exit event container_id:\"ca08fcebbad74342689ab0336028d3bde450f94040942a866c973a22b5d1a4f1\" id:\"ca08fcebbad74342689ab0336028d3bde450f94040942a866c973a22b5d1a4f1\" pid:5306 exited_at:{seconds:1768520881 nanos:103154146}"
Jan 15 23:48:01.105709 containerd[1903]: time="2026-01-15T23:48:01.105684000Z" level=info msg="StartContainer for \"ca08fcebbad74342689ab0336028d3bde450f94040942a866c973a22b5d1a4f1\" returns successfully"
Jan 15 23:48:01.122399 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca08fcebbad74342689ab0336028d3bde450f94040942a866c973a22b5d1a4f1-rootfs.mount: Deactivated successfully.
Jan 15 23:48:01.732155 kubelet[3440]: E0115 23:48:01.732112 3440 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 15 23:48:01.993712 containerd[1903]: time="2026-01-15T23:48:01.992613457Z" level=info msg="CreateContainer within sandbox \"b912617e80996cb7a485dd286bb9e81b842401fad506588abf59102b6750fbb5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 15 23:48:02.012471 containerd[1903]: time="2026-01-15T23:48:02.012370590Z" level=info msg="Container 68a4c5950656271b4699a55dc336c0f2c943f3659719902467825f4bb05f410d: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:48:02.027417 containerd[1903]: time="2026-01-15T23:48:02.027378346Z" level=info msg="CreateContainer within sandbox \"b912617e80996cb7a485dd286bb9e81b842401fad506588abf59102b6750fbb5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"68a4c5950656271b4699a55dc336c0f2c943f3659719902467825f4bb05f410d\""
Jan 15 23:48:02.028116 containerd[1903]: time="2026-01-15T23:48:02.028093853Z" level=info msg="StartContainer for \"68a4c5950656271b4699a55dc336c0f2c943f3659719902467825f4bb05f410d\""
Jan 15 23:48:02.029893 containerd[1903]: time="2026-01-15T23:48:02.029782791Z" level=info msg="connecting to shim 68a4c5950656271b4699a55dc336c0f2c943f3659719902467825f4bb05f410d" address="unix:///run/containerd/s/6078a9b7eb2c7ed12de4a34a2bc53aaf081ce2f56d061c9872954bc05f691896" protocol=ttrpc version=3
Jan 15 23:48:02.045759 systemd[1]: Started cri-containerd-68a4c5950656271b4699a55dc336c0f2c943f3659719902467825f4bb05f410d.scope - libcontainer container 68a4c5950656271b4699a55dc336c0f2c943f3659719902467825f4bb05f410d.
Jan 15 23:48:02.068056 systemd[1]: cri-containerd-68a4c5950656271b4699a55dc336c0f2c943f3659719902467825f4bb05f410d.scope: Deactivated successfully.
Jan 15 23:48:02.074107 containerd[1903]: time="2026-01-15T23:48:02.073915454Z" level=info msg="received container exit event container_id:\"68a4c5950656271b4699a55dc336c0f2c943f3659719902467825f4bb05f410d\" id:\"68a4c5950656271b4699a55dc336c0f2c943f3659719902467825f4bb05f410d\" pid:5347 exited_at:{seconds:1768520882 nanos:72127627}"
Jan 15 23:48:02.080078 containerd[1903]: time="2026-01-15T23:48:02.080038419Z" level=info msg="StartContainer for \"68a4c5950656271b4699a55dc336c0f2c943f3659719902467825f4bb05f410d\" returns successfully"
Jan 15 23:48:02.090420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68a4c5950656271b4699a55dc336c0f2c943f3659719902467825f4bb05f410d-rootfs.mount: Deactivated successfully.
Jan 15 23:48:02.998541 containerd[1903]: time="2026-01-15T23:48:02.998498212Z" level=info msg="CreateContainer within sandbox \"b912617e80996cb7a485dd286bb9e81b842401fad506588abf59102b6750fbb5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 15 23:48:03.017812 containerd[1903]: time="2026-01-15T23:48:03.017216896Z" level=info msg="Container 55d465fbd6640b6045193d2ef44d9bd76ba3d51250486074a896d81fc69c8460: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:48:03.033248 containerd[1903]: time="2026-01-15T23:48:03.033208876Z" level=info msg="CreateContainer within sandbox \"b912617e80996cb7a485dd286bb9e81b842401fad506588abf59102b6750fbb5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"55d465fbd6640b6045193d2ef44d9bd76ba3d51250486074a896d81fc69c8460\""
Jan 15 23:48:03.034825 containerd[1903]: time="2026-01-15T23:48:03.034787228Z" level=info msg="StartContainer for \"55d465fbd6640b6045193d2ef44d9bd76ba3d51250486074a896d81fc69c8460\""
Jan 15 23:48:03.035605 containerd[1903]: time="2026-01-15T23:48:03.035539783Z" level=info msg="connecting to shim 55d465fbd6640b6045193d2ef44d9bd76ba3d51250486074a896d81fc69c8460" address="unix:///run/containerd/s/6078a9b7eb2c7ed12de4a34a2bc53aaf081ce2f56d061c9872954bc05f691896" protocol=ttrpc version=3
Jan 15 23:48:03.057799 systemd[1]: Started cri-containerd-55d465fbd6640b6045193d2ef44d9bd76ba3d51250486074a896d81fc69c8460.scope - libcontainer container 55d465fbd6640b6045193d2ef44d9bd76ba3d51250486074a896d81fc69c8460.
Jan 15 23:48:03.101930 containerd[1903]: time="2026-01-15T23:48:03.101860480Z" level=info msg="StartContainer for \"55d465fbd6640b6045193d2ef44d9bd76ba3d51250486074a896d81fc69c8460\" returns successfully"
Jan 15 23:48:03.379670 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 15 23:48:04.021740 kubelet[3440]: I0115 23:48:04.021551 3440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8zm5l" podStartSLOduration=6.021537227 podStartE2EDuration="6.021537227s" podCreationTimestamp="2026-01-15 23:47:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:48:04.021237134 +0000 UTC m=+157.448345309" watchObservedRunningTime="2026-01-15 23:48:04.021537227 +0000 UTC m=+157.448645402"
Jan 15 23:48:05.771103 systemd-networkd[1483]: lxc_health: Link UP
Jan 15 23:48:05.772683 systemd-networkd[1483]: lxc_health: Gained carrier
Jan 15 23:48:07.249851 systemd-networkd[1483]: lxc_health: Gained IPv6LL
Jan 15 23:48:12.689695 sshd[5244]: Connection closed by 10.200.16.10 port 56828
Jan 15 23:48:12.690824 sshd-session[5239]: pam_unix(sshd:session): session closed for user core
Jan 15 23:48:12.694089 systemd[1]: sshd@23-10.200.20.15:22-10.200.16.10:56828.service: Deactivated successfully.
Jan 15 23:48:12.695990 systemd[1]: session-26.scope: Deactivated successfully.
Jan 15 23:48:12.697602 systemd-logind[1880]: Session 26 logged out. Waiting for processes to exit.
Jan 15 23:48:12.698790 systemd-logind[1880]: Removed session 26.
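Every entry in this journal stream carries a `MMM DD HH:MM:SS.ffffff unit[pid]: message` prefix (the `[pid]` part is absent on `kernel:` lines). A minimal parsing sketch, assuming only that prefix shape; the function name and return tuple are illustrative, not part of any journald API:

```python
import re

# journald console-style prefix as seen in this excerpt, e.g.
#   "Jan 15 23:47:58.552277 kubelet[3440]: <message>"
# The [pid] group is optional because "kernel:" lines have no pid.
ENTRY_RE = re.compile(
    r'^(?P<ts>[A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2}\.\d+) '
    r'(?P<unit>[\w@.-]+)(?:\[(?P<pid>\d+)\])?: (?P<msg>.*)$'
)

def parse_entry(line: str):
    """Split one journal line into (timestamp, unit, pid, message); None if it doesn't match."""
    m = ENTRY_RE.match(line)
    if m is None:
        return None
    pid = int(m.group('pid')) if m.group('pid') else None
    return m.group('ts'), m.group('unit'), pid, m.group('msg')
```

Grouping the parsed tuples by `unit` makes it easy to follow one actor at a time, e.g. the containerd[1903] lifecycle events for each Cilium init container.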