Jan 20 01:18:24.059494 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Jan 20 01:18:24.059511 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Mon Jan 19 22:57:34 -00 2026
Jan 20 01:18:24.059518 kernel: KASLR enabled
Jan 20 01:18:24.059522 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 20 01:18:24.059525 kernel: printk: legacy bootconsole [pl11] enabled
Jan 20 01:18:24.059530 kernel: efi: EFI v2.7 by EDK II
Jan 20 01:18:24.059535 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89c018 RNG=0x3f979998 MEMRESERVE=0x3db83598
Jan 20 01:18:24.059539 kernel: random: crng init done
Jan 20 01:18:24.059543 kernel: secureboot: Secure boot disabled
Jan 20 01:18:24.059547 kernel: ACPI: Early table checksum verification disabled
Jan 20 01:18:24.059551 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Jan 20 01:18:24.059555 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:18:24.059559 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:18:24.059563 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 20 01:18:24.059569 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:18:24.059573 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:18:24.059577 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:18:24.059581 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:18:24.059585 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:18:24.059590 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:18:24.059594 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 20 01:18:24.059599 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:18:24.059603 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 20 01:18:24.059607 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 20 01:18:24.059611 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 20 01:18:24.059615 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Jan 20 01:18:24.059619 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Jan 20 01:18:24.059623 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 20 01:18:24.059628 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 20 01:18:24.059632 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 20 01:18:24.059637 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 20 01:18:24.059641 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 20 01:18:24.059645 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 20 01:18:24.059649 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 20 01:18:24.059653 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 20 01:18:24.059657 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 20 01:18:24.059662 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Jan 20 01:18:24.059666 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Jan 20 01:18:24.059670 kernel: Zone ranges:
Jan 20 01:18:24.059674 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 20 01:18:24.059681 kernel: DMA32 empty
Jan 20 01:18:24.059685 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 20 01:18:24.059690 kernel: Device empty
Jan 20 01:18:24.059694 kernel: Movable zone start for each node
Jan 20 01:18:24.059698 kernel: Early memory node ranges
Jan 20 01:18:24.059702 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 20 01:18:24.059708 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Jan 20 01:18:24.059712 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Jan 20 01:18:24.059716 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Jan 20 01:18:24.059721 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Jan 20 01:18:24.059725 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Jan 20 01:18:24.059729 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 20 01:18:24.059734 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 20 01:18:24.059738 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 20 01:18:24.059742 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Jan 20 01:18:24.059747 kernel: psci: probing for conduit method from ACPI.
Jan 20 01:18:24.059751 kernel: psci: PSCIv1.3 detected in firmware.
Jan 20 01:18:24.059755 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 20 01:18:24.059760 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 20 01:18:24.059765 kernel: psci: SMC Calling Convention v1.4
Jan 20 01:18:24.059769 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 20 01:18:24.059773 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 20 01:18:24.059778 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 20 01:18:24.059782 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 20 01:18:24.059787 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 20 01:18:24.059791 kernel: Detected PIPT I-cache on CPU0
Jan 20 01:18:24.059795 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Jan 20 01:18:24.059800 kernel: CPU features: detected: GIC system register CPU interface
Jan 20 01:18:24.059804 kernel: CPU features: detected: Spectre-v4
Jan 20 01:18:24.059808 kernel: CPU features: detected: Spectre-BHB
Jan 20 01:18:24.059813 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 20 01:18:24.059818 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 20 01:18:24.059822 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Jan 20 01:18:24.059827 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 20 01:18:24.059831 kernel: alternatives: applying boot alternatives
Jan 20 01:18:24.059836 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=3825f93c5ac04d887cdff1d17f655741a9a0c1b2ce2432debff700fb0368bb09
Jan 20 01:18:24.059841 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 01:18:24.059845 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 01:18:24.059849 kernel: Fallback order for Node 0: 0
Jan 20 01:18:24.059854 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Jan 20 01:18:24.059859 kernel: Policy zone: Normal
Jan 20 01:18:24.059863 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 01:18:24.059868 kernel: software IO TLB: area num 2.
Jan 20 01:18:24.059872 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB)
Jan 20 01:18:24.059876 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 20 01:18:24.059881 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 01:18:24.059886 kernel: rcu: RCU event tracing is enabled.
Jan 20 01:18:24.059890 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 20 01:18:24.059894 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 01:18:24.059899 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 01:18:24.059903 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 01:18:24.059908 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 20 01:18:24.059913 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 20 01:18:24.059917 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 20 01:18:24.059922 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 20 01:18:24.059926 kernel: GICv3: 960 SPIs implemented
Jan 20 01:18:24.059930 kernel: GICv3: 0 Extended SPIs implemented
Jan 20 01:18:24.059935 kernel: Root IRQ handler: gic_handle_irq
Jan 20 01:18:24.059939 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jan 20 01:18:24.059943 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Jan 20 01:18:24.059948 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 20 01:18:24.059952 kernel: ITS: No ITS available, not enabling LPIs
Jan 20 01:18:24.059957 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 01:18:24.059962 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Jan 20 01:18:24.059966 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 01:18:24.059971 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Jan 20 01:18:24.059975 kernel: Console: colour dummy device 80x25
Jan 20 01:18:24.059980 kernel: printk: legacy console [tty1] enabled
Jan 20 01:18:24.059984 kernel: ACPI: Core revision 20240827
Jan 20 01:18:24.059989 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Jan 20 01:18:24.059994 kernel: pid_max: default: 32768 minimum: 301
Jan 20 01:18:24.059998 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 20 01:18:24.060002 kernel: landlock: Up and running.
Jan 20 01:18:24.060008 kernel: SELinux: Initializing.
Jan 20 01:18:24.060012 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 01:18:24.060017 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 01:18:24.060022 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Jan 20 01:18:24.060026 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0
Jan 20 01:18:24.060034 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 20 01:18:24.060039 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 01:18:24.060044 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 01:18:24.060049 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 20 01:18:24.060054 kernel: Remapping and enabling EFI services.
Jan 20 01:18:24.060058 kernel: smp: Bringing up secondary CPUs ...
Jan 20 01:18:24.060063 kernel: Detected PIPT I-cache on CPU1
Jan 20 01:18:24.060069 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 20 01:18:24.060073 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Jan 20 01:18:24.060078 kernel: smp: Brought up 1 node, 2 CPUs
Jan 20 01:18:24.060083 kernel: SMP: Total of 2 processors activated.
Jan 20 01:18:24.060087 kernel: CPU: All CPU(s) started at EL1
Jan 20 01:18:24.060093 kernel: CPU features: detected: 32-bit EL0 Support
Jan 20 01:18:24.060098 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 20 01:18:24.060102 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 20 01:18:24.060107 kernel: CPU features: detected: Common not Private translations
Jan 20 01:18:24.060112 kernel: CPU features: detected: CRC32 instructions
Jan 20 01:18:24.060117 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Jan 20 01:18:24.060121 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 20 01:18:24.060126 kernel: CPU features: detected: LSE atomic instructions
Jan 20 01:18:24.060131 kernel: CPU features: detected: Privileged Access Never
Jan 20 01:18:24.060136 kernel: CPU features: detected: Speculation barrier (SB)
Jan 20 01:18:24.060141 kernel: CPU features: detected: TLB range maintenance instructions
Jan 20 01:18:24.060146 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 20 01:18:24.060150 kernel: CPU features: detected: Scalable Vector Extension
Jan 20 01:18:24.060155 kernel: alternatives: applying system-wide alternatives
Jan 20 01:18:24.060160 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jan 20 01:18:24.060165 kernel: SVE: maximum available vector length 16 bytes per vector
Jan 20 01:18:24.060169 kernel: SVE: default vector length 16 bytes per vector
Jan 20 01:18:24.060174 kernel: Memory: 3952828K/4194160K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved)
Jan 20 01:18:24.060180 kernel: devtmpfs: initialized
Jan 20 01:18:24.060185 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 01:18:24.060190 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 20 01:18:24.060194 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 20 01:18:24.060199 kernel: 0 pages in range for non-PLT usage
Jan 20 01:18:24.063700 kernel: 508400 pages in range for PLT usage
Jan 20 01:18:24.063707 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 01:18:24.063712 kernel: SMBIOS 3.1.0 present.
Jan 20 01:18:24.063719 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Jan 20 01:18:24.063724 kernel: DMI: Memory slots populated: 2/2
Jan 20 01:18:24.063728 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 01:18:24.063733 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 20 01:18:24.063738 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 20 01:18:24.063743 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 20 01:18:24.063748 kernel: audit: initializing netlink subsys (disabled)
Jan 20 01:18:24.063753 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Jan 20 01:18:24.063757 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 01:18:24.063763 kernel: cpuidle: using governor menu
Jan 20 01:18:24.063768 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 20 01:18:24.063773 kernel: ASID allocator initialised with 32768 entries
Jan 20 01:18:24.063777 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 01:18:24.063782 kernel: Serial: AMBA PL011 UART driver
Jan 20 01:18:24.063787 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 01:18:24.063792 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 01:18:24.063796 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 20 01:18:24.063801 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 20 01:18:24.063807 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 01:18:24.063811 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 01:18:24.063816 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 20 01:18:24.063821 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 20 01:18:24.063826 kernel: ACPI: Added _OSI(Module Device)
Jan 20 01:18:24.063830 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 01:18:24.063835 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 01:18:24.063840 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 01:18:24.063844 kernel: ACPI: Interpreter enabled
Jan 20 01:18:24.063850 kernel: ACPI: Using GIC for interrupt routing
Jan 20 01:18:24.063855 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 20 01:18:24.063859 kernel: printk: legacy console [ttyAMA0] enabled
Jan 20 01:18:24.063864 kernel: printk: legacy bootconsole [pl11] disabled
Jan 20 01:18:24.063869 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 20 01:18:24.063874 kernel: ACPI: CPU0 has been hot-added
Jan 20 01:18:24.063878 kernel: ACPI: CPU1 has been hot-added
Jan 20 01:18:24.063883 kernel: iommu: Default domain type: Translated
Jan 20 01:18:24.063888 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 20 01:18:24.063893 kernel: efivars: Registered efivars operations
Jan 20 01:18:24.063898 kernel: vgaarb: loaded
Jan 20 01:18:24.063903 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 20 01:18:24.063908 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 01:18:24.063912 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 01:18:24.063917 kernel: pnp: PnP ACPI init
Jan 20 01:18:24.063921 kernel: pnp: PnP ACPI: found 0 devices
Jan 20 01:18:24.063926 kernel: NET: Registered PF_INET protocol family
Jan 20 01:18:24.063931 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 01:18:24.063936 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 01:18:24.063941 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 01:18:24.063946 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 01:18:24.063951 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 01:18:24.063956 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 01:18:24.063960 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 01:18:24.063965 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 01:18:24.063970 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 01:18:24.063974 kernel: PCI: CLS 0 bytes, default 64
Jan 20 01:18:24.063979 kernel: kvm [1]: HYP mode not available
Jan 20 01:18:24.063985 kernel: Initialise system trusted keyrings
Jan 20 01:18:24.063989 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 01:18:24.063994 kernel: Key type asymmetric registered
Jan 20 01:18:24.063999 kernel: Asymmetric key parser 'x509' registered
Jan 20 01:18:24.064004 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jan 20 01:18:24.064008 kernel: io scheduler mq-deadline registered
Jan 20 01:18:24.064013 kernel: io scheduler kyber registered
Jan 20 01:18:24.064017 kernel: io scheduler bfq registered
Jan 20 01:18:24.064022 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 01:18:24.064028 kernel: thunder_xcv, ver 1.0
Jan 20 01:18:24.064032 kernel: thunder_bgx, ver 1.0
Jan 20 01:18:24.064037 kernel: nicpf, ver 1.0
Jan 20 01:18:24.064042 kernel: nicvf, ver 1.0
Jan 20 01:18:24.064146 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 20 01:18:24.064198 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-20T01:18:23 UTC (1768871903)
Jan 20 01:18:24.064215 kernel: efifb: probing for efifb
Jan 20 01:18:24.064224 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 20 01:18:24.064229 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 20 01:18:24.064234 kernel: efifb: scrolling: redraw
Jan 20 01:18:24.064239 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 20 01:18:24.064244 kernel: Console: switching to colour frame buffer device 128x48
Jan 20 01:18:24.064249 kernel: fb0: EFI VGA frame buffer device
Jan 20 01:18:24.064253 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 20 01:18:24.064258 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 20 01:18:24.064263 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jan 20 01:18:24.064269 kernel: NET: Registered PF_INET6 protocol family
Jan 20 01:18:24.064274 kernel: watchdog: NMI not fully supported
Jan 20 01:18:24.064278 kernel: watchdog: Hard watchdog permanently disabled
Jan 20 01:18:24.064283 kernel: Segment Routing with IPv6
Jan 20 01:18:24.064288 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 01:18:24.064292 kernel: NET: Registered PF_PACKET protocol family
Jan 20 01:18:24.064297 kernel: Key type dns_resolver registered
Jan 20 01:18:24.064302 kernel: registered taskstats version 1
Jan 20 01:18:24.064306 kernel: Loading compiled-in X.509 certificates
Jan 20 01:18:24.064311 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3a8e96311e10f8204c78917500006eba3c60d834'
Jan 20 01:18:24.064317 kernel: Demotion targets for Node 0: null
Jan 20 01:18:24.064322 kernel: Key type .fscrypt registered
Jan 20 01:18:24.064326 kernel: Key type fscrypt-provisioning registered
Jan 20 01:18:24.064331 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 01:18:24.064336 kernel: ima: Allocated hash algorithm: sha1
Jan 20 01:18:24.064341 kernel: ima: No architecture policies found
Jan 20 01:18:24.064345 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 20 01:18:24.064350 kernel: clk: Disabling unused clocks
Jan 20 01:18:24.064355 kernel: PM: genpd: Disabling unused power domains
Jan 20 01:18:24.064360 kernel: Warning: unable to open an initial console.
Jan 20 01:18:24.064365 kernel: Freeing unused kernel memory: 39552K
Jan 20 01:18:24.064370 kernel: Run /init as init process
Jan 20 01:18:24.064374 kernel: with arguments:
Jan 20 01:18:24.064379 kernel: /init
Jan 20 01:18:24.064383 kernel: with environment:
Jan 20 01:18:24.064388 kernel: HOME=/
Jan 20 01:18:24.064393 kernel: TERM=linux
Jan 20 01:18:24.064398 systemd[1]: Successfully made /usr/ read-only.
Jan 20 01:18:24.064406 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 20 01:18:24.064412 systemd[1]: Detected virtualization microsoft.
Jan 20 01:18:24.064417 systemd[1]: Detected architecture arm64.
Jan 20 01:18:24.064421 systemd[1]: Running in initrd.
Jan 20 01:18:24.064426 systemd[1]: No hostname configured, using default hostname.
Jan 20 01:18:24.064432 systemd[1]: Hostname set to .
Jan 20 01:18:24.064437 systemd[1]: Initializing machine ID from random generator.
Jan 20 01:18:24.064443 systemd[1]: Queued start job for default target initrd.target.
Jan 20 01:18:24.064448 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 01:18:24.064453 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 01:18:24.064459 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 20 01:18:24.064464 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 01:18:24.064469 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 20 01:18:24.064475 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 20 01:18:24.064482 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 20 01:18:24.064487 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 20 01:18:24.064492 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 01:18:24.064497 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 01:18:24.064502 systemd[1]: Reached target paths.target - Path Units.
Jan 20 01:18:24.064508 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 01:18:24.064513 systemd[1]: Reached target swap.target - Swaps.
Jan 20 01:18:24.064518 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 01:18:24.064524 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 01:18:24.064529 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 01:18:24.064534 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 20 01:18:24.064539 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 20 01:18:24.064544 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 01:18:24.064550 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 01:18:24.064555 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 01:18:24.064560 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 01:18:24.064566 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 20 01:18:24.064571 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 01:18:24.064576 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 20 01:18:24.064582 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 20 01:18:24.064587 systemd[1]: Starting systemd-fsck-usr.service...
Jan 20 01:18:24.064592 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 01:18:24.064597 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 01:18:24.064614 systemd-journald[225]: Collecting audit messages is disabled.
Jan 20 01:18:24.064628 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:18:24.064634 systemd-journald[225]: Journal started
Jan 20 01:18:24.064649 systemd-journald[225]: Runtime Journal (/run/log/journal/ea8d6c1fc2894e15bde7287546c9cb8c) is 8M, max 78.3M, 70.3M free.
Jan 20 01:18:24.064904 systemd-modules-load[227]: Inserted module 'overlay'
Jan 20 01:18:24.080100 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 01:18:24.081011 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 20 01:18:24.102269 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 01:18:24.102284 kernel: Bridge firewalling registered
Jan 20 01:18:24.097180 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 01:18:24.100806 systemd-modules-load[227]: Inserted module 'br_netfilter'
Jan 20 01:18:24.107566 systemd[1]: Finished systemd-fsck-usr.service.
Jan 20 01:18:24.118100 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 01:18:24.126318 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:18:24.135933 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 01:18:24.156758 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 01:18:24.163372 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 01:18:24.180922 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 01:18:24.191991 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 01:18:24.193914 systemd-tmpfiles[255]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 20 01:18:24.204941 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 01:18:24.214223 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 01:18:24.225507 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 01:18:24.242857 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 20 01:18:24.251109 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 01:18:24.262766 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 01:18:24.277773 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=3825f93c5ac04d887cdff1d17f655741a9a0c1b2ce2432debff700fb0368bb09
Jan 20 01:18:24.288219 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 01:18:24.323369 systemd-resolved[263]: Positive Trust Anchors:
Jan 20 01:18:24.323379 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 01:18:24.323399 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 01:18:24.325130 systemd-resolved[263]: Defaulting to hostname 'linux'.
Jan 20 01:18:24.325778 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 01:18:24.331881 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 01:18:24.416218 kernel: SCSI subsystem initialized
Jan 20 01:18:24.422215 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 01:18:24.430221 kernel: iscsi: registered transport (tcp)
Jan 20 01:18:24.442897 kernel: iscsi: registered transport (qla4xxx)
Jan 20 01:18:24.442909 kernel: QLogic iSCSI HBA Driver
Jan 20 01:18:24.455454 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 20 01:18:24.473368 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 01:18:24.479718 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 01:18:24.525523 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 20 01:18:24.532316 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 20 01:18:24.588223 kernel: raid6: neonx8 gen() 18552 MB/s
Jan 20 01:18:24.607212 kernel: raid6: neonx4 gen() 18568 MB/s
Jan 20 01:18:24.626211 kernel: raid6: neonx2 gen() 17075 MB/s
Jan 20 01:18:24.646211 kernel: raid6: neonx1 gen() 15036 MB/s
Jan 20 01:18:24.665210 kernel: raid6: int64x8 gen() 10530 MB/s
Jan 20 01:18:24.684228 kernel: raid6: int64x4 gen() 10612 MB/s
Jan 20 01:18:24.704212 kernel: raid6: int64x2 gen() 8991 MB/s
Jan 20 01:18:24.725929 kernel: raid6: int64x1 gen() 7012 MB/s
Jan 20 01:18:24.725984 kernel: raid6: using algorithm neonx4 gen() 18568 MB/s
Jan 20 01:18:24.747600 kernel: raid6: .... xor() 15147 MB/s, rmw enabled
Jan 20 01:18:24.747640 kernel: raid6: using neon recovery algorithm
Jan 20 01:18:24.756044 kernel: xor: measuring software checksum speed
Jan 20 01:18:24.756059 kernel: 8regs : 28573 MB/sec
Jan 20 01:18:24.759358 kernel: 32regs : 28800 MB/sec
Jan 20 01:18:24.761881 kernel: arm64_neon : 37635 MB/sec
Jan 20 01:18:24.764931 kernel: xor: using function: arm64_neon (37635 MB/sec)
Jan 20 01:18:24.802236 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 20 01:18:24.807511 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 01:18:24.817336 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 01:18:24.841600 systemd-udevd[475]: Using default interface naming scheme 'v255'.
Jan 20 01:18:24.845787 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 01:18:24.857999 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 20 01:18:24.884218 dracut-pre-trigger[491]: rd.md=0: removing MD RAID activation
Jan 20 01:18:24.904612 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 01:18:24.910042 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 01:18:24.956579 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 01:18:24.969312 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 20 01:18:25.024040 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 01:18:25.028524 kernel: hv_vmbus: Vmbus version:5.3
Jan 20 01:18:25.027460 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:18:25.047622 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:18:25.094267 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 20 01:18:25.094285 kernel: hv_vmbus: registering driver hid_hyperv
Jan 20 01:18:25.094292 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 20 01:18:25.094298 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 20 01:18:25.094306 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 20 01:18:25.094313 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 20 01:18:25.094320 kernel: hv_vmbus: registering driver hv_netvsc
Jan 20 01:18:25.094326 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 20 01:18:25.094440 kernel: hv_vmbus: registering driver hv_storvsc
Jan 20 01:18:25.094447 kernel: PTP clock support registered
Jan 20 01:18:25.085403 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:18:25.114061 kernel: scsi host0: storvsc_host_t
Jan 20 01:18:25.119547 kernel: scsi host1: storvsc_host_t
Jan 20 01:18:25.119623 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 20 01:18:25.109920 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 20 01:18:25.114655 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 01:18:25.137805 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jan 20 01:18:25.114742 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:18:25.124421 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:18:25.173221 kernel: hv_utils: Registering HyperV Utility Driver
Jan 20 01:18:25.173255 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 20 01:18:25.173384 kernel: hv_vmbus: registering driver hv_utils
Jan 20 01:18:25.173392 kernel: hv_netvsc 002248b4-6a7f-0022-48b4-6a7f002248b4 eth0: VF slot 1 added
Jan 20 01:18:25.185871 kernel: hv_utils: Heartbeat IC version 3.0
Jan 20 01:18:25.185895 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 20 01:18:25.186003 kernel: hv_utils: Shutdown IC version 3.2
Jan 20 01:18:25.051292 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 20 01:18:25.063258 kernel: hv_utils: TimeSync IC version 4.0
Jan 20 01:18:25.063270 systemd-journald[225]: Time jumped backwards, rotating.
Jan 20 01:18:25.063297 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 20 01:18:25.063388 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 20 01:18:25.055911 systemd-resolved[263]: Clock change detected. Flushing caches.
Jan 20 01:18:25.077110 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#60 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jan 20 01:18:25.083438 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jan 20 01:18:25.056092 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:18:25.094427 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 20 01:18:25.098432 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 20 01:18:25.104460 kernel: hv_vmbus: registering driver hv_pci
Jan 20 01:18:25.111909 kernel: hv_pci 143e59c9-dbaa-47f8-8881-b171119e6b41: PCI VMBus probing: Using version 0x10004
Jan 20 01:18:25.112034 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 20 01:18:25.112115 kernel: hv_pci 143e59c9-dbaa-47f8-8881-b171119e6b41: PCI host bridge to bus dbaa:00
Jan 20 01:18:25.119030 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 01:18:25.119051 kernel: pci_bus dbaa:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 20 01:18:25.124424 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 20 01:18:25.124543 kernel: pci_bus dbaa:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 20 01:18:25.133525 kernel: pci dbaa:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Jan 20 01:18:25.139462 kernel: pci dbaa:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 20 01:18:25.144431 kernel: pci dbaa:00:02.0: enabling Extended Tags
Jan 20 01:18:25.158436 kernel: pci dbaa:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at dbaa:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Jan 20 01:18:25.167992 kernel: pci_bus dbaa:00: busn_res: [bus 00-ff] end is updated to 00
Jan 20 01:18:25.168130 kernel: pci dbaa:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Jan 20 01:18:25.184448 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#42 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 20 01:18:25.203429 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 20 01:18:25.239584 kernel: mlx5_core dbaa:00:02.0: enabling device (0000 -> 0002)
Jan 20 01:18:25.247578 kernel: mlx5_core dbaa:00:02.0: PTM is not supported by PCIe
Jan 20 01:18:25.247664 kernel: mlx5_core dbaa:00:02.0: firmware version: 16.30.5026
Jan 20 01:18:25.418743 kernel: hv_netvsc 002248b4-6a7f-0022-48b4-6a7f002248b4 eth0: VF registering: eth1
Jan 20 01:18:25.418907 kernel: mlx5_core dbaa:00:02.0 eth1: joined to eth0
Jan 20 01:18:25.424202 kernel: mlx5_core dbaa:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 20 01:18:25.436428 kernel: mlx5_core dbaa:00:02.0 enP56234s1: renamed from eth1
Jan 20 01:18:25.566389 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 20 01:18:25.659886 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 20 01:18:25.683618 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 20 01:18:25.688924 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 20 01:18:25.702570 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 20 01:18:25.724343 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 20 01:18:25.735480 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 20 01:18:25.749649 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jan 20 01:18:25.745608 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 01:18:25.754820 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 01:18:25.759877 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 01:18:25.775038 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 20 01:18:25.795093 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 20 01:18:25.806833 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 01:18:26.805037 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#16 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jan 20 01:18:26.818492 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 20 01:18:26.818859 disk-uuid[663]: The operation has completed successfully.
Jan 20 01:18:26.887928 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 20 01:18:26.888031 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 20 01:18:26.912666 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 20 01:18:26.926532 sh[828]: Success
Jan 20 01:18:26.961401 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 01:18:26.961450 kernel: device-mapper: uevent: version 1.0.3
Jan 20 01:18:26.966368 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 20 01:18:26.975445 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jan 20 01:18:27.238857 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 20 01:18:27.243809 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 20 01:18:27.262889 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 20 01:18:27.288883 kernel: BTRFS: device fsid b1d239e4-c666-4b78-9d3d-e9e6443c3359 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (846)
Jan 20 01:18:27.288911 kernel: BTRFS info (device dm-0): first mount of filesystem b1d239e4-c666-4b78-9d3d-e9e6443c3359
Jan 20 01:18:27.293480 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 20 01:18:27.587059 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 20 01:18:27.587127 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 20 01:18:27.619597 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 20 01:18:27.623769 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 20 01:18:27.631146 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 20 01:18:27.631801 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 20 01:18:27.654240 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 20 01:18:27.679453 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (869)
Jan 20 01:18:27.689676 kernel: BTRFS info (device sda6): first mount of filesystem e20a00db-1b49-4e8f-8029-c59d826af381
Jan 20 01:18:27.689710 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 20 01:18:27.714611 kernel: BTRFS info (device sda6): turning on async discard
Jan 20 01:18:27.714641 kernel: BTRFS info (device sda6): enabling free space tree
Jan 20 01:18:27.722441 kernel: BTRFS info (device sda6): last unmount of filesystem e20a00db-1b49-4e8f-8029-c59d826af381
Jan 20 01:18:27.722955 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 20 01:18:27.732334 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 20 01:18:27.772182 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 01:18:27.779618 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 01:18:27.813569 systemd-networkd[1015]: lo: Link UP
Jan 20 01:18:27.813580 systemd-networkd[1015]: lo: Gained carrier
Jan 20 01:18:27.814266 systemd-networkd[1015]: Enumeration completed
Jan 20 01:18:27.814333 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 01:18:27.821352 systemd-networkd[1015]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 01:18:27.821355 systemd-networkd[1015]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 01:18:27.821670 systemd[1]: Reached target network.target - Network.
Jan 20 01:18:27.884429 kernel: mlx5_core dbaa:00:02.0 enP56234s1: Link up
Jan 20 01:18:27.915752 kernel: hv_netvsc 002248b4-6a7f-0022-48b4-6a7f002248b4 eth0: Data path switched to VF: enP56234s1
Jan 20 01:18:27.915453 systemd-networkd[1015]: enP56234s1: Link UP
Jan 20 01:18:27.915509 systemd-networkd[1015]: eth0: Link UP
Jan 20 01:18:27.915573 systemd-networkd[1015]: eth0: Gained carrier
Jan 20 01:18:27.915581 systemd-networkd[1015]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 01:18:27.933555 systemd-networkd[1015]: enP56234s1: Gained carrier
Jan 20 01:18:27.945436 systemd-networkd[1015]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 20 01:18:28.977535 systemd-networkd[1015]: eth0: Gained IPv6LL
Jan 20 01:18:29.068815 ignition[954]: Ignition 2.22.0
Jan 20 01:18:29.068826 ignition[954]: Stage: fetch-offline
Jan 20 01:18:29.073254 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 01:18:29.068918 ignition[954]: no configs at "/usr/lib/ignition/base.d"
Jan 20 01:18:29.081164 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 20 01:18:29.068925 ignition[954]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 20 01:18:29.068989 ignition[954]: parsed url from cmdline: ""
Jan 20 01:18:29.068991 ignition[954]: no config URL provided
Jan 20 01:18:29.068995 ignition[954]: reading system config file "/usr/lib/ignition/user.ign"
Jan 20 01:18:29.069000 ignition[954]: no config at "/usr/lib/ignition/user.ign"
Jan 20 01:18:29.069003 ignition[954]: failed to fetch config: resource requires networking
Jan 20 01:18:29.069348 ignition[954]: Ignition finished successfully
Jan 20 01:18:29.110818 ignition[1026]: Ignition 2.22.0
Jan 20 01:18:29.110823 ignition[1026]: Stage: fetch
Jan 20 01:18:29.111023 ignition[1026]: no configs at "/usr/lib/ignition/base.d"
Jan 20 01:18:29.111030 ignition[1026]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 20 01:18:29.111094 ignition[1026]: parsed url from cmdline: ""
Jan 20 01:18:29.111096 ignition[1026]: no config URL provided
Jan 20 01:18:29.111100 ignition[1026]: reading system config file "/usr/lib/ignition/user.ign"
Jan 20 01:18:29.111106 ignition[1026]: no config at "/usr/lib/ignition/user.ign"
Jan 20 01:18:29.111121 ignition[1026]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 20 01:18:29.200971 ignition[1026]: GET result: OK
Jan 20 01:18:29.201028 ignition[1026]: config has been read from IMDS userdata
Jan 20 01:18:29.201051 ignition[1026]: parsing config with SHA512: 85ec45fbbd10204876ff55c998b02db62ac733ed560214b332eb8a96b43e6f719e68adf5d9005eb890313a2c9290f5c1fc17d1c3f50c13cba1f7097164f35a32
Jan 20 01:18:29.204295 unknown[1026]: fetched base config from "system"
Jan 20 01:18:29.204603 ignition[1026]: fetch: fetch complete
Jan 20 01:18:29.204310 unknown[1026]: fetched base config from "system"
Jan 20 01:18:29.204607 ignition[1026]: fetch: fetch passed
Jan 20 01:18:29.204314 unknown[1026]: fetched user config from "azure"
Jan 20 01:18:29.204652 ignition[1026]: Ignition finished successfully
Jan 20 01:18:29.208485 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 20 01:18:29.216274 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 20 01:18:29.254582 ignition[1033]: Ignition 2.22.0
Jan 20 01:18:29.254589 ignition[1033]: Stage: kargs
Jan 20 01:18:29.259693 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 20 01:18:29.255616 ignition[1033]: no configs at "/usr/lib/ignition/base.d"
Jan 20 01:18:29.265405 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 20 01:18:29.255624 ignition[1033]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 20 01:18:29.256194 ignition[1033]: kargs: kargs passed
Jan 20 01:18:29.256233 ignition[1033]: Ignition finished successfully
Jan 20 01:18:29.297849 ignition[1039]: Ignition 2.22.0
Jan 20 01:18:29.297858 ignition[1039]: Stage: disks
Jan 20 01:18:29.302005 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 20 01:18:29.298008 ignition[1039]: no configs at "/usr/lib/ignition/base.d"
Jan 20 01:18:29.306262 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 20 01:18:29.298014 ignition[1039]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 20 01:18:29.312812 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 20 01:18:29.300869 ignition[1039]: disks: disks passed
Jan 20 01:18:29.321522 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 01:18:29.300908 ignition[1039]: Ignition finished successfully
Jan 20 01:18:29.329330 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 01:18:29.338078 systemd[1]: Reached target basic.target - Basic System.
Jan 20 01:18:29.347138 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 20 01:18:29.443618 systemd-fsck[1048]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Jan 20 01:18:29.451516 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 20 01:18:29.457828 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 20 01:18:29.690434 kernel: EXT4-fs (sda9): mounted filesystem e54ab1b7-d0c9-4deb-8673-6708a877d2de r/w with ordered data mode. Quota mode: none.
Jan 20 01:18:29.690962 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 20 01:18:29.694725 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 20 01:18:29.716955 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 01:18:29.733877 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 20 01:18:29.741823 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 20 01:18:29.751883 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 20 01:18:29.779586 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1062)
Jan 20 01:18:29.779619 kernel: BTRFS info (device sda6): first mount of filesystem e20a00db-1b49-4e8f-8029-c59d826af381
Jan 20 01:18:29.779632 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 20 01:18:29.751907 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 01:18:29.771591 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 20 01:18:29.802895 kernel: BTRFS info (device sda6): turning on async discard
Jan 20 01:18:29.802909 kernel: BTRFS info (device sda6): enabling free space tree
Jan 20 01:18:29.784837 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 20 01:18:29.808554 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 01:18:30.443910 coreos-metadata[1064]: Jan 20 01:18:30.443 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 20 01:18:30.451165 coreos-metadata[1064]: Jan 20 01:18:30.451 INFO Fetch successful
Jan 20 01:18:30.451165 coreos-metadata[1064]: Jan 20 01:18:30.451 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 20 01:18:30.463988 coreos-metadata[1064]: Jan 20 01:18:30.463 INFO Fetch successful
Jan 20 01:18:30.486983 coreos-metadata[1064]: Jan 20 01:18:30.486 INFO wrote hostname ci-4459.2.2-n-4dd77badda to /sysroot/etc/hostname
Jan 20 01:18:30.494011 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 20 01:18:30.619362 initrd-setup-root[1092]: cut: /sysroot/etc/passwd: No such file or directory
Jan 20 01:18:30.657009 initrd-setup-root[1099]: cut: /sysroot/etc/group: No such file or directory
Jan 20 01:18:30.679140 initrd-setup-root[1106]: cut: /sysroot/etc/shadow: No such file or directory
Jan 20 01:18:30.685424 initrd-setup-root[1113]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 20 01:18:31.712472 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 20 01:18:31.718021 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 20 01:18:31.734859 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 20 01:18:31.742876 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 20 01:18:31.752558 kernel: BTRFS info (device sda6): last unmount of filesystem e20a00db-1b49-4e8f-8029-c59d826af381
Jan 20 01:18:31.770740 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 20 01:18:31.781559 ignition[1181]: INFO : Ignition 2.22.0
Jan 20 01:18:31.784704 ignition[1181]: INFO : Stage: mount
Jan 20 01:18:31.784704 ignition[1181]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 01:18:31.784704 ignition[1181]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 20 01:18:31.784704 ignition[1181]: INFO : mount: mount passed
Jan 20 01:18:31.784704 ignition[1181]: INFO : Ignition finished successfully
Jan 20 01:18:31.784203 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 20 01:18:31.789345 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 20 01:18:31.819519 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 01:18:31.851209 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1192)
Jan 20 01:18:31.851235 kernel: BTRFS info (device sda6): first mount of filesystem e20a00db-1b49-4e8f-8029-c59d826af381
Jan 20 01:18:31.855899 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 20 01:18:31.864427 kernel: BTRFS info (device sda6): turning on async discard
Jan 20 01:18:31.864451 kernel: BTRFS info (device sda6): enabling free space tree
Jan 20 01:18:31.865724 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 01:18:31.894995 ignition[1209]: INFO : Ignition 2.22.0
Jan 20 01:18:31.898785 ignition[1209]: INFO : Stage: files
Jan 20 01:18:31.898785 ignition[1209]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 01:18:31.898785 ignition[1209]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 20 01:18:31.898785 ignition[1209]: DEBUG : files: compiled without relabeling support, skipping
Jan 20 01:18:31.916313 ignition[1209]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 20 01:18:31.916313 ignition[1209]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 20 01:18:31.981626 ignition[1209]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 20 01:18:31.987239 ignition[1209]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 20 01:18:31.987239 ignition[1209]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 20 01:18:31.981950 unknown[1209]: wrote ssh authorized keys file for user: core
Jan 20 01:18:32.028963 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 20 01:18:32.037458 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 20 01:18:32.057544 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 20 01:18:32.164904 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 20 01:18:32.173441 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 20 01:18:32.173441 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 20 01:18:32.256753 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 20 01:18:32.363842 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 20 01:18:32.371059 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 20 01:18:32.371059 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 20 01:18:32.371059 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 01:18:32.371059 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 01:18:32.371059 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 01:18:32.371059 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 01:18:32.371059 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 01:18:32.371059 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 01:18:32.428057 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 01:18:32.428057 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 01:18:32.428057 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 20 01:18:32.428057 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 20 01:18:32.428057 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 20 01:18:32.428057 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jan 20 01:18:32.785478 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 20 01:18:33.094560 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 20 01:18:33.094560 ignition[1209]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 20 01:18:33.131326 ignition[1209]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 01:18:33.147942 ignition[1209]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 01:18:33.147942 ignition[1209]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 20 01:18:33.170031 ignition[1209]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 20 01:18:33.170031 ignition[1209]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 20 01:18:33.170031 ignition[1209]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 01:18:33.170031 ignition[1209]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 01:18:33.170031 ignition[1209]: INFO : files: files passed
Jan 20 01:18:33.170031 ignition[1209]: INFO : Ignition finished successfully
Jan 20 01:18:33.150060 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 20 01:18:33.160892 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 20 01:18:33.190950 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 20 01:18:33.205636 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 20 01:18:33.205709 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 20 01:18:33.235641 initrd-setup-root-after-ignition[1238]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 01:18:33.235641 initrd-setup-root-after-ignition[1238]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 01:18:33.248166 initrd-setup-root-after-ignition[1242]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 01:18:33.242741 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 01:18:33.253220 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 20 01:18:33.264177 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 20 01:18:33.302843 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 20 01:18:33.302932 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 20 01:18:33.312227 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 20 01:18:33.321254 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 20 01:18:33.329392 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 20 01:18:33.329939 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 20 01:18:33.361983 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 01:18:33.368311 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 20 01:18:33.397390 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 20 01:18:33.402355 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 01:18:33.411700 systemd[1]: Stopped target timers.target - Timer Units.
Jan 20 01:18:33.419852 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 20 01:18:33.419935 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 01:18:33.431817 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 20 01:18:33.436157 systemd[1]: Stopped target basic.target - Basic System.
Jan 20 01:18:33.444806 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 20 01:18:33.453170 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 01:18:33.461378 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 20 01:18:33.470329 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 20 01:18:33.479510 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 20 01:18:33.488191 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 01:18:33.497533 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 20 01:18:33.506092 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 20 01:18:33.514963 systemd[1]: Stopped target swap.target - Swaps.
Jan 20 01:18:33.522282 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 20 01:18:33.522374 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 01:18:33.533534 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 20 01:18:33.538265 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 01:18:33.546980 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 20 01:18:33.547041 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 01:18:33.556071 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 20 01:18:33.556151 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 20 01:18:33.569116 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 20 01:18:33.569192 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 01:18:33.574357 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 20 01:18:33.630488 ignition[1262]: INFO : Ignition 2.22.0
Jan 20 01:18:33.630488 ignition[1262]: INFO : Stage: umount
Jan 20 01:18:33.630488 ignition[1262]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 01:18:33.630488 ignition[1262]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 20 01:18:33.630488 ignition[1262]: INFO : umount: umount passed
Jan 20 01:18:33.630488 ignition[1262]: INFO : Ignition finished successfully
Jan 20 01:18:33.574436 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 20 01:18:33.582293 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 20 01:18:33.582353 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 20 01:18:33.593164 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 20 01:18:33.616504 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 20 01:18:33.622624 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 01:18:33.626497 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:18:33.631473 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 01:18:33.631580 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 01:18:33.644261 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 01:18:33.644338 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 01:18:33.655713 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 01:18:33.655789 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 01:18:33.663190 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 01:18:33.663230 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 01:18:33.667281 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 20 01:18:33.667308 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 20 01:18:33.671431 systemd[1]: Stopped target network.target - Network. Jan 20 01:18:33.686977 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 01:18:33.687028 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 01:18:33.692567 systemd[1]: Stopped target paths.target - Path Units. Jan 20 01:18:33.705675 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 01:18:33.713431 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:18:33.723973 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 01:18:33.732053 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 01:18:33.744140 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 01:18:33.744190 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 20 01:18:33.751722 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 01:18:33.751751 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 01:18:33.759490 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 01:18:33.759537 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 01:18:33.767140 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 01:18:33.767166 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 01:18:33.774922 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 01:18:33.782431 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 01:18:33.790968 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 01:18:33.791522 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 01:18:33.791594 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 01:18:33.800560 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 01:18:33.800627 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 01:18:33.982656 kernel: hv_netvsc 002248b4-6a7f-0022-48b4-6a7f002248b4 eth0: Data path switched from VF: enP56234s1 Jan 20 01:18:33.813578 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 20 01:18:33.813770 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 01:18:33.813843 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 01:18:33.826545 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 20 01:18:33.828496 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 20 01:18:33.835089 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 01:18:33.835121 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Jan 20 01:18:33.845187 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 01:18:33.859872 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 01:18:33.859926 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 01:18:33.868569 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 01:18:33.868606 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:18:33.880398 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 01:18:33.881258 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 01:18:33.888782 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 01:18:33.888823 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:18:33.901516 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:18:33.908402 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 20 01:18:33.908478 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 20 01:18:33.926383 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 01:18:33.926516 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:18:33.934608 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 01:18:33.934658 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 01:18:33.943457 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 01:18:33.943480 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:18:33.951219 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 01:18:33.951253 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jan 20 01:18:33.964031 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 01:18:33.964064 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 01:18:33.982722 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 01:18:33.982762 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:18:33.993534 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 01:18:34.004454 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 01:18:34.004504 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 01:18:34.016096 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 01:18:34.016130 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:18:34.025652 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 20 01:18:34.025693 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 01:18:34.034808 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 01:18:34.034852 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:18:34.040025 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:18:34.040056 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:18:34.054397 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 20 01:18:34.054459 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jan 20 01:18:34.054481 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
Jan 20 01:18:34.054503 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 20 01:18:34.054749 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 01:18:34.054819 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 01:18:34.077737 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 01:18:34.077840 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 01:18:34.236565 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 01:18:34.236661 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 01:18:34.246007 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 01:18:34.254497 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 01:18:34.254545 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 01:18:34.264096 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 01:18:34.282212 systemd[1]: Switching root. Jan 20 01:18:34.387158 systemd-journald[225]: Journal stopped Jan 20 01:18:38.833953 systemd-journald[225]: Received SIGTERM from PID 1 (systemd). 
Jan 20 01:18:38.833971 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 01:18:38.833979 kernel: SELinux: policy capability open_perms=1 Jan 20 01:18:38.833985 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 01:18:38.833991 kernel: SELinux: policy capability always_check_network=0 Jan 20 01:18:38.833996 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 01:18:38.834002 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 01:18:38.834008 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 01:18:38.834013 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 01:18:38.834018 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 01:18:38.834023 kernel: audit: type=1403 audit(1768871915.468:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 20 01:18:38.834030 systemd[1]: Successfully loaded SELinux policy in 261.643ms. Jan 20 01:18:38.834037 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.321ms. Jan 20 01:18:38.834043 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 01:18:38.834050 systemd[1]: Detected virtualization microsoft. Jan 20 01:18:38.834057 systemd[1]: Detected architecture arm64. Jan 20 01:18:38.834062 systemd[1]: Detected first boot. Jan 20 01:18:38.834069 systemd[1]: Hostname set to . Jan 20 01:18:38.834075 systemd[1]: Initializing machine ID from random generator. Jan 20 01:18:38.834081 zram_generator::config[1305]: No configuration found. Jan 20 01:18:38.834087 kernel: NET: Registered PF_VSOCK protocol family Jan 20 01:18:38.834093 systemd[1]: Populated /etc with preset unit settings. 
Jan 20 01:18:38.834099 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 20 01:18:38.834106 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 01:18:38.834112 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 01:18:38.834118 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 01:18:38.834124 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 01:18:38.834131 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 01:18:38.834137 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 01:18:38.834143 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 01:18:38.834150 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 01:18:38.834157 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 01:18:38.834163 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 01:18:38.834169 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 01:18:38.834175 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:18:38.834181 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:18:38.834187 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 01:18:38.834193 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 01:18:38.834200 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 01:18:38.834206 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 20 01:18:38.834214 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 20 01:18:38.834220 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:18:38.834226 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:18:38.834232 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 01:18:38.834238 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 01:18:38.834244 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 01:18:38.834252 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 01:18:38.834258 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:18:38.834264 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 01:18:38.834270 systemd[1]: Reached target slices.target - Slice Units. Jan 20 01:18:38.834276 systemd[1]: Reached target swap.target - Swaps. Jan 20 01:18:38.834282 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 01:18:38.834289 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 01:18:38.834296 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 01:18:38.834302 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:18:38.834308 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 01:18:38.834314 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:18:38.834321 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 01:18:38.834327 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 01:18:38.834334 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Jan 20 01:18:38.834340 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 01:18:38.834346 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 01:18:38.834352 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 01:18:38.834358 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 01:18:38.834365 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 01:18:38.834371 systemd[1]: Reached target machines.target - Containers. Jan 20 01:18:38.834377 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 01:18:38.834384 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:18:38.834390 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 01:18:38.834397 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 01:18:38.834403 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:18:38.834837 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 01:18:38.834864 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:18:38.834871 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 01:18:38.834878 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:18:38.834888 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 01:18:38.834895 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 01:18:38.834901 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. 
Jan 20 01:18:38.834907 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 01:18:38.834913 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 01:18:38.834920 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 01:18:38.834927 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 01:18:38.834933 kernel: loop: module loaded Jan 20 01:18:38.834940 kernel: fuse: init (API version 7.41) Jan 20 01:18:38.834945 kernel: ACPI: bus type drm_connector registered Jan 20 01:18:38.834951 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 01:18:38.834957 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 01:18:38.834985 systemd-journald[1395]: Collecting audit messages is disabled. Jan 20 01:18:38.835002 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 01:18:38.835010 systemd-journald[1395]: Journal started Jan 20 01:18:38.835024 systemd-journald[1395]: Runtime Journal (/run/log/journal/634c663bccfa45a6a4460f2f4f0ed35b) is 8M, max 78.3M, 70.3M free. Jan 20 01:18:38.104717 systemd[1]: Queued start job for default target multi-user.target. Jan 20 01:18:38.122819 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 20 01:18:38.123183 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 01:18:38.123464 systemd[1]: systemd-journald.service: Consumed 2.396s CPU time. Jan 20 01:18:38.859209 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 01:18:38.868866 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 01:18:38.876593 systemd[1]: verity-setup.service: Deactivated successfully. 
Jan 20 01:18:38.876635 systemd[1]: Stopped verity-setup.service. Jan 20 01:18:38.889173 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 01:18:38.889803 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 01:18:38.894219 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 01:18:38.898971 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 01:18:38.903116 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 01:18:38.907826 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 01:18:38.912506 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 01:18:38.916882 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 01:18:38.922019 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:18:38.927528 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 01:18:38.927708 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 01:18:38.932882 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:18:38.933067 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:18:38.938234 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 01:18:38.939501 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 01:18:38.944307 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:18:38.944621 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:18:38.950031 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 01:18:38.950209 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 01:18:38.955046 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 01:18:38.955229 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 20 01:18:38.959951 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 01:18:38.964873 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 01:18:38.970519 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 01:18:38.975975 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 01:18:38.981378 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:18:38.994133 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 01:18:38.999634 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 01:18:39.017476 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 01:18:39.022433 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 01:18:39.022461 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 01:18:39.027494 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 01:18:39.035537 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 01:18:39.039694 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:18:39.046942 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 01:18:39.052190 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 01:18:39.056942 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 01:18:39.059512 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jan 20 01:18:39.064432 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 01:18:39.065051 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 01:18:39.072288 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 01:18:39.081295 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 01:18:39.087706 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 01:18:39.096490 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 01:18:39.096931 systemd-journald[1395]: Time spent on flushing to /var/log/journal/634c663bccfa45a6a4460f2f4f0ed35b is 57.005ms for 942 entries. Jan 20 01:18:39.096931 systemd-journald[1395]: System Journal (/var/log/journal/634c663bccfa45a6a4460f2f4f0ed35b) is 11.8M, max 2.6G, 2.6G free. Jan 20 01:18:39.215935 systemd-journald[1395]: Received client request to flush runtime journal. Jan 20 01:18:39.215976 systemd-journald[1395]: /var/log/journal/634c663bccfa45a6a4460f2f4f0ed35b/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jan 20 01:18:39.215997 systemd-journald[1395]: Rotating system journal. Jan 20 01:18:39.216015 kernel: loop0: detected capacity change from 0 to 100632 Jan 20 01:18:39.105903 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 01:18:39.113250 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 01:18:39.120365 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 20 01:18:39.158749 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:18:39.190694 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jan 20 01:18:39.191554 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 20 01:18:39.216991 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 01:18:39.266076 systemd-tmpfiles[1446]: ACLs are not supported, ignoring. Jan 20 01:18:39.266090 systemd-tmpfiles[1446]: ACLs are not supported, ignoring. Jan 20 01:18:39.268425 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 01:18:39.277529 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 01:18:39.427466 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 01:18:39.436054 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 01:18:39.450935 systemd-tmpfiles[1463]: ACLs are not supported, ignoring. Jan 20 01:18:39.451151 systemd-tmpfiles[1463]: ACLs are not supported, ignoring. Jan 20 01:18:39.453442 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:18:39.593472 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 01:18:39.600642 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:18:39.628437 systemd-udevd[1469]: Using default interface naming scheme 'v255'. Jan 20 01:18:39.637462 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 01:18:39.690441 kernel: loop1: detected capacity change from 0 to 119840 Jan 20 01:18:39.793286 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:18:39.803948 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 01:18:39.846580 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 01:18:39.857893 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Jan 20 01:18:39.922035 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 01:18:39.922099 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#55 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 20 01:18:39.956954 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 01:18:39.989722 kernel: hv_vmbus: registering driver hv_balloon Jan 20 01:18:39.989772 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 20 01:18:39.994288 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 20 01:18:40.020969 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:18:40.045282 kernel: hv_vmbus: registering driver hyperv_fb Jan 20 01:18:40.045336 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 20 01:18:40.050553 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 20 01:18:40.057959 kernel: Console: switching to colour dummy device 80x25 Jan 20 01:18:40.058021 kernel: loop2: detected capacity change from 0 to 207008 Jan 20 01:18:40.063490 kernel: Console: switching to colour frame buffer device 128x48 Jan 20 01:18:40.074933 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:18:40.075079 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:18:40.085299 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:18:40.127446 kernel: loop3: detected capacity change from 0 to 27936 Jan 20 01:18:40.159924 systemd-networkd[1474]: lo: Link UP Jan 20 01:18:40.160303 systemd-networkd[1474]: lo: Gained carrier Jan 20 01:18:40.160422 kernel: MACsec IEEE 802.1AE Jan 20 01:18:40.162699 systemd-networkd[1474]: Enumeration completed Jan 20 01:18:40.163680 systemd-networkd[1474]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 20 01:18:40.163685 systemd-networkd[1474]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 01:18:40.164487 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 01:18:40.182055 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 20 01:18:40.187686 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 01:18:40.200617 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 01:18:40.208511 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 01:18:40.225430 kernel: mlx5_core dbaa:00:02.0 enP56234s1: Link up Jan 20 01:18:40.246469 kernel: hv_netvsc 002248b4-6a7f-0022-48b4-6a7f002248b4 eth0: Data path switched to VF: enP56234s1 Jan 20 01:18:40.246910 systemd-networkd[1474]: enP56234s1: Link UP Jan 20 01:18:40.247027 systemd-networkd[1474]: eth0: Link UP Jan 20 01:18:40.247049 systemd-networkd[1474]: eth0: Gained carrier Jan 20 01:18:40.247064 systemd-networkd[1474]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:18:40.253605 systemd-networkd[1474]: enP56234s1: Gained carrier Jan 20 01:18:40.257687 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 01:18:40.263470 systemd-networkd[1474]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 20 01:18:40.265629 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 20 01:18:40.506208 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 20 01:18:40.514681 kernel: loop4: detected capacity change from 0 to 100632 Jan 20 01:18:40.529429 kernel: loop5: detected capacity change from 0 to 119840 Jan 20 01:18:40.539435 kernel: loop6: detected capacity change from 0 to 207008 Jan 20 01:18:40.555431 kernel: loop7: detected capacity change from 0 to 27936 Jan 20 01:18:40.561938 (sd-merge)[1619]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 20 01:18:40.562279 (sd-merge)[1619]: Merged extensions into '/usr'. Jan 20 01:18:40.564899 systemd[1]: Reload requested from client PID 1444 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 01:18:40.564991 systemd[1]: Reloading... Jan 20 01:18:40.612434 zram_generator::config[1644]: No configuration found. Jan 20 01:18:40.777644 systemd[1]: Reloading finished in 212 ms. Jan 20 01:18:40.803458 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 01:18:40.814310 systemd[1]: Starting ensure-sysext.service... Jan 20 01:18:40.820528 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 01:18:40.840151 systemd[1]: Reload requested from client PID 1703 ('systemctl') (unit ensure-sysext.service)... Jan 20 01:18:40.840163 systemd[1]: Reloading... Jan 20 01:18:40.848581 systemd-tmpfiles[1704]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 01:18:40.848842 systemd-tmpfiles[1704]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 01:18:40.849406 systemd-tmpfiles[1704]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 01:18:40.849666 systemd-tmpfiles[1704]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 01:18:40.850866 systemd-tmpfiles[1704]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Jan 20 01:18:40.851443 systemd-tmpfiles[1704]: ACLs are not supported, ignoring.
Jan 20 01:18:40.851601 systemd-tmpfiles[1704]: ACLs are not supported, ignoring.
Jan 20 01:18:40.884230 systemd-tmpfiles[1704]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 01:18:40.884239 systemd-tmpfiles[1704]: Skipping /boot
Jan 20 01:18:40.890482 systemd-tmpfiles[1704]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 01:18:40.890557 systemd-tmpfiles[1704]: Skipping /boot
Jan 20 01:18:40.899454 zram_generator::config[1749]: No configuration found.
Jan 20 01:18:41.040985 systemd[1]: Reloading finished in 200 ms.
Jan 20 01:18:41.062235 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 01:18:41.074405 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 20 01:18:41.082581 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 20 01:18:41.087480 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 01:18:41.095924 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 01:18:41.103591 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 01:18:41.124593 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 01:18:41.129559 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 01:18:41.129650 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 01:18:41.130508 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 20 01:18:41.141577 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 01:18:41.147940 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 20 01:18:41.156038 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 01:18:41.156523 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 01:18:41.161835 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 01:18:41.161960 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 01:18:41.167208 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 01:18:41.167320 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 01:18:41.179530 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 01:18:41.187601 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 01:18:41.194721 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 01:18:41.203528 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 01:18:41.208379 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 01:18:41.209587 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 01:18:41.212263 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 01:18:41.214437 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 01:18:41.219922 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 01:18:41.221446 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 01:18:41.227290 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 01:18:41.227540 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 01:18:41.234718 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 20 01:18:41.245308 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 01:18:41.246467 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 01:18:41.251399 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 01:18:41.260238 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 01:18:41.267131 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 01:18:41.272173 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 01:18:41.272202 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 01:18:41.272237 systemd[1]: Reached target time-set.target - System Time Set.
Jan 20 01:18:41.276817 systemd[1]: Finished ensure-sysext.service.
Jan 20 01:18:41.280791 systemd-resolved[1804]: Positive Trust Anchors:
Jan 20 01:18:41.280801 systemd-resolved[1804]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 01:18:41.280820 systemd-resolved[1804]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 01:18:41.281170 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 01:18:41.281302 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 01:18:41.286794 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 01:18:41.286905 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 01:18:41.291812 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 01:18:41.291912 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 01:18:41.293189 systemd-resolved[1804]: Using system hostname 'ci-4459.2.2-n-4dd77badda'.
Jan 20 01:18:41.297522 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 01:18:41.297642 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 01:18:41.306652 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 01:18:41.306706 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 01:18:41.310898 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 01:18:41.315457 systemd[1]: Reached target network.target - Network.
Jan 20 01:18:41.319533 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 01:18:41.324748 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 20 01:18:41.337240 augenrules[1840]: No rules
Jan 20 01:18:41.338311 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 01:18:41.340452 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 20 01:18:41.457555 systemd-networkd[1474]: eth0: Gained IPv6LL
Jan 20 01:18:41.462590 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 20 01:18:41.468550 systemd[1]: Reached target network-online.target - Network is Online.
Jan 20 01:18:41.807544 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 20 01:18:41.813234 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 20 01:18:44.314930 ldconfig[1439]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 20 01:18:44.331575 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 20 01:18:44.337743 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 20 01:18:44.349457 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 20 01:18:44.354236 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 01:18:44.358759 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 20 01:18:44.363808 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 20 01:18:44.369189 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 20 01:18:44.373541 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 20 01:18:44.378924 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 20 01:18:44.384161 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 20 01:18:44.384185 systemd[1]: Reached target paths.target - Path Units.
Jan 20 01:18:44.387883 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 01:18:44.407043 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 20 01:18:44.412647 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 20 01:18:44.417838 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 20 01:18:44.423267 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 20 01:18:44.428456 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 20 01:18:44.442917 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 20 01:18:44.447318 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 20 01:18:44.452619 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 20 01:18:44.457076 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 01:18:44.460977 systemd[1]: Reached target basic.target - Basic System.
Jan 20 01:18:44.464816 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 20 01:18:44.464835 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 20 01:18:44.481674 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 20 01:18:44.492501 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 20 01:18:44.497717 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 20 01:18:44.504543 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 20 01:18:44.511526 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 20 01:18:44.519213 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 20 01:18:44.526444 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 20 01:18:44.530696 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 20 01:18:44.533509 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 20 01:18:44.537807 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 20 01:18:44.538867 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:18:44.544820 jq[1862]: false
Jan 20 01:18:44.545530 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 20 01:18:44.551516 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 20 01:18:44.557531 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 20 01:18:44.563559 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 20 01:18:44.573091 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 20 01:18:44.575890 extend-filesystems[1863]: Found /dev/sda6
Jan 20 01:18:44.584189 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 20 01:18:44.592831 kernel: hv_utils: KVP IC version 4.0
Jan 20 01:18:44.579542 KVP[1864]: KVP starting; pid is:1864
Jan 20 01:18:44.592216 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 20 01:18:44.582258 chronyd[1854]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Jan 20 01:18:44.592531 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 20 01:18:44.590187 KVP[1864]: KVP LIC Version: 3.1
Jan 20 01:18:44.594317 systemd[1]: Starting update-engine.service - Update Engine...
Jan 20 01:18:44.599811 extend-filesystems[1863]: Found /dev/sda9
Jan 20 01:18:44.611938 extend-filesystems[1863]: Checking size of /dev/sda9
Jan 20 01:18:44.604623 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 20 01:18:44.615620 chronyd[1854]: Timezone right/UTC failed leap second check, ignoring
Jan 20 01:18:44.616102 systemd[1]: Started chronyd.service - NTP client/server.
Jan 20 01:18:44.615749 chronyd[1854]: Loaded seccomp filter (level 2)
Jan 20 01:18:44.621858 jq[1888]: true
Jan 20 01:18:44.624405 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 20 01:18:44.629952 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 20 01:18:44.630095 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 20 01:18:44.632668 systemd[1]: motdgen.service: Deactivated successfully.
Jan 20 01:18:44.632809 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 20 01:18:44.644922 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 20 01:18:44.649381 extend-filesystems[1863]: Old size kept for /dev/sda9
Jan 20 01:18:44.650050 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 20 01:18:44.669138 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 20 01:18:44.669293 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 20 01:18:44.676097 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 20 01:18:44.689602 update_engine[1881]: I20260120 01:18:44.689344 1881 main.cc:92] Flatcar Update Engine starting
Jan 20 01:18:44.696667 (ntainerd)[1904]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 20 01:18:44.699080 jq[1903]: true
Jan 20 01:18:44.718575 tar[1897]: linux-arm64/LICENSE
Jan 20 01:18:44.718746 tar[1897]: linux-arm64/helm
Jan 20 01:18:44.720966 systemd-logind[1878]: New seat seat0.
Jan 20 01:18:44.742920 systemd-logind[1878]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 20 01:18:44.743074 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 20 01:18:44.806582 bash[1944]: Updated "/home/core/.ssh/authorized_keys"
Jan 20 01:18:44.813138 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 20 01:18:44.821810 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 20 01:18:44.847329 dbus-daemon[1857]: [system] SELinux support is enabled
Jan 20 01:18:44.847475 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 20 01:18:44.858685 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 20 01:18:44.858713 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 20 01:18:44.864918 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 20 01:18:44.864941 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 20 01:18:44.865350 update_engine[1881]: I20260120 01:18:44.865309 1881 update_check_scheduler.cc:74] Next update check in 4m48s
Jan 20 01:18:44.871388 systemd[1]: Started update-engine.service - Update Engine.
Jan 20 01:18:44.875365 dbus-daemon[1857]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 20 01:18:44.878589 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 20 01:18:44.903014 sshd_keygen[1885]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 20 01:18:44.920799 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 20 01:18:44.927592 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 20 01:18:44.929682 coreos-metadata[1856]: Jan 20 01:18:44.929 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 20 01:18:44.934092 coreos-metadata[1856]: Jan 20 01:18:44.933 INFO Fetch successful
Jan 20 01:18:44.934936 coreos-metadata[1856]: Jan 20 01:18:44.934 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 20 01:18:44.935510 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 20 01:18:44.940515 coreos-metadata[1856]: Jan 20 01:18:44.940 INFO Fetch successful
Jan 20 01:18:44.940515 coreos-metadata[1856]: Jan 20 01:18:44.940 INFO Fetching http://168.63.129.16/machine/3e4fafe4-8e48-4d39-823a-9d79a4573d25/d5c06514%2De08b%2D4239%2D8ccc%2Dd0ab7d9c5446.%5Fci%2D4459.2.2%2Dn%2D4dd77badda?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 20 01:18:44.941915 coreos-metadata[1856]: Jan 20 01:18:44.941 INFO Fetch successful
Jan 20 01:18:44.942001 coreos-metadata[1856]: Jan 20 01:18:44.941 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 20 01:18:44.951945 coreos-metadata[1856]: Jan 20 01:18:44.950 INFO Fetch successful
Jan 20 01:18:44.957768 systemd[1]: issuegen.service: Deactivated successfully.
Jan 20 01:18:44.957934 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 20 01:18:44.969462 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 20 01:18:44.990077 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 20 01:18:44.999276 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 20 01:18:45.007830 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 20 01:18:45.019310 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 20 01:18:45.026406 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 20 01:18:45.027842 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 20 01:18:45.035246 systemd[1]: Reached target getty.target - Login Prompts.
Jan 20 01:18:45.088429 locksmithd[1996]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 20 01:18:45.181232 tar[1897]: linux-arm64/README.md
Jan 20 01:18:45.193280 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 20 01:18:45.416214 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:18:45.421606 (kubelet)[2043]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 01:18:45.520514 containerd[1904]: time="2026-01-20T01:18:45Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 20 01:18:45.521356 containerd[1904]: time="2026-01-20T01:18:45.521321968Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Jan 20 01:18:45.530823 containerd[1904]: time="2026-01-20T01:18:45.529833968Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.256µs"
Jan 20 01:18:45.530823 containerd[1904]: time="2026-01-20T01:18:45.529860576Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 20 01:18:45.530823 containerd[1904]: time="2026-01-20T01:18:45.529874456Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 20 01:18:45.530823 containerd[1904]: time="2026-01-20T01:18:45.530012616Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 20 01:18:45.530823 containerd[1904]: time="2026-01-20T01:18:45.530026376Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 20 01:18:45.530823 containerd[1904]: time="2026-01-20T01:18:45.530045584Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 20 01:18:45.530823 containerd[1904]: time="2026-01-20T01:18:45.530085240Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 20 01:18:45.530823 containerd[1904]: time="2026-01-20T01:18:45.530091944Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 20 01:18:45.530823 containerd[1904]: time="2026-01-20T01:18:45.530234608Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 20 01:18:45.530823 containerd[1904]: time="2026-01-20T01:18:45.530243944Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 20 01:18:45.530823 containerd[1904]: time="2026-01-20T01:18:45.530250384Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 20 01:18:45.530823 containerd[1904]: time="2026-01-20T01:18:45.530255016Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 20 01:18:45.531006 containerd[1904]: time="2026-01-20T01:18:45.530308104Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 20 01:18:45.531006 containerd[1904]: time="2026-01-20T01:18:45.530463736Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 20 01:18:45.531006 containerd[1904]: time="2026-01-20T01:18:45.530487992Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 20 01:18:45.531006 containerd[1904]: time="2026-01-20T01:18:45.530495952Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 20 01:18:45.531006 containerd[1904]: time="2026-01-20T01:18:45.530520560Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 20 01:18:45.531006 containerd[1904]: time="2026-01-20T01:18:45.530660288Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 20 01:18:45.531006 containerd[1904]: time="2026-01-20T01:18:45.530712192Z" level=info msg="metadata content store policy set" policy=shared
Jan 20 01:18:45.545053 containerd[1904]: time="2026-01-20T01:18:45.545027440Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 20 01:18:45.545103 containerd[1904]: time="2026-01-20T01:18:45.545066352Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 20 01:18:45.545103 containerd[1904]: time="2026-01-20T01:18:45.545082800Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 20 01:18:45.545103 containerd[1904]: time="2026-01-20T01:18:45.545092032Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 20 01:18:45.545103 containerd[1904]: time="2026-01-20T01:18:45.545099944Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 20 01:18:45.545171 containerd[1904]: time="2026-01-20T01:18:45.545106704Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 20 01:18:45.545171 containerd[1904]: time="2026-01-20T01:18:45.545115696Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 20 01:18:45.545171 containerd[1904]: time="2026-01-20T01:18:45.545126312Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 20 01:18:45.545171 containerd[1904]: time="2026-01-20T01:18:45.545136224Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 20 01:18:45.545171 containerd[1904]: time="2026-01-20T01:18:45.545142840Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 20 01:18:45.545171 containerd[1904]: time="2026-01-20T01:18:45.545148576Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 20 01:18:45.545171 containerd[1904]: time="2026-01-20T01:18:45.545157104Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 20 01:18:45.545270 containerd[1904]: time="2026-01-20T01:18:45.545252528Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 20 01:18:45.545300 containerd[1904]: time="2026-01-20T01:18:45.545288824Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 20 01:18:45.545317 containerd[1904]: time="2026-01-20T01:18:45.545303408Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 20 01:18:45.545317 containerd[1904]: time="2026-01-20T01:18:45.545311352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 20 01:18:45.545345 containerd[1904]: time="2026-01-20T01:18:45.545318400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 20 01:18:45.545345 containerd[1904]: time="2026-01-20T01:18:45.545325800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 20 01:18:45.545345 containerd[1904]: time="2026-01-20T01:18:45.545334040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 20 01:18:45.545345 containerd[1904]: time="2026-01-20T01:18:45.545340504Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 20 01:18:45.545396 containerd[1904]: time="2026-01-20T01:18:45.545347688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 20 01:18:45.545396 containerd[1904]: time="2026-01-20T01:18:45.545354080Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 20 01:18:45.545396 containerd[1904]: time="2026-01-20T01:18:45.545362776Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 20 01:18:45.545444 containerd[1904]: time="2026-01-20T01:18:45.545402416Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 20 01:18:45.545444 containerd[1904]: time="2026-01-20T01:18:45.545438896Z" level=info msg="Start snapshots syncer"
Jan 20 01:18:45.545467 containerd[1904]: time="2026-01-20T01:18:45.545453824Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 20 01:18:45.545713 containerd[1904]: time="2026-01-20T01:18:45.545680520Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 20 01:18:45.545799 containerd[1904]: time="2026-01-20T01:18:45.545723312Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 20 01:18:45.545799 containerd[1904]: time="2026-01-20T01:18:45.545758544Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 20 01:18:45.545879 containerd[1904]: time="2026-01-20T01:18:45.545850120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 20 01:18:45.545879 containerd[1904]: time="2026-01-20T01:18:45.545871248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 20 01:18:45.545879 containerd[1904]: time="2026-01-20T01:18:45.545878224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 20 01:18:45.545933 containerd[1904]: time="2026-01-20T01:18:45.545884272Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 20 01:18:45.545933 containerd[1904]: time="2026-01-20T01:18:45.545891968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 20 01:18:45.545933 containerd[1904]: time="2026-01-20T01:18:45.545902256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 20 01:18:45.545933 containerd[1904]: time="2026-01-20T01:18:45.545909328Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 20 01:18:45.545933 containerd[1904]: time="2026-01-20T01:18:45.545926528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 20 01:18:45.546010 containerd[1904]: time="2026-01-20T01:18:45.545935368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 20 01:18:45.546010 containerd[1904]: time="2026-01-20T01:18:45.545942224Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 20 01:18:45.546010 containerd[1904]: time="2026-01-20T01:18:45.545962224Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 20 01:18:45.546010 containerd[1904]: time="2026-01-20T01:18:45.545970440Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 20 01:18:45.546010 containerd[1904]: time="2026-01-20T01:18:45.545975872Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 20 01:18:45.546010 containerd[1904]: time="2026-01-20T01:18:45.545981208Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 20 01:18:45.546010 containerd[1904]: time="2026-01-20T01:18:45.545985992Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 20 01:18:45.546010 containerd[1904]: time="2026-01-20T01:18:45.545991144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 20 01:18:45.546010 containerd[1904]: time="2026-01-20T01:18:45.545997400Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 20 01:18:45.546010 containerd[1904]: time="2026-01-20T01:18:45.546008248Z" level=info msg="runtime interface created"
Jan 20 01:18:45.546010 containerd[1904]: time="2026-01-20T01:18:45.546012776Z" level=info msg="created NRI interface"
Jan 20 01:18:45.546250 containerd[1904]: time="2026-01-20T01:18:45.546018312Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 20 01:18:45.546250 containerd[1904]: time="2026-01-20T01:18:45.546026376Z" level=info msg="Connect containerd service"
Jan 20 01:18:45.546250 containerd[1904]: time="2026-01-20T01:18:45.546040264Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 20 01:18:45.546647 containerd[1904]: time="2026-01-20T01:18:45.546572808Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 20 01:18:45.771369 kubelet[2043]: E0120 01:18:45.771247 2043 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 01:18:45.773380 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 01:18:45.773611 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 01:18:45.774180 systemd[1]: kubelet.service: Consumed 539ms CPU time, 255.9M memory peak.
Jan 20 01:18:45.816055 containerd[1904]: time="2026-01-20T01:18:45.815999272Z" level=info msg="Start subscribing containerd event"
Jan 20 01:18:45.816121 containerd[1904]: time="2026-01-20T01:18:45.816061184Z" level=info msg="Start recovering state"
Jan 20 01:18:45.816308 containerd[1904]: time="2026-01-20T01:18:45.816139984Z" level=info msg="Start event monitor"
Jan 20 01:18:45.816308 containerd[1904]: time="2026-01-20T01:18:45.816156664Z" level=info msg="Start cni network conf syncer for default"
Jan 20 01:18:45.816308 containerd[1904]: time="2026-01-20T01:18:45.816162352Z" level=info msg="Start streaming server"
Jan 20 01:18:45.816308 containerd[1904]: time="2026-01-20T01:18:45.816168816Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 20 01:18:45.816308 containerd[1904]: time="2026-01-20T01:18:45.816175320Z" level=info msg="runtime interface starting up..."
Jan 20 01:18:45.816308 containerd[1904]: time="2026-01-20T01:18:45.816178920Z" level=info msg="starting plugins..."
Jan 20 01:18:45.816308 containerd[1904]: time="2026-01-20T01:18:45.816189568Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 20 01:18:45.816308 containerd[1904]: time="2026-01-20T01:18:45.816275496Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 20 01:18:45.816640 containerd[1904]: time="2026-01-20T01:18:45.816519512Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 20 01:18:45.816640 containerd[1904]: time="2026-01-20T01:18:45.816598160Z" level=info msg="containerd successfully booted in 0.296909s"
Jan 20 01:18:45.816744 systemd[1]: Started containerd.service - containerd container runtime.
Jan 20 01:18:45.823599 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 20 01:18:45.827875 systemd[1]: Startup finished in 1.700s (kernel) + 11.730s (initrd) + 10.619s (userspace) = 24.050s.
Jan 20 01:18:46.138584 login[2024]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:18:46.139565 login[2025]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:18:46.150479 systemd-logind[1878]: New session 2 of user core.
Jan 20 01:18:46.152476 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 20 01:18:46.153754 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 20 01:18:46.157437 systemd-logind[1878]: New session 1 of user core.
Jan 20 01:18:46.187045 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 20 01:18:46.189430 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 20 01:18:46.203029 (systemd)[2071]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 20 01:18:46.204988 systemd-logind[1878]: New session c1 of user core.
Jan 20 01:18:46.358110 systemd[2071]: Queued start job for default target default.target.
Jan 20 01:18:46.364087 systemd[2071]: Created slice app.slice - User Application Slice.
Jan 20 01:18:46.364109 systemd[2071]: Reached target paths.target - Paths.
Jan 20 01:18:46.364135 systemd[2071]: Reached target timers.target - Timers.
Jan 20 01:18:46.365056 systemd[2071]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 20 01:18:46.373189 systemd[2071]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 20 01:18:46.373228 systemd[2071]: Reached target sockets.target - Sockets.
Jan 20 01:18:46.373256 systemd[2071]: Reached target basic.target - Basic System.
Jan 20 01:18:46.373276 systemd[2071]: Reached target default.target - Main User Target.
Jan 20 01:18:46.373293 systemd[2071]: Startup finished in 164ms.
Jan 20 01:18:46.374464 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 20 01:18:46.378522 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 20 01:18:46.379105 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 20 01:18:46.573898 waagent[2021]: 2026-01-20T01:18:46.573774Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
Jan 20 01:18:46.578172 waagent[2021]: 2026-01-20T01:18:46.578135Z INFO Daemon Daemon OS: flatcar 4459.2.2
Jan 20 01:18:46.581520 waagent[2021]: 2026-01-20T01:18:46.581491Z INFO Daemon Daemon Python: 3.11.13
Jan 20 01:18:46.584815 waagent[2021]: 2026-01-20T01:18:46.584780Z INFO Daemon Daemon Run daemon
Jan 20 01:18:46.587860 waagent[2021]: 2026-01-20T01:18:46.587822Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.2'
Jan 20 01:18:46.594695 waagent[2021]: 2026-01-20T01:18:46.594667Z INFO Daemon Daemon Using waagent for provisioning
Jan 20 01:18:46.598574 waagent[2021]: 2026-01-20T01:18:46.598546Z INFO Daemon Daemon Activate resource disk
Jan 20 01:18:46.602139 waagent[2021]: 2026-01-20T01:18:46.602113Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jan 20 01:18:46.610295 waagent[2021]: 2026-01-20T01:18:46.610260Z INFO Daemon Daemon Found device: None
Jan 20 01:18:46.613628 waagent[2021]: 2026-01-20T01:18:46.613600Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jan 20 01:18:46.619693 waagent[2021]: 2026-01-20T01:18:46.619668Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jan 20 01:18:46.628193 waagent[2021]: 2026-01-20T01:18:46.628159Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 20 01:18:46.632504 waagent[2021]: 2026-01-20T01:18:46.632469Z INFO Daemon Daemon Running default provisioning handler
Jan 20 01:18:46.641732 waagent[2021]: 2026-01-20T01:18:46.641687Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jan 20 01:18:46.651313 waagent[2021]: 2026-01-20T01:18:46.651277Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jan 20 01:18:46.658396 waagent[2021]: 2026-01-20T01:18:46.658369Z INFO Daemon Daemon cloud-init is enabled: False
Jan 20 01:18:46.662096 waagent[2021]: 2026-01-20T01:18:46.662073Z INFO Daemon Daemon Copying ovf-env.xml
Jan 20 01:18:46.727486 waagent[2021]: 2026-01-20T01:18:46.727328Z INFO Daemon Daemon Successfully mounted dvd
Jan 20 01:18:46.753280 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jan 20 01:18:46.755085 waagent[2021]: 2026-01-20T01:18:46.755038Z INFO Daemon Daemon Detect protocol endpoint
Jan 20 01:18:46.758604 waagent[2021]: 2026-01-20T01:18:46.758573Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 20 01:18:46.762732 waagent[2021]: 2026-01-20T01:18:46.762701Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jan 20 01:18:46.767567 waagent[2021]: 2026-01-20T01:18:46.767540Z INFO Daemon Daemon Test for route to 168.63.129.16
Jan 20 01:18:46.771524 waagent[2021]: 2026-01-20T01:18:46.771495Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jan 20 01:18:46.775261 waagent[2021]: 2026-01-20T01:18:46.775235Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jan 20 01:18:46.815573 waagent[2021]: 2026-01-20T01:18:46.815538Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jan 20 01:18:46.820606 waagent[2021]: 2026-01-20T01:18:46.820582Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jan 20 01:18:46.824396 waagent[2021]: 2026-01-20T01:18:46.824373Z INFO Daemon Daemon Server preferred version:2015-04-05
Jan 20 01:18:46.948949 waagent[2021]: 2026-01-20T01:18:46.948887Z INFO Daemon Daemon Initializing goal state during protocol detection
Jan 20 01:18:46.954026 waagent[2021]: 2026-01-20T01:18:46.953994Z INFO Daemon Daemon Forcing an update of the goal state.
Jan 20 01:18:46.960430 waagent[2021]: 2026-01-20T01:18:46.960387Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 20 01:18:46.976039 waagent[2021]: 2026-01-20T01:18:46.976010Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177
Jan 20 01:18:46.980229 waagent[2021]: 2026-01-20T01:18:46.980197Z INFO Daemon
Jan 20 01:18:46.982267 waagent[2021]: 2026-01-20T01:18:46.982240Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: abb79f10-e96d-47d6-a9b7-4d47c81e1416 eTag: 15954368278934706285 source: Fabric]
Jan 20 01:18:46.990536 waagent[2021]: 2026-01-20T01:18:46.990506Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jan 20 01:18:46.995229 waagent[2021]: 2026-01-20T01:18:46.995202Z INFO Daemon
Jan 20 01:18:46.997289 waagent[2021]: 2026-01-20T01:18:46.997264Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jan 20 01:18:47.007619 waagent[2021]: 2026-01-20T01:18:47.007592Z INFO Daemon Daemon Downloading artifacts profile blob
Jan 20 01:18:47.061328 waagent[2021]: 2026-01-20T01:18:47.061280Z INFO Daemon Downloaded certificate {'thumbprint': '2329BE30F65D102C0005DAB5EA54DB904C30CEF5', 'hasPrivateKey': True}
Jan 20 01:18:47.069373 waagent[2021]: 2026-01-20T01:18:47.069339Z INFO Daemon Fetch goal state completed
Jan 20 01:18:47.077443 waagent[2021]: 2026-01-20T01:18:47.077363Z INFO Daemon Daemon Starting provisioning
Jan 20 01:18:47.081255 waagent[2021]: 2026-01-20T01:18:47.081225Z INFO Daemon Daemon Handle ovf-env.xml.
Jan 20 01:18:47.084583 waagent[2021]: 2026-01-20T01:18:47.084558Z INFO Daemon Daemon Set hostname [ci-4459.2.2-n-4dd77badda]
Jan 20 01:18:47.121984 waagent[2021]: 2026-01-20T01:18:47.121946Z INFO Daemon Daemon Publish hostname [ci-4459.2.2-n-4dd77badda]
Jan 20 01:18:47.126771 waagent[2021]: 2026-01-20T01:18:47.126737Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jan 20 01:18:47.131299 waagent[2021]: 2026-01-20T01:18:47.131271Z INFO Daemon Daemon Primary interface is [eth0]
Jan 20 01:18:47.140744 systemd-networkd[1474]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 01:18:47.140752 systemd-networkd[1474]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 01:18:47.140791 systemd-networkd[1474]: eth0: DHCP lease lost
Jan 20 01:18:47.141403 waagent[2021]: 2026-01-20T01:18:47.141366Z INFO Daemon Daemon Create user account if not exists
Jan 20 01:18:47.145433 waagent[2021]: 2026-01-20T01:18:47.145393Z INFO Daemon Daemon User core already exists, skip useradd
Jan 20 01:18:47.149888 waagent[2021]: 2026-01-20T01:18:47.149802Z INFO Daemon Daemon Configure sudoer
Jan 20 01:18:47.156803 waagent[2021]: 2026-01-20T01:18:47.156763Z INFO Daemon Daemon Configure sshd
Jan 20 01:18:47.163804 waagent[2021]: 2026-01-20T01:18:47.163767Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jan 20 01:18:47.173040 waagent[2021]: 2026-01-20T01:18:47.173012Z INFO Daemon Daemon Deploy ssh public key.
Jan 20 01:18:47.178304 systemd-networkd[1474]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 20 01:18:48.271026 waagent[2021]: 2026-01-20T01:18:48.270983Z INFO Daemon Daemon Provisioning complete
Jan 20 01:18:48.282260 waagent[2021]: 2026-01-20T01:18:48.282227Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jan 20 01:18:48.286864 waagent[2021]: 2026-01-20T01:18:48.286831Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jan 20 01:18:48.294246 waagent[2021]: 2026-01-20T01:18:48.294217Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
Jan 20 01:18:48.390211 waagent[2121]: 2026-01-20T01:18:48.390159Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
Jan 20 01:18:48.390571 waagent[2121]: 2026-01-20T01:18:48.390529Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.2
Jan 20 01:18:48.390607 waagent[2121]: 2026-01-20T01:18:48.390589Z INFO ExtHandler ExtHandler Python: 3.11.13
Jan 20 01:18:48.390645 waagent[2121]: 2026-01-20T01:18:48.390629Z INFO ExtHandler ExtHandler CPU Arch: aarch64
Jan 20 01:18:48.434119 waagent[2121]: 2026-01-20T01:18:48.434060Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0;
Jan 20 01:18:48.434241 waagent[2121]: 2026-01-20T01:18:48.434214Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 20 01:18:48.434273 waagent[2121]: 2026-01-20T01:18:48.434260Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 20 01:18:48.438999 waagent[2121]: 2026-01-20T01:18:48.438953Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 20 01:18:48.442891 waagent[2121]: 2026-01-20T01:18:48.442861Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177
Jan 20 01:18:48.443231 waagent[2121]: 2026-01-20T01:18:48.443200Z INFO ExtHandler
Jan 20 01:18:48.443282 waagent[2121]: 2026-01-20T01:18:48.443263Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: ea40ef14-27ee-40c8-ac45-e5330f39b13a eTag: 15954368278934706285 source: Fabric]
Jan 20 01:18:48.443534 waagent[2121]: 2026-01-20T01:18:48.443506Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jan 20 01:18:48.443922 waagent[2121]: 2026-01-20T01:18:48.443892Z INFO ExtHandler
Jan 20 01:18:48.443961 waagent[2121]: 2026-01-20T01:18:48.443944Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Jan 20 01:18:48.446580 waagent[2121]: 2026-01-20T01:18:48.446557Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jan 20 01:18:48.495852 waagent[2121]: 2026-01-20T01:18:48.495799Z INFO ExtHandler Downloaded certificate {'thumbprint': '2329BE30F65D102C0005DAB5EA54DB904C30CEF5', 'hasPrivateKey': True}
Jan 20 01:18:48.496176 waagent[2121]: 2026-01-20T01:18:48.496143Z INFO ExtHandler Fetch goal state completed
Jan 20 01:18:48.505654 waagent[2121]: 2026-01-20T01:18:48.505610Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025)
Jan 20 01:18:48.508700 waagent[2121]: 2026-01-20T01:18:48.508659Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2121
Jan 20 01:18:48.508796 waagent[2121]: 2026-01-20T01:18:48.508771Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Jan 20 01:18:48.509032 waagent[2121]: 2026-01-20T01:18:48.509006Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
Jan 20 01:18:48.510086 waagent[2121]: 2026-01-20T01:18:48.510050Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk']
Jan 20 01:18:48.510394 waagent[2121]: 2026-01-20T01:18:48.510366Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Jan 20 01:18:48.510528 waagent[2121]: 2026-01-20T01:18:48.510502Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Jan 20 01:18:48.510936 waagent[2121]: 2026-01-20T01:18:48.510907Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jan 20 01:18:48.545576 waagent[2121]: 2026-01-20T01:18:48.545518Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jan 20 01:18:48.545677 waagent[2121]: 2026-01-20T01:18:48.545649Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jan 20 01:18:48.549983 waagent[2121]: 2026-01-20T01:18:48.549945Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jan 20 01:18:48.554120 systemd[1]: Reload requested from client PID 2136 ('systemctl') (unit waagent.service)...
Jan 20 01:18:48.554134 systemd[1]: Reloading...
Jan 20 01:18:48.621447 zram_generator::config[2173]: No configuration found.
Jan 20 01:18:48.775540 systemd[1]: Reloading finished in 221 ms.
Jan 20 01:18:48.789563 waagent[2121]: 2026-01-20T01:18:48.787287Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Jan 20 01:18:48.789563 waagent[2121]: 2026-01-20T01:18:48.787455Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Jan 20 01:18:49.028764 waagent[2121]: 2026-01-20T01:18:49.028698Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jan 20 01:18:49.029034 waagent[2121]: 2026-01-20T01:18:49.029001Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Jan 20 01:18:49.029664 waagent[2121]: 2026-01-20T01:18:49.029622Z INFO ExtHandler ExtHandler Starting env monitor service.
Jan 20 01:18:49.029928 waagent[2121]: 2026-01-20T01:18:49.029892Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jan 20 01:18:49.030692 waagent[2121]: 2026-01-20T01:18:49.030109Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 20 01:18:49.030692 waagent[2121]: 2026-01-20T01:18:49.030177Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 20 01:18:49.030692 waagent[2121]: 2026-01-20T01:18:49.030331Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jan 20 01:18:49.030692 waagent[2121]: 2026-01-20T01:18:49.030468Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jan 20 01:18:49.030692 waagent[2121]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jan 20 01:18:49.030692 waagent[2121]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Jan 20 01:18:49.030692 waagent[2121]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jan 20 01:18:49.030692 waagent[2121]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jan 20 01:18:49.030692 waagent[2121]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 20 01:18:49.030692 waagent[2121]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 20 01:18:49.030961 waagent[2121]: 2026-01-20T01:18:49.030921Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jan 20 01:18:49.031000 waagent[2121]: 2026-01-20T01:18:49.030966Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jan 20 01:18:49.031231 waagent[2121]: 2026-01-20T01:18:49.031205Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 20 01:18:49.031351 waagent[2121]: 2026-01-20T01:18:49.031324Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 20 01:18:49.031549 waagent[2121]: 2026-01-20T01:18:49.031518Z INFO EnvHandler ExtHandler Configure routes
Jan 20 01:18:49.031644 waagent[2121]: 2026-01-20T01:18:49.031626Z INFO EnvHandler ExtHandler Gateway:None
Jan 20 01:18:49.031712 waagent[2121]: 2026-01-20T01:18:49.031698Z INFO EnvHandler ExtHandler Routes:None
Jan 20 01:18:49.032109 waagent[2121]: 2026-01-20T01:18:49.032079Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jan 20 01:18:49.032160 waagent[2121]: 2026-01-20T01:18:49.032121Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jan 20 01:18:49.032356 waagent[2121]: 2026-01-20T01:18:49.032328Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jan 20 01:18:49.036997 waagent[2121]: 2026-01-20T01:18:49.036963Z INFO ExtHandler ExtHandler
Jan 20 01:18:49.037275 waagent[2121]: 2026-01-20T01:18:49.037249Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 71689425-c331-40e3-aeed-911268efd18c correlation 34583598-48b3-4c59-a314-8ad2687f815a created: 2026-01-20T01:17:53.657059Z]
Jan 20 01:18:49.037918 waagent[2121]: 2026-01-20T01:18:49.037882Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jan 20 01:18:49.038404 waagent[2121]: 2026-01-20T01:18:49.038374Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Jan 20 01:18:49.059042 waagent[2121]: 2026-01-20T01:18:49.058998Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Jan 20 01:18:49.059042 waagent[2121]: Try `iptables -h' or 'iptables --help' for more information.)
Jan 20 01:18:49.059314 waagent[2121]: 2026-01-20T01:18:49.059282Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 6327B7F3-F72E-4857-8B16-A7F086C7B181;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Jan 20 01:18:49.102423 waagent[2121]: 2026-01-20T01:18:49.101840Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Jan 20 01:18:49.102423 waagent[2121]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 20 01:18:49.102423 waagent[2121]: pkts bytes target prot opt in out source destination
Jan 20 01:18:49.102423 waagent[2121]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 20 01:18:49.102423 waagent[2121]: pkts bytes target prot opt in out source destination
Jan 20 01:18:49.102423 waagent[2121]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 20 01:18:49.102423 waagent[2121]: pkts bytes target prot opt in out source destination
Jan 20 01:18:49.102423 waagent[2121]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 20 01:18:49.102423 waagent[2121]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 20 01:18:49.102423 waagent[2121]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 20 01:18:49.104591 waagent[2121]: 2026-01-20T01:18:49.104560Z INFO EnvHandler ExtHandler Current Firewall rules:
Jan 20 01:18:49.104591 waagent[2121]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 20 01:18:49.104591 waagent[2121]: pkts bytes target prot opt in out source destination
Jan 20 01:18:49.104591 waagent[2121]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 20 01:18:49.104591 waagent[2121]: pkts bytes target prot opt in out source destination
Jan 20 01:18:49.104591 waagent[2121]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 20 01:18:49.104591 waagent[2121]: pkts bytes target prot opt in out source destination
Jan 20 01:18:49.104591 waagent[2121]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 20 01:18:49.104591 waagent[2121]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 20 01:18:49.104591 waagent[2121]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 20 01:18:49.104932 waagent[2121]: 2026-01-20T01:18:49.104910Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Jan 20 01:18:49.105791 waagent[2121]: 2026-01-20T01:18:49.105765Z INFO MonitorHandler ExtHandler Network interfaces:
Jan 20 01:18:49.105791 waagent[2121]: Executing ['ip', '-a', '-o', 'link']:
Jan 20 01:18:49.105791 waagent[2121]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jan 20 01:18:49.105791 waagent[2121]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b4:6a:7f brd ff:ff:ff:ff:ff:ff
Jan 20 01:18:49.105791 waagent[2121]: 3: enP56234s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b4:6a:7f brd ff:ff:ff:ff:ff:ff\ altname enP56234p0s2
Jan 20 01:18:49.105791 waagent[2121]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jan 20 01:18:49.105791 waagent[2121]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jan 20 01:18:49.105791 waagent[2121]: 2: eth0 inet 10.200.20.24/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Jan 20 01:18:49.105791 waagent[2121]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jan 20 01:18:49.105791 waagent[2121]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jan 20 01:18:49.105791 waagent[2121]: 2: eth0 inet6 fe80::222:48ff:feb4:6a7f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 20 01:18:55.824861 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 20 01:18:55.827049 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:18:55.938123 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:18:55.946613 (kubelet)[2270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 01:18:56.056739 kubelet[2270]: E0120 01:18:56.056679 2270 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 01:18:56.059360 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 01:18:56.059570 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 01:18:56.060052 systemd[1]: kubelet.service: Consumed 107ms CPU time, 106.6M memory peak.
Jan 20 01:19:06.076450 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 20 01:19:06.078112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:19:06.162331 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:19:06.164944 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 01:19:06.285610 kubelet[2285]: E0120 01:19:06.285559 2285 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 01:19:06.287841 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 01:19:06.288041 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 01:19:06.288558 systemd[1]: kubelet.service: Consumed 102ms CPU time, 107.2M memory peak.
Jan 20 01:19:08.425168 chronyd[1854]: Selected source PHC0
Jan 20 01:19:10.287911 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 20 01:19:10.289831 systemd[1]: Started sshd@0-10.200.20.24:22-10.200.16.10:35876.service - OpenSSH per-connection server daemon (10.200.16.10:35876).
Jan 20 01:19:10.946476 sshd[2292]: Accepted publickey for core from 10.200.16.10 port 35876 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg
Jan 20 01:19:10.947238 sshd-session[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:19:10.950588 systemd-logind[1878]: New session 3 of user core.
Jan 20 01:19:10.957533 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 20 01:19:11.383035 systemd[1]: Started sshd@1-10.200.20.24:22-10.200.16.10:35878.service - OpenSSH per-connection server daemon (10.200.16.10:35878).
Jan 20 01:19:11.878550 sshd[2298]: Accepted publickey for core from 10.200.16.10 port 35878 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg
Jan 20 01:19:11.879563 sshd-session[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:19:11.882897 systemd-logind[1878]: New session 4 of user core.
Jan 20 01:19:11.889525 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 20 01:19:12.229339 sshd[2301]: Connection closed by 10.200.16.10 port 35878
Jan 20 01:19:12.229148 sshd-session[2298]: pam_unix(sshd:session): session closed for user core
Jan 20 01:19:12.233127 systemd[1]: sshd@1-10.200.20.24:22-10.200.16.10:35878.service: Deactivated successfully.
Jan 20 01:19:12.235029 systemd[1]: session-4.scope: Deactivated successfully.
Jan 20 01:19:12.237389 systemd-logind[1878]: Session 4 logged out. Waiting for processes to exit.
Jan 20 01:19:12.238589 systemd-logind[1878]: Removed session 4.
Jan 20 01:19:12.311596 systemd[1]: Started sshd@2-10.200.20.24:22-10.200.16.10:35886.service - OpenSSH per-connection server daemon (10.200.16.10:35886).
Jan 20 01:19:12.762462 sshd[2307]: Accepted publickey for core from 10.200.16.10 port 35886 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg
Jan 20 01:19:12.763476 sshd-session[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:19:12.766722 systemd-logind[1878]: New session 5 of user core.
Jan 20 01:19:12.773537 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 20 01:19:13.088441 sshd[2310]: Connection closed by 10.200.16.10 port 35886
Jan 20 01:19:13.088863 sshd-session[2307]: pam_unix(sshd:session): session closed for user core
Jan 20 01:19:13.092513 systemd[1]: sshd@2-10.200.20.24:22-10.200.16.10:35886.service: Deactivated successfully.
Jan 20 01:19:13.094553 systemd[1]: session-5.scope: Deactivated successfully.
Jan 20 01:19:13.095844 systemd-logind[1878]: Session 5 logged out. Waiting for processes to exit.
Jan 20 01:19:13.096779 systemd-logind[1878]: Removed session 5.
Jan 20 01:19:13.168595 systemd[1]: Started sshd@3-10.200.20.24:22-10.200.16.10:35890.service - OpenSSH per-connection server daemon (10.200.16.10:35890).
Jan 20 01:19:13.624264 sshd[2316]: Accepted publickey for core from 10.200.16.10 port 35890 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg
Jan 20 01:19:13.625259 sshd-session[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:19:13.628515 systemd-logind[1878]: New session 6 of user core.
Jan 20 01:19:13.634524 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 20 01:19:13.953525 sshd[2319]: Connection closed by 10.200.16.10 port 35890
Jan 20 01:19:13.954021 sshd-session[2316]: pam_unix(sshd:session): session closed for user core
Jan 20 01:19:13.957092 systemd[1]: sshd@3-10.200.20.24:22-10.200.16.10:35890.service: Deactivated successfully.
Jan 20 01:19:13.958609 systemd[1]: session-6.scope: Deactivated successfully.
Jan 20 01:19:13.959874 systemd-logind[1878]: Session 6 logged out. Waiting for processes to exit.
Jan 20 01:19:13.960766 systemd-logind[1878]: Removed session 6.
Jan 20 01:19:14.056494 systemd[1]: Started sshd@4-10.200.20.24:22-10.200.16.10:35906.service - OpenSSH per-connection server daemon (10.200.16.10:35906).
Jan 20 01:19:14.504865 sshd[2325]: Accepted publickey for core from 10.200.16.10 port 35906 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg
Jan 20 01:19:14.505863 sshd-session[2325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:19:14.509170 systemd-logind[1878]: New session 7 of user core.
Jan 20 01:19:14.520694 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 20 01:19:14.849538 sudo[2329]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 01:19:14.849741 sudo[2329]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:19:14.877880 sudo[2329]: pam_unix(sudo:session): session closed for user root Jan 20 01:19:14.954499 sshd[2328]: Connection closed by 10.200.16.10 port 35906 Jan 20 01:19:14.954953 sshd-session[2325]: pam_unix(sshd:session): session closed for user core Jan 20 01:19:14.957748 systemd[1]: sshd@4-10.200.20.24:22-10.200.16.10:35906.service: Deactivated successfully. Jan 20 01:19:14.961471 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 01:19:14.962535 systemd-logind[1878]: Session 7 logged out. Waiting for processes to exit. Jan 20 01:19:14.963850 systemd-logind[1878]: Removed session 7. Jan 20 01:19:15.039609 systemd[1]: Started sshd@5-10.200.20.24:22-10.200.16.10:35920.service - OpenSSH per-connection server daemon (10.200.16.10:35920). Jan 20 01:19:15.494095 sshd[2335]: Accepted publickey for core from 10.200.16.10 port 35920 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:19:15.494799 sshd-session[2335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:19:15.498820 systemd-logind[1878]: New session 8 of user core. Jan 20 01:19:15.504523 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 20 01:19:15.748180 sudo[2340]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 01:19:15.748507 sudo[2340]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:19:15.755032 sudo[2340]: pam_unix(sudo:session): session closed for user root Jan 20 01:19:15.758278 sudo[2339]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 20 01:19:15.758489 sudo[2339]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:19:15.764501 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 01:19:15.790189 augenrules[2362]: No rules Jan 20 01:19:15.791296 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 01:19:15.791482 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 01:19:15.793003 sudo[2339]: pam_unix(sudo:session): session closed for user root Jan 20 01:19:15.870941 sshd[2338]: Connection closed by 10.200.16.10 port 35920 Jan 20 01:19:15.870054 sshd-session[2335]: pam_unix(sshd:session): session closed for user core Jan 20 01:19:15.872941 systemd-logind[1878]: Session 8 logged out. Waiting for processes to exit. Jan 20 01:19:15.873385 systemd[1]: sshd@5-10.200.20.24:22-10.200.16.10:35920.service: Deactivated successfully. Jan 20 01:19:15.874836 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 01:19:15.876273 systemd-logind[1878]: Removed session 8. Jan 20 01:19:15.961593 systemd[1]: Started sshd@6-10.200.20.24:22-10.200.16.10:35922.service - OpenSSH per-connection server daemon (10.200.16.10:35922). Jan 20 01:19:16.324795 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 01:19:16.325996 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 01:19:16.450191 sshd[2371]: Accepted publickey for core from 10.200.16.10 port 35922 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:19:16.450884 sshd-session[2371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:19:16.454608 systemd-logind[1878]: New session 9 of user core. Jan 20 01:19:16.463535 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 01:19:16.721533 sudo[2378]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 01:19:16.721767 sudo[2378]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:19:16.801292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:19:16.804122 (kubelet)[2388]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:19:16.828968 kubelet[2388]: E0120 01:19:16.828916 2388 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:19:16.830668 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:19:16.830778 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:19:16.831192 systemd[1]: kubelet.service: Consumed 102ms CPU time, 105M memory peak. Jan 20 01:19:18.220361 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 20 01:19:18.228779 (dockerd)[2407]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 01:19:19.345441 dockerd[2407]: time="2026-01-20T01:19:19.345045560Z" level=info msg="Starting up" Jan 20 01:19:19.346007 dockerd[2407]: time="2026-01-20T01:19:19.345985705Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 01:19:19.355102 dockerd[2407]: time="2026-01-20T01:19:19.355071111Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 01:19:19.431660 dockerd[2407]: time="2026-01-20T01:19:19.431634355Z" level=info msg="Loading containers: start." Jan 20 01:19:19.460428 kernel: Initializing XFRM netlink socket Jan 20 01:19:19.731263 systemd-networkd[1474]: docker0: Link UP Jan 20 01:19:19.747022 dockerd[2407]: time="2026-01-20T01:19:19.746647728Z" level=info msg="Loading containers: done." Jan 20 01:19:19.755810 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck648070556-merged.mount: Deactivated successfully. 
Jan 20 01:19:19.770902 dockerd[2407]: time="2026-01-20T01:19:19.770875079Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 01:19:19.771094 dockerd[2407]: time="2026-01-20T01:19:19.771061998Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 01:19:19.771298 dockerd[2407]: time="2026-01-20T01:19:19.771242180Z" level=info msg="Initializing buildkit" Jan 20 01:19:19.815087 dockerd[2407]: time="2026-01-20T01:19:19.815062592Z" level=info msg="Completed buildkit initialization" Jan 20 01:19:19.820221 dockerd[2407]: time="2026-01-20T01:19:19.820057814Z" level=info msg="Daemon has completed initialization" Jan 20 01:19:19.820358 dockerd[2407]: time="2026-01-20T01:19:19.820310031Z" level=info msg="API listen on /run/docker.sock" Jan 20 01:19:19.820549 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 01:19:20.427248 containerd[1904]: time="2026-01-20T01:19:20.427209200Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 20 01:19:21.193744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3547975074.mount: Deactivated successfully. 
Jan 20 01:19:22.460401 containerd[1904]: time="2026-01-20T01:19:22.459846051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:22.462810 containerd[1904]: time="2026-01-20T01:19:22.462791442Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982" Jan 20 01:19:22.465813 containerd[1904]: time="2026-01-20T01:19:22.465794195Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:22.470445 containerd[1904]: time="2026-01-20T01:19:22.470402628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:22.471000 containerd[1904]: time="2026-01-20T01:19:22.470834939Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.043474326s" Jan 20 01:19:22.471000 containerd[1904]: time="2026-01-20T01:19:22.470866797Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 20 01:19:22.471522 containerd[1904]: time="2026-01-20T01:19:22.471391727Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 20 01:19:23.747436 containerd[1904]: time="2026-01-20T01:19:23.747370390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:23.751366 containerd[1904]: time="2026-01-20T01:19:23.751334281Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086" Jan 20 01:19:23.754830 containerd[1904]: time="2026-01-20T01:19:23.754794682Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:23.759860 containerd[1904]: time="2026-01-20T01:19:23.759826417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:23.761054 containerd[1904]: time="2026-01-20T01:19:23.760952793Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.289526681s" Jan 20 01:19:23.761054 containerd[1904]: time="2026-01-20T01:19:23.760978258Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 20 01:19:23.761581 containerd[1904]: time="2026-01-20T01:19:23.761436210Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 20 01:19:24.886445 containerd[1904]: time="2026-01-20T01:19:24.886063819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:24.889655 containerd[1904]: 
time="2026-01-20T01:19:24.889630761Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747" Jan 20 01:19:24.892672 containerd[1904]: time="2026-01-20T01:19:24.892639732Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:24.905044 containerd[1904]: time="2026-01-20T01:19:24.904513619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:24.905044 containerd[1904]: time="2026-01-20T01:19:24.904938042Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.143477255s" Jan 20 01:19:24.905044 containerd[1904]: time="2026-01-20T01:19:24.904960122Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 20 01:19:24.905805 containerd[1904]: time="2026-01-20T01:19:24.905789440Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 20 01:19:25.120998 waagent[2121]: 2026-01-20T01:19:25.120946Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 20 01:19:25.129046 waagent[2121]: 2026-01-20T01:19:25.129011Z INFO ExtHandler Jan 20 01:19:25.129116 waagent[2121]: 2026-01-20T01:19:25.129096Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 416e5993-133a-4fa5-b36c-172ed72551fa eTag: 5720864504721942739 source: Fabric] Jan 20 
01:19:25.129346 waagent[2121]: 2026-01-20T01:19:25.129319Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 20 01:19:25.129838 waagent[2121]: 2026-01-20T01:19:25.129805Z INFO ExtHandler Jan 20 01:19:25.129884 waagent[2121]: 2026-01-20T01:19:25.129866Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 20 01:19:25.185647 waagent[2121]: 2026-01-20T01:19:25.185571Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 20 01:19:25.226320 waagent[2121]: 2026-01-20T01:19:25.226278Z INFO ExtHandler Downloaded certificate {'thumbprint': '2329BE30F65D102C0005DAB5EA54DB904C30CEF5', 'hasPrivateKey': True} Jan 20 01:19:25.226781 waagent[2121]: 2026-01-20T01:19:25.226746Z INFO ExtHandler Fetch goal state completed Jan 20 01:19:25.227181 waagent[2121]: 2026-01-20T01:19:25.227152Z INFO ExtHandler ExtHandler Jan 20 01:19:25.227310 waagent[2121]: 2026-01-20T01:19:25.227284Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: c995e42b-d2e5-4725-a274-9857cc0e348d correlation 34583598-48b3-4c59-a314-8ad2687f815a created: 2026-01-20T01:19:17.859459Z] Jan 20 01:19:25.227696 waagent[2121]: 2026-01-20T01:19:25.227626Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 20 01:19:25.228141 waagent[2121]: 2026-01-20T01:19:25.228112Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 20 01:19:25.928940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3953882785.mount: Deactivated successfully. 
Jan 20 01:19:26.172340 containerd[1904]: time="2026-01-20T01:19:26.172289246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:26.175438 containerd[1904]: time="2026-01-20T01:19:26.175297577Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724" Jan 20 01:19:26.178344 containerd[1904]: time="2026-01-20T01:19:26.178304580Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:26.182494 containerd[1904]: time="2026-01-20T01:19:26.181874418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:26.182494 containerd[1904]: time="2026-01-20T01:19:26.182095986Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.276226847s" Jan 20 01:19:26.182494 containerd[1904]: time="2026-01-20T01:19:26.182121771Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 20 01:19:26.182705 containerd[1904]: time="2026-01-20T01:19:26.182687543Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 20 01:19:26.951449 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 20 01:19:26.955562 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 01:19:26.960981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2102768076.mount: Deactivated successfully. Jan 20 01:19:27.082795 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:19:27.095721 (kubelet)[2706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:19:27.118336 kubelet[2706]: E0120 01:19:27.118305 2706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:19:27.120124 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:19:27.120222 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:19:27.120609 systemd[1]: kubelet.service: Consumed 101ms CPU time, 106.9M memory peak. Jan 20 01:19:28.093428 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Jan 20 01:19:28.979276 containerd[1904]: time="2026-01-20T01:19:28.979225717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:28.982400 containerd[1904]: time="2026-01-20T01:19:28.982232320Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 20 01:19:28.985636 containerd[1904]: time="2026-01-20T01:19:28.985612352Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:28.989892 containerd[1904]: time="2026-01-20T01:19:28.989867495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:28.990425 containerd[1904]: time="2026-01-20T01:19:28.990391378Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.80762716s" Jan 20 01:19:28.990608 containerd[1904]: time="2026-01-20T01:19:28.990502798Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 20 01:19:28.990958 containerd[1904]: time="2026-01-20T01:19:28.990920029Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 01:19:29.567263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4031893645.mount: Deactivated successfully. 
Jan 20 01:19:29.596769 containerd[1904]: time="2026-01-20T01:19:29.596729576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:19:29.600378 containerd[1904]: time="2026-01-20T01:19:29.600349417Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 20 01:19:29.603640 containerd[1904]: time="2026-01-20T01:19:29.603614061Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:19:29.608224 containerd[1904]: time="2026-01-20T01:19:29.608197864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:19:29.609219 containerd[1904]: time="2026-01-20T01:19:29.608473865Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 617.52834ms" Jan 20 01:19:29.609219 containerd[1904]: time="2026-01-20T01:19:29.608495738Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 20 01:19:29.609333 containerd[1904]: time="2026-01-20T01:19:29.609302895Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 20 01:19:29.778859 update_engine[1881]: I20260120 01:19:29.778812 1881 update_attempter.cc:509] Updating 
boot flags... Jan 20 01:19:30.282310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2279153854.mount: Deactivated successfully. Jan 20 01:19:32.913053 containerd[1904]: time="2026-01-20T01:19:32.912996672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:32.917095 containerd[1904]: time="2026-01-20T01:19:32.917070604Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Jan 20 01:19:32.920806 containerd[1904]: time="2026-01-20T01:19:32.920782623Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:32.926151 containerd[1904]: time="2026-01-20T01:19:32.926125633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:32.927778 containerd[1904]: time="2026-01-20T01:19:32.926488042Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.317161297s" Jan 20 01:19:32.927778 containerd[1904]: time="2026-01-20T01:19:32.926511643Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 20 01:19:35.517542 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:19:35.517999 systemd[1]: kubelet.service: Consumed 101ms CPU time, 106.9M memory peak. 
Jan 20 01:19:35.519691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:19:35.538576 systemd[1]: Reload requested from client PID 2909 ('systemctl') (unit session-9.scope)... Jan 20 01:19:35.538588 systemd[1]: Reloading... Jan 20 01:19:35.617576 zram_generator::config[2958]: No configuration found. Jan 20 01:19:35.768127 systemd[1]: Reloading finished in 229 ms. Jan 20 01:19:35.804914 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 01:19:35.805088 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 01:19:35.805342 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:19:35.807464 systemd[1]: kubelet.service: Consumed 59ms CPU time, 84.4M memory peak. Jan 20 01:19:35.810786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:19:36.004438 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:19:36.010726 (kubelet)[3019]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 01:19:36.036214 kubelet[3019]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:19:36.036214 kubelet[3019]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 01:19:36.036214 kubelet[3019]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 01:19:36.036214 kubelet[3019]: I0120 01:19:36.036045 3019 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 01:19:36.221436 kubelet[3019]: I0120 01:19:36.221389 3019 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 01:19:36.221436 kubelet[3019]: I0120 01:19:36.221439 3019 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 01:19:36.221654 kubelet[3019]: I0120 01:19:36.221634 3019 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 01:19:36.240337 kubelet[3019]: E0120 01:19:36.240303 3019 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:19:36.242427 kubelet[3019]: I0120 01:19:36.241749 3019 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 01:19:36.247248 kubelet[3019]: I0120 01:19:36.247234 3019 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 01:19:36.249851 kubelet[3019]: I0120 01:19:36.249837 3019 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 01:19:36.250107 kubelet[3019]: I0120 01:19:36.250088 3019 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 01:19:36.250280 kubelet[3019]: I0120 01:19:36.250164 3019 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-n-4dd77badda","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 01:19:36.250394 kubelet[3019]: I0120 01:19:36.250383 3019 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 20 01:19:36.250466 kubelet[3019]: I0120 01:19:36.250457 3019 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 01:19:36.250625 kubelet[3019]: I0120 01:19:36.250612 3019 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:19:36.253318 kubelet[3019]: I0120 01:19:36.253301 3019 kubelet.go:446] "Attempting to sync node with API server" Jan 20 01:19:36.253399 kubelet[3019]: I0120 01:19:36.253389 3019 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 01:19:36.253480 kubelet[3019]: I0120 01:19:36.253472 3019 kubelet.go:352] "Adding apiserver pod source" Jan 20 01:19:36.253533 kubelet[3019]: I0120 01:19:36.253524 3019 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 01:19:36.256526 kubelet[3019]: W0120 01:19:36.256488 3019 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-n-4dd77badda&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Jan 20 01:19:36.256585 kubelet[3019]: E0120 01:19:36.256531 3019 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-n-4dd77badda&limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:19:36.256811 kubelet[3019]: W0120 01:19:36.256780 3019 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Jan 20 01:19:36.256870 kubelet[3019]: E0120 01:19:36.256813 3019 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:19:36.257047 kubelet[3019]: I0120 01:19:36.257031 3019 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 20 01:19:36.257314 kubelet[3019]: I0120 01:19:36.257297 3019 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 01:19:36.257348 kubelet[3019]: W0120 01:19:36.257342 3019 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 01:19:36.259992 kubelet[3019]: I0120 01:19:36.259967 3019 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 01:19:36.259992 kubelet[3019]: I0120 01:19:36.259998 3019 server.go:1287] "Started kubelet" Jan 20 01:19:36.261842 kubelet[3019]: I0120 01:19:36.261820 3019 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 01:19:36.262499 kubelet[3019]: I0120 01:19:36.262487 3019 server.go:479] "Adding debug handlers to kubelet server" Jan 20 01:19:36.262668 kubelet[3019]: I0120 01:19:36.262620 3019 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 01:19:36.262896 kubelet[3019]: I0120 01:19:36.262872 3019 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 01:19:36.263078 kubelet[3019]: E0120 01:19:36.262999 3019 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.24:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.24:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-n-4dd77badda.188c4bb0c61e9854 default 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-n-4dd77badda,UID:ci-4459.2.2-n-4dd77badda,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-n-4dd77badda,},FirstTimestamp:2026-01-20 01:19:36.259983444 +0000 UTC m=+0.246683973,LastTimestamp:2026-01-20 01:19:36.259983444 +0000 UTC m=+0.246683973,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-n-4dd77badda,}" Jan 20 01:19:36.264137 kubelet[3019]: I0120 01:19:36.264015 3019 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 01:19:36.265216 kubelet[3019]: I0120 01:19:36.265196 3019 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 01:19:36.267429 kubelet[3019]: E0120 01:19:36.267395 3019 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-4dd77badda\" not found" Jan 20 01:19:36.267487 kubelet[3019]: I0120 01:19:36.267445 3019 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 01:19:36.267582 kubelet[3019]: I0120 01:19:36.267564 3019 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 01:19:36.267615 kubelet[3019]: I0120 01:19:36.267606 3019 reconciler.go:26] "Reconciler: start to sync state" Jan 20 01:19:36.267960 kubelet[3019]: W0120 01:19:36.267811 3019 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Jan 20 01:19:36.267960 kubelet[3019]: E0120 01:19:36.267842 3019 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:19:36.268035 kubelet[3019]: E0120 01:19:36.268013 3019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-4dd77badda?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="200ms" Jan 20 01:19:36.268505 kubelet[3019]: E0120 01:19:36.268484 3019 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 01:19:36.268697 kubelet[3019]: I0120 01:19:36.268680 3019 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 01:19:36.270406 kubelet[3019]: I0120 01:19:36.269516 3019 factory.go:221] Registration of the containerd container factory successfully Jan 20 01:19:36.270406 kubelet[3019]: I0120 01:19:36.269528 3019 factory.go:221] Registration of the systemd container factory successfully Jan 20 01:19:36.294457 kubelet[3019]: I0120 01:19:36.293881 3019 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 01:19:36.294457 kubelet[3019]: I0120 01:19:36.293895 3019 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 01:19:36.294457 kubelet[3019]: I0120 01:19:36.293909 3019 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:19:36.368363 kubelet[3019]: E0120 01:19:36.368335 3019 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-4dd77badda\" not found" Jan 20 01:19:36.468602 kubelet[3019]: E0120 01:19:36.468572 3019 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"ci-4459.2.2-n-4dd77badda\" not found" Jan 20 01:19:36.468958 kubelet[3019]: E0120 01:19:36.468928 3019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-4dd77badda?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="400ms" Jan 20 01:19:36.553884 kubelet[3019]: I0120 01:19:36.553794 3019 policy_none.go:49] "None policy: Start" Jan 20 01:19:36.553884 kubelet[3019]: I0120 01:19:36.553836 3019 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 01:19:36.553884 kubelet[3019]: I0120 01:19:36.553849 3019 state_mem.go:35] "Initializing new in-memory state store" Jan 20 01:19:36.561993 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 01:19:36.568703 kubelet[3019]: E0120 01:19:36.568676 3019 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-4dd77badda\" not found" Jan 20 01:19:36.570532 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 01:19:36.575433 kubelet[3019]: I0120 01:19:36.575382 3019 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 01:19:36.576980 kubelet[3019]: I0120 01:19:36.576865 3019 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 01:19:36.576980 kubelet[3019]: I0120 01:19:36.576892 3019 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 01:19:36.576980 kubelet[3019]: I0120 01:19:36.576910 3019 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 01:19:36.576980 kubelet[3019]: I0120 01:19:36.576915 3019 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 01:19:36.576980 kubelet[3019]: E0120 01:19:36.576947 3019 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:19:36.577911 kubelet[3019]: W0120 01:19:36.577775 3019 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Jan 20 01:19:36.577911 kubelet[3019]: E0120 01:19:36.577802 3019 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:19:36.579100 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 01:19:36.581328 kubelet[3019]: I0120 01:19:36.580999 3019 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 01:19:36.581328 kubelet[3019]: I0120 01:19:36.581141 3019 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 01:19:36.581328 kubelet[3019]: I0120 01:19:36.581149 3019 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 01:19:36.582148 kubelet[3019]: I0120 01:19:36.582136 3019 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 01:19:36.583168 kubelet[3019]: E0120 01:19:36.583155 3019 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 01:19:36.583269 kubelet[3019]: E0120 01:19:36.583258 3019 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-n-4dd77badda\" not found" Jan 20 01:19:36.682698 kubelet[3019]: I0120 01:19:36.682668 3019 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-4dd77badda" Jan 20 01:19:36.683210 kubelet[3019]: E0120 01:19:36.683100 3019 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4459.2.2-n-4dd77badda" Jan 20 01:19:36.684994 systemd[1]: Created slice kubepods-burstable-pod88c17379893b66ca5e16a9f342e6f0b0.slice - libcontainer container kubepods-burstable-pod88c17379893b66ca5e16a9f342e6f0b0.slice. Jan 20 01:19:36.695918 kubelet[3019]: E0120 01:19:36.695901 3019 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-4dd77badda\" not found" node="ci-4459.2.2-n-4dd77badda" Jan 20 01:19:36.698258 systemd[1]: Created slice kubepods-burstable-pod9ead58db5f4259b27815f9ad4b03cdac.slice - libcontainer container kubepods-burstable-pod9ead58db5f4259b27815f9ad4b03cdac.slice. Jan 20 01:19:36.700220 kubelet[3019]: E0120 01:19:36.700204 3019 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-4dd77badda\" not found" node="ci-4459.2.2-n-4dd77badda" Jan 20 01:19:36.701347 systemd[1]: Created slice kubepods-burstable-pod4b93d1b8963a8b03c51b6a13605a0a1d.slice - libcontainer container kubepods-burstable-pod4b93d1b8963a8b03c51b6a13605a0a1d.slice. 
Jan 20 01:19:36.702905 kubelet[3019]: E0120 01:19:36.702816 3019 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-4dd77badda\" not found" node="ci-4459.2.2-n-4dd77badda" Jan 20 01:19:36.769474 kubelet[3019]: I0120 01:19:36.769401 3019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ead58db5f4259b27815f9ad4b03cdac-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-4dd77badda\" (UID: \"9ead58db5f4259b27815f9ad4b03cdac\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-4dd77badda" Jan 20 01:19:36.769652 kubelet[3019]: I0120 01:19:36.769535 3019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ead58db5f4259b27815f9ad4b03cdac-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-n-4dd77badda\" (UID: \"9ead58db5f4259b27815f9ad4b03cdac\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-4dd77badda" Jan 20 01:19:36.769652 kubelet[3019]: I0120 01:19:36.769553 3019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88c17379893b66ca5e16a9f342e6f0b0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-n-4dd77badda\" (UID: \"88c17379893b66ca5e16a9f342e6f0b0\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-4dd77badda" Jan 20 01:19:36.769652 kubelet[3019]: I0120 01:19:36.769565 3019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ead58db5f4259b27815f9ad4b03cdac-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-4dd77badda\" (UID: \"9ead58db5f4259b27815f9ad4b03cdac\") " 
pod="kube-system/kube-controller-manager-ci-4459.2.2-n-4dd77badda" Jan 20 01:19:36.769789 kubelet[3019]: I0120 01:19:36.769575 3019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9ead58db5f4259b27815f9ad4b03cdac-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-n-4dd77badda\" (UID: \"9ead58db5f4259b27815f9ad4b03cdac\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-4dd77badda" Jan 20 01:19:36.769789 kubelet[3019]: I0120 01:19:36.769745 3019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9ead58db5f4259b27815f9ad4b03cdac-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-n-4dd77badda\" (UID: \"9ead58db5f4259b27815f9ad4b03cdac\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-4dd77badda" Jan 20 01:19:36.769789 kubelet[3019]: I0120 01:19:36.769759 3019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b93d1b8963a8b03c51b6a13605a0a1d-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-n-4dd77badda\" (UID: \"4b93d1b8963a8b03c51b6a13605a0a1d\") " pod="kube-system/kube-scheduler-ci-4459.2.2-n-4dd77badda" Jan 20 01:19:36.769789 kubelet[3019]: I0120 01:19:36.769769 3019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88c17379893b66ca5e16a9f342e6f0b0-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-n-4dd77badda\" (UID: \"88c17379893b66ca5e16a9f342e6f0b0\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-4dd77badda" Jan 20 01:19:36.769900 kubelet[3019]: I0120 01:19:36.769878 3019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/88c17379893b66ca5e16a9f342e6f0b0-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-n-4dd77badda\" (UID: \"88c17379893b66ca5e16a9f342e6f0b0\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-4dd77badda" Jan 20 01:19:36.869937 kubelet[3019]: E0120 01:19:36.869905 3019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-4dd77badda?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="800ms" Jan 20 01:19:36.885059 kubelet[3019]: I0120 01:19:36.885029 3019 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-4dd77badda" Jan 20 01:19:36.885290 kubelet[3019]: E0120 01:19:36.885263 3019 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4459.2.2-n-4dd77badda" Jan 20 01:19:36.997659 containerd[1904]: time="2026-01-20T01:19:36.997616297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-n-4dd77badda,Uid:88c17379893b66ca5e16a9f342e6f0b0,Namespace:kube-system,Attempt:0,}" Jan 20 01:19:37.001009 containerd[1904]: time="2026-01-20T01:19:37.000982493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-n-4dd77badda,Uid:9ead58db5f4259b27815f9ad4b03cdac,Namespace:kube-system,Attempt:0,}" Jan 20 01:19:37.003594 containerd[1904]: time="2026-01-20T01:19:37.003568204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-n-4dd77badda,Uid:4b93d1b8963a8b03c51b6a13605a0a1d,Namespace:kube-system,Attempt:0,}" Jan 20 01:19:37.053438 containerd[1904]: time="2026-01-20T01:19:37.053114095Z" level=info msg="connecting to shim a84a4099001f308a5532aacfed2dff3090c4ca9db0c9a4b33903b6a257f3359e" 
address="unix:///run/containerd/s/edeaa0057ea19ccab140f93b9d356f6b634135dcf87a140e56d42fd9ee601d89" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:19:37.072567 systemd[1]: Started cri-containerd-a84a4099001f308a5532aacfed2dff3090c4ca9db0c9a4b33903b6a257f3359e.scope - libcontainer container a84a4099001f308a5532aacfed2dff3090c4ca9db0c9a4b33903b6a257f3359e. Jan 20 01:19:37.097691 containerd[1904]: time="2026-01-20T01:19:37.097659843Z" level=info msg="connecting to shim 6d2c029bf4e16e95d5fb91cdfdb4047b6541f74612918e31439527f1e6a8cff8" address="unix:///run/containerd/s/33eeface4b00a05e94d1001f7ad9c4a4427149772e9a37ac95da5a516f4353ae" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:19:37.103007 containerd[1904]: time="2026-01-20T01:19:37.102983734Z" level=info msg="connecting to shim b567ea2f3e0026cbd8c07aaf800272f01e3441f3d1ca241f106acc1a8d08a278" address="unix:///run/containerd/s/402908da69f268283888af98e3547ab93d35ae4757ecce0f932cb4909bfb31f6" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:19:37.115052 kubelet[3019]: W0120 01:19:37.114997 3019 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Jan 20 01:19:37.115505 kubelet[3019]: E0120 01:19:37.115058 3019 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:19:37.117555 containerd[1904]: time="2026-01-20T01:19:37.116998553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-n-4dd77badda,Uid:88c17379893b66ca5e16a9f342e6f0b0,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"a84a4099001f308a5532aacfed2dff3090c4ca9db0c9a4b33903b6a257f3359e\"" Jan 20 01:19:37.122886 containerd[1904]: time="2026-01-20T01:19:37.122631527Z" level=info msg="CreateContainer within sandbox \"a84a4099001f308a5532aacfed2dff3090c4ca9db0c9a4b33903b6a257f3359e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 01:19:37.124553 systemd[1]: Started cri-containerd-6d2c029bf4e16e95d5fb91cdfdb4047b6541f74612918e31439527f1e6a8cff8.scope - libcontainer container 6d2c029bf4e16e95d5fb91cdfdb4047b6541f74612918e31439527f1e6a8cff8. Jan 20 01:19:37.127787 systemd[1]: Started cri-containerd-b567ea2f3e0026cbd8c07aaf800272f01e3441f3d1ca241f106acc1a8d08a278.scope - libcontainer container b567ea2f3e0026cbd8c07aaf800272f01e3441f3d1ca241f106acc1a8d08a278. Jan 20 01:19:37.151118 containerd[1904]: time="2026-01-20T01:19:37.151087036Z" level=info msg="Container 27b28145381ac9dd13e5954db199e73d75cd65d376b74deb2d9bd94047dd9d4f: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:19:37.165627 containerd[1904]: time="2026-01-20T01:19:37.165595953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-n-4dd77badda,Uid:9ead58db5f4259b27815f9ad4b03cdac,Namespace:kube-system,Attempt:0,} returns sandbox id \"b567ea2f3e0026cbd8c07aaf800272f01e3441f3d1ca241f106acc1a8d08a278\"" Jan 20 01:19:37.167228 containerd[1904]: time="2026-01-20T01:19:37.167200356Z" level=info msg="CreateContainer within sandbox \"b567ea2f3e0026cbd8c07aaf800272f01e3441f3d1ca241f106acc1a8d08a278\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 01:19:37.182641 containerd[1904]: time="2026-01-20T01:19:37.182608953Z" level=info msg="CreateContainer within sandbox \"a84a4099001f308a5532aacfed2dff3090c4ca9db0c9a4b33903b6a257f3359e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"27b28145381ac9dd13e5954db199e73d75cd65d376b74deb2d9bd94047dd9d4f\"" Jan 20 01:19:37.183142 containerd[1904]: 
time="2026-01-20T01:19:37.182909268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-n-4dd77badda,Uid:4b93d1b8963a8b03c51b6a13605a0a1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d2c029bf4e16e95d5fb91cdfdb4047b6541f74612918e31439527f1e6a8cff8\"" Jan 20 01:19:37.183224 containerd[1904]: time="2026-01-20T01:19:37.183175070Z" level=info msg="StartContainer for \"27b28145381ac9dd13e5954db199e73d75cd65d376b74deb2d9bd94047dd9d4f\"" Jan 20 01:19:37.183907 containerd[1904]: time="2026-01-20T01:19:37.183879536Z" level=info msg="connecting to shim 27b28145381ac9dd13e5954db199e73d75cd65d376b74deb2d9bd94047dd9d4f" address="unix:///run/containerd/s/edeaa0057ea19ccab140f93b9d356f6b634135dcf87a140e56d42fd9ee601d89" protocol=ttrpc version=3 Jan 20 01:19:37.185989 containerd[1904]: time="2026-01-20T01:19:37.185689011Z" level=info msg="CreateContainer within sandbox \"6d2c029bf4e16e95d5fb91cdfdb4047b6541f74612918e31439527f1e6a8cff8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 01:19:37.203541 systemd[1]: Started cri-containerd-27b28145381ac9dd13e5954db199e73d75cd65d376b74deb2d9bd94047dd9d4f.scope - libcontainer container 27b28145381ac9dd13e5954db199e73d75cd65d376b74deb2d9bd94047dd9d4f. 
Jan 20 01:19:37.205351 containerd[1904]: time="2026-01-20T01:19:37.205330284Z" level=info msg="Container 66a491b98b1346ce6702e9f8831ce0ff511fd629aefc77bb6778330ccfb7c371: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:19:37.213915 containerd[1904]: time="2026-01-20T01:19:37.213889470Z" level=info msg="Container 4fb4e1ff37b33f974861003f5389456e5385bbbfbed5b480dd5f8f6df8971a31: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:19:37.237941 containerd[1904]: time="2026-01-20T01:19:37.237571420Z" level=info msg="CreateContainer within sandbox \"b567ea2f3e0026cbd8c07aaf800272f01e3441f3d1ca241f106acc1a8d08a278\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"66a491b98b1346ce6702e9f8831ce0ff511fd629aefc77bb6778330ccfb7c371\"" Jan 20 01:19:37.238504 containerd[1904]: time="2026-01-20T01:19:37.238389754Z" level=info msg="StartContainer for \"66a491b98b1346ce6702e9f8831ce0ff511fd629aefc77bb6778330ccfb7c371\"" Jan 20 01:19:37.238915 containerd[1904]: time="2026-01-20T01:19:37.238896252Z" level=info msg="StartContainer for \"27b28145381ac9dd13e5954db199e73d75cd65d376b74deb2d9bd94047dd9d4f\" returns successfully" Jan 20 01:19:37.240249 containerd[1904]: time="2026-01-20T01:19:37.239926370Z" level=info msg="connecting to shim 66a491b98b1346ce6702e9f8831ce0ff511fd629aefc77bb6778330ccfb7c371" address="unix:///run/containerd/s/402908da69f268283888af98e3547ab93d35ae4757ecce0f932cb4909bfb31f6" protocol=ttrpc version=3 Jan 20 01:19:37.241865 containerd[1904]: time="2026-01-20T01:19:37.241841104Z" level=info msg="CreateContainer within sandbox \"6d2c029bf4e16e95d5fb91cdfdb4047b6541f74612918e31439527f1e6a8cff8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4fb4e1ff37b33f974861003f5389456e5385bbbfbed5b480dd5f8f6df8971a31\"" Jan 20 01:19:37.242239 containerd[1904]: time="2026-01-20T01:19:37.242221870Z" level=info msg="StartContainer for \"4fb4e1ff37b33f974861003f5389456e5385bbbfbed5b480dd5f8f6df8971a31\"" 
Jan 20 01:19:37.242922 containerd[1904]: time="2026-01-20T01:19:37.242898727Z" level=info msg="connecting to shim 4fb4e1ff37b33f974861003f5389456e5385bbbfbed5b480dd5f8f6df8971a31" address="unix:///run/containerd/s/33eeface4b00a05e94d1001f7ad9c4a4427149772e9a37ac95da5a516f4353ae" protocol=ttrpc version=3 Jan 20 01:19:37.265526 systemd[1]: Started cri-containerd-4fb4e1ff37b33f974861003f5389456e5385bbbfbed5b480dd5f8f6df8971a31.scope - libcontainer container 4fb4e1ff37b33f974861003f5389456e5385bbbfbed5b480dd5f8f6df8971a31. Jan 20 01:19:37.273670 systemd[1]: Started cri-containerd-66a491b98b1346ce6702e9f8831ce0ff511fd629aefc77bb6778330ccfb7c371.scope - libcontainer container 66a491b98b1346ce6702e9f8831ce0ff511fd629aefc77bb6778330ccfb7c371. Jan 20 01:19:37.287620 kubelet[3019]: I0120 01:19:37.287591 3019 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-4dd77badda" Jan 20 01:19:37.324517 containerd[1904]: time="2026-01-20T01:19:37.324488484Z" level=info msg="StartContainer for \"4fb4e1ff37b33f974861003f5389456e5385bbbfbed5b480dd5f8f6df8971a31\" returns successfully" Jan 20 01:19:37.342715 containerd[1904]: time="2026-01-20T01:19:37.342653463Z" level=info msg="StartContainer for \"66a491b98b1346ce6702e9f8831ce0ff511fd629aefc77bb6778330ccfb7c371\" returns successfully" Jan 20 01:19:37.585450 kubelet[3019]: E0120 01:19:37.585336 3019 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-4dd77badda\" not found" node="ci-4459.2.2-n-4dd77badda" Jan 20 01:19:37.586817 kubelet[3019]: E0120 01:19:37.586803 3019 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-4dd77badda\" not found" node="ci-4459.2.2-n-4dd77badda" Jan 20 01:19:37.589428 kubelet[3019]: E0120 01:19:37.589357 3019 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4459.2.2-n-4dd77badda\" not found" node="ci-4459.2.2-n-4dd77badda" Jan 20 01:19:38.633788 kubelet[3019]: I0120 01:19:38.633667 3019 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-n-4dd77badda" Jan 20 01:19:38.633788 kubelet[3019]: E0120 01:19:38.633694 3019 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.2.2-n-4dd77badda\": node \"ci-4459.2.2-n-4dd77badda\" not found" Jan 20 01:19:38.636581 kubelet[3019]: E0120 01:19:38.635605 3019 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-4dd77badda\" not found" node="ci-4459.2.2-n-4dd77badda" Jan 20 01:19:38.636749 kubelet[3019]: E0120 01:19:38.636541 3019 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-4dd77badda\" not found" node="ci-4459.2.2-n-4dd77badda" Jan 20 01:19:38.668169 kubelet[3019]: I0120 01:19:38.668028 3019 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-4dd77badda" Jan 20 01:19:38.701459 kubelet[3019]: E0120 01:19:38.701432 3019 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-n-4dd77badda\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-n-4dd77badda" Jan 20 01:19:38.701459 kubelet[3019]: I0120 01:19:38.701453 3019 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-4dd77badda" Jan 20 01:19:38.703512 kubelet[3019]: E0120 01:19:38.703481 3019 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-n-4dd77badda\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-4dd77badda" Jan 20 01:19:38.703512 kubelet[3019]: I0120 01:19:38.703506 3019 
kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-n-4dd77badda" Jan 20 01:19:38.705580 kubelet[3019]: E0120 01:19:38.705553 3019 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-n-4dd77badda\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-n-4dd77badda" Jan 20 01:19:38.708349 kubelet[3019]: E0120 01:19:38.708327 3019 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="1.6s" Jan 20 01:19:39.258831 kubelet[3019]: I0120 01:19:39.258787 3019 apiserver.go:52] "Watching apiserver" Jan 20 01:19:39.267895 kubelet[3019]: I0120 01:19:39.267871 3019 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 01:19:40.561846 systemd[1]: Reload requested from client PID 3289 ('systemctl') (unit session-9.scope)... Jan 20 01:19:40.561859 systemd[1]: Reloading... Jan 20 01:19:40.650436 zram_generator::config[3342]: No configuration found. Jan 20 01:19:40.793929 systemd[1]: Reloading finished in 231 ms. Jan 20 01:19:40.829118 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:19:40.842643 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 01:19:40.842823 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:19:40.842862 systemd[1]: kubelet.service: Consumed 464ms CPU time, 127.7M memory peak. Jan 20 01:19:40.844471 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:19:40.970342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:19:40.976728 (kubelet)[3400]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 01:19:41.006406 kubelet[3400]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:19:41.006634 kubelet[3400]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 01:19:41.006665 kubelet[3400]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:19:41.006808 kubelet[3400]: I0120 01:19:41.006783 3400 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 01:19:41.010989 kubelet[3400]: I0120 01:19:41.010961 3400 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 01:19:41.010989 kubelet[3400]: I0120 01:19:41.010985 3400 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 01:19:41.011156 kubelet[3400]: I0120 01:19:41.011139 3400 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 01:19:41.011979 kubelet[3400]: I0120 01:19:41.011962 3400 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 20 01:19:41.013863 kubelet[3400]: I0120 01:19:41.013404 3400 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 20 01:19:41.016044 kubelet[3400]: I0120 01:19:41.016027 3400 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 20 01:19:41.018609 kubelet[3400]: I0120 01:19:41.018590 3400 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 20 01:19:41.018925 kubelet[3400]: I0120 01:19:41.018808 3400 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 20 01:19:41.019084 kubelet[3400]: I0120 01:19:41.018831 3400 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-n-4dd77badda","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 20 01:19:41.019187 kubelet[3400]: I0120 01:19:41.019176 3400 topology_manager.go:138] "Creating topology manager with none policy"
Jan 20 01:19:41.019228 kubelet[3400]: I0120 01:19:41.019221 3400 container_manager_linux.go:304] "Creating device plugin manager"
Jan 20 01:19:41.019311 kubelet[3400]: I0120 01:19:41.019303 3400 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 01:19:41.019470 kubelet[3400]: I0120 01:19:41.019460 3400 kubelet.go:446] "Attempting to sync node with API server"
Jan 20 01:19:41.019521 kubelet[3400]: I0120 01:19:41.019515 3400 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 20 01:19:41.019575 kubelet[3400]: I0120 01:19:41.019569 3400 kubelet.go:352] "Adding apiserver pod source"
Jan 20 01:19:41.019618 kubelet[3400]: I0120 01:19:41.019612 3400 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 20 01:19:41.025569 kubelet[3400]: I0120 01:19:41.025541 3400 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 20 01:19:41.025821 kubelet[3400]: I0120 01:19:41.025804 3400 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 20 01:19:41.026802 kubelet[3400]: I0120 01:19:41.026077 3400 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 20 01:19:41.026802 kubelet[3400]: I0120 01:19:41.026105 3400 server.go:1287] "Started kubelet"
Jan 20 01:19:41.028406 kubelet[3400]: I0120 01:19:41.028326 3400 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 20 01:19:41.028714 kubelet[3400]: I0120 01:19:41.028549 3400 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 20 01:19:41.028714 kubelet[3400]: I0120 01:19:41.028591 3400 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 20 01:19:41.029164 kubelet[3400]: I0120 01:19:41.029147 3400 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 20 01:19:41.029366 kubelet[3400]: I0120 01:19:41.029352 3400 server.go:479] "Adding debug handlers to kubelet server"
Jan 20 01:19:41.031860 kubelet[3400]: I0120 01:19:41.030132 3400 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 20 01:19:41.033417 kubelet[3400]: E0120 01:19:41.033286 3400 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-4dd77badda\" not found"
Jan 20 01:19:41.033417 kubelet[3400]: I0120 01:19:41.033324 3400 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 20 01:19:41.034095 kubelet[3400]: I0120 01:19:41.034076 3400 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 20 01:19:41.034174 kubelet[3400]: I0120 01:19:41.034161 3400 reconciler.go:26] "Reconciler: start to sync state"
Jan 20 01:19:41.034396 kubelet[3400]: E0120 01:19:41.034348 3400 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 20 01:19:41.044292 kubelet[3400]: I0120 01:19:41.044255 3400 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 20 01:19:41.045083 kubelet[3400]: I0120 01:19:41.044983 3400 factory.go:221] Registration of the containerd container factory successfully
Jan 20 01:19:41.045083 kubelet[3400]: I0120 01:19:41.044999 3400 factory.go:221] Registration of the systemd container factory successfully
Jan 20 01:19:41.045275 kubelet[3400]: I0120 01:19:41.045095 3400 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 20 01:19:41.045373 kubelet[3400]: I0120 01:19:41.045351 3400 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 20 01:19:41.045373 kubelet[3400]: I0120 01:19:41.045369 3400 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 20 01:19:41.045457 kubelet[3400]: I0120 01:19:41.045380 3400 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 20 01:19:41.045457 kubelet[3400]: I0120 01:19:41.045385 3400 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 20 01:19:41.045457 kubelet[3400]: E0120 01:19:41.045421 3400 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 20 01:19:41.087058 kubelet[3400]: I0120 01:19:41.086990 3400 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 20 01:19:41.087348 kubelet[3400]: I0120 01:19:41.087171 3400 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 20 01:19:41.087348 kubelet[3400]: I0120 01:19:41.087192 3400 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 01:19:41.087740 kubelet[3400]: I0120 01:19:41.087691 3400 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 20 01:19:41.087875 kubelet[3400]: I0120 01:19:41.087852 3400 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 20 01:19:41.087925 kubelet[3400]: I0120 01:19:41.087917 3400 policy_none.go:49] "None policy: Start"
Jan 20 01:19:41.087969 kubelet[3400]: I0120 01:19:41.087962 3400 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 20 01:19:41.088012 kubelet[3400]: I0120 01:19:41.088005 3400 state_mem.go:35] "Initializing new in-memory state store"
Jan 20 01:19:41.088143 kubelet[3400]: I0120 01:19:41.088132 3400 state_mem.go:75] "Updated machine memory state"
Jan 20 01:19:41.091408 kubelet[3400]: I0120 01:19:41.091382 3400 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 20 01:19:41.092865 kubelet[3400]: I0120 01:19:41.092623 3400 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 20 01:19:41.092865 kubelet[3400]: I0120 01:19:41.092637 3400 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 20 01:19:41.093172 kubelet[3400]: I0120 01:19:41.093144 3400 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 20 01:19:41.095635 kubelet[3400]: E0120 01:19:41.095456 3400 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 20 01:19:41.146609 kubelet[3400]: I0120 01:19:41.146570 3400 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-n-4dd77badda"
Jan 20 01:19:41.146732 kubelet[3400]: I0120 01:19:41.146575 3400 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-4dd77badda"
Jan 20 01:19:41.147089 kubelet[3400]: I0120 01:19:41.146920 3400 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-4dd77badda"
Jan 20 01:19:41.154422 kubelet[3400]: W0120 01:19:41.154339 3400 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 20 01:19:41.159864 kubelet[3400]: W0120 01:19:41.159843 3400 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 20 01:19:41.160349 kubelet[3400]: W0120 01:19:41.160324 3400 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 20 01:19:41.195665 kubelet[3400]: I0120 01:19:41.195639 3400 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-4dd77badda"
Jan 20 01:19:41.208222 kubelet[3400]: I0120 01:19:41.208197 3400 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.2-n-4dd77badda"
Jan 20 01:19:41.208300 kubelet[3400]: I0120 01:19:41.208278 3400 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-n-4dd77badda"
Jan 20 01:19:41.235225 kubelet[3400]: I0120 01:19:41.235200 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9ead58db5f4259b27815f9ad4b03cdac-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-n-4dd77badda\" (UID: \"9ead58db5f4259b27815f9ad4b03cdac\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-4dd77badda"
Jan 20 01:19:41.235291 kubelet[3400]: I0120 01:19:41.235267 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ead58db5f4259b27815f9ad4b03cdac-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-4dd77badda\" (UID: \"9ead58db5f4259b27815f9ad4b03cdac\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-4dd77badda"
Jan 20 01:19:41.235291 kubelet[3400]: I0120 01:19:41.235281 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9ead58db5f4259b27815f9ad4b03cdac-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-n-4dd77badda\" (UID: \"9ead58db5f4259b27815f9ad4b03cdac\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-4dd77badda"
Jan 20 01:19:41.235327 kubelet[3400]: I0120 01:19:41.235292 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ead58db5f4259b27815f9ad4b03cdac-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-n-4dd77badda\" (UID: \"9ead58db5f4259b27815f9ad4b03cdac\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-4dd77badda"
Jan 20 01:19:41.235350 kubelet[3400]: I0120 01:19:41.235337 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ead58db5f4259b27815f9ad4b03cdac-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-4dd77badda\" (UID: \"9ead58db5f4259b27815f9ad4b03cdac\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-4dd77badda"
Jan 20 01:19:41.335994 kubelet[3400]: I0120 01:19:41.335967 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88c17379893b66ca5e16a9f342e6f0b0-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-n-4dd77badda\" (UID: \"88c17379893b66ca5e16a9f342e6f0b0\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-4dd77badda"
Jan 20 01:19:41.335994 kubelet[3400]: I0120 01:19:41.335996 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88c17379893b66ca5e16a9f342e6f0b0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-n-4dd77badda\" (UID: \"88c17379893b66ca5e16a9f342e6f0b0\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-4dd77badda"
Jan 20 01:19:41.336123 kubelet[3400]: I0120 01:19:41.336106 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b93d1b8963a8b03c51b6a13605a0a1d-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-n-4dd77badda\" (UID: \"4b93d1b8963a8b03c51b6a13605a0a1d\") " pod="kube-system/kube-scheduler-ci-4459.2.2-n-4dd77badda"
Jan 20 01:19:41.336142 kubelet[3400]: I0120 01:19:41.336137 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88c17379893b66ca5e16a9f342e6f0b0-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-n-4dd77badda\" (UID: \"88c17379893b66ca5e16a9f342e6f0b0\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-4dd77badda"
Jan 20 01:19:41.580121 sudo[3431]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 20 01:19:41.580335 sudo[3431]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 20 01:19:41.806394 sudo[3431]: pam_unix(sudo:session): session closed for user root
Jan 20 01:19:42.020865 kubelet[3400]: I0120 01:19:42.020651 3400 apiserver.go:52] "Watching apiserver"
Jan 20 01:19:42.034354 kubelet[3400]: I0120 01:19:42.034326 3400 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 20 01:19:42.076228 kubelet[3400]: I0120 01:19:42.075307 3400 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-4dd77badda"
Jan 20 01:19:42.076228 kubelet[3400]: I0120 01:19:42.075722 3400 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-n-4dd77badda"
Jan 20 01:19:42.089161 kubelet[3400]: W0120 01:19:42.088946 3400 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 20 01:19:42.089161 kubelet[3400]: E0120 01:19:42.088990 3400 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-n-4dd77badda\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-n-4dd77badda"
Jan 20 01:19:42.093971 kubelet[3400]: W0120 01:19:42.093950 3400 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 20 01:19:42.094042 kubelet[3400]: E0120 01:19:42.093986 3400 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-n-4dd77badda\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.2-n-4dd77badda"
Jan 20 01:19:42.094227 kubelet[3400]: I0120 01:19:42.094191 3400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.2-n-4dd77badda" podStartSLOduration=1.09418276 podStartE2EDuration="1.09418276s" podCreationTimestamp="2026-01-20 01:19:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:19:42.094182968 +0000 UTC m=+1.114390157" watchObservedRunningTime="2026-01-20 01:19:42.09418276 +0000 UTC m=+1.114389941"
Jan 20 01:19:42.104177 kubelet[3400]: I0120 01:19:42.104065 3400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.2-n-4dd77badda" podStartSLOduration=1.104056829 podStartE2EDuration="1.104056829s" podCreationTimestamp="2026-01-20 01:19:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:19:42.103279913 +0000 UTC m=+1.123487094" watchObservedRunningTime="2026-01-20 01:19:42.104056829 +0000 UTC m=+1.124264018"
Jan 20 01:19:42.123734 kubelet[3400]: I0120 01:19:42.123692 3400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-4dd77badda" podStartSLOduration=1.123682324 podStartE2EDuration="1.123682324s" podCreationTimestamp="2026-01-20 01:19:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:19:42.113782261 +0000 UTC m=+1.133989442" watchObservedRunningTime="2026-01-20 01:19:42.123682324 +0000 UTC m=+1.143889505"
Jan 20 01:19:42.961791 sudo[2378]: pam_unix(sudo:session): session closed for user root
Jan 20 01:19:43.039451 sshd[2377]: Connection closed by 10.200.16.10 port 35922
Jan 20 01:19:43.039075 sshd-session[2371]: pam_unix(sshd:session): session closed for user core
Jan 20 01:19:43.042119 systemd[1]: sshd@6-10.200.20.24:22-10.200.16.10:35922.service: Deactivated successfully.
Jan 20 01:19:43.042270 systemd-logind[1878]: Session 9 logged out. Waiting for processes to exit.
Jan 20 01:19:43.044927 systemd[1]: session-9.scope: Deactivated successfully.
Jan 20 01:19:43.045215 systemd[1]: session-9.scope: Consumed 3.361s CPU time, 261.1M memory peak.
Jan 20 01:19:43.048269 systemd-logind[1878]: Removed session 9.
Jan 20 01:19:47.763750 kubelet[3400]: I0120 01:19:47.763720 3400 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 20 01:19:47.764107 containerd[1904]: time="2026-01-20T01:19:47.763980413Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 20 01:19:47.764241 kubelet[3400]: I0120 01:19:47.764123 3400 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 20 01:19:48.501140 systemd[1]: Created slice kubepods-besteffort-podf343a0c7_4ef6_417b_8976_d7a0b8228567.slice - libcontainer container kubepods-besteffort-podf343a0c7_4ef6_417b_8976_d7a0b8228567.slice.
Jan 20 01:19:48.511862 systemd[1]: Created slice kubepods-burstable-pod6fe26685_4ccf_4ef5_870f_21cf5b7e5660.slice - libcontainer container kubepods-burstable-pod6fe26685_4ccf_4ef5_870f_21cf5b7e5660.slice.
Jan 20 01:19:48.582656 kubelet[3400]: I0120 01:19:48.582203 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-cni-path\") pod \"cilium-rzk8c\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " pod="kube-system/cilium-rzk8c"
Jan 20 01:19:48.582656 kubelet[3400]: I0120 01:19:48.582230 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7skgs\" (UniqueName: \"kubernetes.io/projected/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-kube-api-access-7skgs\") pod \"cilium-rzk8c\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " pod="kube-system/cilium-rzk8c"
Jan 20 01:19:48.582656 kubelet[3400]: I0120 01:19:48.582257 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f343a0c7-4ef6-417b-8976-d7a0b8228567-xtables-lock\") pod \"kube-proxy-4wdzj\" (UID: \"f343a0c7-4ef6-417b-8976-d7a0b8228567\") " pod="kube-system/kube-proxy-4wdzj"
Jan 20 01:19:48.582656 kubelet[3400]: I0120 01:19:48.582268 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-hostproc\") pod \"cilium-rzk8c\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " pod="kube-system/cilium-rzk8c"
Jan 20 01:19:48.582656 kubelet[3400]: I0120 01:19:48.582278 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-cilium-cgroup\") pod \"cilium-rzk8c\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " pod="kube-system/cilium-rzk8c"
Jan 20 01:19:48.582656 kubelet[3400]: I0120 01:19:48.582290 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-clustermesh-secrets\") pod \"cilium-rzk8c\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " pod="kube-system/cilium-rzk8c"
Jan 20 01:19:48.582845 kubelet[3400]: I0120 01:19:48.582320 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f343a0c7-4ef6-417b-8976-d7a0b8228567-lib-modules\") pod \"kube-proxy-4wdzj\" (UID: \"f343a0c7-4ef6-417b-8976-d7a0b8228567\") " pod="kube-system/kube-proxy-4wdzj"
Jan 20 01:19:48.582845 kubelet[3400]: I0120 01:19:48.582332 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf7pw\" (UniqueName: \"kubernetes.io/projected/f343a0c7-4ef6-417b-8976-d7a0b8228567-kube-api-access-mf7pw\") pod \"kube-proxy-4wdzj\" (UID: \"f343a0c7-4ef6-417b-8976-d7a0b8228567\") " pod="kube-system/kube-proxy-4wdzj"
Jan 20 01:19:48.582845 kubelet[3400]: I0120 01:19:48.582343 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-cilium-run\") pod \"cilium-rzk8c\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " pod="kube-system/cilium-rzk8c"
Jan 20 01:19:48.582845 kubelet[3400]: I0120 01:19:48.582368 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-etc-cni-netd\") pod \"cilium-rzk8c\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " pod="kube-system/cilium-rzk8c"
Jan 20 01:19:48.582845 kubelet[3400]: I0120 01:19:48.582398 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-xtables-lock\") pod \"cilium-rzk8c\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " pod="kube-system/cilium-rzk8c"
Jan 20 01:19:48.582845 kubelet[3400]: I0120 01:19:48.582463 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f343a0c7-4ef6-417b-8976-d7a0b8228567-kube-proxy\") pod \"kube-proxy-4wdzj\" (UID: \"f343a0c7-4ef6-417b-8976-d7a0b8228567\") " pod="kube-system/kube-proxy-4wdzj"
Jan 20 01:19:48.582944 kubelet[3400]: I0120 01:19:48.582481 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-cilium-config-path\") pod \"cilium-rzk8c\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " pod="kube-system/cilium-rzk8c"
Jan 20 01:19:48.582944 kubelet[3400]: I0120 01:19:48.582519 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-bpf-maps\") pod \"cilium-rzk8c\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " pod="kube-system/cilium-rzk8c"
Jan 20 01:19:48.582944 kubelet[3400]: I0120 01:19:48.582532 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-host-proc-sys-kernel\") pod \"cilium-rzk8c\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " pod="kube-system/cilium-rzk8c"
Jan 20 01:19:48.582944 kubelet[3400]: I0120 01:19:48.582544 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-lib-modules\") pod \"cilium-rzk8c\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " pod="kube-system/cilium-rzk8c"
Jan 20 01:19:48.582944 kubelet[3400]: I0120 01:19:48.582553 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-host-proc-sys-net\") pod \"cilium-rzk8c\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " pod="kube-system/cilium-rzk8c"
Jan 20 01:19:48.582944 kubelet[3400]: I0120 01:19:48.582630 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-hubble-tls\") pod \"cilium-rzk8c\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " pod="kube-system/cilium-rzk8c"
Jan 20 01:19:48.699447 kubelet[3400]: E0120 01:19:48.699000 3400 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 20 01:19:48.699447 kubelet[3400]: E0120 01:19:48.699028 3400 projected.go:194] Error preparing data for projected volume kube-api-access-mf7pw for pod kube-system/kube-proxy-4wdzj: configmap "kube-root-ca.crt" not found
Jan 20 01:19:48.699447 kubelet[3400]: E0120 01:19:48.699071 3400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f343a0c7-4ef6-417b-8976-d7a0b8228567-kube-api-access-mf7pw podName:f343a0c7-4ef6-417b-8976-d7a0b8228567 nodeName:}" failed. No retries permitted until 2026-01-20 01:19:49.19905327 +0000 UTC m=+8.219260459 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mf7pw" (UniqueName: "kubernetes.io/projected/f343a0c7-4ef6-417b-8976-d7a0b8228567-kube-api-access-mf7pw") pod "kube-proxy-4wdzj" (UID: "f343a0c7-4ef6-417b-8976-d7a0b8228567") : configmap "kube-root-ca.crt" not found
Jan 20 01:19:48.707302 kubelet[3400]: E0120 01:19:48.707267 3400 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 20 01:19:48.707302 kubelet[3400]: E0120 01:19:48.707288 3400 projected.go:194] Error preparing data for projected volume kube-api-access-7skgs for pod kube-system/cilium-rzk8c: configmap "kube-root-ca.crt" not found
Jan 20 01:19:48.707462 kubelet[3400]: E0120 01:19:48.707318 3400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-kube-api-access-7skgs podName:6fe26685-4ccf-4ef5-870f-21cf5b7e5660 nodeName:}" failed. No retries permitted until 2026-01-20 01:19:49.207307901 +0000 UTC m=+8.227515082 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7skgs" (UniqueName: "kubernetes.io/projected/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-kube-api-access-7skgs") pod "cilium-rzk8c" (UID: "6fe26685-4ccf-4ef5-870f-21cf5b7e5660") : configmap "kube-root-ca.crt" not found
Jan 20 01:19:48.903802 systemd[1]: Created slice kubepods-besteffort-poda3879dac_8d0a_4c9f_9bb6_5ced8d49eb5a.slice - libcontainer container kubepods-besteffort-poda3879dac_8d0a_4c9f_9bb6_5ced8d49eb5a.slice.
Jan 20 01:19:48.985372 kubelet[3400]: I0120 01:19:48.985296 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3879dac-8d0a-4c9f-9bb6-5ced8d49eb5a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xsknf\" (UID: \"a3879dac-8d0a-4c9f-9bb6-5ced8d49eb5a\") " pod="kube-system/cilium-operator-6c4d7847fc-xsknf"
Jan 20 01:19:48.985372 kubelet[3400]: I0120 01:19:48.985346 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx4qw\" (UniqueName: \"kubernetes.io/projected/a3879dac-8d0a-4c9f-9bb6-5ced8d49eb5a-kube-api-access-lx4qw\") pod \"cilium-operator-6c4d7847fc-xsknf\" (UID: \"a3879dac-8d0a-4c9f-9bb6-5ced8d49eb5a\") " pod="kube-system/cilium-operator-6c4d7847fc-xsknf"
Jan 20 01:19:49.209064 containerd[1904]: time="2026-01-20T01:19:49.208523180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xsknf,Uid:a3879dac-8d0a-4c9f-9bb6-5ced8d49eb5a,Namespace:kube-system,Attempt:0,}"
Jan 20 01:19:49.250811 containerd[1904]: time="2026-01-20T01:19:49.250729953Z" level=info msg="connecting to shim a7a95016fbe31e26245d82c9c21e6e4db4589205ea0598726d4ea888e080bdce" address="unix:///run/containerd/s/84a395ec463d981389772c3663dc2d757d123450764d6a021e830df719db4c3c" namespace=k8s.io protocol=ttrpc version=3
Jan 20 01:19:49.269515 systemd[1]: Started cri-containerd-a7a95016fbe31e26245d82c9c21e6e4db4589205ea0598726d4ea888e080bdce.scope - libcontainer container a7a95016fbe31e26245d82c9c21e6e4db4589205ea0598726d4ea888e080bdce.
Jan 20 01:19:49.301370 containerd[1904]: time="2026-01-20T01:19:49.301323048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xsknf,Uid:a3879dac-8d0a-4c9f-9bb6-5ced8d49eb5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7a95016fbe31e26245d82c9c21e6e4db4589205ea0598726d4ea888e080bdce\""
Jan 20 01:19:49.303865 containerd[1904]: time="2026-01-20T01:19:49.303820554Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 20 01:19:49.409122 containerd[1904]: time="2026-01-20T01:19:49.409092884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4wdzj,Uid:f343a0c7-4ef6-417b-8976-d7a0b8228567,Namespace:kube-system,Attempt:0,}"
Jan 20 01:19:49.415264 containerd[1904]: time="2026-01-20T01:19:49.415238551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rzk8c,Uid:6fe26685-4ccf-4ef5-870f-21cf5b7e5660,Namespace:kube-system,Attempt:0,}"
Jan 20 01:19:49.486902 containerd[1904]: time="2026-01-20T01:19:49.486741978Z" level=info msg="connecting to shim 25fe559413584aa1ea338bb787dc172ab99ba31bdc065e3208360730246b2885" address="unix:///run/containerd/s/6f838f9181781556fe8ada844035f2367c1a6635a617d8216fc18e51aeb30ad4" namespace=k8s.io protocol=ttrpc version=3
Jan 20 01:19:49.492928 containerd[1904]: time="2026-01-20T01:19:49.492899071Z" level=info msg="connecting to shim 56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137" address="unix:///run/containerd/s/7503f968604665d651593e1dad068177e6fae9a2208ffe43de51c0f8ea14d8aa" namespace=k8s.io protocol=ttrpc version=3
Jan 20 01:19:49.502536 systemd[1]: Started cri-containerd-25fe559413584aa1ea338bb787dc172ab99ba31bdc065e3208360730246b2885.scope - libcontainer container 25fe559413584aa1ea338bb787dc172ab99ba31bdc065e3208360730246b2885.
Jan 20 01:19:49.508295 systemd[1]: Started cri-containerd-56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137.scope - libcontainer container 56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137.
Jan 20 01:19:49.533636 containerd[1904]: time="2026-01-20T01:19:49.533604021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4wdzj,Uid:f343a0c7-4ef6-417b-8976-d7a0b8228567,Namespace:kube-system,Attempt:0,} returns sandbox id \"25fe559413584aa1ea338bb787dc172ab99ba31bdc065e3208360730246b2885\""
Jan 20 01:19:49.536563 containerd[1904]: time="2026-01-20T01:19:49.536265684Z" level=info msg="CreateContainer within sandbox \"25fe559413584aa1ea338bb787dc172ab99ba31bdc065e3208360730246b2885\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 20 01:19:49.537605 containerd[1904]: time="2026-01-20T01:19:49.537582971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rzk8c,Uid:6fe26685-4ccf-4ef5-870f-21cf5b7e5660,Namespace:kube-system,Attempt:0,} returns sandbox id \"56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137\""
Jan 20 01:19:49.562107 containerd[1904]: time="2026-01-20T01:19:49.562075294Z" level=info msg="Container 5e7ecad702f68490a80b5e90c86ae48368d02d605b713f0ab1c6f7ba7a0df289: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:19:49.584135 containerd[1904]: time="2026-01-20T01:19:49.584096729Z" level=info msg="CreateContainer within sandbox \"25fe559413584aa1ea338bb787dc172ab99ba31bdc065e3208360730246b2885\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5e7ecad702f68490a80b5e90c86ae48368d02d605b713f0ab1c6f7ba7a0df289\""
Jan 20 01:19:49.585216 containerd[1904]: time="2026-01-20T01:19:49.585190425Z" level=info msg="StartContainer for \"5e7ecad702f68490a80b5e90c86ae48368d02d605b713f0ab1c6f7ba7a0df289\""
Jan 20 01:19:49.586621 containerd[1904]: time="2026-01-20T01:19:49.586597123Z" level=info msg="connecting to shim 5e7ecad702f68490a80b5e90c86ae48368d02d605b713f0ab1c6f7ba7a0df289" address="unix:///run/containerd/s/6f838f9181781556fe8ada844035f2367c1a6635a617d8216fc18e51aeb30ad4" protocol=ttrpc version=3
Jan 20 01:19:49.602514 systemd[1]: Started cri-containerd-5e7ecad702f68490a80b5e90c86ae48368d02d605b713f0ab1c6f7ba7a0df289.scope - libcontainer container 5e7ecad702f68490a80b5e90c86ae48368d02d605b713f0ab1c6f7ba7a0df289.
Jan 20 01:19:49.659476 containerd[1904]: time="2026-01-20T01:19:49.659145675Z" level=info msg="StartContainer for \"5e7ecad702f68490a80b5e90c86ae48368d02d605b713f0ab1c6f7ba7a0df289\" returns successfully"
Jan 20 01:19:51.036518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1187903131.mount: Deactivated successfully.
Jan 20 01:19:51.475434 containerd[1904]: time="2026-01-20T01:19:51.475008974Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:19:51.478962 containerd[1904]: time="2026-01-20T01:19:51.478934282Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jan 20 01:19:51.482554 containerd[1904]: time="2026-01-20T01:19:51.482528154Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:19:51.483598 containerd[1904]: time="2026-01-20T01:19:51.483567024Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.179721253s"
Jan 20 01:19:51.483625 containerd[1904]: time="2026-01-20T01:19:51.483600977Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jan 20 01:19:51.484484 containerd[1904]: time="2026-01-20T01:19:51.484427854Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 20 01:19:51.487452 containerd[1904]: time="2026-01-20T01:19:51.486652934Z" level=info msg="CreateContainer within sandbox \"a7a95016fbe31e26245d82c9c21e6e4db4589205ea0598726d4ea888e080bdce\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 20 01:19:51.541263 containerd[1904]: time="2026-01-20T01:19:51.541229292Z" level=info msg="Container ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:19:51.579037 containerd[1904]: time="2026-01-20T01:19:51.579011362Z" level=info msg="CreateContainer within sandbox \"a7a95016fbe31e26245d82c9c21e6e4db4589205ea0598726d4ea888e080bdce\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad\""
Jan 20 01:19:51.579507 containerd[1904]: time="2026-01-20T01:19:51.579487787Z" level=info msg="StartContainer for \"ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad\""
Jan 20 01:19:51.580291 containerd[1904]: time="2026-01-20T01:19:51.580145907Z" level=info msg="connecting to shim ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad" address="unix:///run/containerd/s/84a395ec463d981389772c3663dc2d757d123450764d6a021e830df719db4c3c" protocol=ttrpc version=3
Jan 20 01:19:51.599524 systemd[1]: Started 
cri-containerd-ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad.scope - libcontainer container ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad. Jan 20 01:19:51.624927 containerd[1904]: time="2026-01-20T01:19:51.624671474Z" level=info msg="StartContainer for \"ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad\" returns successfully" Jan 20 01:19:52.118795 kubelet[3400]: I0120 01:19:52.118499 3400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4wdzj" podStartSLOduration=4.118486113 podStartE2EDuration="4.118486113s" podCreationTimestamp="2026-01-20 01:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:19:50.09840542 +0000 UTC m=+9.118612601" watchObservedRunningTime="2026-01-20 01:19:52.118486113 +0000 UTC m=+11.138693302" Jan 20 01:19:54.827300 kubelet[3400]: I0120 01:19:54.827248 3400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xsknf" podStartSLOduration=4.645484258 podStartE2EDuration="6.827234119s" podCreationTimestamp="2026-01-20 01:19:48 +0000 UTC" firstStartedPulling="2026-01-20 01:19:49.302400223 +0000 UTC m=+8.322607404" lastFinishedPulling="2026-01-20 01:19:51.484150084 +0000 UTC m=+10.504357265" observedRunningTime="2026-01-20 01:19:52.118656487 +0000 UTC m=+11.138863668" watchObservedRunningTime="2026-01-20 01:19:54.827234119 +0000 UTC m=+13.847441300" Jan 20 01:19:56.263268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount789316878.mount: Deactivated successfully. 
Jan 20 01:19:57.593446 containerd[1904]: time="2026-01-20T01:19:57.593129872Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:57.596290 containerd[1904]: time="2026-01-20T01:19:57.596264465Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 20 01:19:57.600183 containerd[1904]: time="2026-01-20T01:19:57.600071611Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:57.601517 containerd[1904]: time="2026-01-20T01:19:57.601472518Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.117023687s" Jan 20 01:19:57.601517 containerd[1904]: time="2026-01-20T01:19:57.601496751Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 20 01:19:57.603948 containerd[1904]: time="2026-01-20T01:19:57.603915918Z" level=info msg="CreateContainer within sandbox \"56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 20 01:19:57.659143 containerd[1904]: time="2026-01-20T01:19:57.658965991Z" level=info msg="Container ae6b63b58972683cae5b9a281419586320799d4b353226c7213e282ee26afc9f: CDI 
devices from CRI Config.CDIDevices: []" Jan 20 01:19:57.661010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount66784032.mount: Deactivated successfully. Jan 20 01:19:57.945246 containerd[1904]: time="2026-01-20T01:19:57.945134244Z" level=info msg="CreateContainer within sandbox \"56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ae6b63b58972683cae5b9a281419586320799d4b353226c7213e282ee26afc9f\"" Jan 20 01:19:57.946191 containerd[1904]: time="2026-01-20T01:19:57.946159985Z" level=info msg="StartContainer for \"ae6b63b58972683cae5b9a281419586320799d4b353226c7213e282ee26afc9f\"" Jan 20 01:19:57.947642 containerd[1904]: time="2026-01-20T01:19:57.947602021Z" level=info msg="connecting to shim ae6b63b58972683cae5b9a281419586320799d4b353226c7213e282ee26afc9f" address="unix:///run/containerd/s/7503f968604665d651593e1dad068177e6fae9a2208ffe43de51c0f8ea14d8aa" protocol=ttrpc version=3 Jan 20 01:19:57.970519 systemd[1]: Started cri-containerd-ae6b63b58972683cae5b9a281419586320799d4b353226c7213e282ee26afc9f.scope - libcontainer container ae6b63b58972683cae5b9a281419586320799d4b353226c7213e282ee26afc9f. Jan 20 01:19:57.997427 containerd[1904]: time="2026-01-20T01:19:57.996978376Z" level=info msg="StartContainer for \"ae6b63b58972683cae5b9a281419586320799d4b353226c7213e282ee26afc9f\" returns successfully" Jan 20 01:19:57.998641 systemd[1]: cri-containerd-ae6b63b58972683cae5b9a281419586320799d4b353226c7213e282ee26afc9f.scope: Deactivated successfully. 
Jan 20 01:19:58.001678 containerd[1904]: time="2026-01-20T01:19:58.001646161Z" level=info msg="received container exit event container_id:\"ae6b63b58972683cae5b9a281419586320799d4b353226c7213e282ee26afc9f\" id:\"ae6b63b58972683cae5b9a281419586320799d4b353226c7213e282ee26afc9f\" pid:3857 exited_at:{seconds:1768871998 nanos:1318333}" Jan 20 01:19:58.018979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae6b63b58972683cae5b9a281419586320799d4b353226c7213e282ee26afc9f-rootfs.mount: Deactivated successfully. Jan 20 01:20:00.109014 containerd[1904]: time="2026-01-20T01:20:00.108970931Z" level=info msg="CreateContainer within sandbox \"56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 20 01:20:00.215623 containerd[1904]: time="2026-01-20T01:20:00.215588070Z" level=info msg="Container 62a38fbfc7088fcd95cf9f060d19a4adfe585294f8c9fb4c17697638ae9fab00: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:20:00.219389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1892507378.mount: Deactivated successfully. 
Jan 20 01:20:00.231233 containerd[1904]: time="2026-01-20T01:20:00.231202675Z" level=info msg="CreateContainer within sandbox \"56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"62a38fbfc7088fcd95cf9f060d19a4adfe585294f8c9fb4c17697638ae9fab00\"" Jan 20 01:20:00.231842 containerd[1904]: time="2026-01-20T01:20:00.231698181Z" level=info msg="StartContainer for \"62a38fbfc7088fcd95cf9f060d19a4adfe585294f8c9fb4c17697638ae9fab00\"" Jan 20 01:20:00.232992 containerd[1904]: time="2026-01-20T01:20:00.232958259Z" level=info msg="connecting to shim 62a38fbfc7088fcd95cf9f060d19a4adfe585294f8c9fb4c17697638ae9fab00" address="unix:///run/containerd/s/7503f968604665d651593e1dad068177e6fae9a2208ffe43de51c0f8ea14d8aa" protocol=ttrpc version=3 Jan 20 01:20:00.249525 systemd[1]: Started cri-containerd-62a38fbfc7088fcd95cf9f060d19a4adfe585294f8c9fb4c17697638ae9fab00.scope - libcontainer container 62a38fbfc7088fcd95cf9f060d19a4adfe585294f8c9fb4c17697638ae9fab00. Jan 20 01:20:00.272930 containerd[1904]: time="2026-01-20T01:20:00.272901456Z" level=info msg="StartContainer for \"62a38fbfc7088fcd95cf9f060d19a4adfe585294f8c9fb4c17697638ae9fab00\" returns successfully" Jan 20 01:20:00.282120 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 01:20:00.282713 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:20:00.282854 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 20 01:20:00.284993 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 01:20:00.286531 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 20 01:20:00.288777 systemd[1]: cri-containerd-62a38fbfc7088fcd95cf9f060d19a4adfe585294f8c9fb4c17697638ae9fab00.scope: Deactivated successfully. 
Jan 20 01:20:00.289339 containerd[1904]: time="2026-01-20T01:20:00.289313578Z" level=info msg="received container exit event container_id:\"62a38fbfc7088fcd95cf9f060d19a4adfe585294f8c9fb4c17697638ae9fab00\" id:\"62a38fbfc7088fcd95cf9f060d19a4adfe585294f8c9fb4c17697638ae9fab00\" pid:3902 exited_at:{seconds:1768872000 nanos:289173965}" Jan 20 01:20:00.300483 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:20:01.114222 containerd[1904]: time="2026-01-20T01:20:01.113861996Z" level=info msg="CreateContainer within sandbox \"56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 20 01:20:01.134505 containerd[1904]: time="2026-01-20T01:20:01.134479614Z" level=info msg="Container 6bc09c943ac936a11c44489236fc6d09c0abac362b1185c6835514331561558f: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:20:01.156778 containerd[1904]: time="2026-01-20T01:20:01.156736916Z" level=info msg="CreateContainer within sandbox \"56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6bc09c943ac936a11c44489236fc6d09c0abac362b1185c6835514331561558f\"" Jan 20 01:20:01.157826 containerd[1904]: time="2026-01-20T01:20:01.157388604Z" level=info msg="StartContainer for \"6bc09c943ac936a11c44489236fc6d09c0abac362b1185c6835514331561558f\"" Jan 20 01:20:01.159937 containerd[1904]: time="2026-01-20T01:20:01.159775690Z" level=info msg="connecting to shim 6bc09c943ac936a11c44489236fc6d09c0abac362b1185c6835514331561558f" address="unix:///run/containerd/s/7503f968604665d651593e1dad068177e6fae9a2208ffe43de51c0f8ea14d8aa" protocol=ttrpc version=3 Jan 20 01:20:01.177527 systemd[1]: Started cri-containerd-6bc09c943ac936a11c44489236fc6d09c0abac362b1185c6835514331561558f.scope - libcontainer container 6bc09c943ac936a11c44489236fc6d09c0abac362b1185c6835514331561558f. 
Jan 20 01:20:01.214259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62a38fbfc7088fcd95cf9f060d19a4adfe585294f8c9fb4c17697638ae9fab00-rootfs.mount: Deactivated successfully. Jan 20 01:20:01.232968 systemd[1]: cri-containerd-6bc09c943ac936a11c44489236fc6d09c0abac362b1185c6835514331561558f.scope: Deactivated successfully. Jan 20 01:20:01.235384 containerd[1904]: time="2026-01-20T01:20:01.235351697Z" level=info msg="received container exit event container_id:\"6bc09c943ac936a11c44489236fc6d09c0abac362b1185c6835514331561558f\" id:\"6bc09c943ac936a11c44489236fc6d09c0abac362b1185c6835514331561558f\" pid:3950 exited_at:{seconds:1768872001 nanos:235213412}" Jan 20 01:20:01.242041 containerd[1904]: time="2026-01-20T01:20:01.242016186Z" level=info msg="StartContainer for \"6bc09c943ac936a11c44489236fc6d09c0abac362b1185c6835514331561558f\" returns successfully" Jan 20 01:20:01.251310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bc09c943ac936a11c44489236fc6d09c0abac362b1185c6835514331561558f-rootfs.mount: Deactivated successfully. Jan 20 01:20:02.120124 containerd[1904]: time="2026-01-20T01:20:02.120062749Z" level=info msg="CreateContainer within sandbox \"56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 20 01:20:02.149947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount504805806.mount: Deactivated successfully. 
Jan 20 01:20:02.151936 containerd[1904]: time="2026-01-20T01:20:02.151858595Z" level=info msg="Container 7c5a3ccee7c8a140ac4d67d533ccef4b774f5ccdc90afc812c7fa03a39498556: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:20:02.171697 containerd[1904]: time="2026-01-20T01:20:02.171604478Z" level=info msg="CreateContainer within sandbox \"56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7c5a3ccee7c8a140ac4d67d533ccef4b774f5ccdc90afc812c7fa03a39498556\"" Jan 20 01:20:02.172998 containerd[1904]: time="2026-01-20T01:20:02.172951063Z" level=info msg="StartContainer for \"7c5a3ccee7c8a140ac4d67d533ccef4b774f5ccdc90afc812c7fa03a39498556\"" Jan 20 01:20:02.173601 containerd[1904]: time="2026-01-20T01:20:02.173577110Z" level=info msg="connecting to shim 7c5a3ccee7c8a140ac4d67d533ccef4b774f5ccdc90afc812c7fa03a39498556" address="unix:///run/containerd/s/7503f968604665d651593e1dad068177e6fae9a2208ffe43de51c0f8ea14d8aa" protocol=ttrpc version=3 Jan 20 01:20:02.187538 systemd[1]: Started cri-containerd-7c5a3ccee7c8a140ac4d67d533ccef4b774f5ccdc90afc812c7fa03a39498556.scope - libcontainer container 7c5a3ccee7c8a140ac4d67d533ccef4b774f5ccdc90afc812c7fa03a39498556. Jan 20 01:20:02.204822 systemd[1]: cri-containerd-7c5a3ccee7c8a140ac4d67d533ccef4b774f5ccdc90afc812c7fa03a39498556.scope: Deactivated successfully. 
Jan 20 01:20:02.209845 containerd[1904]: time="2026-01-20T01:20:02.209769755Z" level=info msg="received container exit event container_id:\"7c5a3ccee7c8a140ac4d67d533ccef4b774f5ccdc90afc812c7fa03a39498556\" id:\"7c5a3ccee7c8a140ac4d67d533ccef4b774f5ccdc90afc812c7fa03a39498556\" pid:3990 exited_at:{seconds:1768872002 nanos:205352227}" Jan 20 01:20:02.211235 containerd[1904]: time="2026-01-20T01:20:02.211172150Z" level=info msg="StartContainer for \"7c5a3ccee7c8a140ac4d67d533ccef4b774f5ccdc90afc812c7fa03a39498556\" returns successfully" Jan 20 01:20:02.228326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c5a3ccee7c8a140ac4d67d533ccef4b774f5ccdc90afc812c7fa03a39498556-rootfs.mount: Deactivated successfully. Jan 20 01:20:03.122666 containerd[1904]: time="2026-01-20T01:20:03.122572470Z" level=info msg="CreateContainer within sandbox \"56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 20 01:20:03.157561 containerd[1904]: time="2026-01-20T01:20:03.155623251Z" level=info msg="Container a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:20:03.176744 containerd[1904]: time="2026-01-20T01:20:03.176708518Z" level=info msg="CreateContainer within sandbox \"56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050\"" Jan 20 01:20:03.177229 containerd[1904]: time="2026-01-20T01:20:03.177206872Z" level=info msg="StartContainer for \"a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050\"" Jan 20 01:20:03.178555 containerd[1904]: time="2026-01-20T01:20:03.178531392Z" level=info msg="connecting to shim a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050" 
address="unix:///run/containerd/s/7503f968604665d651593e1dad068177e6fae9a2208ffe43de51c0f8ea14d8aa" protocol=ttrpc version=3 Jan 20 01:20:03.202528 systemd[1]: Started cri-containerd-a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050.scope - libcontainer container a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050. Jan 20 01:20:03.233249 containerd[1904]: time="2026-01-20T01:20:03.233217739Z" level=info msg="StartContainer for \"a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050\" returns successfully" Jan 20 01:20:03.354848 kubelet[3400]: I0120 01:20:03.354659 3400 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 01:20:03.392859 systemd[1]: Created slice kubepods-burstable-pod28e5fee0_29e9_477f_91e0_9b84f4220a3a.slice - libcontainer container kubepods-burstable-pod28e5fee0_29e9_477f_91e0_9b84f4220a3a.slice. Jan 20 01:20:03.403169 systemd[1]: Created slice kubepods-burstable-pod518f949c_a47a_44f0_be85_b6ae1f17b543.slice - libcontainer container kubepods-burstable-pod518f949c_a47a_44f0_be85_b6ae1f17b543.slice. 
Jan 20 01:20:03.475102 kubelet[3400]: I0120 01:20:03.475077 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/518f949c-a47a-44f0-be85-b6ae1f17b543-config-volume\") pod \"coredns-668d6bf9bc-h7bv9\" (UID: \"518f949c-a47a-44f0-be85-b6ae1f17b543\") " pod="kube-system/coredns-668d6bf9bc-h7bv9" Jan 20 01:20:03.475102 kubelet[3400]: I0120 01:20:03.475143 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l42g8\" (UniqueName: \"kubernetes.io/projected/518f949c-a47a-44f0-be85-b6ae1f17b543-kube-api-access-l42g8\") pod \"coredns-668d6bf9bc-h7bv9\" (UID: \"518f949c-a47a-44f0-be85-b6ae1f17b543\") " pod="kube-system/coredns-668d6bf9bc-h7bv9" Jan 20 01:20:03.475102 kubelet[3400]: I0120 01:20:03.475159 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28e5fee0-29e9-477f-91e0-9b84f4220a3a-config-volume\") pod \"coredns-668d6bf9bc-rhhxr\" (UID: \"28e5fee0-29e9-477f-91e0-9b84f4220a3a\") " pod="kube-system/coredns-668d6bf9bc-rhhxr" Jan 20 01:20:03.475102 kubelet[3400]: I0120 01:20:03.475171 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccmdw\" (UniqueName: \"kubernetes.io/projected/28e5fee0-29e9-477f-91e0-9b84f4220a3a-kube-api-access-ccmdw\") pod \"coredns-668d6bf9bc-rhhxr\" (UID: \"28e5fee0-29e9-477f-91e0-9b84f4220a3a\") " pod="kube-system/coredns-668d6bf9bc-rhhxr" Jan 20 01:20:03.698178 containerd[1904]: time="2026-01-20T01:20:03.697806889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rhhxr,Uid:28e5fee0-29e9-477f-91e0-9b84f4220a3a,Namespace:kube-system,Attempt:0,}" Jan 20 01:20:03.710264 containerd[1904]: time="2026-01-20T01:20:03.710109231Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-h7bv9,Uid:518f949c-a47a-44f0-be85-b6ae1f17b543,Namespace:kube-system,Attempt:0,}" Jan 20 01:20:04.141689 kubelet[3400]: I0120 01:20:04.141571 3400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rzk8c" podStartSLOduration=8.077960244 podStartE2EDuration="16.141556805s" podCreationTimestamp="2026-01-20 01:19:48 +0000 UTC" firstStartedPulling="2026-01-20 01:19:49.5385075 +0000 UTC m=+8.558714681" lastFinishedPulling="2026-01-20 01:19:57.602104053 +0000 UTC m=+16.622311242" observedRunningTime="2026-01-20 01:20:04.14101901 +0000 UTC m=+23.161226191" watchObservedRunningTime="2026-01-20 01:20:04.141556805 +0000 UTC m=+23.161763986" Jan 20 01:20:05.154396 systemd-networkd[1474]: cilium_host: Link UP Jan 20 01:20:05.155508 systemd-networkd[1474]: cilium_net: Link UP Jan 20 01:20:05.155607 systemd-networkd[1474]: cilium_net: Gained carrier Jan 20 01:20:05.155675 systemd-networkd[1474]: cilium_host: Gained carrier Jan 20 01:20:05.177522 systemd-networkd[1474]: cilium_net: Gained IPv6LL Jan 20 01:20:05.297238 systemd-networkd[1474]: cilium_vxlan: Link UP Jan 20 01:20:05.297444 systemd-networkd[1474]: cilium_vxlan: Gained carrier Jan 20 01:20:05.491442 kernel: NET: Registered PF_ALG protocol family Jan 20 01:20:05.745577 systemd-networkd[1474]: cilium_host: Gained IPv6LL Jan 20 01:20:05.949850 systemd-networkd[1474]: lxc_health: Link UP Jan 20 01:20:05.961026 systemd-networkd[1474]: lxc_health: Gained carrier Jan 20 01:20:06.234795 systemd-networkd[1474]: lxc6616d4087a16: Link UP Jan 20 01:20:06.235500 kernel: eth0: renamed from tmp97b47 Jan 20 01:20:06.237141 systemd-networkd[1474]: lxc6616d4087a16: Gained carrier Jan 20 01:20:06.250455 kernel: eth0: renamed from tmpc5b87 Jan 20 01:20:06.250664 systemd-networkd[1474]: lxc77f8a96ec263: Link UP Jan 20 01:20:06.255614 systemd-networkd[1474]: lxc77f8a96ec263: Gained carrier Jan 20 01:20:06.770591 systemd-networkd[1474]: cilium_vxlan: Gained 
IPv6LL Jan 20 01:20:07.281586 systemd-networkd[1474]: lxc_health: Gained IPv6LL Jan 20 01:20:07.985585 systemd-networkd[1474]: lxc6616d4087a16: Gained IPv6LL Jan 20 01:20:08.113569 systemd-networkd[1474]: lxc77f8a96ec263: Gained IPv6LL Jan 20 01:20:08.740439 containerd[1904]: time="2026-01-20T01:20:08.739626880Z" level=info msg="connecting to shim c5b8739f7e588c7761f46f393549ffb2a193d8b3932b26e6c07dc83a2963de4e" address="unix:///run/containerd/s/ea293bf39ec7dd659471db287cd8f51d857f40de5816acab4c303605e17c689a" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:20:08.757837 containerd[1904]: time="2026-01-20T01:20:08.757800514Z" level=info msg="connecting to shim 97b476b0af8e94475c335d113b9881437a1a867a061e908ae9edfe389eb8c562" address="unix:///run/containerd/s/8bba4e682532c1b2157ae7fb504590597aa82b60b3daa26e25a127dc694c54b4" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:20:08.775620 systemd[1]: Started cri-containerd-c5b8739f7e588c7761f46f393549ffb2a193d8b3932b26e6c07dc83a2963de4e.scope - libcontainer container c5b8739f7e588c7761f46f393549ffb2a193d8b3932b26e6c07dc83a2963de4e. Jan 20 01:20:08.778894 systemd[1]: Started cri-containerd-97b476b0af8e94475c335d113b9881437a1a867a061e908ae9edfe389eb8c562.scope - libcontainer container 97b476b0af8e94475c335d113b9881437a1a867a061e908ae9edfe389eb8c562. 
Jan 20 01:20:08.813297 containerd[1904]: time="2026-01-20T01:20:08.813258360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h7bv9,Uid:518f949c-a47a-44f0-be85-b6ae1f17b543,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5b8739f7e588c7761f46f393549ffb2a193d8b3932b26e6c07dc83a2963de4e\"" Jan 20 01:20:08.817403 containerd[1904]: time="2026-01-20T01:20:08.817369117Z" level=info msg="CreateContainer within sandbox \"c5b8739f7e588c7761f46f393549ffb2a193d8b3932b26e6c07dc83a2963de4e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:20:08.820729 containerd[1904]: time="2026-01-20T01:20:08.820255009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rhhxr,Uid:28e5fee0-29e9-477f-91e0-9b84f4220a3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"97b476b0af8e94475c335d113b9881437a1a867a061e908ae9edfe389eb8c562\"" Jan 20 01:20:08.823861 containerd[1904]: time="2026-01-20T01:20:08.823736657Z" level=info msg="CreateContainer within sandbox \"97b476b0af8e94475c335d113b9881437a1a867a061e908ae9edfe389eb8c562\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:20:08.851739 containerd[1904]: time="2026-01-20T01:20:08.851600848Z" level=info msg="Container 03080b246c8eacde4ac34d2577b6118c677787d6bb3a2dcfc227f12bb4c5e968: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:20:08.857117 containerd[1904]: time="2026-01-20T01:20:08.857093581Z" level=info msg="Container b3ca1da39984b41648b73e02c0fe083a21f04ae175cabc58878077778c600dc3: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:20:08.868334 containerd[1904]: time="2026-01-20T01:20:08.868284335Z" level=info msg="CreateContainer within sandbox \"97b476b0af8e94475c335d113b9881437a1a867a061e908ae9edfe389eb8c562\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"03080b246c8eacde4ac34d2577b6118c677787d6bb3a2dcfc227f12bb4c5e968\"" Jan 20 01:20:08.868856 containerd[1904]: 
time="2026-01-20T01:20:08.868685132Z" level=info msg="StartContainer for \"03080b246c8eacde4ac34d2577b6118c677787d6bb3a2dcfc227f12bb4c5e968\"" Jan 20 01:20:08.869586 containerd[1904]: time="2026-01-20T01:20:08.869499632Z" level=info msg="connecting to shim 03080b246c8eacde4ac34d2577b6118c677787d6bb3a2dcfc227f12bb4c5e968" address="unix:///run/containerd/s/8bba4e682532c1b2157ae7fb504590597aa82b60b3daa26e25a127dc694c54b4" protocol=ttrpc version=3 Jan 20 01:20:08.881393 containerd[1904]: time="2026-01-20T01:20:08.881365329Z" level=info msg="CreateContainer within sandbox \"c5b8739f7e588c7761f46f393549ffb2a193d8b3932b26e6c07dc83a2963de4e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b3ca1da39984b41648b73e02c0fe083a21f04ae175cabc58878077778c600dc3\"" Jan 20 01:20:08.881916 containerd[1904]: time="2026-01-20T01:20:08.881816361Z" level=info msg="StartContainer for \"b3ca1da39984b41648b73e02c0fe083a21f04ae175cabc58878077778c600dc3\"" Jan 20 01:20:08.882370 containerd[1904]: time="2026-01-20T01:20:08.882321218Z" level=info msg="connecting to shim b3ca1da39984b41648b73e02c0fe083a21f04ae175cabc58878077778c600dc3" address="unix:///run/containerd/s/ea293bf39ec7dd659471db287cd8f51d857f40de5816acab4c303605e17c689a" protocol=ttrpc version=3 Jan 20 01:20:08.884644 systemd[1]: Started cri-containerd-03080b246c8eacde4ac34d2577b6118c677787d6bb3a2dcfc227f12bb4c5e968.scope - libcontainer container 03080b246c8eacde4ac34d2577b6118c677787d6bb3a2dcfc227f12bb4c5e968. Jan 20 01:20:08.906531 systemd[1]: Started cri-containerd-b3ca1da39984b41648b73e02c0fe083a21f04ae175cabc58878077778c600dc3.scope - libcontainer container b3ca1da39984b41648b73e02c0fe083a21f04ae175cabc58878077778c600dc3. 
Jan 20 01:20:08.930339 containerd[1904]: time="2026-01-20T01:20:08.930261341Z" level=info msg="StartContainer for \"03080b246c8eacde4ac34d2577b6118c677787d6bb3a2dcfc227f12bb4c5e968\" returns successfully" Jan 20 01:20:08.939183 containerd[1904]: time="2026-01-20T01:20:08.939100557Z" level=info msg="StartContainer for \"b3ca1da39984b41648b73e02c0fe083a21f04ae175cabc58878077778c600dc3\" returns successfully" Jan 20 01:20:09.159100 kubelet[3400]: I0120 01:20:09.159050 3400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rhhxr" podStartSLOduration=21.159035866 podStartE2EDuration="21.159035866s" podCreationTimestamp="2026-01-20 01:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:20:09.158113235 +0000 UTC m=+28.178320480" watchObservedRunningTime="2026-01-20 01:20:09.159035866 +0000 UTC m=+28.179243048" Jan 20 01:20:09.191928 kubelet[3400]: I0120 01:20:09.191881 3400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-h7bv9" podStartSLOduration=21.191867557 podStartE2EDuration="21.191867557s" podCreationTimestamp="2026-01-20 01:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:20:09.191272136 +0000 UTC m=+28.211479365" watchObservedRunningTime="2026-01-20 01:20:09.191867557 +0000 UTC m=+28.212074746" Jan 20 01:20:15.163699 kubelet[3400]: I0120 01:20:15.163548 3400 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 01:21:12.480675 systemd[1]: Started sshd@7-10.200.20.24:22-10.200.16.10:58270.service - OpenSSH per-connection server daemon (10.200.16.10:58270). 
Jan 20 01:21:12.933436 sshd[4718]: Accepted publickey for core from 10.200.16.10 port 58270 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:21:12.934230 sshd-session[4718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:21:12.937778 systemd-logind[1878]: New session 10 of user core. Jan 20 01:21:12.945518 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 01:21:13.346445 sshd[4721]: Connection closed by 10.200.16.10 port 58270 Jan 20 01:21:13.347492 sshd-session[4718]: pam_unix(sshd:session): session closed for user core Jan 20 01:21:13.350276 systemd[1]: sshd@7-10.200.20.24:22-10.200.16.10:58270.service: Deactivated successfully. Jan 20 01:21:13.351787 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 01:21:13.352887 systemd-logind[1878]: Session 10 logged out. Waiting for processes to exit. Jan 20 01:21:13.354060 systemd-logind[1878]: Removed session 10. Jan 20 01:21:18.438595 systemd[1]: Started sshd@8-10.200.20.24:22-10.200.16.10:58286.service - OpenSSH per-connection server daemon (10.200.16.10:58286). Jan 20 01:21:18.928262 sshd[4733]: Accepted publickey for core from 10.200.16.10 port 58286 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:21:18.928992 sshd-session[4733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:21:18.932467 systemd-logind[1878]: New session 11 of user core. Jan 20 01:21:18.938535 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 20 01:21:19.313528 sshd[4736]: Connection closed by 10.200.16.10 port 58286 Jan 20 01:21:19.313987 sshd-session[4733]: pam_unix(sshd:session): session closed for user core Jan 20 01:21:19.316789 systemd[1]: sshd@8-10.200.20.24:22-10.200.16.10:58286.service: Deactivated successfully. Jan 20 01:21:19.318076 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 01:21:19.318698 systemd-logind[1878]: Session 11 logged out. 
Waiting for processes to exit. Jan 20 01:21:19.319757 systemd-logind[1878]: Removed session 11. Jan 20 01:21:24.395862 systemd[1]: Started sshd@9-10.200.20.24:22-10.200.16.10:52146.service - OpenSSH per-connection server daemon (10.200.16.10:52146). Jan 20 01:21:24.848721 sshd[4751]: Accepted publickey for core from 10.200.16.10 port 52146 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:21:24.849751 sshd-session[4751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:21:24.853150 systemd-logind[1878]: New session 12 of user core. Jan 20 01:21:24.859533 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 01:21:25.222868 sshd[4754]: Connection closed by 10.200.16.10 port 52146 Jan 20 01:21:25.223330 sshd-session[4751]: pam_unix(sshd:session): session closed for user core Jan 20 01:21:25.226938 systemd-logind[1878]: Session 12 logged out. Waiting for processes to exit. Jan 20 01:21:25.227333 systemd[1]: sshd@9-10.200.20.24:22-10.200.16.10:52146.service: Deactivated successfully. Jan 20 01:21:25.229425 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 01:21:25.231346 systemd-logind[1878]: Removed session 12. Jan 20 01:21:30.304848 systemd[1]: Started sshd@10-10.200.20.24:22-10.200.16.10:34630.service - OpenSSH per-connection server daemon (10.200.16.10:34630). Jan 20 01:21:30.754996 sshd[4767]: Accepted publickey for core from 10.200.16.10 port 34630 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:21:30.756035 sshd-session[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:21:30.759308 systemd-logind[1878]: New session 13 of user core. Jan 20 01:21:30.763516 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 20 01:21:31.123444 sshd[4770]: Connection closed by 10.200.16.10 port 34630 Jan 20 01:21:31.123344 sshd-session[4767]: pam_unix(sshd:session): session closed for user core Jan 20 01:21:31.126247 systemd[1]: sshd@10-10.200.20.24:22-10.200.16.10:34630.service: Deactivated successfully. Jan 20 01:21:31.127770 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 01:21:31.129292 systemd-logind[1878]: Session 13 logged out. Waiting for processes to exit. Jan 20 01:21:31.130781 systemd-logind[1878]: Removed session 13. Jan 20 01:21:31.203863 systemd[1]: Started sshd@11-10.200.20.24:22-10.200.16.10:34632.service - OpenSSH per-connection server daemon (10.200.16.10:34632). Jan 20 01:21:31.657450 sshd[4783]: Accepted publickey for core from 10.200.16.10 port 34632 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:21:31.658117 sshd-session[4783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:21:31.661553 systemd-logind[1878]: New session 14 of user core. Jan 20 01:21:31.666695 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 01:21:32.054632 sshd[4786]: Connection closed by 10.200.16.10 port 34632 Jan 20 01:21:32.054989 sshd-session[4783]: pam_unix(sshd:session): session closed for user core Jan 20 01:21:32.059183 systemd[1]: sshd@11-10.200.20.24:22-10.200.16.10:34632.service: Deactivated successfully. Jan 20 01:21:32.061055 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 01:21:32.062749 systemd-logind[1878]: Session 14 logged out. Waiting for processes to exit. Jan 20 01:21:32.064398 systemd-logind[1878]: Removed session 14. Jan 20 01:21:32.141329 systemd[1]: Started sshd@12-10.200.20.24:22-10.200.16.10:34642.service - OpenSSH per-connection server daemon (10.200.16.10:34642). 
Jan 20 01:21:32.634439 sshd[4796]: Accepted publickey for core from 10.200.16.10 port 34642 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:21:32.634998 sshd-session[4796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:21:32.638308 systemd-logind[1878]: New session 15 of user core. Jan 20 01:21:32.649619 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 01:21:33.022514 sshd[4799]: Connection closed by 10.200.16.10 port 34642 Jan 20 01:21:33.022161 sshd-session[4796]: pam_unix(sshd:session): session closed for user core Jan 20 01:21:33.025583 systemd[1]: sshd@12-10.200.20.24:22-10.200.16.10:34642.service: Deactivated successfully. Jan 20 01:21:33.027242 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 01:21:33.028325 systemd-logind[1878]: Session 15 logged out. Waiting for processes to exit. Jan 20 01:21:33.029660 systemd-logind[1878]: Removed session 15. Jan 20 01:21:38.114578 systemd[1]: Started sshd@13-10.200.20.24:22-10.200.16.10:34648.service - OpenSSH per-connection server daemon (10.200.16.10:34648). Jan 20 01:21:38.604632 sshd[4811]: Accepted publickey for core from 10.200.16.10 port 34648 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:21:38.605695 sshd-session[4811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:21:38.609097 systemd-logind[1878]: New session 16 of user core. Jan 20 01:21:38.624533 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 01:21:38.992587 sshd[4814]: Connection closed by 10.200.16.10 port 34648 Jan 20 01:21:38.993151 sshd-session[4811]: pam_unix(sshd:session): session closed for user core Jan 20 01:21:38.996082 systemd[1]: sshd@13-10.200.20.24:22-10.200.16.10:34648.service: Deactivated successfully. Jan 20 01:21:38.997586 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 01:21:38.998852 systemd-logind[1878]: Session 16 logged out. 
Waiting for processes to exit. Jan 20 01:21:39.000456 systemd-logind[1878]: Removed session 16. Jan 20 01:21:39.075492 systemd[1]: Started sshd@14-10.200.20.24:22-10.200.16.10:34664.service - OpenSSH per-connection server daemon (10.200.16.10:34664). Jan 20 01:21:39.538553 sshd[4826]: Accepted publickey for core from 10.200.16.10 port 34664 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:21:39.539598 sshd-session[4826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:21:39.543647 systemd-logind[1878]: New session 17 of user core. Jan 20 01:21:39.547516 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 01:21:39.940436 sshd[4829]: Connection closed by 10.200.16.10 port 34664 Jan 20 01:21:39.940898 sshd-session[4826]: pam_unix(sshd:session): session closed for user core Jan 20 01:21:39.944632 systemd-logind[1878]: Session 17 logged out. Waiting for processes to exit. Jan 20 01:21:39.945024 systemd[1]: sshd@14-10.200.20.24:22-10.200.16.10:34664.service: Deactivated successfully. Jan 20 01:21:39.946379 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 01:21:39.948002 systemd-logind[1878]: Removed session 17. Jan 20 01:21:40.025932 systemd[1]: Started sshd@15-10.200.20.24:22-10.200.16.10:50566.service - OpenSSH per-connection server daemon (10.200.16.10:50566). Jan 20 01:21:40.483559 sshd[4838]: Accepted publickey for core from 10.200.16.10 port 50566 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:21:40.484714 sshd-session[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:21:40.488467 systemd-logind[1878]: New session 18 of user core. Jan 20 01:21:40.494516 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 20 01:21:41.120768 sshd[4841]: Connection closed by 10.200.16.10 port 50566 Jan 20 01:21:41.120697 sshd-session[4838]: pam_unix(sshd:session): session closed for user core Jan 20 01:21:41.124725 systemd-logind[1878]: Session 18 logged out. Waiting for processes to exit. Jan 20 01:21:41.124854 systemd[1]: sshd@15-10.200.20.24:22-10.200.16.10:50566.service: Deactivated successfully. Jan 20 01:21:41.127585 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 01:21:41.128407 systemd-logind[1878]: Removed session 18. Jan 20 01:21:41.205553 systemd[1]: Started sshd@16-10.200.20.24:22-10.200.16.10:50570.service - OpenSSH per-connection server daemon (10.200.16.10:50570). Jan 20 01:21:41.694957 sshd[4860]: Accepted publickey for core from 10.200.16.10 port 50570 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:21:41.695986 sshd-session[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:21:41.699255 systemd-logind[1878]: New session 19 of user core. Jan 20 01:21:41.707617 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 01:21:42.159485 sshd[4863]: Connection closed by 10.200.16.10 port 50570 Jan 20 01:21:42.159779 sshd-session[4860]: pam_unix(sshd:session): session closed for user core Jan 20 01:21:42.163133 systemd[1]: sshd@16-10.200.20.24:22-10.200.16.10:50570.service: Deactivated successfully. Jan 20 01:21:42.165920 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 01:21:42.167441 systemd-logind[1878]: Session 19 logged out. Waiting for processes to exit. Jan 20 01:21:42.169110 systemd-logind[1878]: Removed session 19. Jan 20 01:21:42.251562 systemd[1]: Started sshd@17-10.200.20.24:22-10.200.16.10:50586.service - OpenSSH per-connection server daemon (10.200.16.10:50586). 
Jan 20 01:21:42.740591 sshd[4872]: Accepted publickey for core from 10.200.16.10 port 50586 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:21:42.741359 sshd-session[4872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:21:42.744731 systemd-logind[1878]: New session 20 of user core. Jan 20 01:21:42.755517 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 01:21:43.126596 sshd[4875]: Connection closed by 10.200.16.10 port 50586 Jan 20 01:21:43.126516 sshd-session[4872]: pam_unix(sshd:session): session closed for user core Jan 20 01:21:43.129871 systemd[1]: sshd@17-10.200.20.24:22-10.200.16.10:50586.service: Deactivated successfully. Jan 20 01:21:43.131632 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 01:21:43.133477 systemd-logind[1878]: Session 20 logged out. Waiting for processes to exit. Jan 20 01:21:43.134397 systemd-logind[1878]: Removed session 20. Jan 20 01:21:48.218991 systemd[1]: Started sshd@18-10.200.20.24:22-10.200.16.10:50594.service - OpenSSH per-connection server daemon (10.200.16.10:50594). Jan 20 01:21:48.714048 sshd[4890]: Accepted publickey for core from 10.200.16.10 port 50594 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:21:48.714832 sshd-session[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:21:48.718333 systemd-logind[1878]: New session 21 of user core. Jan 20 01:21:48.730530 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 01:21:49.099855 sshd[4893]: Connection closed by 10.200.16.10 port 50594 Jan 20 01:21:49.099772 sshd-session[4890]: pam_unix(sshd:session): session closed for user core Jan 20 01:21:49.103322 systemd[1]: sshd@18-10.200.20.24:22-10.200.16.10:50594.service: Deactivated successfully. Jan 20 01:21:49.104897 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 01:21:49.106772 systemd-logind[1878]: Session 21 logged out. 
Waiting for processes to exit. Jan 20 01:21:49.107793 systemd-logind[1878]: Removed session 21. Jan 20 01:21:54.207135 systemd[1]: Started sshd@19-10.200.20.24:22-10.200.16.10:58264.service - OpenSSH per-connection server daemon (10.200.16.10:58264). Jan 20 01:21:54.703715 sshd[4906]: Accepted publickey for core from 10.200.16.10 port 58264 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:21:54.704775 sshd-session[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:21:54.708429 systemd-logind[1878]: New session 22 of user core. Jan 20 01:21:54.715517 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 20 01:21:55.089880 sshd[4909]: Connection closed by 10.200.16.10 port 58264 Jan 20 01:21:55.090351 sshd-session[4906]: pam_unix(sshd:session): session closed for user core Jan 20 01:21:55.092938 systemd[1]: sshd@19-10.200.20.24:22-10.200.16.10:58264.service: Deactivated successfully. Jan 20 01:21:55.095687 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 01:21:55.096759 systemd-logind[1878]: Session 22 logged out. Waiting for processes to exit. Jan 20 01:21:55.098736 systemd-logind[1878]: Removed session 22. Jan 20 01:22:00.174633 systemd[1]: Started sshd@20-10.200.20.24:22-10.200.16.10:33788.service - OpenSSH per-connection server daemon (10.200.16.10:33788). Jan 20 01:22:00.626132 sshd[4921]: Accepted publickey for core from 10.200.16.10 port 33788 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:22:00.627179 sshd-session[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:22:00.631099 systemd-logind[1878]: New session 23 of user core. Jan 20 01:22:00.640540 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 20 01:22:00.994535 sshd[4924]: Connection closed by 10.200.16.10 port 33788 Jan 20 01:22:00.995156 sshd-session[4921]: pam_unix(sshd:session): session closed for user core Jan 20 01:22:00.998168 systemd[1]: sshd@20-10.200.20.24:22-10.200.16.10:33788.service: Deactivated successfully. Jan 20 01:22:00.999774 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 01:22:01.002513 systemd-logind[1878]: Session 23 logged out. Waiting for processes to exit. Jan 20 01:22:01.005198 systemd-logind[1878]: Removed session 23. Jan 20 01:22:01.080247 systemd[1]: Started sshd@21-10.200.20.24:22-10.200.16.10:33796.service - OpenSSH per-connection server daemon (10.200.16.10:33796). Jan 20 01:22:01.537447 sshd[4936]: Accepted publickey for core from 10.200.16.10 port 33796 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:22:01.538431 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:22:01.541714 systemd-logind[1878]: New session 24 of user core. Jan 20 01:22:01.548516 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 20 01:22:03.085111 containerd[1904]: time="2026-01-20T01:22:03.084805241Z" level=info msg="StopContainer for \"ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad\" with timeout 30 (s)" Jan 20 01:22:03.086226 containerd[1904]: time="2026-01-20T01:22:03.086208056Z" level=info msg="Stop container \"ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad\" with signal terminated" Jan 20 01:22:03.096881 systemd[1]: cri-containerd-ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad.scope: Deactivated successfully. 
Jan 20 01:22:03.098429 containerd[1904]: time="2026-01-20T01:22:03.098384290Z" level=info msg="received container exit event container_id:\"ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad\" id:\"ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad\" pid:3797 exited_at:{seconds:1768872123 nanos:98037423}" Jan 20 01:22:03.109740 containerd[1904]: time="2026-01-20T01:22:03.109712751Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 01:22:03.116908 containerd[1904]: time="2026-01-20T01:22:03.116889129Z" level=info msg="StopContainer for \"a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050\" with timeout 2 (s)" Jan 20 01:22:03.117598 containerd[1904]: time="2026-01-20T01:22:03.117549039Z" level=info msg="Stop container \"a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050\" with signal terminated" Jan 20 01:22:03.120281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad-rootfs.mount: Deactivated successfully. Jan 20 01:22:03.124825 systemd-networkd[1474]: lxc_health: Link DOWN Jan 20 01:22:03.124835 systemd-networkd[1474]: lxc_health: Lost carrier Jan 20 01:22:03.137805 systemd[1]: cri-containerd-a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050.scope: Deactivated successfully. Jan 20 01:22:03.138020 systemd[1]: cri-containerd-a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050.scope: Consumed 4.223s CPU time, 123.4M memory peak, 128K read from disk, 12.9M written to disk. 
Jan 20 01:22:03.140008 containerd[1904]: time="2026-01-20T01:22:03.139928608Z" level=info msg="received container exit event container_id:\"a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050\" id:\"a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050\" pid:4027 exited_at:{seconds:1768872123 nanos:139499538}" Jan 20 01:22:03.154519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050-rootfs.mount: Deactivated successfully. Jan 20 01:22:03.208164 containerd[1904]: time="2026-01-20T01:22:03.208081078Z" level=info msg="StopContainer for \"a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050\" returns successfully" Jan 20 01:22:03.208680 containerd[1904]: time="2026-01-20T01:22:03.208661313Z" level=info msg="StopPodSandbox for \"56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137\"" Jan 20 01:22:03.208879 containerd[1904]: time="2026-01-20T01:22:03.208845175Z" level=info msg="Container to stop \"ae6b63b58972683cae5b9a281419586320799d4b353226c7213e282ee26afc9f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 01:22:03.209097 containerd[1904]: time="2026-01-20T01:22:03.208861272Z" level=info msg="Container to stop \"6bc09c943ac936a11c44489236fc6d09c0abac362b1185c6835514331561558f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 01:22:03.209097 containerd[1904]: time="2026-01-20T01:22:03.208938179Z" level=info msg="Container to stop \"62a38fbfc7088fcd95cf9f060d19a4adfe585294f8c9fb4c17697638ae9fab00\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 01:22:03.209097 containerd[1904]: time="2026-01-20T01:22:03.208948195Z" level=info msg="Container to stop \"7c5a3ccee7c8a140ac4d67d533ccef4b774f5ccdc90afc812c7fa03a39498556\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 01:22:03.209097 containerd[1904]: 
time="2026-01-20T01:22:03.208957899Z" level=info msg="Container to stop \"a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 01:22:03.212683 containerd[1904]: time="2026-01-20T01:22:03.212657592Z" level=info msg="StopContainer for \"ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad\" returns successfully" Jan 20 01:22:03.213081 containerd[1904]: time="2026-01-20T01:22:03.213047877Z" level=info msg="StopPodSandbox for \"a7a95016fbe31e26245d82c9c21e6e4db4589205ea0598726d4ea888e080bdce\"" Jan 20 01:22:03.213134 containerd[1904]: time="2026-01-20T01:22:03.213097822Z" level=info msg="Container to stop \"ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 01:22:03.214942 systemd[1]: cri-containerd-56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137.scope: Deactivated successfully. Jan 20 01:22:03.219114 containerd[1904]: time="2026-01-20T01:22:03.219078144Z" level=info msg="received sandbox exit event container_id:\"56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137\" id:\"56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137\" exit_status:137 exited_at:{seconds:1768872123 nanos:218342415}" monitor_name=podsandbox Jan 20 01:22:03.226875 systemd[1]: cri-containerd-a7a95016fbe31e26245d82c9c21e6e4db4589205ea0598726d4ea888e080bdce.scope: Deactivated successfully. 
Jan 20 01:22:03.234943 containerd[1904]: time="2026-01-20T01:22:03.234901268Z" level=info msg="received sandbox exit event container_id:\"a7a95016fbe31e26245d82c9c21e6e4db4589205ea0598726d4ea888e080bdce\" id:\"a7a95016fbe31e26245d82c9c21e6e4db4589205ea0598726d4ea888e080bdce\" exit_status:137 exited_at:{seconds:1768872123 nanos:234779576}" monitor_name=podsandbox Jan 20 01:22:03.239108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137-rootfs.mount: Deactivated successfully. Jan 20 01:22:03.250735 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7a95016fbe31e26245d82c9c21e6e4db4589205ea0598726d4ea888e080bdce-rootfs.mount: Deactivated successfully. Jan 20 01:22:03.257950 containerd[1904]: time="2026-01-20T01:22:03.257931235Z" level=info msg="shim disconnected" id=56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137 namespace=k8s.io Jan 20 01:22:03.258723 containerd[1904]: time="2026-01-20T01:22:03.258593690Z" level=warning msg="cleaning up after shim disconnected" id=56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137 namespace=k8s.io Jan 20 01:22:03.258723 containerd[1904]: time="2026-01-20T01:22:03.258626691Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 01:22:03.258723 containerd[1904]: time="2026-01-20T01:22:03.258494510Z" level=info msg="shim disconnected" id=a7a95016fbe31e26245d82c9c21e6e4db4589205ea0598726d4ea888e080bdce namespace=k8s.io Jan 20 01:22:03.258723 containerd[1904]: time="2026-01-20T01:22:03.258669652Z" level=warning msg="cleaning up after shim disconnected" id=a7a95016fbe31e26245d82c9c21e6e4db4589205ea0598726d4ea888e080bdce namespace=k8s.io Jan 20 01:22:03.258723 containerd[1904]: time="2026-01-20T01:22:03.258694421Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 01:22:03.267354 containerd[1904]: time="2026-01-20T01:22:03.267323247Z" level=info msg="received sandbox container exit event 
sandbox_id:\"a7a95016fbe31e26245d82c9c21e6e4db4589205ea0598726d4ea888e080bdce\" exit_status:137 exited_at:{seconds:1768872123 nanos:234779576}" monitor_name=criService Jan 20 01:22:03.268736 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a7a95016fbe31e26245d82c9c21e6e4db4589205ea0598726d4ea888e080bdce-shm.mount: Deactivated successfully. Jan 20 01:22:03.269749 containerd[1904]: time="2026-01-20T01:22:03.268785153Z" level=info msg="TearDown network for sandbox \"a7a95016fbe31e26245d82c9c21e6e4db4589205ea0598726d4ea888e080bdce\" successfully" Jan 20 01:22:03.269749 containerd[1904]: time="2026-01-20T01:22:03.268803737Z" level=info msg="StopPodSandbox for \"a7a95016fbe31e26245d82c9c21e6e4db4589205ea0598726d4ea888e080bdce\" returns successfully" Jan 20 01:22:03.269749 containerd[1904]: time="2026-01-20T01:22:03.268952726Z" level=info msg="received sandbox container exit event sandbox_id:\"56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137\" exit_status:137 exited_at:{seconds:1768872123 nanos:218342415}" monitor_name=criService Jan 20 01:22:03.270337 containerd[1904]: time="2026-01-20T01:22:03.270264714Z" level=info msg="TearDown network for sandbox \"56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137\" successfully" Jan 20 01:22:03.270337 containerd[1904]: time="2026-01-20T01:22:03.270282531Z" level=info msg="StopPodSandbox for \"56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137\" returns successfully" Jan 20 01:22:03.325913 kubelet[3400]: I0120 01:22:03.325883 3400 scope.go:117] "RemoveContainer" containerID="a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050" Jan 20 01:22:03.329136 containerd[1904]: time="2026-01-20T01:22:03.329090174Z" level=info msg="RemoveContainer for \"a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050\"" Jan 20 01:22:03.343220 containerd[1904]: time="2026-01-20T01:22:03.342650190Z" level=info msg="RemoveContainer for 
\"a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050\" returns successfully" Jan 20 01:22:03.343271 kubelet[3400]: I0120 01:22:03.342832 3400 scope.go:117] "RemoveContainer" containerID="7c5a3ccee7c8a140ac4d67d533ccef4b774f5ccdc90afc812c7fa03a39498556" Jan 20 01:22:03.344106 containerd[1904]: time="2026-01-20T01:22:03.344083655Z" level=info msg="RemoveContainer for \"7c5a3ccee7c8a140ac4d67d533ccef4b774f5ccdc90afc812c7fa03a39498556\"" Jan 20 01:22:03.356878 containerd[1904]: time="2026-01-20T01:22:03.356849980Z" level=info msg="RemoveContainer for \"7c5a3ccee7c8a140ac4d67d533ccef4b774f5ccdc90afc812c7fa03a39498556\" returns successfully" Jan 20 01:22:03.357092 kubelet[3400]: I0120 01:22:03.357064 3400 scope.go:117] "RemoveContainer" containerID="6bc09c943ac936a11c44489236fc6d09c0abac362b1185c6835514331561558f" Jan 20 01:22:03.359038 containerd[1904]: time="2026-01-20T01:22:03.358809718Z" level=info msg="RemoveContainer for \"6bc09c943ac936a11c44489236fc6d09c0abac362b1185c6835514331561558f\"" Jan 20 01:22:03.367346 containerd[1904]: time="2026-01-20T01:22:03.367318821Z" level=info msg="RemoveContainer for \"6bc09c943ac936a11c44489236fc6d09c0abac362b1185c6835514331561558f\" returns successfully" Jan 20 01:22:03.367589 kubelet[3400]: I0120 01:22:03.367488 3400 scope.go:117] "RemoveContainer" containerID="62a38fbfc7088fcd95cf9f060d19a4adfe585294f8c9fb4c17697638ae9fab00" Jan 20 01:22:03.368817 containerd[1904]: time="2026-01-20T01:22:03.368784670Z" level=info msg="RemoveContainer for \"62a38fbfc7088fcd95cf9f060d19a4adfe585294f8c9fb4c17697638ae9fab00\"" Jan 20 01:22:03.381435 containerd[1904]: time="2026-01-20T01:22:03.380853028Z" level=info msg="RemoveContainer for \"62a38fbfc7088fcd95cf9f060d19a4adfe585294f8c9fb4c17697638ae9fab00\" returns successfully" Jan 20 01:22:03.381723 kubelet[3400]: I0120 01:22:03.381702 3400 scope.go:117] "RemoveContainer" containerID="ae6b63b58972683cae5b9a281419586320799d4b353226c7213e282ee26afc9f" Jan 20 01:22:03.387087 
containerd[1904]: time="2026-01-20T01:22:03.387017988Z" level=info msg="RemoveContainer for \"ae6b63b58972683cae5b9a281419586320799d4b353226c7213e282ee26afc9f\"" Jan 20 01:22:03.392598 kubelet[3400]: I0120 01:22:03.392573 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7skgs\" (UniqueName: \"kubernetes.io/projected/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-kube-api-access-7skgs\") pod \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " Jan 20 01:22:03.392652 kubelet[3400]: I0120 01:22:03.392601 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-hubble-tls\") pod \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " Jan 20 01:22:03.392652 kubelet[3400]: I0120 01:22:03.392616 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-hostproc\") pod \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " Jan 20 01:22:03.392652 kubelet[3400]: I0120 01:22:03.392628 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-host-proc-sys-kernel\") pod \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " Jan 20 01:22:03.392652 kubelet[3400]: I0120 01:22:03.392638 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-bpf-maps\") pod \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " Jan 20 01:22:03.392652 kubelet[3400]: I0120 01:22:03.392648 3400 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx4qw\" (UniqueName: \"kubernetes.io/projected/a3879dac-8d0a-4c9f-9bb6-5ced8d49eb5a-kube-api-access-lx4qw\") pod \"a3879dac-8d0a-4c9f-9bb6-5ced8d49eb5a\" (UID: \"a3879dac-8d0a-4c9f-9bb6-5ced8d49eb5a\") " Jan 20 01:22:03.392738 kubelet[3400]: I0120 01:22:03.392658 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-cni-path\") pod \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " Jan 20 01:22:03.392738 kubelet[3400]: I0120 01:22:03.392668 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-lib-modules\") pod \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " Jan 20 01:22:03.392738 kubelet[3400]: I0120 01:22:03.392676 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-cilium-cgroup\") pod \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " Jan 20 01:22:03.392738 kubelet[3400]: I0120 01:22:03.392684 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-xtables-lock\") pod \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " Jan 20 01:22:03.392738 kubelet[3400]: I0120 01:22:03.392714 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-clustermesh-secrets\") pod \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\" (UID: 
\"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " Jan 20 01:22:03.392738 kubelet[3400]: I0120 01:22:03.392724 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-etc-cni-netd\") pod \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " Jan 20 01:22:03.392825 kubelet[3400]: I0120 01:22:03.392733 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-host-proc-sys-net\") pod \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " Jan 20 01:22:03.392825 kubelet[3400]: I0120 01:22:03.392743 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3879dac-8d0a-4c9f-9bb6-5ced8d49eb5a-cilium-config-path\") pod \"a3879dac-8d0a-4c9f-9bb6-5ced8d49eb5a\" (UID: \"a3879dac-8d0a-4c9f-9bb6-5ced8d49eb5a\") " Jan 20 01:22:03.392825 kubelet[3400]: I0120 01:22:03.392767 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-cilium-run\") pod \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " Jan 20 01:22:03.392825 kubelet[3400]: I0120 01:22:03.392779 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-cilium-config-path\") pod \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\" (UID: \"6fe26685-4ccf-4ef5-870f-21cf5b7e5660\") " Jan 20 01:22:03.393530 kubelet[3400]: I0120 01:22:03.393480 3400 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6fe26685-4ccf-4ef5-870f-21cf5b7e5660" (UID: "6fe26685-4ccf-4ef5-870f-21cf5b7e5660"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 01:22:03.394119 kubelet[3400]: I0120 01:22:03.394044 3400 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6fe26685-4ccf-4ef5-870f-21cf5b7e5660" (UID: "6fe26685-4ccf-4ef5-870f-21cf5b7e5660"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 01:22:03.394191 kubelet[3400]: I0120 01:22:03.394171 3400 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6fe26685-4ccf-4ef5-870f-21cf5b7e5660" (UID: "6fe26685-4ccf-4ef5-870f-21cf5b7e5660"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 01:22:03.394226 kubelet[3400]: I0120 01:22:03.394193 3400 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6fe26685-4ccf-4ef5-870f-21cf5b7e5660" (UID: "6fe26685-4ccf-4ef5-870f-21cf5b7e5660"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 01:22:03.394654 kubelet[3400]: I0120 01:22:03.394601 3400 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6fe26685-4ccf-4ef5-870f-21cf5b7e5660" (UID: "6fe26685-4ccf-4ef5-870f-21cf5b7e5660"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 01:22:03.394654 kubelet[3400]: I0120 01:22:03.394628 3400 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6fe26685-4ccf-4ef5-870f-21cf5b7e5660" (UID: "6fe26685-4ccf-4ef5-870f-21cf5b7e5660"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 01:22:03.394890 kubelet[3400]: I0120 01:22:03.394834 3400 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-hostproc" (OuterVolumeSpecName: "hostproc") pod "6fe26685-4ccf-4ef5-870f-21cf5b7e5660" (UID: "6fe26685-4ccf-4ef5-870f-21cf5b7e5660"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 01:22:03.394890 kubelet[3400]: I0120 01:22:03.394857 3400 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6fe26685-4ccf-4ef5-870f-21cf5b7e5660" (UID: "6fe26685-4ccf-4ef5-870f-21cf5b7e5660"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 01:22:03.394890 kubelet[3400]: I0120 01:22:03.394868 3400 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6fe26685-4ccf-4ef5-870f-21cf5b7e5660" (UID: "6fe26685-4ccf-4ef5-870f-21cf5b7e5660"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 01:22:03.395227 kubelet[3400]: I0120 01:22:03.395186 3400 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-cni-path" (OuterVolumeSpecName: "cni-path") pod "6fe26685-4ccf-4ef5-870f-21cf5b7e5660" (UID: "6fe26685-4ccf-4ef5-870f-21cf5b7e5660"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 01:22:03.395419 kubelet[3400]: I0120 01:22:03.395214 3400 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6fe26685-4ccf-4ef5-870f-21cf5b7e5660" (UID: "6fe26685-4ccf-4ef5-870f-21cf5b7e5660"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 01:22:03.395697 containerd[1904]: time="2026-01-20T01:22:03.395632725Z" level=info msg="RemoveContainer for \"ae6b63b58972683cae5b9a281419586320799d4b353226c7213e282ee26afc9f\" returns successfully" Jan 20 01:22:03.395990 kubelet[3400]: I0120 01:22:03.395957 3400 scope.go:117] "RemoveContainer" containerID="a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050" Jan 20 01:22:03.396436 containerd[1904]: time="2026-01-20T01:22:03.396366430Z" level=error msg="ContainerStatus for \"a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050\": not found" Jan 20 01:22:03.398916 kubelet[3400]: I0120 01:22:03.398495 3400 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3879dac-8d0a-4c9f-9bb6-5ced8d49eb5a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a3879dac-8d0a-4c9f-9bb6-5ced8d49eb5a" (UID: "a3879dac-8d0a-4c9f-9bb6-5ced8d49eb5a"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 01:22:03.399095 kubelet[3400]: E0120 01:22:03.399077 3400 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050\": not found" containerID="a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050" Jan 20 01:22:03.399226 kubelet[3400]: I0120 01:22:03.399162 3400 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050"} err="failed to get container status \"a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050\": rpc error: code = NotFound desc = an error occurred when try to find container \"a96ab19092f3518d87f857acc32bc3ff268efa7ece182a4dcb002d892e745050\": not found" Jan 20 01:22:03.399299 kubelet[3400]: I0120 01:22:03.399289 3400 scope.go:117] "RemoveContainer" containerID="7c5a3ccee7c8a140ac4d67d533ccef4b774f5ccdc90afc812c7fa03a39498556" Jan 20 01:22:03.399484 kubelet[3400]: I0120 01:22:03.399465 3400 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6fe26685-4ccf-4ef5-870f-21cf5b7e5660" (UID: "6fe26685-4ccf-4ef5-870f-21cf5b7e5660"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 01:22:03.399608 containerd[1904]: time="2026-01-20T01:22:03.399583762Z" level=error msg="ContainerStatus for \"7c5a3ccee7c8a140ac4d67d533ccef4b774f5ccdc90afc812c7fa03a39498556\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c5a3ccee7c8a140ac4d67d533ccef4b774f5ccdc90afc812c7fa03a39498556\": not found" Jan 20 01:22:03.399747 kubelet[3400]: E0120 01:22:03.399731 3400 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c5a3ccee7c8a140ac4d67d533ccef4b774f5ccdc90afc812c7fa03a39498556\": not found" containerID="7c5a3ccee7c8a140ac4d67d533ccef4b774f5ccdc90afc812c7fa03a39498556" Jan 20 01:22:03.399818 kubelet[3400]: I0120 01:22:03.399805 3400 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c5a3ccee7c8a140ac4d67d533ccef4b774f5ccdc90afc812c7fa03a39498556"} err="failed to get container status \"7c5a3ccee7c8a140ac4d67d533ccef4b774f5ccdc90afc812c7fa03a39498556\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c5a3ccee7c8a140ac4d67d533ccef4b774f5ccdc90afc812c7fa03a39498556\": not found" Jan 20 01:22:03.400090 kubelet[3400]: I0120 01:22:03.399870 3400 scope.go:117] "RemoveContainer" containerID="6bc09c943ac936a11c44489236fc6d09c0abac362b1185c6835514331561558f" Jan 20 01:22:03.400144 containerd[1904]: time="2026-01-20T01:22:03.400035818Z" level=error msg="ContainerStatus for \"6bc09c943ac936a11c44489236fc6d09c0abac362b1185c6835514331561558f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6bc09c943ac936a11c44489236fc6d09c0abac362b1185c6835514331561558f\": not found" Jan 20 01:22:03.400166 kubelet[3400]: I0120 01:22:03.400114 3400 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6fe26685-4ccf-4ef5-870f-21cf5b7e5660" (UID: "6fe26685-4ccf-4ef5-870f-21cf5b7e5660"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 01:22:03.400270 kubelet[3400]: E0120 01:22:03.400180 3400 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6bc09c943ac936a11c44489236fc6d09c0abac362b1185c6835514331561558f\": not found" containerID="6bc09c943ac936a11c44489236fc6d09c0abac362b1185c6835514331561558f" Jan 20 01:22:03.400270 kubelet[3400]: I0120 01:22:03.400202 3400 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6bc09c943ac936a11c44489236fc6d09c0abac362b1185c6835514331561558f"} err="failed to get container status \"6bc09c943ac936a11c44489236fc6d09c0abac362b1185c6835514331561558f\": rpc error: code = NotFound desc = an error occurred when try to find container \"6bc09c943ac936a11c44489236fc6d09c0abac362b1185c6835514331561558f\": not found" Jan 20 01:22:03.400270 kubelet[3400]: I0120 01:22:03.400214 3400 scope.go:117] "RemoveContainer" containerID="62a38fbfc7088fcd95cf9f060d19a4adfe585294f8c9fb4c17697638ae9fab00" Jan 20 01:22:03.400382 containerd[1904]: time="2026-01-20T01:22:03.400340140Z" level=error msg="ContainerStatus for \"62a38fbfc7088fcd95cf9f060d19a4adfe585294f8c9fb4c17697638ae9fab00\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"62a38fbfc7088fcd95cf9f060d19a4adfe585294f8c9fb4c17697638ae9fab00\": not found" Jan 20 01:22:03.400773 kubelet[3400]: E0120 01:22:03.400449 3400 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"62a38fbfc7088fcd95cf9f060d19a4adfe585294f8c9fb4c17697638ae9fab00\": not found" 
containerID="62a38fbfc7088fcd95cf9f060d19a4adfe585294f8c9fb4c17697638ae9fab00" Jan 20 01:22:03.400773 kubelet[3400]: I0120 01:22:03.400468 3400 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"62a38fbfc7088fcd95cf9f060d19a4adfe585294f8c9fb4c17697638ae9fab00"} err="failed to get container status \"62a38fbfc7088fcd95cf9f060d19a4adfe585294f8c9fb4c17697638ae9fab00\": rpc error: code = NotFound desc = an error occurred when try to find container \"62a38fbfc7088fcd95cf9f060d19a4adfe585294f8c9fb4c17697638ae9fab00\": not found" Jan 20 01:22:03.400773 kubelet[3400]: I0120 01:22:03.400481 3400 scope.go:117] "RemoveContainer" containerID="ae6b63b58972683cae5b9a281419586320799d4b353226c7213e282ee26afc9f" Jan 20 01:22:03.400773 kubelet[3400]: E0120 01:22:03.400722 3400 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae6b63b58972683cae5b9a281419586320799d4b353226c7213e282ee26afc9f\": not found" containerID="ae6b63b58972683cae5b9a281419586320799d4b353226c7213e282ee26afc9f" Jan 20 01:22:03.400773 kubelet[3400]: I0120 01:22:03.400736 3400 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ae6b63b58972683cae5b9a281419586320799d4b353226c7213e282ee26afc9f"} err="failed to get container status \"ae6b63b58972683cae5b9a281419586320799d4b353226c7213e282ee26afc9f\": rpc error: code = NotFound desc = an error occurred when try to find container \"ae6b63b58972683cae5b9a281419586320799d4b353226c7213e282ee26afc9f\": not found" Jan 20 01:22:03.400773 kubelet[3400]: I0120 01:22:03.400748 3400 scope.go:117] "RemoveContainer" containerID="ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad" Jan 20 01:22:03.400943 containerd[1904]: time="2026-01-20T01:22:03.400629718Z" level=error msg="ContainerStatus for \"ae6b63b58972683cae5b9a281419586320799d4b353226c7213e282ee26afc9f\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"ae6b63b58972683cae5b9a281419586320799d4b353226c7213e282ee26afc9f\": not found" Jan 20 01:22:03.401361 kubelet[3400]: I0120 01:22:03.401331 3400 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3879dac-8d0a-4c9f-9bb6-5ced8d49eb5a-kube-api-access-lx4qw" (OuterVolumeSpecName: "kube-api-access-lx4qw") pod "a3879dac-8d0a-4c9f-9bb6-5ced8d49eb5a" (UID: "a3879dac-8d0a-4c9f-9bb6-5ced8d49eb5a"). InnerVolumeSpecName "kube-api-access-lx4qw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 01:22:03.401849 containerd[1904]: time="2026-01-20T01:22:03.401820654Z" level=info msg="RemoveContainer for \"ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad\"" Jan 20 01:22:03.402243 kubelet[3400]: I0120 01:22:03.402215 3400 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-kube-api-access-7skgs" (OuterVolumeSpecName: "kube-api-access-7skgs") pod "6fe26685-4ccf-4ef5-870f-21cf5b7e5660" (UID: "6fe26685-4ccf-4ef5-870f-21cf5b7e5660"). InnerVolumeSpecName "kube-api-access-7skgs". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 01:22:03.409615 containerd[1904]: time="2026-01-20T01:22:03.409588331Z" level=info msg="RemoveContainer for \"ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad\" returns successfully" Jan 20 01:22:03.409871 kubelet[3400]: I0120 01:22:03.409826 3400 scope.go:117] "RemoveContainer" containerID="ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad" Jan 20 01:22:03.410077 containerd[1904]: time="2026-01-20T01:22:03.410044322Z" level=error msg="ContainerStatus for \"ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad\": not found" Jan 20 01:22:03.410164 kubelet[3400]: E0120 01:22:03.410141 3400 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad\": not found" containerID="ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad" Jan 20 01:22:03.410187 kubelet[3400]: I0120 01:22:03.410165 3400 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad"} err="failed to get container status \"ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"ffd49ecade77068f9679ec428e341401a6c9a0ec207032232ac47745347d01ad\": not found" Jan 20 01:22:03.493594 kubelet[3400]: I0120 01:22:03.493559 3400 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-cni-path\") on node \"ci-4459.2.2-n-4dd77badda\" DevicePath \"\"" Jan 20 01:22:03.493594 kubelet[3400]: I0120 01:22:03.493589 
3400 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-host-proc-sys-kernel\") on node \"ci-4459.2.2-n-4dd77badda\" DevicePath \"\"" Jan 20 01:22:03.493594 kubelet[3400]: I0120 01:22:03.493601 3400 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-bpf-maps\") on node \"ci-4459.2.2-n-4dd77badda\" DevicePath \"\"" Jan 20 01:22:03.493744 kubelet[3400]: I0120 01:22:03.493609 3400 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lx4qw\" (UniqueName: \"kubernetes.io/projected/a3879dac-8d0a-4c9f-9bb6-5ced8d49eb5a-kube-api-access-lx4qw\") on node \"ci-4459.2.2-n-4dd77badda\" DevicePath \"\"" Jan 20 01:22:03.493744 kubelet[3400]: I0120 01:22:03.493618 3400 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-lib-modules\") on node \"ci-4459.2.2-n-4dd77badda\" DevicePath \"\"" Jan 20 01:22:03.493744 kubelet[3400]: I0120 01:22:03.493624 3400 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-cilium-cgroup\") on node \"ci-4459.2.2-n-4dd77badda\" DevicePath \"\"" Jan 20 01:22:03.493744 kubelet[3400]: I0120 01:22:03.493630 3400 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-xtables-lock\") on node \"ci-4459.2.2-n-4dd77badda\" DevicePath \"\"" Jan 20 01:22:03.493744 kubelet[3400]: I0120 01:22:03.493636 3400 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-clustermesh-secrets\") on node \"ci-4459.2.2-n-4dd77badda\" DevicePath \"\"" Jan 20 01:22:03.493744 kubelet[3400]: I0120 
01:22:03.493641 3400 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-etc-cni-netd\") on node \"ci-4459.2.2-n-4dd77badda\" DevicePath \"\"" Jan 20 01:22:03.493744 kubelet[3400]: I0120 01:22:03.493647 3400 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-host-proc-sys-net\") on node \"ci-4459.2.2-n-4dd77badda\" DevicePath \"\"" Jan 20 01:22:03.493744 kubelet[3400]: I0120 01:22:03.493653 3400 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3879dac-8d0a-4c9f-9bb6-5ced8d49eb5a-cilium-config-path\") on node \"ci-4459.2.2-n-4dd77badda\" DevicePath \"\"" Jan 20 01:22:03.493859 kubelet[3400]: I0120 01:22:03.493658 3400 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-cilium-run\") on node \"ci-4459.2.2-n-4dd77badda\" DevicePath \"\"" Jan 20 01:22:03.493859 kubelet[3400]: I0120 01:22:03.493664 3400 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-cilium-config-path\") on node \"ci-4459.2.2-n-4dd77badda\" DevicePath \"\"" Jan 20 01:22:03.493859 kubelet[3400]: I0120 01:22:03.493670 3400 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-hubble-tls\") on node \"ci-4459.2.2-n-4dd77badda\" DevicePath \"\"" Jan 20 01:22:03.493859 kubelet[3400]: I0120 01:22:03.493675 3400 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7skgs\" (UniqueName: \"kubernetes.io/projected/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-kube-api-access-7skgs\") on node \"ci-4459.2.2-n-4dd77badda\" DevicePath \"\"" Jan 20 01:22:03.493859 
kubelet[3400]: I0120 01:22:03.493681 3400 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6fe26685-4ccf-4ef5-870f-21cf5b7e5660-hostproc\") on node \"ci-4459.2.2-n-4dd77badda\" DevicePath \"\"" Jan 20 01:22:03.631479 systemd[1]: Removed slice kubepods-burstable-pod6fe26685_4ccf_4ef5_870f_21cf5b7e5660.slice - libcontainer container kubepods-burstable-pod6fe26685_4ccf_4ef5_870f_21cf5b7e5660.slice. Jan 20 01:22:03.631552 systemd[1]: kubepods-burstable-pod6fe26685_4ccf_4ef5_870f_21cf5b7e5660.slice: Consumed 4.283s CPU time, 123.8M memory peak, 128K read from disk, 12.9M written to disk. Jan 20 01:22:03.633277 systemd[1]: Removed slice kubepods-besteffort-poda3879dac_8d0a_4c9f_9bb6_5ced8d49eb5a.slice - libcontainer container kubepods-besteffort-poda3879dac_8d0a_4c9f_9bb6_5ced8d49eb5a.slice. Jan 20 01:22:04.120057 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-56c3b6eb53a08f91cb323e6667e3fc1c6cf9b389854a9f88563beb2876bbd137-shm.mount: Deactivated successfully. Jan 20 01:22:04.120144 systemd[1]: var-lib-kubelet-pods-6fe26685\x2d4ccf\x2d4ef5\x2d870f\x2d21cf5b7e5660-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7skgs.mount: Deactivated successfully. Jan 20 01:22:04.120193 systemd[1]: var-lib-kubelet-pods-a3879dac\x2d8d0a\x2d4c9f\x2d9bb6\x2d5ced8d49eb5a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlx4qw.mount: Deactivated successfully. Jan 20 01:22:04.120229 systemd[1]: var-lib-kubelet-pods-6fe26685\x2d4ccf\x2d4ef5\x2d870f\x2d21cf5b7e5660-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 20 01:22:04.120265 systemd[1]: var-lib-kubelet-pods-6fe26685\x2d4ccf\x2d4ef5\x2d870f\x2d21cf5b7e5660-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 20 01:22:05.047901 kubelet[3400]: I0120 01:22:05.047860 3400 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fe26685-4ccf-4ef5-870f-21cf5b7e5660" path="/var/lib/kubelet/pods/6fe26685-4ccf-4ef5-870f-21cf5b7e5660/volumes" Jan 20 01:22:05.048303 kubelet[3400]: I0120 01:22:05.048280 3400 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3879dac-8d0a-4c9f-9bb6-5ced8d49eb5a" path="/var/lib/kubelet/pods/a3879dac-8d0a-4c9f-9bb6-5ced8d49eb5a/volumes" Jan 20 01:22:05.100202 sshd[4939]: Connection closed by 10.200.16.10 port 33796 Jan 20 01:22:05.100685 sshd-session[4936]: pam_unix(sshd:session): session closed for user core Jan 20 01:22:05.103684 systemd[1]: sshd@21-10.200.20.24:22-10.200.16.10:33796.service: Deactivated successfully. Jan 20 01:22:05.105543 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 01:22:05.106376 systemd-logind[1878]: Session 24 logged out. Waiting for processes to exit. Jan 20 01:22:05.108005 systemd-logind[1878]: Removed session 24. Jan 20 01:22:05.190505 systemd[1]: Started sshd@22-10.200.20.24:22-10.200.16.10:33810.service - OpenSSH per-connection server daemon (10.200.16.10:33810). Jan 20 01:22:05.676321 sshd[5080]: Accepted publickey for core from 10.200.16.10 port 33810 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:22:05.677427 sshd-session[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:22:05.680672 systemd-logind[1878]: New session 25 of user core. Jan 20 01:22:05.692522 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 20 01:22:06.124255 kubelet[3400]: E0120 01:22:06.124223 3400 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:22:06.530471 sshd[5083]: Connection closed by 10.200.16.10 port 33810 Jan 20 01:22:06.531598 sshd-session[5080]: pam_unix(sshd:session): session closed for user core Jan 20 01:22:06.536841 systemd[1]: sshd@22-10.200.20.24:22-10.200.16.10:33810.service: Deactivated successfully. Jan 20 01:22:06.537227 systemd-logind[1878]: Session 25 logged out. Waiting for processes to exit. Jan 20 01:22:06.539392 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 01:22:06.542843 kubelet[3400]: I0120 01:22:06.542817 3400 memory_manager.go:355] "RemoveStaleState removing state" podUID="6fe26685-4ccf-4ef5-870f-21cf5b7e5660" containerName="cilium-agent" Jan 20 01:22:06.542843 kubelet[3400]: I0120 01:22:06.542837 3400 memory_manager.go:355] "RemoveStaleState removing state" podUID="a3879dac-8d0a-4c9f-9bb6-5ced8d49eb5a" containerName="cilium-operator" Jan 20 01:22:06.544701 systemd-logind[1878]: Removed session 25. Jan 20 01:22:06.550612 systemd[1]: Created slice kubepods-burstable-podcf7e8b7b_e529_4ae8_8561_6250a2ef8083.slice - libcontainer container kubepods-burstable-podcf7e8b7b_e529_4ae8_8561_6250a2ef8083.slice. 
Jan 20 01:22:06.554467 kubelet[3400]: W0120 01:22:06.554443 3400 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4459.2.2-n-4dd77badda" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459.2.2-n-4dd77badda' and this object Jan 20 01:22:06.554539 kubelet[3400]: E0120 01:22:06.554484 3400 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4459.2.2-n-4dd77badda\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.2.2-n-4dd77badda' and this object" logger="UnhandledError" Jan 20 01:22:06.554566 kubelet[3400]: I0120 01:22:06.554532 3400 status_manager.go:890] "Failed to get status for pod" podUID="cf7e8b7b-e529-4ae8-8561-6250a2ef8083" pod="kube-system/cilium-f4nkx" err="pods \"cilium-f4nkx\" is forbidden: User \"system:node:ci-4459.2.2-n-4dd77badda\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.2.2-n-4dd77badda' and this object" Jan 20 01:22:06.554566 kubelet[3400]: W0120 01:22:06.554558 3400 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4459.2.2-n-4dd77badda" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459.2.2-n-4dd77badda' and this object Jan 20 01:22:06.554597 kubelet[3400]: E0120 01:22:06.554567 3400 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User 
\"system:node:ci-4459.2.2-n-4dd77badda\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.2.2-n-4dd77badda' and this object" logger="UnhandledError" Jan 20 01:22:06.554597 kubelet[3400]: W0120 01:22:06.554588 3400 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4459.2.2-n-4dd77badda" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459.2.2-n-4dd77badda' and this object Jan 20 01:22:06.554651 kubelet[3400]: E0120 01:22:06.554594 3400 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4459.2.2-n-4dd77badda\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.2.2-n-4dd77badda' and this object" logger="UnhandledError" Jan 20 01:22:06.554651 kubelet[3400]: W0120 01:22:06.554622 3400 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4459.2.2-n-4dd77badda" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459.2.2-n-4dd77badda' and this object Jan 20 01:22:06.554651 kubelet[3400]: E0120 01:22:06.554630 3400 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4459.2.2-n-4dd77badda\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.2.2-n-4dd77badda' and this object" logger="UnhandledError" 
Jan 20 01:22:06.610368 kubelet[3400]: I0120 01:22:06.610337 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf7e8b7b-e529-4ae8-8561-6250a2ef8083-xtables-lock\") pod \"cilium-f4nkx\" (UID: \"cf7e8b7b-e529-4ae8-8561-6250a2ef8083\") " pod="kube-system/cilium-f4nkx" Jan 20 01:22:06.610368 kubelet[3400]: I0120 01:22:06.610368 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cf7e8b7b-e529-4ae8-8561-6250a2ef8083-cilium-ipsec-secrets\") pod \"cilium-f4nkx\" (UID: \"cf7e8b7b-e529-4ae8-8561-6250a2ef8083\") " pod="kube-system/cilium-f4nkx" Jan 20 01:22:06.610368 kubelet[3400]: I0120 01:22:06.610380 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cf7e8b7b-e529-4ae8-8561-6250a2ef8083-hubble-tls\") pod \"cilium-f4nkx\" (UID: \"cf7e8b7b-e529-4ae8-8561-6250a2ef8083\") " pod="kube-system/cilium-f4nkx" Jan 20 01:22:06.610368 kubelet[3400]: I0120 01:22:06.610404 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cf7e8b7b-e529-4ae8-8561-6250a2ef8083-cilium-run\") pod \"cilium-f4nkx\" (UID: \"cf7e8b7b-e529-4ae8-8561-6250a2ef8083\") " pod="kube-system/cilium-f4nkx" Jan 20 01:22:06.610558 kubelet[3400]: I0120 01:22:06.610429 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf7e8b7b-e529-4ae8-8561-6250a2ef8083-lib-modules\") pod \"cilium-f4nkx\" (UID: \"cf7e8b7b-e529-4ae8-8561-6250a2ef8083\") " pod="kube-system/cilium-f4nkx" Jan 20 01:22:06.610558 kubelet[3400]: I0120 01:22:06.610440 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cf7e8b7b-e529-4ae8-8561-6250a2ef8083-host-proc-sys-net\") pod \"cilium-f4nkx\" (UID: \"cf7e8b7b-e529-4ae8-8561-6250a2ef8083\") " pod="kube-system/cilium-f4nkx" Jan 20 01:22:06.610558 kubelet[3400]: I0120 01:22:06.610452 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf7e8b7b-e529-4ae8-8561-6250a2ef8083-etc-cni-netd\") pod \"cilium-f4nkx\" (UID: \"cf7e8b7b-e529-4ae8-8561-6250a2ef8083\") " pod="kube-system/cilium-f4nkx" Jan 20 01:22:06.610558 kubelet[3400]: I0120 01:22:06.610462 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf7e8b7b-e529-4ae8-8561-6250a2ef8083-cilium-config-path\") pod \"cilium-f4nkx\" (UID: \"cf7e8b7b-e529-4ae8-8561-6250a2ef8083\") " pod="kube-system/cilium-f4nkx" Jan 20 01:22:06.610558 kubelet[3400]: I0120 01:22:06.610473 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf5mv\" (UniqueName: \"kubernetes.io/projected/cf7e8b7b-e529-4ae8-8561-6250a2ef8083-kube-api-access-bf5mv\") pod \"cilium-f4nkx\" (UID: \"cf7e8b7b-e529-4ae8-8561-6250a2ef8083\") " pod="kube-system/cilium-f4nkx" Jan 20 01:22:06.610558 kubelet[3400]: I0120 01:22:06.610484 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cf7e8b7b-e529-4ae8-8561-6250a2ef8083-cni-path\") pod \"cilium-f4nkx\" (UID: \"cf7e8b7b-e529-4ae8-8561-6250a2ef8083\") " pod="kube-system/cilium-f4nkx" Jan 20 01:22:06.610643 kubelet[3400]: I0120 01:22:06.610494 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/cf7e8b7b-e529-4ae8-8561-6250a2ef8083-clustermesh-secrets\") pod \"cilium-f4nkx\" (UID: \"cf7e8b7b-e529-4ae8-8561-6250a2ef8083\") " pod="kube-system/cilium-f4nkx" Jan 20 01:22:06.610643 kubelet[3400]: I0120 01:22:06.610503 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cf7e8b7b-e529-4ae8-8561-6250a2ef8083-cilium-cgroup\") pod \"cilium-f4nkx\" (UID: \"cf7e8b7b-e529-4ae8-8561-6250a2ef8083\") " pod="kube-system/cilium-f4nkx" Jan 20 01:22:06.610643 kubelet[3400]: I0120 01:22:06.610514 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cf7e8b7b-e529-4ae8-8561-6250a2ef8083-hostproc\") pod \"cilium-f4nkx\" (UID: \"cf7e8b7b-e529-4ae8-8561-6250a2ef8083\") " pod="kube-system/cilium-f4nkx" Jan 20 01:22:06.610643 kubelet[3400]: I0120 01:22:06.610524 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cf7e8b7b-e529-4ae8-8561-6250a2ef8083-host-proc-sys-kernel\") pod \"cilium-f4nkx\" (UID: \"cf7e8b7b-e529-4ae8-8561-6250a2ef8083\") " pod="kube-system/cilium-f4nkx" Jan 20 01:22:06.610643 kubelet[3400]: I0120 01:22:06.610534 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cf7e8b7b-e529-4ae8-8561-6250a2ef8083-bpf-maps\") pod \"cilium-f4nkx\" (UID: \"cf7e8b7b-e529-4ae8-8561-6250a2ef8083\") " pod="kube-system/cilium-f4nkx" Jan 20 01:22:06.619012 systemd[1]: Started sshd@23-10.200.20.24:22-10.200.16.10:33824.service - OpenSSH per-connection server daemon (10.200.16.10:33824). 
Jan 20 01:22:07.071872 sshd[5093]: Accepted publickey for core from 10.200.16.10 port 33824 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg
Jan 20 01:22:07.072924 sshd-session[5093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:22:07.077025 systemd-logind[1878]: New session 26 of user core.
Jan 20 01:22:07.088557 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 20 01:22:07.400712 sshd[5097]: Connection closed by 10.200.16.10 port 33824
Jan 20 01:22:07.401667 sshd-session[5093]: pam_unix(sshd:session): session closed for user core
Jan 20 01:22:07.406247 systemd-logind[1878]: Session 26 logged out. Waiting for processes to exit.
Jan 20 01:22:07.406794 systemd[1]: sshd@23-10.200.20.24:22-10.200.16.10:33824.service: Deactivated successfully.
Jan 20 01:22:07.408326 systemd[1]: session-26.scope: Deactivated successfully.
Jan 20 01:22:07.409802 systemd-logind[1878]: Removed session 26.
Jan 20 01:22:07.482607 systemd[1]: Started sshd@24-10.200.20.24:22-10.200.16.10:33832.service - OpenSSH per-connection server daemon (10.200.16.10:33832).
Jan 20 01:22:07.711977 kubelet[3400]: E0120 01:22:07.711883 3400 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Jan 20 01:22:07.711977 kubelet[3400]: E0120 01:22:07.711910 3400 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-f4nkx: failed to sync secret cache: timed out waiting for the condition
Jan 20 01:22:07.713197 kubelet[3400]: E0120 01:22:07.712454 3400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf7e8b7b-e529-4ae8-8561-6250a2ef8083-hubble-tls podName:cf7e8b7b-e529-4ae8-8561-6250a2ef8083 nodeName:}" failed. No retries permitted until 2026-01-20 01:22:08.211951539 +0000 UTC m=+147.232158728 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/cf7e8b7b-e529-4ae8-8561-6250a2ef8083-hubble-tls") pod "cilium-f4nkx" (UID: "cf7e8b7b-e529-4ae8-8561-6250a2ef8083") : failed to sync secret cache: timed out waiting for the condition
Jan 20 01:22:07.931953 sshd[5105]: Accepted publickey for core from 10.200.16.10 port 33832 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg
Jan 20 01:22:07.933022 sshd-session[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:22:07.936494 systemd-logind[1878]: New session 27 of user core.
Jan 20 01:22:07.940549 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 20 01:22:08.353820 containerd[1904]: time="2026-01-20T01:22:08.353755791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f4nkx,Uid:cf7e8b7b-e529-4ae8-8561-6250a2ef8083,Namespace:kube-system,Attempt:0,}"
Jan 20 01:22:08.393703 containerd[1904]: time="2026-01-20T01:22:08.393646177Z" level=info msg="connecting to shim d43eb7f1a83b678a7b9508349d9141e982bc68a0b9683764c2a1f6f4ea2438b0" address="unix:///run/containerd/s/f287cb7166b44e45a9faf425d5bb08db475480e33d55452063eab009801a3e36" namespace=k8s.io protocol=ttrpc version=3
Jan 20 01:22:08.410539 systemd[1]: Started cri-containerd-d43eb7f1a83b678a7b9508349d9141e982bc68a0b9683764c2a1f6f4ea2438b0.scope - libcontainer container d43eb7f1a83b678a7b9508349d9141e982bc68a0b9683764c2a1f6f4ea2438b0.
Jan 20 01:22:08.433338 containerd[1904]: time="2026-01-20T01:22:08.433301676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f4nkx,Uid:cf7e8b7b-e529-4ae8-8561-6250a2ef8083,Namespace:kube-system,Attempt:0,} returns sandbox id \"d43eb7f1a83b678a7b9508349d9141e982bc68a0b9683764c2a1f6f4ea2438b0\""
Jan 20 01:22:08.436354 containerd[1904]: time="2026-01-20T01:22:08.436331111Z" level=info msg="CreateContainer within sandbox \"d43eb7f1a83b678a7b9508349d9141e982bc68a0b9683764c2a1f6f4ea2438b0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 20 01:22:08.460565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount277686383.mount: Deactivated successfully.
Jan 20 01:22:08.462063 containerd[1904]: time="2026-01-20T01:22:08.461527864Z" level=info msg="Container 99123d13dc809e373e8ea33298bc5376da2698dec7cbd0b475fd4cbfd64c8826: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:22:08.475188 containerd[1904]: time="2026-01-20T01:22:08.475161943Z" level=info msg="CreateContainer within sandbox \"d43eb7f1a83b678a7b9508349d9141e982bc68a0b9683764c2a1f6f4ea2438b0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"99123d13dc809e373e8ea33298bc5376da2698dec7cbd0b475fd4cbfd64c8826\""
Jan 20 01:22:08.476511 containerd[1904]: time="2026-01-20T01:22:08.476486850Z" level=info msg="StartContainer for \"99123d13dc809e373e8ea33298bc5376da2698dec7cbd0b475fd4cbfd64c8826\""
Jan 20 01:22:08.477335 containerd[1904]: time="2026-01-20T01:22:08.477315861Z" level=info msg="connecting to shim 99123d13dc809e373e8ea33298bc5376da2698dec7cbd0b475fd4cbfd64c8826" address="unix:///run/containerd/s/f287cb7166b44e45a9faf425d5bb08db475480e33d55452063eab009801a3e36" protocol=ttrpc version=3
Jan 20 01:22:08.496521 systemd[1]: Started cri-containerd-99123d13dc809e373e8ea33298bc5376da2698dec7cbd0b475fd4cbfd64c8826.scope - libcontainer container 99123d13dc809e373e8ea33298bc5376da2698dec7cbd0b475fd4cbfd64c8826.
Jan 20 01:22:08.519828 systemd[1]: cri-containerd-99123d13dc809e373e8ea33298bc5376da2698dec7cbd0b475fd4cbfd64c8826.scope: Deactivated successfully.
Jan 20 01:22:08.522006 containerd[1904]: time="2026-01-20T01:22:08.521985500Z" level=info msg="StartContainer for \"99123d13dc809e373e8ea33298bc5376da2698dec7cbd0b475fd4cbfd64c8826\" returns successfully"
Jan 20 01:22:08.525631 containerd[1904]: time="2026-01-20T01:22:08.525562858Z" level=info msg="received container exit event container_id:\"99123d13dc809e373e8ea33298bc5376da2698dec7cbd0b475fd4cbfd64c8826\" id:\"99123d13dc809e373e8ea33298bc5376da2698dec7cbd0b475fd4cbfd64c8826\" pid:5176 exited_at:{seconds:1768872128 nanos:524164908}"
Jan 20 01:22:09.346449 containerd[1904]: time="2026-01-20T01:22:09.343870947Z" level=info msg="CreateContainer within sandbox \"d43eb7f1a83b678a7b9508349d9141e982bc68a0b9683764c2a1f6f4ea2438b0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 20 01:22:09.367978 containerd[1904]: time="2026-01-20T01:22:09.367940271Z" level=info msg="Container 32834fa395a70b85c6f57efc7e561769c7aaeb763bec9370853243ab1e20aa83: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:22:09.386515 containerd[1904]: time="2026-01-20T01:22:09.386477567Z" level=info msg="CreateContainer within sandbox \"d43eb7f1a83b678a7b9508349d9141e982bc68a0b9683764c2a1f6f4ea2438b0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"32834fa395a70b85c6f57efc7e561769c7aaeb763bec9370853243ab1e20aa83\""
Jan 20 01:22:09.387176 containerd[1904]: time="2026-01-20T01:22:09.387112419Z" level=info msg="StartContainer for \"32834fa395a70b85c6f57efc7e561769c7aaeb763bec9370853243ab1e20aa83\""
Jan 20 01:22:09.388096 containerd[1904]: time="2026-01-20T01:22:09.388042610Z" level=info msg="connecting to shim 32834fa395a70b85c6f57efc7e561769c7aaeb763bec9370853243ab1e20aa83" address="unix:///run/containerd/s/f287cb7166b44e45a9faf425d5bb08db475480e33d55452063eab009801a3e36" protocol=ttrpc version=3
Jan 20 01:22:09.412530 systemd[1]: Started cri-containerd-32834fa395a70b85c6f57efc7e561769c7aaeb763bec9370853243ab1e20aa83.scope - libcontainer container 32834fa395a70b85c6f57efc7e561769c7aaeb763bec9370853243ab1e20aa83.
Jan 20 01:22:09.437432 containerd[1904]: time="2026-01-20T01:22:09.437378266Z" level=info msg="StartContainer for \"32834fa395a70b85c6f57efc7e561769c7aaeb763bec9370853243ab1e20aa83\" returns successfully"
Jan 20 01:22:09.439436 systemd[1]: cri-containerd-32834fa395a70b85c6f57efc7e561769c7aaeb763bec9370853243ab1e20aa83.scope: Deactivated successfully.
Jan 20 01:22:09.441447 containerd[1904]: time="2026-01-20T01:22:09.440119227Z" level=info msg="received container exit event container_id:\"32834fa395a70b85c6f57efc7e561769c7aaeb763bec9370853243ab1e20aa83\" id:\"32834fa395a70b85c6f57efc7e561769c7aaeb763bec9370853243ab1e20aa83\" pid:5221 exited_at:{seconds:1768872129 nanos:439970031}"
Jan 20 01:22:10.225035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32834fa395a70b85c6f57efc7e561769c7aaeb763bec9370853243ab1e20aa83-rootfs.mount: Deactivated successfully.
Jan 20 01:22:10.347826 containerd[1904]: time="2026-01-20T01:22:10.347567468Z" level=info msg="CreateContainer within sandbox \"d43eb7f1a83b678a7b9508349d9141e982bc68a0b9683764c2a1f6f4ea2438b0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 20 01:22:10.365933 containerd[1904]: time="2026-01-20T01:22:10.365909564Z" level=info msg="Container 5f2f3f313272a56a2d483881fc04adde205968b531148e32225247c5d059b301: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:22:10.385255 containerd[1904]: time="2026-01-20T01:22:10.385228493Z" level=info msg="CreateContainer within sandbox \"d43eb7f1a83b678a7b9508349d9141e982bc68a0b9683764c2a1f6f4ea2438b0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5f2f3f313272a56a2d483881fc04adde205968b531148e32225247c5d059b301\""
Jan 20 01:22:10.386681 containerd[1904]: time="2026-01-20T01:22:10.386661844Z" level=info msg="StartContainer for \"5f2f3f313272a56a2d483881fc04adde205968b531148e32225247c5d059b301\""
Jan 20 01:22:10.388019 containerd[1904]: time="2026-01-20T01:22:10.387963943Z" level=info msg="connecting to shim 5f2f3f313272a56a2d483881fc04adde205968b531148e32225247c5d059b301" address="unix:///run/containerd/s/f287cb7166b44e45a9faf425d5bb08db475480e33d55452063eab009801a3e36" protocol=ttrpc version=3
Jan 20 01:22:10.406536 systemd[1]: Started cri-containerd-5f2f3f313272a56a2d483881fc04adde205968b531148e32225247c5d059b301.scope - libcontainer container 5f2f3f313272a56a2d483881fc04adde205968b531148e32225247c5d059b301.
Jan 20 01:22:10.458042 systemd[1]: cri-containerd-5f2f3f313272a56a2d483881fc04adde205968b531148e32225247c5d059b301.scope: Deactivated successfully.
Jan 20 01:22:10.459849 containerd[1904]: time="2026-01-20T01:22:10.459697268Z" level=info msg="received container exit event container_id:\"5f2f3f313272a56a2d483881fc04adde205968b531148e32225247c5d059b301\" id:\"5f2f3f313272a56a2d483881fc04adde205968b531148e32225247c5d059b301\" pid:5265 exited_at:{seconds:1768872130 nanos:459548919}"
Jan 20 01:22:10.465362 containerd[1904]: time="2026-01-20T01:22:10.465338309Z" level=info msg="StartContainer for \"5f2f3f313272a56a2d483881fc04adde205968b531148e32225247c5d059b301\" returns successfully"
Jan 20 01:22:10.474734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f2f3f313272a56a2d483881fc04adde205968b531148e32225247c5d059b301-rootfs.mount: Deactivated successfully.
Jan 20 01:22:11.125074 kubelet[3400]: E0120 01:22:11.125034 3400 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 01:22:11.351728 containerd[1904]: time="2026-01-20T01:22:11.351677418Z" level=info msg="CreateContainer within sandbox \"d43eb7f1a83b678a7b9508349d9141e982bc68a0b9683764c2a1f6f4ea2438b0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 20 01:22:11.372858 containerd[1904]: time="2026-01-20T01:22:11.372477667Z" level=info msg="Container ba9573151caa1298cba91b3e1b8823bd629cefc1d906c3ea14480326bd1a4763: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:22:11.375134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1082772100.mount: Deactivated successfully.
Jan 20 01:22:11.387174 containerd[1904]: time="2026-01-20T01:22:11.387143060Z" level=info msg="CreateContainer within sandbox \"d43eb7f1a83b678a7b9508349d9141e982bc68a0b9683764c2a1f6f4ea2438b0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ba9573151caa1298cba91b3e1b8823bd629cefc1d906c3ea14480326bd1a4763\""
Jan 20 01:22:11.388096 containerd[1904]: time="2026-01-20T01:22:11.388062810Z" level=info msg="StartContainer for \"ba9573151caa1298cba91b3e1b8823bd629cefc1d906c3ea14480326bd1a4763\""
Jan 20 01:22:11.388943 containerd[1904]: time="2026-01-20T01:22:11.388920686Z" level=info msg="connecting to shim ba9573151caa1298cba91b3e1b8823bd629cefc1d906c3ea14480326bd1a4763" address="unix:///run/containerd/s/f287cb7166b44e45a9faf425d5bb08db475480e33d55452063eab009801a3e36" protocol=ttrpc version=3
Jan 20 01:22:11.409533 systemd[1]: Started cri-containerd-ba9573151caa1298cba91b3e1b8823bd629cefc1d906c3ea14480326bd1a4763.scope - libcontainer container ba9573151caa1298cba91b3e1b8823bd629cefc1d906c3ea14480326bd1a4763.
Jan 20 01:22:11.427809 systemd[1]: cri-containerd-ba9573151caa1298cba91b3e1b8823bd629cefc1d906c3ea14480326bd1a4763.scope: Deactivated successfully.
Jan 20 01:22:11.432801 containerd[1904]: time="2026-01-20T01:22:11.432772762Z" level=info msg="received container exit event container_id:\"ba9573151caa1298cba91b3e1b8823bd629cefc1d906c3ea14480326bd1a4763\" id:\"ba9573151caa1298cba91b3e1b8823bd629cefc1d906c3ea14480326bd1a4763\" pid:5305 exited_at:{seconds:1768872131 nanos:428753927}"
Jan 20 01:22:11.439743 containerd[1904]: time="2026-01-20T01:22:11.439725918Z" level=info msg="StartContainer for \"ba9573151caa1298cba91b3e1b8823bd629cefc1d906c3ea14480326bd1a4763\" returns successfully"
Jan 20 01:22:11.449743 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba9573151caa1298cba91b3e1b8823bd629cefc1d906c3ea14480326bd1a4763-rootfs.mount: Deactivated successfully.
Jan 20 01:22:12.355659 containerd[1904]: time="2026-01-20T01:22:12.355613778Z" level=info msg="CreateContainer within sandbox \"d43eb7f1a83b678a7b9508349d9141e982bc68a0b9683764c2a1f6f4ea2438b0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 20 01:22:12.383505 containerd[1904]: time="2026-01-20T01:22:12.383465555Z" level=info msg="Container 7805927e074fad3e0498fbce9ecb4858b6a9984ae4eb5c21e3c20763f94da843: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:22:12.399670 containerd[1904]: time="2026-01-20T01:22:12.399639060Z" level=info msg="CreateContainer within sandbox \"d43eb7f1a83b678a7b9508349d9141e982bc68a0b9683764c2a1f6f4ea2438b0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7805927e074fad3e0498fbce9ecb4858b6a9984ae4eb5c21e3c20763f94da843\""
Jan 20 01:22:12.400343 containerd[1904]: time="2026-01-20T01:22:12.400323123Z" level=info msg="StartContainer for \"7805927e074fad3e0498fbce9ecb4858b6a9984ae4eb5c21e3c20763f94da843\""
Jan 20 01:22:12.401003 containerd[1904]: time="2026-01-20T01:22:12.400980232Z" level=info msg="connecting to shim 7805927e074fad3e0498fbce9ecb4858b6a9984ae4eb5c21e3c20763f94da843" address="unix:///run/containerd/s/f287cb7166b44e45a9faf425d5bb08db475480e33d55452063eab009801a3e36" protocol=ttrpc version=3
Jan 20 01:22:12.417526 systemd[1]: Started cri-containerd-7805927e074fad3e0498fbce9ecb4858b6a9984ae4eb5c21e3c20763f94da843.scope - libcontainer container 7805927e074fad3e0498fbce9ecb4858b6a9984ae4eb5c21e3c20763f94da843.
Jan 20 01:22:12.448769 containerd[1904]: time="2026-01-20T01:22:12.448740701Z" level=info msg="StartContainer for \"7805927e074fad3e0498fbce9ecb4858b6a9984ae4eb5c21e3c20763f94da843\" returns successfully"
Jan 20 01:22:12.742429 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 20 01:22:14.407391 kubelet[3400]: E0120 01:22:14.407355 3400 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:54546->127.0.0.1:43785: write tcp 127.0.0.1:54546->127.0.0.1:43785: write: broken pipe
Jan 20 01:22:14.800778 kubelet[3400]: I0120 01:22:14.800264 3400 setters.go:602] "Node became not ready" node="ci-4459.2.2-n-4dd77badda" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T01:22:14Z","lastTransitionTime":"2026-01-20T01:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 20 01:22:15.090626 systemd-networkd[1474]: lxc_health: Link UP
Jan 20 01:22:15.091855 systemd-networkd[1474]: lxc_health: Gained carrier
Jan 20 01:22:16.177569 systemd-networkd[1474]: lxc_health: Gained IPv6LL
Jan 20 01:22:16.372702 kubelet[3400]: I0120 01:22:16.372640 3400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f4nkx" podStartSLOduration=10.372624774 podStartE2EDuration="10.372624774s" podCreationTimestamp="2026-01-20 01:22:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:22:13.372872698 +0000 UTC m=+152.393079879" watchObservedRunningTime="2026-01-20 01:22:16.372624774 +0000 UTC m=+155.392831955"
Jan 20 01:22:20.741448 sshd[5109]: Connection closed by 10.200.16.10 port 33832
Jan 20 01:22:20.742038 sshd-session[5105]: pam_unix(sshd:session): session closed for user core
Jan 20 01:22:20.745645 systemd-logind[1878]: Session 27 logged out. Waiting for processes to exit.
Jan 20 01:22:20.746156 systemd[1]: sshd@24-10.200.20.24:22-10.200.16.10:33832.service: Deactivated successfully.
Jan 20 01:22:20.748114 systemd[1]: session-27.scope: Deactivated successfully.
Jan 20 01:22:20.749407 systemd-logind[1878]: Removed session 27.