Jan 23 00:06:02.080479 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Jan 23 00:06:02.080499 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Jan 22 22:21:53 -00 2026
Jan 23 00:06:02.080505 kernel: KASLR enabled
Jan 23 00:06:02.080509 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 23 00:06:02.080513 kernel: printk: legacy bootconsole [pl11] enabled
Jan 23 00:06:02.080518 kernel: efi: EFI v2.7 by EDK II
Jan 23 00:06:02.080523 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db83598
Jan 23 00:06:02.080527 kernel: random: crng init done
Jan 23 00:06:02.080531 kernel: secureboot: Secure boot disabled
Jan 23 00:06:02.080535 kernel: ACPI: Early table checksum verification disabled
Jan 23 00:06:02.080539 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Jan 23 00:06:02.080543 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 00:06:02.080547 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 00:06:02.080551 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 23 00:06:02.080557 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 00:06:02.080561 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 00:06:02.080566 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 00:06:02.080570 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 00:06:02.080574 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 00:06:02.080579 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 00:06:02.080584 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 23 00:06:02.080588 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 00:06:02.080592 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 23 00:06:02.080596 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 23 00:06:02.080613 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 23 00:06:02.080617 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Jan 23 00:06:02.080621 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Jan 23 00:06:02.080625 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 23 00:06:02.080630 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 23 00:06:02.080634 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 23 00:06:02.080639 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 23 00:06:02.080643 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 23 00:06:02.080648 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 23 00:06:02.080652 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 23 00:06:02.080656 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 23 00:06:02.080660 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 23 00:06:02.080664 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Jan 23 00:06:02.080668 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Jan 23 00:06:02.080672 kernel: Zone ranges:
Jan 23 00:06:02.080677 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 23 00:06:02.080684 kernel: DMA32 empty
Jan 23 00:06:02.080688 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 23 00:06:02.080692 kernel: Device empty
Jan 23 00:06:02.080697 kernel: Movable zone start for each node
Jan 23 00:06:02.080701 kernel: Early memory node ranges
Jan 23 00:06:02.080705 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 23 00:06:02.080710 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Jan 23 00:06:02.080715 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Jan 23 00:06:02.080719 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Jan 23 00:06:02.080723 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Jan 23 00:06:02.080728 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Jan 23 00:06:02.080732 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 23 00:06:02.080736 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 23 00:06:02.080740 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 23 00:06:02.080745 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Jan 23 00:06:02.080749 kernel: psci: probing for conduit method from ACPI.
Jan 23 00:06:02.080753 kernel: psci: PSCIv1.3 detected in firmware.
Jan 23 00:06:02.080758 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 00:06:02.080763 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 23 00:06:02.080767 kernel: psci: SMC Calling Convention v1.4
Jan 23 00:06:02.080772 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 23 00:06:02.080776 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 23 00:06:02.080780 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 23 00:06:02.080785 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 23 00:06:02.080789 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 00:06:02.080794 kernel: Detected PIPT I-cache on CPU0
Jan 23 00:06:02.080798 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Jan 23 00:06:02.080803 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 00:06:02.080807 kernel: CPU features: detected: Spectre-v4
Jan 23 00:06:02.080811 kernel: CPU features: detected: Spectre-BHB
Jan 23 00:06:02.080816 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 23 00:06:02.080821 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 23 00:06:02.080825 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Jan 23 00:06:02.080829 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 23 00:06:02.080834 kernel: alternatives: applying boot alternatives
Jan 23 00:06:02.080839 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=38aa0560e146398cb8c3378a56d449784f1c7652139d7b61279d764fcc4c793a
Jan 23 00:06:02.080844 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 00:06:02.080848 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 00:06:02.080852 kernel: Fallback order for Node 0: 0
Jan 23 00:06:02.080857 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Jan 23 00:06:02.080862 kernel: Policy zone: Normal
Jan 23 00:06:02.080866 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 00:06:02.080870 kernel: software IO TLB: area num 2.
Jan 23 00:06:02.080875 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB)
Jan 23 00:06:02.080879 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 00:06:02.080883 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 00:06:02.080889 kernel: rcu: RCU event tracing is enabled.
Jan 23 00:06:02.080893 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 00:06:02.080897 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 00:06:02.080902 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 00:06:02.080906 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 00:06:02.080911 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 00:06:02.080916 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 00:06:02.080920 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 00:06:02.080925 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 00:06:02.080929 kernel: GICv3: 960 SPIs implemented
Jan 23 00:06:02.080933 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 00:06:02.080938 kernel: Root IRQ handler: gic_handle_irq
Jan 23 00:06:02.080942 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jan 23 00:06:02.080946 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Jan 23 00:06:02.080951 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 23 00:06:02.080955 kernel: ITS: No ITS available, not enabling LPIs
Jan 23 00:06:02.080959 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 00:06:02.080965 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Jan 23 00:06:02.080969 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 00:06:02.080974 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Jan 23 00:06:02.080978 kernel: Console: colour dummy device 80x25
Jan 23 00:06:02.080983 kernel: printk: legacy console [tty1] enabled
Jan 23 00:06:02.080988 kernel: ACPI: Core revision 20240827
Jan 23 00:06:02.080992 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Jan 23 00:06:02.080997 kernel: pid_max: default: 32768 minimum: 301
Jan 23 00:06:02.081001 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 00:06:02.081006 kernel: landlock: Up and running.
Jan 23 00:06:02.081011 kernel: SELinux: Initializing.
Jan 23 00:06:02.081015 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 00:06:02.081020 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 00:06:02.081024 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Jan 23 00:06:02.081029 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0
Jan 23 00:06:02.081037 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 23 00:06:02.081042 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 00:06:02.081047 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 00:06:02.081052 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 00:06:02.081057 kernel: Remapping and enabling EFI services.
Jan 23 00:06:02.081061 kernel: smp: Bringing up secondary CPUs ...
Jan 23 00:06:02.081066 kernel: Detected PIPT I-cache on CPU1
Jan 23 00:06:02.081072 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 23 00:06:02.081076 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Jan 23 00:06:02.081081 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 00:06:02.081086 kernel: SMP: Total of 2 processors activated.
Jan 23 00:06:02.081090 kernel: CPU: All CPU(s) started at EL1
Jan 23 00:06:02.081096 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 00:06:02.081101 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 23 00:06:02.081106 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 23 00:06:02.081111 kernel: CPU features: detected: Common not Private translations
Jan 23 00:06:02.081116 kernel: CPU features: detected: CRC32 instructions
Jan 23 00:06:02.081120 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Jan 23 00:06:02.081125 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 23 00:06:02.081130 kernel: CPU features: detected: LSE atomic instructions
Jan 23 00:06:02.081134 kernel: CPU features: detected: Privileged Access Never
Jan 23 00:06:02.081140 kernel: CPU features: detected: Speculation barrier (SB)
Jan 23 00:06:02.081145 kernel: CPU features: detected: TLB range maintenance instructions
Jan 23 00:06:02.081150 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 23 00:06:02.081154 kernel: CPU features: detected: Scalable Vector Extension
Jan 23 00:06:02.081159 kernel: alternatives: applying system-wide alternatives
Jan 23 00:06:02.081164 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jan 23 00:06:02.081168 kernel: SVE: maximum available vector length 16 bytes per vector
Jan 23 00:06:02.081173 kernel: SVE: default vector length 16 bytes per vector
Jan 23 00:06:02.081178 kernel: Memory: 3952828K/4194160K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved)
Jan 23 00:06:02.081184 kernel: devtmpfs: initialized
Jan 23 00:06:02.081189 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 00:06:02.081193 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 00:06:02.081198 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 23 00:06:02.081203 kernel: 0 pages in range for non-PLT usage
Jan 23 00:06:02.081208 kernel: 508400 pages in range for PLT usage
Jan 23 00:06:02.081212 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 00:06:02.081217 kernel: SMBIOS 3.1.0 present.
Jan 23 00:06:02.081223 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Jan 23 00:06:02.081227 kernel: DMI: Memory slots populated: 2/2
Jan 23 00:06:02.081232 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 00:06:02.081237 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 00:06:02.081241 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 00:06:02.081246 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 00:06:02.081251 kernel: audit: initializing netlink subsys (disabled)
Jan 23 00:06:02.081256 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Jan 23 00:06:02.081260 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 00:06:02.081266 kernel: cpuidle: using governor menu
Jan 23 00:06:02.081270 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 00:06:02.081275 kernel: ASID allocator initialised with 32768 entries
Jan 23 00:06:02.081280 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 00:06:02.081285 kernel: Serial: AMBA PL011 UART driver
Jan 23 00:06:02.081289 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 00:06:02.081294 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 00:06:02.081299 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 00:06:02.081303 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 00:06:02.081309 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 00:06:02.081314 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 00:06:02.081318 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 00:06:02.081323 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 00:06:02.081328 kernel: ACPI: Added _OSI(Module Device)
Jan 23 00:06:02.081333 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 00:06:02.081337 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 00:06:02.081342 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 00:06:02.081347 kernel: ACPI: Interpreter enabled
Jan 23 00:06:02.081352 kernel: ACPI: Using GIC for interrupt routing
Jan 23 00:06:02.081357 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 23 00:06:02.081362 kernel: printk: legacy console [ttyAMA0] enabled
Jan 23 00:06:02.081367 kernel: printk: legacy bootconsole [pl11] disabled
Jan 23 00:06:02.081371 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 23 00:06:02.081376 kernel: ACPI: CPU0 has been hot-added
Jan 23 00:06:02.081381 kernel: ACPI: CPU1 has been hot-added
Jan 23 00:06:02.081385 kernel: iommu: Default domain type: Translated
Jan 23 00:06:02.081390 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 23 00:06:02.081396 kernel: efivars: Registered efivars operations
Jan 23 00:06:02.081400 kernel: vgaarb: loaded
Jan 23 00:06:02.081405 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 23 00:06:02.081410 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 00:06:02.081414 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 00:06:02.081419 kernel: pnp: PnP ACPI init
Jan 23 00:06:02.081424 kernel: pnp: PnP ACPI: found 0 devices
Jan 23 00:06:02.081428 kernel: NET: Registered PF_INET protocol family
Jan 23 00:06:02.081433 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 00:06:02.081438 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 00:06:02.081444 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 00:06:02.081449 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 00:06:02.081454 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 00:06:02.081458 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 00:06:02.081463 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 00:06:02.081468 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 00:06:02.081473 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 00:06:02.081477 kernel: PCI: CLS 0 bytes, default 64
Jan 23 00:06:02.081482 kernel: kvm [1]: HYP mode not available
Jan 23 00:06:02.081487 kernel: Initialise system trusted keyrings
Jan 23 00:06:02.081492 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 00:06:02.081497 kernel: Key type asymmetric registered
Jan 23 00:06:02.081502 kernel: Asymmetric key parser 'x509' registered
Jan 23 00:06:02.081506 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jan 23 00:06:02.081511 kernel: io scheduler mq-deadline registered
Jan 23 00:06:02.081516 kernel: io scheduler kyber registered
Jan 23 00:06:02.081520 kernel: io scheduler bfq registered
Jan 23 00:06:02.081525 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 00:06:02.081531 kernel: thunder_xcv, ver 1.0
Jan 23 00:06:02.081535 kernel: thunder_bgx, ver 1.0
Jan 23 00:06:02.081540 kernel: nicpf, ver 1.0
Jan 23 00:06:02.081544 kernel: nicvf, ver 1.0
Jan 23 00:06:02.081671 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 23 00:06:02.081724 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T00:06:01 UTC (1769126761)
Jan 23 00:06:02.081730 kernel: efifb: probing for efifb
Jan 23 00:06:02.081736 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 23 00:06:02.081741 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 23 00:06:02.081746 kernel: efifb: scrolling: redraw
Jan 23 00:06:02.081751 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 23 00:06:02.081756 kernel: Console: switching to colour frame buffer device 128x48
Jan 23 00:06:02.081760 kernel: fb0: EFI VGA frame buffer device
Jan 23 00:06:02.081765 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 23 00:06:02.081770 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 00:06:02.081775 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jan 23 00:06:02.081781 kernel: watchdog: NMI not fully supported
Jan 23 00:06:02.081785 kernel: NET: Registered PF_INET6 protocol family
Jan 23 00:06:02.081790 kernel: watchdog: Hard watchdog permanently disabled
Jan 23 00:06:02.081795 kernel: Segment Routing with IPv6
Jan 23 00:06:02.081799 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 00:06:02.081804 kernel: NET: Registered PF_PACKET protocol family
Jan 23 00:06:02.081809 kernel: Key type dns_resolver registered
Jan 23 00:06:02.081813 kernel: registered taskstats version 1
Jan 23 00:06:02.081818 kernel: Loading compiled-in X.509 certificates
Jan 23 00:06:02.081823 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 380753d9165686712e58c1d21e00c0268e70f18f'
Jan 23 00:06:02.081829 kernel: Demotion targets for Node 0: null
Jan 23 00:06:02.081834 kernel: Key type .fscrypt registered
Jan 23 00:06:02.081838 kernel: Key type fscrypt-provisioning registered
Jan 23 00:06:02.081843 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 00:06:02.081848 kernel: ima: Allocated hash algorithm: sha1
Jan 23 00:06:02.081852 kernel: ima: No architecture policies found
Jan 23 00:06:02.081857 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 23 00:06:02.081862 kernel: clk: Disabling unused clocks
Jan 23 00:06:02.081866 kernel: PM: genpd: Disabling unused power domains
Jan 23 00:06:02.081872 kernel: Warning: unable to open an initial console.
Jan 23 00:06:02.081877 kernel: Freeing unused kernel memory: 39552K
Jan 23 00:06:02.081882 kernel: Run /init as init process
Jan 23 00:06:02.081886 kernel: with arguments:
Jan 23 00:06:02.081891 kernel: /init
Jan 23 00:06:02.081895 kernel: with environment:
Jan 23 00:06:02.081900 kernel: HOME=/
Jan 23 00:06:02.081904 kernel: TERM=linux
Jan 23 00:06:02.081910 systemd[1]: Successfully made /usr/ read-only.
Jan 23 00:06:02.081918 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 00:06:02.081924 systemd[1]: Detected virtualization microsoft.
Jan 23 00:06:02.081929 systemd[1]: Detected architecture arm64.
Jan 23 00:06:02.081934 systemd[1]: Running in initrd.
Jan 23 00:06:02.081939 systemd[1]: No hostname configured, using default hostname.
Jan 23 00:06:02.081944 systemd[1]: Hostname set to .
Jan 23 00:06:02.081949 systemd[1]: Initializing machine ID from random generator.
Jan 23 00:06:02.081955 systemd[1]: Queued start job for default target initrd.target.
Jan 23 00:06:02.081960 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 00:06:02.081966 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 00:06:02.081971 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 00:06:02.081977 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 00:06:02.081982 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 00:06:02.081988 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 00:06:02.081995 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 00:06:02.082000 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 00:06:02.082005 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 00:06:02.082010 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 00:06:02.082016 systemd[1]: Reached target paths.target - Path Units.
Jan 23 00:06:02.082021 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 00:06:02.082026 systemd[1]: Reached target swap.target - Swaps.
Jan 23 00:06:02.082031 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 00:06:02.082037 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 00:06:02.082042 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 00:06:02.082047 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 00:06:02.082052 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 00:06:02.082058 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 00:06:02.082063 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 00:06:02.082068 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 00:06:02.082073 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 00:06:02.082079 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 00:06:02.082084 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 00:06:02.082090 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 00:06:02.082095 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 00:06:02.082100 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 00:06:02.082105 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 00:06:02.082111 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 00:06:02.082128 systemd-journald[225]: Collecting audit messages is disabled.
Jan 23 00:06:02.082143 systemd-journald[225]: Journal started
Jan 23 00:06:02.082157 systemd-journald[225]: Runtime Journal (/run/log/journal/de01a5ed9e904c148ca83b7427ccd9dc) is 8M, max 78.3M, 70.3M free.
Jan 23 00:06:02.090649 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 00:06:02.096254 systemd-modules-load[227]: Inserted module 'overlay'
Jan 23 00:06:02.119722 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 00:06:02.119778 kernel: Bridge firewalling registered
Jan 23 00:06:02.119787 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 00:06:02.119892 systemd-modules-load[227]: Inserted module 'br_netfilter'
Jan 23 00:06:02.133502 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 00:06:02.138668 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 00:06:02.153805 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 00:06:02.157584 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 00:06:02.165565 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:06:02.175975 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 00:06:02.192791 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 00:06:02.208728 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 00:06:02.223513 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 00:06:02.231331 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 00:06:02.247640 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 00:06:02.254414 systemd-tmpfiles[251]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 00:06:02.254922 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 00:06:02.273407 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 00:06:02.287237 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 00:06:02.304466 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 00:06:02.315756 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 00:06:02.335294 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=38aa0560e146398cb8c3378a56d449784f1c7652139d7b61279d764fcc4c793a
Jan 23 00:06:02.367378 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 00:06:02.370311 systemd-resolved[263]: Positive Trust Anchors:
Jan 23 00:06:02.370320 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 00:06:02.370341 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 00:06:02.372404 systemd-resolved[263]: Defaulting to hostname 'linux'.
Jan 23 00:06:02.380108 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 00:06:02.394552 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 00:06:02.489617 kernel: SCSI subsystem initialized
Jan 23 00:06:02.494623 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 00:06:02.502625 kernel: iscsi: registered transport (tcp)
Jan 23 00:06:02.515994 kernel: iscsi: registered transport (qla4xxx)
Jan 23 00:06:02.516062 kernel: QLogic iSCSI HBA Driver
Jan 23 00:06:02.529693 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 00:06:02.549722 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 00:06:02.556497 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 00:06:02.605253 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 00:06:02.612765 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 00:06:02.677622 kernel: raid6: neonx8 gen() 18555 MB/s
Jan 23 00:06:02.694609 kernel: raid6: neonx4 gen() 18549 MB/s
Jan 23 00:06:02.713609 kernel: raid6: neonx2 gen() 17078 MB/s
Jan 23 00:06:02.733612 kernel: raid6: neonx1 gen() 15146 MB/s
Jan 23 00:06:02.752610 kernel: raid6: int64x8 gen() 10527 MB/s
Jan 23 00:06:02.771609 kernel: raid6: int64x4 gen() 10609 MB/s
Jan 23 00:06:02.791633 kernel: raid6: int64x2 gen() 8994 MB/s
Jan 23 00:06:02.813127 kernel: raid6: int64x1 gen() 7047 MB/s
Jan 23 00:06:02.813139 kernel: raid6: using algorithm neonx8 gen() 18555 MB/s
Jan 23 00:06:02.836693 kernel: raid6: .... xor() 14904 MB/s, rmw enabled
Jan 23 00:06:02.836769 kernel: raid6: using neon recovery algorithm
Jan 23 00:06:02.844771 kernel: xor: measuring software checksum speed
Jan 23 00:06:02.844858 kernel: 8regs : 28599 MB/sec
Jan 23 00:06:02.847768 kernel: 32regs : 28816 MB/sec
Jan 23 00:06:02.850429 kernel: arm64_neon : 37666 MB/sec
Jan 23 00:06:02.853563 kernel: xor: using function: arm64_neon (37666 MB/sec)
Jan 23 00:06:02.891619 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 00:06:02.897849 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 00:06:02.907197 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 00:06:02.934624 systemd-udevd[475]: Using default interface naming scheme 'v255'.
Jan 23 00:06:02.938779 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 00:06:02.951307 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 00:06:02.987707 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation
Jan 23 00:06:03.008645 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 00:06:03.018752 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 00:06:03.067965 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 00:06:03.080422 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 00:06:03.153628 kernel: hv_vmbus: Vmbus version:5.3
Jan 23 00:06:03.169827 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 00:06:03.211236 kernel: hv_vmbus: registering driver hid_hyperv
Jan 23 00:06:03.211256 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 23 00:06:03.211263 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 23 00:06:03.211270 kernel: hv_vmbus: registering driver hv_netvsc
Jan 23 00:06:03.211277 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 23 00:06:03.211292 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 23 00:06:03.211300 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 23 00:06:03.211436 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 23 00:06:03.211444 kernel: hv_vmbus: registering driver hv_storvsc
Jan 23 00:06:03.169932 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:06:03.234086 kernel: scsi host0: storvsc_host_t
Jan 23 00:06:03.234241 kernel: PTP clock support registered
Jan 23 00:06:03.234249 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 23 00:06:03.234266 kernel: scsi host1: storvsc_host_t
Jan 23 00:06:03.211096 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 00:06:03.253972 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jan 23 00:06:03.216971 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 00:06:03.279464 kernel: hv_utils: Registering HyperV Utility Driver
Jan 23 00:06:03.279484 kernel: hv_vmbus: registering driver hv_utils
Jan 23 00:06:03.279499 kernel: hv_utils: Heartbeat IC version 3.0
Jan 23 00:06:03.279505 kernel: hv_utils: Shutdown IC version 3.2
Jan 23 00:06:03.246053 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 00:06:03.397483 kernel: hv_utils: TimeSync IC version 4.0
Jan 23 00:06:03.397892 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 23 00:06:03.398063 kernel: hv_netvsc 7ced8dd0-b318-7ced-8dd0-b3187ced8dd0 eth0: VF slot 1 added
Jan 23 00:06:03.253890 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 00:06:03.253990 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:06:03.413814 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 23 00:06:03.414006 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 23 00:06:03.414073 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 23 00:06:03.414142 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 23 00:06:03.264945 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 00:06:03.430525 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#69 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jan 23 00:06:03.383793 systemd-resolved[263]: Clock change detected. Flushing caches.
Jan 23 00:06:03.441720 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#76 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jan 23 00:06:03.442529 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:06:03.464574 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 00:06:03.464621 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 23 00:06:03.471668 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 23 00:06:03.471858 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 00:06:03.471868 kernel: hv_vmbus: registering driver hv_pci
Jan 23 00:06:03.474680 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 23 00:06:03.474836 kernel: hv_pci 1e704e20-8153-460c-afe6-2f2a85116cb5: PCI VMBus probing: Using version 0x10004
Jan 23 00:06:03.490493 kernel: hv_pci 1e704e20-8153-460c-afe6-2f2a85116cb5: PCI host bridge to bus 8153:00
Jan 23 00:06:03.490738 kernel: pci_bus 8153:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 23 00:06:03.490828 kernel: pci_bus 8153:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 23 00:06:03.506563 kernel: pci 8153:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Jan 23 00:06:03.506653 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#239 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 23 00:06:03.506817 kernel: pci 8153:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 23 00:06:03.519715 kernel: pci 8153:00:02.0: enabling Extended Tags
Jan 23 00:06:03.529738 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#223 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 23 00:06:03.529976 kernel: pci 8153:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8153:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Jan 23 00:06:03.551859 kernel: pci_bus 8153:00: busn_res: [bus 00-ff] end is updated to 00
Jan 23 00:06:03.552058 kernel: pci 8153:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Jan 23 00:06:03.613381 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 23 00:06:03.632432 kernel: mlx5_core 8153:00:02.0: enabling device (0000 -> 0002)
Jan 23 00:06:03.649439 kernel: mlx5_core 8153:00:02.0: PTM is not supported by PCIe
Jan 23 00:06:03.649640 kernel: mlx5_core 8153:00:02.0: firmware version: 16.30.5026
Jan 23 00:06:03.666813 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 23 00:06:03.683880 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 23 00:06:03.699130 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 23 00:06:03.709823 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 23 00:06:03.723815 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 00:06:03.754687 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#206 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jan 23 00:06:03.763677 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 00:06:03.774688 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#119 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jan 23 00:06:03.786700 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 00:06:03.871685 kernel: hv_netvsc 7ced8dd0-b318-7ced-8dd0-b3187ced8dd0 eth0: VF registering: eth1
Jan 23 00:06:03.891703 kernel: mlx5_core 8153:00:02.0 eth1: joined to eth0
Jan 23 00:06:03.891901 kernel: mlx5_core 8153:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 23 00:06:03.917681 kernel: mlx5_core 8153:00:02.0 enP33107s1: renamed from eth1
Jan 23 00:06:03.990890 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 00:06:04.013210 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 00:06:04.023502 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 00:06:04.028482 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 00:06:04.041845 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 00:06:04.064708 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 00:06:04.794349 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#92 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jan 23 00:06:04.811693 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 00:06:04.812193 disk-uuid[640]: The operation has completed successfully.
Jan 23 00:06:04.890606 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 00:06:04.891840 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 00:06:04.913146 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 00:06:04.938325 sh[820]: Success
Jan 23 00:06:04.960306 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 00:06:04.960374 kernel: device-mapper: uevent: version 1.0.3
Jan 23 00:06:04.966686 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 00:06:04.974689 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jan 23 00:06:05.053043 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 00:06:05.069028 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 00:06:05.078703 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 00:06:05.103697 kernel: BTRFS: device fsid 97a43946-ed04-45c1-a355-c0350e8b973e devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (838)
Jan 23 00:06:05.113263 kernel: BTRFS info (device dm-0): first mount of filesystem 97a43946-ed04-45c1-a355-c0350e8b973e
Jan 23 00:06:05.113311 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 23 00:06:05.166714 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 00:06:05.166746 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 00:06:05.175511 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 00:06:05.179757 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 00:06:05.187126 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 00:06:05.187908 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 00:06:05.208406 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 00:06:05.241720 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (871)
Jan 23 00:06:05.252844 kernel: BTRFS info (device sda6): first mount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f
Jan 23 00:06:05.252905 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 00:06:05.265058 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 00:06:05.265127 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 00:06:05.274943 kernel: BTRFS info (device sda6): last unmount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f
Jan 23 00:06:05.275224 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 00:06:05.284464 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 00:06:05.330297 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 00:06:05.341369 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 00:06:05.378976 systemd-networkd[1007]: lo: Link UP
Jan 23 00:06:05.378985 systemd-networkd[1007]: lo: Gained carrier
Jan 23 00:06:05.381512 systemd-networkd[1007]: Enumeration completed
Jan 23 00:06:05.381696 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 00:06:05.386571 systemd[1]: Reached target network.target - Network.
Jan 23 00:06:05.389402 systemd-networkd[1007]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 00:06:05.389405 systemd-networkd[1007]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 00:06:05.461684 kernel: mlx5_core 8153:00:02.0 enP33107s1: Link up
Jan 23 00:06:05.494685 kernel: hv_netvsc 7ced8dd0-b318-7ced-8dd0-b3187ced8dd0 eth0: Data path switched to VF: enP33107s1
Jan 23 00:06:05.495396 systemd-networkd[1007]: enP33107s1: Link UP
Jan 23 00:06:05.495458 systemd-networkd[1007]: eth0: Link UP
Jan 23 00:06:05.495523 systemd-networkd[1007]: eth0: Gained carrier
Jan 23 00:06:05.495538 systemd-networkd[1007]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 00:06:05.501518 systemd-networkd[1007]: enP33107s1: Gained carrier
Jan 23 00:06:05.522725 systemd-networkd[1007]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 23 00:06:07.156943 systemd-networkd[1007]: eth0: Gained IPv6LL
Jan 23 00:06:07.237710 ignition[948]: Ignition 2.22.0
Jan 23 00:06:07.237723 ignition[948]: Stage: fetch-offline
Jan 23 00:06:07.241108 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 00:06:07.237823 ignition[948]: no configs at "/usr/lib/ignition/base.d"
Jan 23 00:06:07.250165 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 00:06:07.237829 ignition[948]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 00:06:07.237902 ignition[948]: parsed url from cmdline: ""
Jan 23 00:06:07.237904 ignition[948]: no config URL provided
Jan 23 00:06:07.237907 ignition[948]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 00:06:07.237912 ignition[948]: no config at "/usr/lib/ignition/user.ign"
Jan 23 00:06:07.237916 ignition[948]: failed to fetch config: resource requires networking
Jan 23 00:06:07.238158 ignition[948]: Ignition finished successfully
Jan 23 00:06:07.280733 ignition[1019]: Ignition 2.22.0
Jan 23 00:06:07.280738 ignition[1019]: Stage: fetch
Jan 23 00:06:07.280967 ignition[1019]: no configs at "/usr/lib/ignition/base.d"
Jan 23 00:06:07.280975 ignition[1019]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 00:06:07.281068 ignition[1019]: parsed url from cmdline: ""
Jan 23 00:06:07.281071 ignition[1019]: no config URL provided
Jan 23 00:06:07.281075 ignition[1019]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 00:06:07.281081 ignition[1019]: no config at "/usr/lib/ignition/user.ign"
Jan 23 00:06:07.281098 ignition[1019]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 23 00:06:07.401533 ignition[1019]: GET result: OK
Jan 23 00:06:07.401652 ignition[1019]: config has been read from IMDS userdata
Jan 23 00:06:07.404981 unknown[1019]: fetched base config from "system"
Jan 23 00:06:07.401699 ignition[1019]: parsing config with SHA512: 59e1d9a6e6c651942409dc9b9eb57da31121a73efb21fa597d8b311e9b69a6e036b50919de8d0acafa1caa2991fd8eb9d4db7a961663a8ca772986a33dbf1fad
Jan 23 00:06:07.404987 unknown[1019]: fetched base config from "system"
Jan 23 00:06:07.405302 ignition[1019]: fetch: fetch complete
Jan 23 00:06:07.404990 unknown[1019]: fetched user config from "azure"
Jan 23 00:06:07.405306 ignition[1019]: fetch: fetch passed
Jan 23 00:06:07.407222 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 00:06:07.405347 ignition[1019]: Ignition finished successfully
Jan 23 00:06:07.413460 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 00:06:07.455146 ignition[1026]: Ignition 2.22.0
Jan 23 00:06:07.455162 ignition[1026]: Stage: kargs
Jan 23 00:06:07.459342 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 00:06:07.455408 ignition[1026]: no configs at "/usr/lib/ignition/base.d"
Jan 23 00:06:07.466737 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 00:06:07.455417 ignition[1026]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 00:06:07.455931 ignition[1026]: kargs: kargs passed
Jan 23 00:06:07.455975 ignition[1026]: Ignition finished successfully
Jan 23 00:06:07.502346 ignition[1033]: Ignition 2.22.0
Jan 23 00:06:07.502361 ignition[1033]: Stage: disks
Jan 23 00:06:07.507904 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 00:06:07.502544 ignition[1033]: no configs at "/usr/lib/ignition/base.d"
Jan 23 00:06:07.512560 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 00:06:07.502551 ignition[1033]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 00:06:07.520711 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 00:06:07.503148 ignition[1033]: disks: disks passed
Jan 23 00:06:07.529557 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 00:06:07.503200 ignition[1033]: Ignition finished successfully
Jan 23 00:06:07.537569 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 00:06:07.546015 systemd[1]: Reached target basic.target - Basic System.
Jan 23 00:06:07.555300 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 00:06:07.652507 systemd-fsck[1042]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Jan 23 00:06:07.659640 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 00:06:07.667773 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 00:06:08.075687 kernel: EXT4-fs (sda9): mounted filesystem f31390ab-27e9-47d9-a374-053913301d53 r/w with ordered data mode. Quota mode: none.
Jan 23 00:06:08.076119 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 00:06:08.079959 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 00:06:08.128866 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 00:06:08.141410 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 00:06:08.161691 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1056)
Jan 23 00:06:08.171897 kernel: BTRFS info (device sda6): first mount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f
Jan 23 00:06:08.171940 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 00:06:08.172909 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 23 00:06:08.192073 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 00:06:08.192103 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 00:06:08.192467 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 00:06:08.192514 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 00:06:08.207176 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 00:06:08.214355 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 00:06:08.219901 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 00:06:09.286879 coreos-metadata[1059]: Jan 23 00:06:09.286 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 23 00:06:09.293736 coreos-metadata[1059]: Jan 23 00:06:09.293 INFO Fetch successful
Jan 23 00:06:09.293736 coreos-metadata[1059]: Jan 23 00:06:09.293 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 23 00:06:09.306108 coreos-metadata[1059]: Jan 23 00:06:09.305 INFO Fetch successful
Jan 23 00:06:09.336997 coreos-metadata[1059]: Jan 23 00:06:09.336 INFO wrote hostname ci-4459.2.2-n-db2e6badfc to /sysroot/etc/hostname
Jan 23 00:06:09.344255 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 23 00:06:09.710286 initrd-setup-root[1086]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 00:06:09.794640 initrd-setup-root[1093]: cut: /sysroot/etc/group: No such file or directory
Jan 23 00:06:09.833213 initrd-setup-root[1100]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 00:06:09.839504 initrd-setup-root[1107]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 00:06:11.848545 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 00:06:11.854801 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 00:06:11.871461 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 00:06:11.884053 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 00:06:11.893004 kernel: BTRFS info (device sda6): last unmount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f
Jan 23 00:06:11.917761 ignition[1178]: INFO : Ignition 2.22.0
Jan 23 00:06:11.917761 ignition[1178]: INFO : Stage: mount
Jan 23 00:06:11.917761 ignition[1178]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 00:06:11.917761 ignition[1178]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 00:06:11.948291 ignition[1178]: INFO : mount: mount passed
Jan 23 00:06:11.948291 ignition[1178]: INFO : Ignition finished successfully
Jan 23 00:06:11.919158 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 00:06:11.926119 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 00:06:11.935830 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 00:06:11.962890 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 00:06:11.999805 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1192)
Jan 23 00:06:12.009872 kernel: BTRFS info (device sda6): first mount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f
Jan 23 00:06:12.009922 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 00:06:12.019455 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 00:06:12.019514 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 00:06:12.020932 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 00:06:12.051703 ignition[1209]: INFO : Ignition 2.22.0
Jan 23 00:06:12.051703 ignition[1209]: INFO : Stage: files
Jan 23 00:06:12.051703 ignition[1209]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 00:06:12.051703 ignition[1209]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 00:06:12.067281 ignition[1209]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 00:06:12.067281 ignition[1209]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 00:06:12.067281 ignition[1209]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 00:06:12.173791 ignition[1209]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 00:06:12.179833 ignition[1209]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 00:06:12.179833 ignition[1209]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 00:06:12.179833 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 00:06:12.179833 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jan 23 00:06:12.174912 unknown[1209]: wrote ssh authorized keys file for user: core
Jan 23 00:06:12.210818 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 00:06:12.416543 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 00:06:12.424647 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 00:06:12.424647 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 23 00:06:12.610823 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 23 00:06:12.725825 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 00:06:12.725825 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 00:06:12.740235 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 00:06:12.740235 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 00:06:12.740235 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 00:06:12.740235 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 00:06:12.740235 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 00:06:12.740235 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 00:06:12.740235 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 00:06:12.740235 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 00:06:12.740235 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 00:06:12.740235 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 00:06:12.740235 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 00:06:12.740235 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 00:06:12.740235 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jan 23 00:06:13.262179 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 23 00:06:13.760814 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 00:06:13.760814 ignition[1209]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 23 00:06:13.868338 ignition[1209]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 00:06:13.877470 ignition[1209]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 00:06:13.877470 ignition[1209]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 23 00:06:13.877470 ignition[1209]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 00:06:13.877470 ignition[1209]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 00:06:13.877470 ignition[1209]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 00:06:13.877470 ignition[1209]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 00:06:13.877470 ignition[1209]: INFO : files: files passed
Jan 23 00:06:13.877470 ignition[1209]: INFO : Ignition finished successfully
Jan 23 00:06:13.877184 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 00:06:13.892342 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 00:06:13.930893 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 00:06:14.035629 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 00:06:14.035744 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 00:06:14.051575 initrd-setup-root-after-ignition[1239]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 00:06:14.051575 initrd-setup-root-after-ignition[1239]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 00:06:14.070372 initrd-setup-root-after-ignition[1243]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 00:06:14.051900 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 00:06:14.063531 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 00:06:14.075993 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 00:06:14.114131 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 00:06:14.114238 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 00:06:14.123856 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 00:06:14.133047 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 00:06:14.141990 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 00:06:14.142771 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 00:06:14.176236 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 00:06:14.183237 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 00:06:14.209249 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 00:06:14.214222 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 00:06:14.223563 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 00:06:14.232431 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 00:06:14.232552 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 00:06:14.244762 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 00:06:14.249163 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 00:06:14.257231 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 00:06:14.265557 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 00:06:14.273748 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 00:06:14.282397 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 00:06:14.291731 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 00:06:14.300781 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 00:06:14.310019 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 00:06:14.318035 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 00:06:14.327702 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 00:06:14.334789 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 00:06:14.334910 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 00:06:14.346755 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 00:06:14.351572 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 00:06:14.360148 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 00:06:14.363997 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 00:06:14.369538 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 00:06:14.369645 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 00:06:14.382714 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 00:06:14.382801 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 00:06:14.388114 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 00:06:14.388193 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 00:06:14.396335 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 23 00:06:14.396414 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 23 00:06:14.407572 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 00:06:14.436863 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 00:06:14.445955 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 00:06:14.486984 ignition[1263]: INFO : Ignition 2.22.0 Jan 23 00:06:14.486984 ignition[1263]: INFO : Stage: umount Jan 23 00:06:14.486984 ignition[1263]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 00:06:14.486984 ignition[1263]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 00:06:14.486984 ignition[1263]: INFO : umount: umount passed Jan 23 00:06:14.486984 ignition[1263]: INFO : Ignition finished successfully Jan 23 00:06:14.446101 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 00:06:14.458516 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 00:06:14.458617 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 00:06:14.483998 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 00:06:14.484544 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 00:06:14.484621 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 00:06:14.495870 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 00:06:14.495963 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 00:06:14.503048 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 00:06:14.503139 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 00:06:14.511922 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 00:06:14.512036 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 00:06:14.520506 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 00:06:14.520549 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 00:06:14.528125 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 00:06:14.528165 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 00:06:14.535648 systemd[1]: Stopped target network.target - Network. 
Jan 23 00:06:14.543709 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 00:06:14.543746 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 00:06:14.552460 systemd[1]: Stopped target paths.target - Path Units. Jan 23 00:06:14.560490 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 00:06:14.563815 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 00:06:14.569093 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 00:06:14.576677 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 00:06:14.585081 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 00:06:14.585121 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 00:06:14.592983 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 00:06:14.593012 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 00:06:14.600944 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 00:06:14.600998 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 00:06:14.608835 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 00:06:14.608866 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 00:06:14.615855 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 00:06:14.615888 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 00:06:14.624109 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 00:06:14.631718 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 00:06:14.648658 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 00:06:14.648924 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Jan 23 00:06:14.665827 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 00:06:14.666046 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 00:06:14.666149 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 00:06:14.678938 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 00:06:14.679502 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 00:06:14.687692 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 00:06:14.844423 kernel: hv_netvsc 7ced8dd0-b318-7ced-8dd0-b3187ced8dd0 eth0: Data path switched from VF: enP33107s1 Jan 23 00:06:14.687754 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 00:06:14.705783 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 00:06:14.718841 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 00:06:14.718923 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 00:06:14.728386 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 00:06:14.728441 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 00:06:14.736821 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 00:06:14.736861 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 00:06:14.741411 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 00:06:14.741444 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 00:06:14.754189 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 00:06:14.760186 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jan 23 00:06:14.760250 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 00:06:14.794799 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 00:06:14.794982 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 00:06:14.805255 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 00:06:14.805301 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 00:06:14.813576 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 00:06:14.813608 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 00:06:14.821711 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 00:06:14.821767 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 00:06:14.840577 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 00:06:14.840628 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 00:06:14.853323 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 00:06:14.853373 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 00:06:14.867187 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 00:06:14.883598 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 00:06:14.883690 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 00:06:14.901880 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 00:06:14.901940 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 00:06:14.911092 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Jan 23 00:06:14.911148 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 00:06:14.921468 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 00:06:14.921516 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 00:06:14.926562 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 00:06:15.070153 systemd-journald[225]: Received SIGTERM from PID 1 (systemd). Jan 23 00:06:14.926596 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:06:14.942167 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 00:06:14.942212 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jan 23 00:06:14.942234 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 00:06:14.942257 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 00:06:14.942522 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 00:06:14.942619 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 00:06:14.952089 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 00:06:14.952155 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 00:06:14.960956 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 00:06:14.970885 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 00:06:14.994635 systemd[1]: Switching root. 
Jan 23 00:06:15.126338 systemd-journald[225]: Journal stopped Jan 23 00:06:21.353587 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 00:06:21.353607 kernel: SELinux: policy capability open_perms=1 Jan 23 00:06:21.353614 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 00:06:21.353620 kernel: SELinux: policy capability always_check_network=0 Jan 23 00:06:21.353625 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 00:06:21.353632 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 00:06:21.353638 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 00:06:21.353643 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 00:06:21.353649 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 00:06:21.353654 kernel: audit: type=1403 audit(1769126775.563:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 00:06:21.353661 systemd[1]: Successfully loaded SELinux policy in 98.572ms. Jan 23 00:06:21.353682 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.448ms. Jan 23 00:06:21.353688 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 00:06:21.353695 systemd[1]: Detected virtualization microsoft. Jan 23 00:06:21.353701 systemd[1]: Detected architecture arm64. Jan 23 00:06:21.353707 systemd[1]: Detected first boot. Jan 23 00:06:21.353714 systemd[1]: Hostname set to . Jan 23 00:06:21.353721 systemd[1]: Initializing machine ID from random generator. Jan 23 00:06:21.353727 zram_generator::config[1305]: No configuration found. 
Jan 23 00:06:21.353734 kernel: NET: Registered PF_VSOCK protocol family Jan 23 00:06:21.353739 systemd[1]: Populated /etc with preset unit settings. Jan 23 00:06:21.353746 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 00:06:21.353752 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 00:06:21.353758 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 00:06:21.353764 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 00:06:21.353770 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 00:06:21.353777 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 00:06:21.353782 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 00:06:21.353788 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 00:06:21.353794 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 00:06:21.353802 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 00:06:21.353808 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 00:06:21.353814 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 00:06:21.353820 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 00:06:21.353826 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 00:06:21.353832 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 00:06:21.353838 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 00:06:21.353844 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 23 00:06:21.353852 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 00:06:21.353858 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 23 00:06:21.353866 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 00:06:21.353872 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 00:06:21.353878 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 00:06:21.353884 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 00:06:21.353890 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 00:06:21.353896 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 00:06:21.353903 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 00:06:21.353909 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 00:06:21.353915 systemd[1]: Reached target slices.target - Slice Units. Jan 23 00:06:21.353921 systemd[1]: Reached target swap.target - Swaps. Jan 23 00:06:21.353927 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 00:06:21.353933 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 00:06:21.353941 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 00:06:21.353947 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 00:06:21.353953 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 00:06:21.353959 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 00:06:21.353965 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 00:06:21.353971 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 23 00:06:21.353978 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 00:06:21.353985 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 00:06:21.353992 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 00:06:21.353998 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 00:06:21.354004 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 00:06:21.354011 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 00:06:21.354017 systemd[1]: Reached target machines.target - Containers. Jan 23 00:06:21.354023 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 00:06:21.354030 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 00:06:21.354037 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 00:06:21.354043 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 00:06:21.354049 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 00:06:21.354056 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 00:06:21.354062 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 00:06:21.354068 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 00:06:21.354075 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 00:06:21.354081 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 00:06:21.354087 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jan 23 00:06:21.354094 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 00:06:21.354101 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 00:06:21.354107 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 00:06:21.354113 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 00:06:21.354120 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 00:06:21.354126 kernel: loop: module loaded Jan 23 00:06:21.354145 systemd-journald[1388]: Collecting audit messages is disabled. Jan 23 00:06:21.354168 systemd-journald[1388]: Journal started Jan 23 00:06:21.354183 systemd-journald[1388]: Runtime Journal (/run/log/journal/9cd4a22dd4994a72ba9295ca78cef054) is 8M, max 78.3M, 70.3M free. Jan 23 00:06:20.592444 systemd[1]: Queued start job for default target multi-user.target. Jan 23 00:06:20.598299 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 23 00:06:20.598753 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 00:06:20.599041 systemd[1]: systemd-journald.service: Consumed 2.475s CPU time. Jan 23 00:06:21.373688 kernel: fuse: init (API version 7.41) Jan 23 00:06:21.391416 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 00:06:21.407353 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 00:06:21.415690 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 00:06:21.435403 kernel: ACPI: bus type drm_connector registered Jan 23 00:06:21.435477 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Jan 23 00:06:21.451732 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 00:06:21.460797 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 00:06:21.460849 systemd[1]: Stopped verity-setup.service. Jan 23 00:06:21.476101 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 00:06:21.476872 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 00:06:21.481978 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 00:06:21.488614 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 00:06:21.492986 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 00:06:21.497468 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 00:06:21.502390 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 00:06:21.508691 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 00:06:21.515434 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 00:06:21.520791 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 00:06:21.520932 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 00:06:21.528307 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 00:06:21.528451 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 00:06:21.533649 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 00:06:21.533824 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 00:06:21.538646 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 00:06:21.538799 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 00:06:21.544404 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jan 23 00:06:21.544537 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 00:06:21.549390 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 00:06:21.549541 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 00:06:21.554338 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 00:06:21.560014 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 00:06:21.573490 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 00:06:21.579312 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 00:06:21.588804 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 00:06:21.596550 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 00:06:21.596587 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 00:06:21.601977 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 00:06:21.610826 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 00:06:21.616092 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 00:06:21.660866 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 00:06:21.672408 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 00:06:21.677343 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 00:06:21.678239 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jan 23 00:06:21.683128 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 00:06:21.684113 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 00:06:21.689790 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 00:06:21.698291 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 00:06:21.704726 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 00:06:21.711223 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 00:06:21.716340 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 00:06:21.720995 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 00:06:21.725912 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 00:06:21.734972 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 00:06:21.742818 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 00:06:21.750802 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 00:06:21.776300 systemd-journald[1388]: Time spent on flushing to /var/log/journal/9cd4a22dd4994a72ba9295ca78cef054 is 12.060ms for 946 entries. Jan 23 00:06:21.776300 systemd-journald[1388]: System Journal (/var/log/journal/9cd4a22dd4994a72ba9295ca78cef054) is 8M, max 2.6G, 2.6G free. Jan 23 00:06:21.802951 systemd-journald[1388]: Received client request to flush runtime journal. Jan 23 00:06:21.802997 kernel: loop0: detected capacity change from 0 to 119840 Jan 23 00:06:21.804998 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Jan 23 00:06:21.826865 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 00:06:21.828565 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 00:06:21.899493 systemd-tmpfiles[1441]: ACLs are not supported, ignoring. Jan 23 00:06:21.899505 systemd-tmpfiles[1441]: ACLs are not supported, ignoring. Jan 23 00:06:21.902686 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 00:06:21.910076 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 00:06:21.963222 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 00:06:22.002914 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 00:06:22.009127 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 00:06:22.031900 systemd-tmpfiles[1462]: ACLs are not supported, ignoring. Jan 23 00:06:22.031913 systemd-tmpfiles[1462]: ACLs are not supported, ignoring. Jan 23 00:06:22.034277 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 00:06:22.564915 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 00:06:22.617711 kernel: loop1: detected capacity change from 0 to 211168 Jan 23 00:06:22.635735 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 00:06:22.642303 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 00:06:22.667214 systemd-udevd[1469]: Using default interface naming scheme 'v255'. Jan 23 00:06:22.705697 kernel: loop2: detected capacity change from 0 to 27936 Jan 23 00:06:23.358133 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 00:06:23.368805 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 23 00:06:23.460371 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 00:06:23.472693 kernel: loop3: detected capacity change from 0 to 100632 Jan 23 00:06:23.473959 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 23 00:06:23.514724 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#200 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 00:06:23.535060 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 00:06:23.551698 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 00:06:23.600848 kernel: hv_vmbus: registering driver hv_balloon Jan 23 00:06:23.600935 kernel: loop4: detected capacity change from 0 to 119840 Jan 23 00:06:23.603793 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 23 00:06:23.608919 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 23 00:06:23.622720 kernel: loop5: detected capacity change from 0 to 211168 Jan 23 00:06:23.650452 kernel: hv_vmbus: registering driver hyperv_fb Jan 23 00:06:23.650556 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 23 00:06:23.650570 kernel: loop6: detected capacity change from 0 to 27936 Jan 23 00:06:23.650582 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 23 00:06:23.661316 kernel: Console: switching to colour dummy device 80x25 Jan 23 00:06:23.664782 kernel: Console: switching to colour frame buffer device 128x48 Jan 23 00:06:23.684134 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:06:23.686682 kernel: loop7: detected capacity change from 0 to 100632 Jan 23 00:06:23.706153 (sd-merge)[1541]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 23 00:06:23.708048 (sd-merge)[1541]: Merged extensions into '/usr'. Jan 23 00:06:23.725826 systemd[1]: Reload requested from client PID 1440 ('systemd-sysext') (unit systemd-sysext.service)... 
Jan 23 00:06:23.725975 systemd[1]: Reloading... Jan 23 00:06:23.795701 zram_generator::config[1583]: No configuration found. Jan 23 00:06:23.894223 systemd-networkd[1486]: lo: Link UP Jan 23 00:06:23.894552 systemd-networkd[1486]: lo: Gained carrier Jan 23 00:06:23.895567 systemd-networkd[1486]: Enumeration completed Jan 23 00:06:23.896435 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:06:23.896512 systemd-networkd[1486]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 00:06:23.943731 kernel: mlx5_core 8153:00:02.0 enP33107s1: Link up Jan 23 00:06:23.966759 kernel: hv_netvsc 7ced8dd0-b318-7ced-8dd0-b3187ced8dd0 eth0: Data path switched to VF: enP33107s1 Jan 23 00:06:23.968203 systemd-networkd[1486]: enP33107s1: Link UP Jan 23 00:06:23.968422 systemd-networkd[1486]: eth0: Link UP Jan 23 00:06:23.968429 systemd-networkd[1486]: eth0: Gained carrier Jan 23 00:06:23.968450 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:06:23.973854 systemd-networkd[1486]: enP33107s1: Gained carrier Jan 23 00:06:23.980736 systemd-networkd[1486]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 00:06:24.025262 systemd[1]: Reloading finished in 298 ms. Jan 23 00:06:24.053970 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 00:06:24.060431 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 00:06:24.070698 kernel: MACsec IEEE 802.1AE Jan 23 00:06:24.080508 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 23 00:06:24.096763 systemd[1]: Starting ensure-sysext.service... 
Jan 23 00:06:24.103811 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 00:06:24.110866 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 00:06:24.118836 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 00:06:24.132209 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 00:06:24.153904 systemd[1]: Reload requested from client PID 1689 ('systemctl') (unit ensure-sysext.service)... Jan 23 00:06:24.153919 systemd[1]: Reloading... Jan 23 00:06:24.158861 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 00:06:24.160127 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 00:06:24.160360 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 00:06:24.160499 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 00:06:24.162177 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 00:06:24.162422 systemd-tmpfiles[1693]: ACLs are not supported, ignoring. Jan 23 00:06:24.162814 systemd-tmpfiles[1693]: ACLs are not supported, ignoring. Jan 23 00:06:24.169637 systemd-tmpfiles[1693]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 00:06:24.169776 systemd-tmpfiles[1693]: Skipping /boot Jan 23 00:06:24.177016 systemd-tmpfiles[1693]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 00:06:24.177147 systemd-tmpfiles[1693]: Skipping /boot Jan 23 00:06:24.231810 zram_generator::config[1731]: No configuration found. 
Jan 23 00:06:24.385971 systemd[1]: Reloading finished in 231 ms.
Jan 23 00:06:24.403691 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 00:06:24.409848 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 23 00:06:24.415964 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 00:06:24.422100 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 00:06:24.422302 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:06:24.429998 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 00:06:24.436303 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 00:06:24.448904 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 00:06:24.457734 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 00:06:24.471605 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 00:06:24.479931 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 00:06:24.489619 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 00:06:24.501442 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 00:06:24.508903 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 00:06:24.518969 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 00:06:24.527487 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 00:06:24.534504 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 00:06:24.535610 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 00:06:24.540112 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 00:06:24.541716 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 00:06:24.547391 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 00:06:24.547548 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 00:06:24.553918 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 00:06:24.554153 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 00:06:24.564713 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 00:06:24.579144 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 00:06:24.579922 systemd-resolved[1797]: Positive Trust Anchors:
Jan 23 00:06:24.580164 systemd-resolved[1797]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 00:06:24.580224 systemd-resolved[1797]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 00:06:24.580893 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 00:06:24.588898 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 00:06:24.597273 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 00:06:24.605150 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 00:06:24.610132 systemd-resolved[1797]: Using system hostname 'ci-4459.2.2-n-db2e6badfc'.
Jan 23 00:06:24.610953 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 00:06:24.611071 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 00:06:24.611175 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 00:06:24.616738 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 00:06:24.622223 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 00:06:24.622388 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 00:06:24.628352 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 00:06:24.628493 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 00:06:24.636432 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 00:06:24.636572 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 00:06:24.642372 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 00:06:24.642497 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 00:06:24.650704 systemd[1]: Finished ensure-sysext.service.
Jan 23 00:06:24.654911 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 00:06:24.663245 systemd[1]: Reached target network.target - Network.
Jan 23 00:06:24.667322 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 00:06:24.673306 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 00:06:24.673366 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 00:06:24.677424 augenrules[1834]: No rules
Jan 23 00:06:24.678708 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 00:06:24.678908 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 00:06:24.698814 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:06:25.829288 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 00:06:25.835045 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 00:06:25.972813 systemd-networkd[1486]: eth0: Gained IPv6LL
Jan 23 00:06:25.974837 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 23 00:06:25.980706 systemd[1]: Reached target network-online.target - Network is Online.
Jan 23 00:06:32.077549 ldconfig[1436]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 00:06:32.091759 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 00:06:32.099117 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 00:06:32.112719 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 00:06:32.117607 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 00:06:32.122273 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 00:06:32.127775 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 00:06:32.133482 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 00:06:32.138314 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 00:06:32.143991 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 00:06:32.149468 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 00:06:32.149494 systemd[1]: Reached target paths.target - Path Units.
Jan 23 00:06:32.153507 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 00:06:32.158957 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 00:06:32.166007 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 00:06:32.172151 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 23 00:06:32.178700 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 23 00:06:32.184709 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 23 00:06:32.201325 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 00:06:32.206035 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 23 00:06:32.212015 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 00:06:32.216600 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 00:06:32.220597 systemd[1]: Reached target basic.target - Basic System.
Jan 23 00:06:32.224966 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 00:06:32.224992 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 00:06:32.227246 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 23 00:06:32.242420 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 23 00:06:32.248821 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 23 00:06:32.254949 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 23 00:06:32.261121 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 00:06:32.269808 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 23 00:06:32.276107 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 23 00:06:32.280273 jq[1854]: false
Jan 23 00:06:32.280849 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 23 00:06:32.283802 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 23 00:06:32.288639 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 23 00:06:32.289622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:06:32.295805 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 23 00:06:32.302802 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 23 00:06:32.309014 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 23 00:06:32.316803 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 23 00:06:32.323395 KVP[1859]: KVP starting; pid is:1859
Jan 23 00:06:32.327052 chronyd[1849]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Jan 23 00:06:32.327403 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 23 00:06:32.346496 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 23 00:06:32.393374 kernel: hv_utils: KVP IC version 4.0
Jan 23 00:06:32.392469 KVP[1859]: KVP LIC Version: 3.1
Jan 23 00:06:32.397341 chronyd[1849]: Timezone right/UTC failed leap second check, ignoring
Jan 23 00:06:32.397847 chronyd[1849]: Loaded seccomp filter (level 2)
Jan 23 00:06:32.399390 extend-filesystems[1858]: Found /dev/sda6
Jan 23 00:06:32.400407 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 23 00:06:32.411027 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 23 00:06:32.411618 systemd[1]: Starting update-engine.service - Update Engine...
Jan 23 00:06:32.422785 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 23 00:06:32.431609 extend-filesystems[1858]: Found /dev/sda9
Jan 23 00:06:32.439028 extend-filesystems[1858]: Checking size of /dev/sda9
Jan 23 00:06:32.437481 systemd[1]: Started chronyd.service - NTP client/server.
Jan 23 00:06:32.448190 jq[1883]: true
Jan 23 00:06:32.452504 update_engine[1881]: I20260123 00:06:32.452432 1881 main.cc:92] Flatcar Update Engine starting
Jan 23 00:06:32.454049 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 23 00:06:32.462818 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 23 00:06:32.463002 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 23 00:06:32.463962 systemd[1]: motdgen.service: Deactivated successfully.
Jan 23 00:06:32.464122 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 23 00:06:32.471804 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 23 00:06:32.481547 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 23 00:06:32.481744 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 23 00:06:32.503121 (ntainerd)[1894]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 23 00:06:32.506387 jq[1893]: true
Jan 23 00:06:32.530361 systemd-logind[1876]: New seat seat0.
Jan 23 00:06:32.539580 systemd-logind[1876]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 23 00:06:32.539933 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 23 00:06:32.562098 extend-filesystems[1858]: Old size kept for /dev/sda9
Jan 23 00:06:32.569021 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 23 00:06:32.569284 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 23 00:06:32.593244 tar[1892]: linux-arm64/LICENSE
Jan 23 00:06:32.593244 tar[1892]: linux-arm64/helm
Jan 23 00:06:32.617657 bash[1935]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 00:06:32.620232 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 23 00:06:32.629537 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 23 00:06:32.709440 dbus-daemon[1852]: [system] SELinux support is enabled
Jan 23 00:06:32.709747 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 23 00:06:32.718243 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 23 00:06:32.718488 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 23 00:06:32.725127 update_engine[1881]: I20260123 00:06:32.724934 1881 update_check_scheduler.cc:74] Next update check in 2m39s
Jan 23 00:06:32.725204 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 23 00:06:32.725224 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 23 00:06:32.735087 systemd[1]: Started update-engine.service - Update Engine.
Jan 23 00:06:32.735313 dbus-daemon[1852]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 23 00:06:32.759050 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 23 00:06:32.816719 coreos-metadata[1851]: Jan 23 00:06:32.816 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 23 00:06:32.824505 coreos-metadata[1851]: Jan 23 00:06:32.824 INFO Fetch successful
Jan 23 00:06:32.824505 coreos-metadata[1851]: Jan 23 00:06:32.824 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 23 00:06:32.830407 coreos-metadata[1851]: Jan 23 00:06:32.829 INFO Fetch successful
Jan 23 00:06:32.830407 coreos-metadata[1851]: Jan 23 00:06:32.829 INFO Fetching http://168.63.129.16/machine/f197bb44-8c9f-4c7c-91eb-044fb7b9ec1d/e4bfa7f2%2Dbada%2D4e41%2Db0e2%2Ddad82dfdef0d.%5Fci%2D4459.2.2%2Dn%2Ddb2e6badfc?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 23 00:06:32.835888 coreos-metadata[1851]: Jan 23 00:06:32.835 INFO Fetch successful
Jan 23 00:06:32.835888 coreos-metadata[1851]: Jan 23 00:06:32.835 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 23 00:06:32.846116 coreos-metadata[1851]: Jan 23 00:06:32.846 INFO Fetch successful
Jan 23 00:06:32.889563 sshd_keygen[1880]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 23 00:06:32.931740 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 23 00:06:32.941991 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 23 00:06:32.949962 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 23 00:06:32.956024 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 23 00:06:32.959457 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 23 00:06:32.983685 systemd[1]: issuegen.service: Deactivated successfully.
Jan 23 00:06:32.983940 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 23 00:06:32.995079 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 23 00:06:33.018624 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 23 00:06:33.022571 locksmithd[1958]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 23 00:06:33.026967 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 23 00:06:33.041016 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 23 00:06:33.051691 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 23 00:06:33.060247 systemd[1]: Reached target getty.target - Login Prompts.
Jan 23 00:06:33.126235 tar[1892]: linux-arm64/README.md
Jan 23 00:06:33.139724 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 23 00:06:33.398366 containerd[1894]: time="2026-01-23T00:06:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 23 00:06:33.399774 containerd[1894]: time="2026-01-23T00:06:33.398901628Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Jan 23 00:06:33.405318 containerd[1894]: time="2026-01-23T00:06:33.405281612Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.256µs"
Jan 23 00:06:33.405318 containerd[1894]: time="2026-01-23T00:06:33.405312460Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 23 00:06:33.405395 containerd[1894]: time="2026-01-23T00:06:33.405327492Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 23 00:06:33.405485 containerd[1894]: time="2026-01-23T00:06:33.405469252Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 23 00:06:33.405504 containerd[1894]: time="2026-01-23T00:06:33.405485316Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 23 00:06:33.405537 containerd[1894]: time="2026-01-23T00:06:33.405505844Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 00:06:33.405563 containerd[1894]: time="2026-01-23T00:06:33.405549884Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 00:06:33.405563 containerd[1894]: time="2026-01-23T00:06:33.405559884Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 00:06:33.405849 containerd[1894]: time="2026-01-23T00:06:33.405779572Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 00:06:33.405849 containerd[1894]: time="2026-01-23T00:06:33.405796004Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 00:06:33.405849 containerd[1894]: time="2026-01-23T00:06:33.405805596Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 00:06:33.405849 containerd[1894]: time="2026-01-23T00:06:33.405812020Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 23 00:06:33.405935 containerd[1894]: time="2026-01-23T00:06:33.405878980Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 23 00:06:33.406049 containerd[1894]: time="2026-01-23T00:06:33.406027708Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 00:06:33.406075 containerd[1894]: time="2026-01-23T00:06:33.406055420Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 00:06:33.406075 containerd[1894]: time="2026-01-23T00:06:33.406062732Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 23 00:06:33.406122 containerd[1894]: time="2026-01-23T00:06:33.406091972Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 23 00:06:33.406337 containerd[1894]: time="2026-01-23T00:06:33.406310772Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 23 00:06:33.406411 containerd[1894]: time="2026-01-23T00:06:33.406396516Z" level=info msg="metadata content store policy set" policy=shared
Jan 23 00:06:33.415424 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:06:33.424478 containerd[1894]: time="2026-01-23T00:06:33.424431188Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 23 00:06:33.424594 containerd[1894]: time="2026-01-23T00:06:33.424501956Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 23 00:06:33.424594 containerd[1894]: time="2026-01-23T00:06:33.424518404Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 23 00:06:33.424594 containerd[1894]: time="2026-01-23T00:06:33.424529028Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 23 00:06:33.424594 containerd[1894]: time="2026-01-23T00:06:33.424537084Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 23 00:06:33.424594 containerd[1894]: time="2026-01-23T00:06:33.424544316Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 23 00:06:33.424594 containerd[1894]: time="2026-01-23T00:06:33.424553748Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 23 00:06:33.424594 containerd[1894]: time="2026-01-23T00:06:33.424561676Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 23 00:06:33.424594 containerd[1894]: time="2026-01-23T00:06:33.424570012Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 23 00:06:33.424594 containerd[1894]: time="2026-01-23T00:06:33.424576164Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 23 00:06:33.424594 containerd[1894]: time="2026-01-23T00:06:33.424582508Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 23 00:06:33.424594 containerd[1894]: time="2026-01-23T00:06:33.424598476Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 23 00:06:33.425034 (kubelet)[2041]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 00:06:33.425375 containerd[1894]: time="2026-01-23T00:06:33.425278644Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 23 00:06:33.425375 containerd[1894]: time="2026-01-23T00:06:33.425311572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 23 00:06:33.425375 containerd[1894]: time="2026-01-23T00:06:33.425367220Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 23 00:06:33.425442 containerd[1894]: time="2026-01-23T00:06:33.425380684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 23 00:06:33.425442 containerd[1894]: time="2026-01-23T00:06:33.425389348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 23 00:06:33.426484 containerd[1894]: time="2026-01-23T00:06:33.425397196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 23 00:06:33.426525 containerd[1894]: time="2026-01-23T00:06:33.426502940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 23 00:06:33.426547 containerd[1894]: time="2026-01-23T00:06:33.426523988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 23 00:06:33.426561 containerd[1894]: time="2026-01-23T00:06:33.426549044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 23 00:06:33.426561 containerd[1894]: time="2026-01-23T00:06:33.426558868Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 23 00:06:33.426584 containerd[1894]: time="2026-01-23T00:06:33.426567740Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 23 00:06:33.426648 containerd[1894]: time="2026-01-23T00:06:33.426631588Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 23 00:06:33.426698 containerd[1894]: time="2026-01-23T00:06:33.426653676Z" level=info msg="Start snapshots syncer"
Jan 23 00:06:33.426698 containerd[1894]: time="2026-01-23T00:06:33.426687908Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 23 00:06:33.427949 containerd[1894]: time="2026-01-23T00:06:33.427230692Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 23 00:06:33.427949 containerd[1894]: time="2026-01-23T00:06:33.427311404Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 23 00:06:33.428081 containerd[1894]: time="2026-01-23T00:06:33.427379604Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 23 00:06:33.428081 containerd[1894]: time="2026-01-23T00:06:33.427520956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 23 00:06:33.428081 containerd[1894]: time="2026-01-23T00:06:33.427542740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 23 00:06:33.428081 containerd[1894]: time="2026-01-23T00:06:33.427551588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 23 00:06:33.428081 containerd[1894]: time="2026-01-23T00:06:33.427561796Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 23 00:06:33.428081 containerd[1894]: time="2026-01-23T00:06:33.427573412Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 23 00:06:33.428081 containerd[1894]: time="2026-01-23T00:06:33.427589036Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 23 00:06:33.428081 containerd[1894]: time="2026-01-23T00:06:33.427597604Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 23 00:06:33.428081 containerd[1894]: time="2026-01-23T00:06:33.427627996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 23 00:06:33.428081 containerd[1894]: time="2026-01-23T00:06:33.427638364Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 23 00:06:33.428081 containerd[1894]: time="2026-01-23T00:06:33.427648388Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 23 00:06:33.428081 containerd[1894]: time="2026-01-23T00:06:33.427699572Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 23 00:06:33.428081 containerd[1894]: time="2026-01-23T00:06:33.427715868Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 23 00:06:33.428081 containerd[1894]: time="2026-01-23T00:06:33.427724852Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 23 00:06:33.428249 containerd[1894]: time="2026-01-23T00:06:33.427737260Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 23 00:06:33.428249 containerd[1894]: time="2026-01-23T00:06:33.427745060Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 23 00:06:33.428249 containerd[1894]: time="2026-01-23T00:06:33.427752884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 23 00:06:33.428249 containerd[1894]: time="2026-01-23T00:06:33.427761964Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 23 00:06:33.428249 containerd[1894]: time="2026-01-23T00:06:33.427777564Z" level=info msg="runtime interface created"
Jan 23 00:06:33.428249 containerd[1894]: time="2026-01-23T00:06:33.427781076Z" level=info msg="created NRI interface"
Jan 23 00:06:33.428249 containerd[1894]: time="2026-01-23T00:06:33.427788836Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 23 00:06:33.428249 containerd[1894]: time="2026-01-23T00:06:33.427799036Z" level=info msg="Connect containerd service"
Jan 23 00:06:33.428249 containerd[1894]: time="2026-01-23T00:06:33.427820420Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 23 00:06:33.430927 containerd[1894]: time="2026-01-23T00:06:33.430890828Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 00:06:33.778403 kubelet[2041]: E0123 00:06:33.778264 2041 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 00:06:33.782989 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 00:06:33.783222 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 00:06:33.783604 systemd[1]: kubelet.service: Consumed 563ms CPU time, 259.6M memory peak.
Jan 23 00:06:33.966034 containerd[1894]: time="2026-01-23T00:06:33.965925620Z" level=info msg="Start subscribing containerd event" Jan 23 00:06:33.966034 containerd[1894]: time="2026-01-23T00:06:33.965997732Z" level=info msg="Start recovering state" Jan 23 00:06:33.966172 containerd[1894]: time="2026-01-23T00:06:33.966074356Z" level=info msg="Start event monitor" Jan 23 00:06:33.966172 containerd[1894]: time="2026-01-23T00:06:33.966084580Z" level=info msg="Start cni network conf syncer for default" Jan 23 00:06:33.966172 containerd[1894]: time="2026-01-23T00:06:33.966093084Z" level=info msg="Start streaming server" Jan 23 00:06:33.966172 containerd[1894]: time="2026-01-23T00:06:33.966099260Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 00:06:33.966172 containerd[1894]: time="2026-01-23T00:06:33.966103932Z" level=info msg="runtime interface starting up..." Jan 23 00:06:33.966172 containerd[1894]: time="2026-01-23T00:06:33.966108956Z" level=info msg="starting plugins..." Jan 23 00:06:33.966172 containerd[1894]: time="2026-01-23T00:06:33.966119540Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 00:06:33.971056 containerd[1894]: time="2026-01-23T00:06:33.966376412Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 00:06:33.971056 containerd[1894]: time="2026-01-23T00:06:33.966424436Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 00:06:33.971056 containerd[1894]: time="2026-01-23T00:06:33.966488404Z" level=info msg="containerd successfully booted in 0.568512s" Jan 23 00:06:33.966612 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 00:06:33.975515 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 00:06:33.981445 systemd[1]: Startup finished in 1.634s (kernel) + 13.746s (initrd) + 18.513s (userspace) = 33.895s. 
Jan 23 00:06:34.304864 login[2030]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 23 00:06:34.305056 login[2028]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:06:34.355180 systemd-logind[1876]: New session 1 of user core. Jan 23 00:06:34.356257 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 00:06:34.358160 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 00:06:34.403094 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 00:06:34.405788 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 00:06:34.452602 (systemd)[2067]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 00:06:34.454747 systemd-logind[1876]: New session c1 of user core. Jan 23 00:06:34.568640 systemd[2067]: Queued start job for default target default.target. Jan 23 00:06:34.587975 systemd[2067]: Created slice app.slice - User Application Slice. Jan 23 00:06:34.588151 systemd[2067]: Reached target paths.target - Paths. Jan 23 00:06:34.588245 systemd[2067]: Reached target timers.target - Timers. Jan 23 00:06:34.589509 systemd[2067]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 00:06:34.597343 systemd[2067]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 00:06:34.597504 systemd[2067]: Reached target sockets.target - Sockets. Jan 23 00:06:34.597661 systemd[2067]: Reached target basic.target - Basic System. Jan 23 00:06:34.597777 systemd[2067]: Reached target default.target - Main User Target. Jan 23 00:06:34.597846 systemd[2067]: Startup finished in 138ms. Jan 23 00:06:34.597954 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 00:06:34.604806 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jan 23 00:06:35.306085 login[2030]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:06:35.310044 systemd-logind[1876]: New session 2 of user core. Jan 23 00:06:35.317967 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 00:06:35.620414 waagent[2023]: 2026-01-23T00:06:35.620269Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 23 00:06:35.624784 waagent[2023]: 2026-01-23T00:06:35.624738Z INFO Daemon Daemon OS: flatcar 4459.2.2 Jan 23 00:06:35.628116 waagent[2023]: 2026-01-23T00:06:35.628089Z INFO Daemon Daemon Python: 3.11.13 Jan 23 00:06:35.631434 waagent[2023]: 2026-01-23T00:06:35.631399Z INFO Daemon Daemon Run daemon Jan 23 00:06:35.634287 waagent[2023]: 2026-01-23T00:06:35.634251Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.2' Jan 23 00:06:35.641122 waagent[2023]: 2026-01-23T00:06:35.641076Z INFO Daemon Daemon Using waagent for provisioning Jan 23 00:06:35.645170 waagent[2023]: 2026-01-23T00:06:35.645136Z INFO Daemon Daemon Activate resource disk Jan 23 00:06:35.648721 waagent[2023]: 2026-01-23T00:06:35.648691Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 23 00:06:35.657716 waagent[2023]: 2026-01-23T00:06:35.657661Z INFO Daemon Daemon Found device: None Jan 23 00:06:35.661018 waagent[2023]: 2026-01-23T00:06:35.660989Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 23 00:06:35.667382 waagent[2023]: 2026-01-23T00:06:35.667356Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 23 00:06:35.676350 waagent[2023]: 2026-01-23T00:06:35.676310Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 00:06:35.680742 waagent[2023]: 2026-01-23T00:06:35.680709Z INFO Daemon Daemon Running default provisioning handler
Jan 23 00:06:35.689626 waagent[2023]: 2026-01-23T00:06:35.689566Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 23 00:06:35.699932 waagent[2023]: 2026-01-23T00:06:35.699882Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 23 00:06:35.707189 waagent[2023]: 2026-01-23T00:06:35.707151Z INFO Daemon Daemon cloud-init is enabled: False Jan 23 00:06:35.710819 waagent[2023]: 2026-01-23T00:06:35.710795Z INFO Daemon Daemon Copying ovf-env.xml Jan 23 00:06:35.870235 waagent[2023]: 2026-01-23T00:06:35.870146Z INFO Daemon Daemon Successfully mounted dvd Jan 23 00:06:35.911571 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 23 00:06:35.913588 waagent[2023]: 2026-01-23T00:06:35.913521Z INFO Daemon Daemon Detect protocol endpoint Jan 23 00:06:35.917265 waagent[2023]: 2026-01-23T00:06:35.917223Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 23 00:06:35.921627 waagent[2023]: 2026-01-23T00:06:35.921596Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 23 00:06:35.926558 waagent[2023]: 2026-01-23T00:06:35.926531Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 23 00:06:35.930403 waagent[2023]: 2026-01-23T00:06:35.930369Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 23 00:06:35.933957 waagent[2023]: 2026-01-23T00:06:35.933928Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 23 00:06:36.018677 waagent[2023]: 2026-01-23T00:06:36.018627Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 23 00:06:36.023734 waagent[2023]: 2026-01-23T00:06:36.023713Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 23 00:06:36.027579 waagent[2023]: 2026-01-23T00:06:36.027555Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 23 00:06:36.215761 waagent[2023]: 2026-01-23T00:06:36.215251Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 23 00:06:36.220095 waagent[2023]: 2026-01-23T00:06:36.220038Z INFO Daemon Daemon Forcing an update of the goal state. Jan 23 00:06:36.227727 waagent[2023]: 2026-01-23T00:06:36.227686Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 00:06:36.245921 waagent[2023]: 2026-01-23T00:06:36.245887Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 23 00:06:36.250514 waagent[2023]: 2026-01-23T00:06:36.250478Z INFO Daemon Jan 23 00:06:36.252763 waagent[2023]: 2026-01-23T00:06:36.252733Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 5a4b4b41-1e2f-4fd1-a4bd-f67a9f7a7c77 eTag: 519484515456769813 source: Fabric] Jan 23 00:06:36.261251 waagent[2023]: 2026-01-23T00:06:36.261218Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jan 23 00:06:36.266215 waagent[2023]: 2026-01-23T00:06:36.266185Z INFO Daemon Jan 23 00:06:36.268561 waagent[2023]: 2026-01-23T00:06:36.268535Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 23 00:06:36.277453 waagent[2023]: 2026-01-23T00:06:36.277423Z INFO Daemon Daemon Downloading artifacts profile blob Jan 23 00:06:36.390600 waagent[2023]: 2026-01-23T00:06:36.390526Z INFO Daemon Downloaded certificate {'thumbprint': '9C99916B8F50CA5728D2622D07BE08BB4206A761', 'hasPrivateKey': True} Jan 23 00:06:36.398606 waagent[2023]: 2026-01-23T00:06:36.398563Z INFO Daemon Fetch goal state completed Jan 23 00:06:36.434945 waagent[2023]: 2026-01-23T00:06:36.434904Z INFO Daemon Daemon Starting provisioning Jan 23 00:06:36.438833 waagent[2023]: 2026-01-23T00:06:36.438790Z INFO Daemon Daemon Handle ovf-env.xml. Jan 23 00:06:36.442616 waagent[2023]: 2026-01-23T00:06:36.442588Z INFO Daemon Daemon Set hostname [ci-4459.2.2-n-db2e6badfc] Jan 23 00:06:36.448926 waagent[2023]: 2026-01-23T00:06:36.448879Z INFO Daemon Daemon Publish hostname [ci-4459.2.2-n-db2e6badfc] Jan 23 00:06:36.453709 waagent[2023]: 2026-01-23T00:06:36.453654Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 23 00:06:36.458575 waagent[2023]: 2026-01-23T00:06:36.458541Z INFO Daemon Daemon Primary interface is [eth0] Jan 23 00:06:36.468604 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:06:36.468613 systemd-networkd[1486]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 23 00:06:36.468644 systemd-networkd[1486]: eth0: DHCP lease lost Jan 23 00:06:36.469655 waagent[2023]: 2026-01-23T00:06:36.469594Z INFO Daemon Daemon Create user account if not exists Jan 23 00:06:36.473970 waagent[2023]: 2026-01-23T00:06:36.473932Z INFO Daemon Daemon User core already exists, skip useradd Jan 23 00:06:36.478274 waagent[2023]: 2026-01-23T00:06:36.478235Z INFO Daemon Daemon Configure sudoer Jan 23 00:06:36.485787 waagent[2023]: 2026-01-23T00:06:36.485734Z INFO Daemon Daemon Configure sshd Jan 23 00:06:36.492998 waagent[2023]: 2026-01-23T00:06:36.492949Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 23 00:06:36.502614 waagent[2023]: 2026-01-23T00:06:36.502576Z INFO Daemon Daemon Deploy ssh public key. Jan 23 00:06:36.508747 systemd-networkd[1486]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 00:06:37.597463 waagent[2023]: 2026-01-23T00:06:37.597414Z INFO Daemon Daemon Provisioning complete Jan 23 00:06:37.611860 waagent[2023]: 2026-01-23T00:06:37.611816Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 23 00:06:37.617186 waagent[2023]: 2026-01-23T00:06:37.617149Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jan 23 00:06:37.624862 waagent[2023]: 2026-01-23T00:06:37.624752Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 23 00:06:37.725554 waagent[2117]: 2026-01-23T00:06:37.725486Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 23 00:06:37.726703 waagent[2117]: 2026-01-23T00:06:37.726011Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.2 Jan 23 00:06:37.726703 waagent[2117]: 2026-01-23T00:06:37.726068Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 23 00:06:37.726703 waagent[2117]: 2026-01-23T00:06:37.726106Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jan 23 00:06:38.029441 waagent[2117]: 2026-01-23T00:06:38.029325Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 23 00:06:38.029788 waagent[2117]: 2026-01-23T00:06:38.029756Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 00:06:38.029917 waagent[2117]: 2026-01-23T00:06:38.029893Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 00:06:38.035964 waagent[2117]: 2026-01-23T00:06:38.035917Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 00:06:38.041190 waagent[2117]: 2026-01-23T00:06:38.041158Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 23 00:06:38.041749 waagent[2117]: 2026-01-23T00:06:38.041711Z INFO ExtHandler Jan 23 00:06:38.041881 waagent[2117]: 2026-01-23T00:06:38.041856Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: c89cd1b0-8da2-4f16-b1fd-8e6c33f638f7 eTag: 519484515456769813 source: Fabric] Jan 23 00:06:38.042212 waagent[2117]: 2026-01-23T00:06:38.042183Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 23 00:06:38.042739 waagent[2117]: 2026-01-23T00:06:38.042708Z INFO ExtHandler Jan 23 00:06:38.042855 waagent[2117]: 2026-01-23T00:06:38.042832Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 23 00:06:38.046350 waagent[2117]: 2026-01-23T00:06:38.046324Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 00:06:38.170638 waagent[2117]: 2026-01-23T00:06:38.170574Z INFO ExtHandler Downloaded certificate {'thumbprint': '9C99916B8F50CA5728D2622D07BE08BB4206A761', 'hasPrivateKey': True} Jan 23 00:06:38.171214 waagent[2117]: 2026-01-23T00:06:38.171180Z INFO ExtHandler Fetch goal state completed Jan 23 00:06:38.183392 waagent[2117]: 2026-01-23T00:06:38.183352Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Jan 23 00:06:38.187255 waagent[2117]: 2026-01-23T00:06:38.187211Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2117 Jan 23 00:06:38.187468 waagent[2117]: 2026-01-23T00:06:38.187439Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 23 00:06:38.187851 waagent[2117]: 2026-01-23T00:06:38.187818Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 23 00:06:38.189091 waagent[2117]: 2026-01-23T00:06:38.189054Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] Jan 23 00:06:38.189495 waagent[2117]: 2026-01-23T00:06:38.189462Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 23 00:06:38.189760 waagent[2117]: 2026-01-23T00:06:38.189729Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Jan 23 00:06:38.190287 waagent[2117]: 2026-01-23T00:06:38.190255Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 23 00:06:38.289452 waagent[2117]: 2026-01-23T00:06:38.289357Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 23 00:06:38.289602 waagent[2117]: 2026-01-23T00:06:38.289569Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 23 00:06:38.294271 waagent[2117]: 2026-01-23T00:06:38.294238Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 23 00:06:38.298944 systemd[1]: Reload requested from client PID 2132 ('systemctl') (unit waagent.service)... Jan 23 00:06:38.299165 systemd[1]: Reloading... Jan 23 00:06:38.368695 zram_generator::config[2174]: No configuration found. Jan 23 00:06:38.519538 systemd[1]: Reloading finished in 220 ms. Jan 23 00:06:38.533703 waagent[2117]: 2026-01-23T00:06:38.533452Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 23 00:06:38.533703 waagent[2117]: 2026-01-23T00:06:38.533599Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 23 00:06:38.874917 waagent[2117]: 2026-01-23T00:06:38.874836Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 23 00:06:38.875207 waagent[2117]: 2026-01-23T00:06:38.875163Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 23 00:06:38.875863 waagent[2117]: 2026-01-23T00:06:38.875817Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 23 00:06:38.876196 waagent[2117]: 2026-01-23T00:06:38.876125Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jan 23 00:06:38.876695 waagent[2117]: 2026-01-23T00:06:38.876377Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 00:06:38.876695 waagent[2117]: 2026-01-23T00:06:38.876449Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 00:06:38.876695 waagent[2117]: 2026-01-23T00:06:38.876608Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 23 00:06:38.876877 waagent[2117]: 2026-01-23T00:06:38.876841Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 23 00:06:38.876910 waagent[2117]: 2026-01-23T00:06:38.876880Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 23 00:06:38.877168 waagent[2117]: 2026-01-23T00:06:38.877140Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 00:06:38.877298 waagent[2117]: 2026-01-23T00:06:38.877277Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 00:06:38.877476 waagent[2117]: 2026-01-23T00:06:38.877437Z INFO EnvHandler ExtHandler Configure routes Jan 23 00:06:38.877640 waagent[2117]: 2026-01-23T00:06:38.877612Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 23 00:06:38.877640 waagent[2117]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 23 00:06:38.877640 waagent[2117]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 23 00:06:38.877640 waagent[2117]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 23 00:06:38.877640 waagent[2117]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 23 00:06:38.877640 waagent[2117]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 00:06:38.877640 waagent[2117]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 00:06:38.877928 waagent[2117]: 2026-01-23T00:06:38.877876Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jan 23 00:06:38.877987 waagent[2117]: 2026-01-23T00:06:38.877918Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 23 00:06:38.878192 waagent[2117]: 2026-01-23T00:06:38.878162Z INFO EnvHandler ExtHandler Gateway:None Jan 23 00:06:38.878536 waagent[2117]: 2026-01-23T00:06:38.878506Z INFO EnvHandler ExtHandler Routes:None Jan 23 00:06:38.879238 waagent[2117]: 2026-01-23T00:06:38.879219Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 23 00:06:38.884506 waagent[2117]: 2026-01-23T00:06:38.884457Z INFO ExtHandler ExtHandler Jan 23 00:06:38.884553 waagent[2117]: 2026-01-23T00:06:38.884529Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 19f94ef8-d6e5-4726-955e-7262a6cb8147 correlation d4a23dbb-2b08-4725-98a8-76fb0f0779bd created: 2026-01-23T00:05:44.390800Z] Jan 23 00:06:38.884869 waagent[2117]: 2026-01-23T00:06:38.884833Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 23 00:06:38.885273 waagent[2117]: 2026-01-23T00:06:38.885244Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jan 23 00:06:38.920391 waagent[2117]: 2026-01-23T00:06:38.920327Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 23 00:06:38.920391 waagent[2117]: Try `iptables -h' or 'iptables --help' for more information.)
Jan 23 00:06:38.921699 waagent[2117]: 2026-01-23T00:06:38.920777Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: EB52B80E-2F36-4D75-8759-DC6BD8397B0B;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 23 00:06:39.000989 waagent[2117]: 2026-01-23T00:06:39.000916Z INFO MonitorHandler ExtHandler Network interfaces: Jan 23 00:06:39.000989 waagent[2117]: Executing ['ip', '-a', '-o', 'link']: Jan 23 00:06:39.000989 waagent[2117]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 23 00:06:39.000989 waagent[2117]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:d0:b3:18 brd ff:ff:ff:ff:ff:ff Jan 23 00:06:39.000989 waagent[2117]: 3: enP33107s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:d0:b3:18 brd ff:ff:ff:ff:ff:ff\ altname enP33107p0s2 Jan 23 00:06:39.000989 waagent[2117]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 23 00:06:39.000989 waagent[2117]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 23 00:06:39.000989 waagent[2117]: 2: eth0 inet 10.200.20.18/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 23 00:06:39.000989 waagent[2117]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 23 00:06:39.000989 waagent[2117]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 23 00:06:39.000989 waagent[2117]: 2: eth0 inet6 fe80::7eed:8dff:fed0:b318/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 23 00:06:39.045084 waagent[2117]: 2026-01-23T00:06:39.045042Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 23 00:06:39.045084 waagent[2117]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 23 00:06:39.045084 waagent[2117]: pkts bytes target prot opt in out source destination Jan 23 00:06:39.045084 waagent[2117]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 00:06:39.045084 waagent[2117]: pkts bytes target prot opt in out source destination Jan 23 00:06:39.045084 waagent[2117]: Chain OUTPUT (policy ACCEPT 2 packets, 304 bytes) Jan 23 00:06:39.045084 waagent[2117]: pkts bytes target prot opt in out source destination Jan 23 00:06:39.045084 waagent[2117]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 00:06:39.045084 waagent[2117]: 6 510 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 00:06:39.045084 waagent[2117]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 00:06:39.048086 waagent[2117]: 2026-01-23T00:06:39.047782Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 23 00:06:39.048086 waagent[2117]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 00:06:39.048086 waagent[2117]: pkts bytes target prot opt in out source destination Jan 23 00:06:39.048086 waagent[2117]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 00:06:39.048086 waagent[2117]: pkts bytes target prot opt in out source destination Jan 23 00:06:39.048086 waagent[2117]: Chain OUTPUT (policy ACCEPT 2 packets, 304 bytes) Jan 23 00:06:39.048086 waagent[2117]: pkts bytes target prot opt in out source destination Jan 23 00:06:39.048086 waagent[2117]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 00:06:39.048086 waagent[2117]: 10 1104 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 00:06:39.048086 waagent[2117]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 00:06:39.048086 waagent[2117]: 2026-01-23T00:06:39.048003Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 23 00:06:43.993067 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 23 00:06:43.995035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:06:44.101868 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:06:44.104950 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:06:44.246021 kubelet[2266]: E0123 00:06:44.245902 2266 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:06:44.249005 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:06:44.249243 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:06:44.249789 systemd[1]: kubelet.service: Consumed 111ms CPU time, 107.1M memory peak. Jan 23 00:06:54.493084 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 00:06:54.494448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:06:54.602932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 00:06:54.609139 (kubelet)[2280]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:06:54.710398 kubelet[2280]: E0123 00:06:54.710323 2280 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:06:54.712446 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:06:54.712563 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:06:54.713106 systemd[1]: kubelet.service: Consumed 178ms CPU time, 105.5M memory peak. Jan 23 00:06:56.190039 chronyd[1849]: Selected source PHC0 Jan 23 00:07:00.495024 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 00:07:00.496036 systemd[1]: Started sshd@0-10.200.20.18:22-10.200.16.10:46252.service - OpenSSH per-connection server daemon (10.200.16.10:46252). Jan 23 00:07:01.011092 sshd[2287]: Accepted publickey for core from 10.200.16.10 port 46252 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:07:01.012203 sshd-session[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:07:01.015652 systemd-logind[1876]: New session 3 of user core. Jan 23 00:07:01.027812 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 00:07:01.448200 systemd[1]: Started sshd@1-10.200.20.18:22-10.200.16.10:46256.service - OpenSSH per-connection server daemon (10.200.16.10:46256). 
Jan 23 00:07:01.937342 sshd[2293]: Accepted publickey for core from 10.200.16.10 port 46256 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:07:01.938951 sshd-session[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:07:01.942724 systemd-logind[1876]: New session 4 of user core. Jan 23 00:07:01.950819 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 00:07:02.288384 sshd[2296]: Connection closed by 10.200.16.10 port 46256 Jan 23 00:07:02.288289 sshd-session[2293]: pam_unix(sshd:session): session closed for user core Jan 23 00:07:02.291728 systemd[1]: sshd@1-10.200.20.18:22-10.200.16.10:46256.service: Deactivated successfully. Jan 23 00:07:02.293783 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 00:07:02.294791 systemd-logind[1876]: Session 4 logged out. Waiting for processes to exit. Jan 23 00:07:02.296337 systemd-logind[1876]: Removed session 4. Jan 23 00:07:02.377593 systemd[1]: Started sshd@2-10.200.20.18:22-10.200.16.10:46260.service - OpenSSH per-connection server daemon (10.200.16.10:46260). Jan 23 00:07:02.871044 sshd[2302]: Accepted publickey for core from 10.200.16.10 port 46260 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:07:02.872142 sshd-session[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:07:02.875607 systemd-logind[1876]: New session 5 of user core. Jan 23 00:07:02.882801 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 00:07:03.218888 sshd[2305]: Connection closed by 10.200.16.10 port 46260 Jan 23 00:07:03.218527 sshd-session[2302]: pam_unix(sshd:session): session closed for user core Jan 23 00:07:03.222264 systemd[1]: sshd@2-10.200.20.18:22-10.200.16.10:46260.service: Deactivated successfully. Jan 23 00:07:03.223976 systemd[1]: session-5.scope: Deactivated successfully.
Jan 23 00:07:03.226029 systemd-logind[1876]: Session 5 logged out. Waiting for processes to exit. Jan 23 00:07:03.227075 systemd-logind[1876]: Removed session 5. Jan 23 00:07:03.312885 systemd[1]: Started sshd@3-10.200.20.18:22-10.200.16.10:46268.service - OpenSSH per-connection server daemon (10.200.16.10:46268). Jan 23 00:07:03.804160 sshd[2311]: Accepted publickey for core from 10.200.16.10 port 46268 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:07:03.805679 sshd-session[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:07:03.809466 systemd-logind[1876]: New session 6 of user core. Jan 23 00:07:03.816796 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 00:07:04.156135 sshd[2314]: Connection closed by 10.200.16.10 port 46268 Jan 23 00:07:04.156852 sshd-session[2311]: pam_unix(sshd:session): session closed for user core Jan 23 00:07:04.161201 systemd-logind[1876]: Session 6 logged out. Waiting for processes to exit. Jan 23 00:07:04.161432 systemd[1]: sshd@3-10.200.20.18:22-10.200.16.10:46268.service: Deactivated successfully. Jan 23 00:07:04.164413 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 00:07:04.166533 systemd-logind[1876]: Removed session 6. Jan 23 00:07:04.248310 systemd[1]: Started sshd@4-10.200.20.18:22-10.200.16.10:46280.service - OpenSSH per-connection server daemon (10.200.16.10:46280). Jan 23 00:07:04.739716 sshd[2320]: Accepted publickey for core from 10.200.16.10 port 46280 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:07:04.740808 sshd-session[2320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:07:04.741580 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 00:07:04.744839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:07:04.747729 systemd-logind[1876]: New session 7 of user core. Jan 23 00:07:04.756841 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 23 00:07:04.850113 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:07:04.862158 (kubelet)[2332]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:07:04.944644 kubelet[2332]: E0123 00:07:04.944577 2332 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:07:04.946891 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:07:04.947006 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:07:04.948744 systemd[1]: kubelet.service: Consumed 110ms CPU time, 105.6M memory peak. Jan 23 00:07:05.323545 sudo[2339]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 00:07:05.323794 sudo[2339]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:07:05.341123 sudo[2339]: pam_unix(sudo:session): session closed for user root Jan 23 00:07:05.419459 sshd[2326]: Connection closed by 10.200.16.10 port 46280 Jan 23 00:07:05.418721 sshd-session[2320]: pam_unix(sshd:session): session closed for user core Jan 23 00:07:05.422249 systemd[1]: sshd@4-10.200.20.18:22-10.200.16.10:46280.service: Deactivated successfully. Jan 23 00:07:05.423642 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 00:07:05.425251 systemd-logind[1876]: Session 7 logged out. Waiting for processes to exit. Jan 23 00:07:05.426236 systemd-logind[1876]: Removed session 7. Jan 23 00:07:05.504706 systemd[1]: Started sshd@5-10.200.20.18:22-10.200.16.10:46292.service - OpenSSH per-connection server daemon (10.200.16.10:46292). 
Jan 23 00:07:05.963552 sshd[2345]: Accepted publickey for core from 10.200.16.10 port 46292 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:07:05.964355 sshd-session[2345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:07:05.968071 systemd-logind[1876]: New session 8 of user core. Jan 23 00:07:05.976012 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 00:07:06.221175 sudo[2350]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 00:07:06.221955 sudo[2350]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:07:06.256032 sudo[2350]: pam_unix(sudo:session): session closed for user root Jan 23 00:07:06.260542 sudo[2349]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 00:07:06.260811 sudo[2349]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:07:06.268856 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 00:07:06.300059 augenrules[2372]: No rules Jan 23 00:07:06.301340 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 00:07:06.301796 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 00:07:06.303880 sudo[2349]: pam_unix(sudo:session): session closed for user root Jan 23 00:07:06.380979 sshd[2348]: Connection closed by 10.200.16.10 port 46292 Jan 23 00:07:06.381715 sshd-session[2345]: pam_unix(sshd:session): session closed for user core Jan 23 00:07:06.384458 systemd[1]: sshd@5-10.200.20.18:22-10.200.16.10:46292.service: Deactivated successfully. Jan 23 00:07:06.386042 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 00:07:06.387345 systemd-logind[1876]: Session 8 logged out. Waiting for processes to exit. Jan 23 00:07:06.389129 systemd-logind[1876]: Removed session 8. 
Jan 23 00:07:06.472703 systemd[1]: Started sshd@6-10.200.20.18:22-10.200.16.10:46302.service - OpenSSH per-connection server daemon (10.200.16.10:46302). Jan 23 00:07:06.968694 sshd[2381]: Accepted publickey for core from 10.200.16.10 port 46302 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:07:06.969419 sshd-session[2381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:07:06.973029 systemd-logind[1876]: New session 9 of user core. Jan 23 00:07:06.982890 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 00:07:07.241923 sudo[2385]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 00:07:07.242136 sudo[2385]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:07:09.639868 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 00:07:09.648176 (dockerd)[2402]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 00:07:11.659707 dockerd[2402]: time="2026-01-23T00:07:11.659165102Z" level=info msg="Starting up" Jan 23 00:07:11.661493 dockerd[2402]: time="2026-01-23T00:07:11.661465319Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 00:07:11.670334 dockerd[2402]: time="2026-01-23T00:07:11.670278238Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 00:07:11.717981 systemd[1]: var-lib-docker-metacopy\x2dcheck1424956243-merged.mount: Deactivated successfully. Jan 23 00:07:11.723689 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 23 00:07:11.750388 dockerd[2402]: time="2026-01-23T00:07:11.750345554Z" level=info msg="Loading containers: start." 
Jan 23 00:07:11.836683 kernel: Initializing XFRM netlink socket Jan 23 00:07:12.374219 systemd-networkd[1486]: docker0: Link UP Jan 23 00:07:12.388788 dockerd[2402]: time="2026-01-23T00:07:12.388685774Z" level=info msg="Loading containers: done." Jan 23 00:07:12.407911 dockerd[2402]: time="2026-01-23T00:07:12.407835683Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 00:07:12.408100 dockerd[2402]: time="2026-01-23T00:07:12.407958431Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 00:07:12.408100 dockerd[2402]: time="2026-01-23T00:07:12.408051955Z" level=info msg="Initializing buildkit" Jan 23 00:07:12.457724 dockerd[2402]: time="2026-01-23T00:07:12.457679192Z" level=info msg="Completed buildkit initialization" Jan 23 00:07:12.463242 dockerd[2402]: time="2026-01-23T00:07:12.463198548Z" level=info msg="Daemon has completed initialization" Jan 23 00:07:12.463345 dockerd[2402]: time="2026-01-23T00:07:12.463254758Z" level=info msg="API listen on /run/docker.sock" Jan 23 00:07:12.463981 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 00:07:13.378757 containerd[1894]: time="2026-01-23T00:07:13.378710396Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 23 00:07:14.253572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1815054855.mount: Deactivated successfully. Jan 23 00:07:14.992887 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 00:07:14.996010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:07:15.247090 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 00:07:15.254948 (kubelet)[2671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:07:15.278477 kubelet[2671]: E0123 00:07:15.278423 2671 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:07:15.280616 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:07:15.280944 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:07:15.282731 systemd[1]: kubelet.service: Consumed 104ms CPU time, 104.8M memory peak. Jan 23 00:07:16.090378 containerd[1894]: time="2026-01-23T00:07:16.089709307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:16.092282 containerd[1894]: time="2026-01-23T00:07:16.092250633Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=27387281" Jan 23 00:07:16.095420 containerd[1894]: time="2026-01-23T00:07:16.095393341Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:16.100062 containerd[1894]: time="2026-01-23T00:07:16.100030337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:16.100742 containerd[1894]: time="2026-01-23T00:07:16.100721338Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id 
\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 2.721966581s" Jan 23 00:07:16.100856 containerd[1894]: time="2026-01-23T00:07:16.100840663Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Jan 23 00:07:16.102249 containerd[1894]: time="2026-01-23T00:07:16.102224418Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 23 00:07:17.325040 containerd[1894]: time="2026-01-23T00:07:17.324995256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:17.328899 containerd[1894]: time="2026-01-23T00:07:17.328865839Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23553081" Jan 23 00:07:17.332165 containerd[1894]: time="2026-01-23T00:07:17.332127928Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:17.336764 containerd[1894]: time="2026-01-23T00:07:17.336712410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:17.337190 containerd[1894]: time="2026-01-23T00:07:17.337043078Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.234774179s" Jan 23 00:07:17.337190 containerd[1894]: time="2026-01-23T00:07:17.337073287Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Jan 23 00:07:17.337596 containerd[1894]: time="2026-01-23T00:07:17.337580498Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 23 00:07:18.087306 update_engine[1881]: I20260123 00:07:18.086831 1881 update_attempter.cc:509] Updating boot flags... Jan 23 00:07:18.946098 containerd[1894]: time="2026-01-23T00:07:18.946040878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:18.948843 containerd[1894]: time="2026-01-23T00:07:18.948672232Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18298067" Jan 23 00:07:18.951732 containerd[1894]: time="2026-01-23T00:07:18.951706160Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:18.956519 containerd[1894]: time="2026-01-23T00:07:18.956481513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:18.957215 containerd[1894]: time="2026-01-23T00:07:18.956870119Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.619209186s" Jan 23 00:07:18.957215 containerd[1894]: time="2026-01-23T00:07:18.956896144Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Jan 23 00:07:18.957634 containerd[1894]: time="2026-01-23T00:07:18.957604883Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 23 00:07:19.908222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3696590162.mount: Deactivated successfully. Jan 23 00:07:20.169785 containerd[1894]: time="2026-01-23T00:07:20.169229156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:20.171854 containerd[1894]: time="2026-01-23T00:07:20.171830452Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258673" Jan 23 00:07:20.180178 containerd[1894]: time="2026-01-23T00:07:20.180129471Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:20.183755 containerd[1894]: time="2026-01-23T00:07:20.183715268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:20.184146 containerd[1894]: time="2026-01-23T00:07:20.183989582Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.226349099s" Jan 23 00:07:20.184146 containerd[1894]: time="2026-01-23T00:07:20.184022760Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Jan 23 00:07:20.184526 containerd[1894]: time="2026-01-23T00:07:20.184506329Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 23 00:07:20.793995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1941322037.mount: Deactivated successfully. Jan 23 00:07:21.657931 containerd[1894]: time="2026-01-23T00:07:21.657872691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:21.660462 containerd[1894]: time="2026-01-23T00:07:21.660277078Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Jan 23 00:07:21.663020 containerd[1894]: time="2026-01-23T00:07:21.662992288Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:21.667087 containerd[1894]: time="2026-01-23T00:07:21.667056112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:21.669027 containerd[1894]: time="2026-01-23T00:07:21.668989786Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.48440175s" Jan 23 00:07:21.669027 containerd[1894]: time="2026-01-23T00:07:21.669025756Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jan 23 00:07:21.670197 containerd[1894]: time="2026-01-23T00:07:21.670161174Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 00:07:22.195465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount22779922.mount: Deactivated successfully. Jan 23 00:07:22.215713 containerd[1894]: time="2026-01-23T00:07:22.215498943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:07:22.218592 containerd[1894]: time="2026-01-23T00:07:22.218442725Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 23 00:07:22.221489 containerd[1894]: time="2026-01-23T00:07:22.221460191Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:07:22.226821 containerd[1894]: time="2026-01-23T00:07:22.226764445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:07:22.227535 containerd[1894]: time="2026-01-23T00:07:22.227115711Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 556.841996ms" Jan 23 00:07:22.227535 containerd[1894]: time="2026-01-23T00:07:22.227145857Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 23 00:07:22.227834 containerd[1894]: time="2026-01-23T00:07:22.227812211Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 23 00:07:22.839051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2519150844.mount: Deactivated successfully. Jan 23 00:07:25.295187 containerd[1894]: time="2026-01-23T00:07:25.295129095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:25.298695 containerd[1894]: time="2026-01-23T00:07:25.298560399Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013651" Jan 23 00:07:25.492940 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 23 00:07:25.494289 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:07:25.582265 containerd[1894]: time="2026-01-23T00:07:25.581659942Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:25.592023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 00:07:25.601072 (kubelet)[2881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:07:25.795686 kubelet[2881]: E0123 00:07:25.795583 2881 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:07:25.797820 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:07:25.797931 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:07:25.798779 systemd[1]: kubelet.service: Consumed 109ms CPU time, 104.3M memory peak. Jan 23 00:07:26.233977 containerd[1894]: time="2026-01-23T00:07:26.233911281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:26.234679 containerd[1894]: time="2026-01-23T00:07:26.234628658Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 4.006787694s" Jan 23 00:07:26.234679 containerd[1894]: time="2026-01-23T00:07:26.234661659Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jan 23 00:07:29.019186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:07:29.019290 systemd[1]: kubelet.service: Consumed 109ms CPU time, 104.3M memory peak. 
Jan 23 00:07:29.020879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:07:29.048892 systemd[1]: Reload requested from client PID 2914 ('systemctl') (unit session-9.scope)... Jan 23 00:07:29.049011 systemd[1]: Reloading... Jan 23 00:07:29.136701 zram_generator::config[2957]: No configuration found. Jan 23 00:07:29.306454 systemd[1]: Reloading finished in 257 ms. Jan 23 00:07:29.349032 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 00:07:29.349244 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 00:07:29.349444 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:07:29.349480 systemd[1]: kubelet.service: Consumed 74ms CPU time, 95M memory peak. Jan 23 00:07:29.350743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:07:29.552841 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:07:29.559917 (kubelet)[3028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 00:07:29.586702 kubelet[3028]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 00:07:29.586702 kubelet[3028]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 00:07:29.586702 kubelet[3028]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 00:07:29.586702 kubelet[3028]: I0123 00:07:29.586283 3028 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 00:07:29.986313 kubelet[3028]: I0123 00:07:29.986203 3028 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 00:07:29.986313 kubelet[3028]: I0123 00:07:29.986234 3028 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 00:07:29.986824 kubelet[3028]: I0123 00:07:29.986803 3028 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 00:07:30.004693 kubelet[3028]: E0123 00:07:30.003290 3028 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 00:07:30.004693 kubelet[3028]: I0123 00:07:30.003813 3028 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 00:07:30.010121 kubelet[3028]: I0123 00:07:30.010102 3028 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 00:07:30.012936 kubelet[3028]: I0123 00:07:30.012910 3028 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 00:07:30.014019 kubelet[3028]: I0123 00:07:30.013977 3028 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 00:07:30.014232 kubelet[3028]: I0123 00:07:30.014106 3028 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-n-db2e6badfc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 00:07:30.014359 kubelet[3028]: I0123 00:07:30.014346 3028 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 
00:07:30.014405 kubelet[3028]: I0123 00:07:30.014398 3028 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 00:07:30.014591 kubelet[3028]: I0123 00:07:30.014575 3028 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:07:30.017645 kubelet[3028]: I0123 00:07:30.017621 3028 kubelet.go:480] "Attempting to sync node with API server" Jan 23 00:07:30.017762 kubelet[3028]: I0123 00:07:30.017750 3028 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 00:07:30.017827 kubelet[3028]: I0123 00:07:30.017820 3028 kubelet.go:386] "Adding apiserver pod source" Jan 23 00:07:30.017877 kubelet[3028]: I0123 00:07:30.017869 3028 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 00:07:30.019813 kubelet[3028]: E0123 00:07:30.019754 3028 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-n-db2e6badfc&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 00:07:30.021791 kubelet[3028]: E0123 00:07:30.021764 3028 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 00:07:30.021872 kubelet[3028]: I0123 00:07:30.021858 3028 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 00:07:30.022247 kubelet[3028]: I0123 00:07:30.022225 3028 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 
00:07:30.022296 kubelet[3028]: W0123 00:07:30.022272 3028 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 00:07:30.023929 kubelet[3028]: I0123 00:07:30.023910 3028 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 00:07:30.023987 kubelet[3028]: I0123 00:07:30.023945 3028 server.go:1289] "Started kubelet" Jan 23 00:07:30.025254 kubelet[3028]: I0123 00:07:30.025232 3028 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 00:07:30.027405 kubelet[3028]: E0123 00:07:30.026454 3028 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.18:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.18:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-n-db2e6badfc.188d337f32481fd1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-n-db2e6badfc,UID:ci-4459.2.2-n-db2e6badfc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-n-db2e6badfc,},FirstTimestamp:2026-01-23 00:07:30.023923665 +0000 UTC m=+0.460577724,LastTimestamp:2026-01-23 00:07:30.023923665 +0000 UTC m=+0.460577724,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-n-db2e6badfc,}" Jan 23 00:07:30.029338 kubelet[3028]: I0123 00:07:30.029313 3028 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 00:07:30.030010 kubelet[3028]: I0123 00:07:30.029990 3028 server.go:317] "Adding debug handlers to kubelet server" Jan 23 00:07:30.031717 kubelet[3028]: I0123 00:07:30.031632 3028 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 00:07:30.031952 kubelet[3028]: E0123 00:07:30.031928 3028 kubelet_node_status.go:466] "Error getting the current node from lister" 
err="node \"ci-4459.2.2-n-db2e6badfc\" not found" Jan 23 00:07:30.034012 kubelet[3028]: I0123 00:07:30.033942 3028 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 00:07:30.034219 kubelet[3028]: I0123 00:07:30.034199 3028 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 00:07:30.034417 kubelet[3028]: I0123 00:07:30.034398 3028 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 00:07:30.035070 kubelet[3028]: I0123 00:07:30.035050 3028 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 00:07:30.035133 kubelet[3028]: I0123 00:07:30.035120 3028 reconciler.go:26] "Reconciler: start to sync state" Jan 23 00:07:30.035951 kubelet[3028]: E0123 00:07:30.035910 3028 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-db2e6badfc?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="200ms" Jan 23 00:07:30.036141 kubelet[3028]: I0123 00:07:30.036115 3028 factory.go:223] Registration of the systemd container factory successfully Jan 23 00:07:30.036218 kubelet[3028]: I0123 00:07:30.036203 3028 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 00:07:30.037978 kubelet[3028]: E0123 00:07:30.037951 3028 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.CSIDriver" Jan 23 00:07:30.038588 kubelet[3028]: E0123 00:07:30.038560 3028 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 00:07:30.038966 kubelet[3028]: I0123 00:07:30.038938 3028 factory.go:223] Registration of the containerd container factory successfully Jan 23 00:07:30.050408 kubelet[3028]: I0123 00:07:30.050384 3028 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 00:07:30.050408 kubelet[3028]: I0123 00:07:30.050400 3028 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 00:07:30.050529 kubelet[3028]: I0123 00:07:30.050429 3028 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:07:30.088195 kubelet[3028]: I0123 00:07:30.088158 3028 policy_none.go:49] "None policy: Start" Jan 23 00:07:30.088195 kubelet[3028]: I0123 00:07:30.088192 3028 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 00:07:30.088195 kubelet[3028]: I0123 00:07:30.088205 3028 state_mem.go:35] "Initializing new in-memory state store" Jan 23 00:07:30.096184 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 00:07:30.105165 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 00:07:30.107807 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 23 00:07:30.118511 kubelet[3028]: E0123 00:07:30.118480 3028 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 00:07:30.119013 kubelet[3028]: I0123 00:07:30.118677 3028 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 00:07:30.119013 kubelet[3028]: I0123 00:07:30.118693 3028 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 00:07:30.119013 kubelet[3028]: I0123 00:07:30.118898 3028 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 00:07:30.123777 kubelet[3028]: E0123 00:07:30.123753 3028 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 00:07:30.123850 kubelet[3028]: E0123 00:07:30.123788 3028 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-n-db2e6badfc\" not found" Jan 23 00:07:30.124607 kubelet[3028]: I0123 00:07:30.124243 3028 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 00:07:30.125985 kubelet[3028]: I0123 00:07:30.125660 3028 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 00:07:30.125985 kubelet[3028]: I0123 00:07:30.125786 3028 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 00:07:30.125985 kubelet[3028]: I0123 00:07:30.125805 3028 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 00:07:30.125985 kubelet[3028]: I0123 00:07:30.125810 3028 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 00:07:30.125985 kubelet[3028]: E0123 00:07:30.125841 3028 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 23 00:07:30.128086 kubelet[3028]: E0123 00:07:30.128054 3028 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 00:07:30.220515 kubelet[3028]: I0123 00:07:30.220473 3028 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:30.220950 kubelet[3028]: E0123 00:07:30.220925 3028 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:30.238171 kubelet[3028]: E0123 00:07:30.237168 3028 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-db2e6badfc?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="400ms" Jan 23 00:07:30.238171 kubelet[3028]: I0123 00:07:30.237248 3028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c28b69518d2381c3fb1ee29f0c070b1a-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-n-db2e6badfc\" (UID: \"c28b69518d2381c3fb1ee29f0c070b1a\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:30.238171 kubelet[3028]: I0123 00:07:30.237265 3028 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c28b69518d2381c3fb1ee29f0c070b1a-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-db2e6badfc\" (UID: \"c28b69518d2381c3fb1ee29f0c070b1a\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:30.238171 kubelet[3028]: I0123 00:07:30.237277 3028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c28b69518d2381c3fb1ee29f0c070b1a-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-n-db2e6badfc\" (UID: \"c28b69518d2381c3fb1ee29f0c070b1a\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:30.238171 kubelet[3028]: I0123 00:07:30.237287 3028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c28b69518d2381c3fb1ee29f0c070b1a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-n-db2e6badfc\" (UID: \"c28b69518d2381c3fb1ee29f0c070b1a\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:30.238020 systemd[1]: Created slice kubepods-burstable-pod69941e2613c6cb6a878a420c9c5c8d41.slice - libcontainer container kubepods-burstable-pod69941e2613c6cb6a878a420c9c5c8d41.slice. 
Jan 23 00:07:30.238380 kubelet[3028]: I0123 00:07:30.237300 3028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/69941e2613c6cb6a878a420c9c5c8d41-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-n-db2e6badfc\" (UID: \"69941e2613c6cb6a878a420c9c5c8d41\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:30.238380 kubelet[3028]: I0123 00:07:30.237310 3028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/69941e2613c6cb6a878a420c9c5c8d41-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-n-db2e6badfc\" (UID: \"69941e2613c6cb6a878a420c9c5c8d41\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:30.238380 kubelet[3028]: I0123 00:07:30.237320 3028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/69941e2613c6cb6a878a420c9c5c8d41-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-n-db2e6badfc\" (UID: \"69941e2613c6cb6a878a420c9c5c8d41\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:30.238380 kubelet[3028]: I0123 00:07:30.237328 3028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c28b69518d2381c3fb1ee29f0c070b1a-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-db2e6badfc\" (UID: \"c28b69518d2381c3fb1ee29f0c070b1a\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:30.247513 kubelet[3028]: E0123 00:07:30.247483 3028 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-db2e6badfc\" not found" node="ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:30.251394 systemd[1]: Created slice 
kubepods-burstable-podc28b69518d2381c3fb1ee29f0c070b1a.slice - libcontainer container kubepods-burstable-podc28b69518d2381c3fb1ee29f0c070b1a.slice. Jan 23 00:07:30.262535 kubelet[3028]: E0123 00:07:30.262478 3028 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-db2e6badfc\" not found" node="ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:30.264682 systemd[1]: Created slice kubepods-burstable-pode805ee4f7c0df2d2a0a00a91ffa166fe.slice - libcontainer container kubepods-burstable-pode805ee4f7c0df2d2a0a00a91ffa166fe.slice. Jan 23 00:07:30.266684 kubelet[3028]: E0123 00:07:30.266650 3028 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-db2e6badfc\" not found" node="ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:30.338154 kubelet[3028]: I0123 00:07:30.338101 3028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e805ee4f7c0df2d2a0a00a91ffa166fe-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-n-db2e6badfc\" (UID: \"e805ee4f7c0df2d2a0a00a91ffa166fe\") " pod="kube-system/kube-scheduler-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:30.422970 kubelet[3028]: I0123 00:07:30.422943 3028 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:30.423277 kubelet[3028]: E0123 00:07:30.423248 3028 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:30.549152 containerd[1894]: time="2026-01-23T00:07:30.549109542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-n-db2e6badfc,Uid:69941e2613c6cb6a878a420c9c5c8d41,Namespace:kube-system,Attempt:0,}" Jan 23 00:07:30.564487 containerd[1894]: 
time="2026-01-23T00:07:30.564449082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-n-db2e6badfc,Uid:c28b69518d2381c3fb1ee29f0c070b1a,Namespace:kube-system,Attempt:0,}" Jan 23 00:07:30.568259 containerd[1894]: time="2026-01-23T00:07:30.568226372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-n-db2e6badfc,Uid:e805ee4f7c0df2d2a0a00a91ffa166fe,Namespace:kube-system,Attempt:0,}" Jan 23 00:07:30.605629 containerd[1894]: time="2026-01-23T00:07:30.605585296Z" level=info msg="connecting to shim 4fd067817e50b0196ea67c36f305841979301e46547c84e4d6c903bec7441da5" address="unix:///run/containerd/s/ce75639aa4eb4324951978410ad716e8d658600607458f9229681a38d762504d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:07:30.624790 systemd[1]: Started cri-containerd-4fd067817e50b0196ea67c36f305841979301e46547c84e4d6c903bec7441da5.scope - libcontainer container 4fd067817e50b0196ea67c36f305841979301e46547c84e4d6c903bec7441da5. 
Jan 23 00:07:30.638423 kubelet[3028]: E0123 00:07:30.638384 3028 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-db2e6badfc?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="800ms" Jan 23 00:07:30.658758 containerd[1894]: time="2026-01-23T00:07:30.658596077Z" level=info msg="connecting to shim 9902ef45ed928c9db0804171cd26b6d6b12238898c6933d0351f69d71343f2fd" address="unix:///run/containerd/s/3492eb6e833cae7e77162601f12fda3cab70253b488ed4f9cf01286372a7e52f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:07:30.660218 containerd[1894]: time="2026-01-23T00:07:30.660158485Z" level=info msg="connecting to shim 5206fbafbaa81afb6277bacc28b798efb106ce82af79c70f61a96463c0324562" address="unix:///run/containerd/s/3c1cf6523926c7901097023ab9b78366127d1b938e62ea8a2a63a679026c8cbc" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:07:30.661993 containerd[1894]: time="2026-01-23T00:07:30.661938990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-n-db2e6badfc,Uid:69941e2613c6cb6a878a420c9c5c8d41,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fd067817e50b0196ea67c36f305841979301e46547c84e4d6c903bec7441da5\"" Jan 23 00:07:30.672697 containerd[1894]: time="2026-01-23T00:07:30.672382729Z" level=info msg="CreateContainer within sandbox \"4fd067817e50b0196ea67c36f305841979301e46547c84e4d6c903bec7441da5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 00:07:30.696825 systemd[1]: Started cri-containerd-5206fbafbaa81afb6277bacc28b798efb106ce82af79c70f61a96463c0324562.scope - libcontainer container 5206fbafbaa81afb6277bacc28b798efb106ce82af79c70f61a96463c0324562. 
Jan 23 00:07:30.698613 systemd[1]: Started cri-containerd-9902ef45ed928c9db0804171cd26b6d6b12238898c6933d0351f69d71343f2fd.scope - libcontainer container 9902ef45ed928c9db0804171cd26b6d6b12238898c6933d0351f69d71343f2fd. Jan 23 00:07:30.825572 kubelet[3028]: I0123 00:07:30.825405 3028 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:30.826000 kubelet[3028]: E0123 00:07:30.825971 3028 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:31.055453 kubelet[3028]: E0123 00:07:31.055402 3028 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-n-db2e6badfc&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 00:07:31.148068 kubelet[3028]: E0123 00:07:31.147942 3028 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 00:07:31.439896 kubelet[3028]: E0123 00:07:31.439764 3028 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-db2e6badfc?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="1.6s" Jan 23 00:07:31.490462 containerd[1894]: time="2026-01-23T00:07:31.490360316Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-n-db2e6badfc,Uid:e805ee4f7c0df2d2a0a00a91ffa166fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"9902ef45ed928c9db0804171cd26b6d6b12238898c6933d0351f69d71343f2fd\"" Jan 23 00:07:31.493681 containerd[1894]: time="2026-01-23T00:07:31.493615914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-n-db2e6badfc,Uid:c28b69518d2381c3fb1ee29f0c070b1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5206fbafbaa81afb6277bacc28b798efb106ce82af79c70f61a96463c0324562\"" Jan 23 00:07:31.499261 containerd[1894]: time="2026-01-23T00:07:31.499219566Z" level=info msg="CreateContainer within sandbox \"9902ef45ed928c9db0804171cd26b6d6b12238898c6933d0351f69d71343f2fd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 00:07:31.505132 containerd[1894]: time="2026-01-23T00:07:31.505076003Z" level=info msg="CreateContainer within sandbox \"5206fbafbaa81afb6277bacc28b798efb106ce82af79c70f61a96463c0324562\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 00:07:31.505759 containerd[1894]: time="2026-01-23T00:07:31.505736531Z" level=info msg="Container 30f4892aa3092a70c62a771d53ba0d61503757806b105f922d9b3c26fe898b6f: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:07:31.532711 containerd[1894]: time="2026-01-23T00:07:31.532322856Z" level=info msg="Container 00fab06a9ff4af55c4e62bb72d1232b39d0958305436015759c8ce42a1da6409: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:07:31.540558 containerd[1894]: time="2026-01-23T00:07:31.540516514Z" level=info msg="CreateContainer within sandbox \"4fd067817e50b0196ea67c36f305841979301e46547c84e4d6c903bec7441da5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"30f4892aa3092a70c62a771d53ba0d61503757806b105f922d9b3c26fe898b6f\"" Jan 23 00:07:31.544535 containerd[1894]: time="2026-01-23T00:07:31.544501850Z" level=info msg="StartContainer for 
\"30f4892aa3092a70c62a771d53ba0d61503757806b105f922d9b3c26fe898b6f\"" Jan 23 00:07:31.545323 containerd[1894]: time="2026-01-23T00:07:31.545292695Z" level=info msg="connecting to shim 30f4892aa3092a70c62a771d53ba0d61503757806b105f922d9b3c26fe898b6f" address="unix:///run/containerd/s/ce75639aa4eb4324951978410ad716e8d658600607458f9229681a38d762504d" protocol=ttrpc version=3 Jan 23 00:07:31.553393 containerd[1894]: time="2026-01-23T00:07:31.553357900Z" level=info msg="CreateContainer within sandbox \"9902ef45ed928c9db0804171cd26b6d6b12238898c6933d0351f69d71343f2fd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"00fab06a9ff4af55c4e62bb72d1232b39d0958305436015759c8ce42a1da6409\"" Jan 23 00:07:31.555469 containerd[1894]: time="2026-01-23T00:07:31.555415295Z" level=info msg="StartContainer for \"00fab06a9ff4af55c4e62bb72d1232b39d0958305436015759c8ce42a1da6409\"" Jan 23 00:07:31.557564 containerd[1894]: time="2026-01-23T00:07:31.557298483Z" level=info msg="connecting to shim 00fab06a9ff4af55c4e62bb72d1232b39d0958305436015759c8ce42a1da6409" address="unix:///run/containerd/s/3492eb6e833cae7e77162601f12fda3cab70253b488ed4f9cf01286372a7e52f" protocol=ttrpc version=3 Jan 23 00:07:31.558578 containerd[1894]: time="2026-01-23T00:07:31.558548865Z" level=info msg="Container 2489121f698856ea34b6ddc544e7cdfb3eddcfe4f507849fec2ba58c48a9a9d7: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:07:31.560897 systemd[1]: Started cri-containerd-30f4892aa3092a70c62a771d53ba0d61503757806b105f922d9b3c26fe898b6f.scope - libcontainer container 30f4892aa3092a70c62a771d53ba0d61503757806b105f922d9b3c26fe898b6f. 
Jan 23 00:07:31.575634 containerd[1894]: time="2026-01-23T00:07:31.575590683Z" level=info msg="CreateContainer within sandbox \"5206fbafbaa81afb6277bacc28b798efb106ce82af79c70f61a96463c0324562\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2489121f698856ea34b6ddc544e7cdfb3eddcfe4f507849fec2ba58c48a9a9d7\"" Jan 23 00:07:31.576300 containerd[1894]: time="2026-01-23T00:07:31.576269380Z" level=info msg="StartContainer for \"2489121f698856ea34b6ddc544e7cdfb3eddcfe4f507849fec2ba58c48a9a9d7\"" Jan 23 00:07:31.576823 systemd[1]: Started cri-containerd-00fab06a9ff4af55c4e62bb72d1232b39d0958305436015759c8ce42a1da6409.scope - libcontainer container 00fab06a9ff4af55c4e62bb72d1232b39d0958305436015759c8ce42a1da6409. Jan 23 00:07:31.578034 containerd[1894]: time="2026-01-23T00:07:31.577942585Z" level=info msg="connecting to shim 2489121f698856ea34b6ddc544e7cdfb3eddcfe4f507849fec2ba58c48a9a9d7" address="unix:///run/containerd/s/3c1cf6523926c7901097023ab9b78366127d1b938e62ea8a2a63a679026c8cbc" protocol=ttrpc version=3 Jan 23 00:07:31.598981 systemd[1]: Started cri-containerd-2489121f698856ea34b6ddc544e7cdfb3eddcfe4f507849fec2ba58c48a9a9d7.scope - libcontainer container 2489121f698856ea34b6ddc544e7cdfb3eddcfe4f507849fec2ba58c48a9a9d7. 
Jan 23 00:07:31.628317 containerd[1894]: time="2026-01-23T00:07:31.628023147Z" level=info msg="StartContainer for \"30f4892aa3092a70c62a771d53ba0d61503757806b105f922d9b3c26fe898b6f\" returns successfully" Jan 23 00:07:31.629103 kubelet[3028]: I0123 00:07:31.629077 3028 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:31.631928 kubelet[3028]: E0123 00:07:31.631264 3028 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:31.632732 kubelet[3028]: E0123 00:07:31.632259 3028 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 00:07:31.648703 containerd[1894]: time="2026-01-23T00:07:31.648146638Z" level=info msg="StartContainer for \"00fab06a9ff4af55c4e62bb72d1232b39d0958305436015759c8ce42a1da6409\" returns successfully" Jan 23 00:07:31.669363 containerd[1894]: time="2026-01-23T00:07:31.669324503Z" level=info msg="StartContainer for \"2489121f698856ea34b6ddc544e7cdfb3eddcfe4f507849fec2ba58c48a9a9d7\" returns successfully" Jan 23 00:07:32.140700 kubelet[3028]: E0123 00:07:32.139851 3028 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-db2e6badfc\" not found" node="ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:32.147163 kubelet[3028]: E0123 00:07:32.146912 3028 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-db2e6badfc\" not found" node="ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:32.150522 kubelet[3028]: E0123 00:07:32.150491 3028 
kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-db2e6badfc\" not found" node="ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:33.031414 waagent[2117]: 2026-01-23T00:07:33.031357Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 23 00:07:33.039141 waagent[2117]: 2026-01-23T00:07:33.039087Z INFO ExtHandler Jan 23 00:07:33.039266 waagent[2117]: 2026-01-23T00:07:33.039188Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: ac3d05cc-262e-4d4a-9ba0-6d0ffbb78180 eTag: 10785353423144826861 source: Fabric] Jan 23 00:07:33.039492 waagent[2117]: 2026-01-23T00:07:33.039451Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 23 00:07:33.040872 waagent[2117]: 2026-01-23T00:07:33.040826Z INFO ExtHandler Jan 23 00:07:33.040952 waagent[2117]: 2026-01-23T00:07:33.040916Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 23 00:07:33.090585 waagent[2117]: 2026-01-23T00:07:33.090523Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 00:07:33.093085 kubelet[3028]: E0123 00:07:33.093046 3028 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.2-n-db2e6badfc\" not found" node="ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:33.149953 waagent[2117]: 2026-01-23T00:07:33.149817Z INFO ExtHandler Downloaded certificate {'thumbprint': '9C99916B8F50CA5728D2622D07BE08BB4206A761', 'hasPrivateKey': True} Jan 23 00:07:33.151076 waagent[2117]: 2026-01-23T00:07:33.150989Z INFO ExtHandler Fetch goal state completed Jan 23 00:07:33.152049 waagent[2117]: 2026-01-23T00:07:33.151945Z INFO ExtHandler ExtHandler Jan 23 00:07:33.152049 waagent[2117]: 2026-01-23T00:07:33.152031Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 903e9818-531c-4ad6-a948-dff63d17a111 correlation 
d4a23dbb-2b08-4725-98a8-76fb0f0779bd created: 2026-01-23T00:07:24.313393Z] Jan 23 00:07:33.153379 waagent[2117]: 2026-01-23T00:07:33.153278Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 23 00:07:33.154681 kubelet[3028]: E0123 00:07:33.153972 3028 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-db2e6badfc\" not found" node="ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:33.154933 waagent[2117]: 2026-01-23T00:07:33.154320Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 2 ms] Jan 23 00:07:33.155222 kubelet[3028]: E0123 00:07:33.155202 3028 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-db2e6badfc\" not found" node="ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:33.235625 kubelet[3028]: I0123 00:07:33.235426 3028 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:33.246858 kubelet[3028]: I0123 00:07:33.246830 3028 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:33.332466 kubelet[3028]: I0123 00:07:33.332359 3028 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:33.339697 kubelet[3028]: E0123 00:07:33.339557 3028 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-n-db2e6badfc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:33.339697 kubelet[3028]: I0123 00:07:33.339594 3028 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:33.341249 kubelet[3028]: E0123 00:07:33.341218 3028 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ci-4459.2.2-n-db2e6badfc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:33.341249 kubelet[3028]: I0123 00:07:33.341245 3028 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:33.342600 kubelet[3028]: E0123 00:07:33.342564 3028 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-n-db2e6badfc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:34.022409 kubelet[3028]: I0123 00:07:34.022364 3028 apiserver.go:52] "Watching apiserver" Jan 23 00:07:34.035163 kubelet[3028]: I0123 00:07:34.035124 3028 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 00:07:34.152129 kubelet[3028]: I0123 00:07:34.152097 3028 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:34.159453 kubelet[3028]: I0123 00:07:34.159340 3028 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 00:07:35.120563 systemd[1]: Reload requested from client PID 3312 ('systemctl') (unit session-9.scope)... Jan 23 00:07:35.120580 systemd[1]: Reloading... Jan 23 00:07:35.202715 zram_generator::config[3359]: No configuration found. Jan 23 00:07:35.371356 systemd[1]: Reloading finished in 250 ms. Jan 23 00:07:35.390289 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:07:35.401734 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 00:07:35.401956 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 00:07:35.402012 systemd[1]: kubelet.service: Consumed 720ms CPU time, 126.7M memory peak. Jan 23 00:07:35.403556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:07:35.507810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:07:35.515132 (kubelet)[3423]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 00:07:35.545576 kubelet[3423]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 00:07:35.545576 kubelet[3423]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 00:07:35.545576 kubelet[3423]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 00:07:35.545576 kubelet[3423]: I0123 00:07:35.544573 3423 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 00:07:35.551934 kubelet[3423]: I0123 00:07:35.551900 3423 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 00:07:35.552074 kubelet[3423]: I0123 00:07:35.552064 3423 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 00:07:35.552259 kubelet[3423]: I0123 00:07:35.552244 3423 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 00:07:35.553274 kubelet[3423]: I0123 00:07:35.553250 3423 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 00:07:35.555075 kubelet[3423]: I0123 00:07:35.555041 3423 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 00:07:35.558402 kubelet[3423]: I0123 00:07:35.558389 3423 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 00:07:35.560893 kubelet[3423]: I0123 00:07:35.560874 3423 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 00:07:35.561246 kubelet[3423]: I0123 00:07:35.561214 3423 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 00:07:35.561609 kubelet[3423]: I0123 00:07:35.561331 3423 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-n-db2e6badfc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 00:07:35.561609 kubelet[3423]: I0123 00:07:35.561608 3423 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 
00:07:35.561609 kubelet[3423]: I0123 00:07:35.561617 3423 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 00:07:35.561770 kubelet[3423]: I0123 00:07:35.561659 3423 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:07:35.561806 kubelet[3423]: I0123 00:07:35.561792 3423 kubelet.go:480] "Attempting to sync node with API server" Jan 23 00:07:35.561827 kubelet[3423]: I0123 00:07:35.561808 3423 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 00:07:35.561844 kubelet[3423]: I0123 00:07:35.561829 3423 kubelet.go:386] "Adding apiserver pod source" Jan 23 00:07:35.561844 kubelet[3423]: I0123 00:07:35.561842 3423 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 00:07:35.567647 kubelet[3423]: I0123 00:07:35.567625 3423 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 00:07:35.568415 kubelet[3423]: I0123 00:07:35.568396 3423 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 00:07:35.572427 kubelet[3423]: I0123 00:07:35.572235 3423 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 00:07:35.572575 kubelet[3423]: I0123 00:07:35.572553 3423 server.go:1289] "Started kubelet" Jan 23 00:07:35.574127 kubelet[3423]: I0123 00:07:35.574112 3423 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 00:07:35.577596 kubelet[3423]: I0123 00:07:35.577474 3423 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 00:07:35.578603 kubelet[3423]: I0123 00:07:35.578573 3423 server.go:317] "Adding debug handlers to kubelet server" Jan 23 00:07:35.581112 kubelet[3423]: I0123 00:07:35.581025 3423 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 00:07:35.581795 kubelet[3423]: I0123 00:07:35.581209 3423 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 00:07:35.581795 kubelet[3423]: I0123 00:07:35.581385 3423 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 00:07:35.583188 kubelet[3423]: E0123 00:07:35.583166 3423 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 00:07:35.584599 kubelet[3423]: I0123 00:07:35.584571 3423 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 00:07:35.585138 kubelet[3423]: I0123 00:07:35.584642 3423 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 00:07:35.585138 kubelet[3423]: I0123 00:07:35.584837 3423 reconciler.go:26] "Reconciler: start to sync state" Jan 23 00:07:35.586420 kubelet[3423]: I0123 00:07:35.586231 3423 factory.go:223] Registration of the systemd container factory successfully Jan 23 00:07:35.586725 kubelet[3423]: I0123 00:07:35.586582 3423 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 00:07:35.589325 kubelet[3423]: I0123 00:07:35.589258 3423 factory.go:223] Registration of the containerd container factory successfully Jan 23 00:07:35.590947 kubelet[3423]: I0123 00:07:35.590913 3423 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 00:07:35.591679 kubelet[3423]: I0123 00:07:35.591617 3423 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 23 00:07:35.591679 kubelet[3423]: I0123 00:07:35.591639 3423 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 00:07:35.591679 kubelet[3423]: I0123 00:07:35.591656 3423 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 00:07:35.591679 kubelet[3423]: I0123 00:07:35.591660 3423 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 00:07:35.591792 kubelet[3423]: E0123 00:07:35.591713 3423 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 00:07:35.627635 kubelet[3423]: I0123 00:07:35.627538 3423 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 00:07:35.627635 kubelet[3423]: I0123 00:07:35.627557 3423 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 00:07:35.627635 kubelet[3423]: I0123 00:07:35.627578 3423 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:07:35.628149 kubelet[3423]: I0123 00:07:35.628128 3423 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 00:07:35.628149 kubelet[3423]: I0123 00:07:35.628146 3423 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 00:07:35.628218 kubelet[3423]: I0123 00:07:35.628160 3423 policy_none.go:49] "None policy: Start" Jan 23 00:07:35.628218 kubelet[3423]: I0123 00:07:35.628170 3423 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 00:07:35.628218 kubelet[3423]: I0123 00:07:35.628178 3423 state_mem.go:35] "Initializing new in-memory state store" Jan 23 00:07:35.628271 kubelet[3423]: I0123 00:07:35.628246 3423 state_mem.go:75] "Updated machine memory state" Jan 23 00:07:35.631389 kubelet[3423]: E0123 00:07:35.631370 3423 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 00:07:35.632429 kubelet[3423]: I0123 
00:07:35.631916 3423 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 00:07:35.632429 kubelet[3423]: I0123 00:07:35.631935 3423 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 00:07:35.632429 kubelet[3423]: I0123 00:07:35.632145 3423 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 00:07:35.635254 kubelet[3423]: E0123 00:07:35.635238 3423 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 00:07:35.692372 kubelet[3423]: I0123 00:07:35.692333 3423 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:35.693005 kubelet[3423]: I0123 00:07:35.692709 3423 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:35.693141 kubelet[3423]: I0123 00:07:35.692850 3423 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:35.703559 kubelet[3423]: I0123 00:07:35.703540 3423 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 00:07:35.703961 kubelet[3423]: I0123 00:07:35.703554 3423 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 00:07:35.704099 kubelet[3423]: I0123 00:07:35.703612 3423 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 00:07:35.704152 kubelet[3423]: E0123 00:07:35.704119 3423 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4459.2.2-n-db2e6badfc\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:35.738738 kubelet[3423]: I0123 00:07:35.738653 3423 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:35.750334 kubelet[3423]: I0123 00:07:35.750298 3423 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:35.750469 kubelet[3423]: I0123 00:07:35.750380 3423 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:35.786449 kubelet[3423]: I0123 00:07:35.786404 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c28b69518d2381c3fb1ee29f0c070b1a-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-db2e6badfc\" (UID: \"c28b69518d2381c3fb1ee29f0c070b1a\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:35.786449 kubelet[3423]: I0123 00:07:35.786444 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c28b69518d2381c3fb1ee29f0c070b1a-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-n-db2e6badfc\" (UID: \"c28b69518d2381c3fb1ee29f0c070b1a\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:35.786449 kubelet[3423]: I0123 00:07:35.786461 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/69941e2613c6cb6a878a420c9c5c8d41-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-n-db2e6badfc\" (UID: \"69941e2613c6cb6a878a420c9c5c8d41\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:35.786449 kubelet[3423]: I0123 00:07:35.786472 3423 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/69941e2613c6cb6a878a420c9c5c8d41-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-n-db2e6badfc\" (UID: \"69941e2613c6cb6a878a420c9c5c8d41\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:35.786449 kubelet[3423]: I0123 00:07:35.786486 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c28b69518d2381c3fb1ee29f0c070b1a-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-db2e6badfc\" (UID: \"c28b69518d2381c3fb1ee29f0c070b1a\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:35.786866 kubelet[3423]: I0123 00:07:35.786496 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c28b69518d2381c3fb1ee29f0c070b1a-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-n-db2e6badfc\" (UID: \"c28b69518d2381c3fb1ee29f0c070b1a\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:35.786866 kubelet[3423]: I0123 00:07:35.786506 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c28b69518d2381c3fb1ee29f0c070b1a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-n-db2e6badfc\" (UID: \"c28b69518d2381c3fb1ee29f0c070b1a\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:35.786866 kubelet[3423]: I0123 00:07:35.786520 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e805ee4f7c0df2d2a0a00a91ffa166fe-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-n-db2e6badfc\" (UID: 
\"e805ee4f7c0df2d2a0a00a91ffa166fe\") " pod="kube-system/kube-scheduler-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:35.786866 kubelet[3423]: I0123 00:07:35.786536 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/69941e2613c6cb6a878a420c9c5c8d41-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-n-db2e6badfc\" (UID: \"69941e2613c6cb6a878a420c9c5c8d41\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:36.206018 sudo[3459]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 23 00:07:36.206613 sudo[3459]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 23 00:07:36.448305 sudo[3459]: pam_unix(sudo:session): session closed for user root Jan 23 00:07:36.568485 kubelet[3423]: I0123 00:07:36.568384 3423 apiserver.go:52] "Watching apiserver" Jan 23 00:07:36.585006 kubelet[3423]: I0123 00:07:36.584979 3423 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 00:07:36.617170 kubelet[3423]: I0123 00:07:36.617125 3423 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:36.631980 kubelet[3423]: I0123 00:07:36.630481 3423 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 00:07:36.632111 kubelet[3423]: E0123 00:07:36.632048 3423 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-n-db2e6badfc\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-n-db2e6badfc" Jan 23 00:07:36.660682 kubelet[3423]: I0123 00:07:36.659683 3423 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.2-n-db2e6badfc" podStartSLOduration=1.659657081 
podStartE2EDuration="1.659657081s" podCreationTimestamp="2026-01-23 00:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:07:36.646051975 +0000 UTC m=+1.127319228" watchObservedRunningTime="2026-01-23 00:07:36.659657081 +0000 UTC m=+1.140924334" Jan 23 00:07:36.673678 kubelet[3423]: I0123 00:07:36.673624 3423 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.2-n-db2e6badfc" podStartSLOduration=2.673595625 podStartE2EDuration="2.673595625s" podCreationTimestamp="2026-01-23 00:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:07:36.660269065 +0000 UTC m=+1.141536326" watchObservedRunningTime="2026-01-23 00:07:36.673595625 +0000 UTC m=+1.154862878" Jan 23 00:07:36.842366 kubelet[3423]: I0123 00:07:36.690489 3423 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-db2e6badfc" podStartSLOduration=1.690474445 podStartE2EDuration="1.690474445s" podCreationTimestamp="2026-01-23 00:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:07:36.676049106 +0000 UTC m=+1.157316359" watchObservedRunningTime="2026-01-23 00:07:36.690474445 +0000 UTC m=+1.171741698" Jan 23 00:07:37.583866 sudo[2385]: pam_unix(sudo:session): session closed for user root Jan 23 00:07:37.660519 sshd[2384]: Connection closed by 10.200.16.10 port 46302 Jan 23 00:07:37.661078 sshd-session[2381]: pam_unix(sshd:session): session closed for user core Jan 23 00:07:37.664254 systemd[1]: sshd@6-10.200.20.18:22-10.200.16.10:46302.service: Deactivated successfully. Jan 23 00:07:37.665938 systemd[1]: session-9.scope: Deactivated successfully. 
Jan 23 00:07:37.666136 systemd[1]: session-9.scope: Consumed 3.575s CPU time, 263.1M memory peak. Jan 23 00:07:37.667179 systemd-logind[1876]: Session 9 logged out. Waiting for processes to exit. Jan 23 00:07:37.668618 systemd-logind[1876]: Removed session 9. Jan 23 00:07:41.414577 kubelet[3423]: I0123 00:07:41.414544 3423 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 00:07:41.415869 containerd[1894]: time="2026-01-23T00:07:41.415774920Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 00:07:41.416157 kubelet[3423]: I0123 00:07:41.416131 3423 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 00:07:42.339557 systemd[1]: Created slice kubepods-burstable-pod56470302_2682_4296_a9e7_7f2ee55dd4de.slice - libcontainer container kubepods-burstable-pod56470302_2682_4296_a9e7_7f2ee55dd4de.slice. Jan 23 00:07:42.357315 systemd[1]: Created slice kubepods-besteffort-pod54d33f20_04a9_4e67_ade3_969a8e1219ba.slice - libcontainer container kubepods-besteffort-pod54d33f20_04a9_4e67_ade3_969a8e1219ba.slice. 
Jan 23 00:07:42.429251 kubelet[3423]: I0123 00:07:42.429210 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-etc-cni-netd\") pod \"cilium-cdccg\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " pod="kube-system/cilium-cdccg" Jan 23 00:07:42.430059 kubelet[3423]: I0123 00:07:42.429316 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-host-proc-sys-net\") pod \"cilium-cdccg\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " pod="kube-system/cilium-cdccg" Jan 23 00:07:42.430059 kubelet[3423]: I0123 00:07:42.429333 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56470302-2682-4296-a9e7-7f2ee55dd4de-hubble-tls\") pod \"cilium-cdccg\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " pod="kube-system/cilium-cdccg" Jan 23 00:07:42.430059 kubelet[3423]: I0123 00:07:42.429346 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-hostproc\") pod \"cilium-cdccg\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " pod="kube-system/cilium-cdccg" Jan 23 00:07:42.430059 kubelet[3423]: I0123 00:07:42.429354 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-cilium-cgroup\") pod \"cilium-cdccg\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " pod="kube-system/cilium-cdccg" Jan 23 00:07:42.430059 kubelet[3423]: I0123 00:07:42.429363 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-lib-modules\") pod \"cilium-cdccg\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " pod="kube-system/cilium-cdccg" Jan 23 00:07:42.430059 kubelet[3423]: I0123 00:07:42.429791 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqhk4\" (UniqueName: \"kubernetes.io/projected/56470302-2682-4296-a9e7-7f2ee55dd4de-kube-api-access-fqhk4\") pod \"cilium-cdccg\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " pod="kube-system/cilium-cdccg" Jan 23 00:07:42.430203 kubelet[3423]: I0123 00:07:42.429826 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54d33f20-04a9-4e67-ade3-969a8e1219ba-lib-modules\") pod \"kube-proxy-nxj2s\" (UID: \"54d33f20-04a9-4e67-ade3-969a8e1219ba\") " pod="kube-system/kube-proxy-nxj2s" Jan 23 00:07:42.430203 kubelet[3423]: I0123 00:07:42.429842 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-cni-path\") pod \"cilium-cdccg\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " pod="kube-system/cilium-cdccg" Jan 23 00:07:42.430203 kubelet[3423]: I0123 00:07:42.429856 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56470302-2682-4296-a9e7-7f2ee55dd4de-clustermesh-secrets\") pod \"cilium-cdccg\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " pod="kube-system/cilium-cdccg" Jan 23 00:07:42.430203 kubelet[3423]: I0123 00:07:42.429869 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/56470302-2682-4296-a9e7-7f2ee55dd4de-cilium-config-path\") pod \"cilium-cdccg\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " pod="kube-system/cilium-cdccg" Jan 23 00:07:42.430203 kubelet[3423]: I0123 00:07:42.429879 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/54d33f20-04a9-4e67-ade3-969a8e1219ba-kube-proxy\") pod \"kube-proxy-nxj2s\" (UID: \"54d33f20-04a9-4e67-ade3-969a8e1219ba\") " pod="kube-system/kube-proxy-nxj2s" Jan 23 00:07:42.430274 kubelet[3423]: I0123 00:07:42.429887 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9dcx\" (UniqueName: \"kubernetes.io/projected/54d33f20-04a9-4e67-ade3-969a8e1219ba-kube-api-access-x9dcx\") pod \"kube-proxy-nxj2s\" (UID: \"54d33f20-04a9-4e67-ade3-969a8e1219ba\") " pod="kube-system/kube-proxy-nxj2s" Jan 23 00:07:42.430274 kubelet[3423]: I0123 00:07:42.429904 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-xtables-lock\") pod \"cilium-cdccg\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " pod="kube-system/cilium-cdccg" Jan 23 00:07:42.430274 kubelet[3423]: I0123 00:07:42.429919 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-host-proc-sys-kernel\") pod \"cilium-cdccg\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " pod="kube-system/cilium-cdccg" Jan 23 00:07:42.430274 kubelet[3423]: I0123 00:07:42.429928 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54d33f20-04a9-4e67-ade3-969a8e1219ba-xtables-lock\") pod 
\"kube-proxy-nxj2s\" (UID: \"54d33f20-04a9-4e67-ade3-969a8e1219ba\") " pod="kube-system/kube-proxy-nxj2s" Jan 23 00:07:42.430274 kubelet[3423]: I0123 00:07:42.429983 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-cilium-run\") pod \"cilium-cdccg\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " pod="kube-system/cilium-cdccg" Jan 23 00:07:42.430274 kubelet[3423]: I0123 00:07:42.430025 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-bpf-maps\") pod \"cilium-cdccg\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " pod="kube-system/cilium-cdccg" Jan 23 00:07:42.645049 systemd[1]: Created slice kubepods-besteffort-pod59ed3a82_3511_4c83_ab8e_0c32a4f9ee55.slice - libcontainer container kubepods-besteffort-pod59ed3a82_3511_4c83_ab8e_0c32a4f9ee55.slice. 
Jan 23 00:07:42.651187 containerd[1894]: time="2026-01-23T00:07:42.651146023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cdccg,Uid:56470302-2682-4296-a9e7-7f2ee55dd4de,Namespace:kube-system,Attempt:0,}" Jan 23 00:07:42.666960 containerd[1894]: time="2026-01-23T00:07:42.666916279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nxj2s,Uid:54d33f20-04a9-4e67-ade3-969a8e1219ba,Namespace:kube-system,Attempt:0,}" Jan 23 00:07:42.704379 containerd[1894]: time="2026-01-23T00:07:42.704273757Z" level=info msg="connecting to shim 5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340" address="unix:///run/containerd/s/abf6602fd0781e533288e55aa6fd1aff40725277d741fbfb4fd95937b8a93d2c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:07:42.713116 containerd[1894]: time="2026-01-23T00:07:42.712859361Z" level=info msg="connecting to shim 7bbfcf449fa4d7d1b26d11cbeac4ad417c1ff6de7a8b6e187fa4b30822951462" address="unix:///run/containerd/s/f80612d9f92fe2323d5d610a029299d8a7296fe3240eba22126b3b1d43c9f785" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:07:42.721011 systemd[1]: Started cri-containerd-5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340.scope - libcontainer container 5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340. 
Jan 23 00:07:42.733192 kubelet[3423]: I0123 00:07:42.733161 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59ed3a82-3511-4c83-ab8e-0c32a4f9ee55-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-v6x5j\" (UID: \"59ed3a82-3511-4c83-ab8e-0c32a4f9ee55\") " pod="kube-system/cilium-operator-6c4d7847fc-v6x5j" Jan 23 00:07:42.733391 kubelet[3423]: I0123 00:07:42.733377 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj57v\" (UniqueName: \"kubernetes.io/projected/59ed3a82-3511-4c83-ab8e-0c32a4f9ee55-kube-api-access-wj57v\") pod \"cilium-operator-6c4d7847fc-v6x5j\" (UID: \"59ed3a82-3511-4c83-ab8e-0c32a4f9ee55\") " pod="kube-system/cilium-operator-6c4d7847fc-v6x5j" Jan 23 00:07:42.737822 systemd[1]: Started cri-containerd-7bbfcf449fa4d7d1b26d11cbeac4ad417c1ff6de7a8b6e187fa4b30822951462.scope - libcontainer container 7bbfcf449fa4d7d1b26d11cbeac4ad417c1ff6de7a8b6e187fa4b30822951462. 
Jan 23 00:07:42.757735 containerd[1894]: time="2026-01-23T00:07:42.757674750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cdccg,Uid:56470302-2682-4296-a9e7-7f2ee55dd4de,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340\"" Jan 23 00:07:42.761579 containerd[1894]: time="2026-01-23T00:07:42.761404562Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 23 00:07:42.775403 containerd[1894]: time="2026-01-23T00:07:42.775309976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nxj2s,Uid:54d33f20-04a9-4e67-ade3-969a8e1219ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bbfcf449fa4d7d1b26d11cbeac4ad417c1ff6de7a8b6e187fa4b30822951462\"" Jan 23 00:07:42.784271 containerd[1894]: time="2026-01-23T00:07:42.784191703Z" level=info msg="CreateContainer within sandbox \"7bbfcf449fa4d7d1b26d11cbeac4ad417c1ff6de7a8b6e187fa4b30822951462\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 00:07:42.802029 containerd[1894]: time="2026-01-23T00:07:42.801990744Z" level=info msg="Container d5666443f53864f41286930af88046d18c720989efe92b79fc768e1926dc5b1e: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:07:42.819096 containerd[1894]: time="2026-01-23T00:07:42.819015721Z" level=info msg="CreateContainer within sandbox \"7bbfcf449fa4d7d1b26d11cbeac4ad417c1ff6de7a8b6e187fa4b30822951462\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d5666443f53864f41286930af88046d18c720989efe92b79fc768e1926dc5b1e\"" Jan 23 00:07:42.821038 containerd[1894]: time="2026-01-23T00:07:42.820964078Z" level=info msg="StartContainer for \"d5666443f53864f41286930af88046d18c720989efe92b79fc768e1926dc5b1e\"" Jan 23 00:07:42.822431 containerd[1894]: time="2026-01-23T00:07:42.822352645Z" level=info msg="connecting to shim 
d5666443f53864f41286930af88046d18c720989efe92b79fc768e1926dc5b1e" address="unix:///run/containerd/s/f80612d9f92fe2323d5d610a029299d8a7296fe3240eba22126b3b1d43c9f785" protocol=ttrpc version=3 Jan 23 00:07:42.838894 systemd[1]: Started cri-containerd-d5666443f53864f41286930af88046d18c720989efe92b79fc768e1926dc5b1e.scope - libcontainer container d5666443f53864f41286930af88046d18c720989efe92b79fc768e1926dc5b1e. Jan 23 00:07:42.895303 containerd[1894]: time="2026-01-23T00:07:42.895261883Z" level=info msg="StartContainer for \"d5666443f53864f41286930af88046d18c720989efe92b79fc768e1926dc5b1e\" returns successfully" Jan 23 00:07:42.951892 containerd[1894]: time="2026-01-23T00:07:42.951776327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-v6x5j,Uid:59ed3a82-3511-4c83-ab8e-0c32a4f9ee55,Namespace:kube-system,Attempt:0,}" Jan 23 00:07:42.986516 containerd[1894]: time="2026-01-23T00:07:42.985973128Z" level=info msg="connecting to shim 5bbcd02d6949e84a55d8236264c17d3e18149e5d86666f45afdc02d73f1002d4" address="unix:///run/containerd/s/3049bc5b2de50344a78a3b329186da362e5c9f210209683ee40f988306c9be1e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:07:43.012847 systemd[1]: Started cri-containerd-5bbcd02d6949e84a55d8236264c17d3e18149e5d86666f45afdc02d73f1002d4.scope - libcontainer container 5bbcd02d6949e84a55d8236264c17d3e18149e5d86666f45afdc02d73f1002d4. 
Jan 23 00:07:43.053999 containerd[1894]: time="2026-01-23T00:07:43.053950530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-v6x5j,Uid:59ed3a82-3511-4c83-ab8e-0c32a4f9ee55,Namespace:kube-system,Attempt:0,} returns sandbox id \"5bbcd02d6949e84a55d8236264c17d3e18149e5d86666f45afdc02d73f1002d4\"" Jan 23 00:07:46.104441 kubelet[3423]: I0123 00:07:46.103896 3423 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nxj2s" podStartSLOduration=4.103880314 podStartE2EDuration="4.103880314s" podCreationTimestamp="2026-01-23 00:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:07:43.641131622 +0000 UTC m=+8.122398883" watchObservedRunningTime="2026-01-23 00:07:46.103880314 +0000 UTC m=+10.585147647" Jan 23 00:07:47.671208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount933040664.mount: Deactivated successfully. 
Jan 23 00:07:49.092126 containerd[1894]: time="2026-01-23T00:07:49.091764523Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:49.095727 containerd[1894]: time="2026-01-23T00:07:49.095499032Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 23 00:07:49.099806 containerd[1894]: time="2026-01-23T00:07:49.099742919Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:49.100955 containerd[1894]: time="2026-01-23T00:07:49.100924209Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.338955689s" Jan 23 00:07:49.101155 containerd[1894]: time="2026-01-23T00:07:49.101064566Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 23 00:07:49.102081 containerd[1894]: time="2026-01-23T00:07:49.102055489Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 23 00:07:49.113151 containerd[1894]: time="2026-01-23T00:07:49.112933196Z" level=info msg="CreateContainer within sandbox \"5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 00:07:49.152691 containerd[1894]: time="2026-01-23T00:07:49.152575143Z" level=info msg="Container d35e50c4715393699d44f1bba66ce1d6b6fd3bbb04b003b418be936c8d952922: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:07:49.152838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3895268336.mount: Deactivated successfully. Jan 23 00:07:49.168932 containerd[1894]: time="2026-01-23T00:07:49.168818361Z" level=info msg="CreateContainer within sandbox \"5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d35e50c4715393699d44f1bba66ce1d6b6fd3bbb04b003b418be936c8d952922\"" Jan 23 00:07:49.170706 containerd[1894]: time="2026-01-23T00:07:49.169936081Z" level=info msg="StartContainer for \"d35e50c4715393699d44f1bba66ce1d6b6fd3bbb04b003b418be936c8d952922\"" Jan 23 00:07:49.170706 containerd[1894]: time="2026-01-23T00:07:49.170600753Z" level=info msg="connecting to shim d35e50c4715393699d44f1bba66ce1d6b6fd3bbb04b003b418be936c8d952922" address="unix:///run/containerd/s/abf6602fd0781e533288e55aa6fd1aff40725277d741fbfb4fd95937b8a93d2c" protocol=ttrpc version=3 Jan 23 00:07:49.193874 systemd[1]: Started cri-containerd-d35e50c4715393699d44f1bba66ce1d6b6fd3bbb04b003b418be936c8d952922.scope - libcontainer container d35e50c4715393699d44f1bba66ce1d6b6fd3bbb04b003b418be936c8d952922. Jan 23 00:07:49.224449 containerd[1894]: time="2026-01-23T00:07:49.224407660Z" level=info msg="StartContainer for \"d35e50c4715393699d44f1bba66ce1d6b6fd3bbb04b003b418be936c8d952922\" returns successfully" Jan 23 00:07:49.229927 systemd[1]: cri-containerd-d35e50c4715393699d44f1bba66ce1d6b6fd3bbb04b003b418be936c8d952922.scope: Deactivated successfully. 
Jan 23 00:07:49.232975 containerd[1894]: time="2026-01-23T00:07:49.232937332Z" level=info msg="received container exit event container_id:\"d35e50c4715393699d44f1bba66ce1d6b6fd3bbb04b003b418be936c8d952922\" id:\"d35e50c4715393699d44f1bba66ce1d6b6fd3bbb04b003b418be936c8d952922\" pid:3851 exited_at:{seconds:1769126869 nanos:231867254}" Jan 23 00:07:50.150773 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d35e50c4715393699d44f1bba66ce1d6b6fd3bbb04b003b418be936c8d952922-rootfs.mount: Deactivated successfully. Jan 23 00:07:51.655822 containerd[1894]: time="2026-01-23T00:07:51.655732597Z" level=info msg="CreateContainer within sandbox \"5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 00:07:51.679882 containerd[1894]: time="2026-01-23T00:07:51.679380966Z" level=info msg="Container fd43e2b7058fe22d1f8ecbed66758e47a3863ea0ef8901540ee7999d6797f4fa: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:07:51.696564 containerd[1894]: time="2026-01-23T00:07:51.696525169Z" level=info msg="CreateContainer within sandbox \"5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fd43e2b7058fe22d1f8ecbed66758e47a3863ea0ef8901540ee7999d6797f4fa\"" Jan 23 00:07:51.697958 containerd[1894]: time="2026-01-23T00:07:51.697919226Z" level=info msg="StartContainer for \"fd43e2b7058fe22d1f8ecbed66758e47a3863ea0ef8901540ee7999d6797f4fa\"" Jan 23 00:07:51.698853 containerd[1894]: time="2026-01-23T00:07:51.698825835Z" level=info msg="connecting to shim fd43e2b7058fe22d1f8ecbed66758e47a3863ea0ef8901540ee7999d6797f4fa" address="unix:///run/containerd/s/abf6602fd0781e533288e55aa6fd1aff40725277d741fbfb4fd95937b8a93d2c" protocol=ttrpc version=3 Jan 23 00:07:51.723990 systemd[1]: Started cri-containerd-fd43e2b7058fe22d1f8ecbed66758e47a3863ea0ef8901540ee7999d6797f4fa.scope - libcontainer 
container fd43e2b7058fe22d1f8ecbed66758e47a3863ea0ef8901540ee7999d6797f4fa. Jan 23 00:07:51.755423 containerd[1894]: time="2026-01-23T00:07:51.755383136Z" level=info msg="StartContainer for \"fd43e2b7058fe22d1f8ecbed66758e47a3863ea0ef8901540ee7999d6797f4fa\" returns successfully" Jan 23 00:07:51.765432 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 00:07:51.766123 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 00:07:51.767725 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 23 00:07:51.769932 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 00:07:51.771073 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 00:07:51.771350 systemd[1]: cri-containerd-fd43e2b7058fe22d1f8ecbed66758e47a3863ea0ef8901540ee7999d6797f4fa.scope: Deactivated successfully. Jan 23 00:07:51.774463 containerd[1894]: time="2026-01-23T00:07:51.774368971Z" level=info msg="received container exit event container_id:\"fd43e2b7058fe22d1f8ecbed66758e47a3863ea0ef8901540ee7999d6797f4fa\" id:\"fd43e2b7058fe22d1f8ecbed66758e47a3863ea0ef8901540ee7999d6797f4fa\" pid:3899 exited_at:{seconds:1769126871 nanos:773856889}" Jan 23 00:07:51.789230 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 00:07:52.665063 containerd[1894]: time="2026-01-23T00:07:52.665017329Z" level=info msg="CreateContainer within sandbox \"5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 00:07:52.680386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd43e2b7058fe22d1f8ecbed66758e47a3863ea0ef8901540ee7999d6797f4fa-rootfs.mount: Deactivated successfully. 
Jan 23 00:07:52.687309 containerd[1894]: time="2026-01-23T00:07:52.684942132Z" level=info msg="Container 7b3c3673ae3f35f0579e8efb516f77d658dcb9939aa4ef874a988a2a864cfef0: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:07:52.705422 containerd[1894]: time="2026-01-23T00:07:52.705376657Z" level=info msg="CreateContainer within sandbox \"5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7b3c3673ae3f35f0579e8efb516f77d658dcb9939aa4ef874a988a2a864cfef0\"" Jan 23 00:07:52.706384 containerd[1894]: time="2026-01-23T00:07:52.706281025Z" level=info msg="StartContainer for \"7b3c3673ae3f35f0579e8efb516f77d658dcb9939aa4ef874a988a2a864cfef0\"" Jan 23 00:07:52.709269 containerd[1894]: time="2026-01-23T00:07:52.709214961Z" level=info msg="connecting to shim 7b3c3673ae3f35f0579e8efb516f77d658dcb9939aa4ef874a988a2a864cfef0" address="unix:///run/containerd/s/abf6602fd0781e533288e55aa6fd1aff40725277d741fbfb4fd95937b8a93d2c" protocol=ttrpc version=3 Jan 23 00:07:52.730828 systemd[1]: Started cri-containerd-7b3c3673ae3f35f0579e8efb516f77d658dcb9939aa4ef874a988a2a864cfef0.scope - libcontainer container 7b3c3673ae3f35f0579e8efb516f77d658dcb9939aa4ef874a988a2a864cfef0. Jan 23 00:07:52.791326 systemd[1]: cri-containerd-7b3c3673ae3f35f0579e8efb516f77d658dcb9939aa4ef874a988a2a864cfef0.scope: Deactivated successfully. 
Jan 23 00:07:52.797684 containerd[1894]: time="2026-01-23T00:07:52.796655791Z" level=info msg="received container exit event container_id:\"7b3c3673ae3f35f0579e8efb516f77d658dcb9939aa4ef874a988a2a864cfef0\" id:\"7b3c3673ae3f35f0579e8efb516f77d658dcb9939aa4ef874a988a2a864cfef0\" pid:3958 exited_at:{seconds:1769126872 nanos:793785073}" Jan 23 00:07:52.808766 containerd[1894]: time="2026-01-23T00:07:52.808716531Z" level=info msg="StartContainer for \"7b3c3673ae3f35f0579e8efb516f77d658dcb9939aa4ef874a988a2a864cfef0\" returns successfully" Jan 23 00:07:52.826293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b3c3673ae3f35f0579e8efb516f77d658dcb9939aa4ef874a988a2a864cfef0-rootfs.mount: Deactivated successfully. Jan 23 00:07:53.119393 containerd[1894]: time="2026-01-23T00:07:53.118848766Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:53.121511 containerd[1894]: time="2026-01-23T00:07:53.121486211Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 23 00:07:53.124606 containerd[1894]: time="2026-01-23T00:07:53.124581841Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:53.125419 containerd[1894]: time="2026-01-23T00:07:53.125390902Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size 
\"17128551\" in 4.023306804s" Jan 23 00:07:53.125511 containerd[1894]: time="2026-01-23T00:07:53.125497706Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 23 00:07:53.132891 containerd[1894]: time="2026-01-23T00:07:53.132858103Z" level=info msg="CreateContainer within sandbox \"5bbcd02d6949e84a55d8236264c17d3e18149e5d86666f45afdc02d73f1002d4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 00:07:53.149706 containerd[1894]: time="2026-01-23T00:07:53.149504109Z" level=info msg="Container a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:07:53.161595 containerd[1894]: time="2026-01-23T00:07:53.161551617Z" level=info msg="CreateContainer within sandbox \"5bbcd02d6949e84a55d8236264c17d3e18149e5d86666f45afdc02d73f1002d4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81\"" Jan 23 00:07:53.163086 containerd[1894]: time="2026-01-23T00:07:53.162361653Z" level=info msg="StartContainer for \"a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81\"" Jan 23 00:07:53.163086 containerd[1894]: time="2026-01-23T00:07:53.163012173Z" level=info msg="connecting to shim a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81" address="unix:///run/containerd/s/3049bc5b2de50344a78a3b329186da362e5c9f210209683ee40f988306c9be1e" protocol=ttrpc version=3 Jan 23 00:07:53.179814 systemd[1]: Started cri-containerd-a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81.scope - libcontainer container a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81. 
Jan 23 00:07:53.206695 containerd[1894]: time="2026-01-23T00:07:53.206584638Z" level=info msg="StartContainer for \"a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81\" returns successfully" Jan 23 00:07:53.670306 containerd[1894]: time="2026-01-23T00:07:53.670228855Z" level=info msg="CreateContainer within sandbox \"5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 00:07:53.691988 containerd[1894]: time="2026-01-23T00:07:53.691949242Z" level=info msg="Container 0410462a25dfcd7c59f0517cf2f2e7eee1251261c2331e4fd7bb9f7e5fe1e67e: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:07:53.695634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount521035306.mount: Deactivated successfully. Jan 23 00:07:53.707742 containerd[1894]: time="2026-01-23T00:07:53.707697305Z" level=info msg="CreateContainer within sandbox \"5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0410462a25dfcd7c59f0517cf2f2e7eee1251261c2331e4fd7bb9f7e5fe1e67e\"" Jan 23 00:07:53.708415 containerd[1894]: time="2026-01-23T00:07:53.708382009Z" level=info msg="StartContainer for \"0410462a25dfcd7c59f0517cf2f2e7eee1251261c2331e4fd7bb9f7e5fe1e67e\"" Jan 23 00:07:53.709960 containerd[1894]: time="2026-01-23T00:07:53.709925424Z" level=info msg="connecting to shim 0410462a25dfcd7c59f0517cf2f2e7eee1251261c2331e4fd7bb9f7e5fe1e67e" address="unix:///run/containerd/s/abf6602fd0781e533288e55aa6fd1aff40725277d741fbfb4fd95937b8a93d2c" protocol=ttrpc version=3 Jan 23 00:07:53.737904 systemd[1]: Started cri-containerd-0410462a25dfcd7c59f0517cf2f2e7eee1251261c2331e4fd7bb9f7e5fe1e67e.scope - libcontainer container 0410462a25dfcd7c59f0517cf2f2e7eee1251261c2331e4fd7bb9f7e5fe1e67e. 
Jan 23 00:07:53.776049 kubelet[3423]: I0123 00:07:53.775983 3423 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-v6x5j" podStartSLOduration=1.704927222 podStartE2EDuration="11.775969167s" podCreationTimestamp="2026-01-23 00:07:42 +0000 UTC" firstStartedPulling="2026-01-23 00:07:43.055219492 +0000 UTC m=+7.536486745" lastFinishedPulling="2026-01-23 00:07:53.126261437 +0000 UTC m=+17.607528690" observedRunningTime="2026-01-23 00:07:53.774624847 +0000 UTC m=+18.255892100" watchObservedRunningTime="2026-01-23 00:07:53.775969167 +0000 UTC m=+18.257236420" Jan 23 00:07:53.800355 systemd[1]: cri-containerd-0410462a25dfcd7c59f0517cf2f2e7eee1251261c2331e4fd7bb9f7e5fe1e67e.scope: Deactivated successfully. Jan 23 00:07:53.804964 containerd[1894]: time="2026-01-23T00:07:53.804861416Z" level=info msg="received container exit event container_id:\"0410462a25dfcd7c59f0517cf2f2e7eee1251261c2331e4fd7bb9f7e5fe1e67e\" id:\"0410462a25dfcd7c59f0517cf2f2e7eee1251261c2331e4fd7bb9f7e5fe1e67e\" pid:4032 exited_at:{seconds:1769126873 nanos:802927267}" Jan 23 00:07:53.817548 containerd[1894]: time="2026-01-23T00:07:53.817507136Z" level=info msg="StartContainer for \"0410462a25dfcd7c59f0517cf2f2e7eee1251261c2331e4fd7bb9f7e5fe1e67e\" returns successfully" Jan 23 00:07:54.678991 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0410462a25dfcd7c59f0517cf2f2e7eee1251261c2331e4fd7bb9f7e5fe1e67e-rootfs.mount: Deactivated successfully. 
Jan 23 00:07:54.681710 containerd[1894]: time="2026-01-23T00:07:54.680972465Z" level=info msg="CreateContainer within sandbox \"5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 00:07:54.710412 containerd[1894]: time="2026-01-23T00:07:54.707825401Z" level=info msg="Container f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:07:54.709261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2791155030.mount: Deactivated successfully. Jan 23 00:07:54.723158 containerd[1894]: time="2026-01-23T00:07:54.723061430Z" level=info msg="CreateContainer within sandbox \"5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2\"" Jan 23 00:07:54.723825 containerd[1894]: time="2026-01-23T00:07:54.723652283Z" level=info msg="StartContainer for \"f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2\"" Jan 23 00:07:54.724782 containerd[1894]: time="2026-01-23T00:07:54.724751594Z" level=info msg="connecting to shim f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2" address="unix:///run/containerd/s/abf6602fd0781e533288e55aa6fd1aff40725277d741fbfb4fd95937b8a93d2c" protocol=ttrpc version=3 Jan 23 00:07:54.742820 systemd[1]: Started cri-containerd-f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2.scope - libcontainer container f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2. 
Jan 23 00:07:54.782719 containerd[1894]: time="2026-01-23T00:07:54.782651840Z" level=info msg="StartContainer for \"f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2\" returns successfully" Jan 23 00:07:54.923203 kubelet[3423]: I0123 00:07:54.923144 3423 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 00:07:54.965356 systemd[1]: Created slice kubepods-burstable-pod04bada73_9a47_4cf4_bc41_820a3d52f832.slice - libcontainer container kubepods-burstable-pod04bada73_9a47_4cf4_bc41_820a3d52f832.slice. Jan 23 00:07:54.971827 systemd[1]: Created slice kubepods-burstable-pod3864746b_e164_458f_927d_5a048ba7fa21.slice - libcontainer container kubepods-burstable-pod3864746b_e164_458f_927d_5a048ba7fa21.slice. Jan 23 00:07:55.008274 kubelet[3423]: I0123 00:07:55.008176 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3864746b-e164-458f-927d-5a048ba7fa21-config-volume\") pod \"coredns-674b8bbfcf-wgmnt\" (UID: \"3864746b-e164-458f-927d-5a048ba7fa21\") " pod="kube-system/coredns-674b8bbfcf-wgmnt" Jan 23 00:07:55.008586 kubelet[3423]: I0123 00:07:55.008568 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbcg6\" (UniqueName: \"kubernetes.io/projected/3864746b-e164-458f-927d-5a048ba7fa21-kube-api-access-bbcg6\") pod \"coredns-674b8bbfcf-wgmnt\" (UID: \"3864746b-e164-458f-927d-5a048ba7fa21\") " pod="kube-system/coredns-674b8bbfcf-wgmnt" Jan 23 00:07:55.008848 kubelet[3423]: I0123 00:07:55.008750 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04bada73-9a47-4cf4-bc41-820a3d52f832-config-volume\") pod \"coredns-674b8bbfcf-ch56j\" (UID: \"04bada73-9a47-4cf4-bc41-820a3d52f832\") " pod="kube-system/coredns-674b8bbfcf-ch56j" Jan 23 00:07:55.008990 
kubelet[3423]: I0123 00:07:55.008921 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh792\" (UniqueName: \"kubernetes.io/projected/04bada73-9a47-4cf4-bc41-820a3d52f832-kube-api-access-sh792\") pod \"coredns-674b8bbfcf-ch56j\" (UID: \"04bada73-9a47-4cf4-bc41-820a3d52f832\") " pod="kube-system/coredns-674b8bbfcf-ch56j" Jan 23 00:07:55.269831 containerd[1894]: time="2026-01-23T00:07:55.269747457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ch56j,Uid:04bada73-9a47-4cf4-bc41-820a3d52f832,Namespace:kube-system,Attempt:0,}" Jan 23 00:07:55.278680 containerd[1894]: time="2026-01-23T00:07:55.278565026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wgmnt,Uid:3864746b-e164-458f-927d-5a048ba7fa21,Namespace:kube-system,Attempt:0,}" Jan 23 00:07:55.694166 kubelet[3423]: I0123 00:07:55.693712 3423 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cdccg" podStartSLOduration=7.352913444 podStartE2EDuration="13.693695466s" podCreationTimestamp="2026-01-23 00:07:42 +0000 UTC" firstStartedPulling="2026-01-23 00:07:42.761044331 +0000 UTC m=+7.242311592" lastFinishedPulling="2026-01-23 00:07:49.101826361 +0000 UTC m=+13.583093614" observedRunningTime="2026-01-23 00:07:55.69320752 +0000 UTC m=+20.174474773" watchObservedRunningTime="2026-01-23 00:07:55.693695466 +0000 UTC m=+20.174962719" Jan 23 00:07:56.769097 systemd-networkd[1486]: cilium_host: Link UP Jan 23 00:07:56.770475 systemd-networkd[1486]: cilium_net: Link UP Jan 23 00:07:56.771613 systemd-networkd[1486]: cilium_net: Gained carrier Jan 23 00:07:56.772814 systemd-networkd[1486]: cilium_host: Gained carrier Jan 23 00:07:57.045271 systemd-networkd[1486]: cilium_vxlan: Link UP Jan 23 00:07:57.045828 systemd-networkd[1486]: cilium_vxlan: Gained carrier Jan 23 00:07:57.429156 systemd-networkd[1486]: cilium_net: Gained IPv6LL Jan 23 00:07:57.514728 
kernel: NET: Registered PF_ALG protocol family Jan 23 00:07:57.748846 systemd-networkd[1486]: cilium_host: Gained IPv6LL Jan 23 00:07:58.130807 systemd-networkd[1486]: lxc_health: Link UP Jan 23 00:07:58.138361 systemd-networkd[1486]: cilium_vxlan: Gained IPv6LL Jan 23 00:07:58.145297 systemd-networkd[1486]: lxc_health: Gained carrier Jan 23 00:07:58.297621 systemd-networkd[1486]: lxc49afe84a6eae: Link UP Jan 23 00:07:58.309697 kernel: eth0: renamed from tmpe49c4 Jan 23 00:07:58.312572 systemd-networkd[1486]: lxc49afe84a6eae: Gained carrier Jan 23 00:07:58.317768 systemd-networkd[1486]: lxc9d7f152812ac: Link UP Jan 23 00:07:58.329686 kernel: eth0: renamed from tmp8d50f Jan 23 00:07:58.330005 systemd-networkd[1486]: lxc9d7f152812ac: Gained carrier Jan 23 00:07:59.541821 systemd-networkd[1486]: lxc_health: Gained IPv6LL Jan 23 00:07:59.542449 systemd-networkd[1486]: lxc49afe84a6eae: Gained IPv6LL Jan 23 00:07:59.924924 systemd-networkd[1486]: lxc9d7f152812ac: Gained IPv6LL Jan 23 00:08:00.988152 containerd[1894]: time="2026-01-23T00:08:00.988065814Z" level=info msg="connecting to shim 8d50f401066f7049dbcee7aaf219bb70a79b9c3fa85f5f286aae14c11f758c53" address="unix:///run/containerd/s/2711487a8b627ff686d521aa69cf9cb504d3dc684c6a9e2d9fa35dc75d4ef63d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:08:01.015893 containerd[1894]: time="2026-01-23T00:08:01.015840140Z" level=info msg="connecting to shim e49c42a2ed3db0ba8b6670c3d1345e14980fbe96384afdcfa6f0c3fbe97c7584" address="unix:///run/containerd/s/615c0a21f708a1d857e15484fb9f4de5ccbf131d0d5417afe8bcc8859d1bda65" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:08:01.021251 systemd[1]: Started cri-containerd-8d50f401066f7049dbcee7aaf219bb70a79b9c3fa85f5f286aae14c11f758c53.scope - libcontainer container 8d50f401066f7049dbcee7aaf219bb70a79b9c3fa85f5f286aae14c11f758c53. 
Jan 23 00:08:01.040901 systemd[1]: Started cri-containerd-e49c42a2ed3db0ba8b6670c3d1345e14980fbe96384afdcfa6f0c3fbe97c7584.scope - libcontainer container e49c42a2ed3db0ba8b6670c3d1345e14980fbe96384afdcfa6f0c3fbe97c7584. Jan 23 00:08:01.061906 containerd[1894]: time="2026-01-23T00:08:01.061871757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wgmnt,Uid:3864746b-e164-458f-927d-5a048ba7fa21,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d50f401066f7049dbcee7aaf219bb70a79b9c3fa85f5f286aae14c11f758c53\"" Jan 23 00:08:01.070926 containerd[1894]: time="2026-01-23T00:08:01.070890991Z" level=info msg="CreateContainer within sandbox \"8d50f401066f7049dbcee7aaf219bb70a79b9c3fa85f5f286aae14c11f758c53\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 00:08:01.082371 containerd[1894]: time="2026-01-23T00:08:01.082329391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ch56j,Uid:04bada73-9a47-4cf4-bc41-820a3d52f832,Namespace:kube-system,Attempt:0,} returns sandbox id \"e49c42a2ed3db0ba8b6670c3d1345e14980fbe96384afdcfa6f0c3fbe97c7584\"" Jan 23 00:08:01.094987 containerd[1894]: time="2026-01-23T00:08:01.094369132Z" level=info msg="CreateContainer within sandbox \"e49c42a2ed3db0ba8b6670c3d1345e14980fbe96384afdcfa6f0c3fbe97c7584\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 00:08:01.095267 containerd[1894]: time="2026-01-23T00:08:01.095213106Z" level=info msg="Container 6186782b5e30d104a9d3bdd5abd9a99c60976902b6e5b746fb2fabb231a81d84: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:08:01.128266 containerd[1894]: time="2026-01-23T00:08:01.128223436Z" level=info msg="CreateContainer within sandbox \"8d50f401066f7049dbcee7aaf219bb70a79b9c3fa85f5f286aae14c11f758c53\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6186782b5e30d104a9d3bdd5abd9a99c60976902b6e5b746fb2fabb231a81d84\"" Jan 23 00:08:01.133212 containerd[1894]: 
time="2026-01-23T00:08:01.132034507Z" level=info msg="StartContainer for \"6186782b5e30d104a9d3bdd5abd9a99c60976902b6e5b746fb2fabb231a81d84\"" Jan 23 00:08:01.135075 containerd[1894]: time="2026-01-23T00:08:01.134952187Z" level=info msg="connecting to shim 6186782b5e30d104a9d3bdd5abd9a99c60976902b6e5b746fb2fabb231a81d84" address="unix:///run/containerd/s/2711487a8b627ff686d521aa69cf9cb504d3dc684c6a9e2d9fa35dc75d4ef63d" protocol=ttrpc version=3 Jan 23 00:08:01.151867 systemd[1]: Started cri-containerd-6186782b5e30d104a9d3bdd5abd9a99c60976902b6e5b746fb2fabb231a81d84.scope - libcontainer container 6186782b5e30d104a9d3bdd5abd9a99c60976902b6e5b746fb2fabb231a81d84. Jan 23 00:08:01.162673 containerd[1894]: time="2026-01-23T00:08:01.162538043Z" level=info msg="Container 0906ae4134c26339a3c1bd4fed8ae59f27dcf043a780fb24daa5b37494653d96: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:08:01.182456 containerd[1894]: time="2026-01-23T00:08:01.182342517Z" level=info msg="CreateContainer within sandbox \"e49c42a2ed3db0ba8b6670c3d1345e14980fbe96384afdcfa6f0c3fbe97c7584\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0906ae4134c26339a3c1bd4fed8ae59f27dcf043a780fb24daa5b37494653d96\"" Jan 23 00:08:01.182593 containerd[1894]: time="2026-01-23T00:08:01.182495818Z" level=info msg="StartContainer for \"6186782b5e30d104a9d3bdd5abd9a99c60976902b6e5b746fb2fabb231a81d84\" returns successfully" Jan 23 00:08:01.183858 containerd[1894]: time="2026-01-23T00:08:01.183759888Z" level=info msg="StartContainer for \"0906ae4134c26339a3c1bd4fed8ae59f27dcf043a780fb24daa5b37494653d96\"" Jan 23 00:08:01.184739 containerd[1894]: time="2026-01-23T00:08:01.184675008Z" level=info msg="connecting to shim 0906ae4134c26339a3c1bd4fed8ae59f27dcf043a780fb24daa5b37494653d96" address="unix:///run/containerd/s/615c0a21f708a1d857e15484fb9f4de5ccbf131d0d5417afe8bcc8859d1bda65" protocol=ttrpc version=3 Jan 23 00:08:01.205808 systemd[1]: Started 
cri-containerd-0906ae4134c26339a3c1bd4fed8ae59f27dcf043a780fb24daa5b37494653d96.scope - libcontainer container 0906ae4134c26339a3c1bd4fed8ae59f27dcf043a780fb24daa5b37494653d96. Jan 23 00:08:01.242046 containerd[1894]: time="2026-01-23T00:08:01.241897177Z" level=info msg="StartContainer for \"0906ae4134c26339a3c1bd4fed8ae59f27dcf043a780fb24daa5b37494653d96\" returns successfully" Jan 23 00:08:01.706080 kubelet[3423]: I0123 00:08:01.706026 3423 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wgmnt" podStartSLOduration=19.706012158 podStartE2EDuration="19.706012158s" podCreationTimestamp="2026-01-23 00:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:08:01.703988301 +0000 UTC m=+26.185255554" watchObservedRunningTime="2026-01-23 00:08:01.706012158 +0000 UTC m=+26.187279411" Jan 23 00:08:01.977815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3904287826.mount: Deactivated successfully. Jan 23 00:09:09.264774 systemd[1]: Started sshd@7-10.200.20.18:22-10.200.16.10:34762.service - OpenSSH per-connection server daemon (10.200.16.10:34762). Jan 23 00:09:09.720410 sshd[4755]: Accepted publickey for core from 10.200.16.10 port 34762 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:09:09.721198 sshd-session[4755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:09:09.725167 systemd-logind[1876]: New session 10 of user core. Jan 23 00:09:09.729799 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 00:09:10.138154 sshd[4758]: Connection closed by 10.200.16.10 port 34762 Jan 23 00:09:10.137451 sshd-session[4755]: pam_unix(sshd:session): session closed for user core Jan 23 00:09:10.140905 systemd-logind[1876]: Session 10 logged out. Waiting for processes to exit. 
Jan 23 00:09:10.141542 systemd[1]: sshd@7-10.200.20.18:22-10.200.16.10:34762.service: Deactivated successfully. Jan 23 00:09:10.143349 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 00:09:10.145605 systemd-logind[1876]: Removed session 10. Jan 23 00:09:12.088207 update_engine[1881]: I20260123 00:09:12.087724 1881 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 23 00:09:12.088207 update_engine[1881]: I20260123 00:09:12.087778 1881 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 23 00:09:12.088207 update_engine[1881]: I20260123 00:09:12.087939 1881 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 23 00:09:12.089196 update_engine[1881]: I20260123 00:09:12.089159 1881 omaha_request_params.cc:62] Current group set to stable Jan 23 00:09:12.089267 update_engine[1881]: I20260123 00:09:12.089249 1881 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 23 00:09:12.089267 update_engine[1881]: I20260123 00:09:12.089257 1881 update_attempter.cc:643] Scheduling an action processor start. 
Jan 23 00:09:12.089303 update_engine[1881]: I20260123 00:09:12.089272 1881 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 23 00:09:12.089303 update_engine[1881]: I20260123 00:09:12.089296 1881 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 23 00:09:12.089357 update_engine[1881]: I20260123 00:09:12.089339 1881 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 23 00:09:12.089357 update_engine[1881]: I20260123 00:09:12.089349 1881 omaha_request_action.cc:272] Request: Jan 23 00:09:12.089357 update_engine[1881]: Jan 23 00:09:12.089357 update_engine[1881]: Jan 23 00:09:12.089357 update_engine[1881]: Jan 23 00:09:12.089357 update_engine[1881]: Jan 23 00:09:12.089357 update_engine[1881]: Jan 23 00:09:12.089357 update_engine[1881]: Jan 23 00:09:12.089357 update_engine[1881]: Jan 23 00:09:12.089357 update_engine[1881]: Jan 23 00:09:12.089357 update_engine[1881]: I20260123 00:09:12.089354 1881 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 00:09:12.090010 locksmithd[1958]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 23 00:09:12.090186 update_engine[1881]: I20260123 00:09:12.090006 1881 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 00:09:12.090493 update_engine[1881]: I20260123 00:09:12.090466 1881 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 00:09:12.130613 update_engine[1881]: E20260123 00:09:12.130468 1881 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 00:09:12.130613 update_engine[1881]: I20260123 00:09:12.130577 1881 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 23 00:09:15.231873 systemd[1]: Started sshd@8-10.200.20.18:22-10.200.16.10:59750.service - OpenSSH per-connection server daemon (10.200.16.10:59750). 
Jan 23 00:09:15.732809 sshd[4773]: Accepted publickey for core from 10.200.16.10 port 59750 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:09:15.733986 sshd-session[4773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:09:15.738011 systemd-logind[1876]: New session 11 of user core. Jan 23 00:09:15.749875 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 00:09:16.129890 sshd[4776]: Connection closed by 10.200.16.10 port 59750 Jan 23 00:09:16.129798 sshd-session[4773]: pam_unix(sshd:session): session closed for user core Jan 23 00:09:16.133140 systemd[1]: sshd@8-10.200.20.18:22-10.200.16.10:59750.service: Deactivated successfully. Jan 23 00:09:16.134985 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 00:09:16.136141 systemd-logind[1876]: Session 11 logged out. Waiting for processes to exit. Jan 23 00:09:16.137330 systemd-logind[1876]: Removed session 11. Jan 23 00:09:21.222906 systemd[1]: Started sshd@9-10.200.20.18:22-10.200.16.10:58106.service - OpenSSH per-connection server daemon (10.200.16.10:58106). Jan 23 00:09:21.711139 sshd[4788]: Accepted publickey for core from 10.200.16.10 port 58106 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:09:21.712223 sshd-session[4788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:09:21.715718 systemd-logind[1876]: New session 12 of user core. Jan 23 00:09:21.718821 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 00:09:22.086586 update_engine[1881]: I20260123 00:09:22.086063 1881 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 00:09:22.086586 update_engine[1881]: I20260123 00:09:22.086169 1881 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 00:09:22.086586 update_engine[1881]: I20260123 00:09:22.086539 1881 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 23 00:09:22.096341 update_engine[1881]: E20260123 00:09:22.096292 1881 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 00:09:22.096591 update_engine[1881]: I20260123 00:09:22.096560 1881 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 23 00:09:22.105760 sshd[4791]: Connection closed by 10.200.16.10 port 58106 Jan 23 00:09:22.106329 sshd-session[4788]: pam_unix(sshd:session): session closed for user core Jan 23 00:09:22.109687 systemd[1]: sshd@9-10.200.20.18:22-10.200.16.10:58106.service: Deactivated successfully. Jan 23 00:09:22.113041 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 00:09:22.113955 systemd-logind[1876]: Session 12 logged out. Waiting for processes to exit. Jan 23 00:09:22.115687 systemd-logind[1876]: Removed session 12. Jan 23 00:09:27.200208 systemd[1]: Started sshd@10-10.200.20.18:22-10.200.16.10:58116.service - OpenSSH per-connection server daemon (10.200.16.10:58116). Jan 23 00:09:27.689705 sshd[4803]: Accepted publickey for core from 10.200.16.10 port 58116 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:09:27.690675 sshd-session[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:09:27.695313 systemd-logind[1876]: New session 13 of user core. Jan 23 00:09:27.700935 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 00:09:28.080634 sshd[4806]: Connection closed by 10.200.16.10 port 58116 Jan 23 00:09:28.081211 sshd-session[4803]: pam_unix(sshd:session): session closed for user core Jan 23 00:09:28.084653 systemd[1]: sshd@10-10.200.20.18:22-10.200.16.10:58116.service: Deactivated successfully. Jan 23 00:09:28.087212 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 00:09:28.088216 systemd-logind[1876]: Session 13 logged out. Waiting for processes to exit. Jan 23 00:09:28.089519 systemd-logind[1876]: Removed session 13. 
Jan 23 00:09:28.168883 systemd[1]: Started sshd@11-10.200.20.18:22-10.200.16.10:58118.service - OpenSSH per-connection server daemon (10.200.16.10:58118). Jan 23 00:09:28.664211 sshd[4819]: Accepted publickey for core from 10.200.16.10 port 58118 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:09:28.665331 sshd-session[4819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:09:28.669444 systemd-logind[1876]: New session 14 of user core. Jan 23 00:09:28.675827 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 00:09:29.305851 sshd[4822]: Connection closed by 10.200.16.10 port 58118 Jan 23 00:09:29.306250 sshd-session[4819]: pam_unix(sshd:session): session closed for user core Jan 23 00:09:29.311013 systemd-logind[1876]: Session 14 logged out. Waiting for processes to exit. Jan 23 00:09:29.311437 systemd[1]: sshd@11-10.200.20.18:22-10.200.16.10:58118.service: Deactivated successfully. Jan 23 00:09:29.313198 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 00:09:29.315754 systemd-logind[1876]: Removed session 14. Jan 23 00:09:29.396799 systemd[1]: Started sshd@12-10.200.20.18:22-10.200.16.10:58122.service - OpenSSH per-connection server daemon (10.200.16.10:58122). Jan 23 00:09:29.854616 sshd[4831]: Accepted publickey for core from 10.200.16.10 port 58122 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:09:29.855393 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:09:29.859089 systemd-logind[1876]: New session 15 of user core. Jan 23 00:09:29.868856 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 00:09:30.227384 sshd[4834]: Connection closed by 10.200.16.10 port 58122 Jan 23 00:09:30.227210 sshd-session[4831]: pam_unix(sshd:session): session closed for user core Jan 23 00:09:30.230776 systemd-logind[1876]: Session 15 logged out. Waiting for processes to exit. 
Jan 23 00:09:30.230935 systemd[1]: sshd@12-10.200.20.18:22-10.200.16.10:58122.service: Deactivated successfully. Jan 23 00:09:30.232825 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 00:09:30.235181 systemd-logind[1876]: Removed session 15. Jan 23 00:09:32.086205 update_engine[1881]: I20260123 00:09:32.086118 1881 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 00:09:32.086205 update_engine[1881]: I20260123 00:09:32.086220 1881 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 00:09:32.086740 update_engine[1881]: I20260123 00:09:32.086711 1881 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 00:09:32.104375 update_engine[1881]: E20260123 00:09:32.104319 1881 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 00:09:32.104504 update_engine[1881]: I20260123 00:09:32.104398 1881 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 23 00:09:35.313635 systemd[1]: Started sshd@13-10.200.20.18:22-10.200.16.10:43412.service - OpenSSH per-connection server daemon (10.200.16.10:43412). Jan 23 00:09:35.763123 sshd[4846]: Accepted publickey for core from 10.200.16.10 port 43412 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:09:35.764317 sshd-session[4846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:09:35.767936 systemd-logind[1876]: New session 16 of user core. Jan 23 00:09:35.778814 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 00:09:36.132878 sshd[4851]: Connection closed by 10.200.16.10 port 43412 Jan 23 00:09:36.133423 sshd-session[4846]: pam_unix(sshd:session): session closed for user core Jan 23 00:09:36.137200 systemd-logind[1876]: Session 16 logged out. Waiting for processes to exit. Jan 23 00:09:36.137498 systemd[1]: sshd@13-10.200.20.18:22-10.200.16.10:43412.service: Deactivated successfully. 
Jan 23 00:09:36.139325 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 00:09:36.141073 systemd-logind[1876]: Removed session 16. Jan 23 00:09:36.214914 systemd[1]: Started sshd@14-10.200.20.18:22-10.200.16.10:43418.service - OpenSSH per-connection server daemon (10.200.16.10:43418). Jan 23 00:09:36.672375 sshd[4862]: Accepted publickey for core from 10.200.16.10 port 43418 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:09:36.673442 sshd-session[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:09:36.677301 systemd-logind[1876]: New session 17 of user core. Jan 23 00:09:36.683956 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 00:09:37.093492 sshd[4865]: Connection closed by 10.200.16.10 port 43418 Jan 23 00:09:37.095876 sshd-session[4862]: pam_unix(sshd:session): session closed for user core Jan 23 00:09:37.100296 systemd[1]: sshd@14-10.200.20.18:22-10.200.16.10:43418.service: Deactivated successfully. Jan 23 00:09:37.104081 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 00:09:37.105045 systemd-logind[1876]: Session 17 logged out. Waiting for processes to exit. Jan 23 00:09:37.106496 systemd-logind[1876]: Removed session 17. Jan 23 00:09:37.188636 systemd[1]: Started sshd@15-10.200.20.18:22-10.200.16.10:43432.service - OpenSSH per-connection server daemon (10.200.16.10:43432). Jan 23 00:09:37.642858 sshd[4875]: Accepted publickey for core from 10.200.16.10 port 43432 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:09:37.643998 sshd-session[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:09:37.647798 systemd-logind[1876]: New session 18 of user core. Jan 23 00:09:37.653820 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 23 00:09:38.438835 sshd[4878]: Connection closed by 10.200.16.10 port 43432 Jan 23 00:09:38.438562 sshd-session[4875]: pam_unix(sshd:session): session closed for user core Jan 23 00:09:38.442619 systemd[1]: sshd@15-10.200.20.18:22-10.200.16.10:43432.service: Deactivated successfully. Jan 23 00:09:38.444624 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 00:09:38.446150 systemd-logind[1876]: Session 18 logged out. Waiting for processes to exit. Jan 23 00:09:38.447631 systemd-logind[1876]: Removed session 18. Jan 23 00:09:38.513767 systemd[1]: Started sshd@16-10.200.20.18:22-10.200.16.10:43436.service - OpenSSH per-connection server daemon (10.200.16.10:43436). Jan 23 00:09:38.970619 sshd[4895]: Accepted publickey for core from 10.200.16.10 port 43436 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:09:38.971423 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:09:38.975095 systemd-logind[1876]: New session 19 of user core. Jan 23 00:09:38.983074 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 00:09:39.416460 sshd[4898]: Connection closed by 10.200.16.10 port 43436 Jan 23 00:09:39.416359 sshd-session[4895]: pam_unix(sshd:session): session closed for user core Jan 23 00:09:39.419351 systemd[1]: sshd@16-10.200.20.18:22-10.200.16.10:43436.service: Deactivated successfully. Jan 23 00:09:39.421508 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 00:09:39.423762 systemd-logind[1876]: Session 19 logged out. Waiting for processes to exit. Jan 23 00:09:39.424972 systemd-logind[1876]: Removed session 19. Jan 23 00:09:39.513820 systemd[1]: Started sshd@17-10.200.20.18:22-10.200.16.10:44158.service - OpenSSH per-connection server daemon (10.200.16.10:44158). 
Jan 23 00:09:40.008931 sshd[4908]: Accepted publickey for core from 10.200.16.10 port 44158 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:09:40.010189 sshd-session[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:09:40.014164 systemd-logind[1876]: New session 20 of user core. Jan 23 00:09:40.020835 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 00:09:40.401242 sshd[4911]: Connection closed by 10.200.16.10 port 44158 Jan 23 00:09:40.401858 sshd-session[4908]: pam_unix(sshd:session): session closed for user core Jan 23 00:09:40.404383 systemd[1]: sshd@17-10.200.20.18:22-10.200.16.10:44158.service: Deactivated successfully. Jan 23 00:09:40.406380 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 00:09:40.408641 systemd-logind[1876]: Session 20 logged out. Waiting for processes to exit. Jan 23 00:09:40.409864 systemd-logind[1876]: Removed session 20. Jan 23 00:09:42.087703 update_engine[1881]: I20260123 00:09:42.087379 1881 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 00:09:42.087703 update_engine[1881]: I20260123 00:09:42.087491 1881 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 00:09:42.088094 update_engine[1881]: I20260123 00:09:42.087913 1881 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 00:09:42.122649 update_engine[1881]: E20260123 00:09:42.122587 1881 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 00:09:42.122817 update_engine[1881]: I20260123 00:09:42.122682 1881 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 23 00:09:42.122817 update_engine[1881]: I20260123 00:09:42.122691 1881 omaha_request_action.cc:617] Omaha request response: Jan 23 00:09:42.122817 update_engine[1881]: E20260123 00:09:42.122783 1881 omaha_request_action.cc:636] Omaha request network transfer failed. 
Jan 23 00:09:42.122817 update_engine[1881]: I20260123 00:09:42.122797 1881 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 23 00:09:42.122817 update_engine[1881]: I20260123 00:09:42.122802 1881 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 00:09:42.122817 update_engine[1881]: I20260123 00:09:42.122805 1881 update_attempter.cc:306] Processing Done. Jan 23 00:09:42.122817 update_engine[1881]: E20260123 00:09:42.122819 1881 update_attempter.cc:619] Update failed. Jan 23 00:09:42.122817 update_engine[1881]: I20260123 00:09:42.122823 1881 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 23 00:09:42.122946 update_engine[1881]: I20260123 00:09:42.122826 1881 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 23 00:09:42.122946 update_engine[1881]: I20260123 00:09:42.122830 1881 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 23 00:09:42.122946 update_engine[1881]: I20260123 00:09:42.122897 1881 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 23 00:09:42.122946 update_engine[1881]: I20260123 00:09:42.122915 1881 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 23 00:09:42.122946 update_engine[1881]: I20260123 00:09:42.122920 1881 omaha_request_action.cc:272] Request: Jan 23 00:09:42.122946 update_engine[1881]: Jan 23 00:09:42.122946 update_engine[1881]: Jan 23 00:09:42.122946 update_engine[1881]: Jan 23 00:09:42.122946 update_engine[1881]: Jan 23 00:09:42.122946 update_engine[1881]: Jan 23 00:09:42.122946 update_engine[1881]: Jan 23 00:09:42.122946 update_engine[1881]: I20260123 00:09:42.122923 1881 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 00:09:42.122946 update_engine[1881]: I20260123 00:09:42.122941 1881 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 00:09:42.123304 update_engine[1881]: I20260123 00:09:42.123275 1881 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 23 00:09:42.123482 locksmithd[1958]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 23 00:09:42.222774 update_engine[1881]: E20260123 00:09:42.222708 1881 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 00:09:42.223104 update_engine[1881]: I20260123 00:09:42.222800 1881 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 23 00:09:42.223104 update_engine[1881]: I20260123 00:09:42.222807 1881 omaha_request_action.cc:617] Omaha request response: Jan 23 00:09:42.223104 update_engine[1881]: I20260123 00:09:42.222813 1881 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 00:09:42.223104 update_engine[1881]: I20260123 00:09:42.222816 1881 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 00:09:42.223104 update_engine[1881]: I20260123 00:09:42.222820 1881 update_attempter.cc:306] Processing Done. Jan 23 00:09:42.223104 update_engine[1881]: I20260123 00:09:42.222826 1881 update_attempter.cc:310] Error event sent. Jan 23 00:09:42.223104 update_engine[1881]: I20260123 00:09:42.222836 1881 update_check_scheduler.cc:74] Next update check in 41m52s Jan 23 00:09:42.223385 locksmithd[1958]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 23 00:09:45.487248 systemd[1]: Started sshd@18-10.200.20.18:22-10.200.16.10:44164.service - OpenSSH per-connection server daemon (10.200.16.10:44164). Jan 23 00:09:45.950858 sshd[4927]: Accepted publickey for core from 10.200.16.10 port 44164 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:09:45.952071 sshd-session[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:09:45.955989 systemd-logind[1876]: New session 21 of user core. 
Jan 23 00:09:45.963805 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 00:09:46.324266 sshd[4930]: Connection closed by 10.200.16.10 port 44164 Jan 23 00:09:46.324878 sshd-session[4927]: pam_unix(sshd:session): session closed for user core Jan 23 00:09:46.328645 systemd[1]: sshd@18-10.200.20.18:22-10.200.16.10:44164.service: Deactivated successfully. Jan 23 00:09:46.330532 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 00:09:46.331809 systemd-logind[1876]: Session 21 logged out. Waiting for processes to exit. Jan 23 00:09:46.332833 systemd-logind[1876]: Removed session 21. Jan 23 00:09:51.413481 systemd[1]: Started sshd@19-10.200.20.18:22-10.200.16.10:53388.service - OpenSSH per-connection server daemon (10.200.16.10:53388). Jan 23 00:09:51.906699 sshd[4942]: Accepted publickey for core from 10.200.16.10 port 53388 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:09:51.907563 sshd-session[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:09:51.911483 systemd-logind[1876]: New session 22 of user core. Jan 23 00:09:51.918836 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 00:09:52.297699 sshd[4945]: Connection closed by 10.200.16.10 port 53388 Jan 23 00:09:52.297754 sshd-session[4942]: pam_unix(sshd:session): session closed for user core Jan 23 00:09:52.300927 systemd-logind[1876]: Session 22 logged out. Waiting for processes to exit. Jan 23 00:09:52.301068 systemd[1]: sshd@19-10.200.20.18:22-10.200.16.10:53388.service: Deactivated successfully. Jan 23 00:09:52.302713 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 00:09:52.304580 systemd-logind[1876]: Removed session 22. Jan 23 00:09:52.389629 systemd[1]: Started sshd@20-10.200.20.18:22-10.200.16.10:53394.service - OpenSSH per-connection server daemon (10.200.16.10:53394). 
Jan 23 00:09:52.887386 sshd[4956]: Accepted publickey for core from 10.200.16.10 port 53394 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:09:52.888549 sshd-session[4956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:09:52.893076 systemd-logind[1876]: New session 23 of user core. Jan 23 00:09:52.899823 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 00:09:54.433590 kubelet[3423]: I0123 00:09:54.433511 3423 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ch56j" podStartSLOduration=132.433491375 podStartE2EDuration="2m12.433491375s" podCreationTimestamp="2026-01-23 00:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:08:01.736885178 +0000 UTC m=+26.218152431" watchObservedRunningTime="2026-01-23 00:09:54.433491375 +0000 UTC m=+138.914758668" Jan 23 00:09:54.443638 containerd[1894]: time="2026-01-23T00:09:54.443287169Z" level=info msg="StopContainer for \"a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81\" with timeout 30 (s)" Jan 23 00:09:54.444343 containerd[1894]: time="2026-01-23T00:09:54.444116951Z" level=info msg="Stop container \"a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81\" with signal terminated" Jan 23 00:09:54.460846 systemd[1]: cri-containerd-a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81.scope: Deactivated successfully. 
Jan 23 00:09:54.462528 containerd[1894]: time="2026-01-23T00:09:54.462479960Z" level=info msg="received container exit event container_id:\"a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81\" id:\"a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81\" pid:4000 exited_at:{seconds:1769126994 nanos:462139220}" Jan 23 00:09:54.464452 containerd[1894]: time="2026-01-23T00:09:54.464408260Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 00:09:54.478488 containerd[1894]: time="2026-01-23T00:09:54.478412267Z" level=info msg="StopContainer for \"f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2\" with timeout 2 (s)" Jan 23 00:09:54.478963 containerd[1894]: time="2026-01-23T00:09:54.478943774Z" level=info msg="Stop container \"f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2\" with signal terminated" Jan 23 00:09:54.488740 systemd-networkd[1486]: lxc_health: Link DOWN Jan 23 00:09:54.488750 systemd-networkd[1486]: lxc_health: Lost carrier Jan 23 00:09:54.494030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81-rootfs.mount: Deactivated successfully. Jan 23 00:09:54.504824 systemd[1]: cri-containerd-f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2.scope: Deactivated successfully. Jan 23 00:09:54.505116 systemd[1]: cri-containerd-f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2.scope: Consumed 4.486s CPU time, 125.4M memory peak, 128K read from disk, 12.9M written to disk. 
Jan 23 00:09:54.507284 containerd[1894]: time="2026-01-23T00:09:54.507225038Z" level=info msg="received container exit event container_id:\"f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2\" id:\"f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2\" pid:4067 exited_at:{seconds:1769126994 nanos:506543638}" Jan 23 00:09:54.524891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2-rootfs.mount: Deactivated successfully. Jan 23 00:09:54.560165 containerd[1894]: time="2026-01-23T00:09:54.560033097Z" level=info msg="StopContainer for \"f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2\" returns successfully" Jan 23 00:09:54.560763 containerd[1894]: time="2026-01-23T00:09:54.560651775Z" level=info msg="StopPodSandbox for \"5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340\"" Jan 23 00:09:54.560763 containerd[1894]: time="2026-01-23T00:09:54.560732922Z" level=info msg="Container to stop \"d35e50c4715393699d44f1bba66ce1d6b6fd3bbb04b003b418be936c8d952922\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 00:09:54.560763 containerd[1894]: time="2026-01-23T00:09:54.560743394Z" level=info msg="Container to stop \"7b3c3673ae3f35f0579e8efb516f77d658dcb9939aa4ef874a988a2a864cfef0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 00:09:54.561081 containerd[1894]: time="2026-01-23T00:09:54.560750555Z" level=info msg="Container to stop \"0410462a25dfcd7c59f0517cf2f2e7eee1251261c2331e4fd7bb9f7e5fe1e67e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 00:09:54.561081 containerd[1894]: time="2026-01-23T00:09:54.560889440Z" level=info msg="Container to stop \"f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 00:09:54.561081 containerd[1894]: 
time="2026-01-23T00:09:54.560897264Z" level=info msg="Container to stop \"fd43e2b7058fe22d1f8ecbed66758e47a3863ea0ef8901540ee7999d6797f4fa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 00:09:54.564395 containerd[1894]: time="2026-01-23T00:09:54.564366930Z" level=info msg="StopContainer for \"a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81\" returns successfully" Jan 23 00:09:54.565041 containerd[1894]: time="2026-01-23T00:09:54.565013185Z" level=info msg="StopPodSandbox for \"5bbcd02d6949e84a55d8236264c17d3e18149e5d86666f45afdc02d73f1002d4\"" Jan 23 00:09:54.565110 containerd[1894]: time="2026-01-23T00:09:54.565084852Z" level=info msg="Container to stop \"a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 00:09:54.567389 systemd[1]: cri-containerd-5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340.scope: Deactivated successfully. Jan 23 00:09:54.570222 containerd[1894]: time="2026-01-23T00:09:54.570128646Z" level=info msg="received sandbox exit event container_id:\"5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340\" id:\"5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340\" exit_status:137 exited_at:{seconds:1769126994 nanos:569990233}" monitor_name=podsandbox Jan 23 00:09:54.573442 systemd[1]: cri-containerd-5bbcd02d6949e84a55d8236264c17d3e18149e5d86666f45afdc02d73f1002d4.scope: Deactivated successfully. 
Jan 23 00:09:54.575530 containerd[1894]: time="2026-01-23T00:09:54.575440562Z" level=info msg="received sandbox exit event container_id:\"5bbcd02d6949e84a55d8236264c17d3e18149e5d86666f45afdc02d73f1002d4\" id:\"5bbcd02d6949e84a55d8236264c17d3e18149e5d86666f45afdc02d73f1002d4\" exit_status:137 exited_at:{seconds:1769126994 nanos:575214570}" monitor_name=podsandbox Jan 23 00:09:54.599567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340-rootfs.mount: Deactivated successfully. Jan 23 00:09:54.604751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bbcd02d6949e84a55d8236264c17d3e18149e5d86666f45afdc02d73f1002d4-rootfs.mount: Deactivated successfully. Jan 23 00:09:54.614397 containerd[1894]: time="2026-01-23T00:09:54.614363650Z" level=info msg="shim disconnected" id=5bbcd02d6949e84a55d8236264c17d3e18149e5d86666f45afdc02d73f1002d4 namespace=k8s.io Jan 23 00:09:54.614658 containerd[1894]: time="2026-01-23T00:09:54.614619603Z" level=warning msg="cleaning up after shim disconnected" id=5bbcd02d6949e84a55d8236264c17d3e18149e5d86666f45afdc02d73f1002d4 namespace=k8s.io Jan 23 00:09:54.614826 containerd[1894]: time="2026-01-23T00:09:54.614719015Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 00:09:54.615373 containerd[1894]: time="2026-01-23T00:09:54.615343509Z" level=info msg="shim disconnected" id=5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340 namespace=k8s.io Jan 23 00:09:54.615468 containerd[1894]: time="2026-01-23T00:09:54.615366630Z" level=warning msg="cleaning up after shim disconnected" id=5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340 namespace=k8s.io Jan 23 00:09:54.615468 containerd[1894]: time="2026-01-23T00:09:54.615387126Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 00:09:54.628684 containerd[1894]: time="2026-01-23T00:09:54.628607282Z" level=info msg="received sandbox container exit event 
sandbox_id:\"5bbcd02d6949e84a55d8236264c17d3e18149e5d86666f45afdc02d73f1002d4\" exit_status:137 exited_at:{seconds:1769126994 nanos:575214570}" monitor_name=criService Jan 23 00:09:54.630692 containerd[1894]: time="2026-01-23T00:09:54.629300322Z" level=info msg="received sandbox container exit event sandbox_id:\"5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340\" exit_status:137 exited_at:{seconds:1769126994 nanos:569990233}" monitor_name=criService Jan 23 00:09:54.630483 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5bbcd02d6949e84a55d8236264c17d3e18149e5d86666f45afdc02d73f1002d4-shm.mount: Deactivated successfully. Jan 23 00:09:54.631173 containerd[1894]: time="2026-01-23T00:09:54.630987982Z" level=info msg="TearDown network for sandbox \"5bbcd02d6949e84a55d8236264c17d3e18149e5d86666f45afdc02d73f1002d4\" successfully" Jan 23 00:09:54.631173 containerd[1894]: time="2026-01-23T00:09:54.631012903Z" level=info msg="StopPodSandbox for \"5bbcd02d6949e84a55d8236264c17d3e18149e5d86666f45afdc02d73f1002d4\" returns successfully" Jan 23 00:09:54.631355 containerd[1894]: time="2026-01-23T00:09:54.631306225Z" level=info msg="TearDown network for sandbox \"5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340\" successfully" Jan 23 00:09:54.631835 containerd[1894]: time="2026-01-23T00:09:54.631813563Z" level=info msg="StopPodSandbox for \"5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340\" returns successfully" Jan 23 00:09:54.732293 kubelet[3423]: I0123 00:09:54.732169 3423 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqhk4\" (UniqueName: \"kubernetes.io/projected/56470302-2682-4296-a9e7-7f2ee55dd4de-kube-api-access-fqhk4\") pod \"56470302-2682-4296-a9e7-7f2ee55dd4de\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " Jan 23 00:09:54.732293 kubelet[3423]: I0123 00:09:54.732212 3423 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-bpf-maps\") pod \"56470302-2682-4296-a9e7-7f2ee55dd4de\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " Jan 23 00:09:54.732293 kubelet[3423]: I0123 00:09:54.732228 3423 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-cilium-cgroup\") pod \"56470302-2682-4296-a9e7-7f2ee55dd4de\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " Jan 23 00:09:54.732293 kubelet[3423]: I0123 00:09:54.732237 3423 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-lib-modules\") pod \"56470302-2682-4296-a9e7-7f2ee55dd4de\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " Jan 23 00:09:54.732293 kubelet[3423]: I0123 00:09:54.732249 3423 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56470302-2682-4296-a9e7-7f2ee55dd4de-cilium-config-path\") pod \"56470302-2682-4296-a9e7-7f2ee55dd4de\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " Jan 23 00:09:54.732293 kubelet[3423]: I0123 00:09:54.732259 3423 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-etc-cni-netd\") pod \"56470302-2682-4296-a9e7-7f2ee55dd4de\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " Jan 23 00:09:54.732532 kubelet[3423]: I0123 00:09:54.732270 3423 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56470302-2682-4296-a9e7-7f2ee55dd4de-hubble-tls\") pod \"56470302-2682-4296-a9e7-7f2ee55dd4de\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " Jan 23 00:09:54.732532 kubelet[3423]: I0123 00:09:54.732282 
3423 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj57v\" (UniqueName: \"kubernetes.io/projected/59ed3a82-3511-4c83-ab8e-0c32a4f9ee55-kube-api-access-wj57v\") pod \"59ed3a82-3511-4c83-ab8e-0c32a4f9ee55\" (UID: \"59ed3a82-3511-4c83-ab8e-0c32a4f9ee55\") " Jan 23 00:09:54.732532 kubelet[3423]: I0123 00:09:54.732292 3423 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-hostproc\") pod \"56470302-2682-4296-a9e7-7f2ee55dd4de\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " Jan 23 00:09:54.732532 kubelet[3423]: I0123 00:09:54.732303 3423 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-host-proc-sys-net\") pod \"56470302-2682-4296-a9e7-7f2ee55dd4de\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " Jan 23 00:09:54.732532 kubelet[3423]: I0123 00:09:54.732334 3423 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56470302-2682-4296-a9e7-7f2ee55dd4de-clustermesh-secrets\") pod \"56470302-2682-4296-a9e7-7f2ee55dd4de\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " Jan 23 00:09:54.732532 kubelet[3423]: I0123 00:09:54.732343 3423 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-cni-path\") pod \"56470302-2682-4296-a9e7-7f2ee55dd4de\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " Jan 23 00:09:54.732624 kubelet[3423]: I0123 00:09:54.732352 3423 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-xtables-lock\") pod \"56470302-2682-4296-a9e7-7f2ee55dd4de\" 
(UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " Jan 23 00:09:54.732624 kubelet[3423]: I0123 00:09:54.732361 3423 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-cilium-run\") pod \"56470302-2682-4296-a9e7-7f2ee55dd4de\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " Jan 23 00:09:54.732624 kubelet[3423]: I0123 00:09:54.732371 3423 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-host-proc-sys-kernel\") pod \"56470302-2682-4296-a9e7-7f2ee55dd4de\" (UID: \"56470302-2682-4296-a9e7-7f2ee55dd4de\") " Jan 23 00:09:54.732624 kubelet[3423]: I0123 00:09:54.732381 3423 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59ed3a82-3511-4c83-ab8e-0c32a4f9ee55-cilium-config-path\") pod \"59ed3a82-3511-4c83-ab8e-0c32a4f9ee55\" (UID: \"59ed3a82-3511-4c83-ab8e-0c32a4f9ee55\") " Jan 23 00:09:54.734426 kubelet[3423]: I0123 00:09:54.733654 3423 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59ed3a82-3511-4c83-ab8e-0c32a4f9ee55-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "59ed3a82-3511-4c83-ab8e-0c32a4f9ee55" (UID: "59ed3a82-3511-4c83-ab8e-0c32a4f9ee55"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 00:09:54.734426 kubelet[3423]: I0123 00:09:54.733974 3423 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-hostproc" (OuterVolumeSpecName: "hostproc") pod "56470302-2682-4296-a9e7-7f2ee55dd4de" (UID: "56470302-2682-4296-a9e7-7f2ee55dd4de"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 00:09:54.734426 kubelet[3423]: I0123 00:09:54.733991 3423 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "56470302-2682-4296-a9e7-7f2ee55dd4de" (UID: "56470302-2682-4296-a9e7-7f2ee55dd4de"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 00:09:54.734729 kubelet[3423]: I0123 00:09:54.734691 3423 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-cni-path" (OuterVolumeSpecName: "cni-path") pod "56470302-2682-4296-a9e7-7f2ee55dd4de" (UID: "56470302-2682-4296-a9e7-7f2ee55dd4de"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 00:09:54.734831 kubelet[3423]: I0123 00:09:54.734817 3423 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "56470302-2682-4296-a9e7-7f2ee55dd4de" (UID: "56470302-2682-4296-a9e7-7f2ee55dd4de"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 00:09:54.735767 kubelet[3423]: I0123 00:09:54.735720 3423 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "56470302-2682-4296-a9e7-7f2ee55dd4de" (UID: "56470302-2682-4296-a9e7-7f2ee55dd4de"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 00:09:54.735851 kubelet[3423]: I0123 00:09:54.735736 3423 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "56470302-2682-4296-a9e7-7f2ee55dd4de" (UID: "56470302-2682-4296-a9e7-7f2ee55dd4de"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 00:09:54.736811 kubelet[3423]: I0123 00:09:54.736777 3423 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "56470302-2682-4296-a9e7-7f2ee55dd4de" (UID: "56470302-2682-4296-a9e7-7f2ee55dd4de"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 00:09:54.736859 kubelet[3423]: I0123 00:09:54.736829 3423 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "56470302-2682-4296-a9e7-7f2ee55dd4de" (UID: "56470302-2682-4296-a9e7-7f2ee55dd4de"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 00:09:54.736859 kubelet[3423]: I0123 00:09:54.736841 3423 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "56470302-2682-4296-a9e7-7f2ee55dd4de" (UID: "56470302-2682-4296-a9e7-7f2ee55dd4de"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 00:09:54.736859 kubelet[3423]: I0123 00:09:54.736853 3423 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "56470302-2682-4296-a9e7-7f2ee55dd4de" (UID: "56470302-2682-4296-a9e7-7f2ee55dd4de"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 00:09:54.737733 kubelet[3423]: I0123 00:09:54.737711 3423 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56470302-2682-4296-a9e7-7f2ee55dd4de-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "56470302-2682-4296-a9e7-7f2ee55dd4de" (UID: "56470302-2682-4296-a9e7-7f2ee55dd4de"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 00:09:54.739018 kubelet[3423]: I0123 00:09:54.738989 3423 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59ed3a82-3511-4c83-ab8e-0c32a4f9ee55-kube-api-access-wj57v" (OuterVolumeSpecName: "kube-api-access-wj57v") pod "59ed3a82-3511-4c83-ab8e-0c32a4f9ee55" (UID: "59ed3a82-3511-4c83-ab8e-0c32a4f9ee55"). InnerVolumeSpecName "kube-api-access-wj57v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 00:09:54.739085 kubelet[3423]: I0123 00:09:54.739040 3423 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56470302-2682-4296-a9e7-7f2ee55dd4de-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "56470302-2682-4296-a9e7-7f2ee55dd4de" (UID: "56470302-2682-4296-a9e7-7f2ee55dd4de"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 00:09:54.740067 kubelet[3423]: I0123 00:09:54.740022 3423 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56470302-2682-4296-a9e7-7f2ee55dd4de-kube-api-access-fqhk4" (OuterVolumeSpecName: "kube-api-access-fqhk4") pod "56470302-2682-4296-a9e7-7f2ee55dd4de" (UID: "56470302-2682-4296-a9e7-7f2ee55dd4de"). InnerVolumeSpecName "kube-api-access-fqhk4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 00:09:54.740397 kubelet[3423]: I0123 00:09:54.740372 3423 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56470302-2682-4296-a9e7-7f2ee55dd4de-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "56470302-2682-4296-a9e7-7f2ee55dd4de" (UID: "56470302-2682-4296-a9e7-7f2ee55dd4de"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 00:09:54.832757 kubelet[3423]: I0123 00:09:54.832711 3423 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-host-proc-sys-net\") on node \"ci-4459.2.2-n-db2e6badfc\" DevicePath \"\"" Jan 23 00:09:54.832757 kubelet[3423]: I0123 00:09:54.832751 3423 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56470302-2682-4296-a9e7-7f2ee55dd4de-clustermesh-secrets\") on node \"ci-4459.2.2-n-db2e6badfc\" DevicePath \"\"" Jan 23 00:09:54.832757 kubelet[3423]: I0123 00:09:54.832758 3423 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-cni-path\") on node \"ci-4459.2.2-n-db2e6badfc\" DevicePath \"\"" Jan 23 00:09:54.832757 kubelet[3423]: I0123 00:09:54.832764 3423 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-xtables-lock\") on node \"ci-4459.2.2-n-db2e6badfc\" DevicePath \"\"" Jan 23 00:09:54.832757 kubelet[3423]: I0123 00:09:54.832772 3423 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-cilium-run\") on node \"ci-4459.2.2-n-db2e6badfc\" DevicePath \"\"" Jan 23 00:09:54.832757 kubelet[3423]: I0123 00:09:54.832778 3423 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-host-proc-sys-kernel\") on node \"ci-4459.2.2-n-db2e6badfc\" DevicePath \"\"" Jan 23 00:09:54.833000 kubelet[3423]: I0123 00:09:54.832784 3423 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59ed3a82-3511-4c83-ab8e-0c32a4f9ee55-cilium-config-path\") on node \"ci-4459.2.2-n-db2e6badfc\" DevicePath \"\"" Jan 23 00:09:54.833000 kubelet[3423]: I0123 00:09:54.832789 3423 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fqhk4\" (UniqueName: \"kubernetes.io/projected/56470302-2682-4296-a9e7-7f2ee55dd4de-kube-api-access-fqhk4\") on node \"ci-4459.2.2-n-db2e6badfc\" DevicePath \"\"" Jan 23 00:09:54.833000 kubelet[3423]: I0123 00:09:54.832797 3423 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-bpf-maps\") on node \"ci-4459.2.2-n-db2e6badfc\" DevicePath \"\"" Jan 23 00:09:54.833000 kubelet[3423]: I0123 00:09:54.832802 3423 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-cilium-cgroup\") on node \"ci-4459.2.2-n-db2e6badfc\" DevicePath \"\"" Jan 23 00:09:54.833000 kubelet[3423]: I0123 00:09:54.832806 3423 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-lib-modules\") on node \"ci-4459.2.2-n-db2e6badfc\" DevicePath \"\"" Jan 23 00:09:54.833000 kubelet[3423]: I0123 00:09:54.832811 3423 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56470302-2682-4296-a9e7-7f2ee55dd4de-cilium-config-path\") on node \"ci-4459.2.2-n-db2e6badfc\" DevicePath \"\"" Jan 23 00:09:54.833000 kubelet[3423]: I0123 00:09:54.832816 3423 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-etc-cni-netd\") on node \"ci-4459.2.2-n-db2e6badfc\" DevicePath \"\"" Jan 23 00:09:54.833000 kubelet[3423]: I0123 00:09:54.832821 3423 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56470302-2682-4296-a9e7-7f2ee55dd4de-hubble-tls\") on node \"ci-4459.2.2-n-db2e6badfc\" DevicePath \"\"" Jan 23 00:09:54.833113 kubelet[3423]: I0123 00:09:54.832827 3423 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj57v\" (UniqueName: \"kubernetes.io/projected/59ed3a82-3511-4c83-ab8e-0c32a4f9ee55-kube-api-access-wj57v\") on node \"ci-4459.2.2-n-db2e6badfc\" DevicePath \"\"" Jan 23 00:09:54.833113 kubelet[3423]: I0123 00:09:54.832847 3423 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56470302-2682-4296-a9e7-7f2ee55dd4de-hostproc\") on node \"ci-4459.2.2-n-db2e6badfc\" DevicePath \"\"" Jan 23 00:09:54.898698 kubelet[3423]: I0123 00:09:54.898645 3423 scope.go:117] "RemoveContainer" containerID="a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81" Jan 23 00:09:54.903892 containerd[1894]: time="2026-01-23T00:09:54.903814677Z" level=info msg="RemoveContainer for \"a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81\"" Jan 23 00:09:54.904232 systemd[1]: Removed slice 
kubepods-besteffort-pod59ed3a82_3511_4c83_ab8e_0c32a4f9ee55.slice - libcontainer container kubepods-besteffort-pod59ed3a82_3511_4c83_ab8e_0c32a4f9ee55.slice. Jan 23 00:09:54.910402 systemd[1]: Removed slice kubepods-burstable-pod56470302_2682_4296_a9e7_7f2ee55dd4de.slice - libcontainer container kubepods-burstable-pod56470302_2682_4296_a9e7_7f2ee55dd4de.slice. Jan 23 00:09:54.910484 systemd[1]: kubepods-burstable-pod56470302_2682_4296_a9e7_7f2ee55dd4de.slice: Consumed 4.556s CPU time, 125.8M memory peak, 128K read from disk, 12.9M written to disk. Jan 23 00:09:54.919156 containerd[1894]: time="2026-01-23T00:09:54.919067416Z" level=info msg="RemoveContainer for \"a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81\" returns successfully" Jan 23 00:09:54.919866 kubelet[3423]: I0123 00:09:54.919762 3423 scope.go:117] "RemoveContainer" containerID="a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81" Jan 23 00:09:54.920332 containerd[1894]: time="2026-01-23T00:09:54.920295684Z" level=error msg="ContainerStatus for \"a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81\": not found" Jan 23 00:09:54.921142 kubelet[3423]: E0123 00:09:54.920818 3423 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81\": not found" containerID="a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81" Jan 23 00:09:54.921316 kubelet[3423]: I0123 00:09:54.921266 3423 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81"} err="failed to get container status 
\"a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81\": rpc error: code = NotFound desc = an error occurred when try to find container \"a022ace991c44bdc21316edd762f0d38972bb7c2efe2a40da32c767382d69d81\": not found" Jan 23 00:09:54.921524 kubelet[3423]: I0123 00:09:54.921506 3423 scope.go:117] "RemoveContainer" containerID="f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2" Jan 23 00:09:54.925316 containerd[1894]: time="2026-01-23T00:09:54.925282172Z" level=info msg="RemoveContainer for \"f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2\"" Jan 23 00:09:54.933355 containerd[1894]: time="2026-01-23T00:09:54.933111641Z" level=info msg="RemoveContainer for \"f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2\" returns successfully" Jan 23 00:09:54.933968 kubelet[3423]: I0123 00:09:54.933784 3423 scope.go:117] "RemoveContainer" containerID="0410462a25dfcd7c59f0517cf2f2e7eee1251261c2331e4fd7bb9f7e5fe1e67e" Jan 23 00:09:54.936926 containerd[1894]: time="2026-01-23T00:09:54.936882902Z" level=info msg="RemoveContainer for \"0410462a25dfcd7c59f0517cf2f2e7eee1251261c2331e4fd7bb9f7e5fe1e67e\"" Jan 23 00:09:54.945179 containerd[1894]: time="2026-01-23T00:09:54.945141338Z" level=info msg="RemoveContainer for \"0410462a25dfcd7c59f0517cf2f2e7eee1251261c2331e4fd7bb9f7e5fe1e67e\" returns successfully" Jan 23 00:09:54.945452 kubelet[3423]: I0123 00:09:54.945418 3423 scope.go:117] "RemoveContainer" containerID="7b3c3673ae3f35f0579e8efb516f77d658dcb9939aa4ef874a988a2a864cfef0" Jan 23 00:09:54.947246 containerd[1894]: time="2026-01-23T00:09:54.947218756Z" level=info msg="RemoveContainer for \"7b3c3673ae3f35f0579e8efb516f77d658dcb9939aa4ef874a988a2a864cfef0\"" Jan 23 00:09:54.955208 containerd[1894]: time="2026-01-23T00:09:54.955175989Z" level=info msg="RemoveContainer for \"7b3c3673ae3f35f0579e8efb516f77d658dcb9939aa4ef874a988a2a864cfef0\" returns successfully" Jan 23 00:09:54.955512 kubelet[3423]: I0123 00:09:54.955396 3423 
scope.go:117] "RemoveContainer" containerID="fd43e2b7058fe22d1f8ecbed66758e47a3863ea0ef8901540ee7999d6797f4fa" Jan 23 00:09:54.956840 containerd[1894]: time="2026-01-23T00:09:54.956806759Z" level=info msg="RemoveContainer for \"fd43e2b7058fe22d1f8ecbed66758e47a3863ea0ef8901540ee7999d6797f4fa\"" Jan 23 00:09:54.964505 containerd[1894]: time="2026-01-23T00:09:54.964472214Z" level=info msg="RemoveContainer for \"fd43e2b7058fe22d1f8ecbed66758e47a3863ea0ef8901540ee7999d6797f4fa\" returns successfully" Jan 23 00:09:54.964747 kubelet[3423]: I0123 00:09:54.964727 3423 scope.go:117] "RemoveContainer" containerID="d35e50c4715393699d44f1bba66ce1d6b6fd3bbb04b003b418be936c8d952922" Jan 23 00:09:54.965941 containerd[1894]: time="2026-01-23T00:09:54.965921273Z" level=info msg="RemoveContainer for \"d35e50c4715393699d44f1bba66ce1d6b6fd3bbb04b003b418be936c8d952922\"" Jan 23 00:09:54.972849 containerd[1894]: time="2026-01-23T00:09:54.972816749Z" level=info msg="RemoveContainer for \"d35e50c4715393699d44f1bba66ce1d6b6fd3bbb04b003b418be936c8d952922\" returns successfully" Jan 23 00:09:54.973145 kubelet[3423]: I0123 00:09:54.973119 3423 scope.go:117] "RemoveContainer" containerID="f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2" Jan 23 00:09:54.973466 containerd[1894]: time="2026-01-23T00:09:54.973435811Z" level=error msg="ContainerStatus for \"f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2\": not found" Jan 23 00:09:54.973585 kubelet[3423]: E0123 00:09:54.973559 3423 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2\": not found" containerID="f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2" Jan 23 00:09:54.973617 
kubelet[3423]: I0123 00:09:54.973590 3423 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2"} err="failed to get container status \"f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1ec6c2575011fb8517e27f9149abbf4050d45fb2ea6d8f857d571c6970387f2\": not found" Jan 23 00:09:54.973617 kubelet[3423]: I0123 00:09:54.973609 3423 scope.go:117] "RemoveContainer" containerID="0410462a25dfcd7c59f0517cf2f2e7eee1251261c2331e4fd7bb9f7e5fe1e67e" Jan 23 00:09:54.973822 containerd[1894]: time="2026-01-23T00:09:54.973792375Z" level=error msg="ContainerStatus for \"0410462a25dfcd7c59f0517cf2f2e7eee1251261c2331e4fd7bb9f7e5fe1e67e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0410462a25dfcd7c59f0517cf2f2e7eee1251261c2331e4fd7bb9f7e5fe1e67e\": not found" Jan 23 00:09:54.974055 kubelet[3423]: E0123 00:09:54.974033 3423 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0410462a25dfcd7c59f0517cf2f2e7eee1251261c2331e4fd7bb9f7e5fe1e67e\": not found" containerID="0410462a25dfcd7c59f0517cf2f2e7eee1251261c2331e4fd7bb9f7e5fe1e67e" Jan 23 00:09:54.974108 kubelet[3423]: I0123 00:09:54.974055 3423 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0410462a25dfcd7c59f0517cf2f2e7eee1251261c2331e4fd7bb9f7e5fe1e67e"} err="failed to get container status \"0410462a25dfcd7c59f0517cf2f2e7eee1251261c2331e4fd7bb9f7e5fe1e67e\": rpc error: code = NotFound desc = an error occurred when try to find container \"0410462a25dfcd7c59f0517cf2f2e7eee1251261c2331e4fd7bb9f7e5fe1e67e\": not found" Jan 23 00:09:54.974108 kubelet[3423]: I0123 00:09:54.974067 3423 scope.go:117] "RemoveContainer" 
containerID="7b3c3673ae3f35f0579e8efb516f77d658dcb9939aa4ef874a988a2a864cfef0" Jan 23 00:09:54.974260 containerd[1894]: time="2026-01-23T00:09:54.974227159Z" level=error msg="ContainerStatus for \"7b3c3673ae3f35f0579e8efb516f77d658dcb9939aa4ef874a988a2a864cfef0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b3c3673ae3f35f0579e8efb516f77d658dcb9939aa4ef874a988a2a864cfef0\": not found" Jan 23 00:09:54.974353 kubelet[3423]: E0123 00:09:54.974332 3423 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7b3c3673ae3f35f0579e8efb516f77d658dcb9939aa4ef874a988a2a864cfef0\": not found" containerID="7b3c3673ae3f35f0579e8efb516f77d658dcb9939aa4ef874a988a2a864cfef0" Jan 23 00:09:54.974385 kubelet[3423]: I0123 00:09:54.974353 3423 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7b3c3673ae3f35f0579e8efb516f77d658dcb9939aa4ef874a988a2a864cfef0"} err="failed to get container status \"7b3c3673ae3f35f0579e8efb516f77d658dcb9939aa4ef874a988a2a864cfef0\": rpc error: code = NotFound desc = an error occurred when try to find container \"7b3c3673ae3f35f0579e8efb516f77d658dcb9939aa4ef874a988a2a864cfef0\": not found" Jan 23 00:09:54.974385 kubelet[3423]: I0123 00:09:54.974365 3423 scope.go:117] "RemoveContainer" containerID="fd43e2b7058fe22d1f8ecbed66758e47a3863ea0ef8901540ee7999d6797f4fa" Jan 23 00:09:54.974580 containerd[1894]: time="2026-01-23T00:09:54.974512273Z" level=error msg="ContainerStatus for \"fd43e2b7058fe22d1f8ecbed66758e47a3863ea0ef8901540ee7999d6797f4fa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd43e2b7058fe22d1f8ecbed66758e47a3863ea0ef8901540ee7999d6797f4fa\": not found" Jan 23 00:09:54.974630 kubelet[3423]: E0123 00:09:54.974598 3423 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"fd43e2b7058fe22d1f8ecbed66758e47a3863ea0ef8901540ee7999d6797f4fa\": not found" containerID="fd43e2b7058fe22d1f8ecbed66758e47a3863ea0ef8901540ee7999d6797f4fa" Jan 23 00:09:54.974630 kubelet[3423]: I0123 00:09:54.974611 3423 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fd43e2b7058fe22d1f8ecbed66758e47a3863ea0ef8901540ee7999d6797f4fa"} err="failed to get container status \"fd43e2b7058fe22d1f8ecbed66758e47a3863ea0ef8901540ee7999d6797f4fa\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd43e2b7058fe22d1f8ecbed66758e47a3863ea0ef8901540ee7999d6797f4fa\": not found" Jan 23 00:09:54.974630 kubelet[3423]: I0123 00:09:54.974620 3423 scope.go:117] "RemoveContainer" containerID="d35e50c4715393699d44f1bba66ce1d6b6fd3bbb04b003b418be936c8d952922" Jan 23 00:09:54.974980 containerd[1894]: time="2026-01-23T00:09:54.974816323Z" level=error msg="ContainerStatus for \"d35e50c4715393699d44f1bba66ce1d6b6fd3bbb04b003b418be936c8d952922\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d35e50c4715393699d44f1bba66ce1d6b6fd3bbb04b003b418be936c8d952922\": not found" Jan 23 00:09:54.975027 kubelet[3423]: E0123 00:09:54.974984 3423 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d35e50c4715393699d44f1bba66ce1d6b6fd3bbb04b003b418be936c8d952922\": not found" containerID="d35e50c4715393699d44f1bba66ce1d6b6fd3bbb04b003b418be936c8d952922" Jan 23 00:09:54.975027 kubelet[3423]: I0123 00:09:54.974998 3423 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d35e50c4715393699d44f1bba66ce1d6b6fd3bbb04b003b418be936c8d952922"} err="failed to get container status \"d35e50c4715393699d44f1bba66ce1d6b6fd3bbb04b003b418be936c8d952922\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"d35e50c4715393699d44f1bba66ce1d6b6fd3bbb04b003b418be936c8d952922\": not found" Jan 23 00:09:55.493997 systemd[1]: var-lib-kubelet-pods-59ed3a82\x2d3511\x2d4c83\x2dab8e\x2d0c32a4f9ee55-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwj57v.mount: Deactivated successfully. Jan 23 00:09:55.494087 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e089f95c2085c157fc529cfeeed67e5fde6e8a7ff00114e52359063a90f1340-shm.mount: Deactivated successfully. Jan 23 00:09:55.494140 systemd[1]: var-lib-kubelet-pods-56470302\x2d2682\x2d4296\x2da9e7\x2d7f2ee55dd4de-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfqhk4.mount: Deactivated successfully. Jan 23 00:09:55.494177 systemd[1]: var-lib-kubelet-pods-56470302\x2d2682\x2d4296\x2da9e7\x2d7f2ee55dd4de-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 23 00:09:55.494217 systemd[1]: var-lib-kubelet-pods-56470302\x2d2682\x2d4296\x2da9e7\x2d7f2ee55dd4de-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 23 00:09:55.593930 kubelet[3423]: I0123 00:09:55.593885 3423 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56470302-2682-4296-a9e7-7f2ee55dd4de" path="/var/lib/kubelet/pods/56470302-2682-4296-a9e7-7f2ee55dd4de/volumes" Jan 23 00:09:55.594813 kubelet[3423]: I0123 00:09:55.594778 3423 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59ed3a82-3511-4c83-ab8e-0c32a4f9ee55" path="/var/lib/kubelet/pods/59ed3a82-3511-4c83-ab8e-0c32a4f9ee55/volumes" Jan 23 00:09:55.670094 kubelet[3423]: E0123 00:09:55.670047 3423 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 00:09:56.465574 sshd[4959]: Connection closed by 10.200.16.10 port 53394 Jan 23 00:09:56.466198 sshd-session[4956]: pam_unix(sshd:session): session closed for user core Jan 23 00:09:56.469934 systemd[1]: sshd@20-10.200.20.18:22-10.200.16.10:53394.service: Deactivated successfully. Jan 23 00:09:56.471705 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 00:09:56.472508 systemd-logind[1876]: Session 23 logged out. Waiting for processes to exit. Jan 23 00:09:56.474234 systemd-logind[1876]: Removed session 23. Jan 23 00:09:56.555060 systemd[1]: Started sshd@21-10.200.20.18:22-10.200.16.10:53396.service - OpenSSH per-connection server daemon (10.200.16.10:53396). Jan 23 00:09:57.056620 sshd[5106]: Accepted publickey for core from 10.200.16.10 port 53396 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:09:57.057424 sshd-session[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:09:57.061070 systemd-logind[1876]: New session 24 of user core. Jan 23 00:09:57.069830 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 23 00:09:57.802831 systemd[1]: Created slice kubepods-burstable-podcc12b5b9_38a5_4dc8_90e3_dde8ce0f128f.slice - libcontainer container kubepods-burstable-podcc12b5b9_38a5_4dc8_90e3_dde8ce0f128f.slice.
Jan 23 00:09:57.847556 sshd[5109]: Connection closed by 10.200.16.10 port 53396
Jan 23 00:09:57.849146 sshd-session[5106]: pam_unix(sshd:session): session closed for user core
Jan 23 00:09:57.849889 kubelet[3423]: I0123 00:09:57.849840 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f-host-proc-sys-net\") pod \"cilium-kbtb6\" (UID: \"cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f\") " pod="kube-system/cilium-kbtb6"
Jan 23 00:09:57.849889 kubelet[3423]: I0123 00:09:57.849879 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f-cilium-run\") pod \"cilium-kbtb6\" (UID: \"cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f\") " pod="kube-system/cilium-kbtb6"
Jan 23 00:09:57.850558 kubelet[3423]: I0123 00:09:57.849906 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f-cni-path\") pod \"cilium-kbtb6\" (UID: \"cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f\") " pod="kube-system/cilium-kbtb6"
Jan 23 00:09:57.850558 kubelet[3423]: I0123 00:09:57.849920 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f-cilium-config-path\") pod \"cilium-kbtb6\" (UID: \"cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f\") " pod="kube-system/cilium-kbtb6"
Jan 23 00:09:57.850558 kubelet[3423]: I0123 00:09:57.849936 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f-bpf-maps\") pod \"cilium-kbtb6\" (UID: \"cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f\") " pod="kube-system/cilium-kbtb6"
Jan 23 00:09:57.850558 kubelet[3423]: I0123 00:09:57.849989 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f-clustermesh-secrets\") pod \"cilium-kbtb6\" (UID: \"cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f\") " pod="kube-system/cilium-kbtb6"
Jan 23 00:09:57.850558 kubelet[3423]: I0123 00:09:57.850018 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f-etc-cni-netd\") pod \"cilium-kbtb6\" (UID: \"cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f\") " pod="kube-system/cilium-kbtb6"
Jan 23 00:09:57.850558 kubelet[3423]: I0123 00:09:57.850031 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f-xtables-lock\") pod \"cilium-kbtb6\" (UID: \"cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f\") " pod="kube-system/cilium-kbtb6"
Jan 23 00:09:57.850653 kubelet[3423]: I0123 00:09:57.850046 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f-lib-modules\") pod \"cilium-kbtb6\" (UID: \"cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f\") " pod="kube-system/cilium-kbtb6"
Jan 23 00:09:57.850653 kubelet[3423]: I0123 00:09:57.850056 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f-hubble-tls\") pod \"cilium-kbtb6\" (UID: \"cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f\") " pod="kube-system/cilium-kbtb6"
Jan 23 00:09:57.850653 kubelet[3423]: I0123 00:09:57.850069 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-884lr\" (UniqueName: \"kubernetes.io/projected/cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f-kube-api-access-884lr\") pod \"cilium-kbtb6\" (UID: \"cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f\") " pod="kube-system/cilium-kbtb6"
Jan 23 00:09:57.850653 kubelet[3423]: I0123 00:09:57.850086 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f-cilium-cgroup\") pod \"cilium-kbtb6\" (UID: \"cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f\") " pod="kube-system/cilium-kbtb6"
Jan 23 00:09:57.850653 kubelet[3423]: I0123 00:09:57.850151 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f-cilium-ipsec-secrets\") pod \"cilium-kbtb6\" (UID: \"cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f\") " pod="kube-system/cilium-kbtb6"
Jan 23 00:09:57.850747 kubelet[3423]: I0123 00:09:57.850182 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f-host-proc-sys-kernel\") pod \"cilium-kbtb6\" (UID: \"cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f\") " pod="kube-system/cilium-kbtb6"
Jan 23 00:09:57.850747 kubelet[3423]: I0123 00:09:57.850196 3423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f-hostproc\") pod \"cilium-kbtb6\" (UID: \"cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f\") " pod="kube-system/cilium-kbtb6"
Jan 23 00:09:57.853590 systemd[1]: sshd@21-10.200.20.18:22-10.200.16.10:53396.service: Deactivated successfully.
Jan 23 00:09:57.857072 systemd[1]: session-24.scope: Deactivated successfully.
Jan 23 00:09:57.857960 systemd-logind[1876]: Session 24 logged out. Waiting for processes to exit.
Jan 23 00:09:57.860568 systemd-logind[1876]: Removed session 24.
Jan 23 00:09:57.937716 systemd[1]: Started sshd@22-10.200.20.18:22-10.200.16.10:53408.service - OpenSSH per-connection server daemon (10.200.16.10:53408).
Jan 23 00:09:57.958633 kubelet[3423]: I0123 00:09:57.957538 3423 setters.go:618] "Node became not ready" node="ci-4459.2.2-n-db2e6badfc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T00:09:57Z","lastTransitionTime":"2026-01-23T00:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 23 00:09:58.108325 containerd[1894]: time="2026-01-23T00:09:58.107870832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kbtb6,Uid:cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f,Namespace:kube-system,Attempt:0,}"
Jan 23 00:09:58.141110 containerd[1894]: time="2026-01-23T00:09:58.141070597Z" level=info msg="connecting to shim c89cbbddeb279752d1866bff57b10bc95863023f170904873c39a4a7fd2cb2ea" address="unix:///run/containerd/s/7850da13777f0ce9355dc98bf4ae7696d534d0080d4dcb91c29f0226baf5e97a" namespace=k8s.io protocol=ttrpc version=3
Jan 23 00:09:58.161811 systemd[1]: Started cri-containerd-c89cbbddeb279752d1866bff57b10bc95863023f170904873c39a4a7fd2cb2ea.scope - libcontainer container c89cbbddeb279752d1866bff57b10bc95863023f170904873c39a4a7fd2cb2ea.
Jan 23 00:09:58.186133 containerd[1894]: time="2026-01-23T00:09:58.186091869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kbtb6,Uid:cc12b5b9-38a5-4dc8-90e3-dde8ce0f128f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c89cbbddeb279752d1866bff57b10bc95863023f170904873c39a4a7fd2cb2ea\""
Jan 23 00:09:58.195335 containerd[1894]: time="2026-01-23T00:09:58.194867748Z" level=info msg="CreateContainer within sandbox \"c89cbbddeb279752d1866bff57b10bc95863023f170904873c39a4a7fd2cb2ea\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 23 00:09:58.210640 containerd[1894]: time="2026-01-23T00:09:58.210598336Z" level=info msg="Container 946766af1317fd8372b225c699caddeae4cdbf8355736f400992a01d01af8807: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:09:58.227454 containerd[1894]: time="2026-01-23T00:09:58.227404946Z" level=info msg="CreateContainer within sandbox \"c89cbbddeb279752d1866bff57b10bc95863023f170904873c39a4a7fd2cb2ea\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"946766af1317fd8372b225c699caddeae4cdbf8355736f400992a01d01af8807\""
Jan 23 00:09:58.229163 containerd[1894]: time="2026-01-23T00:09:58.228000079Z" level=info msg="StartContainer for \"946766af1317fd8372b225c699caddeae4cdbf8355736f400992a01d01af8807\""
Jan 23 00:09:58.229163 containerd[1894]: time="2026-01-23T00:09:58.229020043Z" level=info msg="connecting to shim 946766af1317fd8372b225c699caddeae4cdbf8355736f400992a01d01af8807" address="unix:///run/containerd/s/7850da13777f0ce9355dc98bf4ae7696d534d0080d4dcb91c29f0226baf5e97a" protocol=ttrpc version=3
Jan 23 00:09:58.244803 systemd[1]: Started cri-containerd-946766af1317fd8372b225c699caddeae4cdbf8355736f400992a01d01af8807.scope - libcontainer container 946766af1317fd8372b225c699caddeae4cdbf8355736f400992a01d01af8807.
Jan 23 00:09:58.271268 containerd[1894]: time="2026-01-23T00:09:58.271133620Z" level=info msg="StartContainer for \"946766af1317fd8372b225c699caddeae4cdbf8355736f400992a01d01af8807\" returns successfully"
Jan 23 00:09:58.275302 systemd[1]: cri-containerd-946766af1317fd8372b225c699caddeae4cdbf8355736f400992a01d01af8807.scope: Deactivated successfully.
Jan 23 00:09:58.278048 containerd[1894]: time="2026-01-23T00:09:58.277955142Z" level=info msg="received container exit event container_id:\"946766af1317fd8372b225c699caddeae4cdbf8355736f400992a01d01af8807\" id:\"946766af1317fd8372b225c699caddeae4cdbf8355736f400992a01d01af8807\" pid:5187 exited_at:{seconds:1769126998 nanos:277609873}"
Jan 23 00:09:58.395297 sshd[5119]: Accepted publickey for core from 10.200.16.10 port 53408 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc
Jan 23 00:09:58.396135 sshd-session[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:09:58.399923 systemd-logind[1876]: New session 25 of user core.
Jan 23 00:09:58.410820 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 23 00:09:58.723449 sshd[5218]: Connection closed by 10.200.16.10 port 53408
Jan 23 00:09:58.724158 sshd-session[5119]: pam_unix(sshd:session): session closed for user core
Jan 23 00:09:58.727466 systemd[1]: sshd@22-10.200.20.18:22-10.200.16.10:53408.service: Deactivated successfully.
Jan 23 00:09:58.729999 systemd[1]: session-25.scope: Deactivated successfully.
Jan 23 00:09:58.731129 systemd-logind[1876]: Session 25 logged out. Waiting for processes to exit.
Jan 23 00:09:58.732454 systemd-logind[1876]: Removed session 25.
Jan 23 00:09:58.817004 systemd[1]: Started sshd@23-10.200.20.18:22-10.200.16.10:53416.service - OpenSSH per-connection server daemon (10.200.16.10:53416).
Jan 23 00:09:58.923403 containerd[1894]: time="2026-01-23T00:09:58.923043591Z" level=info msg="CreateContainer within sandbox \"c89cbbddeb279752d1866bff57b10bc95863023f170904873c39a4a7fd2cb2ea\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 23 00:09:58.940029 containerd[1894]: time="2026-01-23T00:09:58.939631994Z" level=info msg="Container 6b28f2fff45c0af9b6f2b71ea45cbd366a9d952f3f8142ee7931adeb8998f85a: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:09:58.953606 containerd[1894]: time="2026-01-23T00:09:58.953559830Z" level=info msg="CreateContainer within sandbox \"c89cbbddeb279752d1866bff57b10bc95863023f170904873c39a4a7fd2cb2ea\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6b28f2fff45c0af9b6f2b71ea45cbd366a9d952f3f8142ee7931adeb8998f85a\""
Jan 23 00:09:58.954164 containerd[1894]: time="2026-01-23T00:09:58.954136202Z" level=info msg="StartContainer for \"6b28f2fff45c0af9b6f2b71ea45cbd366a9d952f3f8142ee7931adeb8998f85a\""
Jan 23 00:09:58.955093 containerd[1894]: time="2026-01-23T00:09:58.955065587Z" level=info msg="connecting to shim 6b28f2fff45c0af9b6f2b71ea45cbd366a9d952f3f8142ee7931adeb8998f85a" address="unix:///run/containerd/s/7850da13777f0ce9355dc98bf4ae7696d534d0080d4dcb91c29f0226baf5e97a" protocol=ttrpc version=3
Jan 23 00:09:58.978845 systemd[1]: Started cri-containerd-6b28f2fff45c0af9b6f2b71ea45cbd366a9d952f3f8142ee7931adeb8998f85a.scope - libcontainer container 6b28f2fff45c0af9b6f2b71ea45cbd366a9d952f3f8142ee7931adeb8998f85a.
Jan 23 00:09:59.008425 containerd[1894]: time="2026-01-23T00:09:59.008352104Z" level=info msg="StartContainer for \"6b28f2fff45c0af9b6f2b71ea45cbd366a9d952f3f8142ee7931adeb8998f85a\" returns successfully"
Jan 23 00:09:59.012444 systemd[1]: cri-containerd-6b28f2fff45c0af9b6f2b71ea45cbd366a9d952f3f8142ee7931adeb8998f85a.scope: Deactivated successfully.
Jan 23 00:09:59.015553 containerd[1894]: time="2026-01-23T00:09:59.015423378Z" level=info msg="received container exit event container_id:\"6b28f2fff45c0af9b6f2b71ea45cbd366a9d952f3f8142ee7931adeb8998f85a\" id:\"6b28f2fff45c0af9b6f2b71ea45cbd366a9d952f3f8142ee7931adeb8998f85a\" pid:5241 exited_at:{seconds:1769126999 nanos:14374252}"
Jan 23 00:09:59.031550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b28f2fff45c0af9b6f2b71ea45cbd366a9d952f3f8142ee7931adeb8998f85a-rootfs.mount: Deactivated successfully.
Jan 23 00:09:59.304650 sshd[5225]: Accepted publickey for core from 10.200.16.10 port 53416 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc
Jan 23 00:09:59.305858 sshd-session[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:09:59.309646 systemd-logind[1876]: New session 26 of user core.
Jan 23 00:09:59.321837 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 23 00:09:59.930566 containerd[1894]: time="2026-01-23T00:09:59.930521694Z" level=info msg="CreateContainer within sandbox \"c89cbbddeb279752d1866bff57b10bc95863023f170904873c39a4a7fd2cb2ea\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 23 00:09:59.949744 containerd[1894]: time="2026-01-23T00:09:59.949570408Z" level=info msg="Container e1c41a605f412b2c2b1589c8f4f31c216d0db562dfeef41c2ee7560aa806d104: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:09:59.966532 containerd[1894]: time="2026-01-23T00:09:59.966456021Z" level=info msg="CreateContainer within sandbox \"c89cbbddeb279752d1866bff57b10bc95863023f170904873c39a4a7fd2cb2ea\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e1c41a605f412b2c2b1589c8f4f31c216d0db562dfeef41c2ee7560aa806d104\""
Jan 23 00:09:59.967590 containerd[1894]: time="2026-01-23T00:09:59.967558220Z" level=info msg="StartContainer for \"e1c41a605f412b2c2b1589c8f4f31c216d0db562dfeef41c2ee7560aa806d104\""
Jan 23 00:09:59.968580 containerd[1894]: time="2026-01-23T00:09:59.968551991Z" level=info msg="connecting to shim e1c41a605f412b2c2b1589c8f4f31c216d0db562dfeef41c2ee7560aa806d104" address="unix:///run/containerd/s/7850da13777f0ce9355dc98bf4ae7696d534d0080d4dcb91c29f0226baf5e97a" protocol=ttrpc version=3
Jan 23 00:09:59.991837 systemd[1]: Started cri-containerd-e1c41a605f412b2c2b1589c8f4f31c216d0db562dfeef41c2ee7560aa806d104.scope - libcontainer container e1c41a605f412b2c2b1589c8f4f31c216d0db562dfeef41c2ee7560aa806d104.
Jan 23 00:10:00.067497 systemd[1]: cri-containerd-e1c41a605f412b2c2b1589c8f4f31c216d0db562dfeef41c2ee7560aa806d104.scope: Deactivated successfully.
Jan 23 00:10:00.070772 containerd[1894]: time="2026-01-23T00:10:00.070648209Z" level=info msg="received container exit event container_id:\"e1c41a605f412b2c2b1589c8f4f31c216d0db562dfeef41c2ee7560aa806d104\" id:\"e1c41a605f412b2c2b1589c8f4f31c216d0db562dfeef41c2ee7560aa806d104\" pid:5292 exited_at:{seconds:1769127000 nanos:69568507}"
Jan 23 00:10:00.073189 containerd[1894]: time="2026-01-23T00:10:00.073106408Z" level=info msg="StartContainer for \"e1c41a605f412b2c2b1589c8f4f31c216d0db562dfeef41c2ee7560aa806d104\" returns successfully"
Jan 23 00:10:00.089171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1c41a605f412b2c2b1589c8f4f31c216d0db562dfeef41c2ee7560aa806d104-rootfs.mount: Deactivated successfully.
Jan 23 00:10:00.671548 kubelet[3423]: E0123 00:10:00.671505 3423 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 23 00:10:00.935376 containerd[1894]: time="2026-01-23T00:10:00.934821633Z" level=info msg="CreateContainer within sandbox \"c89cbbddeb279752d1866bff57b10bc95863023f170904873c39a4a7fd2cb2ea\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 23 00:10:00.956689 containerd[1894]: time="2026-01-23T00:10:00.954260940Z" level=info msg="Container 8ffeeb10bca719ef5e0471047e3c16a83f19eb86f99ccba02f39fa6b8aea38a4: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:10:00.956214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2746648994.mount: Deactivated successfully.
Jan 23 00:10:00.971069 containerd[1894]: time="2026-01-23T00:10:00.971021098Z" level=info msg="CreateContainer within sandbox \"c89cbbddeb279752d1866bff57b10bc95863023f170904873c39a4a7fd2cb2ea\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8ffeeb10bca719ef5e0471047e3c16a83f19eb86f99ccba02f39fa6b8aea38a4\""
Jan 23 00:10:00.971812 containerd[1894]: time="2026-01-23T00:10:00.971630287Z" level=info msg="StartContainer for \"8ffeeb10bca719ef5e0471047e3c16a83f19eb86f99ccba02f39fa6b8aea38a4\""
Jan 23 00:10:00.973329 containerd[1894]: time="2026-01-23T00:10:00.973293090Z" level=info msg="connecting to shim 8ffeeb10bca719ef5e0471047e3c16a83f19eb86f99ccba02f39fa6b8aea38a4" address="unix:///run/containerd/s/7850da13777f0ce9355dc98bf4ae7696d534d0080d4dcb91c29f0226baf5e97a" protocol=ttrpc version=3
Jan 23 00:10:00.993844 systemd[1]: Started cri-containerd-8ffeeb10bca719ef5e0471047e3c16a83f19eb86f99ccba02f39fa6b8aea38a4.scope - libcontainer container 8ffeeb10bca719ef5e0471047e3c16a83f19eb86f99ccba02f39fa6b8aea38a4.
Jan 23 00:10:01.016819 systemd[1]: cri-containerd-8ffeeb10bca719ef5e0471047e3c16a83f19eb86f99ccba02f39fa6b8aea38a4.scope: Deactivated successfully.
Jan 23 00:10:01.022273 containerd[1894]: time="2026-01-23T00:10:01.022228834Z" level=info msg="received container exit event container_id:\"8ffeeb10bca719ef5e0471047e3c16a83f19eb86f99ccba02f39fa6b8aea38a4\" id:\"8ffeeb10bca719ef5e0471047e3c16a83f19eb86f99ccba02f39fa6b8aea38a4\" pid:5330 exited_at:{seconds:1769127001 nanos:18192852}"
Jan 23 00:10:01.023622 containerd[1894]: time="2026-01-23T00:10:01.023585394Z" level=info msg="StartContainer for \"8ffeeb10bca719ef5e0471047e3c16a83f19eb86f99ccba02f39fa6b8aea38a4\" returns successfully"
Jan 23 00:10:01.040330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ffeeb10bca719ef5e0471047e3c16a83f19eb86f99ccba02f39fa6b8aea38a4-rootfs.mount: Deactivated successfully.
Jan 23 00:10:01.939923 containerd[1894]: time="2026-01-23T00:10:01.939866588Z" level=info msg="CreateContainer within sandbox \"c89cbbddeb279752d1866bff57b10bc95863023f170904873c39a4a7fd2cb2ea\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 00:10:01.959454 containerd[1894]: time="2026-01-23T00:10:01.959409707Z" level=info msg="Container c8a53183bd7fa22c0bd9a917e036017f5f0f2b2c2977e591a3e6202f59456272: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:10:01.976615 containerd[1894]: time="2026-01-23T00:10:01.976566054Z" level=info msg="CreateContainer within sandbox \"c89cbbddeb279752d1866bff57b10bc95863023f170904873c39a4a7fd2cb2ea\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c8a53183bd7fa22c0bd9a917e036017f5f0f2b2c2977e591a3e6202f59456272\""
Jan 23 00:10:01.977231 containerd[1894]: time="2026-01-23T00:10:01.977037135Z" level=info msg="StartContainer for \"c8a53183bd7fa22c0bd9a917e036017f5f0f2b2c2977e591a3e6202f59456272\""
Jan 23 00:10:01.978784 containerd[1894]: time="2026-01-23T00:10:01.978754203Z" level=info msg="connecting to shim c8a53183bd7fa22c0bd9a917e036017f5f0f2b2c2977e591a3e6202f59456272" address="unix:///run/containerd/s/7850da13777f0ce9355dc98bf4ae7696d534d0080d4dcb91c29f0226baf5e97a" protocol=ttrpc version=3
Jan 23 00:10:01.995923 systemd[1]: Started cri-containerd-c8a53183bd7fa22c0bd9a917e036017f5f0f2b2c2977e591a3e6202f59456272.scope - libcontainer container c8a53183bd7fa22c0bd9a917e036017f5f0f2b2c2977e591a3e6202f59456272.
Jan 23 00:10:02.032524 containerd[1894]: time="2026-01-23T00:10:02.032463532Z" level=info msg="StartContainer for \"c8a53183bd7fa22c0bd9a917e036017f5f0f2b2c2977e591a3e6202f59456272\" returns successfully"
Jan 23 00:10:02.476699 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 23 00:10:02.951807 kubelet[3423]: I0123 00:10:02.951502 3423 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kbtb6" podStartSLOduration=5.951488142 podStartE2EDuration="5.951488142s" podCreationTimestamp="2026-01-23 00:09:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:10:02.951350986 +0000 UTC m=+147.432618239" watchObservedRunningTime="2026-01-23 00:10:02.951488142 +0000 UTC m=+147.432755395"
Jan 23 00:10:04.870835 systemd-networkd[1486]: lxc_health: Link UP
Jan 23 00:10:04.882174 systemd-networkd[1486]: lxc_health: Gained carrier
Jan 23 00:10:06.325848 systemd-networkd[1486]: lxc_health: Gained IPv6LL
Jan 23 00:10:10.069703 sshd[5273]: Connection closed by 10.200.16.10 port 53416
Jan 23 00:10:10.070384 sshd-session[5225]: pam_unix(sshd:session): session closed for user core
Jan 23 00:10:10.075097 systemd-logind[1876]: Session 26 logged out. Waiting for processes to exit.
Jan 23 00:10:10.075955 systemd[1]: sshd@23-10.200.20.18:22-10.200.16.10:53416.service: Deactivated successfully.
Jan 23 00:10:10.079565 systemd[1]: session-26.scope: Deactivated successfully.
Jan 23 00:10:10.081210 systemd-logind[1876]: Removed session 26.