Jan 15 23:50:57.105750 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Jan 15 23:50:57.105769 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Jan 15 22:06:59 -00 2026
Jan 15 23:50:57.105776 kernel: KASLR enabled
Jan 15 23:50:57.105780 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 15 23:50:57.105783 kernel: printk: legacy bootconsole [pl11] enabled
Jan 15 23:50:57.105789 kernel: efi: EFI v2.7 by EDK II
Jan 15 23:50:57.105794 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db83598
Jan 15 23:50:57.105798 kernel: random: crng init done
Jan 15 23:50:57.105802 kernel: secureboot: Secure boot disabled
Jan 15 23:50:57.105805 kernel: ACPI: Early table checksum verification disabled
Jan 15 23:50:57.105809 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Jan 15 23:50:57.105813 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:50:57.105817 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:50:57.105821 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 15 23:50:57.105827 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:50:57.105832 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:50:57.105836 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:50:57.105840 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:50:57.105844 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:50:57.105849 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:50:57.105853 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 15 23:50:57.105858 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:50:57.105862 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 15 23:50:57.105866 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 15 23:50:57.105870 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 15 23:50:57.105874 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Jan 15 23:50:57.105878 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Jan 15 23:50:57.105882 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 15 23:50:57.105886 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 15 23:50:57.105891 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 15 23:50:57.105895 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 15 23:50:57.105900 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 15 23:50:57.105904 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 15 23:50:57.105908 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 15 23:50:57.105912 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 15 23:50:57.105916 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 15 23:50:57.105920 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Jan 15 23:50:57.105924 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Jan 15 23:50:57.105928 kernel: Zone ranges:
Jan 15 23:50:57.105933 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 15 23:50:57.105940 kernel: DMA32 empty
Jan 15 23:50:57.105944 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 15 23:50:57.105949 kernel: Device empty
Jan 15 23:50:57.105953 kernel: Movable zone start for each node
Jan 15 23:50:57.105957 kernel: Early memory node ranges
Jan 15 23:50:57.105962 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 15 23:50:57.105967 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Jan 15 23:50:57.105971 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Jan 15 23:50:57.105976 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Jan 15 23:50:57.105980 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Jan 15 23:50:57.105984 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Jan 15 23:50:57.105989 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 15 23:50:57.105993 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 15 23:50:57.105997 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 15 23:50:57.106002 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Jan 15 23:50:57.106006 kernel: psci: probing for conduit method from ACPI.
Jan 15 23:50:57.106010 kernel: psci: PSCIv1.3 detected in firmware.
Jan 15 23:50:57.106015 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 15 23:50:57.106020 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 15 23:50:57.106024 kernel: psci: SMC Calling Convention v1.4
Jan 15 23:50:57.106029 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 15 23:50:57.106033 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 15 23:50:57.106037 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 15 23:50:57.106042 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 15 23:50:57.106046 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 15 23:50:57.106050 kernel: Detected PIPT I-cache on CPU0
Jan 15 23:50:57.106055 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Jan 15 23:50:57.106059 kernel: CPU features: detected: GIC system register CPU interface
Jan 15 23:50:57.106064 kernel: CPU features: detected: Spectre-v4
Jan 15 23:50:57.106068 kernel: CPU features: detected: Spectre-BHB
Jan 15 23:50:57.106073 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 15 23:50:57.106077 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 15 23:50:57.106082 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Jan 15 23:50:57.106086 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 15 23:50:57.106090 kernel: alternatives: applying boot alternatives
Jan 15 23:50:57.106096 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=83f7d443283b2e87b6283ab8b3252eb2d2356b218981a63efeb3e370fba6f971
Jan 15 23:50:57.106100 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 15 23:50:57.106105 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 15 23:50:57.106109 kernel: Fallback order for Node 0: 0
Jan 15 23:50:57.106113 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Jan 15 23:50:57.106118 kernel: Policy zone: Normal
Jan 15 23:50:57.106123 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 15 23:50:57.106127 kernel: software IO TLB: area num 2.
Jan 15 23:50:57.106131 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB)
Jan 15 23:50:57.106136 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 15 23:50:57.106140 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 15 23:50:57.106145 kernel: rcu: RCU event tracing is enabled.
Jan 15 23:50:57.106150 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 15 23:50:57.106154 kernel: Trampoline variant of Tasks RCU enabled.
Jan 15 23:50:57.106158 kernel: Tracing variant of Tasks RCU enabled.
Jan 15 23:50:57.106163 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 15 23:50:57.106167 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 15 23:50:57.106173 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 15 23:50:57.106177 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 15 23:50:57.106181 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 15 23:50:57.106186 kernel: GICv3: 960 SPIs implemented
Jan 15 23:50:57.106190 kernel: GICv3: 0 Extended SPIs implemented
Jan 15 23:50:57.106194 kernel: Root IRQ handler: gic_handle_irq
Jan 15 23:50:57.106199 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jan 15 23:50:57.106203 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Jan 15 23:50:57.106207 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 15 23:50:57.106212 kernel: ITS: No ITS available, not enabling LPIs
Jan 15 23:50:57.106216 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 15 23:50:57.106221 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Jan 15 23:50:57.106226 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 15 23:50:57.106230 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Jan 15 23:50:57.106235 kernel: Console: colour dummy device 80x25
Jan 15 23:50:57.106239 kernel: printk: legacy console [tty1] enabled
Jan 15 23:50:57.106244 kernel: ACPI: Core revision 20240827
Jan 15 23:50:57.106249 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Jan 15 23:50:57.106253 kernel: pid_max: default: 32768 minimum: 301
Jan 15 23:50:57.106258 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 15 23:50:57.106262 kernel: landlock: Up and running.
Jan 15 23:50:57.106267 kernel: SELinux: Initializing.
Jan 15 23:50:57.106272 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 15 23:50:57.106277 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 15 23:50:57.106281 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Jan 15 23:50:57.106286 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0
Jan 15 23:50:57.106294 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 15 23:50:57.106299 kernel: rcu: Hierarchical SRCU implementation.
Jan 15 23:50:57.106304 kernel: rcu: Max phase no-delay instances is 400.
Jan 15 23:50:57.106309 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 15 23:50:57.106314 kernel: Remapping and enabling EFI services.
Jan 15 23:50:57.106318 kernel: smp: Bringing up secondary CPUs ...
Jan 15 23:50:57.106323 kernel: Detected PIPT I-cache on CPU1
Jan 15 23:50:57.106328 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 15 23:50:57.106333 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Jan 15 23:50:57.106338 kernel: smp: Brought up 1 node, 2 CPUs
Jan 15 23:50:57.106343 kernel: SMP: Total of 2 processors activated.
Jan 15 23:50:57.106347 kernel: CPU: All CPU(s) started at EL1
Jan 15 23:50:57.106353 kernel: CPU features: detected: 32-bit EL0 Support
Jan 15 23:50:57.106358 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 15 23:50:57.106363 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 15 23:50:57.106367 kernel: CPU features: detected: Common not Private translations
Jan 15 23:50:57.106372 kernel: CPU features: detected: CRC32 instructions
Jan 15 23:50:57.106377 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Jan 15 23:50:57.106381 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 15 23:50:57.106386 kernel: CPU features: detected: LSE atomic instructions
Jan 15 23:50:57.106391 kernel: CPU features: detected: Privileged Access Never
Jan 15 23:50:57.106397 kernel: CPU features: detected: Speculation barrier (SB)
Jan 15 23:50:57.106401 kernel: CPU features: detected: TLB range maintenance instructions
Jan 15 23:50:57.106406 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 15 23:50:57.106411 kernel: CPU features: detected: Scalable Vector Extension
Jan 15 23:50:57.106415 kernel: alternatives: applying system-wide alternatives
Jan 15 23:50:57.106420 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jan 15 23:50:57.106425 kernel: SVE: maximum available vector length 16 bytes per vector
Jan 15 23:50:57.106430 kernel: SVE: default vector length 16 bytes per vector
Jan 15 23:50:57.106435 kernel: Memory: 3952828K/4194160K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved)
Jan 15 23:50:57.106441 kernel: devtmpfs: initialized
Jan 15 23:50:57.106445 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 15 23:50:57.106450 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 15 23:50:57.106455 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 15 23:50:57.106459 kernel: 0 pages in range for non-PLT usage
Jan 15 23:50:57.106464 kernel: 508400 pages in range for PLT usage
Jan 15 23:50:57.106469 kernel: pinctrl core: initialized pinctrl subsystem
Jan 15 23:50:57.106473 kernel: SMBIOS 3.1.0 present.
Jan 15 23:50:57.106479 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Jan 15 23:50:57.106484 kernel: DMI: Memory slots populated: 2/2
Jan 15 23:50:57.106489 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 15 23:50:57.106493 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 15 23:50:57.106498 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 15 23:50:57.106503 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 15 23:50:57.106508 kernel: audit: initializing netlink subsys (disabled)
Jan 15 23:50:57.106512 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Jan 15 23:50:57.106517 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 15 23:50:57.106523 kernel: cpuidle: using governor menu
Jan 15 23:50:57.106528 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 15 23:50:57.106532 kernel: ASID allocator initialised with 32768 entries
Jan 15 23:50:57.106537 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 15 23:50:57.106542 kernel: Serial: AMBA PL011 UART driver
Jan 15 23:50:57.106547 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 15 23:50:57.106551 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 15 23:50:57.106556 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 15 23:50:57.106561 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 15 23:50:57.106566 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 15 23:50:57.106571 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 15 23:50:57.106576 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 15 23:50:57.106580 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 15 23:50:57.106585 kernel: ACPI: Added _OSI(Module Device)
Jan 15 23:50:57.106590 kernel: ACPI: Added _OSI(Processor Device)
Jan 15 23:50:57.106594 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 15 23:50:57.106599 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 15 23:50:57.106608 kernel: ACPI: Interpreter enabled
Jan 15 23:50:57.106614 kernel: ACPI: Using GIC for interrupt routing
Jan 15 23:50:57.106619 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 15 23:50:57.106697 kernel: printk: legacy console [ttyAMA0] enabled
Jan 15 23:50:57.106702 kernel: printk: legacy bootconsole [pl11] disabled
Jan 15 23:50:57.106707 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 15 23:50:57.106712 kernel: ACPI: CPU0 has been hot-added
Jan 15 23:50:57.106717 kernel: ACPI: CPU1 has been hot-added
Jan 15 23:50:57.106721 kernel: iommu: Default domain type: Translated
Jan 15 23:50:57.106726 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 15 23:50:57.106732 kernel: efivars: Registered efivars operations
Jan 15 23:50:57.106737 kernel: vgaarb: loaded
Jan 15 23:50:57.106742 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 15 23:50:57.106746 kernel: VFS: Disk quotas dquot_6.6.0
Jan 15 23:50:57.106751 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 15 23:50:57.106756 kernel: pnp: PnP ACPI init
Jan 15 23:50:57.106760 kernel: pnp: PnP ACPI: found 0 devices
Jan 15 23:50:57.106765 kernel: NET: Registered PF_INET protocol family
Jan 15 23:50:57.106770 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 15 23:50:57.106775 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 15 23:50:57.106781 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 15 23:50:57.106786 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 15 23:50:57.106790 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 15 23:50:57.106795 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 15 23:50:57.106800 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 15 23:50:57.106805 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 15 23:50:57.106809 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 15 23:50:57.106814 kernel: PCI: CLS 0 bytes, default 64
Jan 15 23:50:57.106819 kernel: kvm [1]: HYP mode not available
Jan 15 23:50:57.106824 kernel: Initialise system trusted keyrings
Jan 15 23:50:57.106829 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 15 23:50:57.106834 kernel: Key type asymmetric registered
Jan 15 23:50:57.106839 kernel: Asymmetric key parser 'x509' registered
Jan 15 23:50:57.106843 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jan 15 23:50:57.106848 kernel: io scheduler mq-deadline registered
Jan 15 23:50:57.106853 kernel: io scheduler kyber registered
Jan 15 23:50:57.106857 kernel: io scheduler bfq registered
Jan 15 23:50:57.106862 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 15 23:50:57.106868 kernel: thunder_xcv, ver 1.0
Jan 15 23:50:57.106872 kernel: thunder_bgx, ver 1.0
Jan 15 23:50:57.106877 kernel: nicpf, ver 1.0
Jan 15 23:50:57.106882 kernel: nicvf, ver 1.0
Jan 15 23:50:57.106996 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 15 23:50:57.107048 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-15T23:50:56 UTC (1768521056)
Jan 15 23:50:57.107054 kernel: efifb: probing for efifb
Jan 15 23:50:57.107060 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 15 23:50:57.107065 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 15 23:50:57.107070 kernel: efifb: scrolling: redraw
Jan 15 23:50:57.107074 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 15 23:50:57.107079 kernel: Console: switching to colour frame buffer device 128x48
Jan 15 23:50:57.107084 kernel: fb0: EFI VGA frame buffer device
Jan 15 23:50:57.107089 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 15 23:50:57.107093 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 15 23:50:57.107098 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jan 15 23:50:57.107104 kernel: watchdog: NMI not fully supported
Jan 15 23:50:57.107109 kernel: watchdog: Hard watchdog permanently disabled
Jan 15 23:50:57.107114 kernel: NET: Registered PF_INET6 protocol family
Jan 15 23:50:57.107118 kernel: Segment Routing with IPv6
Jan 15 23:50:57.107123 kernel: In-situ OAM (IOAM) with IPv6
Jan 15 23:50:57.107128 kernel: NET: Registered PF_PACKET protocol family
Jan 15 23:50:57.107132 kernel: Key type dns_resolver registered
Jan 15 23:50:57.107137 kernel: registered taskstats version 1
Jan 15 23:50:57.107142 kernel: Loading compiled-in X.509 certificates
Jan 15 23:50:57.107147 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: b110dfc7e70ecac41e34f52a0c530f0543b60d51'
Jan 15 23:50:57.107152 kernel: Demotion targets for Node 0: null
Jan 15 23:50:57.107157 kernel: Key type .fscrypt registered
Jan 15 23:50:57.107162 kernel: Key type fscrypt-provisioning registered
Jan 15 23:50:57.107166 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 15 23:50:57.107171 kernel: ima: Allocated hash algorithm: sha1
Jan 15 23:50:57.107176 kernel: ima: No architecture policies found
Jan 15 23:50:57.107181 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 15 23:50:57.107185 kernel: clk: Disabling unused clocks
Jan 15 23:50:57.107190 kernel: PM: genpd: Disabling unused power domains
Jan 15 23:50:57.107195 kernel: Warning: unable to open an initial console.
Jan 15 23:50:57.107200 kernel: Freeing unused kernel memory: 39552K
Jan 15 23:50:57.107205 kernel: Run /init as init process
Jan 15 23:50:57.107209 kernel: with arguments:
Jan 15 23:50:57.107214 kernel: /init
Jan 15 23:50:57.107219 kernel: with environment:
Jan 15 23:50:57.107223 kernel: HOME=/
Jan 15 23:50:57.107228 kernel: TERM=linux
Jan 15 23:50:57.107234 systemd[1]: Successfully made /usr/ read-only.
Jan 15 23:50:57.107241 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 15 23:50:57.107247 systemd[1]: Detected virtualization microsoft.
Jan 15 23:50:57.107252 systemd[1]: Detected architecture arm64.
Jan 15 23:50:57.107257 systemd[1]: Running in initrd.
Jan 15 23:50:57.107262 systemd[1]: No hostname configured, using default hostname.
Jan 15 23:50:57.107267 systemd[1]: Hostname set to .
Jan 15 23:50:57.107272 systemd[1]: Initializing machine ID from random generator.
Jan 15 23:50:57.107278 systemd[1]: Queued start job for default target initrd.target.
Jan 15 23:50:57.107284 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 15 23:50:57.107292 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 15 23:50:57.107298 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 15 23:50:57.107303 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 15 23:50:57.107308 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 15 23:50:57.107314 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 15 23:50:57.107321 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 15 23:50:57.107326 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 15 23:50:57.107332 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 15 23:50:57.107337 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 15 23:50:57.107342 systemd[1]: Reached target paths.target - Path Units.
Jan 15 23:50:57.107347 systemd[1]: Reached target slices.target - Slice Units.
Jan 15 23:50:57.107352 systemd[1]: Reached target swap.target - Swaps.
Jan 15 23:50:57.107357 systemd[1]: Reached target timers.target - Timer Units.
Jan 15 23:50:57.107363 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 15 23:50:57.107369 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 15 23:50:57.107374 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 15 23:50:57.107379 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 15 23:50:57.107384 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 15 23:50:57.107390 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 15 23:50:57.107395 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 15 23:50:57.107400 systemd[1]: Reached target sockets.target - Socket Units.
Jan 15 23:50:57.107405 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 15 23:50:57.107411 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 15 23:50:57.107416 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 15 23:50:57.107422 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 15 23:50:57.107427 systemd[1]: Starting systemd-fsck-usr.service...
Jan 15 23:50:57.107432 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 15 23:50:57.107437 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 15 23:50:57.107454 systemd-journald[225]: Collecting audit messages is disabled.
Jan 15 23:50:57.107468 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 23:50:57.107474 systemd-journald[225]: Journal started
Jan 15 23:50:57.107489 systemd-journald[225]: Runtime Journal (/run/log/journal/3f75b89c16ca49cfbcd7bf43d16cb512) is 8M, max 78.3M, 70.3M free.
Jan 15 23:50:57.114577 systemd-modules-load[227]: Inserted module 'overlay'
Jan 15 23:50:57.134594 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 15 23:50:57.136299 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 15 23:50:57.153815 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 15 23:50:57.153841 kernel: Bridge firewalling registered
Jan 15 23:50:57.156412 systemd-modules-load[227]: Inserted module 'br_netfilter'
Jan 15 23:50:57.159916 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 15 23:50:57.165985 systemd[1]: Finished systemd-fsck-usr.service.
Jan 15 23:50:57.176643 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 15 23:50:57.184645 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 23:50:57.195706 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 15 23:50:57.212247 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 15 23:50:57.227429 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 15 23:50:57.240515 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 15 23:50:57.258407 systemd-tmpfiles[245]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 15 23:50:57.258544 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 15 23:50:57.266652 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 15 23:50:57.281518 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 15 23:50:57.293158 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 15 23:50:57.305224 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 15 23:50:57.323759 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 15 23:50:57.331180 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=83f7d443283b2e87b6283ab8b3252eb2d2356b218981a63efeb3e370fba6f971
Jan 15 23:50:57.363761 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 15 23:50:57.382862 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 15 23:50:57.401643 kernel: SCSI subsystem initialized
Jan 15 23:50:57.409642 kernel: Loading iSCSI transport class v2.0-870.
Jan 15 23:50:57.411740 systemd-resolved[262]: Positive Trust Anchors:
Jan 15 23:50:57.411757 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 15 23:50:57.411776 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 15 23:50:57.462966 kernel: iscsi: registered transport (tcp)
Jan 15 23:50:57.416132 systemd-resolved[262]: Defaulting to hostname 'linux'.
Jan 15 23:50:57.421739 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 15 23:50:57.454608 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 15 23:50:57.479601 kernel: iscsi: registered transport (qla4xxx)
Jan 15 23:50:57.479627 kernel: QLogic iSCSI HBA Driver
Jan 15 23:50:57.493764 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 15 23:50:57.509466 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 15 23:50:57.515534 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 15 23:50:57.566802 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 15 23:50:57.573761 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 15 23:50:57.636651 kernel: raid6: neonx8 gen() 18522 MB/s
Jan 15 23:50:57.655636 kernel: raid6: neonx4 gen() 18565 MB/s
Jan 15 23:50:57.674631 kernel: raid6: neonx2 gen() 17066 MB/s
Jan 15 23:50:57.694631 kernel: raid6: neonx1 gen() 15137 MB/s
Jan 15 23:50:57.713730 kernel: raid6: int64x8 gen() 10533 MB/s
Jan 15 23:50:57.732727 kernel: raid6: int64x4 gen() 10612 MB/s
Jan 15 23:50:57.752653 kernel: raid6: int64x2 gen() 9005 MB/s
Jan 15 23:50:57.774058 kernel: raid6: int64x1 gen() 7010 MB/s
Jan 15 23:50:57.774140 kernel: raid6: using algorithm neonx4 gen() 18565 MB/s
Jan 15 23:50:57.796519 kernel: raid6: .... xor() 15142 MB/s, rmw enabled
Jan 15 23:50:57.796528 kernel: raid6: using neon recovery algorithm
Jan 15 23:50:57.804925 kernel: xor: measuring software checksum speed
Jan 15 23:50:57.804932 kernel: 8regs : 28519 MB/sec
Jan 15 23:50:57.808077 kernel: 32regs : 28739 MB/sec
Jan 15 23:50:57.811987 kernel: arm64_neon : 37367 MB/sec
Jan 15 23:50:57.815597 kernel: xor: using function: arm64_neon (37367 MB/sec)
Jan 15 23:50:57.853648 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 15 23:50:57.859303 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 15 23:50:57.869093 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 15 23:50:57.899335 systemd-udevd[474]: Using default interface naming scheme 'v255'.
Jan 15 23:50:57.905398 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 15 23:50:57.919586 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 15 23:50:57.948199 dracut-pre-trigger[489]: rd.md=0: removing MD RAID activation
Jan 15 23:50:57.968902 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 15 23:50:57.975048 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 15 23:50:58.020657 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 15 23:50:58.028314 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 15 23:50:58.094657 kernel: hv_vmbus: Vmbus version:5.3
Jan 15 23:50:58.107619 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 15 23:50:58.118046 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 15 23:50:58.118067 kernel: hv_vmbus: registering driver hv_netvsc
Jan 15 23:50:58.118074 kernel: hv_vmbus: registering driver hid_hyperv
Jan 15 23:50:58.112280 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 23:50:58.147523 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 15 23:50:58.147542 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 15 23:50:58.147551 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 15 23:50:58.147716 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 15 23:50:58.131635 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 23:50:58.166399 kernel: PTP clock support registered
Jan 15 23:50:58.156887 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 23:50:58.172977 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 15 23:50:58.218390 kernel: hv_vmbus: registering driver hv_storvsc
Jan 15 23:50:58.218410 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 15 23:50:58.218418 kernel: hv_utils: Registering HyperV Utility Driver
Jan 15 23:50:58.218433 kernel: scsi host1: storvsc_host_t
Jan 15 23:50:58.218583 kernel: scsi host0: storvsc_host_t
Jan 15 23:50:58.218692 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 15 23:50:58.218712 kernel: hv_vmbus: registering driver hv_utils
Jan 15 23:50:58.218725 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jan 15 23:50:58.180306 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 15 23:50:58.230539 kernel: hv_utils: Heartbeat IC version 3.0
Jan 15 23:50:58.230560 kernel: hv_utils: Shutdown IC version 3.2
Jan 15 23:50:58.180388 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 23:50:58.241556 kernel: hv_netvsc 7ced8dcf-d348-7ced-8dcf-d3487ced8dcf eth0: VF slot 1 added
Jan 15 23:50:58.243977 kernel: hv_utils: TimeSync IC version 4.0
Jan 15 23:50:58.242758 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 15 23:50:58.244530 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 23:50:58.009518 systemd-journald[225]: Time jumped backwards, rotating.
Jan 15 23:50:58.009554 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 15 23:50:57.986480 systemd-resolved[262]: Clock change detected. Flushing caches.
Jan 15 23:50:58.037683 kernel: hv_vmbus: registering driver hv_pci
Jan 15 23:50:58.037703 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 15 23:50:58.037891 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 15 23:50:58.037963 kernel: hv_pci 784a6396-f954-47f1-ad64-f7a998dae844: PCI VMBus probing: Using version 0x10004
Jan 15 23:50:58.038041 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 15 23:50:58.038104 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 15 23:50:58.046216 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#256 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jan 15 23:50:58.048585 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jan 15 23:50:58.058029 kernel: hv_pci 784a6396-f954-47f1-ad64-f7a998dae844: PCI host bridge to bus f954:00
Jan 15 23:50:58.053618 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 23:50:58.088986 kernel: pci_bus f954:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 15 23:50:58.089185 kernel: pci_bus f954:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 15 23:50:58.089247 kernel: pci f954:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Jan 15 23:50:58.089265 kernel: pci f954:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 15 23:50:58.092692 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 15 23:50:58.092710 kernel: pci f954:00:02.0: enabling Extended Tags
Jan 15 23:50:58.101443 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 15 23:50:58.114469 kernel: pci f954:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f954:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Jan 15 23:50:58.114695 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 15 23:50:58.114787 kernel: pci_bus f954:00: busn_res: [bus 00-ff] end is updated to 00
Jan 15 23:50:58.121234 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 15 23:50:58.121274 kernel: pci f954:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Jan 15 23:50:58.128455 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 15 23:50:58.147450 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#19 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 15 23:50:58.170474 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 15 23:50:58.202781 kernel: mlx5_core f954:00:02.0: enabling device (0000 -> 0002)
Jan 15 23:50:58.212016 kernel: mlx5_core f954:00:02.0: PTM is not supported by PCIe
Jan 15 23:50:58.212218 kernel: mlx5_core f954:00:02.0: firmware version: 16.30.5026
Jan 15 23:50:58.396439 kernel: hv_netvsc 7ced8dcf-d348-7ced-8dcf-d3487ced8dcf eth0: VF registering: eth1
Jan 15 23:50:58.396654 kernel: mlx5_core f954:00:02.0 eth1: joined to eth0
Jan 15 23:50:58.401770 kernel: mlx5_core f954:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 15 23:50:58.411449 kernel: mlx5_core f954:00:02.0 enP63828s1: renamed from eth1
Jan 15 23:50:58.563006 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 15 23:50:58.653282 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 15 23:50:58.707446 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 15 23:50:58.714444 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 15 23:50:58.729169 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 15 23:50:58.740787 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 15 23:50:58.751032 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 15 23:50:58.756227 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 15 23:50:58.765982 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 15 23:50:58.775449 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 15 23:50:58.803244 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 15 23:50:58.826450 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#46 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jan 15 23:50:58.839363 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 15 23:50:58.839486 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 15 23:50:59.850916 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#281 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jan 15 23:50:59.864047 disk-uuid[664]: The operation has completed successfully.
Jan 15 23:50:59.868351 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 15 23:50:59.942952 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 15 23:50:59.944457 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 15 23:50:59.965439 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 15 23:50:59.985952 sh[829]: Success
Jan 15 23:51:00.021041 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 15 23:51:00.021109 kernel: device-mapper: uevent: version 1.0.3
Jan 15 23:51:00.026245 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 15 23:51:00.036464 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jan 15 23:51:00.299075 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 15 23:51:00.307521 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 15 23:51:00.318338 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 15 23:51:00.343474 kernel: BTRFS: device fsid 4e574c26-9d5a-48bc-a727-ae12db8ee9fc devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (847)
Jan 15 23:51:00.353065 kernel: BTRFS info (device dm-0): first mount of filesystem 4e574c26-9d5a-48bc-a727-ae12db8ee9fc
Jan 15 23:51:00.353111 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 15 23:51:00.655497 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 15 23:51:00.655583 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 15 23:51:00.745692 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 15 23:51:00.749711 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 15 23:51:00.757396 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 15 23:51:00.758120 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 15 23:51:00.785479 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 15 23:51:00.817474 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (873)
Jan 15 23:51:00.828485 kernel: BTRFS info (device sda6): first mount of filesystem c6a95867-5704-41e1-8beb-48e00b50aef1
Jan 15 23:51:00.828541 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 23:51:00.880662 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 15 23:51:00.891490 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 15 23:51:00.923415 systemd-networkd[1010]: lo: Link UP
Jan 15 23:51:00.923437 systemd-networkd[1010]: lo: Gained carrier
Jan 15 23:51:00.924597 systemd-networkd[1010]: Enumeration completed
Jan 15 23:51:00.926228 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 15 23:51:00.930820 systemd-networkd[1010]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 15 23:51:00.961250 kernel: BTRFS info (device sda6): turning on async discard
Jan 15 23:51:00.961272 kernel: BTRFS info (device sda6): enabling free space tree
Jan 15 23:51:00.930824 systemd-networkd[1010]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 15 23:51:00.931442 systemd[1]: Reached target network.target - Network.
Jan 15 23:51:00.977448 kernel: BTRFS info (device sda6): last unmount of filesystem c6a95867-5704-41e1-8beb-48e00b50aef1
Jan 15 23:51:00.978516 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 15 23:51:00.984119 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 15 23:51:01.036442 kernel: mlx5_core f954:00:02.0 enP63828s1: Link up
Jan 15 23:51:01.072059 systemd-networkd[1010]: enP63828s1: Link UP
Jan 15 23:51:01.075751 kernel: hv_netvsc 7ced8dcf-d348-7ced-8dcf-d3487ced8dcf eth0: Data path switched to VF: enP63828s1
Jan 15 23:51:01.072120 systemd-networkd[1010]: eth0: Link UP
Jan 15 23:51:01.072224 systemd-networkd[1010]: eth0: Gained carrier
Jan 15 23:51:01.072239 systemd-networkd[1010]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 15 23:51:01.091461 systemd-networkd[1010]: enP63828s1: Gained carrier
Jan 15 23:51:01.106466 systemd-networkd[1010]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 15 23:51:02.195237 ignition[1018]: Ignition 2.22.0
Jan 15 23:51:02.195252 ignition[1018]: Stage: fetch-offline
Jan 15 23:51:02.195361 ignition[1018]: no configs at "/usr/lib/ignition/base.d"
Jan 15 23:51:02.199524 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 15 23:51:02.195367 ignition[1018]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 23:51:02.207542 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 15 23:51:02.198135 ignition[1018]: parsed url from cmdline: ""
Jan 15 23:51:02.198140 ignition[1018]: no config URL provided
Jan 15 23:51:02.198146 ignition[1018]: reading system config file "/usr/lib/ignition/user.ign"
Jan 15 23:51:02.198158 ignition[1018]: no config at "/usr/lib/ignition/user.ign"
Jan 15 23:51:02.198163 ignition[1018]: failed to fetch config: resource requires networking
Jan 15 23:51:02.198307 ignition[1018]: Ignition finished successfully
Jan 15 23:51:02.247324 ignition[1025]: Ignition 2.22.0
Jan 15 23:51:02.247329 ignition[1025]: Stage: fetch
Jan 15 23:51:02.247591 ignition[1025]: no configs at "/usr/lib/ignition/base.d"
Jan 15 23:51:02.247599 ignition[1025]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 23:51:02.247679 ignition[1025]: parsed url from cmdline: ""
Jan 15 23:51:02.247682 ignition[1025]: no config URL provided
Jan 15 23:51:02.247685 ignition[1025]: reading system config file "/usr/lib/ignition/user.ign"
Jan 15 23:51:02.247690 ignition[1025]: no config at "/usr/lib/ignition/user.ign"
Jan 15 23:51:02.247712 ignition[1025]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 15 23:51:02.318824 ignition[1025]: GET result: OK
Jan 15 23:51:02.319010 ignition[1025]: config has been read from IMDS userdata
Jan 15 23:51:02.319038 ignition[1025]: parsing config with SHA512: 4042283b99f67e831cbc1509be4291bc6bcb12dd80c3e21d94c284818094e7181843f906ba2746392b9145f6f3365dbb93801047636a779ef932a455fbbc9a29
Jan 15 23:51:02.321618 unknown[1025]: fetched base config from "system"
Jan 15 23:51:02.321851 ignition[1025]: fetch: fetch complete
Jan 15 23:51:02.321623 unknown[1025]: fetched base config from "system"
Jan 15 23:51:02.321854 ignition[1025]: fetch: fetch passed
Jan 15 23:51:02.321626 unknown[1025]: fetched user config from "azure"
Jan 15 23:51:02.321894 ignition[1025]: Ignition finished successfully
Jan 15 23:51:02.326694 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 15 23:51:02.337267 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 15 23:51:02.375679 ignition[1032]: Ignition 2.22.0
Jan 15 23:51:02.375692 ignition[1032]: Stage: kargs
Jan 15 23:51:02.379009 ignition[1032]: no configs at "/usr/lib/ignition/base.d"
Jan 15 23:51:02.385660 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 15 23:51:02.379018 ignition[1032]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 23:51:02.392918 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 15 23:51:02.379559 ignition[1032]: kargs: kargs passed
Jan 15 23:51:02.379609 ignition[1032]: Ignition finished successfully
Jan 15 23:51:02.424247 ignition[1039]: Ignition 2.22.0
Jan 15 23:51:02.424264 ignition[1039]: Stage: disks
Jan 15 23:51:02.428589 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 15 23:51:02.424474 ignition[1039]: no configs at "/usr/lib/ignition/base.d"
Jan 15 23:51:02.434896 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 15 23:51:02.424482 ignition[1039]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 23:51:02.443143 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 15 23:51:02.424963 ignition[1039]: disks: disks passed
Jan 15 23:51:02.451352 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 15 23:51:02.425006 ignition[1039]: Ignition finished successfully
Jan 15 23:51:02.459779 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 15 23:51:02.467913 systemd[1]: Reached target basic.target - Basic System.
Jan 15 23:51:02.477241 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 15 23:51:02.554446 systemd-fsck[1048]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Jan 15 23:51:02.560529 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 15 23:51:02.566912 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 15 23:51:02.722722 systemd-networkd[1010]: eth0: Gained IPv6LL
Jan 15 23:51:02.815436 kernel: EXT4-fs (sda9): mounted filesystem e775b4a8-7fa9-4c45-80b7-b5e0f0a5e4b9 r/w with ordered data mode. Quota mode: none.
Jan 15 23:51:02.816534 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 15 23:51:02.820736 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 15 23:51:02.842669 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 15 23:51:02.848069 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 15 23:51:02.863922 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 15 23:51:02.871932 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 15 23:51:02.871967 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 15 23:51:02.883962 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 15 23:51:02.897597 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 15 23:51:02.919439 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1062)
Jan 15 23:51:02.930507 kernel: BTRFS info (device sda6): first mount of filesystem c6a95867-5704-41e1-8beb-48e00b50aef1
Jan 15 23:51:02.930556 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 23:51:02.940004 kernel: BTRFS info (device sda6): turning on async discard
Jan 15 23:51:02.940037 kernel: BTRFS info (device sda6): enabling free space tree
Jan 15 23:51:02.941376 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 15 23:51:03.447849 coreos-metadata[1064]: Jan 15 23:51:03.447 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 15 23:51:03.456859 coreos-metadata[1064]: Jan 15 23:51:03.456 INFO Fetch successful
Jan 15 23:51:03.456859 coreos-metadata[1064]: Jan 15 23:51:03.456 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 15 23:51:03.469644 coreos-metadata[1064]: Jan 15 23:51:03.469 INFO Fetch successful
Jan 15 23:51:03.469644 coreos-metadata[1064]: Jan 15 23:51:03.469 INFO wrote hostname ci-4459.2.2-n-e85017da3c to /sysroot/etc/hostname
Jan 15 23:51:03.474051 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 15 23:51:03.759974 initrd-setup-root[1092]: cut: /sysroot/etc/passwd: No such file or directory
Jan 15 23:51:03.810679 initrd-setup-root[1099]: cut: /sysroot/etc/group: No such file or directory
Jan 15 23:51:03.833995 initrd-setup-root[1106]: cut: /sysroot/etc/shadow: No such file or directory
Jan 15 23:51:03.841405 initrd-setup-root[1113]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 15 23:51:04.627170 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 15 23:51:04.632640 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 15 23:51:04.654222 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 15 23:51:04.666143 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 15 23:51:04.674808 kernel: BTRFS info (device sda6): last unmount of filesystem c6a95867-5704-41e1-8beb-48e00b50aef1
Jan 15 23:51:04.698538 ignition[1181]: INFO : Ignition 2.22.0
Jan 15 23:51:04.698538 ignition[1181]: INFO : Stage: mount
Jan 15 23:51:04.709497 ignition[1181]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 23:51:04.709497 ignition[1181]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 23:51:04.709497 ignition[1181]: INFO : mount: mount passed
Jan 15 23:51:04.709497 ignition[1181]: INFO : Ignition finished successfully
Jan 15 23:51:04.700162 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 15 23:51:04.707284 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 15 23:51:04.715229 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 15 23:51:04.737556 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 15 23:51:04.765438 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1192)
Jan 15 23:51:04.781338 kernel: BTRFS info (device sda6): first mount of filesystem c6a95867-5704-41e1-8beb-48e00b50aef1
Jan 15 23:51:04.781351 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 23:51:04.790505 kernel: BTRFS info (device sda6): turning on async discard
Jan 15 23:51:04.790519 kernel: BTRFS info (device sda6): enabling free space tree
Jan 15 23:51:04.792529 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 15 23:51:04.821287 ignition[1210]: INFO : Ignition 2.22.0
Jan 15 23:51:04.821287 ignition[1210]: INFO : Stage: files
Jan 15 23:51:04.821287 ignition[1210]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 23:51:04.821287 ignition[1210]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 23:51:04.836387 ignition[1210]: DEBUG : files: compiled without relabeling support, skipping
Jan 15 23:51:04.841245 ignition[1210]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 15 23:51:04.841245 ignition[1210]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 15 23:51:04.911630 ignition[1210]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 15 23:51:04.917109 ignition[1210]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 15 23:51:04.917109 ignition[1210]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 15 23:51:04.911988 unknown[1210]: wrote ssh authorized keys file for user: core
Jan 15 23:51:04.946340 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 15 23:51:04.954884 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 15 23:51:04.979522 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 15 23:51:05.138826 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 15 23:51:05.147266 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 15 23:51:05.147266 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 15 23:51:05.193622 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 15 23:51:05.285325 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 15 23:51:05.285325 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 15 23:51:05.299456 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 15 23:51:05.299456 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 15 23:51:05.299456 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 15 23:51:05.299456 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 15 23:51:05.299456 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 15 23:51:05.299456 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 15 23:51:05.299456 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 15 23:51:05.347557 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 15 23:51:05.347557 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 15 23:51:05.347557 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 15 23:51:05.347557 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 15 23:51:05.347557 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 15 23:51:05.347557 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jan 15 23:51:05.812432 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 15 23:51:06.010958 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 15 23:51:06.010958 ignition[1210]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 15 23:51:06.041502 ignition[1210]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 15 23:51:06.073529 ignition[1210]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 15 23:51:06.073529 ignition[1210]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 15 23:51:06.073529 ignition[1210]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 15 23:51:06.102521 ignition[1210]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 15 23:51:06.102521 ignition[1210]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 15 23:51:06.102521 ignition[1210]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 15 23:51:06.102521 ignition[1210]: INFO : files: files passed
Jan 15 23:51:06.102521 ignition[1210]: INFO : Ignition finished successfully
Jan 15 23:51:06.082371 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 15 23:51:06.093319 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 15 23:51:06.117193 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 15 23:51:06.132759 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 15 23:51:06.135898 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 15 23:51:06.165776 initrd-setup-root-after-ignition[1239]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 15 23:51:06.165776 initrd-setup-root-after-ignition[1239]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 15 23:51:06.181697 initrd-setup-root-after-ignition[1243]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 15 23:51:06.175788 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 15 23:51:06.187768 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 15 23:51:06.198774 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 15 23:51:06.248343 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 15 23:51:06.248479 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 15 23:51:06.257279 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 15 23:51:06.266650 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 15 23:51:06.274525 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 15 23:51:06.275273 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 15 23:51:06.308907 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 15 23:51:06.315526 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 15 23:51:06.340248 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 15 23:51:06.348176 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 15 23:51:06.358930 systemd[1]: Stopped target timers.target - Timer Units.
Jan 15 23:51:06.368104 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 15 23:51:06.368222 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 15 23:51:06.382121 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 15 23:51:06.393728 systemd[1]: Stopped target basic.target - Basic System.
Jan 15 23:51:06.403724 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 15 23:51:06.412223 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 15 23:51:06.424442 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 15 23:51:06.433755 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 15 23:51:06.443846 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 15 23:51:06.453690 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 15 23:51:06.463496 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 15 23:51:06.477792 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 15 23:51:06.486887 systemd[1]: Stopped target swap.target - Swaps.
Jan 15 23:51:06.494642 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 15 23:51:06.494769 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 15 23:51:06.506633 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 15 23:51:06.511653 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 15 23:51:06.523988 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 15 23:51:06.528252 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 15 23:51:06.533828 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 15 23:51:06.533942 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 15 23:51:06.547594 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 15 23:51:06.547680 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 15 23:51:06.553355 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 15 23:51:06.553438 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 15 23:51:06.563635 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 15 23:51:06.563707 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 15 23:51:06.573762 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 15 23:51:06.647071 ignition[1263]: INFO : Ignition 2.22.0 Jan 15 23:51:06.647071 ignition[1263]: INFO : Stage: umount Jan 15 23:51:06.647071 ignition[1263]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 23:51:06.647071 ignition[1263]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 23:51:06.647071 ignition[1263]: INFO : umount: umount passed Jan 15 23:51:06.647071 ignition[1263]: INFO : Ignition finished successfully Jan 15 23:51:06.590455 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 15 23:51:06.590609 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 23:51:06.610600 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 15 23:51:06.622708 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 15 23:51:06.622876 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 23:51:06.630933 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 15 23:51:06.631060 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 23:51:06.651290 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 15 23:51:06.651560 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 15 23:51:06.659004 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 15 23:51:06.659095 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 15 23:51:06.670359 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 15 23:51:06.672319 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 15 23:51:06.672387 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 15 23:51:06.685028 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 15 23:51:06.685099 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 15 23:51:06.689632 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Jan 15 23:51:06.689670 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 15 23:51:06.697924 systemd[1]: Stopped target network.target - Network. Jan 15 23:51:06.706907 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 15 23:51:06.706989 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 23:51:06.714878 systemd[1]: Stopped target paths.target - Path Units. Jan 15 23:51:06.723129 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 15 23:51:06.727457 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 23:51:06.737057 systemd[1]: Stopped target slices.target - Slice Units. Jan 15 23:51:06.744570 systemd[1]: Stopped target sockets.target - Socket Units. Jan 15 23:51:06.752044 systemd[1]: iscsid.socket: Deactivated successfully. Jan 15 23:51:06.752087 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 23:51:06.759442 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 15 23:51:06.759468 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 23:51:06.767927 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 15 23:51:06.767985 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 15 23:51:06.776986 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 15 23:51:06.777016 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 15 23:51:06.786039 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 15 23:51:06.793687 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 15 23:51:06.814831 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 15 23:51:06.814974 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Jan 15 23:51:06.997618 kernel: hv_netvsc 7ced8dcf-d348-7ced-8dcf-d3487ced8dcf eth0: Data path switched from VF: enP63828s1 Jan 15 23:51:06.827495 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 15 23:51:06.827712 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 15 23:51:06.827838 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 15 23:51:06.841689 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 15 23:51:06.842272 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 15 23:51:06.849980 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 15 23:51:06.850023 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 15 23:51:06.864553 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 15 23:51:06.868690 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 15 23:51:06.868768 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 23:51:06.876567 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 15 23:51:06.876622 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 15 23:51:06.883711 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 15 23:51:06.883764 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 15 23:51:06.889006 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 15 23:51:06.889059 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 23:51:06.900672 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 23:51:06.909025 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jan 15 23:51:06.909087 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 15 23:51:06.928613 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 15 23:51:06.928783 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 23:51:06.938304 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 15 23:51:06.938341 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 15 23:51:06.947722 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 15 23:51:06.947752 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 23:51:06.956577 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 15 23:51:06.956622 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 15 23:51:06.970240 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 15 23:51:06.970283 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 15 23:51:06.991686 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 23:51:06.991743 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 23:51:06.998572 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 15 23:51:07.013332 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 15 23:51:07.013410 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 23:51:07.031156 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 15 23:51:07.031210 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 23:51:07.041685 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 23:51:07.041755 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 15 23:51:07.052205 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 15 23:51:07.052251 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 15 23:51:07.052277 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 15 23:51:07.052548 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 15 23:51:07.052654 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 15 23:51:07.088174 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 15 23:51:07.088310 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 15 23:51:07.126416 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 15 23:51:07.126541 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 15 23:51:07.135124 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 15 23:51:07.142549 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 15 23:51:07.142616 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 15 23:51:07.293934 systemd-journald[225]: Received SIGTERM from PID 1 (systemd). Jan 15 23:51:07.153015 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 15 23:51:07.189180 systemd[1]: Switching root. 
Jan 15 23:51:07.300223 systemd-journald[225]: Journal stopped Jan 15 23:51:11.770959 kernel: SELinux: policy capability network_peer_controls=1 Jan 15 23:51:11.770988 kernel: SELinux: policy capability open_perms=1 Jan 15 23:51:11.770996 kernel: SELinux: policy capability extended_socket_class=1 Jan 15 23:51:11.771001 kernel: SELinux: policy capability always_check_network=0 Jan 15 23:51:11.771007 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 15 23:51:11.771013 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 15 23:51:11.771021 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 15 23:51:11.771027 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 15 23:51:11.771032 kernel: SELinux: policy capability userspace_initial_context=0 Jan 15 23:51:11.771039 systemd[1]: Successfully loaded SELinux policy in 194.170ms. Jan 15 23:51:11.771046 kernel: audit: type=1403 audit(1768521068.314:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 15 23:51:11.771053 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.374ms. Jan 15 23:51:11.771060 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 15 23:51:11.771066 systemd[1]: Detected virtualization microsoft. Jan 15 23:51:11.771073 systemd[1]: Detected architecture arm64. Jan 15 23:51:11.771080 systemd[1]: Detected first boot. Jan 15 23:51:11.771087 systemd[1]: Hostname set to . Jan 15 23:51:11.771093 systemd[1]: Initializing machine ID from random generator. Jan 15 23:51:11.771099 zram_generator::config[1305]: No configuration found. 
Jan 15 23:51:11.771105 kernel: NET: Registered PF_VSOCK protocol family Jan 15 23:51:11.771112 systemd[1]: Populated /etc with preset unit settings. Jan 15 23:51:11.771118 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 15 23:51:11.771125 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 15 23:51:11.771132 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 15 23:51:11.771137 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 15 23:51:11.771144 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 15 23:51:11.771150 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 15 23:51:11.771158 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 15 23:51:11.771164 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 15 23:51:11.771170 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 15 23:51:11.771178 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 15 23:51:11.771185 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 15 23:51:11.771191 systemd[1]: Created slice user.slice - User and Session Slice. Jan 15 23:51:11.771197 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 23:51:11.771204 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 23:51:11.771210 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 15 23:51:11.771216 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 15 23:51:11.771223 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 15 23:51:11.771231 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 23:51:11.771237 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 15 23:51:11.771245 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 23:51:11.771251 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 23:51:11.771257 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 15 23:51:11.771264 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 15 23:51:11.771270 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 15 23:51:11.771276 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 15 23:51:11.771284 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 23:51:11.771291 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 23:51:11.771297 systemd[1]: Reached target slices.target - Slice Units. Jan 15 23:51:11.771303 systemd[1]: Reached target swap.target - Swaps. Jan 15 23:51:11.771309 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 15 23:51:11.771316 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 15 23:51:11.771324 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 15 23:51:11.771330 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 23:51:11.771337 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 23:51:11.771344 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 23:51:11.771350 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 15 23:51:11.771357 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 15 23:51:11.771363 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 15 23:51:11.771370 systemd[1]: Mounting media.mount - External Media Directory... Jan 15 23:51:11.771377 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 15 23:51:11.771383 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 15 23:51:11.771389 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 15 23:51:11.771396 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 15 23:51:11.771403 systemd[1]: Reached target machines.target - Containers. Jan 15 23:51:11.771410 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 15 23:51:11.771417 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 23:51:11.771456 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 23:51:11.771464 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 15 23:51:11.771471 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 23:51:11.771477 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 23:51:11.771483 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 23:51:11.771490 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 15 23:51:11.771496 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 23:51:11.771503 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 15 23:51:11.771509 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jan 15 23:51:11.771518 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 15 23:51:11.771525 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 15 23:51:11.771531 systemd[1]: Stopped systemd-fsck-usr.service. Jan 15 23:51:11.771538 kernel: fuse: init (API version 7.41) Jan 15 23:51:11.771544 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 23:51:11.771551 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 23:51:11.771557 kernel: loop: module loaded Jan 15 23:51:11.771563 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 23:51:11.771571 kernel: ACPI: bus type drm_connector registered Jan 15 23:51:11.771578 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 15 23:51:11.771584 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 15 23:51:11.771590 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 15 23:51:11.771616 systemd-journald[1399]: Collecting audit messages is disabled. Jan 15 23:51:11.771634 systemd-journald[1399]: Journal started Jan 15 23:51:11.771650 systemd-journald[1399]: Runtime Journal (/run/log/journal/7ebff755dc314c4d9ad2cdb8d24aa047) is 8M, max 78.3M, 70.3M free. Jan 15 23:51:10.845231 systemd[1]: Queued start job for default target multi-user.target. Jan 15 23:51:10.860004 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 15 23:51:10.860441 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 15 23:51:10.860745 systemd[1]: systemd-journald.service: Consumed 2.511s CPU time. 
Jan 15 23:51:11.795519 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 23:51:11.804071 systemd[1]: verity-setup.service: Deactivated successfully. Jan 15 23:51:11.804138 systemd[1]: Stopped verity-setup.service. Jan 15 23:51:11.818219 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 23:51:11.818924 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 15 23:51:11.823515 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 15 23:51:11.829334 systemd[1]: Mounted media.mount - External Media Directory. Jan 15 23:51:11.833490 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 15 23:51:11.838260 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 15 23:51:11.843404 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 15 23:51:11.847690 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 15 23:51:11.852997 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 23:51:11.859025 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 15 23:51:11.859172 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 15 23:51:11.864646 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 23:51:11.864782 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 23:51:11.870655 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 23:51:11.870794 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 23:51:11.877604 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 23:51:11.877723 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 23:51:11.884011 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jan 15 23:51:11.884146 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 15 23:51:11.890241 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 23:51:11.890362 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 23:51:11.895455 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 23:51:11.901102 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 23:51:11.907887 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 15 23:51:11.914179 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 15 23:51:11.928153 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 15 23:51:11.936566 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 15 23:51:11.948609 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 15 23:51:11.954206 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 15 23:51:11.954243 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 23:51:11.959899 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 15 23:51:11.966872 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 15 23:51:11.972440 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 23:51:11.982189 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 15 23:51:11.988464 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 15 23:51:11.993878 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 23:51:11.995562 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 15 23:51:12.001616 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 23:51:12.008333 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 23:51:12.016569 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 15 23:51:12.023032 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 15 23:51:12.036773 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 23:51:12.043152 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 15 23:51:12.051224 systemd-journald[1399]: Time spent on flushing to /var/log/journal/7ebff755dc314c4d9ad2cdb8d24aa047 is 47.295ms for 939 entries. Jan 15 23:51:12.051224 systemd-journald[1399]: System Journal (/var/log/journal/7ebff755dc314c4d9ad2cdb8d24aa047) is 11.8M, max 2.6G, 2.6G free. Jan 15 23:51:12.153492 systemd-journald[1399]: Received client request to flush runtime journal. Jan 15 23:51:12.153551 systemd-journald[1399]: /var/log/journal/7ebff755dc314c4d9ad2cdb8d24aa047/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jan 15 23:51:12.153570 systemd-journald[1399]: Rotating system journal. Jan 15 23:51:12.153586 kernel: loop0: detected capacity change from 0 to 207008 Jan 15 23:51:12.051580 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 15 23:51:12.066538 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 15 23:51:12.093941 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Jan 15 23:51:12.105553 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 15 23:51:12.115582 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 23:51:12.154935 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 15 23:51:12.172444 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 15 23:51:12.215396 kernel: loop1: detected capacity change from 0 to 119840 Jan 15 23:51:12.208967 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 15 23:51:12.209622 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 15 23:51:12.216306 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 15 23:51:12.227998 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 23:51:12.327496 systemd-tmpfiles[1463]: ACLs are not supported, ignoring. Jan 15 23:51:12.327913 systemd-tmpfiles[1463]: ACLs are not supported, ignoring. Jan 15 23:51:12.330944 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 23:51:12.702562 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 15 23:51:12.709644 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 23:51:12.735710 systemd-udevd[1469]: Using default interface naming scheme 'v255'. Jan 15 23:51:12.736449 kernel: loop2: detected capacity change from 0 to 27936 Jan 15 23:51:12.977809 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 23:51:12.988673 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 23:51:13.023586 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 15 23:51:13.043463 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jan 15 23:51:13.108496 kernel: mousedev: PS/2 mouse device common for all mice Jan 15 23:51:13.142461 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#98 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 15 23:51:13.146890 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 15 23:51:13.167504 kernel: hv_vmbus: registering driver hv_balloon Jan 15 23:51:13.171386 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 15 23:51:13.180969 kernel: loop3: detected capacity change from 0 to 100632 Jan 15 23:51:13.181048 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 15 23:51:13.235516 kernel: hv_vmbus: registering driver hyperv_fb Jan 15 23:51:13.235604 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 15 23:51:13.241617 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 15 23:51:13.246921 kernel: Console: switching to colour dummy device 80x25 Jan 15 23:51:13.250459 kernel: Console: switching to colour frame buffer device 128x48 Jan 15 23:51:13.306581 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 23:51:13.318890 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 23:51:13.319056 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 23:51:13.331400 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 23:51:13.363532 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 23:51:13.363727 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 23:51:13.371491 kernel: MACsec IEEE 802.1AE Jan 15 23:51:13.373518 systemd-networkd[1493]: lo: Link UP Jan 15 23:51:13.373524 systemd-networkd[1493]: lo: Gained carrier Jan 15 23:51:13.379622 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 15 23:51:13.384066 systemd-networkd[1493]: Enumeration completed Jan 15 23:51:13.384411 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 23:51:13.384435 systemd-networkd[1493]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 23:51:13.386799 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 23:51:13.394966 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 15 23:51:13.405221 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 15 23:51:13.413892 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 15 23:51:13.421557 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 15 23:51:13.458945 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 15 23:51:13.462430 kernel: mlx5_core f954:00:02.0 enP63828s1: Link up Jan 15 23:51:13.488461 kernel: hv_netvsc 7ced8dcf-d348-7ced-8dcf-d3487ced8dcf eth0: Data path switched to VF: enP63828s1 Jan 15 23:51:13.488859 systemd-networkd[1493]: enP63828s1: Link UP Jan 15 23:51:13.489234 systemd-networkd[1493]: eth0: Link UP Jan 15 23:51:13.489288 systemd-networkd[1493]: eth0: Gained carrier Jan 15 23:51:13.489361 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 23:51:13.490862 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Jan 15 23:51:13.498771 systemd-networkd[1493]: enP63828s1: Gained carrier Jan 15 23:51:13.508468 systemd-networkd[1493]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 15 23:51:13.628447 kernel: loop4: detected capacity change from 0 to 207008 Jan 15 23:51:13.646447 kernel: loop5: detected capacity change from 0 to 119840 Jan 15 23:51:13.660463 kernel: loop6: detected capacity change from 0 to 27936 Jan 15 23:51:13.674454 kernel: loop7: detected capacity change from 0 to 100632 Jan 15 23:51:13.683063 (sd-merge)[1616]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 15 23:51:13.683489 (sd-merge)[1616]: Merged extensions into '/usr'. Jan 15 23:51:13.687115 systemd[1]: Reload requested from client PID 1443 ('systemd-sysext') (unit systemd-sysext.service)... Jan 15 23:51:13.687130 systemd[1]: Reloading... Jan 15 23:51:13.738826 zram_generator::config[1646]: No configuration found. Jan 15 23:51:13.910803 systemd[1]: Reloading finished in 223 ms. Jan 15 23:51:13.931778 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 15 23:51:13.944392 systemd[1]: Starting ensure-sysext.service... Jan 15 23:51:13.948579 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 23:51:13.963558 systemd[1]: Reload requested from client PID 1702 ('systemctl') (unit ensure-sysext.service)... Jan 15 23:51:13.963578 systemd[1]: Reloading... Jan 15 23:51:13.963888 systemd-tmpfiles[1703]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 15 23:51:13.963920 systemd-tmpfiles[1703]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 15 23:51:13.964109 systemd-tmpfiles[1703]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jan 15 23:51:13.964249 systemd-tmpfiles[1703]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 15 23:51:13.965088 systemd-tmpfiles[1703]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 15 23:51:13.965336 systemd-tmpfiles[1703]: ACLs are not supported, ignoring.
Jan 15 23:51:13.965462 systemd-tmpfiles[1703]: ACLs are not supported, ignoring.
Jan 15 23:51:14.001660 systemd-tmpfiles[1703]: Detected autofs mount point /boot during canonicalization of boot.
Jan 15 23:51:14.001669 systemd-tmpfiles[1703]: Skipping /boot
Jan 15 23:51:14.008698 systemd-tmpfiles[1703]: Detected autofs mount point /boot during canonicalization of boot.
Jan 15 23:51:14.008831 systemd-tmpfiles[1703]: Skipping /boot
Jan 15 23:51:14.038449 zram_generator::config[1739]: No configuration found.
Jan 15 23:51:14.197239 systemd[1]: Reloading finished in 233 ms.
Jan 15 23:51:14.206659 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 23:51:14.217334 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 15 23:51:14.230561 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 15 23:51:14.258238 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 15 23:51:14.271836 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 15 23:51:14.280568 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 15 23:51:14.290580 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 15 23:51:14.300678 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 15 23:51:14.303619 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 15 23:51:14.313124 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 15 23:51:14.325730 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 15 23:51:14.330559 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 15 23:51:14.330784 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 15 23:51:14.333993 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 15 23:51:14.334162 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 15 23:51:14.343221 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 15 23:51:14.344447 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 15 23:51:14.351236 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 15 23:51:14.351528 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 15 23:51:14.365519 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 15 23:51:14.375393 systemd[1]: Finished ensure-sysext.service.
Jan 15 23:51:14.380907 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 15 23:51:14.382150 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 15 23:51:14.390185 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 15 23:51:14.399509 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 15 23:51:14.409809 systemd-resolved[1797]: Positive Trust Anchors:
Jan 15 23:51:14.410473 systemd-resolved[1797]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 15 23:51:14.410556 systemd-resolved[1797]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 15 23:51:14.411701 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 15 23:51:14.418328 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 15 23:51:14.418376 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 15 23:51:14.418430 systemd[1]: Reached target time-set.target - System Time Set.
Jan 15 23:51:14.420819 systemd-resolved[1797]: Using system hostname 'ci-4459.2.2-n-e85017da3c'.
Jan 15 23:51:14.424182 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 15 23:51:14.430161 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 15 23:51:14.436724 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 15 23:51:14.436869 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 15 23:51:14.442830 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 15 23:51:14.443509 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 15 23:51:14.449805 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 15 23:51:14.449928 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 15 23:51:14.456008 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 15 23:51:14.457476 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 15 23:51:14.465134 systemd[1]: Reached target network.target - Network.
Jan 15 23:51:14.469553 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 15 23:51:14.475021 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 15 23:51:14.475092 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 15 23:51:14.529568 augenrules[1836]: No rules
Jan 15 23:51:14.530823 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 15 23:51:14.531059 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 15 23:51:14.690707 systemd-networkd[1493]: eth0: Gained IPv6LL
Jan 15 23:51:14.692913 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 15 23:51:14.699120 systemd[1]: Reached target network-online.target - Network is Online.
Jan 15 23:51:14.894706 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 15 23:51:14.900748 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 15 23:51:17.188290 ldconfig[1438]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 15 23:51:17.204773 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 15 23:51:17.211086 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 15 23:51:17.224068 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 15 23:51:17.228933 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 15 23:51:17.233669 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 15 23:51:17.239614 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 15 23:51:17.245045 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 15 23:51:17.249510 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 15 23:51:17.254954 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 15 23:51:17.260021 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 15 23:51:17.260051 systemd[1]: Reached target paths.target - Path Units.
Jan 15 23:51:17.263480 systemd[1]: Reached target timers.target - Timer Units.
Jan 15 23:51:17.286214 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 15 23:51:17.291953 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 15 23:51:17.297228 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 15 23:51:17.303294 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 15 23:51:17.308443 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 15 23:51:17.315116 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 15 23:51:17.319928 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 15 23:51:17.325826 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 15 23:51:17.330893 systemd[1]: Reached target sockets.target - Socket Units.
Jan 15 23:51:17.335052 systemd[1]: Reached target basic.target - Basic System.
Jan 15 23:51:17.338930 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 15 23:51:17.338951 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 15 23:51:17.341273 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 15 23:51:17.355533 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 15 23:51:17.362303 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 15 23:51:17.371773 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 15 23:51:17.378886 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 15 23:51:17.389196 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 15 23:51:17.396204 jq[1857]: false
Jan 15 23:51:17.396566 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 15 23:51:17.400812 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 15 23:51:17.401743 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 15 23:51:17.406596 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 15 23:51:17.409529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 23:51:17.418591 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 15 23:51:17.424648 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 15 23:51:17.431841 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 15 23:51:17.438954 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 15 23:51:17.447119 KVP[1859]: KVP starting; pid is:1859
Jan 15 23:51:17.448942 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 15 23:51:17.453501 chronyd[1849]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Jan 15 23:51:17.457208 extend-filesystems[1858]: Found /dev/sda6
Jan 15 23:51:17.460655 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 15 23:51:17.466001 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 15 23:51:17.466538 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 15 23:51:17.467085 systemd[1]: Starting update-engine.service - Update Engine...
Jan 15 23:51:17.481472 kernel: hv_utils: KVP IC version 4.0
Jan 15 23:51:17.473196 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 15 23:51:17.475479 KVP[1859]: KVP LIC Version: 3.1
Jan 15 23:51:17.484357 extend-filesystems[1858]: Found /dev/sda9
Jan 15 23:51:17.490988 extend-filesystems[1858]: Checking size of /dev/sda9
Jan 15 23:51:17.489562 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 15 23:51:17.499555 chronyd[1849]: Timezone right/UTC failed leap second check, ignoring
Jan 15 23:51:17.504451 jq[1880]: true
Jan 15 23:51:17.506756 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 15 23:51:17.507080 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 15 23:51:17.508721 systemd[1]: motdgen.service: Deactivated successfully.
Jan 15 23:51:17.508872 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 15 23:51:17.516493 chronyd[1849]: Loaded seccomp filter (level 2)
Jan 15 23:51:17.517103 systemd[1]: Started chronyd.service - NTP client/server.
Jan 15 23:51:17.522348 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 15 23:51:17.522725 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 15 23:51:17.532456 extend-filesystems[1858]: Old size kept for /dev/sda9
Jan 15 23:51:17.534870 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 15 23:51:17.535050 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 15 23:51:17.562345 jq[1894]: true
Jan 15 23:51:17.572492 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 15 23:51:17.578754 (ntainerd)[1896]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 15 23:51:17.586329 update_engine[1875]: I20260115 23:51:17.586258 1875 main.cc:92] Flatcar Update Engine starting
Jan 15 23:51:17.614104 systemd-logind[1871]: New seat seat0.
Jan 15 23:51:17.617316 tar[1890]: linux-arm64/LICENSE
Jan 15 23:51:17.617599 tar[1890]: linux-arm64/helm
Jan 15 23:51:17.618710 systemd-logind[1871]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 15 23:51:17.618877 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 15 23:51:17.685682 bash[1928]: Updated "/home/core/.ssh/authorized_keys"
Jan 15 23:51:17.687474 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 15 23:51:17.697355 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 15 23:51:17.701692 dbus-daemon[1852]: [system] SELinux support is enabled
Jan 15 23:51:17.701848 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 15 23:51:17.711175 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 15 23:51:17.712493 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 15 23:51:17.722038 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 15 23:51:17.722060 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 15 23:51:17.729218 update_engine[1875]: I20260115 23:51:17.729084 1875 update_check_scheduler.cc:74] Next update check in 2m58s
Jan 15 23:51:17.734398 systemd[1]: Started update-engine.service - Update Engine.
Jan 15 23:51:17.739076 dbus-daemon[1852]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 15 23:51:17.750917 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 15 23:51:17.779775 coreos-metadata[1851]: Jan 15 23:51:17.779 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 15 23:51:17.787305 coreos-metadata[1851]: Jan 15 23:51:17.786 INFO Fetch successful
Jan 15 23:51:17.787305 coreos-metadata[1851]: Jan 15 23:51:17.786 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 15 23:51:17.791577 coreos-metadata[1851]: Jan 15 23:51:17.791 INFO Fetch successful
Jan 15 23:51:17.791577 coreos-metadata[1851]: Jan 15 23:51:17.791 INFO Fetching http://168.63.129.16/machine/2998b3c6-e8b6-4acb-a2c2-245bd7601605/73bc37ea%2D965e%2D4572%2D9b92%2Da82a470919dd.%5Fci%2D4459.2.2%2Dn%2De85017da3c?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 15 23:51:17.796450 coreos-metadata[1851]: Jan 15 23:51:17.794 INFO Fetch successful
Jan 15 23:51:17.797039 coreos-metadata[1851]: Jan 15 23:51:17.794 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 15 23:51:17.807710 coreos-metadata[1851]: Jan 15 23:51:17.807 INFO Fetch successful
Jan 15 23:51:17.882403 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 15 23:51:17.893434 sshd_keygen[1889]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 15 23:51:17.901279 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 15 23:51:17.934751 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 15 23:51:17.944252 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 15 23:51:17.952819 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 15 23:51:17.979891 systemd[1]: issuegen.service: Deactivated successfully.
Jan 15 23:51:17.980327 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 15 23:51:17.991668 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 15 23:51:18.000835 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 15 23:51:18.019674 locksmithd[1971]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 15 23:51:18.038297 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 15 23:51:18.048550 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 15 23:51:18.057572 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 15 23:51:18.066877 systemd[1]: Reached target getty.target - Login Prompts.
Jan 15 23:51:18.141707 tar[1890]: linux-arm64/README.md
Jan 15 23:51:18.156120 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 15 23:51:18.250820 containerd[1896]: time="2026-01-15T23:51:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 15 23:51:18.251693 containerd[1896]: time="2026-01-15T23:51:18.251662588Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Jan 15 23:51:18.257970 containerd[1896]: time="2026-01-15T23:51:18.257934268Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.992µs"
Jan 15 23:51:18.258048 containerd[1896]: time="2026-01-15T23:51:18.258034308Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 15 23:51:18.258110 containerd[1896]: time="2026-01-15T23:51:18.258100116Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 15 23:51:18.258509 containerd[1896]: time="2026-01-15T23:51:18.258487612Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 15 23:51:18.258579 containerd[1896]: time="2026-01-15T23:51:18.258568812Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 15 23:51:18.258629 containerd[1896]: time="2026-01-15T23:51:18.258620388Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 15 23:51:18.258746 containerd[1896]: time="2026-01-15T23:51:18.258731380Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 15 23:51:18.258786 containerd[1896]: time="2026-01-15T23:51:18.258776988Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 15 23:51:18.259043 containerd[1896]: time="2026-01-15T23:51:18.259022940Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 15 23:51:18.259092 containerd[1896]: time="2026-01-15T23:51:18.259082580Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 15 23:51:18.259128 containerd[1896]: time="2026-01-15T23:51:18.259117244Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 15 23:51:18.259169 containerd[1896]: time="2026-01-15T23:51:18.259158404Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 15 23:51:18.259284 containerd[1896]: time="2026-01-15T23:51:18.259268684Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 15 23:51:18.259556 containerd[1896]: time="2026-01-15T23:51:18.259538044Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 15 23:51:18.259633 containerd[1896]: time="2026-01-15T23:51:18.259620068Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 15 23:51:18.259690 containerd[1896]: time="2026-01-15T23:51:18.259679972Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 15 23:51:18.259748 containerd[1896]: time="2026-01-15T23:51:18.259738204Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 15 23:51:18.259924 containerd[1896]: time="2026-01-15T23:51:18.259911372Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 15 23:51:18.260037 containerd[1896]: time="2026-01-15T23:51:18.260022572Z" level=info msg="metadata content store policy set" policy=shared
Jan 15 23:51:18.279921 containerd[1896]: time="2026-01-15T23:51:18.279878740Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 15 23:51:18.280081 containerd[1896]: time="2026-01-15T23:51:18.280070228Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 15 23:51:18.280444 containerd[1896]: time="2026-01-15T23:51:18.280304700Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 15 23:51:18.280444 containerd[1896]: time="2026-01-15T23:51:18.280322460Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 15 23:51:18.280444 containerd[1896]: time="2026-01-15T23:51:18.280331820Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 15 23:51:18.280444 containerd[1896]: time="2026-01-15T23:51:18.280338980Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 15 23:51:18.280444 containerd[1896]: time="2026-01-15T23:51:18.280348716Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 15 23:51:18.280444 containerd[1896]: time="2026-01-15T23:51:18.280359828Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 15 23:51:18.280444 containerd[1896]: time="2026-01-15T23:51:18.280371724Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 15 23:51:18.280444 containerd[1896]: time="2026-01-15T23:51:18.280377948Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 15 23:51:18.280444 containerd[1896]: time="2026-01-15T23:51:18.280384292Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 15 23:51:18.280444 containerd[1896]: time="2026-01-15T23:51:18.280393804Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 15 23:51:18.280857 containerd[1896]: time="2026-01-15T23:51:18.280805908Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 15 23:51:18.280857 containerd[1896]: time="2026-01-15T23:51:18.280829668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 15 23:51:18.280857 containerd[1896]: time="2026-01-15T23:51:18.280840292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 15 23:51:18.281007 containerd[1896]: time="2026-01-15T23:51:18.280849484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 15 23:51:18.281007 containerd[1896]: time="2026-01-15T23:51:18.280950124Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 15 23:51:18.281007 containerd[1896]: time="2026-01-15T23:51:18.280960364Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 15 23:51:18.281007 containerd[1896]: time="2026-01-15T23:51:18.280968292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 15 23:51:18.281007 containerd[1896]: time="2026-01-15T23:51:18.280974796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 15 23:51:18.281007 containerd[1896]: time="2026-01-15T23:51:18.280983204Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 15 23:51:18.281007 containerd[1896]: time="2026-01-15T23:51:18.280989668Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 15 23:51:18.281193 containerd[1896]: time="2026-01-15T23:51:18.280997524Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 15 23:51:18.281193 containerd[1896]: time="2026-01-15T23:51:18.281173092Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 15 23:51:18.281257 containerd[1896]: time="2026-01-15T23:51:18.281247828Z" level=info msg="Start snapshots syncer"
Jan 15 23:51:18.281325 containerd[1896]: time="2026-01-15T23:51:18.281316012Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 15 23:51:18.281612 containerd[1896]: time="2026-01-15T23:51:18.281582252Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 15 23:51:18.281755 containerd[1896]: time="2026-01-15T23:51:18.281741724Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 15 23:51:18.281876 containerd[1896]: time="2026-01-15T23:51:18.281861876Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 15 23:51:18.282060 containerd[1896]: time="2026-01-15T23:51:18.282046564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 15 23:51:18.282137 containerd[1896]: time="2026-01-15T23:51:18.282127140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 15 23:51:18.282179 containerd[1896]: time="2026-01-15T23:51:18.282171804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 15 23:51:18.282229 containerd[1896]: time="2026-01-15T23:51:18.282219748Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 15 23:51:18.282283 containerd[1896]: time="2026-01-15T23:51:18.282274580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 15 23:51:18.282329 containerd[1896]: time="2026-01-15T23:51:18.282320972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 15 23:51:18.282465 containerd[1896]: time="2026-01-15T23:51:18.282354060Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 15 23:51:18.282465 containerd[1896]: time="2026-01-15T23:51:18.282377156Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 15 23:51:18.282465 containerd[1896]: time="2026-01-15T23:51:18.282391420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 15 23:51:18.282465 containerd[1896]: time="2026-01-15T23:51:18.282399676Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 15 23:51:18.282553 containerd[1896]: time="2026-01-15T23:51:18.282541500Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 15 23:51:18.282616 containerd[1896]: time="2026-01-15T23:51:18.282606036Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 15 23:51:18.282669 containerd[1896]: time="2026-01-15T23:51:18.282651164Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 15 23:51:18.282712 containerd[1896]: time="2026-01-15T23:51:18.282700468Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 15 23:51:18.282751 containerd[1896]: time="2026-01-15T23:51:18.282740876Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 15 23:51:18.282861 containerd[1896]: time="2026-01-15T23:51:18.282778148Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 15 23:51:18.282861 containerd[1896]: time="2026-01-15T23:51:18.282790028Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 15 23:51:18.282861 containerd[1896]: time="2026-01-15T23:51:18.282805548Z" level=info msg="runtime interface created"
Jan 15 23:51:18.282861 containerd[1896]: time="2026-01-15T23:51:18.282810076Z" level=info msg="created NRI interface"
Jan 15 23:51:18.282861 containerd[1896]: time="2026-01-15T23:51:18.282816004Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 15 23:51:18.282861 containerd[1896]: time="2026-01-15T23:51:18.282825996Z" level=info msg="Connect containerd service"
Jan 15 23:51:18.282977 containerd[1896]: time="2026-01-15T23:51:18.282965740Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 15 23:51:18.283793 containerd[1896]: time="2026-01-15T23:51:18.283727524Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 15 23:51:18.431166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 23:51:18.443973 (kubelet)[2048]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 15 23:51:18.739158 containerd[1896]: time="2026-01-15T23:51:18.738734876Z" level=info msg="Start subscribing containerd event"
Jan 15 23:51:18.739158 containerd[1896]: time="2026-01-15T23:51:18.738833452Z" level=info msg="Start recovering state"
Jan 15 23:51:18.739158 containerd[1896]: time="2026-01-15T23:51:18.738926108Z" level=info msg="Start event monitor"
Jan 15 23:51:18.739158 containerd[1896]: time="2026-01-15T23:51:18.738946052Z" level=info msg="Start cni network conf syncer for default"
Jan 15 23:51:18.739158 containerd[1896]: time="2026-01-15T23:51:18.738951604Z" level=info msg="Start streaming server"
Jan 15 23:51:18.739158 containerd[1896]: time="2026-01-15T23:51:18.738959340Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 15 23:51:18.739158 containerd[1896]: time="2026-01-15T23:51:18.738965100Z" level=info msg="runtime interface starting up..."
Jan 15 23:51:18.739158 containerd[1896]: time="2026-01-15T23:51:18.738968884Z" level=info msg="starting plugins..."
Jan 15 23:51:18.739158 containerd[1896]: time="2026-01-15T23:51:18.738979724Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 15 23:51:18.739571 containerd[1896]: time="2026-01-15T23:51:18.739540148Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 15 23:51:18.739613 containerd[1896]: time="2026-01-15T23:51:18.739592300Z" level=info msg=serving...
address=/run/containerd/containerd.sock Jan 15 23:51:18.739737 containerd[1896]: time="2026-01-15T23:51:18.739692676Z" level=info msg="containerd successfully booted in 0.489291s" Jan 15 23:51:18.740547 systemd[1]: Started containerd.service - containerd container runtime. Jan 15 23:51:18.747562 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 15 23:51:18.753894 systemd[1]: Startup finished in 1.742s (kernel) + 11.752s (initrd) + 10.632s (userspace) = 24.127s. Jan 15 23:51:18.844687 kubelet[2048]: E0115 23:51:18.844620 2048 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 23:51:18.847732 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 23:51:18.847851 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 23:51:18.848216 systemd[1]: kubelet.service: Consumed 560ms CPU time, 256M memory peak. Jan 15 23:51:18.978280 login[2033]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 15 23:51:18.978503 login[2032]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:51:18.983903 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 15 23:51:18.984783 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 15 23:51:18.990470 systemd-logind[1871]: New session 1 of user core. Jan 15 23:51:19.016470 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 15 23:51:19.018708 systemd[1]: Starting user@500.service - User Manager for UID 500... 
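The containerd error above ("no network config found in /etc/cni/net.d") persists until a CNI addon or operator drops a network config into that directory; containerd's CRI plugin then picks it up via the conf syncer it just started. A minimal bridge conflist sketch of the kind it expects (plugin choice, network name, and subnet are illustrative assumptions, not what this node eventually used):

```python
import json

# Minimal CNI network config of the kind containerd scans for under
# /etc/cni/net.d (e.g. saved as 10-mynet.conflist). All values are
# illustrative -- real clusters get this file from their CNI addon.
conflist = {
    "cniVersion": "1.0.0",
    "name": "mynet",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.22.0.0/16",
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

rendered = json.dumps(conflist, indent=2)
```

Note `maxConfNum: 1` in the dumped CRI config: only the lexically first conflist in the directory would be loaded here.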
Jan 15 23:51:19.041543 (systemd)[2069]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 15 23:51:19.043698 systemd-logind[1871]: New session c1 of user core. Jan 15 23:51:19.148909 systemd[2069]: Queued start job for default target default.target. Jan 15 23:51:19.159541 systemd[2069]: Created slice app.slice - User Application Slice. Jan 15 23:51:19.159797 systemd[2069]: Reached target paths.target - Paths. Jan 15 23:51:19.159886 systemd[2069]: Reached target timers.target - Timers. Jan 15 23:51:19.161129 systemd[2069]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 15 23:51:19.168836 systemd[2069]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 15 23:51:19.169111 systemd[2069]: Reached target sockets.target - Sockets. Jan 15 23:51:19.169215 systemd[2069]: Reached target basic.target - Basic System. Jan 15 23:51:19.169364 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 15 23:51:19.169471 systemd[2069]: Reached target default.target - Main User Target. Jan 15 23:51:19.169571 systemd[2069]: Startup finished in 120ms. Jan 15 23:51:19.171175 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jan 15 23:51:19.775260 waagent[2024]: 2026-01-15T23:51:19.775184Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 15 23:51:19.783801 waagent[2024]: 2026-01-15T23:51:19.780097Z INFO Daemon Daemon OS: flatcar 4459.2.2 Jan 15 23:51:19.784132 waagent[2024]: 2026-01-15T23:51:19.784090Z INFO Daemon Daemon Python: 3.11.13 Jan 15 23:51:19.788452 waagent[2024]: 2026-01-15T23:51:19.787905Z INFO Daemon Daemon Run daemon Jan 15 23:51:19.794443 waagent[2024]: 2026-01-15T23:51:19.791831Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.2' Jan 15 23:51:19.798839 waagent[2024]: 2026-01-15T23:51:19.798792Z INFO Daemon Daemon Using waagent for provisioning Jan 15 23:51:19.802981 waagent[2024]: 2026-01-15T23:51:19.802933Z INFO Daemon Daemon Activate resource disk Jan 15 23:51:19.806546 waagent[2024]: 2026-01-15T23:51:19.806503Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 15 23:51:19.815022 waagent[2024]: 2026-01-15T23:51:19.814968Z INFO Daemon Daemon Found device: None Jan 15 23:51:19.818592 waagent[2024]: 2026-01-15T23:51:19.818556Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 15 23:51:19.825172 waagent[2024]: 2026-01-15T23:51:19.825140Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 15 23:51:19.834248 waagent[2024]: 2026-01-15T23:51:19.834208Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 15 23:51:19.838591 waagent[2024]: 2026-01-15T23:51:19.838560Z INFO Daemon Daemon Running default provisioning handler Jan 15 23:51:19.848105 waagent[2024]: 2026-01-15T23:51:19.848054Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jan 15 23:51:19.858547 waagent[2024]: 2026-01-15T23:51:19.858500Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 15 23:51:19.866106 waagent[2024]: 2026-01-15T23:51:19.866063Z INFO Daemon Daemon cloud-init is enabled: False Jan 15 23:51:19.870132 waagent[2024]: 2026-01-15T23:51:19.870093Z INFO Daemon Daemon Copying ovf-env.xml Jan 15 23:51:19.971033 waagent[2024]: 2026-01-15T23:51:19.970950Z INFO Daemon Daemon Successfully mounted dvd Jan 15 23:51:19.979538 login[2033]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:51:19.985260 systemd-logind[1871]: New session 2 of user core. Jan 15 23:51:19.995551 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 15 23:51:20.006231 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 15 23:51:20.007125 waagent[2024]: 2026-01-15T23:51:20.006291Z INFO Daemon Daemon Detect protocol endpoint Jan 15 23:51:20.010198 waagent[2024]: 2026-01-15T23:51:20.010147Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 15 23:51:20.014758 waagent[2024]: 2026-01-15T23:51:20.014714Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 15 23:51:20.020318 waagent[2024]: 2026-01-15T23:51:20.020272Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 15 23:51:20.024426 waagent[2024]: 2026-01-15T23:51:20.024382Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 15 23:51:20.028955 waagent[2024]: 2026-01-15T23:51:20.028269Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 15 23:51:20.078442 waagent[2024]: 2026-01-15T23:51:20.073747Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 15 23:51:20.078740 waagent[2024]: 2026-01-15T23:51:20.078711Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 15 23:51:20.082928 waagent[2024]: 2026-01-15T23:51:20.082893Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 15 23:51:20.194735 waagent[2024]: 2026-01-15T23:51:20.194645Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 15 23:51:20.199443 waagent[2024]: 2026-01-15T23:51:20.199393Z INFO Daemon Daemon Forcing an update of the goal state. Jan 15 23:51:20.206522 waagent[2024]: 2026-01-15T23:51:20.206483Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 15 23:51:20.223618 waagent[2024]: 2026-01-15T23:51:20.223580Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 15 23:51:20.227894 waagent[2024]: 2026-01-15T23:51:20.227856Z INFO Daemon Jan 15 23:51:20.229927 waagent[2024]: 2026-01-15T23:51:20.229898Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 60d22d02-6f21-4758-b618-cd714488118d eTag: 16460269677894076318 source: Fabric] Jan 15 23:51:20.238128 waagent[2024]: 2026-01-15T23:51:20.238092Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jan 15 23:51:20.243297 waagent[2024]: 2026-01-15T23:51:20.243267Z INFO Daemon Jan 15 23:51:20.245319 waagent[2024]: 2026-01-15T23:51:20.245293Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 15 23:51:20.254395 waagent[2024]: 2026-01-15T23:51:20.254367Z INFO Daemon Daemon Downloading artifacts profile blob Jan 15 23:51:20.318855 waagent[2024]: 2026-01-15T23:51:20.318726Z INFO Daemon Downloaded certificate {'thumbprint': '8A3BEB5AECBB0BFA25CA4382C1A3BCABBB10C8AC', 'hasPrivateKey': True} Jan 15 23:51:20.325866 waagent[2024]: 2026-01-15T23:51:20.325825Z INFO Daemon Fetch goal state completed Jan 15 23:51:20.336138 waagent[2024]: 2026-01-15T23:51:20.336102Z INFO Daemon Daemon Starting provisioning Jan 15 23:51:20.339789 waagent[2024]: 2026-01-15T23:51:20.339756Z INFO Daemon Daemon Handle ovf-env.xml. Jan 15 23:51:20.343100 waagent[2024]: 2026-01-15T23:51:20.343075Z INFO Daemon Daemon Set hostname [ci-4459.2.2-n-e85017da3c] Jan 15 23:51:20.351954 waagent[2024]: 2026-01-15T23:51:20.351902Z INFO Daemon Daemon Publish hostname [ci-4459.2.2-n-e85017da3c] Jan 15 23:51:20.356769 waagent[2024]: 2026-01-15T23:51:20.356732Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 15 23:51:20.361436 waagent[2024]: 2026-01-15T23:51:20.361391Z INFO Daemon Daemon Primary interface is [eth0] Jan 15 23:51:20.371296 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 23:51:20.371303 systemd-networkd[1493]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 15 23:51:20.371362 systemd-networkd[1493]: eth0: DHCP lease lost Jan 15 23:51:20.372051 waagent[2024]: 2026-01-15T23:51:20.372011Z INFO Daemon Daemon Create user account if not exists Jan 15 23:51:20.376017 waagent[2024]: 2026-01-15T23:51:20.375980Z INFO Daemon Daemon User core already exists, skip useradd Jan 15 23:51:20.380090 waagent[2024]: 2026-01-15T23:51:20.380054Z INFO Daemon Daemon Configure sudoer Jan 15 23:51:20.391011 waagent[2024]: 2026-01-15T23:51:20.387925Z INFO Daemon Daemon Configure sshd Jan 15 23:51:20.395957 waagent[2024]: 2026-01-15T23:51:20.395909Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 15 23:51:20.405044 waagent[2024]: 2026-01-15T23:51:20.404988Z INFO Daemon Daemon Deploy ssh public key. Jan 15 23:51:20.408260 systemd-networkd[1493]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 15 23:51:21.524465 waagent[2024]: 2026-01-15T23:51:21.524272Z INFO Daemon Daemon Provisioning complete Jan 15 23:51:21.538675 waagent[2024]: 2026-01-15T23:51:21.538636Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 15 23:51:21.543215 waagent[2024]: 2026-01-15T23:51:21.543176Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jan 15 23:51:21.550315 waagent[2024]: 2026-01-15T23:51:21.550283Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 15 23:51:21.655341 waagent[2119]: 2026-01-15T23:51:21.655259Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 15 23:51:21.655636 waagent[2119]: 2026-01-15T23:51:21.655401Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.2 Jan 15 23:51:21.655636 waagent[2119]: 2026-01-15T23:51:21.655460Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 15 23:51:21.655636 waagent[2119]: 2026-01-15T23:51:21.655499Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jan 15 23:51:21.708061 waagent[2119]: 2026-01-15T23:51:21.707978Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 15 23:51:21.708231 waagent[2119]: 2026-01-15T23:51:21.708203Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 15 23:51:21.708273 waagent[2119]: 2026-01-15T23:51:21.708258Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 15 23:51:21.714835 waagent[2119]: 2026-01-15T23:51:21.714784Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 15 23:51:21.720022 waagent[2119]: 2026-01-15T23:51:21.719987Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 15 23:51:21.720475 waagent[2119]: 2026-01-15T23:51:21.720409Z INFO ExtHandler Jan 15 23:51:21.720534 waagent[2119]: 2026-01-15T23:51:21.720515Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: c89a2275-d3ab-4ebe-94df-102729290866 eTag: 16460269677894076318 source: Fabric] Jan 15 23:51:21.720759 waagent[2119]: 2026-01-15T23:51:21.720733Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 15 23:51:21.721168 waagent[2119]: 2026-01-15T23:51:21.721138Z INFO ExtHandler Jan 15 23:51:21.721205 waagent[2119]: 2026-01-15T23:51:21.721190Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 15 23:51:21.724613 waagent[2119]: 2026-01-15T23:51:21.724584Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 15 23:51:21.778724 waagent[2119]: 2026-01-15T23:51:21.778596Z INFO ExtHandler Downloaded certificate {'thumbprint': '8A3BEB5AECBB0BFA25CA4382C1A3BCABBB10C8AC', 'hasPrivateKey': True} Jan 15 23:51:21.779079 waagent[2119]: 2026-01-15T23:51:21.779043Z INFO ExtHandler Fetch goal state completed Jan 15 23:51:21.792610 waagent[2119]: 2026-01-15T23:51:21.792552Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Jan 15 23:51:21.796141 waagent[2119]: 2026-01-15T23:51:21.796089Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2119 Jan 15 23:51:21.796253 waagent[2119]: 2026-01-15T23:51:21.796226Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 15 23:51:21.796549 waagent[2119]: 2026-01-15T23:51:21.796519Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 15 23:51:21.797680 waagent[2119]: 2026-01-15T23:51:21.797643Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] Jan 15 23:51:21.798008 waagent[2119]: 2026-01-15T23:51:21.797978Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 15 23:51:21.798129 waagent[2119]: 2026-01-15T23:51:21.798106Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 15 23:51:21.798584 waagent[2119]: 2026-01-15T23:51:21.798555Z INFO ExtHandler ExtHandler 
Starting setup for Persistent firewall rules Jan 15 23:51:21.863625 waagent[2119]: 2026-01-15T23:51:21.863583Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 15 23:51:21.863817 waagent[2119]: 2026-01-15T23:51:21.863790Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 15 23:51:21.868503 waagent[2119]: 2026-01-15T23:51:21.868385Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 15 23:51:21.873199 systemd[1]: Reload requested from client PID 2134 ('systemctl') (unit waagent.service)... Jan 15 23:51:21.873213 systemd[1]: Reloading... Jan 15 23:51:21.946475 zram_generator::config[2173]: No configuration found. Jan 15 23:51:22.098990 systemd[1]: Reloading finished in 225 ms. Jan 15 23:51:22.121649 waagent[2119]: 2026-01-15T23:51:22.120174Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 15 23:51:22.121649 waagent[2119]: 2026-01-15T23:51:22.120320Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 15 23:51:23.542374 waagent[2119]: 2026-01-15T23:51:23.541581Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 15 23:51:23.542374 waagent[2119]: 2026-01-15T23:51:23.541900Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. 
python supported: [True] Jan 15 23:51:23.542691 waagent[2119]: 2026-01-15T23:51:23.542600Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 15 23:51:23.542691 waagent[2119]: 2026-01-15T23:51:23.542675Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 15 23:51:23.542867 waagent[2119]: 2026-01-15T23:51:23.542832Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 15 23:51:23.542955 waagent[2119]: 2026-01-15T23:51:23.542912Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 15 23:51:23.543065 waagent[2119]: 2026-01-15T23:51:23.543035Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 15 23:51:23.543065 waagent[2119]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 15 23:51:23.543065 waagent[2119]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 15 23:51:23.543065 waagent[2119]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 15 23:51:23.543065 waagent[2119]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 15 23:51:23.543065 waagent[2119]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 15 23:51:23.543065 waagent[2119]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 15 23:51:23.543498 waagent[2119]: 2026-01-15T23:51:23.543461Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
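The MonitorHandler routing-table dump above is the raw /proc/net/route format: each IPv4 field is the 32-bit address printed as %08X, which on a little-endian machine like this aarch64 guest reads byte-reversed. A short stdlib decoder recovers the dotted quads, confirming the gateway 0114C80A is 10.200.20.1 (matching the DHCP lease logged earlier) and destination 10813FA8 is the Azure wire server 168.63.129.16:

```python
import socket
import struct

def decode_route_addr(hexfield: str) -> str:
    """Decode one address field from /proc/net/route.

    The kernel stores the address in network byte order and prints the
    raw 32-bit word in host order, so on little-endian hardware the hex
    digits read byte-reversed; repacking little-endian undoes that.
    """
    return socket.inet_ntoa(struct.pack("<I", int(hexfield, 16)))

# Fields taken from the table logged above.
gateway = decode_route_addr("0114C80A")      # 10.200.20.1
wire_server = decode_route_addr("10813FA8")  # 168.63.129.16
metadata = decode_route_addr("FEA9FEA9")     # 169.254.169.254
netmask = decode_route_addr("00FFFFFF")      # 255.255.255.0
```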
Jan 15 23:51:23.543871 waagent[2119]: 2026-01-15T23:51:23.543838Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 15 23:51:23.543925 waagent[2119]: 2026-01-15T23:51:23.543895Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 15 23:51:23.544009 waagent[2119]: 2026-01-15T23:51:23.543982Z INFO EnvHandler ExtHandler Configure routes Jan 15 23:51:23.544043 waagent[2119]: 2026-01-15T23:51:23.544027Z INFO EnvHandler ExtHandler Gateway:None Jan 15 23:51:23.544069 waagent[2119]: 2026-01-15T23:51:23.544056Z INFO EnvHandler ExtHandler Routes:None Jan 15 23:51:23.544355 waagent[2119]: 2026-01-15T23:51:23.544332Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 15 23:51:23.544355 waagent[2119]: 2026-01-15T23:51:23.544377Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 15 23:51:23.544668 waagent[2119]: 2026-01-15T23:51:23.544636Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 15 23:51:23.544809 waagent[2119]: 2026-01-15T23:51:23.544778Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 15 23:51:23.544901 waagent[2119]: 2026-01-15T23:51:23.544874Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 15 23:51:23.550673 waagent[2119]: 2026-01-15T23:51:23.550639Z INFO ExtHandler ExtHandler Jan 15 23:51:23.550802 waagent[2119]: 2026-01-15T23:51:23.550778Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: eb47e353-b8f2-45e1-8430-5a763ce94502 correlation 44d6c533-79cf-441a-ab9e-64000ba71e0e created: 2026-01-15T23:50:23.920578Z] Jan 15 23:51:23.551168 waagent[2119]: 2026-01-15T23:51:23.551136Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 15 23:51:23.551717 waagent[2119]: 2026-01-15T23:51:23.551684Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 15 23:51:23.576162 waagent[2119]: 2026-01-15T23:51:23.576122Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 15 23:51:23.576162 waagent[2119]: Try `iptables -h' or 'iptables --help' for more information.) Jan 15 23:51:23.576832 waagent[2119]: 2026-01-15T23:51:23.576795Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 11DD8DE0-466F-4685-A1D9-471F26FD83A8;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 15 23:51:23.603263 waagent[2119]: 2026-01-15T23:51:23.603201Z INFO MonitorHandler ExtHandler Network interfaces: Jan 15 23:51:23.603263 waagent[2119]: Executing ['ip', '-a', '-o', 'link']: Jan 15 23:51:23.603263 waagent[2119]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 15 23:51:23.603263 waagent[2119]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:cf:d3:48 brd ff:ff:ff:ff:ff:ff Jan 15 23:51:23.603263 waagent[2119]: 3: enP63828s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:cf:d3:48 brd ff:ff:ff:ff:ff:ff\ altname enP63828p0s2 Jan 15 23:51:23.603263 waagent[2119]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 15 23:51:23.603263 waagent[2119]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 15 23:51:23.603263 waagent[2119]: 2: eth0 inet 10.200.20.10/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 15 23:51:23.603263 waagent[2119]: Executing ['ip', '-6', '-a', 
'-o', 'address']: Jan 15 23:51:23.603263 waagent[2119]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 15 23:51:23.603263 waagent[2119]: 2: eth0 inet6 fe80::7eed:8dff:fecf:d348/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 15 23:51:23.635455 waagent[2119]: 2026-01-15T23:51:23.635316Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 15 23:51:23.635455 waagent[2119]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 15 23:51:23.635455 waagent[2119]: pkts bytes target prot opt in out source destination Jan 15 23:51:23.635455 waagent[2119]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 15 23:51:23.635455 waagent[2119]: pkts bytes target prot opt in out source destination Jan 15 23:51:23.635455 waagent[2119]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 15 23:51:23.635455 waagent[2119]: pkts bytes target prot opt in out source destination Jan 15 23:51:23.635455 waagent[2119]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 15 23:51:23.635455 waagent[2119]: 4 416 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 15 23:51:23.635455 waagent[2119]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 15 23:51:23.639444 waagent[2119]: 2026-01-15T23:51:23.639379Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 15 23:51:23.639444 waagent[2119]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 15 23:51:23.639444 waagent[2119]: pkts bytes target prot opt in out source destination Jan 15 23:51:23.639444 waagent[2119]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 15 23:51:23.639444 waagent[2119]: pkts bytes target prot opt in out source destination Jan 15 23:51:23.639444 waagent[2119]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 15 23:51:23.639444 waagent[2119]: pkts bytes target prot opt in out source destination Jan 15 23:51:23.639444 waagent[2119]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp 
dpt:53 Jan 15 23:51:23.639444 waagent[2119]: 11 928 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 15 23:51:23.639444 waagent[2119]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 15 23:51:23.639633 waagent[2119]: 2026-01-15T23:51:23.639615Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 15 23:51:29.099133 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 15 23:51:29.100479 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 23:51:29.224712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 23:51:29.236933 (kubelet)[2269]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 23:51:29.349953 kubelet[2269]: E0115 23:51:29.349831 2269 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 23:51:29.352766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 23:51:29.352881 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 23:51:29.353366 systemd[1]: kubelet.service: Consumed 115ms CPU time, 107.2M memory peak. Jan 15 23:51:39.603395 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 15 23:51:39.606608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 23:51:39.969617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
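The earlier WARNING about `iptables -w -t security -L OUTPUT --zero OUTPUT -nxv` appears cosmetic: the nf_tables backend of iptables 1.8.x is stricter than the legacy one and rejects numeric output (`-n`) combined with a `--zero` operation in a single invocation, but the firewall dump above shows the rules landed anyway. For reference, the three rules waagent programs correspond roughly to this iptables-restore fragment (a sketch only; waagent manages these rules itself):

```
*security
-A OUTPUT -d 168.63.129.16/32 -p tcp --dport 53 -j ACCEPT
-A OUTPUT -d 168.63.129.16/32 -p tcp -m owner --uid-owner 0 -j ACCEPT
-A OUTPUT -d 168.63.129.16/32 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP
COMMIT
```

The intent: only DNS and root-owned processes may reach the wire server 168.63.129.16; new connections from anyone else are dropped.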
Jan 15 23:51:39.972613 (kubelet)[2283]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 23:51:40.001226 kubelet[2283]: E0115 23:51:40.001177 2283 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 23:51:40.003396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 23:51:40.003522 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 23:51:40.003975 systemd[1]: kubelet.service: Consumed 108ms CPU time, 105.3M memory peak. Jan 15 23:51:40.185788 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 15 23:51:40.186737 systemd[1]: Started sshd@0-10.200.20.10:22-10.200.16.10:37780.service - OpenSSH per-connection server daemon (10.200.16.10:37780). Jan 15 23:51:40.761504 sshd[2290]: Accepted publickey for core from 10.200.16.10 port 37780 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:51:40.763011 sshd-session[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:51:40.766883 systemd-logind[1871]: New session 3 of user core. Jan 15 23:51:40.772562 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 15 23:51:41.165193 systemd[1]: Started sshd@1-10.200.20.10:22-10.200.16.10:37788.service - OpenSSH per-connection server daemon (10.200.16.10:37788). 
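kubelet keeps crash-looping above (restart counter 1, then 2) because /var/lib/kubelet/config.yaml does not exist yet; on a node like this it is normally written by `kubeadm init`/`kubeadm join` later in provisioning, after which the systemd restart logic succeeds. As a reference, a skeleton of the KubeletConfiguration that file carries (the apiVersion/kind identifiers are the real required ones; every other field is an illustrative assumption, and since YAML is a superset of JSON a JSON rendering is also accepted):

```python
import json

# Skeleton of a kubelet config file such as /var/lib/kubelet/config.yaml.
# Field values below are assumptions for illustration, not this node's
# eventual configuration.
kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # Matches SystemdCgroup=true in the containerd runc options dumped above.
    "cgroupDriver": "systemd",
    "staticPodPath": "/etc/kubernetes/manifests",
    "clusterDomain": "cluster.local",
}

rendered = json.dumps(kubelet_config, indent=2)
```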
Jan 15 23:51:41.323147 chronyd[1849]: Selected source PHC0 Jan 15 23:51:41.581912 sshd[2296]: Accepted publickey for core from 10.200.16.10 port 37788 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:51:41.583541 sshd-session[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:51:41.587335 systemd-logind[1871]: New session 4 of user core. Jan 15 23:51:41.594593 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 15 23:51:41.890277 sshd[2299]: Connection closed by 10.200.16.10 port 37788 Jan 15 23:51:41.890814 sshd-session[2296]: pam_unix(sshd:session): session closed for user core Jan 15 23:51:41.893903 systemd[1]: sshd@1-10.200.20.10:22-10.200.16.10:37788.service: Deactivated successfully. Jan 15 23:51:41.895694 systemd[1]: session-4.scope: Deactivated successfully. Jan 15 23:51:41.897878 systemd-logind[1871]: Session 4 logged out. Waiting for processes to exit. Jan 15 23:51:41.899250 systemd-logind[1871]: Removed session 4. Jan 15 23:51:41.989126 systemd[1]: Started sshd@2-10.200.20.10:22-10.200.16.10:37790.service - OpenSSH per-connection server daemon (10.200.16.10:37790). Jan 15 23:51:42.483811 sshd[2305]: Accepted publickey for core from 10.200.16.10 port 37790 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:51:42.484948 sshd-session[2305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:51:42.488334 systemd-logind[1871]: New session 5 of user core. Jan 15 23:51:42.496838 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 15 23:51:42.833395 sshd[2308]: Connection closed by 10.200.16.10 port 37790 Jan 15 23:51:42.833903 sshd-session[2305]: pam_unix(sshd:session): session closed for user core Jan 15 23:51:42.837526 systemd[1]: sshd@2-10.200.20.10:22-10.200.16.10:37790.service: Deactivated successfully. Jan 15 23:51:42.838869 systemd[1]: session-5.scope: Deactivated successfully. 
Jan 15 23:51:42.839498 systemd-logind[1871]: Session 5 logged out. Waiting for processes to exit.
Jan 15 23:51:42.840527 systemd-logind[1871]: Removed session 5.
Jan 15 23:51:42.922135 systemd[1]: Started sshd@3-10.200.20.10:22-10.200.16.10:37794.service - OpenSSH per-connection server daemon (10.200.16.10:37794).
Jan 15 23:51:43.416268 sshd[2314]: Accepted publickey for core from 10.200.16.10 port 37794 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:51:43.417402 sshd-session[2314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:51:43.421191 systemd-logind[1871]: New session 6 of user core.
Jan 15 23:51:43.428559 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 15 23:51:43.770808 sshd[2317]: Connection closed by 10.200.16.10 port 37794
Jan 15 23:51:43.769893 sshd-session[2314]: pam_unix(sshd:session): session closed for user core
Jan 15 23:51:43.772701 systemd[1]: sshd@3-10.200.20.10:22-10.200.16.10:37794.service: Deactivated successfully.
Jan 15 23:51:43.774597 systemd[1]: session-6.scope: Deactivated successfully.
Jan 15 23:51:43.776545 systemd-logind[1871]: Session 6 logged out. Waiting for processes to exit.
Jan 15 23:51:43.777988 systemd-logind[1871]: Removed session 6.
Jan 15 23:51:43.855960 systemd[1]: Started sshd@4-10.200.20.10:22-10.200.16.10:37796.service - OpenSSH per-connection server daemon (10.200.16.10:37796).
Jan 15 23:51:44.313392 sshd[2323]: Accepted publickey for core from 10.200.16.10 port 37796 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:51:44.314145 sshd-session[2323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:51:44.317732 systemd-logind[1871]: New session 7 of user core.
Jan 15 23:51:44.324633 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 15 23:51:44.710108 sudo[2327]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 15 23:51:44.710333 sudo[2327]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 15 23:51:44.743033 sudo[2327]: pam_unix(sudo:session): session closed for user root
Jan 15 23:51:44.831457 sshd[2326]: Connection closed by 10.200.16.10 port 37796
Jan 15 23:51:44.831120 sshd-session[2323]: pam_unix(sshd:session): session closed for user core
Jan 15 23:51:44.834297 systemd[1]: sshd@4-10.200.20.10:22-10.200.16.10:37796.service: Deactivated successfully.
Jan 15 23:51:44.835911 systemd[1]: session-7.scope: Deactivated successfully.
Jan 15 23:51:44.837173 systemd-logind[1871]: Session 7 logged out. Waiting for processes to exit.
Jan 15 23:51:44.838706 systemd-logind[1871]: Removed session 7.
Jan 15 23:51:44.912612 systemd[1]: Started sshd@5-10.200.20.10:22-10.200.16.10:37800.service - OpenSSH per-connection server daemon (10.200.16.10:37800).
Jan 15 23:51:45.372105 sshd[2333]: Accepted publickey for core from 10.200.16.10 port 37800 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:51:45.373325 sshd-session[2333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:51:45.376943 systemd-logind[1871]: New session 8 of user core.
Jan 15 23:51:45.385563 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 15 23:51:45.629984 sudo[2338]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 15 23:51:45.630759 sudo[2338]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 15 23:51:45.637761 sudo[2338]: pam_unix(sudo:session): session closed for user root
Jan 15 23:51:45.641951 sudo[2337]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 15 23:51:45.642171 sudo[2337]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 15 23:51:45.650204 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 15 23:51:45.677035 augenrules[2360]: No rules
Jan 15 23:51:45.678330 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 15 23:51:45.678773 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 15 23:51:45.680181 sudo[2337]: pam_unix(sudo:session): session closed for user root
Jan 15 23:51:45.769458 sshd[2336]: Connection closed by 10.200.16.10 port 37800
Jan 15 23:51:45.769998 sshd-session[2333]: pam_unix(sshd:session): session closed for user core
Jan 15 23:51:45.773922 systemd-logind[1871]: Session 8 logged out. Waiting for processes to exit.
Jan 15 23:51:45.774284 systemd[1]: sshd@5-10.200.20.10:22-10.200.16.10:37800.service: Deactivated successfully.
Jan 15 23:51:45.775728 systemd[1]: session-8.scope: Deactivated successfully.
Jan 15 23:51:45.777087 systemd-logind[1871]: Removed session 8.
Jan 15 23:51:45.838089 systemd[1]: Started sshd@6-10.200.20.10:22-10.200.16.10:37816.service - OpenSSH per-connection server daemon (10.200.16.10:37816).
Jan 15 23:51:46.254235 sshd[2369]: Accepted publickey for core from 10.200.16.10 port 37816 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:51:46.254975 sshd-session[2369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:51:46.258389 systemd-logind[1871]: New session 9 of user core.
Jan 15 23:51:46.266764 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 15 23:51:46.490092 sudo[2373]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 15 23:51:46.490314 sudo[2373]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 15 23:51:47.884355 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 15 23:51:47.894918 (dockerd)[2391]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 15 23:51:48.700456 dockerd[2391]: time="2026-01-15T23:51:48.700146046Z" level=info msg="Starting up"
Jan 15 23:51:48.701315 dockerd[2391]: time="2026-01-15T23:51:48.701294750Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 15 23:51:48.709963 dockerd[2391]: time="2026-01-15T23:51:48.709878342Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 15 23:51:48.845461 dockerd[2391]: time="2026-01-15T23:51:48.845387086Z" level=info msg="Loading containers: start."
Jan 15 23:51:48.877448 kernel: Initializing XFRM netlink socket
Jan 15 23:51:49.168543 systemd-networkd[1493]: docker0: Link UP
Jan 15 23:51:49.191363 dockerd[2391]: time="2026-01-15T23:51:49.190767854Z" level=info msg="Loading containers: done."
Jan 15 23:51:49.201519 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2138562270-merged.mount: Deactivated successfully.
Jan 15 23:51:49.217467 dockerd[2391]: time="2026-01-15T23:51:49.217228478Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 15 23:51:49.217467 dockerd[2391]: time="2026-01-15T23:51:49.217331262Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 15 23:51:49.217662 dockerd[2391]: time="2026-01-15T23:51:49.217647390Z" level=info msg="Initializing buildkit"
Jan 15 23:51:49.296816 dockerd[2391]: time="2026-01-15T23:51:49.296757606Z" level=info msg="Completed buildkit initialization"
Jan 15 23:51:49.301018 dockerd[2391]: time="2026-01-15T23:51:49.300972558Z" level=info msg="Daemon has completed initialization"
Jan 15 23:51:49.301129 dockerd[2391]: time="2026-01-15T23:51:49.301036534Z" level=info msg="API listen on /run/docker.sock"
Jan 15 23:51:49.303388 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 15 23:51:50.099388 containerd[1896]: time="2026-01-15T23:51:50.099303163Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\""
Jan 15 23:51:50.168593 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 15 23:51:50.170056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 23:51:50.276887 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 23:51:50.280489 (kubelet)[2606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 15 23:51:50.391048 kubelet[2606]: E0115 23:51:50.390774 2606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 15 23:51:50.392734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 15 23:51:50.392850 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 15 23:51:50.393122 systemd[1]: kubelet.service: Consumed 112ms CPU time, 107.4M memory peak.
Jan 15 23:51:51.481905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2657441671.mount: Deactivated successfully.
Jan 15 23:51:52.612690 containerd[1896]: time="2026-01-15T23:51:52.612634195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:51:52.620615 containerd[1896]: time="2026-01-15T23:51:52.620572763Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982"
Jan 15 23:51:52.625123 containerd[1896]: time="2026-01-15T23:51:52.625097357Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:51:52.630596 containerd[1896]: time="2026-01-15T23:51:52.630542824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:51:52.631329 containerd[1896]: time="2026-01-15T23:51:52.630825961Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.531016549s"
Jan 15 23:51:52.631329 containerd[1896]: time="2026-01-15T23:51:52.630859162Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\""
Jan 15 23:51:52.631728 containerd[1896]: time="2026-01-15T23:51:52.631707703Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\""
Jan 15 23:51:53.669456 waagent[2119]: 2026-01-15T23:51:53.668679Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2]
Jan 15 23:51:53.678173 waagent[2119]: 2026-01-15T23:51:53.676621Z INFO ExtHandler
Jan 15 23:51:53.678173 waagent[2119]: 2026-01-15T23:51:53.676732Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: b7c474c3-682a-4ba5-95b3-1751f257fe92 eTag: 16902667704991295183 source: Fabric]
Jan 15 23:51:53.678173 waagent[2119]: 2026-01-15T23:51:53.677039Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jan 15 23:51:53.678173 waagent[2119]: 2026-01-15T23:51:53.677572Z INFO ExtHandler
Jan 15 23:51:53.678173 waagent[2119]: 2026-01-15T23:51:53.677628Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2]
Jan 15 23:51:53.756965 waagent[2119]: 2026-01-15T23:51:53.756137Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jan 15 23:51:53.819748 waagent[2119]: 2026-01-15T23:51:53.819674Z INFO ExtHandler Downloaded certificate {'thumbprint': '8A3BEB5AECBB0BFA25CA4382C1A3BCABBB10C8AC', 'hasPrivateKey': True}
Jan 15 23:51:53.820659 waagent[2119]: 2026-01-15T23:51:53.820616Z INFO ExtHandler Fetch goal state completed
Jan 15 23:51:53.821192 waagent[2119]: 2026-01-15T23:51:53.821157Z INFO ExtHandler ExtHandler
Jan 15 23:51:53.821704 waagent[2119]: 2026-01-15T23:51:53.821667Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 5a90e6b5-b1fd-4fe3-bfc8-eaa41b4f59a7 correlation 44d6c533-79cf-441a-ab9e-64000ba71e0e created: 2026-01-15T23:51:42.358528Z]
Jan 15 23:51:53.822127 waagent[2119]: 2026-01-15T23:51:53.822092Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jan 15 23:51:53.822870 waagent[2119]: 2026-01-15T23:51:53.822834Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 1 ms]
Jan 15 23:51:53.988517 containerd[1896]: time="2026-01-15T23:51:53.988377334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:51:53.992789 containerd[1896]: time="2026-01-15T23:51:53.992748667Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086"
Jan 15 23:51:54.001050 containerd[1896]: time="2026-01-15T23:51:54.001015094Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:51:54.008027 containerd[1896]: time="2026-01-15T23:51:54.007989765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:51:54.009184 containerd[1896]: time="2026-01-15T23:51:54.009153188Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.377421388s"
Jan 15 23:51:54.009204 containerd[1896]: time="2026-01-15T23:51:54.009190958Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\""
Jan 15 23:51:54.009923 containerd[1896]: time="2026-01-15T23:51:54.009900470Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\""
Jan 15 23:51:55.210779 containerd[1896]: time="2026-01-15T23:51:55.210724482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:51:55.215216 containerd[1896]: time="2026-01-15T23:51:55.215029581Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747"
Jan 15 23:51:55.220823 containerd[1896]: time="2026-01-15T23:51:55.220793818Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:51:55.228147 containerd[1896]: time="2026-01-15T23:51:55.228090812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:51:55.228832 containerd[1896]: time="2026-01-15T23:51:55.228802156Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.218852613s"
Jan 15 23:51:55.228955 containerd[1896]: time="2026-01-15T23:51:55.228932744Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\""
Jan 15 23:51:55.229400 containerd[1896]: time="2026-01-15T23:51:55.229370095Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\""
Jan 15 23:51:56.731567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3490660639.mount: Deactivated successfully.
Jan 15 23:51:57.006474 containerd[1896]: time="2026-01-15T23:51:57.006297157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:51:57.011530 containerd[1896]: time="2026-01-15T23:51:57.011480767Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724"
Jan 15 23:51:57.015698 containerd[1896]: time="2026-01-15T23:51:57.015662398Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:51:57.021910 containerd[1896]: time="2026-01-15T23:51:57.021864850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:51:57.022130 containerd[1896]: time="2026-01-15T23:51:57.022094194Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.792685569s"
Jan 15 23:51:57.022130 containerd[1896]: time="2026-01-15T23:51:57.022126683Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\""
Jan 15 23:51:57.022777 containerd[1896]: time="2026-01-15T23:51:57.022756128Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jan 15 23:51:57.713555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3194952089.mount: Deactivated successfully.
Jan 15 23:51:58.773249 containerd[1896]: time="2026-01-15T23:51:58.772592500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:51:58.776994 containerd[1896]: time="2026-01-15T23:51:58.776960785Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Jan 15 23:51:58.781003 containerd[1896]: time="2026-01-15T23:51:58.780976320Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:51:58.785892 containerd[1896]: time="2026-01-15T23:51:58.785836712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:51:58.786358 containerd[1896]: time="2026-01-15T23:51:58.786326906Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.763392892s"
Jan 15 23:51:58.786477 containerd[1896]: time="2026-01-15T23:51:58.786461439Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jan 15 23:51:58.786920 containerd[1896]: time="2026-01-15T23:51:58.786901992Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 15 23:51:59.422498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2590753926.mount: Deactivated successfully.
Jan 15 23:51:59.449976 containerd[1896]: time="2026-01-15T23:51:59.449480858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 15 23:51:59.453227 containerd[1896]: time="2026-01-15T23:51:59.453196055Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jan 15 23:51:59.457413 containerd[1896]: time="2026-01-15T23:51:59.457380229Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 15 23:51:59.462976 containerd[1896]: time="2026-01-15T23:51:59.462523399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 15 23:51:59.462976 containerd[1896]: time="2026-01-15T23:51:59.462683085Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 675.693674ms"
Jan 15 23:51:59.462976 containerd[1896]: time="2026-01-15T23:51:59.462707342Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 15 23:51:59.463324 containerd[1896]: time="2026-01-15T23:51:59.463293228Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jan 15 23:52:00.093821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3250397607.mount: Deactivated successfully.
Jan 15 23:52:00.418761 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 15 23:52:00.420996 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 23:52:01.198487 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 23:52:01.201548 (kubelet)[2769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 15 23:52:01.228402 kubelet[2769]: E0115 23:52:01.228345 2769 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 15 23:52:01.230817 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 15 23:52:01.231056 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 15 23:52:01.231603 systemd[1]: kubelet.service: Consumed 113ms CPU time, 106.9M memory peak.
Jan 15 23:52:01.336280 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jan 15 23:52:02.639533 update_engine[1875]: I20260115 23:52:02.639467 1875 update_attempter.cc:509] Updating boot flags...
Jan 15 23:52:03.019142 containerd[1896]: time="2026-01-15T23:52:03.018915268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:52:03.024307 containerd[1896]: time="2026-01-15T23:52:03.024151073Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165"
Jan 15 23:52:03.028821 containerd[1896]: time="2026-01-15T23:52:03.028762687Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:52:03.034125 containerd[1896]: time="2026-01-15T23:52:03.033892737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:52:03.034537 containerd[1896]: time="2026-01-15T23:52:03.034511976Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.571042918s"
Jan 15 23:52:03.034605 containerd[1896]: time="2026-01-15T23:52:03.034540265Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jan 15 23:52:05.894254 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 23:52:05.894573 systemd[1]: kubelet.service: Consumed 113ms CPU time, 106.9M memory peak.
Jan 15 23:52:05.896539 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 23:52:05.916968 systemd[1]: Reload requested from client PID 2959 ('systemctl') (unit session-9.scope)...
Jan 15 23:52:05.916984 systemd[1]: Reloading...
Jan 15 23:52:06.017445 zram_generator::config[3006]: No configuration found.
Jan 15 23:52:06.178089 systemd[1]: Reloading finished in 260 ms.
Jan 15 23:52:06.236824 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 15 23:52:06.236890 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 15 23:52:06.237079 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 23:52:06.237120 systemd[1]: kubelet.service: Consumed 77ms CPU time, 95.2M memory peak.
Jan 15 23:52:06.238658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 23:52:06.414787 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 23:52:06.424837 (kubelet)[3073]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 15 23:52:06.450977 kubelet[3073]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 15 23:52:06.452441 kubelet[3073]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 15 23:52:06.452441 kubelet[3073]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 15 23:52:06.452441 kubelet[3073]: I0115 23:52:06.451330 3073 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 15 23:52:06.694215 kubelet[3073]: I0115 23:52:06.694165 3073 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 15 23:52:06.694215 kubelet[3073]: I0115 23:52:06.694202 3073 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 15 23:52:06.694574 kubelet[3073]: I0115 23:52:06.694550 3073 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 15 23:52:06.714395 kubelet[3073]: E0115 23:52:06.714282 3073 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError"
Jan 15 23:52:06.714832 kubelet[3073]: I0115 23:52:06.714510 3073 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 15 23:52:06.720117 kubelet[3073]: I0115 23:52:06.720092 3073 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 15 23:52:06.722703 kubelet[3073]: I0115 23:52:06.722679 3073 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 15 23:52:06.723369 kubelet[3073]: I0115 23:52:06.723323 3073 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 15 23:52:06.723527 kubelet[3073]: I0115 23:52:06.723369 3073 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-n-e85017da3c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 15 23:52:06.723606 kubelet[3073]: I0115 23:52:06.723536 3073 topology_manager.go:138] "Creating topology manager with none policy"
Jan 15 23:52:06.723606 kubelet[3073]: I0115 23:52:06.723544 3073 container_manager_linux.go:304] "Creating device plugin manager"
Jan 15 23:52:06.723699 kubelet[3073]: I0115 23:52:06.723684 3073 state_mem.go:36] "Initialized new in-memory state store"
Jan 15 23:52:06.726162 kubelet[3073]: I0115 23:52:06.726142 3073 kubelet.go:446] "Attempting to sync node with API server"
Jan 15 23:52:06.726196 kubelet[3073]: I0115 23:52:06.726167 3073 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 15 23:52:06.726196 kubelet[3073]: I0115 23:52:06.726187 3073 kubelet.go:352] "Adding apiserver pod source"
Jan 15 23:52:06.726196 kubelet[3073]: I0115 23:52:06.726195 3073 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 15 23:52:06.733381 kubelet[3073]: W0115 23:52:06.733327 3073 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-n-e85017da3c&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused
Jan 15 23:52:06.733574 kubelet[3073]: E0115 23:52:06.733555 3073 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-n-e85017da3c&limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError"
Jan 15 23:52:06.733787 kubelet[3073]: I0115 23:52:06.733482 3073 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 15 23:52:06.733915 kubelet[3073]: W0115 23:52:06.733338 3073 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused
Jan 15 23:52:06.733983 kubelet[3073]: E0115 23:52:06.733969 3073 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError"
Jan 15 23:52:06.734162 kubelet[3073]: I0115 23:52:06.734144 3073 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 15 23:52:06.734213 kubelet[3073]: W0115 23:52:06.734203 3073 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 15 23:52:06.735210 kubelet[3073]: I0115 23:52:06.735178 3073 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 15 23:52:06.735210 kubelet[3073]: I0115 23:52:06.735217 3073 server.go:1287] "Started kubelet"
Jan 15 23:52:06.739034 kubelet[3073]: I0115 23:52:06.738979 3073 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 15 23:52:06.739276 kubelet[3073]: I0115 23:52:06.739245 3073 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 15 23:52:06.739461 kubelet[3073]: I0115 23:52:06.739442 3073 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 15 23:52:06.739598 kubelet[3073]: I0115 23:52:06.739580 3073 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 15 23:52:06.740549 kubelet[3073]: I0115 23:52:06.740214 3073 server.go:479] "Adding debug handlers to kubelet server"
Jan 15 23:52:06.745019 kubelet[3073]: I0115 23:52:06.744991 3073 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 15 23:52:06.745175 kubelet[3073]: I0115 23:52:06.745154 3073 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 15 23:52:06.745349 kubelet[3073]: E0115 23:52:06.745327 3073 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-e85017da3c\" not found"
Jan 15 23:52:06.746277 kubelet[3073]: E0115 23:52:06.745583 3073 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-e85017da3c?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="200ms"
Jan 15 23:52:06.746786 kubelet[3073]: E0115 23:52:06.746651 3073 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.10:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-n-e85017da3c.188b0c9840d339eb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-n-e85017da3c,UID:ci-4459.2.2-n-e85017da3c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-n-e85017da3c,},FirstTimestamp:2026-01-15 23:52:06.735198699 +0000 UTC m=+0.307424702,LastTimestamp:2026-01-15 23:52:06.735198699 +0000 UTC m=+0.307424702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-n-e85017da3c,}"
Jan 15 23:52:06.747117 kubelet[3073]: I0115 23:52:06.747098 3073 factory.go:221] Registration of the systemd container factory successfully
Jan 15 23:52:06.747259 kubelet[3073]: I0115 23:52:06.747244 3073 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 15
23:52:06.747729 kubelet[3073]: I0115 23:52:06.747707 3073 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 15 23:52:06.747788 kubelet[3073]: I0115 23:52:06.747762 3073 reconciler.go:26] "Reconciler: start to sync state" Jan 15 23:52:06.748932 kubelet[3073]: I0115 23:52:06.748914 3073 factory.go:221] Registration of the containerd container factory successfully Jan 15 23:52:06.766330 kubelet[3073]: W0115 23:52:06.766284 3073 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Jan 15 23:52:06.766330 kubelet[3073]: E0115 23:52:06.766339 3073 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Jan 15 23:52:06.768123 kubelet[3073]: E0115 23:52:06.768107 3073 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 23:52:06.771019 kubelet[3073]: I0115 23:52:06.771003 3073 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 15 23:52:06.771109 kubelet[3073]: I0115 23:52:06.771100 3073 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 15 23:52:06.771165 kubelet[3073]: I0115 23:52:06.771158 3073 state_mem.go:36] "Initialized new in-memory state store" Jan 15 23:52:06.845765 kubelet[3073]: E0115 23:52:06.845722 3073 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-e85017da3c\" not found" Jan 15 23:52:06.941101 kubelet[3073]: I0115 23:52:06.940785 3073 policy_none.go:49] "None policy: Start" Jan 15 23:52:06.941101 kubelet[3073]: I0115 23:52:06.940826 3073 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 15 23:52:06.941101 kubelet[3073]: I0115 23:52:06.940839 3073 state_mem.go:35] "Initializing new in-memory state store" Jan 15 23:52:06.946101 kubelet[3073]: E0115 23:52:06.946071 3073 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-e85017da3c\" not found" Jan 15 23:52:06.946646 kubelet[3073]: E0115 23:52:06.946614 3073 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-e85017da3c?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="400ms" Jan 15 23:52:06.952737 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 15 23:52:06.954699 kubelet[3073]: I0115 23:52:06.954659 3073 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 23:52:06.955860 kubelet[3073]: I0115 23:52:06.955843 3073 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 15 23:52:06.956163 kubelet[3073]: I0115 23:52:06.955939 3073 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 15 23:52:06.956163 kubelet[3073]: I0115 23:52:06.955958 3073 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 15 23:52:06.956163 kubelet[3073]: I0115 23:52:06.955964 3073 kubelet.go:2382] "Starting kubelet main sync loop" Jan 15 23:52:06.956163 kubelet[3073]: E0115 23:52:06.955999 3073 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 23:52:06.959839 kubelet[3073]: W0115 23:52:06.959725 3073 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Jan 15 23:52:06.959839 kubelet[3073]: E0115 23:52:06.959770 3073 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Jan 15 23:52:06.963256 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 15 23:52:06.966556 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 15 23:52:06.976453 kubelet[3073]: I0115 23:52:06.976160 3073 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 23:52:06.976453 kubelet[3073]: I0115 23:52:06.976374 3073 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 23:52:06.976453 kubelet[3073]: I0115 23:52:06.976384 3073 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 23:52:06.977945 kubelet[3073]: I0115 23:52:06.977916 3073 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 23:52:06.978996 kubelet[3073]: E0115 23:52:06.978981 3073 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 15 23:52:06.979115 kubelet[3073]: E0115 23:52:06.979104 3073 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-n-e85017da3c\" not found" Jan 15 23:52:07.065610 systemd[1]: Created slice kubepods-burstable-podaf860bcbe21f3a66625c218377a1387f.slice - libcontainer container kubepods-burstable-podaf860bcbe21f3a66625c218377a1387f.slice. Jan 15 23:52:07.075726 kubelet[3073]: E0115 23:52:07.075661 3073 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-e85017da3c\" not found" node="ci-4459.2.2-n-e85017da3c" Jan 15 23:52:07.078248 systemd[1]: Created slice kubepods-burstable-podc59ac9ef0d19206309bb0df9a5104b77.slice - libcontainer container kubepods-burstable-podc59ac9ef0d19206309bb0df9a5104b77.slice. 
Jan 15 23:52:07.080823 kubelet[3073]: I0115 23:52:07.080724 3073 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-e85017da3c" Jan 15 23:52:07.081224 kubelet[3073]: E0115 23:52:07.081062 3073 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4459.2.2-n-e85017da3c" Jan 15 23:52:07.081503 kubelet[3073]: E0115 23:52:07.081481 3073 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-e85017da3c\" not found" node="ci-4459.2.2-n-e85017da3c" Jan 15 23:52:07.082553 systemd[1]: Created slice kubepods-burstable-pod8226e8bc2cde9612e4b675fa4cee34ad.slice - libcontainer container kubepods-burstable-pod8226e8bc2cde9612e4b675fa4cee34ad.slice. Jan 15 23:52:07.084450 kubelet[3073]: E0115 23:52:07.084374 3073 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-e85017da3c\" not found" node="ci-4459.2.2-n-e85017da3c" Jan 15 23:52:07.148935 kubelet[3073]: I0115 23:52:07.148824 3073 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af860bcbe21f3a66625c218377a1387f-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-n-e85017da3c\" (UID: \"af860bcbe21f3a66625c218377a1387f\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:07.148935 kubelet[3073]: I0115 23:52:07.148871 3073 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af860bcbe21f3a66625c218377a1387f-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-n-e85017da3c\" (UID: \"af860bcbe21f3a66625c218377a1387f\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:07.148935 kubelet[3073]: I0115 23:52:07.148886 
3073 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af860bcbe21f3a66625c218377a1387f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-n-e85017da3c\" (UID: \"af860bcbe21f3a66625c218377a1387f\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:07.148935 kubelet[3073]: I0115 23:52:07.148903 3073 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c59ac9ef0d19206309bb0df9a5104b77-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-e85017da3c\" (UID: \"c59ac9ef0d19206309bb0df9a5104b77\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:07.249791 kubelet[3073]: I0115 23:52:07.249567 3073 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c59ac9ef0d19206309bb0df9a5104b77-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-n-e85017da3c\" (UID: \"c59ac9ef0d19206309bb0df9a5104b77\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:07.249791 kubelet[3073]: I0115 23:52:07.249616 3073 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c59ac9ef0d19206309bb0df9a5104b77-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-n-e85017da3c\" (UID: \"c59ac9ef0d19206309bb0df9a5104b77\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:07.249791 kubelet[3073]: I0115 23:52:07.249631 3073 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8226e8bc2cde9612e4b675fa4cee34ad-kubeconfig\") pod 
\"kube-scheduler-ci-4459.2.2-n-e85017da3c\" (UID: \"8226e8bc2cde9612e4b675fa4cee34ad\") " pod="kube-system/kube-scheduler-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:07.249791 kubelet[3073]: I0115 23:52:07.249662 3073 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c59ac9ef0d19206309bb0df9a5104b77-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-e85017da3c\" (UID: \"c59ac9ef0d19206309bb0df9a5104b77\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:07.249791 kubelet[3073]: I0115 23:52:07.249673 3073 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c59ac9ef0d19206309bb0df9a5104b77-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-n-e85017da3c\" (UID: \"c59ac9ef0d19206309bb0df9a5104b77\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:07.283477 kubelet[3073]: I0115 23:52:07.283447 3073 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-e85017da3c" Jan 15 23:52:07.283792 kubelet[3073]: E0115 23:52:07.283769 3073 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4459.2.2-n-e85017da3c" Jan 15 23:52:07.347440 kubelet[3073]: E0115 23:52:07.347372 3073 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-e85017da3c?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="800ms" Jan 15 23:52:07.376941 containerd[1896]: time="2026-01-15T23:52:07.376897526Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-n-e85017da3c,Uid:af860bcbe21f3a66625c218377a1387f,Namespace:kube-system,Attempt:0,}" Jan 15 23:52:07.383175 containerd[1896]: time="2026-01-15T23:52:07.382849937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-n-e85017da3c,Uid:c59ac9ef0d19206309bb0df9a5104b77,Namespace:kube-system,Attempt:0,}" Jan 15 23:52:07.386743 containerd[1896]: time="2026-01-15T23:52:07.386649254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-n-e85017da3c,Uid:8226e8bc2cde9612e4b675fa4cee34ad,Namespace:kube-system,Attempt:0,}" Jan 15 23:52:07.659103 kubelet[3073]: W0115 23:52:07.658935 3073 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Jan 15 23:52:07.659103 kubelet[3073]: E0115 23:52:07.658999 3073 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Jan 15 23:52:07.686105 kubelet[3073]: I0115 23:52:07.686077 3073 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-e85017da3c" Jan 15 23:52:07.686551 kubelet[3073]: E0115 23:52:07.686526 3073 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4459.2.2-n-e85017da3c" Jan 15 23:52:07.969212 kubelet[3073]: W0115 23:52:07.969150 3073 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-n-e85017da3c&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Jan 15 23:52:07.969212 kubelet[3073]: E0115 23:52:07.969218 3073 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-n-e85017da3c&limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Jan 15 23:52:08.148659 kubelet[3073]: E0115 23:52:08.148609 3073 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-e85017da3c?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="1.6s" Jan 15 23:52:08.310120 kubelet[3073]: W0115 23:52:08.309984 3073 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Jan 15 23:52:08.310120 kubelet[3073]: E0115 23:52:08.310056 3073 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Jan 15 23:52:08.460428 kubelet[3073]: W0115 23:52:08.460358 3073 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Jan 15 23:52:08.460428 kubelet[3073]: E0115 23:52:08.460438 
3073 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Jan 15 23:52:08.487940 kubelet[3073]: I0115 23:52:08.487912 3073 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-e85017da3c" Jan 15 23:52:08.488263 kubelet[3073]: E0115 23:52:08.488239 3073 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4459.2.2-n-e85017da3c" Jan 15 23:52:08.602973 containerd[1896]: time="2026-01-15T23:52:08.602804628Z" level=info msg="connecting to shim c282931ea28c75e9c502291956844c08ce5622c3b13e9475c7a6ae0763ac45ce" address="unix:///run/containerd/s/52856fc3c54c28528cb82c0870a2c373f516bc20f9d21e42c205f7311d27d39b" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:52:08.629585 systemd[1]: Started cri-containerd-c282931ea28c75e9c502291956844c08ce5622c3b13e9475c7a6ae0763ac45ce.scope - libcontainer container c282931ea28c75e9c502291956844c08ce5622c3b13e9475c7a6ae0763ac45ce. 
Jan 15 23:52:08.633999 containerd[1896]: time="2026-01-15T23:52:08.633955513Z" level=info msg="connecting to shim 8bece1b91075686b7c5ce2fe88b01ddbf73096429b0a0cf4ef16ffd20eb748dd" address="unix:///run/containerd/s/08b692ecaceb28925bd72912be49d9adaab3708ce28ad66e73c88c377c3603f8" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:52:08.635992 containerd[1896]: time="2026-01-15T23:52:08.635953288Z" level=info msg="connecting to shim c5b6cd5de21ade3ff6c052b3c4dffdebeb12bcb843ea8fe2f778a31b830c7635" address="unix:///run/containerd/s/598e41d3f99de55ceb7e1e4b9e78f92f7bbd792d586a1804bbda778293f337f5" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:52:08.664578 systemd[1]: Started cri-containerd-8bece1b91075686b7c5ce2fe88b01ddbf73096429b0a0cf4ef16ffd20eb748dd.scope - libcontainer container 8bece1b91075686b7c5ce2fe88b01ddbf73096429b0a0cf4ef16ffd20eb748dd. Jan 15 23:52:08.669150 systemd[1]: Started cri-containerd-c5b6cd5de21ade3ff6c052b3c4dffdebeb12bcb843ea8fe2f778a31b830c7635.scope - libcontainer container c5b6cd5de21ade3ff6c052b3c4dffdebeb12bcb843ea8fe2f778a31b830c7635. 
Jan 15 23:52:08.691607 containerd[1896]: time="2026-01-15T23:52:08.691141227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-n-e85017da3c,Uid:af860bcbe21f3a66625c218377a1387f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c282931ea28c75e9c502291956844c08ce5622c3b13e9475c7a6ae0763ac45ce\"" Jan 15 23:52:08.700305 containerd[1896]: time="2026-01-15T23:52:08.700252320Z" level=info msg="CreateContainer within sandbox \"c282931ea28c75e9c502291956844c08ce5622c3b13e9475c7a6ae0763ac45ce\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 15 23:52:08.718876 containerd[1896]: time="2026-01-15T23:52:08.718826233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-n-e85017da3c,Uid:8226e8bc2cde9612e4b675fa4cee34ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bece1b91075686b7c5ce2fe88b01ddbf73096429b0a0cf4ef16ffd20eb748dd\"" Jan 15 23:52:08.721387 containerd[1896]: time="2026-01-15T23:52:08.721361111Z" level=info msg="CreateContainer within sandbox \"8bece1b91075686b7c5ce2fe88b01ddbf73096429b0a0cf4ef16ffd20eb748dd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 15 23:52:08.728013 containerd[1896]: time="2026-01-15T23:52:08.727936166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-n-e85017da3c,Uid:c59ac9ef0d19206309bb0df9a5104b77,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5b6cd5de21ade3ff6c052b3c4dffdebeb12bcb843ea8fe2f778a31b830c7635\"" Jan 15 23:52:08.730312 containerd[1896]: time="2026-01-15T23:52:08.730279492Z" level=info msg="CreateContainer within sandbox \"c5b6cd5de21ade3ff6c052b3c4dffdebeb12bcb843ea8fe2f778a31b830c7635\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 15 23:52:08.744384 containerd[1896]: time="2026-01-15T23:52:08.744087373Z" level=info msg="Container a8b9a93531d51c18d8b14f11772a00f4d40e013bf4d6bca969c4bd7f1236a7c5: CDI devices 
from CRI Config.CDIDevices: []" Jan 15 23:52:08.794665 containerd[1896]: time="2026-01-15T23:52:08.794615581Z" level=info msg="Container 89489d9898ac6d497a85a5e4cd1989286b210e74c31ce30e978383af60acddb2: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:52:08.805614 containerd[1896]: time="2026-01-15T23:52:08.805571667Z" level=info msg="CreateContainer within sandbox \"c282931ea28c75e9c502291956844c08ce5622c3b13e9475c7a6ae0763ac45ce\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a8b9a93531d51c18d8b14f11772a00f4d40e013bf4d6bca969c4bd7f1236a7c5\"" Jan 15 23:52:08.807086 containerd[1896]: time="2026-01-15T23:52:08.806325123Z" level=info msg="StartContainer for \"a8b9a93531d51c18d8b14f11772a00f4d40e013bf4d6bca969c4bd7f1236a7c5\"" Jan 15 23:52:08.807345 containerd[1896]: time="2026-01-15T23:52:08.807322335Z" level=info msg="connecting to shim a8b9a93531d51c18d8b14f11772a00f4d40e013bf4d6bca969c4bd7f1236a7c5" address="unix:///run/containerd/s/52856fc3c54c28528cb82c0870a2c373f516bc20f9d21e42c205f7311d27d39b" protocol=ttrpc version=3 Jan 15 23:52:08.812084 containerd[1896]: time="2026-01-15T23:52:08.812042580Z" level=info msg="Container 187b3a2fd3f9db7e5019821279372316d45a195af9cecc4ecb48b474ea942619: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:52:08.821568 systemd[1]: Started cri-containerd-a8b9a93531d51c18d8b14f11772a00f4d40e013bf4d6bca969c4bd7f1236a7c5.scope - libcontainer container a8b9a93531d51c18d8b14f11772a00f4d40e013bf4d6bca969c4bd7f1236a7c5. 
Jan 15 23:52:08.835327 containerd[1896]: time="2026-01-15T23:52:08.835271576Z" level=info msg="CreateContainer within sandbox \"8bece1b91075686b7c5ce2fe88b01ddbf73096429b0a0cf4ef16ffd20eb748dd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"89489d9898ac6d497a85a5e4cd1989286b210e74c31ce30e978383af60acddb2\"" Jan 15 23:52:08.836622 containerd[1896]: time="2026-01-15T23:52:08.836595930Z" level=info msg="StartContainer for \"89489d9898ac6d497a85a5e4cd1989286b210e74c31ce30e978383af60acddb2\"" Jan 15 23:52:08.839035 containerd[1896]: time="2026-01-15T23:52:08.839008451Z" level=info msg="connecting to shim 89489d9898ac6d497a85a5e4cd1989286b210e74c31ce30e978383af60acddb2" address="unix:///run/containerd/s/08b692ecaceb28925bd72912be49d9adaab3708ce28ad66e73c88c377c3603f8" protocol=ttrpc version=3 Jan 15 23:52:08.857347 containerd[1896]: time="2026-01-15T23:52:08.855208052Z" level=info msg="CreateContainer within sandbox \"c5b6cd5de21ade3ff6c052b3c4dffdebeb12bcb843ea8fe2f778a31b830c7635\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"187b3a2fd3f9db7e5019821279372316d45a195af9cecc4ecb48b474ea942619\"" Jan 15 23:52:08.858379 containerd[1896]: time="2026-01-15T23:52:08.858028727Z" level=info msg="StartContainer for \"187b3a2fd3f9db7e5019821279372316d45a195af9cecc4ecb48b474ea942619\"" Jan 15 23:52:08.859603 systemd[1]: Started cri-containerd-89489d9898ac6d497a85a5e4cd1989286b210e74c31ce30e978383af60acddb2.scope - libcontainer container 89489d9898ac6d497a85a5e4cd1989286b210e74c31ce30e978383af60acddb2. 
Jan 15 23:52:08.861897 containerd[1896]: time="2026-01-15T23:52:08.861771154Z" level=info msg="connecting to shim 187b3a2fd3f9db7e5019821279372316d45a195af9cecc4ecb48b474ea942619" address="unix:///run/containerd/s/598e41d3f99de55ceb7e1e4b9e78f92f7bbd792d586a1804bbda778293f337f5" protocol=ttrpc version=3 Jan 15 23:52:08.882417 containerd[1896]: time="2026-01-15T23:52:08.882380260Z" level=info msg="StartContainer for \"a8b9a93531d51c18d8b14f11772a00f4d40e013bf4d6bca969c4bd7f1236a7c5\" returns successfully" Jan 15 23:52:08.884583 systemd[1]: Started cri-containerd-187b3a2fd3f9db7e5019821279372316d45a195af9cecc4ecb48b474ea942619.scope - libcontainer container 187b3a2fd3f9db7e5019821279372316d45a195af9cecc4ecb48b474ea942619. Jan 15 23:52:08.897773 kubelet[3073]: E0115 23:52:08.897733 3073 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Jan 15 23:52:08.948321 containerd[1896]: time="2026-01-15T23:52:08.948280098Z" level=info msg="StartContainer for \"89489d9898ac6d497a85a5e4cd1989286b210e74c31ce30e978383af60acddb2\" returns successfully" Jan 15 23:52:08.948636 containerd[1896]: time="2026-01-15T23:52:08.948619656Z" level=info msg="StartContainer for \"187b3a2fd3f9db7e5019821279372316d45a195af9cecc4ecb48b474ea942619\" returns successfully" Jan 15 23:52:08.968562 kubelet[3073]: E0115 23:52:08.967589 3073 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-e85017da3c\" not found" node="ci-4459.2.2-n-e85017da3c" Jan 15 23:52:08.972506 kubelet[3073]: E0115 23:52:08.972100 3073 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"ci-4459.2.2-n-e85017da3c\" not found" node="ci-4459.2.2-n-e85017da3c" Jan 15 23:52:08.974259 kubelet[3073]: E0115 23:52:08.973824 3073 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-e85017da3c\" not found" node="ci-4459.2.2-n-e85017da3c" Jan 15 23:52:09.929265 kubelet[3073]: E0115 23:52:09.929225 3073 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.2-n-e85017da3c\" not found" node="ci-4459.2.2-n-e85017da3c" Jan 15 23:52:09.976571 kubelet[3073]: E0115 23:52:09.976241 3073 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-e85017da3c\" not found" node="ci-4459.2.2-n-e85017da3c" Jan 15 23:52:09.977498 kubelet[3073]: E0115 23:52:09.977471 3073 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-e85017da3c\" not found" node="ci-4459.2.2-n-e85017da3c" Jan 15 23:52:10.090802 kubelet[3073]: I0115 23:52:10.090767 3073 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-e85017da3c" Jan 15 23:52:10.100845 kubelet[3073]: I0115 23:52:10.100669 3073 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-n-e85017da3c" Jan 15 23:52:10.147199 kubelet[3073]: I0115 23:52:10.146650 3073 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:10.154701 kubelet[3073]: E0115 23:52:10.154667 3073 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-n-e85017da3c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:10.155076 kubelet[3073]: I0115 23:52:10.154863 3073 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:10.158588 kubelet[3073]: E0115 23:52:10.158562 3073 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-n-e85017da3c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:10.158718 kubelet[3073]: I0115 23:52:10.158706 3073 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:10.160078 kubelet[3073]: E0115 23:52:10.160054 3073 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-n-e85017da3c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:10.730114 kubelet[3073]: I0115 23:52:10.729852 3073 apiserver.go:52] "Watching apiserver" Jan 15 23:52:10.748259 kubelet[3073]: I0115 23:52:10.748211 3073 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 15 23:52:11.958041 systemd[1]: Reload requested from client PID 3344 ('systemctl') (unit session-9.scope)... Jan 15 23:52:11.958057 systemd[1]: Reloading... Jan 15 23:52:12.051765 zram_generator::config[3391]: No configuration found. Jan 15 23:52:12.223590 systemd[1]: Reloading finished in 265 ms. Jan 15 23:52:12.256960 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 23:52:12.268869 systemd[1]: kubelet.service: Deactivated successfully. Jan 15 23:52:12.269218 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 23:52:12.269265 systemd[1]: kubelet.service: Consumed 568ms CPU time, 124.8M memory peak. Jan 15 23:52:12.271220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 23:52:12.384633 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 15 23:52:12.393803 (kubelet)[3455]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 23:52:12.426950 kubelet[3455]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 23:52:12.426950 kubelet[3455]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 15 23:52:12.426950 kubelet[3455]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 23:52:12.426950 kubelet[3455]: I0115 23:52:12.426781 3455 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 23:52:12.431870 kubelet[3455]: I0115 23:52:12.431830 3455 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 15 23:52:12.431870 kubelet[3455]: I0115 23:52:12.431862 3455 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 23:52:12.432062 kubelet[3455]: I0115 23:52:12.432043 3455 server.go:954] "Client rotation is on, will bootstrap in background" Jan 15 23:52:12.433004 kubelet[3455]: I0115 23:52:12.432984 3455 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 15 23:52:12.434764 kubelet[3455]: I0115 23:52:12.434717 3455 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 23:52:12.438442 kubelet[3455]: I0115 23:52:12.437838 3455 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 15 23:52:12.440459 kubelet[3455]: I0115 23:52:12.440439 3455 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 15 23:52:12.440634 kubelet[3455]: I0115 23:52:12.440609 3455 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 23:52:12.443533 kubelet[3455]: I0115 23:52:12.440632 3455 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-n-e85017da3c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"
none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 15 23:52:12.443630 kubelet[3455]: I0115 23:52:12.443546 3455 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 23:52:12.443630 kubelet[3455]: I0115 23:52:12.443558 3455 container_manager_linux.go:304] "Creating device plugin manager" Jan 15 23:52:12.443630 kubelet[3455]: I0115 23:52:12.443606 3455 state_mem.go:36] "Initialized new in-memory state store" Jan 15 23:52:12.443740 kubelet[3455]: I0115 23:52:12.443729 3455 kubelet.go:446] "Attempting to sync node with API server" Jan 15 23:52:12.443769 kubelet[3455]: I0115 23:52:12.443741 3455 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 23:52:12.443769 kubelet[3455]: I0115 23:52:12.443763 3455 kubelet.go:352] "Adding apiserver pod source" Jan 15 23:52:12.443769 kubelet[3455]: I0115 23:52:12.443773 3455 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 23:52:12.447876 kubelet[3455]: I0115 23:52:12.447852 3455 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 15 23:52:12.448746 kubelet[3455]: I0115 23:52:12.448674 3455 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 23:52:12.450023 kubelet[3455]: I0115 23:52:12.449041 3455 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 15 23:52:12.450023 kubelet[3455]: I0115 23:52:12.449072 3455 server.go:1287] "Started kubelet" Jan 15 23:52:12.452335 kubelet[3455]: I0115 23:52:12.451891 3455 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 23:52:12.461640 kubelet[3455]: I0115 
23:52:12.461120 3455 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 23:52:12.463503 kubelet[3455]: I0115 23:52:12.462618 3455 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 15 23:52:12.463881 kubelet[3455]: I0115 23:52:12.463601 3455 server.go:479] "Adding debug handlers to kubelet server" Jan 15 23:52:12.464528 kubelet[3455]: I0115 23:52:12.464378 3455 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 23:52:12.465203 kubelet[3455]: I0115 23:52:12.465174 3455 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 15 23:52:12.474935 kubelet[3455]: I0115 23:52:12.474832 3455 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 23:52:12.480613 kubelet[3455]: I0115 23:52:12.480503 3455 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 23:52:12.484229 kubelet[3455]: I0115 23:52:12.484210 3455 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 15 23:52:12.484442 kubelet[3455]: I0115 23:52:12.484387 3455 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 15 23:52:12.484442 kubelet[3455]: I0115 23:52:12.484410 3455 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 15 23:52:12.484442 kubelet[3455]: I0115 23:52:12.484416 3455 kubelet.go:2382] "Starting kubelet main sync loop" Jan 15 23:52:12.484581 kubelet[3455]: E0115 23:52:12.484565 3455 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 23:52:12.493692 kubelet[3455]: I0115 23:52:12.493586 3455 factory.go:221] Registration of the containerd container factory successfully Jan 15 23:52:12.494478 kubelet[3455]: I0115 23:52:12.493816 3455 factory.go:221] Registration of the systemd container factory successfully Jan 15 23:52:12.494478 kubelet[3455]: I0115 23:52:12.493909 3455 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 23:52:12.494664 kubelet[3455]: E0115 23:52:12.494646 3455 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 23:52:12.532524 kubelet[3455]: I0115 23:52:12.532496 3455 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 15 23:52:12.532524 kubelet[3455]: I0115 23:52:12.532513 3455 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 15 23:52:12.532524 kubelet[3455]: I0115 23:52:12.532534 3455 state_mem.go:36] "Initialized new in-memory state store" Jan 15 23:52:12.532687 kubelet[3455]: I0115 23:52:12.532678 3455 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 15 23:52:12.532705 kubelet[3455]: I0115 23:52:12.532686 3455 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 15 23:52:12.532705 kubelet[3455]: I0115 23:52:12.532700 3455 policy_none.go:49] "None policy: Start" Jan 15 23:52:12.532744 kubelet[3455]: I0115 23:52:12.532708 3455 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 15 23:52:12.532744 kubelet[3455]: I0115 23:52:12.532715 3455 state_mem.go:35] "Initializing new in-memory state store" Jan 15 23:52:12.532797 kubelet[3455]: I0115 23:52:12.532784 3455 state_mem.go:75] "Updated machine memory state" Jan 15 23:52:12.536259 kubelet[3455]: I0115 23:52:12.536237 3455 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 23:52:12.585450 kubelet[3455]: E0115 23:52:12.585402 3455 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 15 23:52:12.785250 kubelet[3455]: I0115 23:52:12.784576 3455 reconciler.go:26] "Reconciler: start to sync state" Jan 15 23:52:12.785250 kubelet[3455]: I0115 23:52:12.784691 3455 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 23:52:12.785250 kubelet[3455]: I0115 23:52:12.784708 3455 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 23:52:12.785250 kubelet[3455]: I0115 
23:52:12.785026 3455 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 23:52:12.785866 kubelet[3455]: I0115 23:52:12.785773 3455 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 23:52:12.786203 kubelet[3455]: I0115 23:52:12.786182 3455 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:12.786865 kubelet[3455]: I0115 23:52:12.786765 3455 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:12.787143 kubelet[3455]: I0115 23:52:12.787130 3455 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:12.788827 kubelet[3455]: E0115 23:52:12.788799 3455 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 15 23:52:12.797049 kubelet[3455]: W0115 23:52:12.797018 3455 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 23:52:12.801982 kubelet[3455]: W0115 23:52:12.801954 3455 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 23:52:12.803337 kubelet[3455]: W0115 23:52:12.803302 3455 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 23:52:12.885500 kubelet[3455]: I0115 23:52:12.885451 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c59ac9ef0d19206309bb0df9a5104b77-k8s-certs\") pod 
\"kube-controller-manager-ci-4459.2.2-n-e85017da3c\" (UID: \"c59ac9ef0d19206309bb0df9a5104b77\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:12.885500 kubelet[3455]: I0115 23:52:12.885496 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c59ac9ef0d19206309bb0df9a5104b77-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-n-e85017da3c\" (UID: \"c59ac9ef0d19206309bb0df9a5104b77\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:12.885681 kubelet[3455]: I0115 23:52:12.885520 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af860bcbe21f3a66625c218377a1387f-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-n-e85017da3c\" (UID: \"af860bcbe21f3a66625c218377a1387f\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:12.885681 kubelet[3455]: I0115 23:52:12.885531 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c59ac9ef0d19206309bb0df9a5104b77-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-n-e85017da3c\" (UID: \"c59ac9ef0d19206309bb0df9a5104b77\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:12.885681 kubelet[3455]: I0115 23:52:12.885543 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c59ac9ef0d19206309bb0df9a5104b77-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-e85017da3c\" (UID: \"c59ac9ef0d19206309bb0df9a5104b77\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:12.885681 kubelet[3455]: I0115 23:52:12.885553 3455 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c59ac9ef0d19206309bb0df9a5104b77-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-n-e85017da3c\" (UID: \"c59ac9ef0d19206309bb0df9a5104b77\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:12.885681 kubelet[3455]: I0115 23:52:12.885563 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8226e8bc2cde9612e4b675fa4cee34ad-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-n-e85017da3c\" (UID: \"8226e8bc2cde9612e4b675fa4cee34ad\") " pod="kube-system/kube-scheduler-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:12.885761 kubelet[3455]: I0115 23:52:12.885572 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af860bcbe21f3a66625c218377a1387f-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-n-e85017da3c\" (UID: \"af860bcbe21f3a66625c218377a1387f\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:12.885761 kubelet[3455]: I0115 23:52:12.885582 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af860bcbe21f3a66625c218377a1387f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-n-e85017da3c\" (UID: \"af860bcbe21f3a66625c218377a1387f\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-e85017da3c" Jan 15 23:52:12.896478 kubelet[3455]: I0115 23:52:12.896182 3455 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-e85017da3c" Jan 15 23:52:12.907437 kubelet[3455]: I0115 23:52:12.907133 3455 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.2-n-e85017da3c" Jan 15 23:52:12.907437 kubelet[3455]: I0115 23:52:12.907220 3455 
kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-n-e85017da3c" Jan 15 23:52:12.978376 sudo[3491]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 15 23:52:12.978639 sudo[3491]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 15 23:52:13.224332 sudo[3491]: pam_unix(sudo:session): session closed for user root Jan 15 23:52:13.446780 kubelet[3455]: I0115 23:52:13.446565 3455 apiserver.go:52] "Watching apiserver" Jan 15 23:52:13.465914 kubelet[3455]: I0115 23:52:13.465856 3455 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 15 23:52:13.545652 kubelet[3455]: I0115 23:52:13.545464 3455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.2-n-e85017da3c" podStartSLOduration=1.5454480510000002 podStartE2EDuration="1.545448051s" podCreationTimestamp="2026-01-15 23:52:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:52:13.545282006 +0000 UTC m=+1.146894939" watchObservedRunningTime="2026-01-15 23:52:13.545448051 +0000 UTC m=+1.147060984" Jan 15 23:52:13.577356 kubelet[3455]: I0115 23:52:13.577174 3455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.2-n-e85017da3c" podStartSLOduration=1.577155332 podStartE2EDuration="1.577155332s" podCreationTimestamp="2026-01-15 23:52:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:52:13.565546484 +0000 UTC m=+1.167159449" watchObservedRunningTime="2026-01-15 23:52:13.577155332 +0000 UTC m=+1.178768265" Jan 15 23:52:13.591556 kubelet[3455]: I0115 23:52:13.591410 3455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-4459.2.2-n-e85017da3c" podStartSLOduration=1.591391175 podStartE2EDuration="1.591391175s" podCreationTimestamp="2026-01-15 23:52:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:52:13.578151775 +0000 UTC m=+1.179764796" watchObservedRunningTime="2026-01-15 23:52:13.591391175 +0000 UTC m=+1.193004108" Jan 15 23:52:14.302223 sudo[2373]: pam_unix(sudo:session): session closed for user root Jan 15 23:52:14.382865 sshd[2372]: Connection closed by 10.200.16.10 port 37816 Jan 15 23:52:14.383648 sshd-session[2369]: pam_unix(sshd:session): session closed for user core Jan 15 23:52:14.386948 systemd[1]: sshd@6-10.200.20.10:22-10.200.16.10:37816.service: Deactivated successfully. Jan 15 23:52:14.389303 systemd[1]: session-9.scope: Deactivated successfully. Jan 15 23:52:14.389540 systemd[1]: session-9.scope: Consumed 3.520s CPU time, 263M memory peak. Jan 15 23:52:14.391226 systemd-logind[1871]: Session 9 logged out. Waiting for processes to exit. Jan 15 23:52:14.393303 systemd-logind[1871]: Removed session 9. Jan 15 23:52:16.933846 kubelet[3455]: I0115 23:52:16.933812 3455 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 15 23:52:16.934739 containerd[1896]: time="2026-01-15T23:52:16.934707227Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 15 23:52:16.935289 kubelet[3455]: I0115 23:52:16.935114 3455 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 15 23:52:17.731267 systemd[1]: Created slice kubepods-besteffort-pod99680e3e_9616_462a_86a6_a6047354e309.slice - libcontainer container kubepods-besteffort-pod99680e3e_9616_462a_86a6_a6047354e309.slice. 
Jan 15 23:52:17.745936 systemd[1]: Created slice kubepods-burstable-poda2d0fea2_b100_4c78_b9a1_1f65f3e2e3f8.slice - libcontainer container kubepods-burstable-poda2d0fea2_b100_4c78_b9a1_1f65f3e2e3f8.slice. Jan 15 23:52:17.815494 kubelet[3455]: I0115 23:52:17.814489 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-xtables-lock\") pod \"cilium-zvd4b\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " pod="kube-system/cilium-zvd4b" Jan 15 23:52:17.815658 kubelet[3455]: I0115 23:52:17.815515 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-host-proc-sys-net\") pod \"cilium-zvd4b\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " pod="kube-system/cilium-zvd4b" Jan 15 23:52:17.815658 kubelet[3455]: I0115 23:52:17.815548 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-cilium-config-path\") pod \"cilium-zvd4b\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " pod="kube-system/cilium-zvd4b" Jan 15 23:52:17.815658 kubelet[3455]: I0115 23:52:17.815561 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99680e3e-9616-462a-86a6-a6047354e309-lib-modules\") pod \"kube-proxy-ls62b\" (UID: \"99680e3e-9616-462a-86a6-a6047354e309\") " pod="kube-system/kube-proxy-ls62b" Jan 15 23:52:17.815658 kubelet[3455]: I0115 23:52:17.815573 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-hostproc\") pod \"cilium-zvd4b\" 
(UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " pod="kube-system/cilium-zvd4b" Jan 15 23:52:17.815658 kubelet[3455]: I0115 23:52:17.815587 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-cilium-cgroup\") pod \"cilium-zvd4b\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " pod="kube-system/cilium-zvd4b" Jan 15 23:52:17.815658 kubelet[3455]: I0115 23:52:17.815597 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-host-proc-sys-kernel\") pod \"cilium-zvd4b\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " pod="kube-system/cilium-zvd4b" Jan 15 23:52:17.815764 kubelet[3455]: I0115 23:52:17.815608 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w9jc\" (UniqueName: \"kubernetes.io/projected/99680e3e-9616-462a-86a6-a6047354e309-kube-api-access-9w9jc\") pod \"kube-proxy-ls62b\" (UID: \"99680e3e-9616-462a-86a6-a6047354e309\") " pod="kube-system/kube-proxy-ls62b" Jan 15 23:52:17.815764 kubelet[3455]: I0115 23:52:17.815620 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-etc-cni-netd\") pod \"cilium-zvd4b\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " pod="kube-system/cilium-zvd4b" Jan 15 23:52:17.815764 kubelet[3455]: I0115 23:52:17.815628 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzlw9\" (UniqueName: \"kubernetes.io/projected/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-kube-api-access-jzlw9\") pod \"cilium-zvd4b\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " pod="kube-system/cilium-zvd4b" 
Jan 15 23:52:17.815764 kubelet[3455]: I0115 23:52:17.815637 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-clustermesh-secrets\") pod \"cilium-zvd4b\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " pod="kube-system/cilium-zvd4b" Jan 15 23:52:17.815764 kubelet[3455]: I0115 23:52:17.815650 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-hubble-tls\") pod \"cilium-zvd4b\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " pod="kube-system/cilium-zvd4b" Jan 15 23:52:17.815838 kubelet[3455]: I0115 23:52:17.815663 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/99680e3e-9616-462a-86a6-a6047354e309-kube-proxy\") pod \"kube-proxy-ls62b\" (UID: \"99680e3e-9616-462a-86a6-a6047354e309\") " pod="kube-system/kube-proxy-ls62b" Jan 15 23:52:17.815838 kubelet[3455]: I0115 23:52:17.815673 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-cni-path\") pod \"cilium-zvd4b\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " pod="kube-system/cilium-zvd4b" Jan 15 23:52:17.815838 kubelet[3455]: I0115 23:52:17.815684 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-cilium-run\") pod \"cilium-zvd4b\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " pod="kube-system/cilium-zvd4b" Jan 15 23:52:17.815838 kubelet[3455]: I0115 23:52:17.815693 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-bpf-maps\") pod \"cilium-zvd4b\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " pod="kube-system/cilium-zvd4b" Jan 15 23:52:17.815838 kubelet[3455]: I0115 23:52:17.815710 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-lib-modules\") pod \"cilium-zvd4b\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " pod="kube-system/cilium-zvd4b" Jan 15 23:52:17.815838 kubelet[3455]: I0115 23:52:17.815720 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99680e3e-9616-462a-86a6-a6047354e309-xtables-lock\") pod \"kube-proxy-ls62b\" (UID: \"99680e3e-9616-462a-86a6-a6047354e309\") " pod="kube-system/kube-proxy-ls62b" Jan 15 23:52:17.942911 systemd[1]: Created slice kubepods-besteffort-pod13e82da4_92e2_483f_b683_09f637581b86.slice - libcontainer container kubepods-besteffort-pod13e82da4_92e2_483f_b683_09f637581b86.slice. 
Jan 15 23:52:18.017347 kubelet[3455]: I0115 23:52:18.017163 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzpwc\" (UniqueName: \"kubernetes.io/projected/13e82da4-92e2-483f-b683-09f637581b86-kube-api-access-wzpwc\") pod \"cilium-operator-6c4d7847fc-bj92v\" (UID: \"13e82da4-92e2-483f-b683-09f637581b86\") " pod="kube-system/cilium-operator-6c4d7847fc-bj92v" Jan 15 23:52:18.017347 kubelet[3455]: I0115 23:52:18.017207 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13e82da4-92e2-483f-b683-09f637581b86-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bj92v\" (UID: \"13e82da4-92e2-483f-b683-09f637581b86\") " pod="kube-system/cilium-operator-6c4d7847fc-bj92v" Jan 15 23:52:18.039893 containerd[1896]: time="2026-01-15T23:52:18.039844526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ls62b,Uid:99680e3e-9616-462a-86a6-a6047354e309,Namespace:kube-system,Attempt:0,}" Jan 15 23:52:18.050965 containerd[1896]: time="2026-01-15T23:52:18.050925602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zvd4b,Uid:a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8,Namespace:kube-system,Attempt:0,}" Jan 15 23:52:18.100415 containerd[1896]: time="2026-01-15T23:52:18.100321382Z" level=info msg="connecting to shim dbb843f43c6e25bfea4e51d686311bca8661c04b471e2dad53c513042ad2dd3d" address="unix:///run/containerd/s/aaccf90d626de13c0f7769927c98d794f8c7027c56aba56da13f24585b99a6a6" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:52:18.116738 systemd[1]: Started cri-containerd-dbb843f43c6e25bfea4e51d686311bca8661c04b471e2dad53c513042ad2dd3d.scope - libcontainer container dbb843f43c6e25bfea4e51d686311bca8661c04b471e2dad53c513042ad2dd3d. 
Jan 15 23:52:18.137878 containerd[1896]: time="2026-01-15T23:52:18.137804017Z" level=info msg="connecting to shim c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d" address="unix:///run/containerd/s/3e8b6cad8328db0f7a0b7014ab3c42fe02a5755cc8f0ba70bb889dda064756d0" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:52:18.147768 containerd[1896]: time="2026-01-15T23:52:18.147711764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ls62b,Uid:99680e3e-9616-462a-86a6-a6047354e309,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbb843f43c6e25bfea4e51d686311bca8661c04b471e2dad53c513042ad2dd3d\"" Jan 15 23:52:18.151813 containerd[1896]: time="2026-01-15T23:52:18.151496329Z" level=info msg="CreateContainer within sandbox \"dbb843f43c6e25bfea4e51d686311bca8661c04b471e2dad53c513042ad2dd3d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 15 23:52:18.164591 systemd[1]: Started cri-containerd-c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d.scope - libcontainer container c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d. 
Jan 15 23:52:18.186791 containerd[1896]: time="2026-01-15T23:52:18.186749350Z" level=info msg="Container 0f9210cf3b5cc7a61d6c67d7cfd3485f5030d7d8e30bfff1f260fb586b201ca3: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:52:18.232993 containerd[1896]: time="2026-01-15T23:52:18.232947890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zvd4b,Uid:a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d\""
Jan 15 23:52:18.234872 containerd[1896]: time="2026-01-15T23:52:18.234837452Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 15 23:52:18.252960 containerd[1896]: time="2026-01-15T23:52:18.252863292Z" level=info msg="CreateContainer within sandbox \"dbb843f43c6e25bfea4e51d686311bca8661c04b471e2dad53c513042ad2dd3d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0f9210cf3b5cc7a61d6c67d7cfd3485f5030d7d8e30bfff1f260fb586b201ca3\""
Jan 15 23:52:18.253984 containerd[1896]: time="2026-01-15T23:52:18.253951235Z" level=info msg="StartContainer for \"0f9210cf3b5cc7a61d6c67d7cfd3485f5030d7d8e30bfff1f260fb586b201ca3\""
Jan 15 23:52:18.255258 containerd[1896]: time="2026-01-15T23:52:18.255231367Z" level=info msg="connecting to shim 0f9210cf3b5cc7a61d6c67d7cfd3485f5030d7d8e30bfff1f260fb586b201ca3" address="unix:///run/containerd/s/aaccf90d626de13c0f7769927c98d794f8c7027c56aba56da13f24585b99a6a6" protocol=ttrpc version=3
Jan 15 23:52:18.257703 containerd[1896]: time="2026-01-15T23:52:18.257666309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bj92v,Uid:13e82da4-92e2-483f-b683-09f637581b86,Namespace:kube-system,Attempt:0,}"
Jan 15 23:52:18.271650 systemd[1]: Started cri-containerd-0f9210cf3b5cc7a61d6c67d7cfd3485f5030d7d8e30bfff1f260fb586b201ca3.scope - libcontainer container 0f9210cf3b5cc7a61d6c67d7cfd3485f5030d7d8e30bfff1f260fb586b201ca3.
Jan 15 23:52:18.314234 containerd[1896]: time="2026-01-15T23:52:18.314193436Z" level=info msg="connecting to shim b723edac28058e0c6fca2c14e190b181b833f72123e65d6ea8c7fd91f6b32c82" address="unix:///run/containerd/s/3048fbddcb9e0978b5574249e98666ff7da2cd54323d71f752157a5c9219ce86" namespace=k8s.io protocol=ttrpc version=3
Jan 15 23:52:18.336104 containerd[1896]: time="2026-01-15T23:52:18.335976480Z" level=info msg="StartContainer for \"0f9210cf3b5cc7a61d6c67d7cfd3485f5030d7d8e30bfff1f260fb586b201ca3\" returns successfully"
Jan 15 23:52:18.340569 systemd[1]: Started cri-containerd-b723edac28058e0c6fca2c14e190b181b833f72123e65d6ea8c7fd91f6b32c82.scope - libcontainer container b723edac28058e0c6fca2c14e190b181b833f72123e65d6ea8c7fd91f6b32c82.
Jan 15 23:52:18.377353 containerd[1896]: time="2026-01-15T23:52:18.377238007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bj92v,Uid:13e82da4-92e2-483f-b683-09f637581b86,Namespace:kube-system,Attempt:0,} returns sandbox id \"b723edac28058e0c6fca2c14e190b181b833f72123e65d6ea8c7fd91f6b32c82\""
Jan 15 23:52:19.123562 kubelet[3455]: I0115 23:52:19.123395 3455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ls62b" podStartSLOduration=2.123376794 podStartE2EDuration="2.123376794s" podCreationTimestamp="2026-01-15 23:52:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:52:18.539118189 +0000 UTC m=+6.140731130" watchObservedRunningTime="2026-01-15 23:52:19.123376794 +0000 UTC m=+6.724989735"
Jan 15 23:52:28.508681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2387600172.mount: Deactivated successfully.
Jan 15 23:52:29.999003 containerd[1896]: time="2026-01-15T23:52:29.998932587Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:52:30.003964 containerd[1896]: time="2026-01-15T23:52:30.003921182Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jan 15 23:52:30.008203 containerd[1896]: time="2026-01-15T23:52:30.008168166Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:52:30.010056 containerd[1896]: time="2026-01-15T23:52:30.010014896Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 11.775148811s"
Jan 15 23:52:30.010099 containerd[1896]: time="2026-01-15T23:52:30.010063546Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jan 15 23:52:30.016744 containerd[1896]: time="2026-01-15T23:52:30.016252584Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 15 23:52:30.017450 containerd[1896]: time="2026-01-15T23:52:30.017378761Z" level=info msg="CreateContainer within sandbox \"c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 15 23:52:30.046433 containerd[1896]: time="2026-01-15T23:52:30.045235505Z" level=info msg="Container 392c10d02163e00934da9ae053398bc57256759377227968b0c2292bf683c7c2: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:52:30.046272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount752584503.mount: Deactivated successfully.
Jan 15 23:52:30.064219 containerd[1896]: time="2026-01-15T23:52:30.064171840Z" level=info msg="CreateContainer within sandbox \"c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"392c10d02163e00934da9ae053398bc57256759377227968b0c2292bf683c7c2\""
Jan 15 23:52:30.064947 containerd[1896]: time="2026-01-15T23:52:30.064909763Z" level=info msg="StartContainer for \"392c10d02163e00934da9ae053398bc57256759377227968b0c2292bf683c7c2\""
Jan 15 23:52:30.067035 containerd[1896]: time="2026-01-15T23:52:30.066991462Z" level=info msg="connecting to shim 392c10d02163e00934da9ae053398bc57256759377227968b0c2292bf683c7c2" address="unix:///run/containerd/s/3e8b6cad8328db0f7a0b7014ab3c42fe02a5755cc8f0ba70bb889dda064756d0" protocol=ttrpc version=3
Jan 15 23:52:30.083568 systemd[1]: Started cri-containerd-392c10d02163e00934da9ae053398bc57256759377227968b0c2292bf683c7c2.scope - libcontainer container 392c10d02163e00934da9ae053398bc57256759377227968b0c2292bf683c7c2.
Jan 15 23:52:30.118225 containerd[1896]: time="2026-01-15T23:52:30.118176275Z" level=info msg="StartContainer for \"392c10d02163e00934da9ae053398bc57256759377227968b0c2292bf683c7c2\" returns successfully"
Jan 15 23:52:30.119147 systemd[1]: cri-containerd-392c10d02163e00934da9ae053398bc57256759377227968b0c2292bf683c7c2.scope: Deactivated successfully.
Jan 15 23:52:30.122429 containerd[1896]: time="2026-01-15T23:52:30.121840326Z" level=info msg="received container exit event container_id:\"392c10d02163e00934da9ae053398bc57256759377227968b0c2292bf683c7c2\" id:\"392c10d02163e00934da9ae053398bc57256759377227968b0c2292bf683c7c2\" pid:3870 exited_at:{seconds:1768521150 nanos:121343188}"
Jan 15 23:52:30.138749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-392c10d02163e00934da9ae053398bc57256759377227968b0c2292bf683c7c2-rootfs.mount: Deactivated successfully.
Jan 15 23:52:32.558979 containerd[1896]: time="2026-01-15T23:52:32.558933768Z" level=info msg="CreateContainer within sandbox \"c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 15 23:52:32.589363 containerd[1896]: time="2026-01-15T23:52:32.589318315Z" level=info msg="Container 9071f3fa356b57ee8800f3b9f8381c164a9749a5ed6c8dd3e23423a2e4f8ffea: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:52:32.605461 containerd[1896]: time="2026-01-15T23:52:32.605389396Z" level=info msg="CreateContainer within sandbox \"c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9071f3fa356b57ee8800f3b9f8381c164a9749a5ed6c8dd3e23423a2e4f8ffea\""
Jan 15 23:52:32.606579 containerd[1896]: time="2026-01-15T23:52:32.606546493Z" level=info msg="StartContainer for \"9071f3fa356b57ee8800f3b9f8381c164a9749a5ed6c8dd3e23423a2e4f8ffea\""
Jan 15 23:52:32.608082 containerd[1896]: time="2026-01-15T23:52:32.608032075Z" level=info msg="connecting to shim 9071f3fa356b57ee8800f3b9f8381c164a9749a5ed6c8dd3e23423a2e4f8ffea" address="unix:///run/containerd/s/3e8b6cad8328db0f7a0b7014ab3c42fe02a5755cc8f0ba70bb889dda064756d0" protocol=ttrpc version=3
Jan 15 23:52:32.627621 systemd[1]: Started cri-containerd-9071f3fa356b57ee8800f3b9f8381c164a9749a5ed6c8dd3e23423a2e4f8ffea.scope - libcontainer container 9071f3fa356b57ee8800f3b9f8381c164a9749a5ed6c8dd3e23423a2e4f8ffea.
Jan 15 23:52:32.656172 containerd[1896]: time="2026-01-15T23:52:32.656132217Z" level=info msg="StartContainer for \"9071f3fa356b57ee8800f3b9f8381c164a9749a5ed6c8dd3e23423a2e4f8ffea\" returns successfully"
Jan 15 23:52:32.664335 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 15 23:52:32.664524 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 15 23:52:32.664687 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 15 23:52:32.667691 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 15 23:52:32.669854 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 15 23:52:32.671328 systemd[1]: cri-containerd-9071f3fa356b57ee8800f3b9f8381c164a9749a5ed6c8dd3e23423a2e4f8ffea.scope: Deactivated successfully.
Jan 15 23:52:32.675165 containerd[1896]: time="2026-01-15T23:52:32.675097962Z" level=info msg="received container exit event container_id:\"9071f3fa356b57ee8800f3b9f8381c164a9749a5ed6c8dd3e23423a2e4f8ffea\" id:\"9071f3fa356b57ee8800f3b9f8381c164a9749a5ed6c8dd3e23423a2e4f8ffea\" pid:3915 exited_at:{seconds:1768521152 nanos:673415205}"
Jan 15 23:52:32.687576 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 15 23:52:33.562749 containerd[1896]: time="2026-01-15T23:52:33.562681828Z" level=info msg="CreateContainer within sandbox \"c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 15 23:52:33.590681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9071f3fa356b57ee8800f3b9f8381c164a9749a5ed6c8dd3e23423a2e4f8ffea-rootfs.mount: Deactivated successfully.
Jan 15 23:52:33.604985 containerd[1896]: time="2026-01-15T23:52:33.604944633Z" level=info msg="Container bae932c806f712ab42ffca491fb724a8f345400b2de8d188bd605f19e2d2f398: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:52:33.607260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount907093309.mount: Deactivated successfully.
Jan 15 23:52:33.631193 containerd[1896]: time="2026-01-15T23:52:33.631146853Z" level=info msg="CreateContainer within sandbox \"c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bae932c806f712ab42ffca491fb724a8f345400b2de8d188bd605f19e2d2f398\""
Jan 15 23:52:33.631893 containerd[1896]: time="2026-01-15T23:52:33.631870103Z" level=info msg="StartContainer for \"bae932c806f712ab42ffca491fb724a8f345400b2de8d188bd605f19e2d2f398\""
Jan 15 23:52:33.633290 containerd[1896]: time="2026-01-15T23:52:33.633251753Z" level=info msg="connecting to shim bae932c806f712ab42ffca491fb724a8f345400b2de8d188bd605f19e2d2f398" address="unix:///run/containerd/s/3e8b6cad8328db0f7a0b7014ab3c42fe02a5755cc8f0ba70bb889dda064756d0" protocol=ttrpc version=3
Jan 15 23:52:33.654580 systemd[1]: Started cri-containerd-bae932c806f712ab42ffca491fb724a8f345400b2de8d188bd605f19e2d2f398.scope - libcontainer container bae932c806f712ab42ffca491fb724a8f345400b2de8d188bd605f19e2d2f398.
Jan 15 23:52:33.721299 systemd[1]: cri-containerd-bae932c806f712ab42ffca491fb724a8f345400b2de8d188bd605f19e2d2f398.scope: Deactivated successfully.
Jan 15 23:52:33.725875 containerd[1896]: time="2026-01-15T23:52:33.725823011Z" level=info msg="received container exit event container_id:\"bae932c806f712ab42ffca491fb724a8f345400b2de8d188bd605f19e2d2f398\" id:\"bae932c806f712ab42ffca491fb724a8f345400b2de8d188bd605f19e2d2f398\" pid:3960 exited_at:{seconds:1768521153 nanos:724107806}"
Jan 15 23:52:33.727823 containerd[1896]: time="2026-01-15T23:52:33.727799122Z" level=info msg="StartContainer for \"bae932c806f712ab42ffca491fb724a8f345400b2de8d188bd605f19e2d2f398\" returns successfully"
Jan 15 23:52:33.745188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bae932c806f712ab42ffca491fb724a8f345400b2de8d188bd605f19e2d2f398-rootfs.mount: Deactivated successfully.
Jan 15 23:52:34.567515 containerd[1896]: time="2026-01-15T23:52:34.567447508Z" level=info msg="CreateContainer within sandbox \"c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 15 23:52:34.602450 containerd[1896]: time="2026-01-15T23:52:34.602391850Z" level=info msg="Container 70e31be4bcbcd3a25573348a16123875efac8760b3b1d5b59be9895bac6c343c: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:52:34.621153 containerd[1896]: time="2026-01-15T23:52:34.621102754Z" level=info msg="CreateContainer within sandbox \"c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"70e31be4bcbcd3a25573348a16123875efac8760b3b1d5b59be9895bac6c343c\""
Jan 15 23:52:34.623026 containerd[1896]: time="2026-01-15T23:52:34.622993597Z" level=info msg="StartContainer for \"70e31be4bcbcd3a25573348a16123875efac8760b3b1d5b59be9895bac6c343c\""
Jan 15 23:52:34.623892 containerd[1896]: time="2026-01-15T23:52:34.623862709Z" level=info msg="connecting to shim 70e31be4bcbcd3a25573348a16123875efac8760b3b1d5b59be9895bac6c343c" address="unix:///run/containerd/s/3e8b6cad8328db0f7a0b7014ab3c42fe02a5755cc8f0ba70bb889dda064756d0" protocol=ttrpc version=3
Jan 15 23:52:34.649679 systemd[1]: Started cri-containerd-70e31be4bcbcd3a25573348a16123875efac8760b3b1d5b59be9895bac6c343c.scope - libcontainer container 70e31be4bcbcd3a25573348a16123875efac8760b3b1d5b59be9895bac6c343c.
Jan 15 23:52:34.672149 systemd[1]: cri-containerd-70e31be4bcbcd3a25573348a16123875efac8760b3b1d5b59be9895bac6c343c.scope: Deactivated successfully.
Jan 15 23:52:34.681610 containerd[1896]: time="2026-01-15T23:52:34.681570468Z" level=info msg="received container exit event container_id:\"70e31be4bcbcd3a25573348a16123875efac8760b3b1d5b59be9895bac6c343c\" id:\"70e31be4bcbcd3a25573348a16123875efac8760b3b1d5b59be9895bac6c343c\" pid:4008 exited_at:{seconds:1768521154 nanos:673286451}"
Jan 15 23:52:34.682312 containerd[1896]: time="2026-01-15T23:52:34.682280797Z" level=info msg="StartContainer for \"70e31be4bcbcd3a25573348a16123875efac8760b3b1d5b59be9895bac6c343c\" returns successfully"
Jan 15 23:52:34.699010 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70e31be4bcbcd3a25573348a16123875efac8760b3b1d5b59be9895bac6c343c-rootfs.mount: Deactivated successfully.
Jan 15 23:52:35.572104 containerd[1896]: time="2026-01-15T23:52:35.572056887Z" level=info msg="CreateContainer within sandbox \"c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 15 23:52:35.610907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1383260043.mount: Deactivated successfully.
Jan 15 23:52:35.611747 containerd[1896]: time="2026-01-15T23:52:35.611704702Z" level=info msg="Container 21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:52:35.636305 containerd[1896]: time="2026-01-15T23:52:35.636159131Z" level=info msg="CreateContainer within sandbox \"c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6\""
Jan 15 23:52:35.637142 containerd[1896]: time="2026-01-15T23:52:35.637112470Z" level=info msg="StartContainer for \"21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6\""
Jan 15 23:52:35.638160 containerd[1896]: time="2026-01-15T23:52:35.638126394Z" level=info msg="connecting to shim 21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6" address="unix:///run/containerd/s/3e8b6cad8328db0f7a0b7014ab3c42fe02a5755cc8f0ba70bb889dda064756d0" protocol=ttrpc version=3
Jan 15 23:52:35.655589 systemd[1]: Started cri-containerd-21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6.scope - libcontainer container 21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6.
Jan 15 23:52:35.695852 containerd[1896]: time="2026-01-15T23:52:35.695809280Z" level=info msg="StartContainer for \"21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6\" returns successfully"
Jan 15 23:52:35.771475 kubelet[3455]: I0115 23:52:35.771215 3455 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 15 23:52:35.819195 systemd[1]: Created slice kubepods-burstable-pode77fc8e4_c20b_4e46_9ad8_2676d67c5746.slice - libcontainer container kubepods-burstable-pode77fc8e4_c20b_4e46_9ad8_2676d67c5746.slice.
Jan 15 23:52:35.826027 systemd[1]: Created slice kubepods-burstable-pod9f9bffc3_6345_4ad2_ae65_8ca129608d87.slice - libcontainer container kubepods-burstable-pod9f9bffc3_6345_4ad2_ae65_8ca129608d87.slice.
Jan 15 23:52:35.931235 kubelet[3455]: I0115 23:52:35.931166 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f9bffc3-6345-4ad2-ae65-8ca129608d87-config-volume\") pod \"coredns-668d6bf9bc-4x82q\" (UID: \"9f9bffc3-6345-4ad2-ae65-8ca129608d87\") " pod="kube-system/coredns-668d6bf9bc-4x82q"
Jan 15 23:52:35.931235 kubelet[3455]: I0115 23:52:35.931211 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlrrp\" (UniqueName: \"kubernetes.io/projected/e77fc8e4-c20b-4e46-9ad8-2676d67c5746-kube-api-access-mlrrp\") pod \"coredns-668d6bf9bc-dgd8v\" (UID: \"e77fc8e4-c20b-4e46-9ad8-2676d67c5746\") " pod="kube-system/coredns-668d6bf9bc-dgd8v"
Jan 15 23:52:35.931235 kubelet[3455]: I0115 23:52:35.931225 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk757\" (UniqueName: \"kubernetes.io/projected/9f9bffc3-6345-4ad2-ae65-8ca129608d87-kube-api-access-tk757\") pod \"coredns-668d6bf9bc-4x82q\" (UID: \"9f9bffc3-6345-4ad2-ae65-8ca129608d87\") " pod="kube-system/coredns-668d6bf9bc-4x82q"
Jan 15 23:52:35.931235 kubelet[3455]: I0115 23:52:35.931239 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e77fc8e4-c20b-4e46-9ad8-2676d67c5746-config-volume\") pod \"coredns-668d6bf9bc-dgd8v\" (UID: \"e77fc8e4-c20b-4e46-9ad8-2676d67c5746\") " pod="kube-system/coredns-668d6bf9bc-dgd8v"
Jan 15 23:52:36.125510 containerd[1896]: time="2026-01-15T23:52:36.125234606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dgd8v,Uid:e77fc8e4-c20b-4e46-9ad8-2676d67c5746,Namespace:kube-system,Attempt:0,}"
Jan 15 23:52:36.130126 containerd[1896]: time="2026-01-15T23:52:36.130074732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4x82q,Uid:9f9bffc3-6345-4ad2-ae65-8ca129608d87,Namespace:kube-system,Attempt:0,}"
Jan 15 23:52:36.455464 containerd[1896]: time="2026-01-15T23:52:36.455380632Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:52:36.464313 containerd[1896]: time="2026-01-15T23:52:36.464267903Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jan 15 23:52:36.468539 containerd[1896]: time="2026-01-15T23:52:36.468505575Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:52:36.471871 containerd[1896]: time="2026-01-15T23:52:36.471826262Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 6.455536701s"
Jan 15 23:52:36.472075 containerd[1896]: time="2026-01-15T23:52:36.471859239Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jan 15 23:52:36.475392 containerd[1896]: time="2026-01-15T23:52:36.475064514Z" level=info msg="CreateContainer within sandbox \"b723edac28058e0c6fca2c14e190b181b833f72123e65d6ea8c7fd91f6b32c82\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 15 23:52:36.502776 containerd[1896]: time="2026-01-15T23:52:36.502725411Z" level=info msg="Container 6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:52:36.519872 containerd[1896]: time="2026-01-15T23:52:36.519830065Z" level=info msg="CreateContainer within sandbox \"b723edac28058e0c6fca2c14e190b181b833f72123e65d6ea8c7fd91f6b32c82\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b\""
Jan 15 23:52:36.520753 containerd[1896]: time="2026-01-15T23:52:36.520724097Z" level=info msg="StartContainer for \"6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b\""
Jan 15 23:52:36.521451 containerd[1896]: time="2026-01-15T23:52:36.521378297Z" level=info msg="connecting to shim 6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b" address="unix:///run/containerd/s/3048fbddcb9e0978b5574249e98666ff7da2cd54323d71f752157a5c9219ce86" protocol=ttrpc version=3
Jan 15 23:52:36.538565 systemd[1]: Started cri-containerd-6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b.scope - libcontainer container 6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b.
Jan 15 23:52:36.566677 containerd[1896]: time="2026-01-15T23:52:36.566639105Z" level=info msg="StartContainer for \"6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b\" returns successfully"
Jan 15 23:52:36.594443 kubelet[3455]: I0115 23:52:36.593885 3455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zvd4b" podStartSLOduration=7.813434083 podStartE2EDuration="19.593866451s" podCreationTimestamp="2026-01-15 23:52:17 +0000 UTC" firstStartedPulling="2026-01-15 23:52:18.234191502 +0000 UTC m=+5.835804435" lastFinishedPulling="2026-01-15 23:52:30.01462387 +0000 UTC m=+17.616236803" observedRunningTime="2026-01-15 23:52:36.593835145 +0000 UTC m=+24.195448078" watchObservedRunningTime="2026-01-15 23:52:36.593866451 +0000 UTC m=+24.195479384"
Jan 15 23:52:40.569110 systemd-networkd[1493]: cilium_host: Link UP
Jan 15 23:52:40.569197 systemd-networkd[1493]: cilium_net: Link UP
Jan 15 23:52:40.569275 systemd-networkd[1493]: cilium_host: Gained carrier
Jan 15 23:52:40.569342 systemd-networkd[1493]: cilium_net: Gained carrier
Jan 15 23:52:40.723363 systemd-networkd[1493]: cilium_vxlan: Link UP
Jan 15 23:52:40.723370 systemd-networkd[1493]: cilium_vxlan: Gained carrier
Jan 15 23:52:40.906594 systemd-networkd[1493]: cilium_net: Gained IPv6LL
Jan 15 23:52:40.953462 kernel: NET: Registered PF_ALG protocol family
Jan 15 23:52:41.475592 systemd-networkd[1493]: cilium_host: Gained IPv6LL
Jan 15 23:52:41.485108 systemd-networkd[1493]: lxc_health: Link UP
Jan 15 23:52:41.493769 systemd-networkd[1493]: lxc_health: Gained carrier
Jan 15 23:52:41.699994 systemd-networkd[1493]: lxc58bdf99fb8b2: Link UP
Jan 15 23:52:41.701462 systemd-networkd[1493]: lxc61e678c630ff: Link UP
Jan 15 23:52:41.709669 kernel: eth0: renamed from tmp41e8d
Jan 15 23:52:41.709766 kernel: eth0: renamed from tmp830ed
Jan 15 23:52:41.719844 systemd-networkd[1493]: lxc61e678c630ff: Gained carrier
Jan 15 23:52:41.720222 systemd-networkd[1493]: lxc58bdf99fb8b2: Gained carrier
Jan 15 23:52:42.069524 kubelet[3455]: I0115 23:52:42.069356 3455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bj92v" podStartSLOduration=6.975717425 podStartE2EDuration="25.069340103s" podCreationTimestamp="2026-01-15 23:52:17 +0000 UTC" firstStartedPulling="2026-01-15 23:52:18.379000021 +0000 UTC m=+5.980612954" lastFinishedPulling="2026-01-15 23:52:36.472622699 +0000 UTC m=+24.074235632" observedRunningTime="2026-01-15 23:52:36.606341042 +0000 UTC m=+24.207953983" watchObservedRunningTime="2026-01-15 23:52:42.069340103 +0000 UTC m=+29.670953036"
Jan 15 23:52:42.306604 systemd-networkd[1493]: cilium_vxlan: Gained IPv6LL
Jan 15 23:52:43.074636 systemd-networkd[1493]: lxc_health: Gained IPv6LL
Jan 15 23:52:43.395597 systemd-networkd[1493]: lxc58bdf99fb8b2: Gained IPv6LL
Jan 15 23:52:43.587598 systemd-networkd[1493]: lxc61e678c630ff: Gained IPv6LL
Jan 15 23:52:44.344696 containerd[1896]: time="2026-01-15T23:52:44.344476911Z" level=info msg="connecting to shim 830ed5283ac31e4a5c5472c153f440ca56405c571a4746a016d5213b6d77633e" address="unix:///run/containerd/s/6962d886e6b913784ab1a6e221d2c1bb3f4a493955cb75496c5a8e25bcf8b464" namespace=k8s.io protocol=ttrpc version=3
Jan 15 23:52:44.358103 containerd[1896]: time="2026-01-15T23:52:44.357723457Z" level=info msg="connecting to shim 41e8d0da8d662294eb5bce0d6d657d1689d01970bb7b7d1cc00c13d13581b72a" address="unix:///run/containerd/s/acb78a09318b4f8d1c4dc73906e26bce5ed808e88b569f94bdd27817a52658c9" namespace=k8s.io protocol=ttrpc version=3
Jan 15 23:52:44.375615 systemd[1]: Started cri-containerd-830ed5283ac31e4a5c5472c153f440ca56405c571a4746a016d5213b6d77633e.scope - libcontainer container 830ed5283ac31e4a5c5472c153f440ca56405c571a4746a016d5213b6d77633e.
Jan 15 23:52:44.383599 systemd[1]: Started cri-containerd-41e8d0da8d662294eb5bce0d6d657d1689d01970bb7b7d1cc00c13d13581b72a.scope - libcontainer container 41e8d0da8d662294eb5bce0d6d657d1689d01970bb7b7d1cc00c13d13581b72a.
Jan 15 23:52:44.414550 containerd[1896]: time="2026-01-15T23:52:44.414504903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4x82q,Uid:9f9bffc3-6345-4ad2-ae65-8ca129608d87,Namespace:kube-system,Attempt:0,} returns sandbox id \"830ed5283ac31e4a5c5472c153f440ca56405c571a4746a016d5213b6d77633e\""
Jan 15 23:52:44.419171 containerd[1896]: time="2026-01-15T23:52:44.419117299Z" level=info msg="CreateContainer within sandbox \"830ed5283ac31e4a5c5472c153f440ca56405c571a4746a016d5213b6d77633e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 15 23:52:44.438008 containerd[1896]: time="2026-01-15T23:52:44.437923185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dgd8v,Uid:e77fc8e4-c20b-4e46-9ad8-2676d67c5746,Namespace:kube-system,Attempt:0,} returns sandbox id \"41e8d0da8d662294eb5bce0d6d657d1689d01970bb7b7d1cc00c13d13581b72a\""
Jan 15 23:52:44.443931 containerd[1896]: time="2026-01-15T23:52:44.443850322Z" level=info msg="CreateContainer within sandbox \"41e8d0da8d662294eb5bce0d6d657d1689d01970bb7b7d1cc00c13d13581b72a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 15 23:52:44.452001 containerd[1896]: time="2026-01-15T23:52:44.451957565Z" level=info msg="Container 6ab955a0aa4c3e62b2c381e1d93932f1e370dde17f20a336279ad46a71d9acf2: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:52:44.484291 containerd[1896]: time="2026-01-15T23:52:44.484239324Z" level=info msg="CreateContainer within sandbox \"830ed5283ac31e4a5c5472c153f440ca56405c571a4746a016d5213b6d77633e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6ab955a0aa4c3e62b2c381e1d93932f1e370dde17f20a336279ad46a71d9acf2\""
Jan 15 23:52:44.485230 containerd[1896]: time="2026-01-15T23:52:44.485209125Z" level=info msg="StartContainer for \"6ab955a0aa4c3e62b2c381e1d93932f1e370dde17f20a336279ad46a71d9acf2\""
Jan 15 23:52:44.490670 containerd[1896]: time="2026-01-15T23:52:44.490629045Z" level=info msg="connecting to shim 6ab955a0aa4c3e62b2c381e1d93932f1e370dde17f20a336279ad46a71d9acf2" address="unix:///run/containerd/s/6962d886e6b913784ab1a6e221d2c1bb3f4a493955cb75496c5a8e25bcf8b464" protocol=ttrpc version=3
Jan 15 23:52:44.494483 containerd[1896]: time="2026-01-15T23:52:44.494456767Z" level=info msg="Container 981ddb5b73c8c611c30e66ce9d27a3775632db488533c135970c0916324a5362: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:52:44.509588 systemd[1]: Started cri-containerd-6ab955a0aa4c3e62b2c381e1d93932f1e370dde17f20a336279ad46a71d9acf2.scope - libcontainer container 6ab955a0aa4c3e62b2c381e1d93932f1e370dde17f20a336279ad46a71d9acf2.
Jan 15 23:52:44.511393 containerd[1896]: time="2026-01-15T23:52:44.511304922Z" level=info msg="CreateContainer within sandbox \"41e8d0da8d662294eb5bce0d6d657d1689d01970bb7b7d1cc00c13d13581b72a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"981ddb5b73c8c611c30e66ce9d27a3775632db488533c135970c0916324a5362\""
Jan 15 23:52:44.513470 containerd[1896]: time="2026-01-15T23:52:44.512588782Z" level=info msg="StartContainer for \"981ddb5b73c8c611c30e66ce9d27a3775632db488533c135970c0916324a5362\""
Jan 15 23:52:44.513470 containerd[1896]: time="2026-01-15T23:52:44.513195859Z" level=info msg="connecting to shim 981ddb5b73c8c611c30e66ce9d27a3775632db488533c135970c0916324a5362" address="unix:///run/containerd/s/acb78a09318b4f8d1c4dc73906e26bce5ed808e88b569f94bdd27817a52658c9" protocol=ttrpc version=3
Jan 15 23:52:44.533707 systemd[1]: Started cri-containerd-981ddb5b73c8c611c30e66ce9d27a3775632db488533c135970c0916324a5362.scope - libcontainer container 981ddb5b73c8c611c30e66ce9d27a3775632db488533c135970c0916324a5362.
Jan 15 23:52:44.551930 containerd[1896]: time="2026-01-15T23:52:44.551727774Z" level=info msg="StartContainer for \"6ab955a0aa4c3e62b2c381e1d93932f1e370dde17f20a336279ad46a71d9acf2\" returns successfully"
Jan 15 23:52:44.581206 containerd[1896]: time="2026-01-15T23:52:44.581155444Z" level=info msg="StartContainer for \"981ddb5b73c8c611c30e66ce9d27a3775632db488533c135970c0916324a5362\" returns successfully"
Jan 15 23:52:44.616842 kubelet[3455]: I0115 23:52:44.616699 3455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dgd8v" podStartSLOduration=27.616683217 podStartE2EDuration="27.616683217s" podCreationTimestamp="2026-01-15 23:52:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:52:44.616368838 +0000 UTC m=+32.217981771" watchObservedRunningTime="2026-01-15 23:52:44.616683217 +0000 UTC m=+32.218296150"
Jan 15 23:52:44.633563 kubelet[3455]: I0115 23:52:44.633489 3455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4x82q" podStartSLOduration=27.633470946 podStartE2EDuration="27.633470946s" podCreationTimestamp="2026-01-15 23:52:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:52:44.632685952 +0000 UTC m=+32.234298893" watchObservedRunningTime="2026-01-15 23:52:44.633470946 +0000 UTC m=+32.235083879"
Jan 15 23:52:45.336243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1165601377.mount: Deactivated successfully.
Jan 15 23:53:44.508644 systemd[1]: Started sshd@7-10.200.20.10:22-10.200.16.10:53158.service - OpenSSH per-connection server daemon (10.200.16.10:53158).
Jan 15 23:53:45.001834 sshd[4776]: Accepted publickey for core from 10.200.16.10 port 53158 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:53:45.003461 sshd-session[4776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:53:45.007334 systemd-logind[1871]: New session 10 of user core.
Jan 15 23:53:45.017606 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 15 23:53:45.445236 sshd[4779]: Connection closed by 10.200.16.10 port 53158
Jan 15 23:53:45.445854 sshd-session[4776]: pam_unix(sshd:session): session closed for user core
Jan 15 23:53:45.449978 systemd[1]: sshd@7-10.200.20.10:22-10.200.16.10:53158.service: Deactivated successfully.
Jan 15 23:53:45.452418 systemd[1]: session-10.scope: Deactivated successfully.
Jan 15 23:53:45.453522 systemd-logind[1871]: Session 10 logged out. Waiting for processes to exit.
Jan 15 23:53:45.455218 systemd-logind[1871]: Removed session 10.
Jan 15 23:53:50.536309 systemd[1]: Started sshd@8-10.200.20.10:22-10.200.16.10:47972.service - OpenSSH per-connection server daemon (10.200.16.10:47972).
Jan 15 23:53:51.025160 sshd[4795]: Accepted publickey for core from 10.200.16.10 port 47972 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:53:51.026290 sshd-session[4795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:53:51.030044 systemd-logind[1871]: New session 11 of user core.
Jan 15 23:53:51.037567 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 15 23:53:51.432964 sshd[4798]: Connection closed by 10.200.16.10 port 47972
Jan 15 23:53:51.432865 sshd-session[4795]: pam_unix(sshd:session): session closed for user core
Jan 15 23:53:51.436614 systemd[1]: sshd@8-10.200.20.10:22-10.200.16.10:47972.service: Deactivated successfully.
Jan 15 23:53:51.438414 systemd[1]: session-11.scope: Deactivated successfully.
Jan 15 23:53:51.440161 systemd-logind[1871]: Session 11 logged out. Waiting for processes to exit.
Jan 15 23:53:51.441497 systemd-logind[1871]: Removed session 11.
Jan 15 23:53:56.504652 systemd[1]: Started sshd@9-10.200.20.10:22-10.200.16.10:47986.service - OpenSSH per-connection server daemon (10.200.16.10:47986).
Jan 15 23:53:56.954486 sshd[4811]: Accepted publickey for core from 10.200.16.10 port 47986 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:53:56.955602 sshd-session[4811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:53:56.959158 systemd-logind[1871]: New session 12 of user core.
Jan 15 23:53:56.967569 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 15 23:53:57.319382 sshd[4814]: Connection closed by 10.200.16.10 port 47986
Jan 15 23:53:57.320065 sshd-session[4811]: pam_unix(sshd:session): session closed for user core
Jan 15 23:53:57.323758 systemd[1]: sshd@9-10.200.20.10:22-10.200.16.10:47986.service: Deactivated successfully.
Jan 15 23:53:57.326142 systemd[1]: session-12.scope: Deactivated successfully.
Jan 15 23:53:57.326976 systemd-logind[1871]: Session 12 logged out. Waiting for processes to exit.
Jan 15 23:53:57.328683 systemd-logind[1871]: Removed session 12.
Jan 15 23:54:02.414862 systemd[1]: Started sshd@10-10.200.20.10:22-10.200.16.10:54036.service - OpenSSH per-connection server daemon (10.200.16.10:54036).
Jan 15 23:54:02.869483 sshd[4826]: Accepted publickey for core from 10.200.16.10 port 54036 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8
Jan 15 23:54:02.870620 sshd-session[4826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 23:54:02.874414 systemd-logind[1871]: New session 13 of user core.
Jan 15 23:54:02.880581 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 15 23:54:03.249204 sshd[4829]: Connection closed by 10.200.16.10 port 54036 Jan 15 23:54:03.248912 sshd-session[4826]: pam_unix(sshd:session): session closed for user core Jan 15 23:54:03.252641 systemd-logind[1871]: Session 13 logged out. Waiting for processes to exit. Jan 15 23:54:03.253184 systemd[1]: sshd@10-10.200.20.10:22-10.200.16.10:54036.service: Deactivated successfully. Jan 15 23:54:03.256698 systemd[1]: session-13.scope: Deactivated successfully. Jan 15 23:54:03.258767 systemd-logind[1871]: Removed session 13. Jan 15 23:54:03.322713 systemd[1]: Started sshd@11-10.200.20.10:22-10.200.16.10:54042.service - OpenSSH per-connection server daemon (10.200.16.10:54042). Jan 15 23:54:03.738491 sshd[4842]: Accepted publickey for core from 10.200.16.10 port 54042 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:54:03.740261 sshd-session[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:54:03.743795 systemd-logind[1871]: New session 14 of user core. Jan 15 23:54:03.749581 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 15 23:54:04.119986 sshd[4845]: Connection closed by 10.200.16.10 port 54042 Jan 15 23:54:04.119632 sshd-session[4842]: pam_unix(sshd:session): session closed for user core Jan 15 23:54:04.127871 systemd-logind[1871]: Session 14 logged out. Waiting for processes to exit. Jan 15 23:54:04.128577 systemd[1]: sshd@11-10.200.20.10:22-10.200.16.10:54042.service: Deactivated successfully. Jan 15 23:54:04.131945 systemd[1]: session-14.scope: Deactivated successfully. Jan 15 23:54:04.133709 systemd-logind[1871]: Removed session 14. Jan 15 23:54:04.216365 systemd[1]: Started sshd@12-10.200.20.10:22-10.200.16.10:54046.service - OpenSSH per-connection server daemon (10.200.16.10:54046). 
Jan 15 23:54:04.700715 sshd[4854]: Accepted publickey for core from 10.200.16.10 port 54046 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:54:04.701884 sshd-session[4854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:54:04.706400 systemd-logind[1871]: New session 15 of user core. Jan 15 23:54:04.713605 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 15 23:54:05.096788 sshd[4857]: Connection closed by 10.200.16.10 port 54046 Jan 15 23:54:05.096621 sshd-session[4854]: pam_unix(sshd:session): session closed for user core Jan 15 23:54:05.101116 systemd[1]: sshd@12-10.200.20.10:22-10.200.16.10:54046.service: Deactivated successfully. Jan 15 23:54:05.103082 systemd[1]: session-15.scope: Deactivated successfully. Jan 15 23:54:05.105410 systemd-logind[1871]: Session 15 logged out. Waiting for processes to exit. Jan 15 23:54:05.107075 systemd-logind[1871]: Removed session 15. Jan 15 23:54:10.160533 systemd[1]: Started sshd@13-10.200.20.10:22-10.200.16.10:35246.service - OpenSSH per-connection server daemon (10.200.16.10:35246). Jan 15 23:54:10.578266 sshd[4868]: Accepted publickey for core from 10.200.16.10 port 35246 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:54:10.579078 sshd-session[4868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:54:10.582894 systemd-logind[1871]: New session 16 of user core. Jan 15 23:54:10.588743 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 15 23:54:10.931070 sshd[4871]: Connection closed by 10.200.16.10 port 35246 Jan 15 23:54:10.931649 sshd-session[4868]: pam_unix(sshd:session): session closed for user core Jan 15 23:54:10.935470 systemd[1]: sshd@13-10.200.20.10:22-10.200.16.10:35246.service: Deactivated successfully. Jan 15 23:54:10.937036 systemd[1]: session-16.scope: Deactivated successfully. Jan 15 23:54:10.937820 systemd-logind[1871]: Session 16 logged out. 
Waiting for processes to exit. Jan 15 23:54:10.938852 systemd-logind[1871]: Removed session 16. Jan 15 23:54:11.009664 systemd[1]: Started sshd@14-10.200.20.10:22-10.200.16.10:35254.service - OpenSSH per-connection server daemon (10.200.16.10:35254). Jan 15 23:54:11.433533 sshd[4883]: Accepted publickey for core from 10.200.16.10 port 35254 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:54:11.434687 sshd-session[4883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:54:11.438707 systemd-logind[1871]: New session 17 of user core. Jan 15 23:54:11.441568 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 15 23:54:11.828544 sshd[4886]: Connection closed by 10.200.16.10 port 35254 Jan 15 23:54:11.829626 sshd-session[4883]: pam_unix(sshd:session): session closed for user core Jan 15 23:54:11.832731 systemd[1]: sshd@14-10.200.20.10:22-10.200.16.10:35254.service: Deactivated successfully. Jan 15 23:54:11.834717 systemd[1]: session-17.scope: Deactivated successfully. Jan 15 23:54:11.835388 systemd-logind[1871]: Session 17 logged out. Waiting for processes to exit. Jan 15 23:54:11.836477 systemd-logind[1871]: Removed session 17. Jan 15 23:54:11.904375 systemd[1]: Started sshd@15-10.200.20.10:22-10.200.16.10:35270.service - OpenSSH per-connection server daemon (10.200.16.10:35270). Jan 15 23:54:12.318676 sshd[4896]: Accepted publickey for core from 10.200.16.10 port 35270 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:54:12.319802 sshd-session[4896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:54:12.323405 systemd-logind[1871]: New session 18 of user core. Jan 15 23:54:12.329699 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 15 23:54:13.068019 sshd[4899]: Connection closed by 10.200.16.10 port 35270 Jan 15 23:54:13.068399 sshd-session[4896]: pam_unix(sshd:session): session closed for user core Jan 15 23:54:13.072518 systemd[1]: sshd@15-10.200.20.10:22-10.200.16.10:35270.service: Deactivated successfully. Jan 15 23:54:13.074317 systemd[1]: session-18.scope: Deactivated successfully. Jan 15 23:54:13.075005 systemd-logind[1871]: Session 18 logged out. Waiting for processes to exit. Jan 15 23:54:13.076303 systemd-logind[1871]: Removed session 18. Jan 15 23:54:13.147435 systemd[1]: Started sshd@16-10.200.20.10:22-10.200.16.10:35276.service - OpenSSH per-connection server daemon (10.200.16.10:35276). Jan 15 23:54:13.561381 sshd[4918]: Accepted publickey for core from 10.200.16.10 port 35276 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:54:13.562510 sshd-session[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:54:13.566763 systemd-logind[1871]: New session 19 of user core. Jan 15 23:54:13.572578 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 15 23:54:13.992169 sshd[4921]: Connection closed by 10.200.16.10 port 35276 Jan 15 23:54:13.992568 sshd-session[4918]: pam_unix(sshd:session): session closed for user core Jan 15 23:54:13.997820 systemd[1]: sshd@16-10.200.20.10:22-10.200.16.10:35276.service: Deactivated successfully. Jan 15 23:54:13.999742 systemd[1]: session-19.scope: Deactivated successfully. Jan 15 23:54:14.000471 systemd-logind[1871]: Session 19 logged out. Waiting for processes to exit. Jan 15 23:54:14.001702 systemd-logind[1871]: Removed session 19. Jan 15 23:54:14.086417 systemd[1]: Started sshd@17-10.200.20.10:22-10.200.16.10:35282.service - OpenSSH per-connection server daemon (10.200.16.10:35282). 
Jan 15 23:54:14.583039 sshd[4931]: Accepted publickey for core from 10.200.16.10 port 35282 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:54:14.583863 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:54:14.587864 systemd-logind[1871]: New session 20 of user core. Jan 15 23:54:14.596591 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 15 23:54:14.976167 sshd[4934]: Connection closed by 10.200.16.10 port 35282 Jan 15 23:54:14.975692 sshd-session[4931]: pam_unix(sshd:session): session closed for user core Jan 15 23:54:14.978885 systemd[1]: sshd@17-10.200.20.10:22-10.200.16.10:35282.service: Deactivated successfully. Jan 15 23:54:14.982004 systemd[1]: session-20.scope: Deactivated successfully. Jan 15 23:54:14.982916 systemd-logind[1871]: Session 20 logged out. Waiting for processes to exit. Jan 15 23:54:14.984639 systemd-logind[1871]: Removed session 20. Jan 15 23:54:15.627218 update_engine[1875]: I20260115 23:54:15.627153 1875 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 15 23:54:15.627218 update_engine[1875]: I20260115 23:54:15.627207 1875 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 15 23:54:15.630480 update_engine[1875]: I20260115 23:54:15.627363 1875 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 15 23:54:15.630869 update_engine[1875]: I20260115 23:54:15.630818 1875 omaha_request_params.cc:62] Current group set to stable Jan 15 23:54:15.631024 update_engine[1875]: I20260115 23:54:15.631008 1875 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 15 23:54:15.632364 update_engine[1875]: I20260115 23:54:15.631197 1875 update_attempter.cc:643] Scheduling an action processor start. 
Jan 15 23:54:15.632364 update_engine[1875]: I20260115 23:54:15.631229 1875 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 15 23:54:15.632364 update_engine[1875]: I20260115 23:54:15.631258 1875 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 15 23:54:15.632364 update_engine[1875]: I20260115 23:54:15.631311 1875 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 15 23:54:15.632364 update_engine[1875]: I20260115 23:54:15.631316 1875 omaha_request_action.cc:272] Request: Jan 15 23:54:15.632364 update_engine[1875]: Jan 15 23:54:15.632364 update_engine[1875]: Jan 15 23:54:15.632364 update_engine[1875]: Jan 15 23:54:15.632364 update_engine[1875]: Jan 15 23:54:15.632364 update_engine[1875]: Jan 15 23:54:15.632364 update_engine[1875]: Jan 15 23:54:15.632364 update_engine[1875]: Jan 15 23:54:15.632364 update_engine[1875]: Jan 15 23:54:15.632364 update_engine[1875]: I20260115 23:54:15.631320 1875 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 23:54:15.632364 update_engine[1875]: I20260115 23:54:15.631978 1875 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 23:54:15.632636 locksmithd[1971]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 15 23:54:15.633170 update_engine[1875]: I20260115 23:54:15.633140 1875 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 15 23:54:15.703314 update_engine[1875]: E20260115 23:54:15.703255 1875 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 15 23:54:15.703597 update_engine[1875]: I20260115 23:54:15.703573 1875 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 15 23:54:20.064590 systemd[1]: Started sshd@18-10.200.20.10:22-10.200.16.10:44416.service - OpenSSH per-connection server daemon (10.200.16.10:44416). 
Jan 15 23:54:20.555721 sshd[4950]: Accepted publickey for core from 10.200.16.10 port 44416 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:54:20.556982 sshd-session[4950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:54:20.560681 systemd-logind[1871]: New session 21 of user core. Jan 15 23:54:20.569795 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 15 23:54:20.946570 sshd[4953]: Connection closed by 10.200.16.10 port 44416 Jan 15 23:54:20.947146 sshd-session[4950]: pam_unix(sshd:session): session closed for user core Jan 15 23:54:20.950751 systemd[1]: sshd@18-10.200.20.10:22-10.200.16.10:44416.service: Deactivated successfully. Jan 15 23:54:20.954088 systemd[1]: session-21.scope: Deactivated successfully. Jan 15 23:54:20.955327 systemd-logind[1871]: Session 21 logged out. Waiting for processes to exit. Jan 15 23:54:20.956547 systemd-logind[1871]: Removed session 21. Jan 15 23:54:25.628309 update_engine[1875]: I20260115 23:54:25.628233 1875 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 23:54:25.628672 update_engine[1875]: I20260115 23:54:25.628333 1875 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 23:54:25.628733 update_engine[1875]: I20260115 23:54:25.628700 1875 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 15 23:54:25.661000 update_engine[1875]: E20260115 23:54:25.660938 1875 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 15 23:54:25.661149 update_engine[1875]: I20260115 23:54:25.661024 1875 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 15 23:54:26.016509 systemd[1]: Started sshd@19-10.200.20.10:22-10.200.16.10:44432.service - OpenSSH per-connection server daemon (10.200.16.10:44432). 
Jan 15 23:54:26.432939 sshd[4965]: Accepted publickey for core from 10.200.16.10 port 44432 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:54:26.434123 sshd-session[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:54:26.437738 systemd-logind[1871]: New session 22 of user core. Jan 15 23:54:26.447581 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 15 23:54:26.793473 sshd[4968]: Connection closed by 10.200.16.10 port 44432 Jan 15 23:54:26.794127 sshd-session[4965]: pam_unix(sshd:session): session closed for user core Jan 15 23:54:26.797276 systemd[1]: sshd@19-10.200.20.10:22-10.200.16.10:44432.service: Deactivated successfully. Jan 15 23:54:26.799081 systemd[1]: session-22.scope: Deactivated successfully. Jan 15 23:54:26.801330 systemd-logind[1871]: Session 22 logged out. Waiting for processes to exit. Jan 15 23:54:26.802759 systemd-logind[1871]: Removed session 22. Jan 15 23:54:31.899403 systemd[1]: Started sshd@20-10.200.20.10:22-10.200.16.10:34976.service - OpenSSH per-connection server daemon (10.200.16.10:34976). Jan 15 23:54:32.390126 sshd[4979]: Accepted publickey for core from 10.200.16.10 port 34976 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:54:32.391360 sshd-session[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:54:32.395018 systemd-logind[1871]: New session 23 of user core. Jan 15 23:54:32.402565 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 15 23:54:32.792157 sshd[4982]: Connection closed by 10.200.16.10 port 34976 Jan 15 23:54:32.792731 sshd-session[4979]: pam_unix(sshd:session): session closed for user core Jan 15 23:54:32.796802 systemd[1]: sshd@20-10.200.20.10:22-10.200.16.10:34976.service: Deactivated successfully. Jan 15 23:54:32.799659 systemd[1]: session-23.scope: Deactivated successfully. Jan 15 23:54:32.800965 systemd-logind[1871]: Session 23 logged out. 
Waiting for processes to exit. Jan 15 23:54:32.802565 systemd-logind[1871]: Removed session 23. Jan 15 23:54:32.856447 systemd[1]: Started sshd@21-10.200.20.10:22-10.200.16.10:34990.service - OpenSSH per-connection server daemon (10.200.16.10:34990). Jan 15 23:54:33.270659 sshd[4993]: Accepted publickey for core from 10.200.16.10 port 34990 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:54:33.271732 sshd-session[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:54:33.275255 systemd-logind[1871]: New session 24 of user core. Jan 15 23:54:33.285560 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 15 23:54:35.047669 containerd[1896]: time="2026-01-15T23:54:35.047621659Z" level=info msg="StopContainer for \"6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b\" with timeout 30 (s)" Jan 15 23:54:35.048369 containerd[1896]: time="2026-01-15T23:54:35.048010254Z" level=info msg="Stop container \"6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b\" with signal terminated" Jan 15 23:54:35.056113 containerd[1896]: time="2026-01-15T23:54:35.055977431Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 15 23:54:35.061326 systemd[1]: cri-containerd-6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b.scope: Deactivated successfully. 
Jan 15 23:54:35.062162 containerd[1896]: time="2026-01-15T23:54:35.061808345Z" level=info msg="received container exit event container_id:\"6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b\" id:\"6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b\" pid:4195 exited_at:{seconds:1768521275 nanos:61106664}" Jan 15 23:54:35.064806 containerd[1896]: time="2026-01-15T23:54:35.064786537Z" level=info msg="StopContainer for \"21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6\" with timeout 2 (s)" Jan 15 23:54:35.065328 containerd[1896]: time="2026-01-15T23:54:35.065305282Z" level=info msg="Stop container \"21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6\" with signal terminated" Jan 15 23:54:35.074363 systemd-networkd[1493]: lxc_health: Link DOWN Jan 15 23:54:35.074368 systemd-networkd[1493]: lxc_health: Lost carrier Jan 15 23:54:35.089212 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b-rootfs.mount: Deactivated successfully. Jan 15 23:54:35.091131 systemd[1]: cri-containerd-21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6.scope: Deactivated successfully. Jan 15 23:54:35.091695 systemd[1]: cri-containerd-21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6.scope: Consumed 4.510s CPU time, 125.8M memory peak, 136K read from disk, 12.9M written to disk. Jan 15 23:54:35.094415 containerd[1896]: time="2026-01-15T23:54:35.094377425Z" level=info msg="received container exit event container_id:\"21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6\" id:\"21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6\" pid:4044 exited_at:{seconds:1768521275 nanos:94181167}" Jan 15 23:54:35.111304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6-rootfs.mount: Deactivated successfully. 
Jan 15 23:54:35.198972 containerd[1896]: time="2026-01-15T23:54:35.198820116Z" level=info msg="StopContainer for \"21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6\" returns successfully" Jan 15 23:54:35.199588 containerd[1896]: time="2026-01-15T23:54:35.199519710Z" level=info msg="StopPodSandbox for \"c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d\"" Jan 15 23:54:35.199588 containerd[1896]: time="2026-01-15T23:54:35.199570184Z" level=info msg="Container to stop \"9071f3fa356b57ee8800f3b9f8381c164a9749a5ed6c8dd3e23423a2e4f8ffea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 15 23:54:35.199752 containerd[1896]: time="2026-01-15T23:54:35.199578473Z" level=info msg="Container to stop \"392c10d02163e00934da9ae053398bc57256759377227968b0c2292bf683c7c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 15 23:54:35.199752 containerd[1896]: time="2026-01-15T23:54:35.199687222Z" level=info msg="Container to stop \"bae932c806f712ab42ffca491fb724a8f345400b2de8d188bd605f19e2d2f398\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 15 23:54:35.199752 containerd[1896]: time="2026-01-15T23:54:35.199695822Z" level=info msg="Container to stop \"70e31be4bcbcd3a25573348a16123875efac8760b3b1d5b59be9895bac6c343c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 15 23:54:35.199752 containerd[1896]: time="2026-01-15T23:54:35.199702775Z" level=info msg="Container to stop \"21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 15 23:54:35.205465 systemd[1]: cri-containerd-c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d.scope: Deactivated successfully. 
Jan 15 23:54:35.206805 containerd[1896]: time="2026-01-15T23:54:35.206751260Z" level=info msg="StopContainer for \"6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b\" returns successfully" Jan 15 23:54:35.207114 containerd[1896]: time="2026-01-15T23:54:35.207055490Z" level=info msg="received sandbox exit event container_id:\"c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d\" id:\"c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d\" exit_status:137 exited_at:{seconds:1768521275 nanos:206585412}" monitor_name=podsandbox Jan 15 23:54:35.207693 containerd[1896]: time="2026-01-15T23:54:35.207600837Z" level=info msg="StopPodSandbox for \"b723edac28058e0c6fca2c14e190b181b833f72123e65d6ea8c7fd91f6b32c82\"" Jan 15 23:54:35.207693 containerd[1896]: time="2026-01-15T23:54:35.207658031Z" level=info msg="Container to stop \"6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 15 23:54:35.213659 systemd[1]: cri-containerd-b723edac28058e0c6fca2c14e190b181b833f72123e65d6ea8c7fd91f6b32c82.scope: Deactivated successfully. Jan 15 23:54:35.218890 containerd[1896]: time="2026-01-15T23:54:35.218836684Z" level=info msg="received sandbox exit event container_id:\"b723edac28058e0c6fca2c14e190b181b833f72123e65d6ea8c7fd91f6b32c82\" id:\"b723edac28058e0c6fca2c14e190b181b833f72123e65d6ea8c7fd91f6b32c82\" exit_status:137 exited_at:{seconds:1768521275 nanos:218606425}" monitor_name=podsandbox Jan 15 23:54:35.228408 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d-rootfs.mount: Deactivated successfully. Jan 15 23:54:35.242748 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b723edac28058e0c6fca2c14e190b181b833f72123e65d6ea8c7fd91f6b32c82-rootfs.mount: Deactivated successfully. 
Jan 15 23:54:35.251707 containerd[1896]: time="2026-01-15T23:54:35.251666896Z" level=info msg="shim disconnected" id=c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d namespace=k8s.io Jan 15 23:54:35.252635 containerd[1896]: time="2026-01-15T23:54:35.251699609Z" level=warning msg="cleaning up after shim disconnected" id=c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d namespace=k8s.io Jan 15 23:54:35.252635 containerd[1896]: time="2026-01-15T23:54:35.251814319Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 15 23:54:35.262775 containerd[1896]: time="2026-01-15T23:54:35.262582472Z" level=info msg="TearDown network for sandbox \"c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d\" successfully" Jan 15 23:54:35.262775 containerd[1896]: time="2026-01-15T23:54:35.262613249Z" level=info msg="StopPodSandbox for \"c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d\" returns successfully" Jan 15 23:54:35.263179 containerd[1896]: time="2026-01-15T23:54:35.263025357Z" level=info msg="received sandbox container exit event sandbox_id:\"c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d\" exit_status:137 exited_at:{seconds:1768521275 nanos:206585412}" monitor_name=criService Jan 15 23:54:35.263374 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c131d60965f6435ee6e416344504204f012dcae1efb9a5a25c4c94adc706161d-shm.mount: Deactivated successfully. 
Jan 15 23:54:35.264835 containerd[1896]: time="2026-01-15T23:54:35.264093593Z" level=info msg="shim disconnected" id=b723edac28058e0c6fca2c14e190b181b833f72123e65d6ea8c7fd91f6b32c82 namespace=k8s.io Jan 15 23:54:35.264835 containerd[1896]: time="2026-01-15T23:54:35.264121858Z" level=warning msg="cleaning up after shim disconnected" id=b723edac28058e0c6fca2c14e190b181b833f72123e65d6ea8c7fd91f6b32c82 namespace=k8s.io Jan 15 23:54:35.264835 containerd[1896]: time="2026-01-15T23:54:35.264145827Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 15 23:54:35.275498 containerd[1896]: time="2026-01-15T23:54:35.275367122Z" level=info msg="TearDown network for sandbox \"b723edac28058e0c6fca2c14e190b181b833f72123e65d6ea8c7fd91f6b32c82\" successfully" Jan 15 23:54:35.275498 containerd[1896]: time="2026-01-15T23:54:35.275397515Z" level=info msg="StopPodSandbox for \"b723edac28058e0c6fca2c14e190b181b833f72123e65d6ea8c7fd91f6b32c82\" returns successfully" Jan 15 23:54:35.275623 containerd[1896]: time="2026-01-15T23:54:35.275540962Z" level=info msg="received sandbox container exit event sandbox_id:\"b723edac28058e0c6fca2c14e190b181b833f72123e65d6ea8c7fd91f6b32c82\" exit_status:137 exited_at:{seconds:1768521275 nanos:218606425}" monitor_name=criService Jan 15 23:54:35.379773 kubelet[3455]: I0115 23:54:35.379251 3455 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-host-proc-sys-net\") pod \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " Jan 15 23:54:35.379773 kubelet[3455]: I0115 23:54:35.379289 3455 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-hostproc\") pod \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " Jan 15 23:54:35.379773 
kubelet[3455]: I0115 23:54:35.379305 3455 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-cilium-cgroup\") pod \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " Jan 15 23:54:35.379773 kubelet[3455]: I0115 23:54:35.379320 3455 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-clustermesh-secrets\") pod \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " Jan 15 23:54:35.379773 kubelet[3455]: I0115 23:54:35.379338 3455 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-hubble-tls\") pod \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " Jan 15 23:54:35.379773 kubelet[3455]: I0115 23:54:35.379349 3455 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-bpf-maps\") pod \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " Jan 15 23:54:35.380201 kubelet[3455]: I0115 23:54:35.379359 3455 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-host-proc-sys-kernel\") pod \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " Jan 15 23:54:35.380201 kubelet[3455]: I0115 23:54:35.379372 3455 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-etc-cni-netd\") pod 
\"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " Jan 15 23:54:35.380201 kubelet[3455]: I0115 23:54:35.379380 3455 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-cni-path\") pod \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " Jan 15 23:54:35.380201 kubelet[3455]: I0115 23:54:35.379391 3455 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13e82da4-92e2-483f-b683-09f637581b86-cilium-config-path\") pod \"13e82da4-92e2-483f-b683-09f637581b86\" (UID: \"13e82da4-92e2-483f-b683-09f637581b86\") " Jan 15 23:54:35.380201 kubelet[3455]: I0115 23:54:35.379406 3455 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-cilium-config-path\") pod \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " Jan 15 23:54:35.380201 kubelet[3455]: I0115 23:54:35.379417 3455 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzlw9\" (UniqueName: \"kubernetes.io/projected/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-kube-api-access-jzlw9\") pod \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " Jan 15 23:54:35.380295 kubelet[3455]: I0115 23:54:35.379442 3455 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-lib-modules\") pod \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " Jan 15 23:54:35.380295 kubelet[3455]: I0115 23:54:35.379451 3455 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-cilium-run\") pod \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " Jan 15 23:54:35.380295 kubelet[3455]: I0115 23:54:35.379463 3455 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-xtables-lock\") pod \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\" (UID: \"a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8\") " Jan 15 23:54:35.380295 kubelet[3455]: I0115 23:54:35.379474 3455 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzpwc\" (UniqueName: \"kubernetes.io/projected/13e82da4-92e2-483f-b683-09f637581b86-kube-api-access-wzpwc\") pod \"13e82da4-92e2-483f-b683-09f637581b86\" (UID: \"13e82da4-92e2-483f-b683-09f637581b86\") " Jan 15 23:54:35.380295 kubelet[3455]: I0115 23:54:35.379327 3455 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8" (UID: "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 15 23:54:35.380368 kubelet[3455]: I0115 23:54:35.379569 3455 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8" (UID: "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 15 23:54:35.382942 kubelet[3455]: I0115 23:54:35.381022 3455 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-hostproc" (OuterVolumeSpecName: "hostproc") pod "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8" (UID: "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 15 23:54:35.382942 kubelet[3455]: I0115 23:54:35.381202 3455 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-cni-path" (OuterVolumeSpecName: "cni-path") pod "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8" (UID: "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 15 23:54:35.382942 kubelet[3455]: I0115 23:54:35.381884 3455 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8" (UID: "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 15 23:54:35.384155 kubelet[3455]: I0115 23:54:35.384107 3455 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8" (UID: "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 15 23:54:35.385591 kubelet[3455]: I0115 23:54:35.385564 3455 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8" (UID: "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 15 23:54:35.385707 kubelet[3455]: I0115 23:54:35.385692 3455 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8" (UID: "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 15 23:54:35.385834 kubelet[3455]: I0115 23:54:35.385797 3455 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8" (UID: "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 15 23:54:35.385834 kubelet[3455]: I0115 23:54:35.385828 3455 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8" (UID: "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 15 23:54:35.385896 kubelet[3455]: I0115 23:54:35.385839 3455 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8" (UID: "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 15 23:54:35.385912 kubelet[3455]: I0115 23:54:35.385904 3455 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13e82da4-92e2-483f-b683-09f637581b86-kube-api-access-wzpwc" (OuterVolumeSpecName: "kube-api-access-wzpwc") pod "13e82da4-92e2-483f-b683-09f637581b86" (UID: "13e82da4-92e2-483f-b683-09f637581b86"). InnerVolumeSpecName "kube-api-access-wzpwc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 15 23:54:35.386093 kubelet[3455]: I0115 23:54:35.386073 3455 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-kube-api-access-jzlw9" (OuterVolumeSpecName: "kube-api-access-jzlw9") pod "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8" (UID: "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8"). InnerVolumeSpecName "kube-api-access-jzlw9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 15 23:54:35.386872 kubelet[3455]: I0115 23:54:35.386820 3455 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8" (UID: "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 15 23:54:35.387269 kubelet[3455]: I0115 23:54:35.387236 3455 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8" (UID: "a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 15 23:54:35.387782 kubelet[3455]: I0115 23:54:35.387762 3455 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13e82da4-92e2-483f-b683-09f637581b86-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "13e82da4-92e2-483f-b683-09f637581b86" (UID: "13e82da4-92e2-483f-b683-09f637581b86"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 15 23:54:35.479953 kubelet[3455]: I0115 23:54:35.479915 3455 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-cilium-run\") on node \"ci-4459.2.2-n-e85017da3c\" DevicePath \"\"" Jan 15 23:54:35.479953 kubelet[3455]: I0115 23:54:35.479948 3455 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wzpwc\" (UniqueName: \"kubernetes.io/projected/13e82da4-92e2-483f-b683-09f637581b86-kube-api-access-wzpwc\") on node \"ci-4459.2.2-n-e85017da3c\" DevicePath \"\"" Jan 15 23:54:35.479953 kubelet[3455]: I0115 23:54:35.479962 3455 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-xtables-lock\") on node \"ci-4459.2.2-n-e85017da3c\" DevicePath \"\"" Jan 15 23:54:35.479953 kubelet[3455]: I0115 23:54:35.479969 3455 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-cilium-cgroup\") on node \"ci-4459.2.2-n-e85017da3c\" DevicePath \"\"" Jan 15 23:54:35.480163 kubelet[3455]: I0115 23:54:35.479975 3455 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-clustermesh-secrets\") on node \"ci-4459.2.2-n-e85017da3c\" DevicePath \"\"" Jan 15 23:54:35.480163 kubelet[3455]: I0115 23:54:35.479981 3455 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-host-proc-sys-net\") on node \"ci-4459.2.2-n-e85017da3c\" DevicePath \"\"" Jan 15 23:54:35.480163 kubelet[3455]: I0115 23:54:35.479987 3455 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-hostproc\") on node \"ci-4459.2.2-n-e85017da3c\" DevicePath \"\"" Jan 15 23:54:35.480163 kubelet[3455]: I0115 23:54:35.479993 3455 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-hubble-tls\") on node \"ci-4459.2.2-n-e85017da3c\" DevicePath \"\"" Jan 15 23:54:35.480163 kubelet[3455]: I0115 23:54:35.480005 3455 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-bpf-maps\") on node \"ci-4459.2.2-n-e85017da3c\" DevicePath \"\"" Jan 15 23:54:35.480163 kubelet[3455]: I0115 23:54:35.480011 3455 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-host-proc-sys-kernel\") on node \"ci-4459.2.2-n-e85017da3c\" DevicePath \"\"" Jan 15 23:54:35.480163 kubelet[3455]: I0115 23:54:35.480016 3455 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-cni-path\") on node \"ci-4459.2.2-n-e85017da3c\" DevicePath \"\"" Jan 15 23:54:35.480163 kubelet[3455]: I0115 23:54:35.480063 3455 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13e82da4-92e2-483f-b683-09f637581b86-cilium-config-path\") on node \"ci-4459.2.2-n-e85017da3c\" DevicePath \"\"" Jan 15 23:54:35.480281 kubelet[3455]: I0115 23:54:35.480069 3455 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-etc-cni-netd\") on node \"ci-4459.2.2-n-e85017da3c\" DevicePath \"\"" Jan 15 23:54:35.480281 kubelet[3455]: I0115 23:54:35.480075 3455 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-cilium-config-path\") on node \"ci-4459.2.2-n-e85017da3c\" DevicePath \"\"" Jan 15 23:54:35.480281 kubelet[3455]: I0115 23:54:35.480081 3455 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jzlw9\" (UniqueName: \"kubernetes.io/projected/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-kube-api-access-jzlw9\") on node \"ci-4459.2.2-n-e85017da3c\" DevicePath \"\"" Jan 15 23:54:35.480281 kubelet[3455]: I0115 23:54:35.480087 3455 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8-lib-modules\") on node \"ci-4459.2.2-n-e85017da3c\" DevicePath \"\"" Jan 15 23:54:35.626792 update_engine[1875]: I20260115 23:54:35.626718 1875 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 23:54:35.627145 update_engine[1875]: I20260115 23:54:35.626813 1875 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 23:54:35.627167 update_engine[1875]: I20260115 23:54:35.627147 1875 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 15 23:54:35.708549 update_engine[1875]: E20260115 23:54:35.708484 1875 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 15 23:54:35.708702 update_engine[1875]: I20260115 23:54:35.708575 1875 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 15 23:54:35.802676 kubelet[3455]: I0115 23:54:35.802467 3455 scope.go:117] "RemoveContainer" containerID="21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6" Jan 15 23:54:35.806942 containerd[1896]: time="2026-01-15T23:54:35.806909901Z" level=info msg="RemoveContainer for \"21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6\"" Jan 15 23:54:35.809459 systemd[1]: Removed slice kubepods-burstable-poda2d0fea2_b100_4c78_b9a1_1f65f3e2e3f8.slice - libcontainer container kubepods-burstable-poda2d0fea2_b100_4c78_b9a1_1f65f3e2e3f8.slice. Jan 15 23:54:35.809568 systemd[1]: kubepods-burstable-poda2d0fea2_b100_4c78_b9a1_1f65f3e2e3f8.slice: Consumed 4.578s CPU time, 126.3M memory peak, 136K read from disk, 12.9M written to disk. Jan 15 23:54:35.814589 systemd[1]: Removed slice kubepods-besteffort-pod13e82da4_92e2_483f_b683_09f637581b86.slice - libcontainer container kubepods-besteffort-pod13e82da4_92e2_483f_b683_09f637581b86.slice. 
Jan 15 23:54:35.823431 containerd[1896]: time="2026-01-15T23:54:35.823377250Z" level=info msg="RemoveContainer for \"21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6\" returns successfully" Jan 15 23:54:35.823832 kubelet[3455]: I0115 23:54:35.823781 3455 scope.go:117] "RemoveContainer" containerID="70e31be4bcbcd3a25573348a16123875efac8760b3b1d5b59be9895bac6c343c" Jan 15 23:54:35.826260 containerd[1896]: time="2026-01-15T23:54:35.826159608Z" level=info msg="RemoveContainer for \"70e31be4bcbcd3a25573348a16123875efac8760b3b1d5b59be9895bac6c343c\"" Jan 15 23:54:35.842833 containerd[1896]: time="2026-01-15T23:54:35.842797005Z" level=info msg="RemoveContainer for \"70e31be4bcbcd3a25573348a16123875efac8760b3b1d5b59be9895bac6c343c\" returns successfully" Jan 15 23:54:35.843202 kubelet[3455]: I0115 23:54:35.843169 3455 scope.go:117] "RemoveContainer" containerID="bae932c806f712ab42ffca491fb724a8f345400b2de8d188bd605f19e2d2f398" Jan 15 23:54:35.844915 containerd[1896]: time="2026-01-15T23:54:35.844888858Z" level=info msg="RemoveContainer for \"bae932c806f712ab42ffca491fb724a8f345400b2de8d188bd605f19e2d2f398\"" Jan 15 23:54:35.859694 containerd[1896]: time="2026-01-15T23:54:35.859655460Z" level=info msg="RemoveContainer for \"bae932c806f712ab42ffca491fb724a8f345400b2de8d188bd605f19e2d2f398\" returns successfully" Jan 15 23:54:35.859970 kubelet[3455]: I0115 23:54:35.859872 3455 scope.go:117] "RemoveContainer" containerID="9071f3fa356b57ee8800f3b9f8381c164a9749a5ed6c8dd3e23423a2e4f8ffea" Jan 15 23:54:35.861353 containerd[1896]: time="2026-01-15T23:54:35.861327717Z" level=info msg="RemoveContainer for \"9071f3fa356b57ee8800f3b9f8381c164a9749a5ed6c8dd3e23423a2e4f8ffea\"" Jan 15 23:54:35.871579 containerd[1896]: time="2026-01-15T23:54:35.871546523Z" level=info msg="RemoveContainer for \"9071f3fa356b57ee8800f3b9f8381c164a9749a5ed6c8dd3e23423a2e4f8ffea\" returns successfully" Jan 15 23:54:35.871876 kubelet[3455]: I0115 23:54:35.871853 3455 scope.go:117] 
"RemoveContainer" containerID="392c10d02163e00934da9ae053398bc57256759377227968b0c2292bf683c7c2" Jan 15 23:54:35.873231 containerd[1896]: time="2026-01-15T23:54:35.873203268Z" level=info msg="RemoveContainer for \"392c10d02163e00934da9ae053398bc57256759377227968b0c2292bf683c7c2\"" Jan 15 23:54:35.883964 containerd[1896]: time="2026-01-15T23:54:35.883936379Z" level=info msg="RemoveContainer for \"392c10d02163e00934da9ae053398bc57256759377227968b0c2292bf683c7c2\" returns successfully" Jan 15 23:54:35.884167 kubelet[3455]: I0115 23:54:35.884142 3455 scope.go:117] "RemoveContainer" containerID="21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6" Jan 15 23:54:35.884406 containerd[1896]: time="2026-01-15T23:54:35.884330694Z" level=error msg="ContainerStatus for \"21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6\": not found" Jan 15 23:54:35.884703 kubelet[3455]: E0115 23:54:35.884553 3455 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6\": not found" containerID="21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6" Jan 15 23:54:35.884703 kubelet[3455]: I0115 23:54:35.884585 3455 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6"} err="failed to get container status \"21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6\": rpc error: code = NotFound desc = an error occurred when try to find container \"21641b98b48c7ef254aae2399dd5add21cd9841e6a6eb3d864937bf13808a1e6\": not found" Jan 15 23:54:35.884703 kubelet[3455]: I0115 23:54:35.884639 3455 scope.go:117] "RemoveContainer" 
containerID="70e31be4bcbcd3a25573348a16123875efac8760b3b1d5b59be9895bac6c343c" Jan 15 23:54:35.884863 containerd[1896]: time="2026-01-15T23:54:35.884829006Z" level=error msg="ContainerStatus for \"70e31be4bcbcd3a25573348a16123875efac8760b3b1d5b59be9895bac6c343c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"70e31be4bcbcd3a25573348a16123875efac8760b3b1d5b59be9895bac6c343c\": not found" Jan 15 23:54:35.884939 kubelet[3455]: E0115 23:54:35.884919 3455 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"70e31be4bcbcd3a25573348a16123875efac8760b3b1d5b59be9895bac6c343c\": not found" containerID="70e31be4bcbcd3a25573348a16123875efac8760b3b1d5b59be9895bac6c343c" Jan 15 23:54:35.884960 kubelet[3455]: I0115 23:54:35.884943 3455 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"70e31be4bcbcd3a25573348a16123875efac8760b3b1d5b59be9895bac6c343c"} err="failed to get container status \"70e31be4bcbcd3a25573348a16123875efac8760b3b1d5b59be9895bac6c343c\": rpc error: code = NotFound desc = an error occurred when try to find container \"70e31be4bcbcd3a25573348a16123875efac8760b3b1d5b59be9895bac6c343c\": not found" Jan 15 23:54:35.884978 kubelet[3455]: I0115 23:54:35.884959 3455 scope.go:117] "RemoveContainer" containerID="bae932c806f712ab42ffca491fb724a8f345400b2de8d188bd605f19e2d2f398" Jan 15 23:54:35.885110 containerd[1896]: time="2026-01-15T23:54:35.885085266Z" level=error msg="ContainerStatus for \"bae932c806f712ab42ffca491fb724a8f345400b2de8d188bd605f19e2d2f398\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bae932c806f712ab42ffca491fb724a8f345400b2de8d188bd605f19e2d2f398\": not found" Jan 15 23:54:35.885272 kubelet[3455]: E0115 23:54:35.885205 3455 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"bae932c806f712ab42ffca491fb724a8f345400b2de8d188bd605f19e2d2f398\": not found" containerID="bae932c806f712ab42ffca491fb724a8f345400b2de8d188bd605f19e2d2f398" Jan 15 23:54:35.885297 kubelet[3455]: I0115 23:54:35.885275 3455 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bae932c806f712ab42ffca491fb724a8f345400b2de8d188bd605f19e2d2f398"} err="failed to get container status \"bae932c806f712ab42ffca491fb724a8f345400b2de8d188bd605f19e2d2f398\": rpc error: code = NotFound desc = an error occurred when try to find container \"bae932c806f712ab42ffca491fb724a8f345400b2de8d188bd605f19e2d2f398\": not found" Jan 15 23:54:35.885297 kubelet[3455]: I0115 23:54:35.885289 3455 scope.go:117] "RemoveContainer" containerID="9071f3fa356b57ee8800f3b9f8381c164a9749a5ed6c8dd3e23423a2e4f8ffea" Jan 15 23:54:35.885498 containerd[1896]: time="2026-01-15T23:54:35.885474517Z" level=error msg="ContainerStatus for \"9071f3fa356b57ee8800f3b9f8381c164a9749a5ed6c8dd3e23423a2e4f8ffea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9071f3fa356b57ee8800f3b9f8381c164a9749a5ed6c8dd3e23423a2e4f8ffea\": not found" Jan 15 23:54:35.885633 kubelet[3455]: E0115 23:54:35.885611 3455 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9071f3fa356b57ee8800f3b9f8381c164a9749a5ed6c8dd3e23423a2e4f8ffea\": not found" containerID="9071f3fa356b57ee8800f3b9f8381c164a9749a5ed6c8dd3e23423a2e4f8ffea" Jan 15 23:54:35.885661 kubelet[3455]: I0115 23:54:35.885631 3455 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9071f3fa356b57ee8800f3b9f8381c164a9749a5ed6c8dd3e23423a2e4f8ffea"} err="failed to get container status \"9071f3fa356b57ee8800f3b9f8381c164a9749a5ed6c8dd3e23423a2e4f8ffea\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"9071f3fa356b57ee8800f3b9f8381c164a9749a5ed6c8dd3e23423a2e4f8ffea\": not found" Jan 15 23:54:35.885661 kubelet[3455]: I0115 23:54:35.885646 3455 scope.go:117] "RemoveContainer" containerID="392c10d02163e00934da9ae053398bc57256759377227968b0c2292bf683c7c2" Jan 15 23:54:35.885883 containerd[1896]: time="2026-01-15T23:54:35.885790388Z" level=error msg="ContainerStatus for \"392c10d02163e00934da9ae053398bc57256759377227968b0c2292bf683c7c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"392c10d02163e00934da9ae053398bc57256759377227968b0c2292bf683c7c2\": not found" Jan 15 23:54:35.885962 kubelet[3455]: E0115 23:54:35.885942 3455 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"392c10d02163e00934da9ae053398bc57256759377227968b0c2292bf683c7c2\": not found" containerID="392c10d02163e00934da9ae053398bc57256759377227968b0c2292bf683c7c2" Jan 15 23:54:35.885988 kubelet[3455]: I0115 23:54:35.885965 3455 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"392c10d02163e00934da9ae053398bc57256759377227968b0c2292bf683c7c2"} err="failed to get container status \"392c10d02163e00934da9ae053398bc57256759377227968b0c2292bf683c7c2\": rpc error: code = NotFound desc = an error occurred when try to find container \"392c10d02163e00934da9ae053398bc57256759377227968b0c2292bf683c7c2\": not found" Jan 15 23:54:35.885988 kubelet[3455]: I0115 23:54:35.885979 3455 scope.go:117] "RemoveContainer" containerID="6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b" Jan 15 23:54:35.887443 containerd[1896]: time="2026-01-15T23:54:35.887363184Z" level=info msg="RemoveContainer for \"6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b\"" Jan 15 23:54:35.900268 containerd[1896]: time="2026-01-15T23:54:35.900208902Z" level=info msg="RemoveContainer for 
\"6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b\" returns successfully" Jan 15 23:54:35.901595 kubelet[3455]: I0115 23:54:35.901571 3455 scope.go:117] "RemoveContainer" containerID="6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b" Jan 15 23:54:35.902663 containerd[1896]: time="2026-01-15T23:54:35.902626963Z" level=error msg="ContainerStatus for \"6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b\": not found" Jan 15 23:54:35.902889 kubelet[3455]: E0115 23:54:35.902867 3455 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b\": not found" containerID="6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b" Jan 15 23:54:35.902941 kubelet[3455]: I0115 23:54:35.902893 3455 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b"} err="failed to get container status \"6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e909f4b8d413eebfa6c5805a44c644c06b39bf681cb60cf3d44c7665562729b\": not found" Jan 15 23:54:36.089318 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b723edac28058e0c6fca2c14e190b181b833f72123e65d6ea8c7fd91f6b32c82-shm.mount: Deactivated successfully. Jan 15 23:54:36.090317 systemd[1]: var-lib-kubelet-pods-13e82da4\x2d92e2\x2d483f\x2db683\x2d09f637581b86-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwzpwc.mount: Deactivated successfully. 
Jan 15 23:54:36.090367 systemd[1]: var-lib-kubelet-pods-a2d0fea2\x2db100\x2d4c78\x2db9a1\x2d1f65f3e2e3f8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djzlw9.mount: Deactivated successfully. Jan 15 23:54:36.090410 systemd[1]: var-lib-kubelet-pods-a2d0fea2\x2db100\x2d4c78\x2db9a1\x2d1f65f3e2e3f8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 15 23:54:36.090471 systemd[1]: var-lib-kubelet-pods-a2d0fea2\x2db100\x2d4c78\x2db9a1\x2d1f65f3e2e3f8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 15 23:54:36.488675 kubelet[3455]: I0115 23:54:36.488132 3455 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13e82da4-92e2-483f-b683-09f637581b86" path="/var/lib/kubelet/pods/13e82da4-92e2-483f-b683-09f637581b86/volumes" Jan 15 23:54:36.488675 kubelet[3455]: I0115 23:54:36.488479 3455 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8" path="/var/lib/kubelet/pods/a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8/volumes" Jan 15 23:54:37.049439 sshd[4996]: Connection closed by 10.200.16.10 port 34990 Jan 15 23:54:37.049322 sshd-session[4993]: pam_unix(sshd:session): session closed for user core Jan 15 23:54:37.053592 systemd-logind[1871]: Session 24 logged out. Waiting for processes to exit. Jan 15 23:54:37.054280 systemd[1]: sshd@21-10.200.20.10:22-10.200.16.10:34990.service: Deactivated successfully. Jan 15 23:54:37.056048 systemd[1]: session-24.scope: Deactivated successfully. Jan 15 23:54:37.058339 systemd-logind[1871]: Removed session 24. Jan 15 23:54:37.138293 systemd[1]: Started sshd@22-10.200.20.10:22-10.200.16.10:34992.service - OpenSSH per-connection server daemon (10.200.16.10:34992). 
Jan 15 23:54:37.593107 sshd[5141]: Accepted publickey for core from 10.200.16.10 port 34992 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:54:37.594212 sshd-session[5141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:54:37.597897 systemd-logind[1871]: New session 25 of user core. Jan 15 23:54:37.605580 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 15 23:54:37.822022 kubelet[3455]: E0115 23:54:37.821916 3455 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 15 23:54:38.239145 kubelet[3455]: I0115 23:54:38.239104 3455 memory_manager.go:355] "RemoveStaleState removing state" podUID="a2d0fea2-b100-4c78-b9a1-1f65f3e2e3f8" containerName="cilium-agent" Jan 15 23:54:38.239145 kubelet[3455]: I0115 23:54:38.239132 3455 memory_manager.go:355] "RemoveStaleState removing state" podUID="13e82da4-92e2-483f-b683-09f637581b86" containerName="cilium-operator" Jan 15 23:54:38.248634 systemd[1]: Created slice kubepods-burstable-pod138d085f_55d9_4b18_bb19_88c2558da02e.slice - libcontainer container kubepods-burstable-pod138d085f_55d9_4b18_bb19_88c2558da02e.slice. 
Jan 15 23:54:38.297235 kubelet[3455]: I0115 23:54:38.297194 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/138d085f-55d9-4b18-bb19-88c2558da02e-cilium-config-path\") pod \"cilium-hdsg9\" (UID: \"138d085f-55d9-4b18-bb19-88c2558da02e\") " pod="kube-system/cilium-hdsg9" Jan 15 23:54:38.297235 kubelet[3455]: I0115 23:54:38.297234 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/138d085f-55d9-4b18-bb19-88c2558da02e-bpf-maps\") pod \"cilium-hdsg9\" (UID: \"138d085f-55d9-4b18-bb19-88c2558da02e\") " pod="kube-system/cilium-hdsg9" Jan 15 23:54:38.297235 kubelet[3455]: I0115 23:54:38.297249 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/138d085f-55d9-4b18-bb19-88c2558da02e-host-proc-sys-kernel\") pod \"cilium-hdsg9\" (UID: \"138d085f-55d9-4b18-bb19-88c2558da02e\") " pod="kube-system/cilium-hdsg9" Jan 15 23:54:38.297667 kubelet[3455]: I0115 23:54:38.297258 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/138d085f-55d9-4b18-bb19-88c2558da02e-hostproc\") pod \"cilium-hdsg9\" (UID: \"138d085f-55d9-4b18-bb19-88c2558da02e\") " pod="kube-system/cilium-hdsg9" Jan 15 23:54:38.297667 kubelet[3455]: I0115 23:54:38.297268 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/138d085f-55d9-4b18-bb19-88c2558da02e-cilium-cgroup\") pod \"cilium-hdsg9\" (UID: \"138d085f-55d9-4b18-bb19-88c2558da02e\") " pod="kube-system/cilium-hdsg9" Jan 15 23:54:38.297667 kubelet[3455]: I0115 23:54:38.297278 3455 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/138d085f-55d9-4b18-bb19-88c2558da02e-host-proc-sys-net\") pod \"cilium-hdsg9\" (UID: \"138d085f-55d9-4b18-bb19-88c2558da02e\") " pod="kube-system/cilium-hdsg9" Jan 15 23:54:38.297667 kubelet[3455]: I0115 23:54:38.297291 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/138d085f-55d9-4b18-bb19-88c2558da02e-cni-path\") pod \"cilium-hdsg9\" (UID: \"138d085f-55d9-4b18-bb19-88c2558da02e\") " pod="kube-system/cilium-hdsg9" Jan 15 23:54:38.297667 kubelet[3455]: I0115 23:54:38.297303 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/138d085f-55d9-4b18-bb19-88c2558da02e-etc-cni-netd\") pod \"cilium-hdsg9\" (UID: \"138d085f-55d9-4b18-bb19-88c2558da02e\") " pod="kube-system/cilium-hdsg9" Jan 15 23:54:38.297667 kubelet[3455]: I0115 23:54:38.297335 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/138d085f-55d9-4b18-bb19-88c2558da02e-clustermesh-secrets\") pod \"cilium-hdsg9\" (UID: \"138d085f-55d9-4b18-bb19-88c2558da02e\") " pod="kube-system/cilium-hdsg9" Jan 15 23:54:38.297762 kubelet[3455]: I0115 23:54:38.297370 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/138d085f-55d9-4b18-bb19-88c2558da02e-lib-modules\") pod \"cilium-hdsg9\" (UID: \"138d085f-55d9-4b18-bb19-88c2558da02e\") " pod="kube-system/cilium-hdsg9" Jan 15 23:54:38.297762 kubelet[3455]: I0115 23:54:38.297386 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/138d085f-55d9-4b18-bb19-88c2558da02e-cilium-ipsec-secrets\") pod \"cilium-hdsg9\" (UID: \"138d085f-55d9-4b18-bb19-88c2558da02e\") " pod="kube-system/cilium-hdsg9" Jan 15 23:54:38.297762 kubelet[3455]: I0115 23:54:38.297402 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/138d085f-55d9-4b18-bb19-88c2558da02e-hubble-tls\") pod \"cilium-hdsg9\" (UID: \"138d085f-55d9-4b18-bb19-88c2558da02e\") " pod="kube-system/cilium-hdsg9" Jan 15 23:54:38.297762 kubelet[3455]: I0115 23:54:38.297432 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/138d085f-55d9-4b18-bb19-88c2558da02e-cilium-run\") pod \"cilium-hdsg9\" (UID: \"138d085f-55d9-4b18-bb19-88c2558da02e\") " pod="kube-system/cilium-hdsg9" Jan 15 23:54:38.297762 kubelet[3455]: I0115 23:54:38.297444 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/138d085f-55d9-4b18-bb19-88c2558da02e-xtables-lock\") pod \"cilium-hdsg9\" (UID: \"138d085f-55d9-4b18-bb19-88c2558da02e\") " pod="kube-system/cilium-hdsg9" Jan 15 23:54:38.297762 kubelet[3455]: I0115 23:54:38.297456 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsmlk\" (UniqueName: \"kubernetes.io/projected/138d085f-55d9-4b18-bb19-88c2558da02e-kube-api-access-vsmlk\") pod \"cilium-hdsg9\" (UID: \"138d085f-55d9-4b18-bb19-88c2558da02e\") " pod="kube-system/cilium-hdsg9" Jan 15 23:54:38.302819 sshd[5144]: Connection closed by 10.200.16.10 port 34992 Jan 15 23:54:38.302663 sshd-session[5141]: pam_unix(sshd:session): session closed for user core Jan 15 23:54:38.307826 systemd[1]: sshd@22-10.200.20.10:22-10.200.16.10:34992.service: Deactivated successfully. 
Jan 15 23:54:38.310385 systemd[1]: session-25.scope: Deactivated successfully. Jan 15 23:54:38.311144 systemd-logind[1871]: Session 25 logged out. Waiting for processes to exit. Jan 15 23:54:38.312703 systemd-logind[1871]: Removed session 25. Jan 15 23:54:38.394738 systemd[1]: Started sshd@23-10.200.20.10:22-10.200.16.10:35002.service - OpenSSH per-connection server daemon (10.200.16.10:35002). Jan 15 23:54:38.552572 containerd[1896]: time="2026-01-15T23:54:38.552445067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hdsg9,Uid:138d085f-55d9-4b18-bb19-88c2558da02e,Namespace:kube-system,Attempt:0,}" Jan 15 23:54:38.596281 containerd[1896]: time="2026-01-15T23:54:38.596239513Z" level=info msg="connecting to shim 2ca8c0116dae6f1b1fb4c0e724bb405f25c7b4c4fa5ab41f4a80ed31e920d850" address="unix:///run/containerd/s/00857a8c7dd812b1e4085764e1cefa22dfbe07c1cb7cc06363a55bb78c57183f" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:54:38.614687 systemd[1]: Started cri-containerd-2ca8c0116dae6f1b1fb4c0e724bb405f25c7b4c4fa5ab41f4a80ed31e920d850.scope - libcontainer container 2ca8c0116dae6f1b1fb4c0e724bb405f25c7b4c4fa5ab41f4a80ed31e920d850. 
Jan 15 23:54:38.639581 containerd[1896]: time="2026-01-15T23:54:38.639546961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hdsg9,Uid:138d085f-55d9-4b18-bb19-88c2558da02e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ca8c0116dae6f1b1fb4c0e724bb405f25c7b4c4fa5ab41f4a80ed31e920d850\"" Jan 15 23:54:38.644178 containerd[1896]: time="2026-01-15T23:54:38.644126395Z" level=info msg="CreateContainer within sandbox \"2ca8c0116dae6f1b1fb4c0e724bb405f25c7b4c4fa5ab41f4a80ed31e920d850\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 15 23:54:38.665389 containerd[1896]: time="2026-01-15T23:54:38.665349278Z" level=info msg="Container 27d6701d49ffd2cbf1f9368a16a4b0274b286bcc66004d62c8b9fb2d9d569f56: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:54:38.682285 containerd[1896]: time="2026-01-15T23:54:38.682187776Z" level=info msg="CreateContainer within sandbox \"2ca8c0116dae6f1b1fb4c0e724bb405f25c7b4c4fa5ab41f4a80ed31e920d850\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"27d6701d49ffd2cbf1f9368a16a4b0274b286bcc66004d62c8b9fb2d9d569f56\"" Jan 15 23:54:38.683478 containerd[1896]: time="2026-01-15T23:54:38.683207433Z" level=info msg="StartContainer for \"27d6701d49ffd2cbf1f9368a16a4b0274b286bcc66004d62c8b9fb2d9d569f56\"" Jan 15 23:54:38.684340 containerd[1896]: time="2026-01-15T23:54:38.684320486Z" level=info msg="connecting to shim 27d6701d49ffd2cbf1f9368a16a4b0274b286bcc66004d62c8b9fb2d9d569f56" address="unix:///run/containerd/s/00857a8c7dd812b1e4085764e1cefa22dfbe07c1cb7cc06363a55bb78c57183f" protocol=ttrpc version=3 Jan 15 23:54:38.700565 systemd[1]: Started cri-containerd-27d6701d49ffd2cbf1f9368a16a4b0274b286bcc66004d62c8b9fb2d9d569f56.scope - libcontainer container 27d6701d49ffd2cbf1f9368a16a4b0274b286bcc66004d62c8b9fb2d9d569f56. 
Jan 15 23:54:38.727704 containerd[1896]: time="2026-01-15T23:54:38.727571506Z" level=info msg="StartContainer for \"27d6701d49ffd2cbf1f9368a16a4b0274b286bcc66004d62c8b9fb2d9d569f56\" returns successfully" Jan 15 23:54:38.732713 systemd[1]: cri-containerd-27d6701d49ffd2cbf1f9368a16a4b0274b286bcc66004d62c8b9fb2d9d569f56.scope: Deactivated successfully. Jan 15 23:54:38.737898 containerd[1896]: time="2026-01-15T23:54:38.737853804Z" level=info msg="received container exit event container_id:\"27d6701d49ffd2cbf1f9368a16a4b0274b286bcc66004d62c8b9fb2d9d569f56\" id:\"27d6701d49ffd2cbf1f9368a16a4b0274b286bcc66004d62c8b9fb2d9d569f56\" pid:5220 exited_at:{seconds:1768521278 nanos:737634730}" Jan 15 23:54:38.818356 containerd[1896]: time="2026-01-15T23:54:38.818249090Z" level=info msg="CreateContainer within sandbox \"2ca8c0116dae6f1b1fb4c0e724bb405f25c7b4c4fa5ab41f4a80ed31e920d850\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 15 23:54:38.838440 containerd[1896]: time="2026-01-15T23:54:38.838019760Z" level=info msg="Container 5b0669c61b8515b26d78bb5287d97248278aba79555dad361130b15fe9b55be2: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:54:38.854515 containerd[1896]: time="2026-01-15T23:54:38.854445095Z" level=info msg="CreateContainer within sandbox \"2ca8c0116dae6f1b1fb4c0e724bb405f25c7b4c4fa5ab41f4a80ed31e920d850\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5b0669c61b8515b26d78bb5287d97248278aba79555dad361130b15fe9b55be2\"" Jan 15 23:54:38.856450 containerd[1896]: time="2026-01-15T23:54:38.856314072Z" level=info msg="StartContainer for \"5b0669c61b8515b26d78bb5287d97248278aba79555dad361130b15fe9b55be2\"" Jan 15 23:54:38.857308 containerd[1896]: time="2026-01-15T23:54:38.857178993Z" level=info msg="connecting to shim 5b0669c61b8515b26d78bb5287d97248278aba79555dad361130b15fe9b55be2" address="unix:///run/containerd/s/00857a8c7dd812b1e4085764e1cefa22dfbe07c1cb7cc06363a55bb78c57183f" 
protocol=ttrpc version=3 Jan 15 23:54:38.874570 systemd[1]: Started cri-containerd-5b0669c61b8515b26d78bb5287d97248278aba79555dad361130b15fe9b55be2.scope - libcontainer container 5b0669c61b8515b26d78bb5287d97248278aba79555dad361130b15fe9b55be2. Jan 15 23:54:38.889872 sshd[5154]: Accepted publickey for core from 10.200.16.10 port 35002 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:54:38.890923 sshd-session[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:54:38.895139 systemd-logind[1871]: New session 26 of user core. Jan 15 23:54:38.900663 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 15 23:54:38.908390 systemd[1]: cri-containerd-5b0669c61b8515b26d78bb5287d97248278aba79555dad361130b15fe9b55be2.scope: Deactivated successfully. Jan 15 23:54:38.911762 containerd[1896]: time="2026-01-15T23:54:38.911650988Z" level=info msg="StartContainer for \"5b0669c61b8515b26d78bb5287d97248278aba79555dad361130b15fe9b55be2\" returns successfully" Jan 15 23:54:38.913410 containerd[1896]: time="2026-01-15T23:54:38.913377758Z" level=info msg="received container exit event container_id:\"5b0669c61b8515b26d78bb5287d97248278aba79555dad361130b15fe9b55be2\" id:\"5b0669c61b8515b26d78bb5287d97248278aba79555dad361130b15fe9b55be2\" pid:5265 exited_at:{seconds:1768521278 nanos:913044023}" Jan 15 23:54:39.251856 sshd[5283]: Connection closed by 10.200.16.10 port 35002 Jan 15 23:54:39.251696 sshd-session[5154]: pam_unix(sshd:session): session closed for user core Jan 15 23:54:39.255341 systemd[1]: sshd@23-10.200.20.10:22-10.200.16.10:35002.service: Deactivated successfully. Jan 15 23:54:39.257724 systemd[1]: session-26.scope: Deactivated successfully. Jan 15 23:54:39.259082 systemd-logind[1871]: Session 26 logged out. Waiting for processes to exit. Jan 15 23:54:39.260930 systemd-logind[1871]: Removed session 26. 
Jan 15 23:54:39.345994 systemd[1]: Started sshd@24-10.200.20.10:22-10.200.16.10:35010.service - OpenSSH per-connection server daemon (10.200.16.10:35010). Jan 15 23:54:39.823940 containerd[1896]: time="2026-01-15T23:54:39.823822735Z" level=info msg="CreateContainer within sandbox \"2ca8c0116dae6f1b1fb4c0e724bb405f25c7b4c4fa5ab41f4a80ed31e920d850\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 15 23:54:39.857765 containerd[1896]: time="2026-01-15T23:54:39.857655963Z" level=info msg="Container e5aea64f765007de011ba20ba88a3439ace0d8c0dc3b4af82ae0894e57f13509: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:54:39.874450 sshd[5302]: Accepted publickey for core from 10.200.16.10 port 35010 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:54:39.875607 sshd-session[5302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:54:39.883901 containerd[1896]: time="2026-01-15T23:54:39.883312801Z" level=info msg="CreateContainer within sandbox \"2ca8c0116dae6f1b1fb4c0e724bb405f25c7b4c4fa5ab41f4a80ed31e920d850\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e5aea64f765007de011ba20ba88a3439ace0d8c0dc3b4af82ae0894e57f13509\"" Jan 15 23:54:39.883493 systemd-logind[1871]: New session 27 of user core. Jan 15 23:54:39.885273 containerd[1896]: time="2026-01-15T23:54:39.884628592Z" level=info msg="StartContainer for \"e5aea64f765007de011ba20ba88a3439ace0d8c0dc3b4af82ae0894e57f13509\"" Jan 15 23:54:39.885646 containerd[1896]: time="2026-01-15T23:54:39.885606503Z" level=info msg="connecting to shim e5aea64f765007de011ba20ba88a3439ace0d8c0dc3b4af82ae0894e57f13509" address="unix:///run/containerd/s/00857a8c7dd812b1e4085764e1cefa22dfbe07c1cb7cc06363a55bb78c57183f" protocol=ttrpc version=3 Jan 15 23:54:39.887572 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 15 23:54:39.909582 systemd[1]: Started cri-containerd-e5aea64f765007de011ba20ba88a3439ace0d8c0dc3b4af82ae0894e57f13509.scope - libcontainer container e5aea64f765007de011ba20ba88a3439ace0d8c0dc3b4af82ae0894e57f13509. Jan 15 23:54:39.968860 systemd[1]: cri-containerd-e5aea64f765007de011ba20ba88a3439ace0d8c0dc3b4af82ae0894e57f13509.scope: Deactivated successfully. Jan 15 23:54:39.975040 containerd[1896]: time="2026-01-15T23:54:39.974331834Z" level=info msg="received container exit event container_id:\"e5aea64f765007de011ba20ba88a3439ace0d8c0dc3b4af82ae0894e57f13509\" id:\"e5aea64f765007de011ba20ba88a3439ace0d8c0dc3b4af82ae0894e57f13509\" pid:5320 exited_at:{seconds:1768521279 nanos:971247839}" Jan 15 23:54:39.976004 containerd[1896]: time="2026-01-15T23:54:39.975957791Z" level=info msg="StartContainer for \"e5aea64f765007de011ba20ba88a3439ace0d8c0dc3b4af82ae0894e57f13509\" returns successfully" Jan 15 23:54:39.992310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5aea64f765007de011ba20ba88a3439ace0d8c0dc3b4af82ae0894e57f13509-rootfs.mount: Deactivated successfully. Jan 15 23:54:40.829485 containerd[1896]: time="2026-01-15T23:54:40.829361618Z" level=info msg="CreateContainer within sandbox \"2ca8c0116dae6f1b1fb4c0e724bb405f25c7b4c4fa5ab41f4a80ed31e920d850\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 15 23:54:40.865242 containerd[1896]: time="2026-01-15T23:54:40.865131546Z" level=info msg="Container 283aa6672788cbe14a63b98142268edf02234cc470dd07b8c628a027190f328e: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:54:40.867933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4170164663.mount: Deactivated successfully. 
Jan 15 23:54:40.886688 containerd[1896]: time="2026-01-15T23:54:40.886648371Z" level=info msg="CreateContainer within sandbox \"2ca8c0116dae6f1b1fb4c0e724bb405f25c7b4c4fa5ab41f4a80ed31e920d850\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"283aa6672788cbe14a63b98142268edf02234cc470dd07b8c628a027190f328e\"" Jan 15 23:54:40.887269 containerd[1896]: time="2026-01-15T23:54:40.887239407Z" level=info msg="StartContainer for \"283aa6672788cbe14a63b98142268edf02234cc470dd07b8c628a027190f328e\"" Jan 15 23:54:40.888394 containerd[1896]: time="2026-01-15T23:54:40.888367701Z" level=info msg="connecting to shim 283aa6672788cbe14a63b98142268edf02234cc470dd07b8c628a027190f328e" address="unix:///run/containerd/s/00857a8c7dd812b1e4085764e1cefa22dfbe07c1cb7cc06363a55bb78c57183f" protocol=ttrpc version=3 Jan 15 23:54:40.908590 systemd[1]: Started cri-containerd-283aa6672788cbe14a63b98142268edf02234cc470dd07b8c628a027190f328e.scope - libcontainer container 283aa6672788cbe14a63b98142268edf02234cc470dd07b8c628a027190f328e. Jan 15 23:54:40.928909 systemd[1]: cri-containerd-283aa6672788cbe14a63b98142268edf02234cc470dd07b8c628a027190f328e.scope: Deactivated successfully. Jan 15 23:54:40.940976 containerd[1896]: time="2026-01-15T23:54:40.940843201Z" level=info msg="received container exit event container_id:\"283aa6672788cbe14a63b98142268edf02234cc470dd07b8c628a027190f328e\" id:\"283aa6672788cbe14a63b98142268edf02234cc470dd07b8c628a027190f328e\" pid:5366 exited_at:{seconds:1768521280 nanos:930243592}" Jan 15 23:54:40.942377 containerd[1896]: time="2026-01-15T23:54:40.942346961Z" level=info msg="StartContainer for \"283aa6672788cbe14a63b98142268edf02234cc470dd07b8c628a027190f328e\" returns successfully" Jan 15 23:54:40.960936 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-283aa6672788cbe14a63b98142268edf02234cc470dd07b8c628a027190f328e-rootfs.mount: Deactivated successfully. 
Jan 15 23:54:41.834970 containerd[1896]: time="2026-01-15T23:54:41.833903293Z" level=info msg="CreateContainer within sandbox \"2ca8c0116dae6f1b1fb4c0e724bb405f25c7b4c4fa5ab41f4a80ed31e920d850\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 15 23:54:41.864594 containerd[1896]: time="2026-01-15T23:54:41.864555625Z" level=info msg="Container 577e566d31a60602a2c9a88a52a40fa958b0fc491298134b69d703d5f9da562f: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:54:41.895875 containerd[1896]: time="2026-01-15T23:54:41.895833115Z" level=info msg="CreateContainer within sandbox \"2ca8c0116dae6f1b1fb4c0e724bb405f25c7b4c4fa5ab41f4a80ed31e920d850\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"577e566d31a60602a2c9a88a52a40fa958b0fc491298134b69d703d5f9da562f\"" Jan 15 23:54:41.897603 containerd[1896]: time="2026-01-15T23:54:41.897562142Z" level=info msg="StartContainer for \"577e566d31a60602a2c9a88a52a40fa958b0fc491298134b69d703d5f9da562f\"" Jan 15 23:54:41.898534 containerd[1896]: time="2026-01-15T23:54:41.898506794Z" level=info msg="connecting to shim 577e566d31a60602a2c9a88a52a40fa958b0fc491298134b69d703d5f9da562f" address="unix:///run/containerd/s/00857a8c7dd812b1e4085764e1cefa22dfbe07c1cb7cc06363a55bb78c57183f" protocol=ttrpc version=3 Jan 15 23:54:41.918576 systemd[1]: Started cri-containerd-577e566d31a60602a2c9a88a52a40fa958b0fc491298134b69d703d5f9da562f.scope - libcontainer container 577e566d31a60602a2c9a88a52a40fa958b0fc491298134b69d703d5f9da562f. 
Jan 15 23:54:41.969003 containerd[1896]: time="2026-01-15T23:54:41.968957999Z" level=info msg="StartContainer for \"577e566d31a60602a2c9a88a52a40fa958b0fc491298134b69d703d5f9da562f\" returns successfully" Jan 15 23:54:42.246476 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 15 23:54:44.641143 systemd-networkd[1493]: lxc_health: Link UP Jan 15 23:54:44.657884 systemd-networkd[1493]: lxc_health: Gained carrier Jan 15 23:54:45.629557 update_engine[1875]: I20260115 23:54:45.629482 1875 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 23:54:45.629900 update_engine[1875]: I20260115 23:54:45.629580 1875 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 23:54:45.629920 update_engine[1875]: I20260115 23:54:45.629899 1875 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 15 23:54:45.711195 update_engine[1875]: E20260115 23:54:45.711128 1875 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 15 23:54:45.711352 update_engine[1875]: I20260115 23:54:45.711217 1875 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 15 23:54:45.711352 update_engine[1875]: I20260115 23:54:45.711225 1875 omaha_request_action.cc:617] Omaha request response: Jan 15 23:54:45.711352 update_engine[1875]: E20260115 23:54:45.711323 1875 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 15 23:54:45.711352 update_engine[1875]: I20260115 23:54:45.711339 1875 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 15 23:54:45.711352 update_engine[1875]: I20260115 23:54:45.711344 1875 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 15 23:54:45.711352 update_engine[1875]: I20260115 23:54:45.711349 1875 update_attempter.cc:306] Processing Done. 
Jan 15 23:54:45.711468 update_engine[1875]: E20260115 23:54:45.711360 1875 update_attempter.cc:619] Update failed. Jan 15 23:54:45.711468 update_engine[1875]: I20260115 23:54:45.711364 1875 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 15 23:54:45.711468 update_engine[1875]: I20260115 23:54:45.711367 1875 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 15 23:54:45.711468 update_engine[1875]: I20260115 23:54:45.711371 1875 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 15 23:54:45.712951 update_engine[1875]: I20260115 23:54:45.712499 1875 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 15 23:54:45.712951 update_engine[1875]: I20260115 23:54:45.712543 1875 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 15 23:54:45.712951 update_engine[1875]: I20260115 23:54:45.712548 1875 omaha_request_action.cc:272] Request: Jan 15 23:54:45.712951 update_engine[1875]: Jan 15 23:54:45.712951 update_engine[1875]: Jan 15 23:54:45.712951 update_engine[1875]: Jan 15 23:54:45.712951 update_engine[1875]: Jan 15 23:54:45.712951 update_engine[1875]: Jan 15 23:54:45.712951 update_engine[1875]: Jan 15 23:54:45.712951 update_engine[1875]: I20260115 23:54:45.712553 1875 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 23:54:45.712951 update_engine[1875]: I20260115 23:54:45.712575 1875 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 23:54:45.712951 update_engine[1875]: I20260115 23:54:45.712908 1875 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 15 23:54:45.713282 locksmithd[1971]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 15 23:54:45.998287 update_engine[1875]: E20260115 23:54:45.998064 1875 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 15 23:54:45.998287 update_engine[1875]: I20260115 23:54:45.998149 1875 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 15 23:54:45.998287 update_engine[1875]: I20260115 23:54:45.998157 1875 omaha_request_action.cc:617] Omaha request response: Jan 15 23:54:45.998287 update_engine[1875]: I20260115 23:54:45.998163 1875 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 15 23:54:45.998287 update_engine[1875]: I20260115 23:54:45.998167 1875 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 15 23:54:45.998287 update_engine[1875]: I20260115 23:54:45.998170 1875 update_attempter.cc:306] Processing Done. Jan 15 23:54:45.998287 update_engine[1875]: I20260115 23:54:45.998175 1875 update_attempter.cc:310] Error event sent. 
Jan 15 23:54:45.998287 update_engine[1875]: I20260115 23:54:45.998193 1875 update_check_scheduler.cc:74] Next update check in 45m53s Jan 15 23:54:45.998750 locksmithd[1971]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 15 23:54:46.339574 systemd-networkd[1493]: lxc_health: Gained IPv6LL Jan 15 23:54:46.582211 kubelet[3455]: I0115 23:54:46.581815 3455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hdsg9" podStartSLOduration=8.581798688 podStartE2EDuration="8.581798688s" podCreationTimestamp="2026-01-15 23:54:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:54:42.852252337 +0000 UTC m=+150.453865270" watchObservedRunningTime="2026-01-15 23:54:46.581798688 +0000 UTC m=+154.183411629" Jan 15 23:54:50.688072 sshd[5306]: Connection closed by 10.200.16.10 port 35010 Jan 15 23:54:50.688731 sshd-session[5302]: pam_unix(sshd:session): session closed for user core Jan 15 23:54:50.692480 systemd-logind[1871]: Session 27 logged out. Waiting for processes to exit. Jan 15 23:54:50.692947 systemd[1]: sshd@24-10.200.20.10:22-10.200.16.10:35010.service: Deactivated successfully. Jan 15 23:54:50.694610 systemd[1]: session-27.scope: Deactivated successfully. Jan 15 23:54:50.696795 systemd-logind[1871]: Removed session 27.