Dec 16 12:28:32.137065 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Dec 16 12:28:32.137084 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025 Dec 16 12:28:32.137091 kernel: KASLR enabled Dec 16 12:28:32.137095 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Dec 16 12:28:32.137099 kernel: printk: legacy bootconsole [pl11] enabled Dec 16 12:28:32.137104 kernel: efi: EFI v2.7 by EDK II Dec 16 12:28:32.137109 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e3eb018 RNG=0x3f979998 MEMRESERVE=0x3db7d598 Dec 16 12:28:32.137113 kernel: random: crng init done Dec 16 12:28:32.137117 kernel: secureboot: Secure boot disabled Dec 16 12:28:32.137121 kernel: ACPI: Early table checksum verification disabled Dec 16 12:28:32.137125 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL) Dec 16 12:28:32.137129 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:28:32.137133 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:28:32.137137 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628) Dec 16 12:28:32.137143 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:28:32.137147 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:28:32.137151 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:28:32.137156 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:28:32.137160 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:28:32.137165 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:28:32.137169 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Dec 16 12:28:32.137173 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:28:32.137177 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Dec 16 12:28:32.137181 kernel: ACPI: Use ACPI SPCR as default console: Yes Dec 16 12:28:32.137186 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Dec 16 12:28:32.137190 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Dec 16 12:28:32.137194 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Dec 16 12:28:32.137198 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Dec 16 12:28:32.137202 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Dec 16 12:28:32.137207 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Dec 16 12:28:32.137211 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Dec 16 12:28:32.137216 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Dec 16 12:28:32.137220 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Dec 16 12:28:32.137224 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Dec 16 12:28:32.137228 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Dec 16 12:28:32.137232 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x800000000000-0xffffffffffff] hotplug Dec 16 12:28:32.137237 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Dec 16 12:28:32.137241 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff] Dec 16 12:28:32.137245 kernel: Zone ranges: Dec 16 12:28:32.137249 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Dec 16 12:28:32.137256 kernel: DMA32 empty Dec 16 12:28:32.137261 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Dec 16 12:28:32.137265 kernel: Device empty Dec 16 12:28:32.137269 kernel: Movable zone start for each node Dec 16 12:28:32.137274 kernel: Early memory node ranges Dec 16 12:28:32.137278 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Dec 16 12:28:32.137283 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff] Dec 16 12:28:32.137287 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff] Dec 16 12:28:32.137292 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff] Dec 16 12:28:32.137296 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff] Dec 16 12:28:32.137300 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff] Dec 16 12:28:32.137305 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Dec 16 12:28:32.137309 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Dec 16 12:28:32.137313 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Dec 16 12:28:32.137318 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1 Dec 16 12:28:32.137322 kernel: psci: probing for conduit method from ACPI. Dec 16 12:28:32.137326 kernel: psci: PSCIv1.3 detected in firmware. Dec 16 12:28:32.137331 kernel: psci: Using standard PSCI v0.2 function IDs Dec 16 12:28:32.137336 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Dec 16 12:28:32.137340 kernel: psci: SMC Calling Convention v1.4 Dec 16 12:28:32.137345 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Dec 16 12:28:32.137349 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Dec 16 12:28:32.137354 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Dec 16 12:28:32.137358 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Dec 16 12:28:32.137363 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 16 12:28:32.137367 kernel: Detected PIPT I-cache on CPU0 Dec 16 12:28:32.137372 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Dec 16 12:28:32.137376 kernel: CPU features: detected: GIC system register CPU interface Dec 16 12:28:32.137381 kernel: CPU features: detected: Spectre-v4 Dec 16 12:28:32.137385 kernel: CPU features: detected: Spectre-BHB Dec 16 12:28:32.137391 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 16 12:28:32.137395 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 16 12:28:32.137400 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Dec 16 12:28:32.137404 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 16 12:28:32.137409 kernel: alternatives: applying boot alternatives Dec 16 12:28:32.137414 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52 Dec 16 12:28:32.137419 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 12:28:32.137423 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 12:28:32.137428 kernel: Fallback order for Node 0: 0 Dec 16 12:28:32.137432 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Dec 16 12:28:32.137437 kernel: Policy zone: Normal Dec 16 12:28:32.137442 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 12:28:32.137447 kernel: software IO TLB: area num 2. Dec 16 12:28:32.137451 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB) Dec 16 12:28:32.137455 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 16 12:28:32.137460 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 12:28:32.137465 kernel: rcu: RCU event tracing is enabled. Dec 16 12:28:32.137470 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 16 12:28:32.137474 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 12:28:32.137478 kernel: Tracing variant of Tasks RCU enabled. Dec 16 12:28:32.137483 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 12:28:32.137487 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 16 12:28:32.137493 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 12:28:32.137498 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Dec 16 12:28:32.137502 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 16 12:28:32.137507 kernel: GICv3: 960 SPIs implemented Dec 16 12:28:32.137511 kernel: GICv3: 0 Extended SPIs implemented Dec 16 12:28:32.137516 kernel: Root IRQ handler: gic_handle_irq Dec 16 12:28:32.137520 kernel: GICv3: GICv3 features: 16 PPIs, RSS Dec 16 12:28:32.137524 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Dec 16 12:28:32.137529 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Dec 16 12:28:32.137533 kernel: ITS: No ITS available, not enabling LPIs Dec 16 12:28:32.137538 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 12:28:32.137544 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Dec 16 12:28:32.137548 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 16 12:28:32.137553 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Dec 16 12:28:32.137557 kernel: Console: colour dummy device 80x25 Dec 16 12:28:32.137562 kernel: printk: legacy console [tty1] enabled Dec 16 12:28:32.137567 kernel: ACPI: Core revision 20240827 Dec 16 12:28:32.137572 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Dec 16 12:28:32.137576 kernel: pid_max: default: 32768 minimum: 301 Dec 16 12:28:32.137581 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 12:28:32.137585 kernel: landlock: Up and running. Dec 16 12:28:32.137591 kernel: SELinux: Initializing. Dec 16 12:28:32.137595 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 12:28:32.137600 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 12:28:32.137605 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1 Dec 16 12:28:32.137609 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0 Dec 16 12:28:32.137617 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 16 12:28:32.137623 kernel: rcu: Hierarchical SRCU implementation. Dec 16 12:28:32.137628 kernel: rcu: Max phase no-delay instances is 400. Dec 16 12:28:32.137633 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 12:28:32.137637 kernel: Remapping and enabling EFI services. Dec 16 12:28:32.137642 kernel: smp: Bringing up secondary CPUs ... Dec 16 12:28:32.137647 kernel: Detected PIPT I-cache on CPU1 Dec 16 12:28:32.137652 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Dec 16 12:28:32.137657 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Dec 16 12:28:32.137662 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 12:28:32.137687 kernel: SMP: Total of 2 processors activated. 
Dec 16 12:28:32.137692 kernel: CPU: All CPU(s) started at EL1 Dec 16 12:28:32.137698 kernel: CPU features: detected: 32-bit EL0 Support Dec 16 12:28:32.137709 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Dec 16 12:28:32.137714 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 16 12:28:32.137718 kernel: CPU features: detected: Common not Private translations Dec 16 12:28:32.137723 kernel: CPU features: detected: CRC32 instructions Dec 16 12:28:32.137728 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Dec 16 12:28:32.137733 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 16 12:28:32.137738 kernel: CPU features: detected: LSE atomic instructions Dec 16 12:28:32.137743 kernel: CPU features: detected: Privileged Access Never Dec 16 12:28:32.137748 kernel: CPU features: detected: Speculation barrier (SB) Dec 16 12:28:32.137753 kernel: CPU features: detected: TLB range maintenance instructions Dec 16 12:28:32.137758 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 16 12:28:32.137763 kernel: CPU features: detected: Scalable Vector Extension Dec 16 12:28:32.137768 kernel: alternatives: applying system-wide alternatives Dec 16 12:28:32.137773 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Dec 16 12:28:32.137778 kernel: SVE: maximum available vector length 16 bytes per vector Dec 16 12:28:32.137783 kernel: SVE: default vector length 16 bytes per vector Dec 16 12:28:32.137788 kernel: Memory: 3952828K/4194160K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved) Dec 16 12:28:32.137794 kernel: devtmpfs: initialized Dec 16 12:28:32.137799 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 12:28:32.137803 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 16 12:28:32.137808 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 16 12:28:32.137813 kernel: 0 pages in range for non-PLT usage Dec 16 12:28:32.137817 kernel: 508400 pages in range for PLT usage Dec 16 12:28:32.137822 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 12:28:32.137827 kernel: SMBIOS 3.1.0 present. Dec 16 12:28:32.137832 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025 Dec 16 12:28:32.137837 kernel: DMI: Memory slots populated: 2/2 Dec 16 12:28:32.137842 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 12:28:32.137846 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 16 12:28:32.137851 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 16 12:28:32.137856 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 16 12:28:32.137861 kernel: audit: initializing netlink subsys (disabled) Dec 16 12:28:32.137865 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Dec 16 12:28:32.137870 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 12:28:32.137876 kernel: cpuidle: using governor menu Dec 16 12:28:32.137880 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 16 12:28:32.137885 kernel: ASID allocator initialised with 32768 entries Dec 16 12:28:32.137890 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 12:28:32.137894 kernel: Serial: AMBA PL011 UART driver Dec 16 12:28:32.137899 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 12:28:32.137904 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 12:28:32.137909 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 16 12:28:32.137914 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 16 12:28:32.137919 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 12:28:32.137924 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 12:28:32.137928 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 16 12:28:32.137933 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 16 12:28:32.137938 kernel: ACPI: Added _OSI(Module Device) Dec 16 12:28:32.137942 kernel: ACPI: Added _OSI(Processor Device) Dec 16 12:28:32.137947 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 12:28:32.137952 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 12:28:32.137957 kernel: ACPI: Interpreter enabled Dec 16 12:28:32.137962 kernel: ACPI: Using GIC for interrupt routing Dec 16 12:28:32.137967 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Dec 16 12:28:32.137972 kernel: printk: legacy console [ttyAMA0] enabled Dec 16 12:28:32.137977 kernel: printk: legacy bootconsole [pl11] disabled Dec 16 12:28:32.137982 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Dec 16 12:28:32.137986 kernel: ACPI: CPU0 has been hot-added Dec 16 12:28:32.137991 kernel: ACPI: CPU1 has been hot-added Dec 16 12:28:32.137996 kernel: iommu: Default domain type: Translated Dec 16 12:28:32.138000 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 16 12:28:32.138006 kernel: efivars: Registered efivars operations Dec 16 12:28:32.138011 kernel: vgaarb: loaded Dec 16 12:28:32.138015 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 16 12:28:32.138020 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 12:28:32.138025 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 12:28:32.138029 kernel: pnp: PnP ACPI init Dec 16 12:28:32.138034 kernel: pnp: PnP ACPI: found 0 devices Dec 16 12:28:32.138039 kernel: NET: Registered PF_INET protocol family Dec 16 12:28:32.138044 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 12:28:32.138048 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 16 12:28:32.138054 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 12:28:32.138059 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 12:28:32.138064 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 16 12:28:32.138068 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 16 12:28:32.138073 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 12:28:32.138078 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 12:28:32.138082 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 12:28:32.138087 kernel: PCI: CLS 0 bytes, default 64 Dec 16 12:28:32.138092 kernel: kvm [1]: HYP mode not available Dec 
16 12:28:32.138097 kernel: Initialise system trusted keyrings Dec 16 12:28:32.138102 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 16 12:28:32.138107 kernel: Key type asymmetric registered Dec 16 12:28:32.138111 kernel: Asymmetric key parser 'x509' registered Dec 16 12:28:32.138116 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 16 12:28:32.138121 kernel: io scheduler mq-deadline registered Dec 16 12:28:32.138126 kernel: io scheduler kyber registered Dec 16 12:28:32.138130 kernel: io scheduler bfq registered Dec 16 12:28:32.138135 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 12:28:32.138141 kernel: thunder_xcv, ver 1.0 Dec 16 12:28:32.138145 kernel: thunder_bgx, ver 1.0 Dec 16 12:28:32.138150 kernel: nicpf, ver 1.0 Dec 16 12:28:32.138154 kernel: nicvf, ver 1.0 Dec 16 12:28:32.138272 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 16 12:28:32.138326 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:28:31 UTC (1765888111) Dec 16 12:28:32.138332 kernel: efifb: probing for efifb Dec 16 12:28:32.138338 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 16 12:28:32.138343 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 16 12:28:32.138348 kernel: efifb: scrolling: redraw Dec 16 12:28:32.138353 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 16 12:28:32.138358 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 12:28:32.138363 kernel: fb0: EFI VGA frame buffer device Dec 16 12:28:32.138368 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Dec 16 12:28:32.138373 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 12:28:32.138378 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Dec 16 12:28:32.138383 kernel: watchdog: NMI not fully supported Dec 16 12:28:32.138388 kernel: watchdog: Hard watchdog permanently disabled Dec 16 12:28:32.138393 kernel: NET: Registered PF_INET6 protocol family Dec 16 12:28:32.138398 kernel: Segment Routing with IPv6 Dec 16 12:28:32.138402 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 12:28:32.138407 kernel: NET: Registered PF_PACKET protocol family Dec 16 12:28:32.138412 kernel: Key type dns_resolver registered Dec 16 12:28:32.138416 kernel: registered taskstats version 1 Dec 16 12:28:32.138421 kernel: Loading compiled-in X.509 certificates Dec 16 12:28:32.138426 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a' Dec 16 12:28:32.138432 kernel: Demotion targets for Node 0: null Dec 16 12:28:32.138436 kernel: Key type .fscrypt registered Dec 16 12:28:32.138441 kernel: Key type fscrypt-provisioning registered Dec 16 12:28:32.138445 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 12:28:32.138450 kernel: ima: Allocated hash algorithm: sha1 Dec 16 12:28:32.138455 kernel: ima: No architecture policies found Dec 16 12:28:32.138460 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 16 12:28:32.138464 kernel: clk: Disabling unused clocks Dec 16 12:28:32.138469 kernel: PM: genpd: Disabling unused power domains Dec 16 12:28:32.138475 kernel: Warning: unable to open an initial console. 
Dec 16 12:28:32.138479 kernel: Freeing unused kernel memory: 39552K Dec 16 12:28:32.138484 kernel: Run /init as init process Dec 16 12:28:32.138489 kernel: with arguments: Dec 16 12:28:32.138493 kernel: /init Dec 16 12:28:32.138498 kernel: with environment: Dec 16 12:28:32.138503 kernel: HOME=/ Dec 16 12:28:32.138507 kernel: TERM=linux Dec 16 12:28:32.138513 systemd[1]: Successfully made /usr/ read-only. Dec 16 12:28:32.138521 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:28:32.138526 systemd[1]: Detected virtualization microsoft. Dec 16 12:28:32.138531 systemd[1]: Detected architecture arm64. Dec 16 12:28:32.138536 systemd[1]: Running in initrd. Dec 16 12:28:32.138541 systemd[1]: No hostname configured, using default hostname. Dec 16 12:28:32.138547 systemd[1]: Hostname set to . Dec 16 12:28:32.138552 systemd[1]: Initializing machine ID from random generator. Dec 16 12:28:32.138557 systemd[1]: Queued start job for default target initrd.target. Dec 16 12:28:32.138563 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:28:32.138568 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:28:32.138573 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 12:28:32.138579 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:28:32.138584 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 12:28:32.138589 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 12:28:32.138596 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 16 12:28:32.138601 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 16 12:28:32.138607 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:28:32.138612 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:28:32.138617 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:28:32.138622 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:28:32.138627 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:28:32.138633 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:28:32.138639 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:28:32.138644 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:28:32.138649 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 12:28:32.138654 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 12:28:32.138659 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:28:32.138734 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:28:32.138740 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 16 12:28:32.138746 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:28:32.138751 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 12:28:32.138758 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:28:32.138763 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 12:28:32.138769 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 12:28:32.138774 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 12:28:32.138779 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:28:32.138784 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:28:32.138802 systemd-journald[226]: Collecting audit messages is disabled. Dec 16 12:28:32.138817 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:28:32.138823 systemd-journald[226]: Journal started Dec 16 12:28:32.138838 systemd-journald[226]: Runtime Journal (/run/log/journal/814111269f9845969a78d0704646e2c8) is 8M, max 78.3M, 70.3M free. Dec 16 12:28:32.139367 systemd-modules-load[228]: Inserted module 'overlay' Dec 16 12:28:32.155150 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:28:32.160754 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 12:28:32.180849 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 12:28:32.180866 kernel: Bridge firewalling registered Dec 16 12:28:32.176274 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:28:32.190226 systemd-modules-load[228]: Inserted module 'br_netfilter' Dec 16 12:28:32.191518 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 12:28:32.201978 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:28:32.212812 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:28:32.223521 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 12:28:32.246160 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:28:32.252016 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:28:32.275486 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:28:32.289327 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:28:32.297230 systemd-tmpfiles[254]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 12:28:32.299127 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:28:32.310697 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:28:32.323848 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:28:32.335559 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 12:28:32.367779 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 16 12:28:32.381769 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52 Dec 16 12:28:32.416225 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:28:32.439201 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:28:32.458832 kernel: SCSI subsystem initialized Dec 16 12:28:32.458858 kernel: Loading iSCSI transport class v2.0-870. Dec 16 12:28:32.465825 systemd-resolved[264]: Positive Trust Anchors: Dec 16 12:28:32.476763 kernel: iscsi: registered transport (tcp) Dec 16 12:28:32.465976 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:28:32.489241 kernel: iscsi: registered transport (qla4xxx) Dec 16 12:28:32.489258 kernel: QLogic iSCSI HBA Driver Dec 16 12:28:32.465998 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:28:32.471034 systemd-resolved[264]: Defaulting to hostname 'linux'. Dec 16 12:28:32.471696 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:28:32.485953 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:28:32.545537 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:28:32.566990 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:28:32.574161 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:28:32.626518 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 12:28:32.633799 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 12:28:32.698685 kernel: raid6: neonx8 gen() 18537 MB/s Dec 16 12:28:32.715674 kernel: raid6: neonx4 gen() 18541 MB/s Dec 16 12:28:32.734675 kernel: raid6: neonx2 gen() 17083 MB/s Dec 16 12:28:32.754763 kernel: raid6: neonx1 gen() 15001 MB/s Dec 16 12:28:32.773676 kernel: raid6: int64x8 gen() 10521 MB/s Dec 16 12:28:32.792675 kernel: raid6: int64x4 gen() 10615 MB/s Dec 16 12:28:32.812691 kernel: raid6: int64x2 gen() 8989 MB/s Dec 16 12:28:32.833860 kernel: raid6: int64x1 gen() 7000 MB/s Dec 16 12:28:32.833870 kernel: raid6: using algorithm neonx4 gen() 18541 MB/s Dec 16 12:28:32.856409 kernel: raid6: .... 
xor() 15147 MB/s, rmw enabled Dec 16 12:28:32.856448 kernel: raid6: using neon recovery algorithm Dec 16 12:28:32.864508 kernel: xor: measuring software checksum speed Dec 16 12:28:32.864525 kernel: 8regs : 28609 MB/sec Dec 16 12:28:32.871211 kernel: 32regs : 27618 MB/sec Dec 16 12:28:32.871219 kernel: arm64_neon : 37642 MB/sec Dec 16 12:28:32.874400 kernel: xor: using function: arm64_neon (37642 MB/sec) Dec 16 12:28:32.911686 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 12:28:32.917258 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:28:32.926266 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:28:32.952878 systemd-udevd[475]: Using default interface naming scheme 'v255'. Dec 16 12:28:32.957155 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:28:32.964020 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 12:28:32.991946 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation Dec 16 12:28:33.012257 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:28:33.018452 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:28:33.065358 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:28:33.079039 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 12:28:33.145348 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:28:33.161769 kernel: hv_vmbus: Vmbus version:5.3 Dec 16 12:28:33.161789 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 16 12:28:33.145444 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:28:33.160762 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:28:33.239283 kernel: hv_vmbus: registering driver hid_hyperv Dec 16 12:28:33.239307 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 16 12:28:33.239314 kernel: hv_vmbus: registering driver hv_storvsc Dec 16 12:28:33.239322 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 16 12:28:33.239348 kernel: hv_vmbus: registering driver hv_netvsc Dec 16 12:28:33.239355 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 16 12:28:33.239361 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 16 12:28:33.239368 kernel: scsi host0: storvsc_host_t Dec 16 12:28:33.239502 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 16 12:28:33.239565 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 16 12:28:33.239637 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Dec 16 12:28:33.239726 kernel: scsi host1: storvsc_host_t Dec 16 12:28:33.170854 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:28:33.207894 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 12:28:33.239129 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 16 12:28:33.286868 kernel: PTP clock support registered Dec 16 12:28:33.286886 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 16 12:28:33.287014 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 16 12:28:33.241356 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:28:33.294483 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 16 12:28:33.294611 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 16 12:28:33.294713 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 16 12:28:33.257482 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 12:28:33.322833 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#125 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Dec 16 12:28:33.322959 kernel: hv_netvsc 002248b5-d028-0022-48b5-d028002248b5 eth0: VF slot 1 added Dec 16 12:28:33.323024 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#68 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Dec 16 12:28:33.261819 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:28:33.346331 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 12:28:33.346368 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 16 12:28:33.346517 kernel: hv_utils: Registering HyperV Utility Driver Dec 16 12:28:33.347895 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:28:33.364370 kernel: hv_vmbus: registering driver hv_pci Dec 16 12:28:33.364398 kernel: hv_vmbus: registering driver hv_utils Dec 16 12:28:33.371120 kernel: hv_utils: Heartbeat IC version 3.0 Dec 16 12:28:33.371157 kernel: hv_utils: Shutdown IC version 3.2 Dec 16 12:28:33.371165 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 16 12:28:33.371310 kernel: hv_utils: TimeSync IC version 4.0 Dec 16 12:28:33.409234 systemd-resolved[264]: Clock change detected. Flushing caches. 
Dec 16 12:28:33.438808 kernel: hv_pci 163c937e-38a1-405a-ab59-357fa44100c5: PCI VMBus probing: Using version 0x10004 Dec 16 12:28:33.438957 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 12:28:33.438964 kernel: hv_pci 163c937e-38a1-405a-ab59-357fa44100c5: PCI host bridge to bus 38a1:00 Dec 16 12:28:33.439043 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 16 12:28:33.439126 kernel: pci_bus 38a1:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Dec 16 12:28:33.444953 kernel: pci_bus 38a1:00: No busn resource found for root bus, will use [bus 00-ff] Dec 16 12:28:33.494479 kernel: pci 38a1:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Dec 16 12:28:33.501093 kernel: pci 38a1:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 16 12:28:33.506083 kernel: pci 38a1:00:02.0: enabling Extended Tags Dec 16 12:28:33.506101 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 12:28:33.531045 kernel: pci 38a1:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 38a1:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Dec 16 12:28:33.543469 kernel: pci_bus 38a1:00: busn_res: [bus 00-ff] end is updated to 00 Dec 16 12:28:33.543609 kernel: pci 38a1:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Dec 16 12:28:33.543688 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#100 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 12:28:33.634923 kernel: mlx5_core 38a1:00:02.0: enabling device (0000 -> 0002) Dec 16 12:28:33.644290 kernel: mlx5_core 38a1:00:02.0: PTM is not supported by PCIe Dec 16 12:28:33.644437 kernel: mlx5_core 38a1:00:02.0: firmware version: 16.30.5006 Dec 16 12:28:33.827945 kernel: hv_netvsc 002248b5-d028-0022-48b5-d028002248b5 eth0: VF registering: eth1 Dec 16 12:28:33.828164 kernel: mlx5_core 38a1:00:02.0 eth1: joined to eth0 Dec 16 12:28:33.835166 kernel: mlx5_core 38a1:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Dec 16 12:28:33.845040 kernel: mlx5_core 38a1:00:02.0 enP14497s1: renamed from eth1 Dec 16 12:28:33.887864 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 16 12:28:33.987321 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Dec 16 12:28:33.999917 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Dec 16 12:28:34.007084 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 16 12:28:34.035774 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 16 12:28:34.040709 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 12:28:34.049967 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:28:34.059109 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:28:34.069609 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:28:34.084161 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 12:28:34.100439 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 12:28:34.124658 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#101 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Dec 16 12:28:34.120178 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Dec 16 12:28:34.135041 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 12:28:35.149116 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#115 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Dec 16 12:28:35.160738 disk-uuid[665]: The operation has completed successfully. Dec 16 12:28:35.164608 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 12:28:35.233499 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 12:28:35.235040 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 12:28:35.262288 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 12:28:35.282343 sh[825]: Success Dec 16 12:28:35.317510 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 12:28:35.317556 kernel: device-mapper: uevent: version 1.0.3 Dec 16 12:28:35.322751 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 12:28:35.334036 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 16 12:28:35.614243 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:28:35.623188 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 12:28:35.638098 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 16 12:28:35.662046 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (843) Dec 16 12:28:35.673029 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248 Dec 16 12:28:35.673094 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:28:35.914137 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 12:28:35.914231 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 12:28:35.941233 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 12:28:35.945242 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:28:35.952580 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 12:28:35.953255 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 12:28:35.980900 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 12:28:36.013039 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (866) Dec 16 12:28:36.025390 kernel: BTRFS info (device sda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:28:36.025426 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:28:36.052570 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:28:36.052618 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:28:36.062045 kernel: BTRFS info (device sda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:28:36.063214 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 12:28:36.074234 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 12:28:36.108892 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:28:36.120749 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Dec 16 12:28:36.150136 systemd-networkd[1012]: lo: Link UP Dec 16 12:28:36.150147 systemd-networkd[1012]: lo: Gained carrier Dec 16 12:28:36.151278 systemd-networkd[1012]: Enumeration completed Dec 16 12:28:36.153108 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:28:36.153424 systemd-networkd[1012]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:28:36.153427 systemd-networkd[1012]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:28:36.161911 systemd[1]: Reached target network.target - Network. Dec 16 12:28:36.225044 kernel: mlx5_core 38a1:00:02.0 enP14497s1: Link up Dec 16 12:28:36.258047 kernel: hv_netvsc 002248b5-d028-0022-48b5-d028002248b5 eth0: Data path switched to VF: enP14497s1 Dec 16 12:28:36.258131 systemd-networkd[1012]: enP14497s1: Link UP Dec 16 12:28:36.258185 systemd-networkd[1012]: eth0: Link UP Dec 16 12:28:36.258316 systemd-networkd[1012]: eth0: Gained carrier Dec 16 12:28:36.258329 systemd-networkd[1012]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:28:36.279123 systemd-networkd[1012]: enP14497s1: Gained carrier Dec 16 12:28:36.289053 systemd-networkd[1012]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:28:37.487067 ignition[969]: Ignition 2.22.0 Dec 16 12:28:37.487080 ignition[969]: Stage: fetch-offline Dec 16 12:28:37.490892 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:28:37.487181 ignition[969]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:28:37.500486 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 16 12:28:37.487188 ignition[969]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:28:37.487253 ignition[969]: parsed url from cmdline: "" Dec 16 12:28:37.487255 ignition[969]: no config URL provided Dec 16 12:28:37.487259 ignition[969]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:28:37.487264 ignition[969]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:28:37.487267 ignition[969]: failed to fetch config: resource requires networking Dec 16 12:28:37.487381 ignition[969]: Ignition finished successfully Dec 16 12:28:37.532408 ignition[1023]: Ignition 2.22.0 Dec 16 12:28:37.532413 ignition[1023]: Stage: fetch Dec 16 12:28:37.532637 ignition[1023]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:28:37.532645 ignition[1023]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:28:37.532713 ignition[1023]: parsed url from cmdline: "" Dec 16 12:28:37.532716 ignition[1023]: no config URL provided Dec 16 12:28:37.532720 ignition[1023]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:28:37.532727 ignition[1023]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:28:37.532745 ignition[1023]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 16 12:28:37.603297 ignition[1023]: GET result: OK Dec 16 12:28:37.605827 ignition[1023]: config has been read from IMDS userdata Dec 16 12:28:37.605848 ignition[1023]: parsing config with SHA512: 214b457cacafbfa88b14805504a5b434d545156102a1edbd4492675f9c891c7ce59c9527afaec78ac846982db997dc65ae90c9258b330a8d87ee8f88c62db944 Dec 16 12:28:37.609068 unknown[1023]: fetched base config from "system" Dec 16 12:28:37.609356 ignition[1023]: fetch: fetch complete Dec 16 12:28:37.609073 unknown[1023]: fetched base config from "system" Dec 16 12:28:37.609359 ignition[1023]: fetch: fetch passed Dec 16 12:28:37.609077 unknown[1023]: fetched user config from "azure" Dec 16 12:28:37.609402 ignition[1023]: Ignition finished successfully Dec 16 12:28:37.611305 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 12:28:37.620014 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 12:28:37.660137 ignition[1029]: Ignition 2.22.0 Dec 16 12:28:37.660150 ignition[1029]: Stage: kargs Dec 16 12:28:37.664260 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 12:28:37.660318 ignition[1029]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:28:37.672376 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 12:28:37.660324 ignition[1029]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:28:37.679164 systemd-networkd[1012]: eth0: Gained IPv6LL Dec 16 12:28:37.660811 ignition[1029]: kargs: kargs passed Dec 16 12:28:37.660852 ignition[1029]: Ignition finished successfully Dec 16 12:28:37.704171 ignition[1035]: Ignition 2.22.0 Dec 16 12:28:37.704188 ignition[1035]: Stage: disks Dec 16 12:28:37.710259 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 12:28:37.704410 ignition[1035]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:28:37.714999 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 12:28:37.704418 ignition[1035]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:28:37.724304 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Dec 16 12:28:37.705068 ignition[1035]: disks: disks passed Dec 16 12:28:37.733478 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:28:37.705110 ignition[1035]: Ignition finished successfully Dec 16 12:28:37.742877 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:28:37.752136 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:28:37.762276 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 12:28:37.844349 systemd-fsck[1044]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Dec 16 12:28:37.853578 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 12:28:37.860258 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 12:28:38.082040 kernel: EXT4-fs (sda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none. Dec 16 12:28:38.082263 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 12:28:38.086051 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 12:28:38.109785 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:28:38.117665 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 12:28:38.129081 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 16 12:28:38.140138 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 12:28:38.140173 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:28:38.156330 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 12:28:38.165570 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 12:28:38.188038 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1058) Dec 16 12:28:38.200013 kernel: BTRFS info (device sda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:28:38.200061 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:28:38.210204 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:28:38.210250 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:28:38.211662 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 12:28:38.617706 coreos-metadata[1060]: Dec 16 12:28:38.617 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 12:28:38.624070 coreos-metadata[1060]: Dec 16 12:28:38.623 INFO Fetch successful Dec 16 12:28:38.624070 coreos-metadata[1060]: Dec 16 12:28:38.623 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 16 12:28:38.636290 coreos-metadata[1060]: Dec 16 12:28:38.636 INFO Fetch successful Dec 16 12:28:38.649311 coreos-metadata[1060]: Dec 16 12:28:38.649 INFO wrote hostname ci-4459.2.2-a-7f44347f41 to /sysroot/etc/hostname Dec 16 12:28:38.655939 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Dec 16 12:28:38.834054 initrd-setup-root[1088]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 12:28:38.869046 initrd-setup-root[1095]: cut: /sysroot/etc/group: No such file or directory Dec 16 12:28:38.886469 initrd-setup-root[1102]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 12:28:38.903831 initrd-setup-root[1109]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 12:28:39.803362 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 12:28:39.809666 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 12:28:39.828599 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 12:28:39.843907 kernel: BTRFS info (device sda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:28:39.841277 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 12:28:39.867388 ignition[1176]: INFO : Ignition 2.22.0 Dec 16 12:28:39.867388 ignition[1176]: INFO : Stage: mount Dec 16 12:28:39.875098 ignition[1176]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:28:39.875098 ignition[1176]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:28:39.875098 ignition[1176]: INFO : mount: mount passed Dec 16 12:28:39.875098 ignition[1176]: INFO : Ignition finished successfully Dec 16 12:28:39.873044 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 12:28:39.879037 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 12:28:39.888919 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 12:28:39.912133 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:28:39.942035 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1189) Dec 16 12:28:39.953034 kernel: BTRFS info (device sda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:28:39.953066 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:28:39.962045 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:28:39.962064 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:28:39.963616 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 12:28:39.989932 ignition[1207]: INFO : Ignition 2.22.0 Dec 16 12:28:39.994449 ignition[1207]: INFO : Stage: files Dec 16 12:28:39.994449 ignition[1207]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:28:39.994449 ignition[1207]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:28:39.994449 ignition[1207]: DEBUG : files: compiled without relabeling support, skipping Dec 16 12:28:40.012452 ignition[1207]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 12:28:40.012452 ignition[1207]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 12:28:40.037270 ignition[1207]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 12:28:40.043704 ignition[1207]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 12:28:40.050092 ignition[1207]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 12:28:40.043791 unknown[1207]: wrote ssh authorized keys file for user: core Dec 16 12:28:40.119970 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 16 12:28:40.128625 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Dec 16 12:28:40.148919 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 12:28:40.221162 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 16 12:28:40.229141 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 16 12:28:40.229141 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Dec 16 12:28:40.269908 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 16 12:28:40.342779 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 16 12:28:40.342779 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 16 12:28:40.358733 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 12:28:40.358733 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:28:40.358733 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:28:40.358733 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:28:40.358733 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:28:40.358733 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:28:40.358733 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 
12:28:40.412854 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:28:40.412854 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:28:40.412854 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:28:40.412854 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:28:40.412854 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:28:40.412854 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Dec 16 12:28:40.765449 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 16 12:28:40.972167 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:28:40.972167 ignition[1207]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 16 12:28:40.998802 ignition[1207]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:28:41.006546 ignition[1207]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:28:41.006546 ignition[1207]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 16 12:28:41.006546 ignition[1207]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Dec 16 12:28:41.006546 ignition[1207]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 12:28:41.006546 ignition[1207]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:28:41.006546 ignition[1207]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:28:41.006546 ignition[1207]: INFO : files: files passed Dec 16 12:28:41.006546 ignition[1207]: INFO : Ignition finished successfully Dec 16 12:28:41.007185 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 12:28:41.019946 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 12:28:41.058738 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 12:28:41.073269 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 12:28:41.073337 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
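The files stage above fetches two archives over HTTPS, writes the user's manifests, creates the kubernetes sysext symlink, installs prepare-helm.service and preset-enables it. A minimal sketch of the kind of Ignition v3 config that would produce these operations; the field names follow the published Ignition spec, the SSH key and unit body are placeholders rather than values recovered from this log, and the URLs and paths are copied from the entries above:

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"] }
        ]
      },
      "storage": {
        "files": [
          { "path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz" } },
          { "path": "/opt/bin/cilium.tar.gz",
            "contents": { "source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz" } }
        ],
        "links": [
          { "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true, "contents": "[Unit]\nDescription=placeholder\n" }
        ]
      }
    }

Each op(N) in the log corresponds to one such entry; op(9), which writes /etc/flatcar/update.conf, would simply be another storage.files item.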
Dec 16 12:28:41.101521 initrd-setup-root-after-ignition[1236]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:28:41.101521 initrd-setup-root-after-ignition[1236]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:28:41.115178 initrd-setup-root-after-ignition[1240]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:28:41.113069 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:28:41.120418 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 12:28:41.131347 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 12:28:41.176421 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 12:28:41.176522 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 12:28:41.185802 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 12:28:41.195124 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 12:28:41.203644 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 12:28:41.204297 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 12:28:41.240342 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:28:41.246828 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 12:28:41.268342 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:28:41.273789 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:28:41.283353 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 12:28:41.291445 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 12:28:41.291550 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:28:41.303726 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 12:28:41.308026 systemd[1]: Stopped target basic.target - Basic System. Dec 16 12:28:41.316876 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 12:28:41.325567 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:28:41.333417 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 12:28:41.342180 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:28:41.350966 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 12:28:41.359827 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:28:41.369166 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 12:28:41.377930 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 12:28:41.387673 systemd[1]: Stopped target swap.target - Swaps. Dec 16 12:28:41.395812 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 12:28:41.395923 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:28:41.407068 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:28:41.411716 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:28:41.421463 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Dec 16 12:28:41.423035 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:28:41.431736 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 12:28:41.431826 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 12:28:41.446605 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 12:28:41.446691 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:28:41.452432 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 12:28:41.452503 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 12:28:41.460709 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 16 12:28:41.460774 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 12:28:41.473210 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 12:28:41.539728 ignition[1260]: INFO : Ignition 2.22.0 Dec 16 12:28:41.539728 ignition[1260]: INFO : Stage: umount Dec 16 12:28:41.539728 ignition[1260]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:28:41.539728 ignition[1260]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:28:41.539728 ignition[1260]: INFO : umount: umount passed Dec 16 12:28:41.539728 ignition[1260]: INFO : Ignition finished successfully Dec 16 12:28:41.508730 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 12:28:41.528166 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 12:28:41.528298 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:28:41.535104 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 12:28:41.535181 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:28:41.549942 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 12:28:41.550046 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 12:28:41.566635 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 12:28:41.567424 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 12:28:41.567497 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 12:28:41.575675 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 12:28:41.575719 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 12:28:41.583787 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 12:28:41.583824 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 12:28:41.593630 systemd[1]: Stopped target network.target - Network. Dec 16 12:28:41.602282 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 12:28:41.602354 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:28:41.608044 systemd[1]: Stopped target paths.target - Path Units. Dec 16 12:28:41.617032 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 12:28:41.621033 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:28:41.627870 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 12:28:41.636905 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 12:28:41.645596 systemd[1]: iscsid.socket: Deactivated successfully. 
Dec 16 12:28:41.645634 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:28:41.650285 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 12:28:41.650309 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:28:41.658214 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 12:28:41.658258 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 12:28:41.666191 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 12:28:41.666227 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 12:28:41.674854 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 12:28:41.683152 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 12:28:41.695206 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 12:28:41.695279 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 12:28:41.707891 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 12:28:41.707989 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 12:28:41.721477 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 16 12:28:41.721673 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 12:28:41.721758 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 12:28:41.735921 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 16 12:28:41.980598 kernel: hv_netvsc 002248b5-d028-0022-48b5-d028002248b5 eth0: Data path switched from VF: enP14497s1 Dec 16 12:28:41.741106 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 12:28:41.750345 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 12:28:41.750378 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:28:41.764595 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 12:28:41.779400 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 12:28:41.779464 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:28:41.789577 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 12:28:41.789617 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:28:41.803264 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 12:28:41.803306 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 12:28:41.808839 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 12:28:41.808872 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:28:41.822703 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:28:41.831852 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 12:28:41.831902 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 16 12:28:41.832190 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 12:28:41.832364 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 12:28:41.842217 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 12:28:41.842304 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Dec 16 12:28:41.864673 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 12:28:41.869382 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:28:41.880495 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 12:28:41.880526 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 12:28:41.891145 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 12:28:41.891189 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:28:41.900433 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 12:28:41.900469 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:28:41.915020 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 12:28:41.915061 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 12:28:41.969491 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:28:41.969535 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:28:41.981097 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 12:28:41.993600 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 12:28:41.993652 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:28:42.008362 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 12:28:42.008403 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:28:42.232658 systemd-journald[226]: Received SIGTERM from PID 1 (systemd). Dec 16 12:28:42.025891 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 12:28:42.025941 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:28:42.037992 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:28:42.038038 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:28:42.043668 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:28:42.043701 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:28:42.058668 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Dec 16 12:28:42.058714 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Dec 16 12:28:42.058736 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 16 12:28:42.058760 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 12:28:42.059051 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:28:42.059144 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:28:42.069792 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:28:42.069873 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:28:42.081456 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 12:28:42.091572 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:28:42.128949 systemd[1]: Switching root. 
Dec 16 12:28:42.320594 systemd-journald[226]: Journal stopped Dec 16 12:28:46.671422 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:28:46.671444 kernel: SELinux: policy capability open_perms=1 Dec 16 12:28:46.671452 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:28:46.671457 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:28:46.671462 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:28:46.671469 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:28:46.671475 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:28:46.671481 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:28:46.671486 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:28:46.671491 kernel: audit: type=1403 audit(1765888123.521:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 16 12:28:46.671498 systemd[1]: Successfully loaded SELinux policy in 156.602ms. Dec 16 12:28:46.671506 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.267ms. Dec 16 12:28:46.671513 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:28:46.671519 systemd[1]: Detected virtualization microsoft. Dec 16 12:28:46.671526 systemd[1]: Detected architecture arm64. Dec 16 12:28:46.671531 systemd[1]: Detected first boot. Dec 16 12:28:46.671538 systemd[1]: Hostname set to . Dec 16 12:28:46.671544 systemd[1]: Initializing machine ID from random generator. Dec 16 12:28:46.671550 zram_generator::config[1305]: No configuration found. Dec 16 12:28:46.671557 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:28:46.671562 systemd[1]: Populated /etc with preset unit settings. Dec 16 12:28:46.671569 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 16 12:28:46.671575 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:28:46.671582 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:28:46.671588 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:28:46.671594 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:28:46.671600 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:28:46.671606 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:28:46.671612 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:28:46.671618 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:28:46.671625 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:28:46.671631 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:28:46.671637 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 12:28:46.671643 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:28:46.671649 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
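The messages above show SELinux support compiled into the kernel, the policy loaded in roughly 157 ms, and /dev, /dev/shm and /run relabelled before systemd 256.8 takes over as PID 1 on first boot. A small Python sketch for inspecting the resulting enforcement state through selinuxfs; the /sys/fs/selinux path is the conventional mount point and is an assumption here, not something reported in this log:

    from pathlib import Path

    def selinux_mode() -> str:
        """Return 'enforcing', 'permissive' or 'disabled' based on selinuxfs."""
        enforce = Path("/sys/fs/selinux/enforce")
        if not enforce.exists():          # selinuxfs absent: SELinux not active
            return "disabled"
        return "enforcing" if enforce.read_text().strip() == "1" else "permissive"

    if __name__ == "__main__":
        print(selinux_mode())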
Dec 16 12:28:46.671655 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 12:28:46.671662 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 12:28:46.671668 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:28:46.671675 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:28:46.671681 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 16 12:28:46.671689 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:28:46.671695 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:28:46.671701 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:28:46.671707 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:28:46.671715 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:28:46.671721 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:28:46.671728 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:28:46.671734 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:28:46.671740 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:28:46.671746 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:28:46.671753 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:28:46.671759 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:28:46.671766 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:28:46.671773 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:28:46.671779 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:28:46.671785 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:28:46.671791 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:28:46.671798 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 12:28:46.671804 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:28:46.671811 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:28:46.671817 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:28:46.671823 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:28:46.671829 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:28:46.671836 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:28:46.671842 systemd[1]: Reached target machines.target - Containers. Dec 16 12:28:46.671849 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 12:28:46.671856 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:28:46.671863 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:28:46.671869 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Dec 16 12:28:46.671875 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:28:46.671881 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:28:46.671888 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:28:46.671894 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:28:46.671900 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:28:46.671907 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:28:46.671913 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:28:46.671920 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:28:46.671927 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:28:46.671933 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:28:46.671939 kernel: fuse: init (API version 7.41) Dec 16 12:28:46.671945 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:28:46.671951 kernel: loop: module loaded Dec 16 12:28:46.671957 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:28:46.671963 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:28:46.671970 kernel: ACPI: bus type drm_connector registered Dec 16 12:28:46.671976 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:28:46.671983 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 12:28:46.672006 systemd-journald[1399]: Collecting audit messages is disabled. Dec 16 12:28:46.672046 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 12:28:46.672055 systemd-journald[1399]: Journal started Dec 16 12:28:46.672069 systemd-journald[1399]: Runtime Journal (/run/log/journal/e1aaec6c5c124e1984ef0753669674e1) is 8M, max 78.3M, 70.3M free. Dec 16 12:28:45.907838 systemd[1]: Queued start job for default target multi-user.target. Dec 16 12:28:45.912503 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 16 12:28:45.912889 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 12:28:45.913190 systemd[1]: systemd-journald.service: Consumed 2.680s CPU time. Dec 16 12:28:46.697491 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:28:46.705562 systemd[1]: verity-setup.service: Deactivated successfully. Dec 16 12:28:46.705598 systemd[1]: Stopped verity-setup.service. Dec 16 12:28:46.719857 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:28:46.720569 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 12:28:46.725254 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:28:46.729911 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 12:28:46.734680 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 12:28:46.739778 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:28:46.745211 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Dec 16 12:28:46.751053 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 12:28:46.756700 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:28:46.762943 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:28:46.763159 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 12:28:46.768754 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:28:46.770067 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:28:46.775357 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:28:46.775573 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:28:46.780196 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:28:46.780397 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:28:46.786239 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 12:28:46.786429 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 12:28:46.792010 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:28:46.792229 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:28:46.797728 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:28:46.803182 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:28:46.809740 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:28:46.815817 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:28:46.821907 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:28:46.836409 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:28:46.842627 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 12:28:46.855070 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 12:28:46.860276 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 12:28:46.860302 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:28:46.865758 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:28:46.872934 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 12:28:46.877660 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:28:46.883637 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 12:28:46.889440 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 12:28:46.895626 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:28:46.896337 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:28:46.902411 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:28:46.904137 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Dec 16 12:28:46.910747 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 12:28:46.918169 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:28:46.924910 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 12:28:46.930495 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 12:28:46.932516 systemd-journald[1399]: Time spent on flushing to /var/log/journal/e1aaec6c5c124e1984ef0753669674e1 is 10.354ms for 942 entries. Dec 16 12:28:46.932516 systemd-journald[1399]: System Journal (/var/log/journal/e1aaec6c5c124e1984ef0753669674e1) is 8M, max 2.6G, 2.6G free. Dec 16 12:28:46.965234 systemd-journald[1399]: Received client request to flush runtime journal. Dec 16 12:28:46.959264 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:28:46.966318 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:28:46.974833 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 12:28:46.984379 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 12:28:47.001721 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:28:47.012127 kernel: loop0: detected capacity change from 0 to 100632 Dec 16 12:28:47.041655 systemd-tmpfiles[1446]: ACLs are not supported, ignoring. Dec 16 12:28:47.041666 systemd-tmpfiles[1446]: ACLs are not supported, ignoring. Dec 16 12:28:47.044382 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:28:47.052181 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 12:28:47.058962 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 12:28:47.059512 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 12:28:47.176376 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 12:28:47.183939 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:28:47.202961 systemd-tmpfiles[1461]: ACLs are not supported, ignoring. Dec 16 12:28:47.203230 systemd-tmpfiles[1461]: ACLs are not supported, ignoring. Dec 16 12:28:47.205748 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:28:47.389045 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 12:28:47.432054 kernel: loop1: detected capacity change from 0 to 119840 Dec 16 12:28:47.520462 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 12:28:47.527272 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:28:47.552486 systemd-udevd[1467]: Using default interface naming scheme 'v255'. Dec 16 12:28:47.748368 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:28:47.763149 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:28:47.823036 kernel: loop2: detected capacity change from 0 to 207008 Dec 16 12:28:47.823159 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
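The journal-flush entries above show systemd-journald handing the runtime journal under /run/log/journal over to persistent storage under /var/log/journal, where up to 2.6G may be used. Which storage is used and how large it may grow is controlled by journald.conf; a sketch with the relevant options from journald.conf(5), values chosen only to mirror the numbers in this log, not read from the system:

    # /etc/systemd/journald.conf (illustrative values)
    [Journal]
    Storage=persistent      # flush /run/log/journal to /var/log/journal once it is writable
    SystemMaxUse=2.6G       # cap for the persistent journal, cf. "max 2.6G" above
    RuntimeMaxUse=78M       # cap for the runtime journal, cf. "max 78.3M" earlier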
Dec 16 12:28:47.890062 kernel: loop3: detected capacity change from 0 to 27936 Dec 16 12:28:47.891170 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 16 12:28:47.901041 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#301 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 12:28:47.903518 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 12:28:47.919059 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 12:28:47.964379 kernel: hv_vmbus: registering driver hv_balloon Dec 16 12:28:47.964458 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 16 12:28:47.972707 kernel: hv_vmbus: registering driver hyperv_fb Dec 16 12:28:47.972776 kernel: hv_balloon: Memory hot add disabled on ARM64 Dec 16 12:28:47.972801 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 16 12:28:47.979039 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 16 12:28:47.987497 kernel: Console: switching to colour dummy device 80x25 Dec 16 12:28:47.995071 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 12:28:48.024114 systemd-networkd[1492]: lo: Link UP Dec 16 12:28:48.024365 systemd-networkd[1492]: lo: Gained carrier Dec 16 12:28:48.026387 systemd-networkd[1492]: Enumeration completed Dec 16 12:28:48.026482 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:28:48.026771 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:28:48.026833 systemd-networkd[1492]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:28:48.032916 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 12:28:48.040082 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 12:28:48.075600 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:28:48.090212 kernel: mlx5_core 38a1:00:02.0 enP14497s1: Link up Dec 16 12:28:48.090391 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:28:48.090544 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:28:48.096440 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 12:28:48.097519 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:28:48.110030 kernel: hv_netvsc 002248b5-d028-0022-48b5-d028002248b5 eth0: Data path switched to VF: enP14497s1 Dec 16 12:28:48.111445 systemd-networkd[1492]: enP14497s1: Link UP Dec 16 12:28:48.111647 systemd-networkd[1492]: eth0: Link UP Dec 16 12:28:48.111699 systemd-networkd[1492]: eth0: Gained carrier Dec 16 12:28:48.111759 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:28:48.114263 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
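eth0 is configured from the stock /usr/lib/systemd/network/zz-default.network named in the entries above, which is also why networkd warns about the "potentially unpredictable interface name". A sketch of what such a catch-all DHCP unit typically looks like, written with standard systemd.network(5) options; this is an assumption about the file's shape, not its actual contents:

    # /usr/lib/systemd/network/zz-default.network (sketch)
    [Match]
    Name=*                  # lowest-priority catch-all, matched by interface name

    [Network]
    DHCP=yes                # consistent with the DHCPv4 lease on eth0 logged just below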
Dec 16 12:28:48.120246 systemd-networkd[1492]: enP14497s1: Gained carrier Dec 16 12:28:48.127110 systemd-networkd[1492]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:28:48.151071 kernel: MACsec IEEE 802.1AE Dec 16 12:28:48.189636 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 16 12:28:48.199635 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 12:28:48.243046 kernel: loop4: detected capacity change from 0 to 100632 Dec 16 12:28:48.245899 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 12:28:48.258091 kernel: loop5: detected capacity change from 0 to 119840 Dec 16 12:28:48.270050 kernel: loop6: detected capacity change from 0 to 207008 Dec 16 12:28:48.287042 kernel: loop7: detected capacity change from 0 to 27936 Dec 16 12:28:48.296070 (sd-merge)[1611]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Dec 16 12:28:48.296447 (sd-merge)[1611]: Merged extensions into '/usr'. Dec 16 12:28:48.299701 systemd[1]: Reload requested from client PID 1444 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 12:28:48.299796 systemd[1]: Reloading... Dec 16 12:28:48.352053 zram_generator::config[1640]: No configuration found. Dec 16 12:28:48.519231 systemd[1]: Reloading finished in 219 ms. Dec 16 12:28:48.544083 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:28:48.549209 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 12:28:48.560969 systemd[1]: Starting ensure-sysext.service... Dec 16 12:28:48.566161 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:28:48.577156 systemd[1]: Reload requested from client PID 1700 ('systemctl') (unit ensure-sysext.service)... Dec 16 12:28:48.577172 systemd[1]: Reloading... Dec 16 12:28:48.606133 systemd-tmpfiles[1701]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 12:28:48.606428 systemd-tmpfiles[1701]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 12:28:48.607304 systemd-tmpfiles[1701]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 12:28:48.607636 systemd-tmpfiles[1701]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 16 12:28:48.608228 systemd-tmpfiles[1701]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 16 12:28:48.608485 systemd-tmpfiles[1701]: ACLs are not supported, ignoring. Dec 16 12:28:48.608593 systemd-tmpfiles[1701]: ACLs are not supported, ignoring. Dec 16 12:28:48.630033 zram_generator::config[1734]: No configuration found. Dec 16 12:28:48.630454 systemd-tmpfiles[1701]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:28:48.630546 systemd-tmpfiles[1701]: Skipping /boot Dec 16 12:28:48.637088 systemd-tmpfiles[1701]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:28:48.637174 systemd-tmpfiles[1701]: Skipping /boot Dec 16 12:28:48.785516 systemd[1]: Reloading finished in 208 ms. Dec 16 12:28:48.801129 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
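The (sd-merge) lines record systemd-sysext overlaying four extension images onto /usr: containerd-flatcar, docker-flatcar, kubernetes and oem-azure. The kubernetes image is the one Ignition downloaded and linked earlier, so the on-disk layout, reconstructed from those Ignition entries rather than from a directory listing, looks roughly like:

    /opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw   # downloaded by Ignition op(b)
    /etc/extensions/kubernetes.raw                            # symlink written by op(a), pointing at the .raw above

systemd-sysext picks up images from /etc/extensions (and the other documented search paths such as /var/lib/extensions and /usr/lib/extensions), merges them into /usr via an overlay, and then requests the systemd reload seen above so the newly exposed units become visible.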
Dec 16 12:28:48.828800 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:28:48.843276 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 12:28:48.849558 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 12:28:48.859221 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:28:48.867580 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 12:28:48.876856 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:28:48.878807 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:28:48.888230 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:28:48.900427 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:28:48.909147 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:28:48.909260 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:28:48.910168 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:28:48.910320 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:28:48.918679 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:28:48.919086 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:28:48.930104 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:28:48.930281 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:28:48.942945 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:28:48.944054 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:28:48.953718 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:28:48.961277 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:28:48.967431 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:28:48.967565 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:28:48.968388 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:28:48.968539 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:28:48.978366 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 12:28:48.984851 systemd-resolved[1793]: Positive Trust Anchors: Dec 16 12:28:48.984877 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 12:28:48.985168 systemd-resolved[1793]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:28:48.985233 systemd-resolved[1793]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:28:48.990677 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:28:48.990799 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:28:48.992438 systemd-resolved[1793]: Using system hostname 'ci-4459.2.2-a-7f44347f41'. Dec 16 12:28:48.996546 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:28:49.002561 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:28:49.002711 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:28:49.010746 systemd[1]: Reached target network.target - Network. Dec 16 12:28:49.014927 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:28:49.020551 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:28:49.021589 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:28:49.031503 augenrules[1829]: No rules Dec 16 12:28:49.034435 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:28:49.040197 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:28:49.047230 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:28:49.051308 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:28:49.051404 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:28:49.051504 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 12:28:49.056511 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:28:49.059054 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:28:49.064630 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:28:49.064764 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:28:49.070074 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:28:49.070197 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:28:49.075249 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:28:49.075377 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:28:49.080886 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:28:49.081003 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:28:49.089162 systemd[1]: Finished ensure-sysext.service. 
Dec 16 12:28:49.094809 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:28:49.094866 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:28:49.563404 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 12:28:49.570946 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 12:28:49.579179 systemd-networkd[1492]: eth0: Gained IPv6LL Dec 16 12:28:49.581308 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:28:49.587948 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:28:51.987074 ldconfig[1439]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 12:28:51.997855 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 12:28:52.005959 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 12:28:52.018188 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 12:28:52.023522 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:28:52.028435 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 12:28:52.033609 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 12:28:52.039308 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 12:28:52.043621 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 12:28:52.048890 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 12:28:52.054540 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 12:28:52.054570 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:28:52.058310 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:28:52.077389 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:28:52.083324 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 12:28:52.088824 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 12:28:52.094938 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 12:28:52.101225 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 12:28:52.107762 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:28:52.112534 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:28:52.118586 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 12:28:52.123327 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:28:52.127558 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:28:52.131836 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Dec 16 12:28:52.131857 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:28:52.133851 systemd[1]: Starting chronyd.service - NTP client/server... Dec 16 12:28:52.147123 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:28:52.152466 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 12:28:52.159134 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:28:52.166122 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 12:28:52.173984 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:28:52.185980 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 12:28:52.190673 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:28:52.191653 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Dec 16 12:28:52.198549 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Dec 16 12:28:52.200104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:28:52.200831 jq[1858]: false Dec 16 12:28:52.206239 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:28:52.213737 KVP[1860]: KVP starting; pid is:1860 Dec 16 12:28:52.213866 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 12:28:52.218857 chronyd[1850]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Dec 16 12:28:52.220539 KVP[1860]: KVP LIC Version: 3.1 Dec 16 12:28:52.221135 kernel: hv_utils: KVP IC version 4.0 Dec 16 12:28:52.221743 extend-filesystems[1859]: Found /dev/sda6 Dec 16 12:28:52.229130 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:28:52.235215 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 12:28:52.241281 chronyd[1850]: Timezone right/UTC failed leap second check, ignoring Dec 16 12:28:52.241613 chronyd[1850]: Loaded seccomp filter (level 2) Dec 16 12:28:52.243161 extend-filesystems[1859]: Found /dev/sda9 Dec 16 12:28:52.246056 extend-filesystems[1859]: Checking size of /dev/sda9 Dec 16 12:28:52.245467 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:28:52.259236 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:28:52.264447 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:28:52.264814 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 12:28:52.266136 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:28:52.275194 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:28:52.282602 systemd[1]: Started chronyd.service - NTP client/server. Dec 16 12:28:52.287585 jq[1887]: true Dec 16 12:28:52.289865 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Dec 16 12:28:52.296082 extend-filesystems[1859]: Old size kept for /dev/sda9 Dec 16 12:28:52.297499 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 12:28:52.297655 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 12:28:52.297873 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 12:28:52.298000 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 12:28:52.308458 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:28:52.308607 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:28:52.317042 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:28:52.325832 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:28:52.325996 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 12:28:52.342619 update_engine[1881]: I20251216 12:28:52.342533 1881 main.cc:92] Flatcar Update Engine starting Dec 16 12:28:52.352119 (ntainerd)[1901]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 12:28:52.359884 jq[1900]: true Dec 16 12:28:52.412273 systemd-logind[1878]: New seat seat0. Dec 16 12:28:52.414729 systemd-logind[1878]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Dec 16 12:28:52.414892 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 12:28:52.424372 tar[1898]: linux-arm64/LICENSE Dec 16 12:28:52.425163 tar[1898]: linux-arm64/helm Dec 16 12:28:52.464206 dbus-daemon[1853]: [system] SELinux support is enabled Dec 16 12:28:52.464357 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 12:28:52.475196 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:28:52.475652 dbus-daemon[1853]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 12:28:52.475225 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 12:28:52.481614 update_engine[1881]: I20251216 12:28:52.480638 1881 update_check_scheduler.cc:74] Next update check in 9m52s Dec 16 12:28:52.483606 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 12:28:52.483627 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 12:28:52.495767 systemd[1]: Started update-engine.service - Update Engine. Dec 16 12:28:52.516903 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 12:28:52.524155 bash[1944]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:28:52.546647 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 12:28:52.554806 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
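update_engine and locksmithd shown above drive Flatcar's automatic updates: update_engine schedules the next check and locksmithd coordinates reboots with strategy="reboot". Both read /etc/flatcar/update.conf, which the Ignition files stage wrote earlier (op(9)). A hedged sketch of what that file commonly contains; the keys are the documented Flatcar update settings, the values here are placeholders rather than the real contents:

    # /etc/flatcar/update.conf (illustrative)
    GROUP=stable               # release group/channel consulted by update_engine
    REBOOT_STRATEGY=reboot     # consumed by locksmithd; matches strategy="reboot" above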
Dec 16 12:28:52.558787 coreos-metadata[1852]: Dec 16 12:28:52.558 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 12:28:52.562102 coreos-metadata[1852]: Dec 16 12:28:52.561 INFO Fetch successful Dec 16 12:28:52.562102 coreos-metadata[1852]: Dec 16 12:28:52.562 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 16 12:28:52.567157 coreos-metadata[1852]: Dec 16 12:28:52.566 INFO Fetch successful Dec 16 12:28:52.567319 coreos-metadata[1852]: Dec 16 12:28:52.567 INFO Fetching http://168.63.129.16/machine/abcbb1c9-ac1b-4039-97a1-b5b32469004d/114b6921%2Db147%2D4df7%2D8cb9%2Dd8aece41472f.%5Fci%2D4459.2.2%2Da%2D7f44347f41?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 16 12:28:52.575268 coreos-metadata[1852]: Dec 16 12:28:52.575 INFO Fetch successful Dec 16 12:28:52.575268 coreos-metadata[1852]: Dec 16 12:28:52.575 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 16 12:28:52.584470 coreos-metadata[1852]: Dec 16 12:28:52.584 INFO Fetch successful Dec 16 12:28:52.629186 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 12:28:52.637360 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 12:28:52.695907 sshd_keygen[1882]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:28:52.717325 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:28:52.724304 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:28:52.742164 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 16 12:28:52.753666 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:28:52.759544 locksmithd[1983]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:28:52.762150 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:28:52.776229 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:28:52.797757 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 16 12:28:52.807068 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:28:52.818509 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:28:52.825912 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 16 12:28:52.833918 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 12:28:52.918964 tar[1898]: linux-arm64/README.md Dec 16 12:28:52.931055 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Dec 16 12:28:53.014758 containerd[1901]: time="2025-12-16T12:28:53Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:28:53.015691 containerd[1901]: time="2025-12-16T12:28:53.015664456Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 12:28:53.021720 containerd[1901]: time="2025-12-16T12:28:53.021686864Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.016µs" Dec 16 12:28:53.022197 containerd[1901]: time="2025-12-16T12:28:53.022171072Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:28:53.022269 containerd[1901]: time="2025-12-16T12:28:53.022258088Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:28:53.022436 containerd[1901]: time="2025-12-16T12:28:53.022420120Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:28:53.022505 containerd[1901]: time="2025-12-16T12:28:53.022492032Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:28:53.022558 containerd[1901]: time="2025-12-16T12:28:53.022547440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:28:53.022655 containerd[1901]: time="2025-12-16T12:28:53.022639664Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:28:53.022710 containerd[1901]: time="2025-12-16T12:28:53.022698712Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:28:53.022944 containerd[1901]: time="2025-12-16T12:28:53.022923144Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:28:53.023008 containerd[1901]: time="2025-12-16T12:28:53.022996664Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:28:53.023071 containerd[1901]: time="2025-12-16T12:28:53.023059008Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:28:53.023123 containerd[1901]: time="2025-12-16T12:28:53.023111288Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:28:53.023239 containerd[1901]: time="2025-12-16T12:28:53.023226096Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 12:28:53.023474 containerd[1901]: time="2025-12-16T12:28:53.023454272Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:28:53.023545 containerd[1901]: time="2025-12-16T12:28:53.023534504Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Dec 16 12:28:53.023581 containerd[1901]: time="2025-12-16T12:28:53.023571080Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:28:53.023694 containerd[1901]: time="2025-12-16T12:28:53.023636736Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:28:53.023889 containerd[1901]: time="2025-12-16T12:28:53.023869544Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:28:53.024032 containerd[1901]: time="2025-12-16T12:28:53.024005312Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:28:53.040010 containerd[1901]: time="2025-12-16T12:28:53.039986328Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:28:53.040168 containerd[1901]: time="2025-12-16T12:28:53.040116576Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:28:53.040168 containerd[1901]: time="2025-12-16T12:28:53.040143424Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:28:53.040168 containerd[1901]: time="2025-12-16T12:28:53.040152544Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:28:53.040285 containerd[1901]: time="2025-12-16T12:28:53.040272240Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:28:53.040381 containerd[1901]: time="2025-12-16T12:28:53.040321864Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:28:53.040381 containerd[1901]: time="2025-12-16T12:28:53.040337656Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:28:53.040381 containerd[1901]: time="2025-12-16T12:28:53.040347144Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:28:53.040381 containerd[1901]: time="2025-12-16T12:28:53.040354168Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:28:53.040381 containerd[1901]: time="2025-12-16T12:28:53.040365048Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:28:53.040381 containerd[1901]: time="2025-12-16T12:28:53.040371256Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:28:53.040763 containerd[1901]: time="2025-12-16T12:28:53.040515048Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:28:53.040763 containerd[1901]: time="2025-12-16T12:28:53.040638480Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:28:53.040763 containerd[1901]: time="2025-12-16T12:28:53.040656016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:28:53.040763 containerd[1901]: time="2025-12-16T12:28:53.040666888Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:28:53.040763 containerd[1901]: time="2025-12-16T12:28:53.040673552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Dec 16 12:28:53.040763 containerd[1901]: time="2025-12-16T12:28:53.040680552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:28:53.040763 containerd[1901]: time="2025-12-16T12:28:53.040687032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:28:53.040763 containerd[1901]: time="2025-12-16T12:28:53.040694280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:28:53.040763 containerd[1901]: time="2025-12-16T12:28:53.040700536Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:28:53.040763 containerd[1901]: time="2025-12-16T12:28:53.040708312Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:28:53.040763 containerd[1901]: time="2025-12-16T12:28:53.040715024Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:28:53.040763 containerd[1901]: time="2025-12-16T12:28:53.040722224Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:28:53.041107 containerd[1901]: time="2025-12-16T12:28:53.041012048Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:28:53.041197 containerd[1901]: time="2025-12-16T12:28:53.041183064Z" level=info msg="Start snapshots syncer" Dec 16 12:28:53.041262 containerd[1901]: time="2025-12-16T12:28:53.041251808Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:28:53.042679 containerd[1901]: time="2025-12-16T12:28:53.041586400Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:28:53.042679 containerd[1901]: time="2025-12-16T12:28:53.041627696Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:28:53.042807 containerd[1901]: time="2025-12-16T12:28:53.041667072Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:28:53.042807 containerd[1901]: time="2025-12-16T12:28:53.041764120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:28:53.042807 containerd[1901]: time="2025-12-16T12:28:53.041785136Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:28:53.042807 containerd[1901]: time="2025-12-16T12:28:53.041793072Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:28:53.042807 containerd[1901]: time="2025-12-16T12:28:53.041801936Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:28:53.042807 containerd[1901]: time="2025-12-16T12:28:53.041809496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:28:53.042807 containerd[1901]: time="2025-12-16T12:28:53.041816112Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:28:53.042807 containerd[1901]: time="2025-12-16T12:28:53.041822544Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:28:53.042807 containerd[1901]: time="2025-12-16T12:28:53.041839984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:28:53.042807 containerd[1901]: 
time="2025-12-16T12:28:53.041847656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 12:28:53.042807 containerd[1901]: time="2025-12-16T12:28:53.041853880Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:28:53.042807 containerd[1901]: time="2025-12-16T12:28:53.041875832Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:28:53.042807 containerd[1901]: time="2025-12-16T12:28:53.041884400Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:28:53.042807 containerd[1901]: time="2025-12-16T12:28:53.041889472Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:28:53.043000 containerd[1901]: time="2025-12-16T12:28:53.041895048Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:28:53.043000 containerd[1901]: time="2025-12-16T12:28:53.041899536Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:28:53.043000 containerd[1901]: time="2025-12-16T12:28:53.041904688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:28:53.043000 containerd[1901]: time="2025-12-16T12:28:53.041912928Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:28:53.043000 containerd[1901]: time="2025-12-16T12:28:53.041924112Z" level=info msg="runtime interface created" Dec 16 12:28:53.043000 containerd[1901]: time="2025-12-16T12:28:53.041927304Z" level=info msg="created NRI interface" Dec 16 12:28:53.043000 containerd[1901]: time="2025-12-16T12:28:53.041933144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:28:53.043000 containerd[1901]: time="2025-12-16T12:28:53.041941664Z" level=info msg="Connect containerd service" Dec 16 12:28:53.043000 containerd[1901]: time="2025-12-16T12:28:53.041954680Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:28:53.043000 containerd[1901]: time="2025-12-16T12:28:53.042506544Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:28:53.153909 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:28:53.159252 (kubelet)[2051]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:28:53.358061 containerd[1901]: time="2025-12-16T12:28:53.357930984Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:28:53.358270 containerd[1901]: time="2025-12-16T12:28:53.358168464Z" level=info msg="Start subscribing containerd event" Dec 16 12:28:53.358312 containerd[1901]: time="2025-12-16T12:28:53.358279920Z" level=info msg="Start recovering state" Dec 16 12:28:53.358381 containerd[1901]: time="2025-12-16T12:28:53.358252256Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 16 12:28:53.358449 containerd[1901]: time="2025-12-16T12:28:53.358381312Z" level=info msg="Start event monitor" Dec 16 12:28:53.358495 containerd[1901]: time="2025-12-16T12:28:53.358450920Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:28:53.358495 containerd[1901]: time="2025-12-16T12:28:53.358461624Z" level=info msg="Start streaming server" Dec 16 12:28:53.358495 containerd[1901]: time="2025-12-16T12:28:53.358467936Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:28:53.358495 containerd[1901]: time="2025-12-16T12:28:53.358473192Z" level=info msg="runtime interface starting up..." Dec 16 12:28:53.358495 containerd[1901]: time="2025-12-16T12:28:53.358476808Z" level=info msg="starting plugins..." Dec 16 12:28:53.358495 containerd[1901]: time="2025-12-16T12:28:53.358488888Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:28:53.358700 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:28:53.364193 containerd[1901]: time="2025-12-16T12:28:53.364011816Z" level=info msg="containerd successfully booted in 0.349603s" Dec 16 12:28:53.364755 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:28:53.371514 systemd[1]: Startup finished in 1.638s (kernel) + 11.699s (initrd) + 10.004s (userspace) = 23.343s. Dec 16 12:28:53.537696 kubelet[2051]: E1216 12:28:53.537635 2051 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:28:53.540001 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:28:53.540251 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:28:53.540812 systemd[1]: kubelet.service: Consumed 544ms CPU time, 254.1M memory peak. Dec 16 12:28:53.777092 login[2031]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:53.778122 login[2032]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:53.783758 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:28:53.784745 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:28:53.790101 systemd-logind[1878]: New session 1 of user core. Dec 16 12:28:53.792510 systemd-logind[1878]: New session 2 of user core. Dec 16 12:28:53.801275 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:28:53.806361 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 12:28:53.813646 (systemd)[2068]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 12:28:53.815715 systemd-logind[1878]: New session c1 of user core. Dec 16 12:28:53.938961 systemd[2068]: Queued start job for default target default.target. Dec 16 12:28:53.942758 systemd[2068]: Created slice app.slice - User Application Slice. Dec 16 12:28:53.942782 systemd[2068]: Reached target paths.target - Paths. Dec 16 12:28:53.942810 systemd[2068]: Reached target timers.target - Timers. Dec 16 12:28:53.943776 systemd[2068]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Dec 16 12:28:53.950970 systemd[2068]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:28:53.951013 systemd[2068]: Reached target sockets.target - Sockets. Dec 16 12:28:53.951065 systemd[2068]: Reached target basic.target - Basic System. Dec 16 12:28:53.951085 systemd[2068]: Reached target default.target - Main User Target. Dec 16 12:28:53.951104 systemd[2068]: Startup finished in 130ms. Dec 16 12:28:53.951285 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:28:53.953376 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:28:53.954544 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 12:28:54.233802 waagent[2029]: 2025-12-16T12:28:54.233655Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Dec 16 12:28:54.238345 waagent[2029]: 2025-12-16T12:28:54.238295Z INFO Daemon Daemon OS: flatcar 4459.2.2 Dec 16 12:28:54.241912 waagent[2029]: 2025-12-16T12:28:54.241880Z INFO Daemon Daemon Python: 3.11.13 Dec 16 12:28:54.245277 waagent[2029]: 2025-12-16T12:28:54.245237Z INFO Daemon Daemon Run daemon Dec 16 12:28:54.248311 waagent[2029]: 2025-12-16T12:28:54.248270Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.2' Dec 16 12:28:54.255400 waagent[2029]: 2025-12-16T12:28:54.255357Z INFO Daemon Daemon Using waagent for provisioning Dec 16 12:28:54.259438 waagent[2029]: 2025-12-16T12:28:54.259399Z INFO Daemon Daemon Activate resource disk Dec 16 12:28:54.262809 waagent[2029]: 2025-12-16T12:28:54.262777Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 16 12:28:54.271385 waagent[2029]: 2025-12-16T12:28:54.271342Z INFO Daemon Daemon Found device: None Dec 16 12:28:54.275383 waagent[2029]: 2025-12-16T12:28:54.275344Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 16 12:28:54.282313 waagent[2029]: 2025-12-16T12:28:54.282278Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 16 12:28:54.291674 waagent[2029]: 2025-12-16T12:28:54.291631Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 12:28:54.296383 waagent[2029]: 2025-12-16T12:28:54.296348Z INFO Daemon Daemon Running default provisioning handler Dec 16 12:28:54.306098 waagent[2029]: 2025-12-16T12:28:54.305652Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Dec 16 12:28:54.316706 waagent[2029]: 2025-12-16T12:28:54.316662Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 16 12:28:54.324332 waagent[2029]: 2025-12-16T12:28:54.324295Z INFO Daemon Daemon cloud-init is enabled: False Dec 16 12:28:54.328310 waagent[2029]: 2025-12-16T12:28:54.328283Z INFO Daemon Daemon Copying ovf-env.xml Dec 16 12:28:54.392922 waagent[2029]: 2025-12-16T12:28:54.392819Z INFO Daemon Daemon Successfully mounted dvd Dec 16 12:28:54.419794 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Dec 16 12:28:54.422115 waagent[2029]: 2025-12-16T12:28:54.422065Z INFO Daemon Daemon Detect protocol endpoint Dec 16 12:28:54.426258 waagent[2029]: 2025-12-16T12:28:54.426221Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 12:28:54.430511 waagent[2029]: 2025-12-16T12:28:54.430482Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Dec 16 12:28:54.435586 waagent[2029]: 2025-12-16T12:28:54.435558Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 16 12:28:54.439709 waagent[2029]: 2025-12-16T12:28:54.439678Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 16 12:28:54.443733 waagent[2029]: 2025-12-16T12:28:54.443704Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 16 12:28:54.488981 waagent[2029]: 2025-12-16T12:28:54.488905Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 16 12:28:54.493796 waagent[2029]: 2025-12-16T12:28:54.493775Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 16 12:28:54.497702 waagent[2029]: 2025-12-16T12:28:54.497677Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 16 12:28:54.640062 waagent[2029]: 2025-12-16T12:28:54.639817Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 16 12:28:54.645753 waagent[2029]: 2025-12-16T12:28:54.645703Z INFO Daemon Daemon Forcing an update of the goal state. Dec 16 12:28:54.653504 waagent[2029]: 2025-12-16T12:28:54.653461Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 12:28:54.669727 waagent[2029]: 2025-12-16T12:28:54.669692Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Dec 16 12:28:54.674134 waagent[2029]: 2025-12-16T12:28:54.674102Z INFO Daemon Dec 16 12:28:54.676494 waagent[2029]: 2025-12-16T12:28:54.676463Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 5e1846a3-010d-4d73-b603-4469905a080b eTag: 14154364898097798110 source: Fabric] Dec 16 12:28:54.685033 waagent[2029]: 2025-12-16T12:28:54.684995Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 16 12:28:54.689841 waagent[2029]: 2025-12-16T12:28:54.689808Z INFO Daemon Dec 16 12:28:54.692354 waagent[2029]: 2025-12-16T12:28:54.692325Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 16 12:28:54.701085 waagent[2029]: 2025-12-16T12:28:54.701055Z INFO Daemon Daemon Downloading artifacts profile blob Dec 16 12:28:54.760716 waagent[2029]: 2025-12-16T12:28:54.760592Z INFO Daemon Downloaded certificate {'thumbprint': 'A21A605E0634B2F513F0C30ADB5CA2673EF17791', 'hasPrivateKey': True} Dec 16 12:28:54.767938 waagent[2029]: 2025-12-16T12:28:54.767898Z INFO Daemon Fetch goal state completed Dec 16 12:28:54.778099 waagent[2029]: 2025-12-16T12:28:54.778065Z INFO Daemon Daemon Starting provisioning Dec 16 12:28:54.781756 waagent[2029]: 2025-12-16T12:28:54.781721Z INFO Daemon Daemon Handle ovf-env.xml. Dec 16 12:28:54.785352 waagent[2029]: 2025-12-16T12:28:54.785326Z INFO Daemon Daemon Set hostname [ci-4459.2.2-a-7f44347f41] Dec 16 12:28:54.804636 waagent[2029]: 2025-12-16T12:28:54.804593Z INFO Daemon Daemon Publish hostname [ci-4459.2.2-a-7f44347f41] Dec 16 12:28:54.809245 waagent[2029]: 2025-12-16T12:28:54.809208Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 16 12:28:54.813839 waagent[2029]: 2025-12-16T12:28:54.813807Z INFO Daemon Daemon Primary interface is [eth0] Dec 16 12:28:54.823785 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 16 12:28:54.823800 systemd-networkd[1492]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:28:54.823832 systemd-networkd[1492]: eth0: DHCP lease lost Dec 16 12:28:54.824766 waagent[2029]: 2025-12-16T12:28:54.824721Z INFO Daemon Daemon Create user account if not exists Dec 16 12:28:54.828950 waagent[2029]: 2025-12-16T12:28:54.828913Z INFO Daemon Daemon User core already exists, skip useradd Dec 16 12:28:54.833611 waagent[2029]: 2025-12-16T12:28:54.833569Z INFO Daemon Daemon Configure sudoer Dec 16 12:28:54.841105 waagent[2029]: 2025-12-16T12:28:54.841061Z INFO Daemon Daemon Configure sshd Dec 16 12:28:54.848074 systemd-networkd[1492]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:28:54.848984 waagent[2029]: 2025-12-16T12:28:54.848833Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 16 12:28:54.858215 waagent[2029]: 2025-12-16T12:28:54.858178Z INFO Daemon Daemon Deploy ssh public key. Dec 16 12:28:55.926455 waagent[2029]: 2025-12-16T12:28:55.926368Z INFO Daemon Daemon Provisioning complete Dec 16 12:28:55.939509 waagent[2029]: 2025-12-16T12:28:55.939453Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 16 12:28:55.944043 waagent[2029]: 2025-12-16T12:28:55.944003Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Dec 16 12:28:55.951365 waagent[2029]: 2025-12-16T12:28:55.951337Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Dec 16 12:28:56.051838 waagent[2118]: 2025-12-16T12:28:56.051299Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Dec 16 12:28:56.051838 waagent[2118]: 2025-12-16T12:28:56.051420Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.2 Dec 16 12:28:56.051838 waagent[2118]: 2025-12-16T12:28:56.051458Z INFO ExtHandler ExtHandler Python: 3.11.13 Dec 16 12:28:56.051838 waagent[2118]: 2025-12-16T12:28:56.051493Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Dec 16 12:28:56.099369 waagent[2118]: 2025-12-16T12:28:56.099306Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Dec 16 12:28:56.100050 waagent[2118]: 2025-12-16T12:28:56.099683Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:28:56.100050 waagent[2118]: 2025-12-16T12:28:56.099745Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:28:56.105562 waagent[2118]: 2025-12-16T12:28:56.105512Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 12:28:56.111058 waagent[2118]: 2025-12-16T12:28:56.110996Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Dec 16 12:28:56.111431 waagent[2118]: 2025-12-16T12:28:56.111395Z INFO ExtHandler Dec 16 12:28:56.111481 waagent[2118]: 2025-12-16T12:28:56.111463Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 4801701d-060c-4ae1-91e9-d4e19ad83898 eTag: 14154364898097798110 source: Fabric] Dec 16 12:28:56.111706 waagent[2118]: 2025-12-16T12:28:56.111678Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 16 12:28:56.112132 waagent[2118]: 2025-12-16T12:28:56.112100Z INFO ExtHandler Dec 16 12:28:56.112173 waagent[2118]: 2025-12-16T12:28:56.112155Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 16 12:28:56.115430 waagent[2118]: 2025-12-16T12:28:56.115402Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 16 12:28:56.167327 waagent[2118]: 2025-12-16T12:28:56.167268Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A21A605E0634B2F513F0C30ADB5CA2673EF17791', 'hasPrivateKey': True} Dec 16 12:28:56.167685 waagent[2118]: 2025-12-16T12:28:56.167652Z INFO ExtHandler Fetch goal state completed Dec 16 12:28:56.179924 waagent[2118]: 2025-12-16T12:28:56.179833Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Dec 16 12:28:56.183158 waagent[2118]: 2025-12-16T12:28:56.183113Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2118 Dec 16 12:28:56.183259 waagent[2118]: 2025-12-16T12:28:56.183232Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 16 12:28:56.183496 waagent[2118]: 2025-12-16T12:28:56.183467Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Dec 16 12:28:56.184608 waagent[2118]: 2025-12-16T12:28:56.184573Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] Dec 16 12:28:56.184932 waagent[2118]: 2025-12-16T12:28:56.184900Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 16 12:28:56.185073 waagent[2118]: 2025-12-16T12:28:56.185046Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 16 12:28:56.185500 waagent[2118]: 2025-12-16T12:28:56.185469Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 16 12:28:56.219059 waagent[2118]: 2025-12-16T12:28:56.219006Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 16 12:28:56.219227 waagent[2118]: 2025-12-16T12:28:56.219196Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 16 12:28:56.223588 waagent[2118]: 2025-12-16T12:28:56.223560Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 16 12:28:56.228210 systemd[1]: Reload requested from client PID 2133 ('systemctl') (unit waagent.service)... Dec 16 12:28:56.228226 systemd[1]: Reloading... Dec 16 12:28:56.300086 zram_generator::config[2181]: No configuration found. Dec 16 12:28:56.438987 systemd[1]: Reloading finished in 210 ms. Dec 16 12:28:56.464043 waagent[2118]: 2025-12-16T12:28:56.463788Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 16 12:28:56.464043 waagent[2118]: 2025-12-16T12:28:56.463932Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 16 12:28:56.628910 waagent[2118]: 2025-12-16T12:28:56.628150Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 16 12:28:56.628910 waagent[2118]: 2025-12-16T12:28:56.628455Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 16 12:28:56.629130 waagent[2118]: 2025-12-16T12:28:56.629087Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 16 12:28:56.629222 waagent[2118]: 2025-12-16T12:28:56.629197Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:28:56.629301 waagent[2118]: 2025-12-16T12:28:56.629271Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:28:56.629470 waagent[2118]: 2025-12-16T12:28:56.629441Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 16 12:28:56.629827 waagent[2118]: 2025-12-16T12:28:56.629790Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 16 12:28:56.629943 waagent[2118]: 2025-12-16T12:28:56.629910Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 16 12:28:56.629943 waagent[2118]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 16 12:28:56.629943 waagent[2118]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 16 12:28:56.629943 waagent[2118]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 16 12:28:56.629943 waagent[2118]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:28:56.629943 waagent[2118]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:28:56.629943 waagent[2118]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:28:56.630380 waagent[2118]: 2025-12-16T12:28:56.630346Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 16 12:28:56.630475 waagent[2118]: 2025-12-16T12:28:56.630436Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:28:56.630526 waagent[2118]: 2025-12-16T12:28:56.630502Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:28:56.630613 waagent[2118]: 2025-12-16T12:28:56.630578Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 16 12:28:56.630985 waagent[2118]: 2025-12-16T12:28:56.630948Z INFO EnvHandler ExtHandler Configure routes Dec 16 12:28:56.631042 waagent[2118]: 2025-12-16T12:28:56.631013Z INFO EnvHandler ExtHandler Gateway:None Dec 16 12:28:56.631177 waagent[2118]: 2025-12-16T12:28:56.631150Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 16 12:28:56.631290 waagent[2118]: 2025-12-16T12:28:56.631269Z INFO EnvHandler ExtHandler Routes:None Dec 16 12:28:56.631441 waagent[2118]: 2025-12-16T12:28:56.631402Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 16 12:28:56.631499 waagent[2118]: 2025-12-16T12:28:56.631475Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 16 12:28:56.637190 waagent[2118]: 2025-12-16T12:28:56.637157Z INFO ExtHandler ExtHandler Dec 16 12:28:56.637319 waagent[2118]: 2025-12-16T12:28:56.637294Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 6a5dabd9-3b9d-4d86-8ea0-97800d81b406 correlation a67a69e4-3ca1-48ff-8a23-f73fab9578ef created: 2025-12-16T12:28:02.781097Z] Dec 16 12:28:56.637660 waagent[2118]: 2025-12-16T12:28:56.637630Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Dec 16 12:28:56.638163 waagent[2118]: 2025-12-16T12:28:56.638134Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Dec 16 12:28:56.706267 waagent[2118]: 2025-12-16T12:28:56.706151Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Dec 16 12:28:56.706267 waagent[2118]: Try `iptables -h' or 'iptables --help' for more information.) Dec 16 12:28:56.706574 waagent[2118]: 2025-12-16T12:28:56.706538Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 1C5DCEE9-5683-4B6B-B0B1-AE75C5851174;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 16 12:28:56.723149 waagent[2118]: 2025-12-16T12:28:56.723109Z INFO MonitorHandler ExtHandler Network interfaces: Dec 16 12:28:56.723149 waagent[2118]: Executing ['ip', '-a', '-o', 'link']: Dec 16 12:28:56.723149 waagent[2118]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 16 12:28:56.723149 waagent[2118]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b5:d0:28 brd ff:ff:ff:ff:ff:ff Dec 16 12:28:56.723149 waagent[2118]: 3: enP14497s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b5:d0:28 brd ff:ff:ff:ff:ff:ff\ altname enP14497p0s2 Dec 16 12:28:56.723149 waagent[2118]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 16 12:28:56.723149 waagent[2118]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 16 12:28:56.723149 waagent[2118]: 2: eth0 inet 10.200.20.37/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 16 12:28:56.723149 waagent[2118]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 16 12:28:56.723149 waagent[2118]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 16 12:28:56.723149 waagent[2118]: 2: eth0 inet6 fe80::222:48ff:feb5:d028/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 16 12:28:56.768037 waagent[2118]: 2025-12-16T12:28:56.767948Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 16 12:28:56.768037 waagent[2118]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:28:56.768037 waagent[2118]: pkts bytes target prot opt in out source destination Dec 16 12:28:56.768037 waagent[2118]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:28:56.768037 waagent[2118]: pkts bytes target prot opt in out source destination Dec 16 12:28:56.768037 waagent[2118]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:28:56.768037 waagent[2118]: pkts bytes target prot opt in out source destination Dec 16 12:28:56.768037 waagent[2118]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 12:28:56.768037 waagent[2118]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 12:28:56.768037 waagent[2118]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 12:28:56.770844 waagent[2118]: 2025-12-16T12:28:56.770796Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 16 12:28:56.770844 waagent[2118]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:28:56.770844 waagent[2118]: pkts bytes target prot opt in out source destination 
Dec 16 12:28:56.770844 waagent[2118]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:28:56.770844 waagent[2118]: pkts bytes target prot opt in out source destination Dec 16 12:28:56.770844 waagent[2118]: Chain OUTPUT (policy ACCEPT 2 packets, 104 bytes) Dec 16 12:28:56.770844 waagent[2118]: pkts bytes target prot opt in out source destination Dec 16 12:28:56.770844 waagent[2118]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 12:28:56.770844 waagent[2118]: 4 595 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 12:28:56.770844 waagent[2118]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 12:28:56.771088 waagent[2118]: 2025-12-16T12:28:56.771005Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 16 12:29:03.790941 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 12:29:03.792675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:29:03.914840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:29:03.920227 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:29:04.020958 kubelet[2267]: E1216 12:29:04.020904 2267 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:29:04.023654 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:29:04.023769 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:29:04.024219 systemd[1]: kubelet.service: Consumed 111ms CPU time, 105.9M memory peak. Dec 16 12:29:14.221562 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 12:29:14.222790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:29:14.548121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:29:14.562230 (kubelet)[2282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:29:14.588521 kubelet[2282]: E1216 12:29:14.588475 2282 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:29:14.590541 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:29:14.590736 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:29:14.591226 systemd[1]: kubelet.service: Consumed 104ms CPU time, 104.4M memory peak. Dec 16 12:29:16.051991 chronyd[1850]: Selected source PHC0 Dec 16 12:29:19.226236 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:29:19.227481 systemd[1]: Started sshd@0-10.200.20.37:22-10.200.16.10:44982.service - OpenSSH per-connection server daemon (10.200.16.10:44982). 
Dec 16 12:29:19.813483 sshd[2290]: Accepted publickey for core from 10.200.16.10 port 44982 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:29:19.814556 sshd-session[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:19.818574 systemd-logind[1878]: New session 3 of user core. Dec 16 12:29:19.827149 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 12:29:20.249499 systemd[1]: Started sshd@1-10.200.20.37:22-10.200.16.10:37526.service - OpenSSH per-connection server daemon (10.200.16.10:37526). Dec 16 12:29:20.746872 sshd[2296]: Accepted publickey for core from 10.200.16.10 port 37526 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:29:20.747940 sshd-session[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:20.752478 systemd-logind[1878]: New session 4 of user core. Dec 16 12:29:20.759168 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 12:29:21.098835 sshd[2299]: Connection closed by 10.200.16.10 port 37526 Dec 16 12:29:21.097842 sshd-session[2296]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:21.100865 systemd-logind[1878]: Session 4 logged out. Waiting for processes to exit. Dec 16 12:29:21.101012 systemd[1]: sshd@1-10.200.20.37:22-10.200.16.10:37526.service: Deactivated successfully. Dec 16 12:29:21.102574 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 12:29:21.104570 systemd-logind[1878]: Removed session 4. Dec 16 12:29:21.187356 systemd[1]: Started sshd@2-10.200.20.37:22-10.200.16.10:37528.service - OpenSSH per-connection server daemon (10.200.16.10:37528). Dec 16 12:29:21.676458 sshd[2305]: Accepted publickey for core from 10.200.16.10 port 37528 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:29:21.677626 sshd-session[2305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:21.681176 systemd-logind[1878]: New session 5 of user core. Dec 16 12:29:21.691322 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 12:29:22.022386 sshd[2308]: Connection closed by 10.200.16.10 port 37528 Dec 16 12:29:22.022940 sshd-session[2305]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:22.025875 systemd-logind[1878]: Session 5 logged out. Waiting for processes to exit. Dec 16 12:29:22.027288 systemd[1]: sshd@2-10.200.20.37:22-10.200.16.10:37528.service: Deactivated successfully. Dec 16 12:29:22.029226 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 12:29:22.030619 systemd-logind[1878]: Removed session 5. Dec 16 12:29:22.110519 systemd[1]: Started sshd@3-10.200.20.37:22-10.200.16.10:37530.service - OpenSSH per-connection server daemon (10.200.16.10:37530). Dec 16 12:29:22.603250 sshd[2314]: Accepted publickey for core from 10.200.16.10 port 37530 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:29:22.604357 sshd-session[2314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:22.607989 systemd-logind[1878]: New session 6 of user core. Dec 16 12:29:22.614160 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 12:29:22.954522 sshd[2317]: Connection closed by 10.200.16.10 port 37530 Dec 16 12:29:22.955065 sshd-session[2314]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:22.958319 systemd[1]: sshd@3-10.200.20.37:22-10.200.16.10:37530.service: Deactivated successfully. 
Dec 16 12:29:22.959709 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 12:29:22.960356 systemd-logind[1878]: Session 6 logged out. Waiting for processes to exit. Dec 16 12:29:22.961595 systemd-logind[1878]: Removed session 6. Dec 16 12:29:23.046407 systemd[1]: Started sshd@4-10.200.20.37:22-10.200.16.10:37534.service - OpenSSH per-connection server daemon (10.200.16.10:37534). Dec 16 12:29:23.535065 sshd[2323]: Accepted publickey for core from 10.200.16.10 port 37534 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:29:23.536138 sshd-session[2323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:23.539706 systemd-logind[1878]: New session 7 of user core. Dec 16 12:29:23.551229 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 12:29:23.928263 sudo[2327]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 12:29:23.928491 sudo[2327]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:29:23.955296 sudo[2327]: pam_unix(sudo:session): session closed for user root Dec 16 12:29:24.032316 sshd[2326]: Connection closed by 10.200.16.10 port 37534 Dec 16 12:29:24.032940 sshd-session[2323]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:24.036073 systemd[1]: sshd@4-10.200.20.37:22-10.200.16.10:37534.service: Deactivated successfully. Dec 16 12:29:24.037699 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:29:24.038310 systemd-logind[1878]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:29:24.039441 systemd-logind[1878]: Removed session 7. Dec 16 12:29:24.152604 systemd[1]: Started sshd@5-10.200.20.37:22-10.200.16.10:37546.service - OpenSSH per-connection server daemon (10.200.16.10:37546). Dec 16 12:29:24.645525 sshd[2333]: Accepted publickey for core from 10.200.16.10 port 37546 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:29:24.646604 sshd-session[2333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:24.647575 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 12:29:24.649363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:29:24.652661 systemd-logind[1878]: New session 8 of user core. Dec 16 12:29:24.659134 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 12:29:24.862388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:29:24.865241 (kubelet)[2345]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:29:24.892467 kubelet[2345]: E1216 12:29:24.892408 2345 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:29:24.894485 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:29:24.894702 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:29:24.895227 systemd[1]: kubelet.service: Consumed 105ms CPU time, 105.4M memory peak. 
Dec 16 12:29:24.920471 sudo[2353]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 12:29:24.920967 sudo[2353]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:29:25.156271 sudo[2353]: pam_unix(sudo:session): session closed for user root Dec 16 12:29:25.160568 sudo[2352]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 12:29:25.161122 sudo[2352]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:29:25.168562 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:29:25.197443 augenrules[2375]: No rules Dec 16 12:29:25.198547 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:29:25.198836 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:29:25.200294 sudo[2352]: pam_unix(sudo:session): session closed for user root Dec 16 12:29:25.277714 sshd[2339]: Connection closed by 10.200.16.10 port 37546 Dec 16 12:29:25.279611 sshd-session[2333]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:25.282805 systemd-logind[1878]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:29:25.283432 systemd[1]: sshd@5-10.200.20.37:22-10.200.16.10:37546.service: Deactivated successfully. Dec 16 12:29:25.284812 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:29:25.286080 systemd-logind[1878]: Removed session 8. Dec 16 12:29:25.363580 systemd[1]: Started sshd@6-10.200.20.37:22-10.200.16.10:37562.service - OpenSSH per-connection server daemon (10.200.16.10:37562). Dec 16 12:29:25.826126 sshd[2384]: Accepted publickey for core from 10.200.16.10 port 37562 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:29:25.827183 sshd-session[2384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:25.830793 systemd-logind[1878]: New session 9 of user core. Dec 16 12:29:25.842144 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 12:29:26.082522 sudo[2388]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 12:29:26.082726 sudo[2388]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:29:27.238219 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 12:29:27.250266 (dockerd)[2405]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 12:29:27.895048 dockerd[2405]: time="2025-12-16T12:29:27.893423982Z" level=info msg="Starting up" Dec 16 12:29:27.895902 dockerd[2405]: time="2025-12-16T12:29:27.895871934Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 12:29:27.903892 dockerd[2405]: time="2025-12-16T12:29:27.903851471Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 12:29:27.938737 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3713597201-merged.mount: Deactivated successfully. Dec 16 12:29:27.984120 dockerd[2405]: time="2025-12-16T12:29:27.984066397Z" level=info msg="Loading containers: start." 
Dec 16 12:29:28.010043 kernel: Initializing XFRM netlink socket Dec 16 12:29:28.280996 systemd-networkd[1492]: docker0: Link UP Dec 16 12:29:28.297551 dockerd[2405]: time="2025-12-16T12:29:28.297511602Z" level=info msg="Loading containers: done." Dec 16 12:29:28.308153 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2701674382-merged.mount: Deactivated successfully. Dec 16 12:29:28.318373 dockerd[2405]: time="2025-12-16T12:29:28.318334386Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:29:28.318454 dockerd[2405]: time="2025-12-16T12:29:28.318405868Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:29:28.318535 dockerd[2405]: time="2025-12-16T12:29:28.318518464Z" level=info msg="Initializing buildkit" Dec 16 12:29:28.367336 dockerd[2405]: time="2025-12-16T12:29:28.367296887Z" level=info msg="Completed buildkit initialization" Dec 16 12:29:28.371638 dockerd[2405]: time="2025-12-16T12:29:28.371505514Z" level=info msg="Daemon has completed initialization" Dec 16 12:29:28.372091 dockerd[2405]: time="2025-12-16T12:29:28.371831580Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:29:28.372190 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 12:29:29.132980 containerd[1901]: time="2025-12-16T12:29:29.132907290Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 16 12:29:29.965011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3930245024.mount: Deactivated successfully. Dec 16 12:29:31.049054 containerd[1901]: time="2025-12-16T12:29:31.048528383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:31.052771 containerd[1901]: time="2025-12-16T12:29:31.052747466Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=26431959" Dec 16 12:29:31.057370 containerd[1901]: time="2025-12-16T12:29:31.057348000Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:31.062911 containerd[1901]: time="2025-12-16T12:29:31.062887354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:31.063360 containerd[1901]: time="2025-12-16T12:29:31.063332591Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 1.930390307s" Dec 16 12:29:31.063411 containerd[1901]: time="2025-12-16T12:29:31.063364936Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\"" Dec 16 12:29:31.064360 containerd[1901]: time="2025-12-16T12:29:31.064336772Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 
16 12:29:32.416061 containerd[1901]: time="2025-12-16T12:29:32.415427529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:32.421080 containerd[1901]: time="2025-12-16T12:29:32.421047418Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22618955" Dec 16 12:29:32.425176 containerd[1901]: time="2025-12-16T12:29:32.425149543Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:32.430920 containerd[1901]: time="2025-12-16T12:29:32.430890939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:32.431926 containerd[1901]: time="2025-12-16T12:29:32.431899264Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.367536932s" Dec 16 12:29:32.431963 containerd[1901]: time="2025-12-16T12:29:32.431930913Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\"" Dec 16 12:29:32.432505 containerd[1901]: time="2025-12-16T12:29:32.432477361Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 16 12:29:33.565403 containerd[1901]: time="2025-12-16T12:29:33.565337268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:33.568544 containerd[1901]: time="2025-12-16T12:29:33.568352211Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17618436" Dec 16 12:29:33.572202 containerd[1901]: time="2025-12-16T12:29:33.572176104Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:33.576607 containerd[1901]: time="2025-12-16T12:29:33.576573222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:33.577248 containerd[1901]: time="2025-12-16T12:29:33.577220473Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.144711536s" Dec 16 12:29:33.577321 containerd[1901]: time="2025-12-16T12:29:33.577309171Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\"" Dec 16 12:29:33.577906 
containerd[1901]: time="2025-12-16T12:29:33.577882644Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 16 12:29:34.683720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1169888201.mount: Deactivated successfully. Dec 16 12:29:34.953148 containerd[1901]: time="2025-12-16T12:29:34.952445481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:34.955716 containerd[1901]: time="2025-12-16T12:29:34.955690766Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=27561799" Dec 16 12:29:34.959153 containerd[1901]: time="2025-12-16T12:29:34.959129816Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:34.963910 containerd[1901]: time="2025-12-16T12:29:34.963863536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:34.965073 containerd[1901]: time="2025-12-16T12:29:34.964967960Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 1.387060539s" Dec 16 12:29:34.965073 containerd[1901]: time="2025-12-16T12:29:34.964994144Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\"" Dec 16 12:29:34.967503 containerd[1901]: time="2025-12-16T12:29:34.967472359Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 16 12:29:34.970971 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 12:29:34.972765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:29:35.082957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:29:35.085681 (kubelet)[2695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:29:35.222411 kubelet[2695]: E1216 12:29:35.222351 2695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:29:35.224471 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:29:35.224695 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:29:35.225211 systemd[1]: kubelet.service: Consumed 106ms CPU time, 104.1M memory peak. Dec 16 12:29:36.093044 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Dec 16 12:29:36.106697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3895150339.mount: Deactivated successfully. 
Dec 16 12:29:37.736056 containerd[1901]: time="2025-12-16T12:29:37.735922562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:37.739775 containerd[1901]: time="2025-12-16T12:29:37.739582914Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Dec 16 12:29:37.743093 containerd[1901]: time="2025-12-16T12:29:37.743070014Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:37.749079 containerd[1901]: time="2025-12-16T12:29:37.749044450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:37.749732 containerd[1901]: time="2025-12-16T12:29:37.749704245Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.782203772s" Dec 16 12:29:37.749820 containerd[1901]: time="2025-12-16T12:29:37.749805367Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Dec 16 12:29:37.750440 containerd[1901]: time="2025-12-16T12:29:37.750414937Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 12:29:38.038209 update_engine[1881]: I20251216 12:29:38.038069 1881 update_attempter.cc:509] Updating boot flags... Dec 16 12:29:38.294971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount204543768.mount: Deactivated successfully. 
Dec 16 12:29:38.319127 containerd[1901]: time="2025-12-16T12:29:38.319079031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:29:38.324759 containerd[1901]: time="2025-12-16T12:29:38.324719736Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Dec 16 12:29:38.327983 containerd[1901]: time="2025-12-16T12:29:38.327954365Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:29:38.332941 containerd[1901]: time="2025-12-16T12:29:38.332907243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:29:38.333523 containerd[1901]: time="2025-12-16T12:29:38.333222052Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 582.77849ms" Dec 16 12:29:38.333523 containerd[1901]: time="2025-12-16T12:29:38.333245661Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 16 12:29:38.333747 containerd[1901]: time="2025-12-16T12:29:38.333715762Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 16 12:29:38.980830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2418407890.mount: Deactivated successfully. 
Dec 16 12:29:41.034081 containerd[1901]: time="2025-12-16T12:29:41.034010431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:41.037711 containerd[1901]: time="2025-12-16T12:29:41.037471116Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Dec 16 12:29:41.041856 containerd[1901]: time="2025-12-16T12:29:41.041680297Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:41.047127 containerd[1901]: time="2025-12-16T12:29:41.047097921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:41.047526 containerd[1901]: time="2025-12-16T12:29:41.047495330Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.713752615s" Dec 16 12:29:41.047526 containerd[1901]: time="2025-12-16T12:29:41.047525850Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Dec 16 12:29:43.797224 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:29:43.797333 systemd[1]: kubelet.service: Consumed 106ms CPU time, 104.1M memory peak. Dec 16 12:29:43.798985 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:29:43.822448 systemd[1]: Reload requested from client PID 2955 ('systemctl') (unit session-9.scope)... Dec 16 12:29:43.822462 systemd[1]: Reloading... Dec 16 12:29:43.924044 zram_generator::config[3004]: No configuration found. Dec 16 12:29:44.067128 systemd[1]: Reloading finished in 244 ms. Dec 16 12:29:44.103392 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 12:29:44.103450 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 12:29:44.103676 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:29:44.103716 systemd[1]: kubelet.service: Consumed 75ms CPU time, 95M memory peak. Dec 16 12:29:44.105278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:29:44.333273 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:29:44.339241 (kubelet)[3069]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:29:44.366623 kubelet[3069]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:29:44.366623 kubelet[3069]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:29:44.366623 kubelet[3069]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:29:44.366623 kubelet[3069]: I1216 12:29:44.366594 3069 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:29:44.837171 kubelet[3069]: I1216 12:29:44.837131 3069 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 12:29:44.837171 kubelet[3069]: I1216 12:29:44.837162 3069 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:29:44.837392 kubelet[3069]: I1216 12:29:44.837372 3069 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 12:29:44.859291 kubelet[3069]: E1216 12:29:44.859136 3069 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:29:44.861456 kubelet[3069]: I1216 12:29:44.861394 3069 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:29:44.868706 kubelet[3069]: I1216 12:29:44.868599 3069 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:29:44.871646 kubelet[3069]: I1216 12:29:44.871629 3069 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 12:29:44.872841 kubelet[3069]: I1216 12:29:44.872405 3069 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:29:44.872841 kubelet[3069]: I1216 12:29:44.872441 3069 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-a-7f44347f41","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:29:44.872841 kubelet[3069]: I1216 
12:29:44.872578 3069 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:29:44.872841 kubelet[3069]: I1216 12:29:44.872585 3069 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 12:29:44.872999 kubelet[3069]: I1216 12:29:44.872693 3069 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:29:44.875310 kubelet[3069]: I1216 12:29:44.875294 3069 kubelet.go:446] "Attempting to sync node with API server" Dec 16 12:29:44.875490 kubelet[3069]: I1216 12:29:44.875477 3069 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:29:44.875574 kubelet[3069]: I1216 12:29:44.875565 3069 kubelet.go:352] "Adding apiserver pod source" Dec 16 12:29:44.875625 kubelet[3069]: I1216 12:29:44.875618 3069 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:29:44.881127 kubelet[3069]: W1216 12:29:44.881085 3069 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-7f44347f41&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Dec 16 12:29:44.881240 kubelet[3069]: E1216 12:29:44.881224 3069 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-7f44347f41&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:29:44.881398 kubelet[3069]: I1216 12:29:44.881385 3069 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 12:29:44.881741 kubelet[3069]: I1216 12:29:44.881723 3069 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 12:29:44.881851 kubelet[3069]: W1216 12:29:44.881840 3069 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 16 12:29:44.882358 kubelet[3069]: I1216 12:29:44.882336 3069 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:29:44.882931 kubelet[3069]: I1216 12:29:44.882917 3069 server.go:1287] "Started kubelet" Dec 16 12:29:44.888006 kubelet[3069]: E1216 12:29:44.887915 3069 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.37:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-a-7f44347f41.1881b1f2b5010f11 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-a-7f44347f41,UID:ci-4459.2.2-a-7f44347f41,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-a-7f44347f41,},FirstTimestamp:2025-12-16 12:29:44.882892561 +0000 UTC m=+0.541381843,LastTimestamp:2025-12-16 12:29:44.882892561 +0000 UTC m=+0.541381843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-a-7f44347f41,}" Dec 16 12:29:44.888330 kubelet[3069]: W1216 12:29:44.888294 3069 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Dec 16 12:29:44.888420 kubelet[3069]: E1216 12:29:44.888406 3069 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:29:44.890230 kubelet[3069]: I1216 12:29:44.890202 3069 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:29:44.890485 kubelet[3069]: I1216 12:29:44.890455 3069 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:29:44.890968 kubelet[3069]: I1216 12:29:44.890951 3069 server.go:479] "Adding debug handlers to kubelet server" Dec 16 12:29:44.892235 kubelet[3069]: I1216 12:29:44.892192 3069 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:29:44.892461 kubelet[3069]: I1216 12:29:44.892448 3069 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:29:44.893174 kubelet[3069]: E1216 12:29:44.893159 3069 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:29:44.893377 kubelet[3069]: I1216 12:29:44.893363 3069 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:29:44.894343 kubelet[3069]: E1216 12:29:44.894325 3069 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-7f44347f41\" not found" Dec 16 12:29:44.894439 kubelet[3069]: I1216 12:29:44.894432 3069 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:29:44.894615 kubelet[3069]: I1216 12:29:44.894600 3069 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:29:44.894705 kubelet[3069]: I1216 12:29:44.894696 3069 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:29:44.895013 kubelet[3069]: W1216 12:29:44.894986 3069 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Dec 16 12:29:44.895119 kubelet[3069]: E1216 12:29:44.895106 3069 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:29:44.895504 kubelet[3069]: E1216 12:29:44.895476 3069 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-7f44347f41?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="200ms" Dec 16 12:29:44.896203 kubelet[3069]: I1216 12:29:44.895807 3069 factory.go:221] Registration of the systemd container factory successfully Dec 16 12:29:44.896203 kubelet[3069]: I1216 12:29:44.895866 3069 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:29:44.897074 kubelet[3069]: I1216 12:29:44.897062 3069 factory.go:221] Registration of the containerd container factory successfully Dec 16 12:29:44.917509 kubelet[3069]: I1216 12:29:44.917488 3069 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:29:44.917584 kubelet[3069]: I1216 12:29:44.917527 3069 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:29:44.917584 kubelet[3069]: I1216 12:29:44.917548 3069 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:29:44.924933 kubelet[3069]: I1216 12:29:44.924767 3069 policy_none.go:49] "None policy: Start" Dec 16 12:29:44.924933 kubelet[3069]: I1216 12:29:44.924794 3069 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:29:44.924933 kubelet[3069]: I1216 12:29:44.924804 3069 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:29:44.926832 kubelet[3069]: I1216 12:29:44.926809 3069 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 12:29:44.928118 kubelet[3069]: I1216 12:29:44.927936 3069 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:29:44.928118 kubelet[3069]: I1216 12:29:44.927954 3069 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 12:29:44.928118 kubelet[3069]: I1216 12:29:44.927972 3069 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:29:44.928118 kubelet[3069]: I1216 12:29:44.927984 3069 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 12:29:44.928249 kubelet[3069]: E1216 12:29:44.928231 3069 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:29:44.930799 kubelet[3069]: W1216 12:29:44.930752 3069 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Dec 16 12:29:44.930862 kubelet[3069]: E1216 12:29:44.930800 3069 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:29:44.934495 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 12:29:44.945555 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:29:44.948176 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 12:29:44.959760 kubelet[3069]: I1216 12:29:44.959740 3069 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 12:29:44.960106 kubelet[3069]: I1216 12:29:44.960069 3069 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:29:44.960248 kubelet[3069]: I1216 12:29:44.960087 3069 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:29:44.960584 kubelet[3069]: I1216 12:29:44.960570 3069 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:29:44.961518 kubelet[3069]: E1216 12:29:44.961477 3069 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:29:44.961518 kubelet[3069]: E1216 12:29:44.961509 3069 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-a-7f44347f41\" not found" Dec 16 12:29:45.038045 systemd[1]: Created slice kubepods-burstable-podfe16b5e5ce27123a3d918efd283edd28.slice - libcontainer container kubepods-burstable-podfe16b5e5ce27123a3d918efd283edd28.slice. Dec 16 12:29:45.043622 kubelet[3069]: E1216 12:29:45.043557 3069 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-7f44347f41\" not found" node="ci-4459.2.2-a-7f44347f41" Dec 16 12:29:45.045746 systemd[1]: Created slice kubepods-burstable-pod1f3307bc582b5016153bc4782832a66e.slice - libcontainer container kubepods-burstable-pod1f3307bc582b5016153bc4782832a66e.slice. 
Dec 16 12:29:45.047059 kubelet[3069]: E1216 12:29:45.047044 3069 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-7f44347f41\" not found" node="ci-4459.2.2-a-7f44347f41" Dec 16 12:29:45.060396 systemd[1]: Created slice kubepods-burstable-podc9c89019c9c2e7a6f22818764a8d2c62.slice - libcontainer container kubepods-burstable-podc9c89019c9c2e7a6f22818764a8d2c62.slice. Dec 16 12:29:45.062554 kubelet[3069]: I1216 12:29:45.062532 3069 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-7f44347f41" Dec 16 12:29:45.062769 kubelet[3069]: E1216 12:29:45.062754 3069 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-7f44347f41\" not found" node="ci-4459.2.2-a-7f44347f41" Dec 16 12:29:45.062917 kubelet[3069]: E1216 12:29:45.062896 3069 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4459.2.2-a-7f44347f41" Dec 16 12:29:45.097482 kubelet[3069]: E1216 12:29:45.096561 3069 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-7f44347f41?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="400ms" Dec 16 12:29:45.195812 kubelet[3069]: I1216 12:29:45.195686 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c9c89019c9c2e7a6f22818764a8d2c62-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-a-7f44347f41\" (UID: \"c9c89019c9c2e7a6f22818764a8d2c62\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:45.195812 kubelet[3069]: I1216 12:29:45.195744 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c9c89019c9c2e7a6f22818764a8d2c62-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-a-7f44347f41\" (UID: \"c9c89019c9c2e7a6f22818764a8d2c62\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:45.195812 kubelet[3069]: I1216 12:29:45.195759 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f3307bc582b5016153bc4782832a66e-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-a-7f44347f41\" (UID: \"1f3307bc582b5016153bc4782832a66e\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:45.195812 kubelet[3069]: I1216 12:29:45.195773 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f3307bc582b5016153bc4782832a66e-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-a-7f44347f41\" (UID: \"1f3307bc582b5016153bc4782832a66e\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:45.195812 kubelet[3069]: I1216 12:29:45.195783 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f3307bc582b5016153bc4782832a66e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-a-7f44347f41\" (UID: \"1f3307bc582b5016153bc4782832a66e\") 
" pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:45.196038 kubelet[3069]: I1216 12:29:45.195792 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c9c89019c9c2e7a6f22818764a8d2c62-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-7f44347f41\" (UID: \"c9c89019c9c2e7a6f22818764a8d2c62\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:45.196038 kubelet[3069]: I1216 12:29:45.195802 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe16b5e5ce27123a3d918efd283edd28-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-a-7f44347f41\" (UID: \"fe16b5e5ce27123a3d918efd283edd28\") " pod="kube-system/kube-scheduler-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:45.196038 kubelet[3069]: I1216 12:29:45.195813 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c9c89019c9c2e7a6f22818764a8d2c62-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-a-7f44347f41\" (UID: \"c9c89019c9c2e7a6f22818764a8d2c62\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:45.196038 kubelet[3069]: I1216 12:29:45.195835 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c9c89019c9c2e7a6f22818764a8d2c62-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-7f44347f41\" (UID: \"c9c89019c9c2e7a6f22818764a8d2c62\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:45.264531 kubelet[3069]: I1216 12:29:45.264496 3069 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-7f44347f41" Dec 16 12:29:45.264886 kubelet[3069]: E1216 12:29:45.264857 3069 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4459.2.2-a-7f44347f41" Dec 16 12:29:45.345127 containerd[1901]: time="2025-12-16T12:29:45.345055439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-a-7f44347f41,Uid:fe16b5e5ce27123a3d918efd283edd28,Namespace:kube-system,Attempt:0,}" Dec 16 12:29:45.348812 containerd[1901]: time="2025-12-16T12:29:45.348723719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-a-7f44347f41,Uid:1f3307bc582b5016153bc4782832a66e,Namespace:kube-system,Attempt:0,}" Dec 16 12:29:45.363637 containerd[1901]: time="2025-12-16T12:29:45.363605027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-a-7f44347f41,Uid:c9c89019c9c2e7a6f22818764a8d2c62,Namespace:kube-system,Attempt:0,}" Dec 16 12:29:45.432194 containerd[1901]: time="2025-12-16T12:29:45.432086775Z" level=info msg="connecting to shim fb9965213b6e0f5b2a1064f37fa442b96c591505f0895e0768c8d76afceee430" address="unix:///run/containerd/s/3658a117118bf768d0113df5b129c88077507dcb8f95dd930c9c152fc06a6a92" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:29:45.432459 containerd[1901]: time="2025-12-16T12:29:45.432438281Z" level=info msg="connecting to shim b2a4e9de14b75445ebd374e9d84e41181e2912af519bdad641fd3575e4068707" 
address="unix:///run/containerd/s/bd6e4016c801c989742cb11dba5ea326a4d75d1abae017be4274ff7436d32172" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:29:45.457162 systemd[1]: Started cri-containerd-fb9965213b6e0f5b2a1064f37fa442b96c591505f0895e0768c8d76afceee430.scope - libcontainer container fb9965213b6e0f5b2a1064f37fa442b96c591505f0895e0768c8d76afceee430. Dec 16 12:29:45.459853 systemd[1]: Started cri-containerd-b2a4e9de14b75445ebd374e9d84e41181e2912af519bdad641fd3575e4068707.scope - libcontainer container b2a4e9de14b75445ebd374e9d84e41181e2912af519bdad641fd3575e4068707. Dec 16 12:29:45.462416 containerd[1901]: time="2025-12-16T12:29:45.462110766Z" level=info msg="connecting to shim 2ec5644534179a281b7b3e197f1825ad0cb6583d5a1de88d34d2f89a202c3b76" address="unix:///run/containerd/s/a13c3d825c816dec99cb9057f471f126e3cec222b616f11426dcce10d8ead453" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:29:45.490284 systemd[1]: Started cri-containerd-2ec5644534179a281b7b3e197f1825ad0cb6583d5a1de88d34d2f89a202c3b76.scope - libcontainer container 2ec5644534179a281b7b3e197f1825ad0cb6583d5a1de88d34d2f89a202c3b76. Dec 16 12:29:45.497564 kubelet[3069]: E1216 12:29:45.497533 3069 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-7f44347f41?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="800ms" Dec 16 12:29:45.517564 containerd[1901]: time="2025-12-16T12:29:45.517507610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-a-7f44347f41,Uid:1f3307bc582b5016153bc4782832a66e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2a4e9de14b75445ebd374e9d84e41181e2912af519bdad641fd3575e4068707\"" Dec 16 12:29:45.522418 containerd[1901]: time="2025-12-16T12:29:45.522385571Z" level=info msg="CreateContainer within sandbox \"b2a4e9de14b75445ebd374e9d84e41181e2912af519bdad641fd3575e4068707\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:29:45.525044 containerd[1901]: time="2025-12-16T12:29:45.524951476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-a-7f44347f41,Uid:fe16b5e5ce27123a3d918efd283edd28,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb9965213b6e0f5b2a1064f37fa442b96c591505f0895e0768c8d76afceee430\"" Dec 16 12:29:45.527156 containerd[1901]: time="2025-12-16T12:29:45.527127585Z" level=info msg="CreateContainer within sandbox \"fb9965213b6e0f5b2a1064f37fa442b96c591505f0895e0768c8d76afceee430\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:29:45.542957 containerd[1901]: time="2025-12-16T12:29:45.542911431Z" level=info msg="Container a9f1a5d6e1079f735cda7bc5036e18e81067d7d3364119739dda0f022cf0dc61: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:29:45.549738 containerd[1901]: time="2025-12-16T12:29:45.549647509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-a-7f44347f41,Uid:c9c89019c9c2e7a6f22818764a8d2c62,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ec5644534179a281b7b3e197f1825ad0cb6583d5a1de88d34d2f89a202c3b76\"" Dec 16 12:29:45.553964 containerd[1901]: time="2025-12-16T12:29:45.553932134Z" level=info msg="CreateContainer within sandbox \"2ec5644534179a281b7b3e197f1825ad0cb6583d5a1de88d34d2f89a202c3b76\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:29:45.567400 containerd[1901]: 
time="2025-12-16T12:29:45.566863362Z" level=info msg="Container 30fc6feb8ef8467f75c8219346f230305c2c90f06d724e81039899e734b65941: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:29:45.567400 containerd[1901]: time="2025-12-16T12:29:45.566877619Z" level=info msg="CreateContainer within sandbox \"b2a4e9de14b75445ebd374e9d84e41181e2912af519bdad641fd3575e4068707\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a9f1a5d6e1079f735cda7bc5036e18e81067d7d3364119739dda0f022cf0dc61\"" Dec 16 12:29:45.568052 containerd[1901]: time="2025-12-16T12:29:45.568012243Z" level=info msg="StartContainer for \"a9f1a5d6e1079f735cda7bc5036e18e81067d7d3364119739dda0f022cf0dc61\"" Dec 16 12:29:45.568924 containerd[1901]: time="2025-12-16T12:29:45.568896532Z" level=info msg="connecting to shim a9f1a5d6e1079f735cda7bc5036e18e81067d7d3364119739dda0f022cf0dc61" address="unix:///run/containerd/s/bd6e4016c801c989742cb11dba5ea326a4d75d1abae017be4274ff7436d32172" protocol=ttrpc version=3 Dec 16 12:29:45.587251 systemd[1]: Started cri-containerd-a9f1a5d6e1079f735cda7bc5036e18e81067d7d3364119739dda0f022cf0dc61.scope - libcontainer container a9f1a5d6e1079f735cda7bc5036e18e81067d7d3364119739dda0f022cf0dc61. Dec 16 12:29:45.588465 containerd[1901]: time="2025-12-16T12:29:45.588401658Z" level=info msg="CreateContainer within sandbox \"fb9965213b6e0f5b2a1064f37fa442b96c591505f0895e0768c8d76afceee430\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"30fc6feb8ef8467f75c8219346f230305c2c90f06d724e81039899e734b65941\"" Dec 16 12:29:45.589156 containerd[1901]: time="2025-12-16T12:29:45.589135807Z" level=info msg="StartContainer for \"30fc6feb8ef8467f75c8219346f230305c2c90f06d724e81039899e734b65941\"" Dec 16 12:29:45.592440 containerd[1901]: time="2025-12-16T12:29:45.592414684Z" level=info msg="connecting to shim 30fc6feb8ef8467f75c8219346f230305c2c90f06d724e81039899e734b65941" address="unix:///run/containerd/s/3658a117118bf768d0113df5b129c88077507dcb8f95dd930c9c152fc06a6a92" protocol=ttrpc version=3 Dec 16 12:29:45.592662 containerd[1901]: time="2025-12-16T12:29:45.592619025Z" level=info msg="Container 612c8194c9d8b77f1376cae90ab80a1853afe735fd235ad719773b429b0acc02: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:29:45.613098 containerd[1901]: time="2025-12-16T12:29:45.611936034Z" level=info msg="CreateContainer within sandbox \"2ec5644534179a281b7b3e197f1825ad0cb6583d5a1de88d34d2f89a202c3b76\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"612c8194c9d8b77f1376cae90ab80a1853afe735fd235ad719773b429b0acc02\"" Dec 16 12:29:45.612120 systemd[1]: Started cri-containerd-30fc6feb8ef8467f75c8219346f230305c2c90f06d724e81039899e734b65941.scope - libcontainer container 30fc6feb8ef8467f75c8219346f230305c2c90f06d724e81039899e734b65941. Dec 16 12:29:45.614944 containerd[1901]: time="2025-12-16T12:29:45.614875741Z" level=info msg="StartContainer for \"612c8194c9d8b77f1376cae90ab80a1853afe735fd235ad719773b429b0acc02\"" Dec 16 12:29:45.617175 containerd[1901]: time="2025-12-16T12:29:45.617147765Z" level=info msg="connecting to shim 612c8194c9d8b77f1376cae90ab80a1853afe735fd235ad719773b429b0acc02" address="unix:///run/containerd/s/a13c3d825c816dec99cb9057f471f126e3cec222b616f11426dcce10d8ead453" protocol=ttrpc version=3 Dec 16 12:29:45.647263 systemd[1]: Started cri-containerd-612c8194c9d8b77f1376cae90ab80a1853afe735fd235ad719773b429b0acc02.scope - libcontainer container 612c8194c9d8b77f1376cae90ab80a1853afe735fd235ad719773b429b0acc02. 
Dec 16 12:29:45.658284 containerd[1901]: time="2025-12-16T12:29:45.657856570Z" level=info msg="StartContainer for \"a9f1a5d6e1079f735cda7bc5036e18e81067d7d3364119739dda0f022cf0dc61\" returns successfully" Dec 16 12:29:45.667653 kubelet[3069]: I1216 12:29:45.667608 3069 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-7f44347f41" Dec 16 12:29:45.668418 kubelet[3069]: E1216 12:29:45.668193 3069 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4459.2.2-a-7f44347f41" Dec 16 12:29:45.677763 containerd[1901]: time="2025-12-16T12:29:45.677725499Z" level=info msg="StartContainer for \"30fc6feb8ef8467f75c8219346f230305c2c90f06d724e81039899e734b65941\" returns successfully" Dec 16 12:29:45.707283 containerd[1901]: time="2025-12-16T12:29:45.707241628Z" level=info msg="StartContainer for \"612c8194c9d8b77f1376cae90ab80a1853afe735fd235ad719773b429b0acc02\" returns successfully" Dec 16 12:29:45.942144 kubelet[3069]: E1216 12:29:45.942054 3069 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-7f44347f41\" not found" node="ci-4459.2.2-a-7f44347f41" Dec 16 12:29:45.944260 kubelet[3069]: E1216 12:29:45.944135 3069 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-7f44347f41\" not found" node="ci-4459.2.2-a-7f44347f41" Dec 16 12:29:45.946702 kubelet[3069]: E1216 12:29:45.946606 3069 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-7f44347f41\" not found" node="ci-4459.2.2-a-7f44347f41" Dec 16 12:29:46.470597 kubelet[3069]: I1216 12:29:46.470573 3069 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-7f44347f41" Dec 16 12:29:46.943393 kubelet[3069]: E1216 12:29:46.943279 3069 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.2-a-7f44347f41\" not found" node="ci-4459.2.2-a-7f44347f41" Dec 16 12:29:46.948808 kubelet[3069]: E1216 12:29:46.948760 3069 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-7f44347f41\" not found" node="ci-4459.2.2-a-7f44347f41" Dec 16 12:29:46.949112 kubelet[3069]: E1216 12:29:46.949053 3069 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-7f44347f41\" not found" node="ci-4459.2.2-a-7f44347f41" Dec 16 12:29:46.993935 kubelet[3069]: I1216 12:29:46.993733 3069 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-a-7f44347f41" Dec 16 12:29:46.995467 kubelet[3069]: I1216 12:29:46.995443 3069 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:47.066188 kubelet[3069]: E1216 12:29:47.066118 3069 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-7f44347f41\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:47.066188 kubelet[3069]: I1216 12:29:47.066149 3069 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:47.067904 kubelet[3069]: E1216 12:29:47.067856 3069 kubelet.go:3196] 
"Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-7f44347f41\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:47.067904 kubelet[3069]: I1216 12:29:47.067877 3069 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:47.070274 kubelet[3069]: E1216 12:29:47.070239 3069 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-a-7f44347f41\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:47.888297 kubelet[3069]: I1216 12:29:47.888212 3069 apiserver.go:52] "Watching apiserver" Dec 16 12:29:47.895191 kubelet[3069]: I1216 12:29:47.895161 3069 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:29:47.948158 kubelet[3069]: I1216 12:29:47.948092 3069 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:47.955384 kubelet[3069]: W1216 12:29:47.955331 3069 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 12:29:48.930325 systemd[1]: Reload requested from client PID 3340 ('systemctl') (unit session-9.scope)... Dec 16 12:29:48.930339 systemd[1]: Reloading... Dec 16 12:29:49.011047 zram_generator::config[3384]: No configuration found. Dec 16 12:29:49.181048 systemd[1]: Reloading finished in 250 ms. Dec 16 12:29:49.205627 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:29:49.217822 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:29:49.218157 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:29:49.218217 systemd[1]: kubelet.service: Consumed 792ms CPU time, 125.1M memory peak. Dec 16 12:29:49.219818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:29:49.327152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:29:49.337432 (kubelet)[3451]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:29:49.366051 kubelet[3451]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:29:49.366051 kubelet[3451]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:29:49.366051 kubelet[3451]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 12:29:49.366521 kubelet[3451]: I1216 12:29:49.366481 3451 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:29:49.373042 kubelet[3451]: I1216 12:29:49.372377 3451 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 12:29:49.373042 kubelet[3451]: I1216 12:29:49.372399 3451 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:29:49.373042 kubelet[3451]: I1216 12:29:49.372565 3451 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 12:29:49.373624 kubelet[3451]: I1216 12:29:49.373609 3451 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 16 12:29:49.375388 kubelet[3451]: I1216 12:29:49.375358 3451 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:29:49.379185 kubelet[3451]: I1216 12:29:49.379101 3451 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:29:49.381703 kubelet[3451]: I1216 12:29:49.381686 3451 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 12:29:49.381882 kubelet[3451]: I1216 12:29:49.381857 3451 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:29:49.381993 kubelet[3451]: I1216 12:29:49.381879 3451 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-a-7f44347f41","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:29:49.382075 kubelet[3451]: I1216 12:29:49.381999 3451 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:29:49.382075 kubelet[3451]: I1216 12:29:49.382006 3451 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 12:29:49.382075 kubelet[3451]: I1216 12:29:49.382055 3451 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:29:49.382162 
kubelet[3451]: I1216 12:29:49.382149 3451 kubelet.go:446] "Attempting to sync node with API server" Dec 16 12:29:49.382162 kubelet[3451]: I1216 12:29:49.382158 3451 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:29:49.382200 kubelet[3451]: I1216 12:29:49.382175 3451 kubelet.go:352] "Adding apiserver pod source" Dec 16 12:29:49.382200 kubelet[3451]: I1216 12:29:49.382183 3451 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:29:49.384581 kubelet[3451]: I1216 12:29:49.384564 3451 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 12:29:49.384978 kubelet[3451]: I1216 12:29:49.384963 3451 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 12:29:49.385760 kubelet[3451]: I1216 12:29:49.385741 3451 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:29:49.385853 kubelet[3451]: I1216 12:29:49.385844 3451 server.go:1287] "Started kubelet" Dec 16 12:29:49.388254 kubelet[3451]: I1216 12:29:49.388239 3451 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:29:49.389737 kubelet[3451]: I1216 12:29:49.389716 3451 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:29:49.390641 kubelet[3451]: I1216 12:29:49.390625 3451 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:29:49.390850 kubelet[3451]: E1216 12:29:49.390834 3451 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-7f44347f41\" not found" Dec 16 12:29:49.391283 kubelet[3451]: I1216 12:29:49.391267 3451 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:29:49.391450 kubelet[3451]: I1216 12:29:49.391441 3451 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:29:49.395246 kubelet[3451]: I1216 12:29:49.395217 3451 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:29:49.395902 kubelet[3451]: I1216 12:29:49.395884 3451 server.go:479] "Adding debug handlers to kubelet server" Dec 16 12:29:49.398386 kubelet[3451]: I1216 12:29:49.398358 3451 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 12:29:49.398560 kubelet[3451]: I1216 12:29:49.398515 3451 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:29:49.398723 kubelet[3451]: I1216 12:29:49.398704 3451 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:29:49.399469 kubelet[3451]: I1216 12:29:49.399452 3451 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 12:29:49.399551 kubelet[3451]: I1216 12:29:49.399542 3451 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 12:29:49.399599 kubelet[3451]: I1216 12:29:49.399592 3451 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 12:29:49.399636 kubelet[3451]: I1216 12:29:49.399629 3451 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 12:29:49.399720 kubelet[3451]: E1216 12:29:49.399700 3451 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:29:49.400713 kubelet[3451]: I1216 12:29:49.400683 3451 factory.go:221] Registration of the systemd container factory successfully Dec 16 12:29:49.400770 kubelet[3451]: I1216 12:29:49.400755 3451 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:29:49.404585 kubelet[3451]: I1216 12:29:49.404562 3451 factory.go:221] Registration of the containerd container factory successfully Dec 16 12:29:49.408474 kubelet[3451]: E1216 12:29:49.408452 3451 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:29:49.457842 kubelet[3451]: I1216 12:29:49.457750 3451 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:29:49.457995 kubelet[3451]: I1216 12:29:49.457981 3451 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:29:49.458066 kubelet[3451]: I1216 12:29:49.458059 3451 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:29:49.458264 kubelet[3451]: I1216 12:29:49.458248 3451 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:29:49.458334 kubelet[3451]: I1216 12:29:49.458314 3451 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:29:49.458373 kubelet[3451]: I1216 12:29:49.458367 3451 policy_none.go:49] "None policy: Start" Dec 16 12:29:49.458422 kubelet[3451]: I1216 12:29:49.458414 3451 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:29:49.458457 kubelet[3451]: I1216 12:29:49.458452 3451 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:29:49.458591 kubelet[3451]: I1216 12:29:49.458581 3451 state_mem.go:75] "Updated machine memory state" Dec 16 12:29:49.463691 kubelet[3451]: I1216 12:29:49.463674 3451 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 12:29:49.463933 kubelet[3451]: I1216 12:29:49.463916 3451 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:29:49.464292 kubelet[3451]: I1216 12:29:49.464264 3451 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:29:49.464551 kubelet[3451]: I1216 12:29:49.464533 3451 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:29:49.465826 kubelet[3451]: E1216 12:29:49.465728 3451 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:29:49.500405 kubelet[3451]: I1216 12:29:49.500374 3451 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:49.501940 kubelet[3451]: I1216 12:29:49.500624 3451 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:49.502284 kubelet[3451]: I1216 12:29:49.500697 3451 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:49.512139 kubelet[3451]: W1216 12:29:49.512118 3451 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 12:29:49.514158 kubelet[3451]: W1216 12:29:49.514135 3451 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 12:29:49.514158 kubelet[3451]: W1216 12:29:49.514134 3451 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 12:29:49.514326 kubelet[3451]: E1216 12:29:49.514308 3451 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-7f44347f41\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:49.566935 kubelet[3451]: I1216 12:29:49.566906 3451 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-7f44347f41" Dec 16 12:29:49.577704 kubelet[3451]: I1216 12:29:49.577375 3451 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.2-a-7f44347f41" Dec 16 12:29:49.577704 kubelet[3451]: I1216 12:29:49.577449 3451 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-a-7f44347f41" Dec 16 12:29:49.692572 kubelet[3451]: I1216 12:29:49.692536 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f3307bc582b5016153bc4782832a66e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-a-7f44347f41\" (UID: \"1f3307bc582b5016153bc4782832a66e\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:49.692572 kubelet[3451]: I1216 12:29:49.692570 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c9c89019c9c2e7a6f22818764a8d2c62-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-7f44347f41\" (UID: \"c9c89019c9c2e7a6f22818764a8d2c62\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:49.692726 kubelet[3451]: I1216 12:29:49.692585 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c9c89019c9c2e7a6f22818764a8d2c62-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-a-7f44347f41\" (UID: \"c9c89019c9c2e7a6f22818764a8d2c62\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:49.692726 kubelet[3451]: I1216 12:29:49.692598 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c9c89019c9c2e7a6f22818764a8d2c62-k8s-certs\") pod 
\"kube-controller-manager-ci-4459.2.2-a-7f44347f41\" (UID: \"c9c89019c9c2e7a6f22818764a8d2c62\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:49.692726 kubelet[3451]: I1216 12:29:49.692611 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe16b5e5ce27123a3d918efd283edd28-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-a-7f44347f41\" (UID: \"fe16b5e5ce27123a3d918efd283edd28\") " pod="kube-system/kube-scheduler-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:49.692726 kubelet[3451]: I1216 12:29:49.692621 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f3307bc582b5016153bc4782832a66e-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-a-7f44347f41\" (UID: \"1f3307bc582b5016153bc4782832a66e\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:49.692726 kubelet[3451]: I1216 12:29:49.692629 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c9c89019c9c2e7a6f22818764a8d2c62-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-a-7f44347f41\" (UID: \"c9c89019c9c2e7a6f22818764a8d2c62\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:49.692806 kubelet[3451]: I1216 12:29:49.692639 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c9c89019c9c2e7a6f22818764a8d2c62-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-a-7f44347f41\" (UID: \"c9c89019c9c2e7a6f22818764a8d2c62\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:49.692806 kubelet[3451]: I1216 12:29:49.692649 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f3307bc582b5016153bc4782832a66e-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-a-7f44347f41\" (UID: \"1f3307bc582b5016153bc4782832a66e\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:49.972951 sudo[3485]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 16 12:29:49.973579 sudo[3485]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 16 12:29:50.205541 sudo[3485]: pam_unix(sudo:session): session closed for user root Dec 16 12:29:50.383193 kubelet[3451]: I1216 12:29:50.383078 3451 apiserver.go:52] "Watching apiserver" Dec 16 12:29:50.392191 kubelet[3451]: I1216 12:29:50.392147 3451 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:29:50.442440 kubelet[3451]: I1216 12:29:50.442402 3451 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:50.456232 kubelet[3451]: W1216 12:29:50.456206 3451 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 12:29:50.456383 kubelet[3451]: E1216 12:29:50.456368 3451 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-7f44347f41\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f44347f41" Dec 16 12:29:50.518878 
kubelet[3451]: I1216 12:29:50.518828 3451 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.2-a-7f44347f41" podStartSLOduration=3.518812125 podStartE2EDuration="3.518812125s" podCreationTimestamp="2025-12-16 12:29:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:29:50.50132802 +0000 UTC m=+1.160904912" watchObservedRunningTime="2025-12-16 12:29:50.518812125 +0000 UTC m=+1.178389017" Dec 16 12:29:50.530269 kubelet[3451]: I1216 12:29:50.530169 3451 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f44347f41" podStartSLOduration=1.530157493 podStartE2EDuration="1.530157493s" podCreationTimestamp="2025-12-16 12:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:29:50.519140814 +0000 UTC m=+1.178717706" watchObservedRunningTime="2025-12-16 12:29:50.530157493 +0000 UTC m=+1.189734393" Dec 16 12:29:50.543250 kubelet[3451]: I1216 12:29:50.543212 3451 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f44347f41" podStartSLOduration=1.54319906 podStartE2EDuration="1.54319906s" podCreationTimestamp="2025-12-16 12:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:29:50.530731469 +0000 UTC m=+1.190308417" watchObservedRunningTime="2025-12-16 12:29:50.54319906 +0000 UTC m=+1.202775952" Dec 16 12:29:51.321749 sudo[2388]: pam_unix(sudo:session): session closed for user root Dec 16 12:29:51.399133 sshd[2387]: Connection closed by 10.200.16.10 port 37562 Dec 16 12:29:51.399669 sshd-session[2384]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:51.403788 systemd[1]: sshd@6-10.200.20.37:22-10.200.16.10:37562.service: Deactivated successfully. Dec 16 12:29:51.407985 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:29:51.408243 systemd[1]: session-9.scope: Consumed 3.324s CPU time, 260.2M memory peak. Dec 16 12:29:51.409848 systemd-logind[1878]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:29:51.414172 systemd-logind[1878]: Removed session 9. Dec 16 12:29:55.037399 kubelet[3451]: I1216 12:29:55.037297 3451 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:29:55.038197 containerd[1901]: time="2025-12-16T12:29:55.038160021Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 12:29:55.038981 kubelet[3451]: I1216 12:29:55.038720 3451 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:29:55.894665 systemd[1]: Created slice kubepods-besteffort-pod2ef3b466_cd8f_4116_be21_a5f9228ec508.slice - libcontainer container kubepods-besteffort-pod2ef3b466_cd8f_4116_be21_a5f9228ec508.slice. Dec 16 12:29:55.905509 systemd[1]: Created slice kubepods-burstable-pod591f4981_bdad_4905_89a4_2bb53e21dcb8.slice - libcontainer container kubepods-burstable-pod591f4981_bdad_4905_89a4_2bb53e21dcb8.slice. 
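
The pod_startup_latency_tracker entries above report podStartSLOduration values that line up with watchObservedRunningTime minus podCreationTimestamp (static pods pull nothing, so the pull timestamps stay at the zero value). A quick check with the kube-scheduler figures from the log; datetime only keeps microseconds, so the nanosecond tail is truncated.

# Recomputing the kube-scheduler podStartSLOduration from the timestamps logged above.
from datetime import datetime, timezone

created  = datetime(2025, 12, 16, 12, 29, 47, tzinfo=timezone.utc)            # podCreationTimestamp
observed = datetime(2025, 12, 16, 12, 29, 50, 518812, tzinfo=timezone.utc)    # watchObservedRunningTime, truncated to µs

print((observed - created).total_seconds())   # 3.518812, matching the logged 3.518812125s
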
Dec 16 12:29:55.930382 kubelet[3451]: I1216 12:29:55.930053 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/591f4981-bdad-4905-89a4-2bb53e21dcb8-cilium-config-path\") pod \"cilium-bkjfq\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " pod="kube-system/cilium-bkjfq" Dec 16 12:29:55.930382 kubelet[3451]: I1216 12:29:55.930089 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-host-proc-sys-net\") pod \"cilium-bkjfq\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " pod="kube-system/cilium-bkjfq" Dec 16 12:29:55.930382 kubelet[3451]: I1216 12:29:55.930104 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6lv9\" (UniqueName: \"kubernetes.io/projected/2ef3b466-cd8f-4116-be21-a5f9228ec508-kube-api-access-q6lv9\") pod \"kube-proxy-9wrt2\" (UID: \"2ef3b466-cd8f-4116-be21-a5f9228ec508\") " pod="kube-system/kube-proxy-9wrt2" Dec 16 12:29:55.930382 kubelet[3451]: I1216 12:29:55.930116 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-cni-path\") pod \"cilium-bkjfq\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " pod="kube-system/cilium-bkjfq" Dec 16 12:29:55.930382 kubelet[3451]: I1216 12:29:55.930125 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-lib-modules\") pod \"cilium-bkjfq\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " pod="kube-system/cilium-bkjfq" Dec 16 12:29:55.930382 kubelet[3451]: I1216 12:29:55.930134 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-xtables-lock\") pod \"cilium-bkjfq\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " pod="kube-system/cilium-bkjfq" Dec 16 12:29:55.930567 kubelet[3451]: I1216 12:29:55.930145 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-hostproc\") pod \"cilium-bkjfq\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " pod="kube-system/cilium-bkjfq" Dec 16 12:29:55.930567 kubelet[3451]: I1216 12:29:55.930157 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2ef3b466-cd8f-4116-be21-a5f9228ec508-kube-proxy\") pod \"kube-proxy-9wrt2\" (UID: \"2ef3b466-cd8f-4116-be21-a5f9228ec508\") " pod="kube-system/kube-proxy-9wrt2" Dec 16 12:29:55.930567 kubelet[3451]: I1216 12:29:55.930167 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-cilium-run\") pod \"cilium-bkjfq\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " pod="kube-system/cilium-bkjfq" Dec 16 12:29:55.930567 kubelet[3451]: I1216 12:29:55.930177 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/591f4981-bdad-4905-89a4-2bb53e21dcb8-clustermesh-secrets\") pod \"cilium-bkjfq\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " pod="kube-system/cilium-bkjfq" Dec 16 12:29:55.930567 kubelet[3451]: I1216 12:29:55.930186 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-host-proc-sys-kernel\") pod \"cilium-bkjfq\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " pod="kube-system/cilium-bkjfq" Dec 16 12:29:55.930567 kubelet[3451]: I1216 12:29:55.930196 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-cilium-cgroup\") pod \"cilium-bkjfq\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " pod="kube-system/cilium-bkjfq" Dec 16 12:29:55.930651 kubelet[3451]: I1216 12:29:55.930206 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmqn7\" (UniqueName: \"kubernetes.io/projected/591f4981-bdad-4905-89a4-2bb53e21dcb8-kube-api-access-cmqn7\") pod \"cilium-bkjfq\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " pod="kube-system/cilium-bkjfq" Dec 16 12:29:55.930651 kubelet[3451]: I1216 12:29:55.930216 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ef3b466-cd8f-4116-be21-a5f9228ec508-xtables-lock\") pod \"kube-proxy-9wrt2\" (UID: \"2ef3b466-cd8f-4116-be21-a5f9228ec508\") " pod="kube-system/kube-proxy-9wrt2" Dec 16 12:29:55.930651 kubelet[3451]: I1216 12:29:55.930242 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ef3b466-cd8f-4116-be21-a5f9228ec508-lib-modules\") pod \"kube-proxy-9wrt2\" (UID: \"2ef3b466-cd8f-4116-be21-a5f9228ec508\") " pod="kube-system/kube-proxy-9wrt2" Dec 16 12:29:55.930651 kubelet[3451]: I1216 12:29:55.930252 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-bpf-maps\") pod \"cilium-bkjfq\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " pod="kube-system/cilium-bkjfq" Dec 16 12:29:55.930651 kubelet[3451]: I1216 12:29:55.930262 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-etc-cni-netd\") pod \"cilium-bkjfq\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " pod="kube-system/cilium-bkjfq" Dec 16 12:29:55.930651 kubelet[3451]: I1216 12:29:55.930283 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/591f4981-bdad-4905-89a4-2bb53e21dcb8-hubble-tls\") pod \"cilium-bkjfq\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " pod="kube-system/cilium-bkjfq" Dec 16 12:29:56.089140 systemd[1]: Created slice kubepods-besteffort-pod6f038b8d_8607_4673_98e7_85030005a9e6.slice - libcontainer container kubepods-besteffort-pod6f038b8d_8607_4673_98e7_85030005a9e6.slice. 
Dec 16 12:29:56.132337 kubelet[3451]: I1216 12:29:56.132289 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlfjx\" (UniqueName: \"kubernetes.io/projected/6f038b8d-8607-4673-98e7-85030005a9e6-kube-api-access-dlfjx\") pod \"cilium-operator-6c4d7847fc-lpgv5\" (UID: \"6f038b8d-8607-4673-98e7-85030005a9e6\") " pod="kube-system/cilium-operator-6c4d7847fc-lpgv5" Dec 16 12:29:56.132337 kubelet[3451]: I1216 12:29:56.132340 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f038b8d-8607-4673-98e7-85030005a9e6-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-lpgv5\" (UID: \"6f038b8d-8607-4673-98e7-85030005a9e6\") " pod="kube-system/cilium-operator-6c4d7847fc-lpgv5" Dec 16 12:29:56.204048 containerd[1901]: time="2025-12-16T12:29:56.203924244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9wrt2,Uid:2ef3b466-cd8f-4116-be21-a5f9228ec508,Namespace:kube-system,Attempt:0,}" Dec 16 12:29:56.209538 containerd[1901]: time="2025-12-16T12:29:56.209503683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bkjfq,Uid:591f4981-bdad-4905-89a4-2bb53e21dcb8,Namespace:kube-system,Attempt:0,}" Dec 16 12:29:56.281001 containerd[1901]: time="2025-12-16T12:29:56.280961667Z" level=info msg="connecting to shim e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553" address="unix:///run/containerd/s/8b2473fb27ec7249442324c477e4dc6827872f7dff9c14ef78eaf9a96abc74e2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:29:56.282312 containerd[1901]: time="2025-12-16T12:29:56.282281991Z" level=info msg="connecting to shim 575abb6f5b5faadd75486a022b7afa1e80b7bf17e16ed028a8a76365cf507a62" address="unix:///run/containerd/s/0681ba10ff46b1bf0c10afa1f12fcc04ca222738758f0825dfe87b560ed51e0d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:29:56.299220 systemd[1]: Started cri-containerd-575abb6f5b5faadd75486a022b7afa1e80b7bf17e16ed028a8a76365cf507a62.scope - libcontainer container 575abb6f5b5faadd75486a022b7afa1e80b7bf17e16ed028a8a76365cf507a62. Dec 16 12:29:56.302634 systemd[1]: Started cri-containerd-e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553.scope - libcontainer container e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553. 
Dec 16 12:29:56.332168 containerd[1901]: time="2025-12-16T12:29:56.332131968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9wrt2,Uid:2ef3b466-cd8f-4116-be21-a5f9228ec508,Namespace:kube-system,Attempt:0,} returns sandbox id \"575abb6f5b5faadd75486a022b7afa1e80b7bf17e16ed028a8a76365cf507a62\"" Dec 16 12:29:56.337573 containerd[1901]: time="2025-12-16T12:29:56.337485840Z" level=info msg="CreateContainer within sandbox \"575abb6f5b5faadd75486a022b7afa1e80b7bf17e16ed028a8a76365cf507a62\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:29:56.338514 containerd[1901]: time="2025-12-16T12:29:56.338482507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bkjfq,Uid:591f4981-bdad-4905-89a4-2bb53e21dcb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\"" Dec 16 12:29:56.340996 containerd[1901]: time="2025-12-16T12:29:56.340973095Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 16 12:29:56.363968 containerd[1901]: time="2025-12-16T12:29:56.363937858Z" level=info msg="Container bc5c6fb61736efbc01e5603086f07f4426c87b44b65a072fcb9123df7aa66632: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:29:56.380337 containerd[1901]: time="2025-12-16T12:29:56.380299468Z" level=info msg="CreateContainer within sandbox \"575abb6f5b5faadd75486a022b7afa1e80b7bf17e16ed028a8a76365cf507a62\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bc5c6fb61736efbc01e5603086f07f4426c87b44b65a072fcb9123df7aa66632\"" Dec 16 12:29:56.381203 containerd[1901]: time="2025-12-16T12:29:56.381173451Z" level=info msg="StartContainer for \"bc5c6fb61736efbc01e5603086f07f4426c87b44b65a072fcb9123df7aa66632\"" Dec 16 12:29:56.383354 containerd[1901]: time="2025-12-16T12:29:56.383321965Z" level=info msg="connecting to shim bc5c6fb61736efbc01e5603086f07f4426c87b44b65a072fcb9123df7aa66632" address="unix:///run/containerd/s/0681ba10ff46b1bf0c10afa1f12fcc04ca222738758f0825dfe87b560ed51e0d" protocol=ttrpc version=3 Dec 16 12:29:56.394737 containerd[1901]: time="2025-12-16T12:29:56.394713145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lpgv5,Uid:6f038b8d-8607-4673-98e7-85030005a9e6,Namespace:kube-system,Attempt:0,}" Dec 16 12:29:56.404174 systemd[1]: Started cri-containerd-bc5c6fb61736efbc01e5603086f07f4426c87b44b65a072fcb9123df7aa66632.scope - libcontainer container bc5c6fb61736efbc01e5603086f07f4426c87b44b65a072fcb9123df7aa66632. Dec 16 12:29:56.430656 containerd[1901]: time="2025-12-16T12:29:56.430588025Z" level=info msg="connecting to shim 8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1" address="unix:///run/containerd/s/ffb22301c6cd9fec011f0c8ed320738965be5da4019c2dc407a7ddce56e60c3d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:29:56.449137 systemd[1]: Started cri-containerd-8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1.scope - libcontainer container 8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1. 
Dec 16 12:29:56.484438 containerd[1901]: time="2025-12-16T12:29:56.484305842Z" level=info msg="StartContainer for \"bc5c6fb61736efbc01e5603086f07f4426c87b44b65a072fcb9123df7aa66632\" returns successfully" Dec 16 12:29:56.503486 containerd[1901]: time="2025-12-16T12:29:56.503452047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lpgv5,Uid:6f038b8d-8607-4673-98e7-85030005a9e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1\"" Dec 16 12:29:57.477737 kubelet[3451]: I1216 12:29:57.477558 3451 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9wrt2" podStartSLOduration=2.477541819 podStartE2EDuration="2.477541819s" podCreationTimestamp="2025-12-16 12:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:29:57.477170193 +0000 UTC m=+8.136747085" watchObservedRunningTime="2025-12-16 12:29:57.477541819 +0000 UTC m=+8.137118719" Dec 16 12:30:01.423795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1450281765.mount: Deactivated successfully. Dec 16 12:30:02.990461 containerd[1901]: time="2025-12-16T12:30:02.989919435Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:03.022636 containerd[1901]: time="2025-12-16T12:30:03.022589364Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Dec 16 12:30:03.028242 containerd[1901]: time="2025-12-16T12:30:03.028199316Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:03.029590 containerd[1901]: time="2025-12-16T12:30:03.029526872Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.688484504s" Dec 16 12:30:03.029590 containerd[1901]: time="2025-12-16T12:30:03.029589978Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 16 12:30:03.030886 containerd[1901]: time="2025-12-16T12:30:03.030767281Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 16 12:30:03.034489 containerd[1901]: time="2025-12-16T12:30:03.034460205Z" level=info msg="CreateContainer within sandbox \"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 16 12:30:03.570043 containerd[1901]: time="2025-12-16T12:30:03.569060375Z" level=info msg="Container 06d2ca496cc9ea3dea0e19a9401000d41d25cbba98a913e39b445a0704172b87: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:03.584961 containerd[1901]: time="2025-12-16T12:30:03.584919491Z" 
level=info msg="CreateContainer within sandbox \"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"06d2ca496cc9ea3dea0e19a9401000d41d25cbba98a913e39b445a0704172b87\"" Dec 16 12:30:03.585746 containerd[1901]: time="2025-12-16T12:30:03.585725217Z" level=info msg="StartContainer for \"06d2ca496cc9ea3dea0e19a9401000d41d25cbba98a913e39b445a0704172b87\"" Dec 16 12:30:03.587170 containerd[1901]: time="2025-12-16T12:30:03.587144447Z" level=info msg="connecting to shim 06d2ca496cc9ea3dea0e19a9401000d41d25cbba98a913e39b445a0704172b87" address="unix:///run/containerd/s/8b2473fb27ec7249442324c477e4dc6827872f7dff9c14ef78eaf9a96abc74e2" protocol=ttrpc version=3 Dec 16 12:30:03.608150 systemd[1]: Started cri-containerd-06d2ca496cc9ea3dea0e19a9401000d41d25cbba98a913e39b445a0704172b87.scope - libcontainer container 06d2ca496cc9ea3dea0e19a9401000d41d25cbba98a913e39b445a0704172b87. Dec 16 12:30:03.638058 containerd[1901]: time="2025-12-16T12:30:03.635544537Z" level=info msg="StartContainer for \"06d2ca496cc9ea3dea0e19a9401000d41d25cbba98a913e39b445a0704172b87\" returns successfully" Dec 16 12:30:03.641146 systemd[1]: cri-containerd-06d2ca496cc9ea3dea0e19a9401000d41d25cbba98a913e39b445a0704172b87.scope: Deactivated successfully. Dec 16 12:30:03.643002 containerd[1901]: time="2025-12-16T12:30:03.642919048Z" level=info msg="received container exit event container_id:\"06d2ca496cc9ea3dea0e19a9401000d41d25cbba98a913e39b445a0704172b87\" id:\"06d2ca496cc9ea3dea0e19a9401000d41d25cbba98a913e39b445a0704172b87\" pid:3869 exited_at:{seconds:1765888203 nanos:642333872}" Dec 16 12:30:03.659825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06d2ca496cc9ea3dea0e19a9401000d41d25cbba98a913e39b445a0704172b87-rootfs.mount: Deactivated successfully. Dec 16 12:30:05.481039 containerd[1901]: time="2025-12-16T12:30:05.480796836Z" level=info msg="CreateContainer within sandbox \"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 16 12:30:05.503437 containerd[1901]: time="2025-12-16T12:30:05.502147218Z" level=info msg="Container c00ee0ee7a6b102b751bd3db02361bf3043b9f657f5848830841a6746dd8c26c: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:05.518096 containerd[1901]: time="2025-12-16T12:30:05.518047074Z" level=info msg="CreateContainer within sandbox \"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c00ee0ee7a6b102b751bd3db02361bf3043b9f657f5848830841a6746dd8c26c\"" Dec 16 12:30:05.520188 containerd[1901]: time="2025-12-16T12:30:05.519959660Z" level=info msg="StartContainer for \"c00ee0ee7a6b102b751bd3db02361bf3043b9f657f5848830841a6746dd8c26c\"" Dec 16 12:30:05.520860 containerd[1901]: time="2025-12-16T12:30:05.520756417Z" level=info msg="connecting to shim c00ee0ee7a6b102b751bd3db02361bf3043b9f657f5848830841a6746dd8c26c" address="unix:///run/containerd/s/8b2473fb27ec7249442324c477e4dc6827872f7dff9c14ef78eaf9a96abc74e2" protocol=ttrpc version=3 Dec 16 12:30:05.545159 systemd[1]: Started cri-containerd-c00ee0ee7a6b102b751bd3db02361bf3043b9f657f5848830841a6746dd8c26c.scope - libcontainer container c00ee0ee7a6b102b751bd3db02361bf3043b9f657f5848830841a6746dd8c26c. 
Dec 16 12:30:05.571583 containerd[1901]: time="2025-12-16T12:30:05.571532129Z" level=info msg="StartContainer for \"c00ee0ee7a6b102b751bd3db02361bf3043b9f657f5848830841a6746dd8c26c\" returns successfully" Dec 16 12:30:05.583661 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 12:30:05.584070 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:30:05.584790 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:30:05.586563 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:30:05.588877 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 12:30:05.589746 systemd[1]: cri-containerd-c00ee0ee7a6b102b751bd3db02361bf3043b9f657f5848830841a6746dd8c26c.scope: Deactivated successfully. Dec 16 12:30:05.593109 containerd[1901]: time="2025-12-16T12:30:05.593065708Z" level=info msg="received container exit event container_id:\"c00ee0ee7a6b102b751bd3db02361bf3043b9f657f5848830841a6746dd8c26c\" id:\"c00ee0ee7a6b102b751bd3db02361bf3043b9f657f5848830841a6746dd8c26c\" pid:3916 exited_at:{seconds:1765888205 nanos:592491253}" Dec 16 12:30:05.607140 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:30:06.382641 containerd[1901]: time="2025-12-16T12:30:06.382454258Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:06.386547 containerd[1901]: time="2025-12-16T12:30:06.386513148Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Dec 16 12:30:06.390188 containerd[1901]: time="2025-12-16T12:30:06.390156515Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:06.391426 containerd[1901]: time="2025-12-16T12:30:06.391396116Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.360505759s" Dec 16 12:30:06.391487 containerd[1901]: time="2025-12-16T12:30:06.391430068Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 16 12:30:06.393450 containerd[1901]: time="2025-12-16T12:30:06.393417408Z" level=info msg="CreateContainer within sandbox \"8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 16 12:30:06.409664 containerd[1901]: time="2025-12-16T12:30:06.409624008Z" level=info msg="Container d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:06.426093 containerd[1901]: time="2025-12-16T12:30:06.426038214Z" level=info msg="CreateContainer within sandbox \"8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1\" for 
&ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1\"" Dec 16 12:30:06.426733 containerd[1901]: time="2025-12-16T12:30:06.426698479Z" level=info msg="StartContainer for \"d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1\"" Dec 16 12:30:06.428265 containerd[1901]: time="2025-12-16T12:30:06.428235407Z" level=info msg="connecting to shim d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1" address="unix:///run/containerd/s/ffb22301c6cd9fec011f0c8ed320738965be5da4019c2dc407a7ddce56e60c3d" protocol=ttrpc version=3 Dec 16 12:30:06.443176 systemd[1]: Started cri-containerd-d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1.scope - libcontainer container d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1. Dec 16 12:30:06.473196 containerd[1901]: time="2025-12-16T12:30:06.473142894Z" level=info msg="StartContainer for \"d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1\" returns successfully" Dec 16 12:30:06.485780 containerd[1901]: time="2025-12-16T12:30:06.485739199Z" level=info msg="CreateContainer within sandbox \"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 16 12:30:06.505302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c00ee0ee7a6b102b751bd3db02361bf3043b9f657f5848830841a6746dd8c26c-rootfs.mount: Deactivated successfully. Dec 16 12:30:06.514128 containerd[1901]: time="2025-12-16T12:30:06.513600272Z" level=info msg="Container b99269a2b9b31c986b38d1dbe871dd1a651163ca7117309dd9e35a7a0a79ca43: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:06.515442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount97087145.mount: Deactivated successfully. Dec 16 12:30:06.538369 containerd[1901]: time="2025-12-16T12:30:06.538324734Z" level=info msg="CreateContainer within sandbox \"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b99269a2b9b31c986b38d1dbe871dd1a651163ca7117309dd9e35a7a0a79ca43\"" Dec 16 12:30:06.539152 containerd[1901]: time="2025-12-16T12:30:06.538925758Z" level=info msg="StartContainer for \"b99269a2b9b31c986b38d1dbe871dd1a651163ca7117309dd9e35a7a0a79ca43\"" Dec 16 12:30:06.541628 containerd[1901]: time="2025-12-16T12:30:06.541536546Z" level=info msg="connecting to shim b99269a2b9b31c986b38d1dbe871dd1a651163ca7117309dd9e35a7a0a79ca43" address="unix:///run/containerd/s/8b2473fb27ec7249442324c477e4dc6827872f7dff9c14ef78eaf9a96abc74e2" protocol=ttrpc version=3 Dec 16 12:30:06.572171 systemd[1]: Started cri-containerd-b99269a2b9b31c986b38d1dbe871dd1a651163ca7117309dd9e35a7a0a79ca43.scope - libcontainer container b99269a2b9b31c986b38d1dbe871dd1a651163ca7117309dd9e35a7a0a79ca43. Dec 16 12:30:06.649524 systemd[1]: cri-containerd-b99269a2b9b31c986b38d1dbe871dd1a651163ca7117309dd9e35a7a0a79ca43.scope: Deactivated successfully. 
Dec 16 12:30:06.653509 containerd[1901]: time="2025-12-16T12:30:06.653473562Z" level=info msg="received container exit event container_id:\"b99269a2b9b31c986b38d1dbe871dd1a651163ca7117309dd9e35a7a0a79ca43\" id:\"b99269a2b9b31c986b38d1dbe871dd1a651163ca7117309dd9e35a7a0a79ca43\" pid:4012 exited_at:{seconds:1765888206 nanos:653327950}" Dec 16 12:30:06.657526 containerd[1901]: time="2025-12-16T12:30:06.657393856Z" level=info msg="StartContainer for \"b99269a2b9b31c986b38d1dbe871dd1a651163ca7117309dd9e35a7a0a79ca43\" returns successfully" Dec 16 12:30:06.676190 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b99269a2b9b31c986b38d1dbe871dd1a651163ca7117309dd9e35a7a0a79ca43-rootfs.mount: Deactivated successfully. Dec 16 12:30:07.493213 containerd[1901]: time="2025-12-16T12:30:07.493100242Z" level=info msg="CreateContainer within sandbox \"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 16 12:30:07.509270 kubelet[3451]: I1216 12:30:07.509222 3451 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-lpgv5" podStartSLOduration=1.622302583 podStartE2EDuration="11.509204703s" podCreationTimestamp="2025-12-16 12:29:56 +0000 UTC" firstStartedPulling="2025-12-16 12:29:56.5052594 +0000 UTC m=+7.164836300" lastFinishedPulling="2025-12-16 12:30:06.392161528 +0000 UTC m=+17.051738420" observedRunningTime="2025-12-16 12:30:06.526965533 +0000 UTC m=+17.186542441" watchObservedRunningTime="2025-12-16 12:30:07.509204703 +0000 UTC m=+18.168781595" Dec 16 12:30:07.520892 containerd[1901]: time="2025-12-16T12:30:07.520855464Z" level=info msg="Container 58f06fa953761209d6adb95f3429f9a5bc8932608bab4b71f957142ecad5a7cd: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:07.521407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1664010825.mount: Deactivated successfully. Dec 16 12:30:07.534641 containerd[1901]: time="2025-12-16T12:30:07.534603311Z" level=info msg="CreateContainer within sandbox \"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"58f06fa953761209d6adb95f3429f9a5bc8932608bab4b71f957142ecad5a7cd\"" Dec 16 12:30:07.535383 containerd[1901]: time="2025-12-16T12:30:07.535359651Z" level=info msg="StartContainer for \"58f06fa953761209d6adb95f3429f9a5bc8932608bab4b71f957142ecad5a7cd\"" Dec 16 12:30:07.537336 containerd[1901]: time="2025-12-16T12:30:07.537288517Z" level=info msg="connecting to shim 58f06fa953761209d6adb95f3429f9a5bc8932608bab4b71f957142ecad5a7cd" address="unix:///run/containerd/s/8b2473fb27ec7249442324c477e4dc6827872f7dff9c14ef78eaf9a96abc74e2" protocol=ttrpc version=3 Dec 16 12:30:07.557162 systemd[1]: Started cri-containerd-58f06fa953761209d6adb95f3429f9a5bc8932608bab4b71f957142ecad5a7cd.scope - libcontainer container 58f06fa953761209d6adb95f3429f9a5bc8932608bab4b71f957142ecad5a7cd. Dec 16 12:30:07.577123 systemd[1]: cri-containerd-58f06fa953761209d6adb95f3429f9a5bc8932608bab4b71f957142ecad5a7cd.scope: Deactivated successfully. 
Dec 16 12:30:07.582203 containerd[1901]: time="2025-12-16T12:30:07.582158187Z" level=info msg="received container exit event container_id:\"58f06fa953761209d6adb95f3429f9a5bc8932608bab4b71f957142ecad5a7cd\" id:\"58f06fa953761209d6adb95f3429f9a5bc8932608bab4b71f957142ecad5a7cd\" pid:4051 exited_at:{seconds:1765888207 nanos:578795171}" Dec 16 12:30:07.583705 containerd[1901]: time="2025-12-16T12:30:07.583579096Z" level=info msg="StartContainer for \"58f06fa953761209d6adb95f3429f9a5bc8932608bab4b71f957142ecad5a7cd\" returns successfully" Dec 16 12:30:07.598654 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58f06fa953761209d6adb95f3429f9a5bc8932608bab4b71f957142ecad5a7cd-rootfs.mount: Deactivated successfully. Dec 16 12:30:08.496850 containerd[1901]: time="2025-12-16T12:30:08.496671369Z" level=info msg="CreateContainer within sandbox \"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 16 12:30:08.532251 containerd[1901]: time="2025-12-16T12:30:08.532208275Z" level=info msg="Container 670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:08.548722 containerd[1901]: time="2025-12-16T12:30:08.548677137Z" level=info msg="CreateContainer within sandbox \"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b\"" Dec 16 12:30:08.550055 containerd[1901]: time="2025-12-16T12:30:08.549297674Z" level=info msg="StartContainer for \"670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b\"" Dec 16 12:30:08.550387 containerd[1901]: time="2025-12-16T12:30:08.550305980Z" level=info msg="connecting to shim 670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b" address="unix:///run/containerd/s/8b2473fb27ec7249442324c477e4dc6827872f7dff9c14ef78eaf9a96abc74e2" protocol=ttrpc version=3 Dec 16 12:30:08.569153 systemd[1]: Started cri-containerd-670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b.scope - libcontainer container 670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b. Dec 16 12:30:08.604547 containerd[1901]: time="2025-12-16T12:30:08.604493717Z" level=info msg="StartContainer for \"670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b\" returns successfully" Dec 16 12:30:08.746084 kubelet[3451]: I1216 12:30:08.745369 3451 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 12:30:08.791341 systemd[1]: Created slice kubepods-burstable-pod1f984a6a_ba54_4f49_bba8_93ddab69d4b3.slice - libcontainer container kubepods-burstable-pod1f984a6a_ba54_4f49_bba8_93ddab69d4b3.slice. Dec 16 12:30:08.803550 systemd[1]: Created slice kubepods-burstable-podff10024d_beda_4b9d_b06f_19d51e5ff94b.slice - libcontainer container kubepods-burstable-podff10024d_beda_4b9d_b06f_19d51e5ff94b.slice. 
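
The container exit events above carry exited_at as epoch seconds plus nanoseconds; converting the clean-cilium-state exit (seconds:1765888207 nanos:578795171) back to UTC lines up with the Dec 16 12:30:07 journal timestamps around it. A small conversion sketch, values taken directly from that exit event:

# Converting the exited_at {seconds, nanos} pair from the containerd exit event above
# back into the wall-clock form used by the rest of the journal.
from datetime import datetime, timezone

seconds, nanos = 1765888207, 578795171
ts = datetime.fromtimestamp(seconds, tz=timezone.utc).replace(microsecond=nanos // 1000)
print(ts.isoformat())   # 2025-12-16T12:30:07.578795+00:00
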
Dec 16 12:30:08.809166 kubelet[3451]: I1216 12:30:08.809134 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7z8d\" (UniqueName: \"kubernetes.io/projected/ff10024d-beda-4b9d-b06f-19d51e5ff94b-kube-api-access-k7z8d\") pod \"coredns-668d6bf9bc-bxwd8\" (UID: \"ff10024d-beda-4b9d-b06f-19d51e5ff94b\") " pod="kube-system/coredns-668d6bf9bc-bxwd8" Dec 16 12:30:08.809166 kubelet[3451]: I1216 12:30:08.809168 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkkhj\" (UniqueName: \"kubernetes.io/projected/1f984a6a-ba54-4f49-bba8-93ddab69d4b3-kube-api-access-gkkhj\") pod \"coredns-668d6bf9bc-qvhdz\" (UID: \"1f984a6a-ba54-4f49-bba8-93ddab69d4b3\") " pod="kube-system/coredns-668d6bf9bc-qvhdz" Dec 16 12:30:08.809166 kubelet[3451]: I1216 12:30:08.809183 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f984a6a-ba54-4f49-bba8-93ddab69d4b3-config-volume\") pod \"coredns-668d6bf9bc-qvhdz\" (UID: \"1f984a6a-ba54-4f49-bba8-93ddab69d4b3\") " pod="kube-system/coredns-668d6bf9bc-qvhdz" Dec 16 12:30:08.809335 kubelet[3451]: I1216 12:30:08.809197 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff10024d-beda-4b9d-b06f-19d51e5ff94b-config-volume\") pod \"coredns-668d6bf9bc-bxwd8\" (UID: \"ff10024d-beda-4b9d-b06f-19d51e5ff94b\") " pod="kube-system/coredns-668d6bf9bc-bxwd8" Dec 16 12:30:09.098422 containerd[1901]: time="2025-12-16T12:30:09.097975775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qvhdz,Uid:1f984a6a-ba54-4f49-bba8-93ddab69d4b3,Namespace:kube-system,Attempt:0,}" Dec 16 12:30:09.109462 containerd[1901]: time="2025-12-16T12:30:09.109300896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bxwd8,Uid:ff10024d-beda-4b9d-b06f-19d51e5ff94b,Namespace:kube-system,Attempt:0,}" Dec 16 12:30:10.626639 systemd-networkd[1492]: cilium_host: Link UP Dec 16 12:30:10.626718 systemd-networkd[1492]: cilium_net: Link UP Dec 16 12:30:10.626798 systemd-networkd[1492]: cilium_net: Gained carrier Dec 16 12:30:10.626870 systemd-networkd[1492]: cilium_host: Gained carrier Dec 16 12:30:10.764852 systemd-networkd[1492]: cilium_vxlan: Link UP Dec 16 12:30:10.764857 systemd-networkd[1492]: cilium_vxlan: Gained carrier Dec 16 12:30:11.071124 kernel: NET: Registered PF_ALG protocol family Dec 16 12:30:11.123197 systemd-networkd[1492]: cilium_host: Gained IPv6LL Dec 16 12:30:11.307140 systemd-networkd[1492]: cilium_net: Gained IPv6LL Dec 16 12:30:11.621080 systemd-networkd[1492]: lxc_health: Link UP Dec 16 12:30:11.621323 systemd-networkd[1492]: lxc_health: Gained carrier Dec 16 12:30:12.128703 systemd-networkd[1492]: lxc079112574ea6: Link UP Dec 16 12:30:12.141073 kernel: eth0: renamed from tmp46199 Dec 16 12:30:12.142989 systemd-networkd[1492]: cilium_vxlan: Gained IPv6LL Dec 16 12:30:12.147087 systemd-networkd[1492]: lxc079112574ea6: Gained carrier Dec 16 12:30:12.149154 systemd-networkd[1492]: lxc9cccde618540: Link UP Dec 16 12:30:12.160037 kernel: eth0: renamed from tmpc6f5a Dec 16 12:30:12.160571 systemd-networkd[1492]: lxc9cccde618540: Gained carrier Dec 16 12:30:12.229167 kubelet[3451]: I1216 12:30:12.229107 3451 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bkjfq" 
podStartSLOduration=10.539002596 podStartE2EDuration="17.229089559s" podCreationTimestamp="2025-12-16 12:29:55 +0000 UTC" firstStartedPulling="2025-12-16 12:29:56.340481913 +0000 UTC m=+7.000058813" lastFinishedPulling="2025-12-16 12:30:03.030568884 +0000 UTC m=+13.690145776" observedRunningTime="2025-12-16 12:30:09.525990058 +0000 UTC m=+20.185566958" watchObservedRunningTime="2025-12-16 12:30:12.229089559 +0000 UTC m=+22.888666459" Dec 16 12:30:13.163252 systemd-networkd[1492]: lxc_health: Gained IPv6LL Dec 16 12:30:13.291191 systemd-networkd[1492]: lxc9cccde618540: Gained IPv6LL Dec 16 12:30:13.483243 systemd-networkd[1492]: lxc079112574ea6: Gained IPv6LL Dec 16 12:30:14.761184 containerd[1901]: time="2025-12-16T12:30:14.759533099Z" level=info msg="connecting to shim c6f5a1421dd9dccfc3b1b759a81b20405d039f54dc6c1f53e4bca7c994c86ec0" address="unix:///run/containerd/s/72f9fc7bc0cd19b8b4bbb92534c77216911c5b98c3e60e59838bf7660187a57f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:14.774660 containerd[1901]: time="2025-12-16T12:30:14.774427706Z" level=info msg="connecting to shim 4619963c553a8ae71f57ffdee01f621c19fd23185da95ea379eb51505fce4479" address="unix:///run/containerd/s/438907f2cc4c5cdec47b2900df08e3210bce913bc8acfc2616218d01e854b971" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:14.790173 systemd[1]: Started cri-containerd-c6f5a1421dd9dccfc3b1b759a81b20405d039f54dc6c1f53e4bca7c994c86ec0.scope - libcontainer container c6f5a1421dd9dccfc3b1b759a81b20405d039f54dc6c1f53e4bca7c994c86ec0. Dec 16 12:30:14.793873 systemd[1]: Started cri-containerd-4619963c553a8ae71f57ffdee01f621c19fd23185da95ea379eb51505fce4479.scope - libcontainer container 4619963c553a8ae71f57ffdee01f621c19fd23185da95ea379eb51505fce4479. Dec 16 12:30:14.833183 containerd[1901]: time="2025-12-16T12:30:14.833014880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bxwd8,Uid:ff10024d-beda-4b9d-b06f-19d51e5ff94b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6f5a1421dd9dccfc3b1b759a81b20405d039f54dc6c1f53e4bca7c994c86ec0\"" Dec 16 12:30:14.836249 containerd[1901]: time="2025-12-16T12:30:14.836213808Z" level=info msg="CreateContainer within sandbox \"c6f5a1421dd9dccfc3b1b759a81b20405d039f54dc6c1f53e4bca7c994c86ec0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:30:14.837111 containerd[1901]: time="2025-12-16T12:30:14.837079783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qvhdz,Uid:1f984a6a-ba54-4f49-bba8-93ddab69d4b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"4619963c553a8ae71f57ffdee01f621c19fd23185da95ea379eb51505fce4479\"" Dec 16 12:30:14.841195 containerd[1901]: time="2025-12-16T12:30:14.841005291Z" level=info msg="CreateContainer within sandbox \"4619963c553a8ae71f57ffdee01f621c19fd23185da95ea379eb51505fce4479\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:30:14.860086 containerd[1901]: time="2025-12-16T12:30:14.860041522Z" level=info msg="Container c03cff1551b58f870a9ae11e622f4397bad747a3aae6b03e86f01fcbed9abfd4: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:14.877898 containerd[1901]: time="2025-12-16T12:30:14.877851840Z" level=info msg="CreateContainer within sandbox \"c6f5a1421dd9dccfc3b1b759a81b20405d039f54dc6c1f53e4bca7c994c86ec0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c03cff1551b58f870a9ae11e622f4397bad747a3aae6b03e86f01fcbed9abfd4\"" Dec 16 12:30:14.879441 containerd[1901]: time="2025-12-16T12:30:14.879405626Z" level=info 
msg="StartContainer for \"c03cff1551b58f870a9ae11e622f4397bad747a3aae6b03e86f01fcbed9abfd4\"" Dec 16 12:30:14.881063 containerd[1901]: time="2025-12-16T12:30:14.881032943Z" level=info msg="connecting to shim c03cff1551b58f870a9ae11e622f4397bad747a3aae6b03e86f01fcbed9abfd4" address="unix:///run/containerd/s/72f9fc7bc0cd19b8b4bbb92534c77216911c5b98c3e60e59838bf7660187a57f" protocol=ttrpc version=3 Dec 16 12:30:14.884254 containerd[1901]: time="2025-12-16T12:30:14.884219054Z" level=info msg="Container 124b1ce3b953732296179e57f3ba6ad178a1bb420c04a344dda829dd1ff03ef6: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:14.899291 systemd[1]: Started cri-containerd-c03cff1551b58f870a9ae11e622f4397bad747a3aae6b03e86f01fcbed9abfd4.scope - libcontainer container c03cff1551b58f870a9ae11e622f4397bad747a3aae6b03e86f01fcbed9abfd4. Dec 16 12:30:14.905290 containerd[1901]: time="2025-12-16T12:30:14.905243292Z" level=info msg="CreateContainer within sandbox \"4619963c553a8ae71f57ffdee01f621c19fd23185da95ea379eb51505fce4479\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"124b1ce3b953732296179e57f3ba6ad178a1bb420c04a344dda829dd1ff03ef6\"" Dec 16 12:30:14.906571 containerd[1901]: time="2025-12-16T12:30:14.906541807Z" level=info msg="StartContainer for \"124b1ce3b953732296179e57f3ba6ad178a1bb420c04a344dda829dd1ff03ef6\"" Dec 16 12:30:14.907693 containerd[1901]: time="2025-12-16T12:30:14.907672678Z" level=info msg="connecting to shim 124b1ce3b953732296179e57f3ba6ad178a1bb420c04a344dda829dd1ff03ef6" address="unix:///run/containerd/s/438907f2cc4c5cdec47b2900df08e3210bce913bc8acfc2616218d01e854b971" protocol=ttrpc version=3 Dec 16 12:30:14.933284 systemd[1]: Started cri-containerd-124b1ce3b953732296179e57f3ba6ad178a1bb420c04a344dda829dd1ff03ef6.scope - libcontainer container 124b1ce3b953732296179e57f3ba6ad178a1bb420c04a344dda829dd1ff03ef6. 
Dec 16 12:30:14.947945 containerd[1901]: time="2025-12-16T12:30:14.947894079Z" level=info msg="StartContainer for \"c03cff1551b58f870a9ae11e622f4397bad747a3aae6b03e86f01fcbed9abfd4\" returns successfully" Dec 16 12:30:14.979074 containerd[1901]: time="2025-12-16T12:30:14.978424321Z" level=info msg="StartContainer for \"124b1ce3b953732296179e57f3ba6ad178a1bb420c04a344dda829dd1ff03ef6\" returns successfully" Dec 16 12:30:15.528569 kubelet[3451]: I1216 12:30:15.528489 3451 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qvhdz" podStartSLOduration=19.528371296 podStartE2EDuration="19.528371296s" podCreationTimestamp="2025-12-16 12:29:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:30:15.527714542 +0000 UTC m=+26.187291434" watchObservedRunningTime="2025-12-16 12:30:15.528371296 +0000 UTC m=+26.187948188" Dec 16 12:30:15.559284 kubelet[3451]: I1216 12:30:15.558543 3451 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bxwd8" podStartSLOduration=19.558526247 podStartE2EDuration="19.558526247s" podCreationTimestamp="2025-12-16 12:29:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:30:15.558322257 +0000 UTC m=+26.217899165" watchObservedRunningTime="2025-12-16 12:30:15.558526247 +0000 UTC m=+26.218103163" Dec 16 12:30:50.030985 systemd[1]: Started sshd@7-10.200.20.37:22-10.200.16.10:47294.service - OpenSSH per-connection server daemon (10.200.16.10:47294). Dec 16 12:30:50.530637 sshd[4771]: Accepted publickey for core from 10.200.16.10 port 47294 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:30:50.531845 sshd-session[4771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:30:50.535447 systemd-logind[1878]: New session 10 of user core. Dec 16 12:30:50.546616 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:30:50.934544 sshd[4774]: Connection closed by 10.200.16.10 port 47294 Dec 16 12:30:50.935233 sshd-session[4771]: pam_unix(sshd:session): session closed for user core Dec 16 12:30:50.938459 systemd[1]: sshd@7-10.200.20.37:22-10.200.16.10:47294.service: Deactivated successfully. Dec 16 12:30:50.940238 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:30:50.941579 systemd-logind[1878]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:30:50.943477 systemd-logind[1878]: Removed session 10. Dec 16 12:30:56.035228 systemd[1]: Started sshd@8-10.200.20.37:22-10.200.16.10:47306.service - OpenSSH per-connection server daemon (10.200.16.10:47306). Dec 16 12:30:56.526413 sshd[4787]: Accepted publickey for core from 10.200.16.10 port 47306 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:30:56.527521 sshd-session[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:30:56.531264 systemd-logind[1878]: New session 11 of user core. Dec 16 12:30:56.542141 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 12:30:56.915920 sshd[4791]: Connection closed by 10.200.16.10 port 47306 Dec 16 12:30:56.916331 sshd-session[4787]: pam_unix(sshd:session): session closed for user core Dec 16 12:30:56.919816 systemd[1]: sshd@8-10.200.20.37:22-10.200.16.10:47306.service: Deactivated successfully. 
Dec 16 12:30:56.921872 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:30:56.922635 systemd-logind[1878]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:30:56.924382 systemd-logind[1878]: Removed session 11. Dec 16 12:31:02.009825 systemd[1]: Started sshd@9-10.200.20.37:22-10.200.16.10:36754.service - OpenSSH per-connection server daemon (10.200.16.10:36754). Dec 16 12:31:02.501140 sshd[4806]: Accepted publickey for core from 10.200.16.10 port 36754 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:31:02.502239 sshd-session[4806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:02.508302 systemd-logind[1878]: New session 12 of user core. Dec 16 12:31:02.515447 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 12:31:02.890515 sshd[4809]: Connection closed by 10.200.16.10 port 36754 Dec 16 12:31:02.891166 sshd-session[4806]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:02.894991 systemd[1]: sshd@9-10.200.20.37:22-10.200.16.10:36754.service: Deactivated successfully. Dec 16 12:31:02.897864 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:31:02.899644 systemd-logind[1878]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:31:02.901524 systemd-logind[1878]: Removed session 12. Dec 16 12:31:07.968438 systemd[1]: Started sshd@10-10.200.20.37:22-10.200.16.10:36756.service - OpenSSH per-connection server daemon (10.200.16.10:36756). Dec 16 12:31:08.425323 sshd[4822]: Accepted publickey for core from 10.200.16.10 port 36756 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:31:08.426390 sshd-session[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:08.430035 systemd-logind[1878]: New session 13 of user core. Dec 16 12:31:08.440147 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 12:31:08.792074 sshd[4825]: Connection closed by 10.200.16.10 port 36756 Dec 16 12:31:08.792660 sshd-session[4822]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:08.795752 systemd[1]: sshd@10-10.200.20.37:22-10.200.16.10:36756.service: Deactivated successfully. Dec 16 12:31:08.798339 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:31:08.799389 systemd-logind[1878]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:31:08.800647 systemd-logind[1878]: Removed session 13. Dec 16 12:31:08.885591 systemd[1]: Started sshd@11-10.200.20.37:22-10.200.16.10:36764.service - OpenSSH per-connection server daemon (10.200.16.10:36764). Dec 16 12:31:09.376592 sshd[4837]: Accepted publickey for core from 10.200.16.10 port 36764 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:31:09.377696 sshd-session[4837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:09.381464 systemd-logind[1878]: New session 14 of user core. Dec 16 12:31:09.387153 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 12:31:09.858523 sshd[4840]: Connection closed by 10.200.16.10 port 36764 Dec 16 12:31:09.858012 sshd-session[4837]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:09.861140 systemd[1]: sshd@11-10.200.20.37:22-10.200.16.10:36764.service: Deactivated successfully. Dec 16 12:31:09.863306 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:31:09.864311 systemd-logind[1878]: Session 14 logged out. Waiting for processes to exit. 
Dec 16 12:31:09.866798 systemd-logind[1878]: Removed session 14. Dec 16 12:31:09.947012 systemd[1]: Started sshd@12-10.200.20.37:22-10.200.16.10:36780.service - OpenSSH per-connection server daemon (10.200.16.10:36780). Dec 16 12:31:10.404061 sshd[4850]: Accepted publickey for core from 10.200.16.10 port 36780 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:31:10.405194 sshd-session[4850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:10.408939 systemd-logind[1878]: New session 15 of user core. Dec 16 12:31:10.417174 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 12:31:10.777999 sshd[4853]: Connection closed by 10.200.16.10 port 36780 Dec 16 12:31:10.778598 sshd-session[4850]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:10.781680 systemd[1]: sshd@12-10.200.20.37:22-10.200.16.10:36780.service: Deactivated successfully. Dec 16 12:31:10.783632 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:31:10.784564 systemd-logind[1878]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:31:10.785548 systemd-logind[1878]: Removed session 15. Dec 16 12:31:15.866933 systemd[1]: Started sshd@13-10.200.20.37:22-10.200.16.10:54042.service - OpenSSH per-connection server daemon (10.200.16.10:54042). Dec 16 12:31:16.359746 sshd[4866]: Accepted publickey for core from 10.200.16.10 port 54042 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:31:16.360807 sshd-session[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:16.364602 systemd-logind[1878]: New session 16 of user core. Dec 16 12:31:16.371136 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 12:31:16.749056 sshd[4869]: Connection closed by 10.200.16.10 port 54042 Dec 16 12:31:16.749409 sshd-session[4866]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:16.753368 systemd[1]: sshd@13-10.200.20.37:22-10.200.16.10:54042.service: Deactivated successfully. Dec 16 12:31:16.754948 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:31:16.755642 systemd-logind[1878]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:31:16.756866 systemd-logind[1878]: Removed session 16. Dec 16 12:31:16.846991 systemd[1]: Started sshd@14-10.200.20.37:22-10.200.16.10:54054.service - OpenSSH per-connection server daemon (10.200.16.10:54054). Dec 16 12:31:17.343980 sshd[4881]: Accepted publickey for core from 10.200.16.10 port 54054 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:31:17.345125 sshd-session[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:17.348891 systemd-logind[1878]: New session 17 of user core. Dec 16 12:31:17.361142 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 12:31:17.764058 sshd[4884]: Connection closed by 10.200.16.10 port 54054 Dec 16 12:31:17.764592 sshd-session[4881]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:17.768232 systemd[1]: sshd@14-10.200.20.37:22-10.200.16.10:54054.service: Deactivated successfully. Dec 16 12:31:17.770295 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:31:17.771352 systemd-logind[1878]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:31:17.773520 systemd-logind[1878]: Removed session 17. 
Dec 16 12:31:17.833139 systemd[1]: Started sshd@15-10.200.20.37:22-10.200.16.10:54060.service - OpenSSH per-connection server daemon (10.200.16.10:54060). Dec 16 12:31:18.249279 sshd[4894]: Accepted publickey for core from 10.200.16.10 port 54060 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:31:18.250370 sshd-session[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:18.254161 systemd-logind[1878]: New session 18 of user core. Dec 16 12:31:18.266154 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 12:31:19.005876 sshd[4897]: Connection closed by 10.200.16.10 port 54060 Dec 16 12:31:19.006426 sshd-session[4894]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:19.010665 systemd-logind[1878]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:31:19.010916 systemd[1]: sshd@15-10.200.20.37:22-10.200.16.10:54060.service: Deactivated successfully. Dec 16 12:31:19.013777 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:31:19.016628 systemd-logind[1878]: Removed session 18. Dec 16 12:31:19.102584 systemd[1]: Started sshd@16-10.200.20.37:22-10.200.16.10:54068.service - OpenSSH per-connection server daemon (10.200.16.10:54068). Dec 16 12:31:19.595005 sshd[4914]: Accepted publickey for core from 10.200.16.10 port 54068 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:31:19.596117 sshd-session[4914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:19.599846 systemd-logind[1878]: New session 19 of user core. Dec 16 12:31:19.607327 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 12:31:20.115732 sshd[4917]: Connection closed by 10.200.16.10 port 54068 Dec 16 12:31:20.116355 sshd-session[4914]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:20.119877 systemd-logind[1878]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:31:20.120037 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:31:20.121083 systemd[1]: sshd@16-10.200.20.37:22-10.200.16.10:54068.service: Deactivated successfully. Dec 16 12:31:20.123983 systemd-logind[1878]: Removed session 19. Dec 16 12:31:20.210984 systemd[1]: Started sshd@17-10.200.20.37:22-10.200.16.10:54302.service - OpenSSH per-connection server daemon (10.200.16.10:54302). Dec 16 12:31:20.700159 sshd[4927]: Accepted publickey for core from 10.200.16.10 port 54302 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:31:20.701346 sshd-session[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:20.704998 systemd-logind[1878]: New session 20 of user core. Dec 16 12:31:20.712156 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 12:31:21.090936 sshd[4930]: Connection closed by 10.200.16.10 port 54302 Dec 16 12:31:21.091097 sshd-session[4927]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:21.094656 systemd[1]: sshd@17-10.200.20.37:22-10.200.16.10:54302.service: Deactivated successfully. Dec 16 12:31:21.096815 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:31:21.097726 systemd-logind[1878]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:31:21.099390 systemd-logind[1878]: Removed session 20. Dec 16 12:31:26.177676 systemd[1]: Started sshd@18-10.200.20.37:22-10.200.16.10:54318.service - OpenSSH per-connection server daemon (10.200.16.10:54318). 
Dec 16 12:31:26.667315 sshd[4944]: Accepted publickey for core from 10.200.16.10 port 54318 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:31:26.669809 sshd-session[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:26.673808 systemd-logind[1878]: New session 21 of user core. Dec 16 12:31:26.681153 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 12:31:27.057383 sshd[4950]: Connection closed by 10.200.16.10 port 54318 Dec 16 12:31:27.057951 sshd-session[4944]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:27.060790 systemd-logind[1878]: Session 21 logged out. Waiting for processes to exit. Dec 16 12:31:27.060920 systemd[1]: sshd@18-10.200.20.37:22-10.200.16.10:54318.service: Deactivated successfully. Dec 16 12:31:27.062470 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 12:31:27.065963 systemd-logind[1878]: Removed session 21. Dec 16 12:31:32.141255 systemd[1]: Started sshd@19-10.200.20.37:22-10.200.16.10:36822.service - OpenSSH per-connection server daemon (10.200.16.10:36822). Dec 16 12:31:32.604294 sshd[4962]: Accepted publickey for core from 10.200.16.10 port 36822 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:31:32.605402 sshd-session[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:32.609066 systemd-logind[1878]: New session 22 of user core. Dec 16 12:31:32.617142 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 12:31:32.976857 sshd[4965]: Connection closed by 10.200.16.10 port 36822 Dec 16 12:31:32.977141 sshd-session[4962]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:32.980828 systemd-logind[1878]: Session 22 logged out. Waiting for processes to exit. Dec 16 12:31:32.981280 systemd[1]: sshd@19-10.200.20.37:22-10.200.16.10:36822.service: Deactivated successfully. Dec 16 12:31:32.983651 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 12:31:32.985474 systemd-logind[1878]: Removed session 22. Dec 16 12:31:38.059516 systemd[1]: Started sshd@20-10.200.20.37:22-10.200.16.10:36830.service - OpenSSH per-connection server daemon (10.200.16.10:36830). Dec 16 12:31:38.513189 sshd[4977]: Accepted publickey for core from 10.200.16.10 port 36830 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:31:38.514278 sshd-session[4977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:38.517942 systemd-logind[1878]: New session 23 of user core. Dec 16 12:31:38.528138 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 12:31:38.884240 sshd[4980]: Connection closed by 10.200.16.10 port 36830 Dec 16 12:31:38.885217 sshd-session[4977]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:38.888615 systemd[1]: sshd@20-10.200.20.37:22-10.200.16.10:36830.service: Deactivated successfully. Dec 16 12:31:38.890551 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 12:31:38.892140 systemd-logind[1878]: Session 23 logged out. Waiting for processes to exit. Dec 16 12:31:38.893786 systemd-logind[1878]: Removed session 23. Dec 16 12:31:38.972831 systemd[1]: Started sshd@21-10.200.20.37:22-10.200.16.10:36834.service - OpenSSH per-connection server daemon (10.200.16.10:36834). 
Dec 16 12:31:39.467064 sshd[4991]: Accepted publickey for core from 10.200.16.10 port 36834 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:31:39.467815 sshd-session[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:39.471539 systemd-logind[1878]: New session 24 of user core. Dec 16 12:31:39.477244 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 12:31:41.103514 containerd[1901]: time="2025-12-16T12:31:41.103465291Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:31:41.108905 containerd[1901]: time="2025-12-16T12:31:41.108871341Z" level=info msg="StopContainer for \"670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b\" with timeout 2 (s)" Dec 16 12:31:41.109488 containerd[1901]: time="2025-12-16T12:31:41.109435492Z" level=info msg="Stop container \"670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b\" with signal terminated" Dec 16 12:31:41.114427 systemd-networkd[1492]: lxc_health: Link DOWN Dec 16 12:31:41.114699 systemd-networkd[1492]: lxc_health: Lost carrier Dec 16 12:31:41.128694 systemd[1]: cri-containerd-670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b.scope: Deactivated successfully. Dec 16 12:31:41.129214 systemd[1]: cri-containerd-670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b.scope: Consumed 4.428s CPU time, 122.8M memory peak, 128K read from disk, 12.9M written to disk. Dec 16 12:31:41.131590 containerd[1901]: time="2025-12-16T12:31:41.131480524Z" level=info msg="received container exit event container_id:\"670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b\" id:\"670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b\" pid:4089 exited_at:{seconds:1765888301 nanos:131214517}" Dec 16 12:31:41.146743 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b-rootfs.mount: Deactivated successfully. Dec 16 12:31:41.195988 containerd[1901]: time="2025-12-16T12:31:41.195940193Z" level=info msg="StopContainer for \"d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1\" with timeout 30 (s)" Dec 16 12:31:41.208532 containerd[1901]: time="2025-12-16T12:31:41.196509928Z" level=info msg="Stop container \"d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1\" with signal terminated" Dec 16 12:31:41.213255 systemd[1]: cri-containerd-d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1.scope: Deactivated successfully. 
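The "failed to reload cni configuration" error at the top of this block is containerd reacting to the removal of /etc/cni/net.d/05-cilium.conf as the Cilium agent shuts down: it watches the CNI config directory for filesystem change events and reloads its network config on each one, which fails once no config files remain. A minimal sketch of such a watch loop, using the fsnotify package rather than containerd's actual implementation:

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// Watch the CNI configuration directory referenced in the log above.
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case ev := <-w.Events:
			// A reload would be attempted here; with the last .conf file removed,
			// it fails with "no network config found in /etc/cni/net.d".
			log.Printf("fs change event %s %q - reloading CNI config", ev.Op, ev.Name)
		case err := <-w.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}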
Dec 16 12:31:41.213830 containerd[1901]: time="2025-12-16T12:31:41.213798033Z" level=info msg="received container exit event container_id:\"d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1\" id:\"d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1\" pid:3978 exited_at:{seconds:1765888301 nanos:213239954}" Dec 16 12:31:41.221038 containerd[1901]: time="2025-12-16T12:31:41.220620120Z" level=info msg="StopContainer for \"670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b\" returns successfully" Dec 16 12:31:41.221109 containerd[1901]: time="2025-12-16T12:31:41.221095701Z" level=info msg="StopPodSandbox for \"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\"" Dec 16 12:31:41.221161 containerd[1901]: time="2025-12-16T12:31:41.221141198Z" level=info msg="Container to stop \"c00ee0ee7a6b102b751bd3db02361bf3043b9f657f5848830841a6746dd8c26c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:31:41.221161 containerd[1901]: time="2025-12-16T12:31:41.221155535Z" level=info msg="Container to stop \"b99269a2b9b31c986b38d1dbe871dd1a651163ca7117309dd9e35a7a0a79ca43\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:31:41.221161 containerd[1901]: time="2025-12-16T12:31:41.221161423Z" level=info msg="Container to stop \"58f06fa953761209d6adb95f3429f9a5bc8932608bab4b71f957142ecad5a7cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:31:41.221228 containerd[1901]: time="2025-12-16T12:31:41.221168407Z" level=info msg="Container to stop \"670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:31:41.221228 containerd[1901]: time="2025-12-16T12:31:41.221174087Z" level=info msg="Container to stop \"06d2ca496cc9ea3dea0e19a9401000d41d25cbba98a913e39b445a0704172b87\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:31:41.227352 systemd[1]: cri-containerd-e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553.scope: Deactivated successfully. Dec 16 12:31:41.229458 containerd[1901]: time="2025-12-16T12:31:41.229426965Z" level=info msg="received sandbox exit event container_id:\"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\" id:\"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\" exit_status:137 exited_at:{seconds:1765888301 nanos:229207159}" monitor_name=podsandbox Dec 16 12:31:41.238682 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1-rootfs.mount: Deactivated successfully. Dec 16 12:31:41.247181 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553-rootfs.mount: Deactivated successfully. 
Dec 16 12:31:41.274622 containerd[1901]: time="2025-12-16T12:31:41.274450311Z" level=info msg="shim disconnected" id=e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553 namespace=k8s.io Dec 16 12:31:41.274622 containerd[1901]: time="2025-12-16T12:31:41.274480600Z" level=warning msg="cleaning up after shim disconnected" id=e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553 namespace=k8s.io Dec 16 12:31:41.274622 containerd[1901]: time="2025-12-16T12:31:41.274505177Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 12:31:41.282347 containerd[1901]: time="2025-12-16T12:31:41.282314139Z" level=info msg="StopContainer for \"d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1\" returns successfully" Dec 16 12:31:41.283455 containerd[1901]: time="2025-12-16T12:31:41.283389168Z" level=info msg="StopPodSandbox for \"8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1\"" Dec 16 12:31:41.283526 containerd[1901]: time="2025-12-16T12:31:41.283477906Z" level=info msg="Container to stop \"d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:31:41.285722 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553-shm.mount: Deactivated successfully. Dec 16 12:31:41.285929 containerd[1901]: time="2025-12-16T12:31:41.285903907Z" level=info msg="received sandbox container exit event sandbox_id:\"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\" exit_status:137 exited_at:{seconds:1765888301 nanos:229207159}" monitor_name=criService Dec 16 12:31:41.287089 containerd[1901]: time="2025-12-16T12:31:41.286376008Z" level=info msg="TearDown network for sandbox \"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\" successfully" Dec 16 12:31:41.287089 containerd[1901]: time="2025-12-16T12:31:41.286612558Z" level=info msg="StopPodSandbox for \"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\" returns successfully" Dec 16 12:31:41.295396 systemd[1]: cri-containerd-8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1.scope: Deactivated successfully. Dec 16 12:31:41.305651 containerd[1901]: time="2025-12-16T12:31:41.305250219Z" level=info msg="received sandbox exit event container_id:\"8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1\" id:\"8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1\" exit_status:137 exited_at:{seconds:1765888301 nanos:304367476}" monitor_name=podsandbox Dec 16 12:31:41.322341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1-rootfs.mount: Deactivated successfully. 
Dec 16 12:31:41.333241 containerd[1901]: time="2025-12-16T12:31:41.333052799Z" level=info msg="shim disconnected" id=8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1 namespace=k8s.io Dec 16 12:31:41.333241 containerd[1901]: time="2025-12-16T12:31:41.333083343Z" level=warning msg="cleaning up after shim disconnected" id=8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1 namespace=k8s.io Dec 16 12:31:41.333241 containerd[1901]: time="2025-12-16T12:31:41.333108232Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 12:31:41.341138 containerd[1901]: time="2025-12-16T12:31:41.341097135Z" level=info msg="received sandbox container exit event sandbox_id:\"8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1\" exit_status:137 exited_at:{seconds:1765888301 nanos:304367476}" monitor_name=criService Dec 16 12:31:41.341517 containerd[1901]: time="2025-12-16T12:31:41.341431808Z" level=info msg="TearDown network for sandbox \"8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1\" successfully" Dec 16 12:31:41.341517 containerd[1901]: time="2025-12-16T12:31:41.341451416Z" level=info msg="StopPodSandbox for \"8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1\" returns successfully" Dec 16 12:31:41.399746 kubelet[3451]: I1216 12:31:41.399553 3451 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-etc-cni-netd\") pod \"591f4981-bdad-4905-89a4-2bb53e21dcb8\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " Dec 16 12:31:41.400537 kubelet[3451]: I1216 12:31:41.400191 3451 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-cni-path\") pod \"591f4981-bdad-4905-89a4-2bb53e21dcb8\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " Dec 16 12:31:41.400537 kubelet[3451]: I1216 12:31:41.400283 3451 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-lib-modules\") pod \"591f4981-bdad-4905-89a4-2bb53e21dcb8\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " Dec 16 12:31:41.400537 kubelet[3451]: I1216 12:31:41.399645 3451 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "591f4981-bdad-4905-89a4-2bb53e21dcb8" (UID: "591f4981-bdad-4905-89a4-2bb53e21dcb8"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:31:41.400537 kubelet[3451]: I1216 12:31:41.400318 3451 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-cilium-cgroup\") pod \"591f4981-bdad-4905-89a4-2bb53e21dcb8\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " Dec 16 12:31:41.400537 kubelet[3451]: I1216 12:31:41.400340 3451 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmqn7\" (UniqueName: \"kubernetes.io/projected/591f4981-bdad-4905-89a4-2bb53e21dcb8-kube-api-access-cmqn7\") pod \"591f4981-bdad-4905-89a4-2bb53e21dcb8\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " Dec 16 12:31:41.400537 kubelet[3451]: I1216 12:31:41.400350 3451 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-cni-path" (OuterVolumeSpecName: "cni-path") pod "591f4981-bdad-4905-89a4-2bb53e21dcb8" (UID: "591f4981-bdad-4905-89a4-2bb53e21dcb8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:31:41.400798 kubelet[3451]: I1216 12:31:41.400354 3451 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-host-proc-sys-net\") pod \"591f4981-bdad-4905-89a4-2bb53e21dcb8\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " Dec 16 12:31:41.400798 kubelet[3451]: I1216 12:31:41.400373 3451 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-xtables-lock\") pod \"591f4981-bdad-4905-89a4-2bb53e21dcb8\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " Dec 16 12:31:41.400798 kubelet[3451]: I1216 12:31:41.400392 3451 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-host-proc-sys-kernel\") pod \"591f4981-bdad-4905-89a4-2bb53e21dcb8\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " Dec 16 12:31:41.400798 kubelet[3451]: I1216 12:31:41.400410 3451 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/591f4981-bdad-4905-89a4-2bb53e21dcb8-hubble-tls\") pod \"591f4981-bdad-4905-89a4-2bb53e21dcb8\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " Dec 16 12:31:41.400798 kubelet[3451]: I1216 12:31:41.400425 3451 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/591f4981-bdad-4905-89a4-2bb53e21dcb8-cilium-config-path\") pod \"591f4981-bdad-4905-89a4-2bb53e21dcb8\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " Dec 16 12:31:41.400798 kubelet[3451]: I1216 12:31:41.400437 3451 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-bpf-maps\") pod \"591f4981-bdad-4905-89a4-2bb53e21dcb8\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " Dec 16 12:31:41.400893 kubelet[3451]: I1216 12:31:41.400450 3451 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/591f4981-bdad-4905-89a4-2bb53e21dcb8-clustermesh-secrets\") pod \"591f4981-bdad-4905-89a4-2bb53e21dcb8\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " Dec 16 12:31:41.400893 kubelet[3451]: I1216 12:31:41.400469 3451 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f038b8d-8607-4673-98e7-85030005a9e6-cilium-config-path\") pod \"6f038b8d-8607-4673-98e7-85030005a9e6\" (UID: \"6f038b8d-8607-4673-98e7-85030005a9e6\") " Dec 16 12:31:41.400893 kubelet[3451]: I1216 12:31:41.400480 3451 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlfjx\" (UniqueName: \"kubernetes.io/projected/6f038b8d-8607-4673-98e7-85030005a9e6-kube-api-access-dlfjx\") pod \"6f038b8d-8607-4673-98e7-85030005a9e6\" (UID: \"6f038b8d-8607-4673-98e7-85030005a9e6\") " Dec 16 12:31:41.400893 kubelet[3451]: I1216 12:31:41.400489 3451 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-cilium-run\") pod \"591f4981-bdad-4905-89a4-2bb53e21dcb8\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " Dec 16 12:31:41.400893 kubelet[3451]: I1216 12:31:41.400499 3451 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-hostproc\") pod \"591f4981-bdad-4905-89a4-2bb53e21dcb8\" (UID: \"591f4981-bdad-4905-89a4-2bb53e21dcb8\") " Dec 16 12:31:41.400893 kubelet[3451]: I1216 12:31:41.400631 3451 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-cni-path\") on node \"ci-4459.2.2-a-7f44347f41\" DevicePath \"\"" Dec 16 12:31:41.400979 kubelet[3451]: I1216 12:31:41.400373 3451 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "591f4981-bdad-4905-89a4-2bb53e21dcb8" (UID: "591f4981-bdad-4905-89a4-2bb53e21dcb8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:31:41.400979 kubelet[3451]: I1216 12:31:41.400381 3451 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "591f4981-bdad-4905-89a4-2bb53e21dcb8" (UID: "591f4981-bdad-4905-89a4-2bb53e21dcb8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:31:41.400979 kubelet[3451]: I1216 12:31:41.400655 3451 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-hostproc" (OuterVolumeSpecName: "hostproc") pod "591f4981-bdad-4905-89a4-2bb53e21dcb8" (UID: "591f4981-bdad-4905-89a4-2bb53e21dcb8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:31:41.400979 kubelet[3451]: I1216 12:31:41.400665 3451 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "591f4981-bdad-4905-89a4-2bb53e21dcb8" (UID: "591f4981-bdad-4905-89a4-2bb53e21dcb8"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:31:41.401713 kubelet[3451]: I1216 12:31:41.400671 3451 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "591f4981-bdad-4905-89a4-2bb53e21dcb8" (UID: "591f4981-bdad-4905-89a4-2bb53e21dcb8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:31:41.401713 kubelet[3451]: I1216 12:31:41.401087 3451 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "591f4981-bdad-4905-89a4-2bb53e21dcb8" (UID: "591f4981-bdad-4905-89a4-2bb53e21dcb8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:31:41.402577 kubelet[3451]: I1216 12:31:41.402551 3451 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "591f4981-bdad-4905-89a4-2bb53e21dcb8" (UID: "591f4981-bdad-4905-89a4-2bb53e21dcb8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:31:41.404445 kubelet[3451]: I1216 12:31:41.404414 3451 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f038b8d-8607-4673-98e7-85030005a9e6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6f038b8d-8607-4673-98e7-85030005a9e6" (UID: "6f038b8d-8607-4673-98e7-85030005a9e6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:31:41.405602 kubelet[3451]: I1216 12:31:41.405582 3451 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "591f4981-bdad-4905-89a4-2bb53e21dcb8" (UID: "591f4981-bdad-4905-89a4-2bb53e21dcb8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:31:41.407509 kubelet[3451]: I1216 12:31:41.407477 3451 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/591f4981-bdad-4905-89a4-2bb53e21dcb8-kube-api-access-cmqn7" (OuterVolumeSpecName: "kube-api-access-cmqn7") pod "591f4981-bdad-4905-89a4-2bb53e21dcb8" (UID: "591f4981-bdad-4905-89a4-2bb53e21dcb8"). InnerVolumeSpecName "kube-api-access-cmqn7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:31:41.409700 kubelet[3451]: I1216 12:31:41.409648 3451 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/591f4981-bdad-4905-89a4-2bb53e21dcb8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "591f4981-bdad-4905-89a4-2bb53e21dcb8" (UID: "591f4981-bdad-4905-89a4-2bb53e21dcb8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:31:41.410178 kubelet[3451]: I1216 12:31:41.410145 3451 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f038b8d-8607-4673-98e7-85030005a9e6-kube-api-access-dlfjx" (OuterVolumeSpecName: "kube-api-access-dlfjx") pod "6f038b8d-8607-4673-98e7-85030005a9e6" (UID: "6f038b8d-8607-4673-98e7-85030005a9e6"). 
InnerVolumeSpecName "kube-api-access-dlfjx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:31:41.411245 kubelet[3451]: I1216 12:31:41.411223 3451 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/591f4981-bdad-4905-89a4-2bb53e21dcb8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "591f4981-bdad-4905-89a4-2bb53e21dcb8" (UID: "591f4981-bdad-4905-89a4-2bb53e21dcb8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:31:41.411346 kubelet[3451]: I1216 12:31:41.411300 3451 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/591f4981-bdad-4905-89a4-2bb53e21dcb8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "591f4981-bdad-4905-89a4-2bb53e21dcb8" (UID: "591f4981-bdad-4905-89a4-2bb53e21dcb8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:31:41.415560 systemd[1]: Removed slice kubepods-besteffort-pod6f038b8d_8607_4673_98e7_85030005a9e6.slice - libcontainer container kubepods-besteffort-pod6f038b8d_8607_4673_98e7_85030005a9e6.slice. Dec 16 12:31:41.501534 kubelet[3451]: I1216 12:31:41.501490 3451 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-host-proc-sys-net\") on node \"ci-4459.2.2-a-7f44347f41\" DevicePath \"\"" Dec 16 12:31:41.501534 kubelet[3451]: I1216 12:31:41.501527 3451 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-lib-modules\") on node \"ci-4459.2.2-a-7f44347f41\" DevicePath \"\"" Dec 16 12:31:41.501534 kubelet[3451]: I1216 12:31:41.501535 3451 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-cilium-cgroup\") on node \"ci-4459.2.2-a-7f44347f41\" DevicePath \"\"" Dec 16 12:31:41.501534 kubelet[3451]: I1216 12:31:41.501541 3451 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cmqn7\" (UniqueName: \"kubernetes.io/projected/591f4981-bdad-4905-89a4-2bb53e21dcb8-kube-api-access-cmqn7\") on node \"ci-4459.2.2-a-7f44347f41\" DevicePath \"\"" Dec 16 12:31:41.501534 kubelet[3451]: I1216 12:31:41.501549 3451 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/591f4981-bdad-4905-89a4-2bb53e21dcb8-hubble-tls\") on node \"ci-4459.2.2-a-7f44347f41\" DevicePath \"\"" Dec 16 12:31:41.501534 kubelet[3451]: I1216 12:31:41.501555 3451 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-xtables-lock\") on node \"ci-4459.2.2-a-7f44347f41\" DevicePath \"\"" Dec 16 12:31:41.501779 kubelet[3451]: I1216 12:31:41.501560 3451 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-host-proc-sys-kernel\") on node \"ci-4459.2.2-a-7f44347f41\" DevicePath \"\"" Dec 16 12:31:41.501779 kubelet[3451]: I1216 12:31:41.501566 3451 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/591f4981-bdad-4905-89a4-2bb53e21dcb8-cilium-config-path\") on node \"ci-4459.2.2-a-7f44347f41\" DevicePath \"\"" Dec 16 12:31:41.501779 kubelet[3451]: I1216 12:31:41.501571 
3451 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-bpf-maps\") on node \"ci-4459.2.2-a-7f44347f41\" DevicePath \"\"" Dec 16 12:31:41.501779 kubelet[3451]: I1216 12:31:41.501578 3451 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/591f4981-bdad-4905-89a4-2bb53e21dcb8-clustermesh-secrets\") on node \"ci-4459.2.2-a-7f44347f41\" DevicePath \"\"" Dec 16 12:31:41.501779 kubelet[3451]: I1216 12:31:41.501587 3451 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f038b8d-8607-4673-98e7-85030005a9e6-cilium-config-path\") on node \"ci-4459.2.2-a-7f44347f41\" DevicePath \"\"" Dec 16 12:31:41.501779 kubelet[3451]: I1216 12:31:41.501593 3451 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dlfjx\" (UniqueName: \"kubernetes.io/projected/6f038b8d-8607-4673-98e7-85030005a9e6-kube-api-access-dlfjx\") on node \"ci-4459.2.2-a-7f44347f41\" DevicePath \"\"" Dec 16 12:31:41.501779 kubelet[3451]: I1216 12:31:41.501599 3451 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-hostproc\") on node \"ci-4459.2.2-a-7f44347f41\" DevicePath \"\"" Dec 16 12:31:41.501779 kubelet[3451]: I1216 12:31:41.501606 3451 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-cilium-run\") on node \"ci-4459.2.2-a-7f44347f41\" DevicePath \"\"" Dec 16 12:31:41.501894 kubelet[3451]: I1216 12:31:41.501611 3451 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/591f4981-bdad-4905-89a4-2bb53e21dcb8-etc-cni-netd\") on node \"ci-4459.2.2-a-7f44347f41\" DevicePath \"\"" Dec 16 12:31:41.677650 kubelet[3451]: I1216 12:31:41.676657 3451 scope.go:117] "RemoveContainer" containerID="670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b" Dec 16 12:31:41.679964 containerd[1901]: time="2025-12-16T12:31:41.679849201Z" level=info msg="RemoveContainer for \"670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b\"" Dec 16 12:31:41.688116 systemd[1]: Removed slice kubepods-burstable-pod591f4981_bdad_4905_89a4_2bb53e21dcb8.slice - libcontainer container kubepods-burstable-pod591f4981_bdad_4905_89a4_2bb53e21dcb8.slice. Dec 16 12:31:41.688358 systemd[1]: kubepods-burstable-pod591f4981_bdad_4905_89a4_2bb53e21dcb8.slice: Consumed 4.493s CPU time, 123.2M memory peak, 128K read from disk, 12.9M written to disk. 
Dec 16 12:31:41.692536 containerd[1901]: time="2025-12-16T12:31:41.692486372Z" level=info msg="RemoveContainer for \"670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b\" returns successfully" Dec 16 12:31:41.692941 kubelet[3451]: I1216 12:31:41.692879 3451 scope.go:117] "RemoveContainer" containerID="58f06fa953761209d6adb95f3429f9a5bc8932608bab4b71f957142ecad5a7cd" Dec 16 12:31:41.694472 containerd[1901]: time="2025-12-16T12:31:41.694127536Z" level=info msg="RemoveContainer for \"58f06fa953761209d6adb95f3429f9a5bc8932608bab4b71f957142ecad5a7cd\"" Dec 16 12:31:41.704788 containerd[1901]: time="2025-12-16T12:31:41.704761942Z" level=info msg="RemoveContainer for \"58f06fa953761209d6adb95f3429f9a5bc8932608bab4b71f957142ecad5a7cd\" returns successfully" Dec 16 12:31:41.705057 kubelet[3451]: I1216 12:31:41.705032 3451 scope.go:117] "RemoveContainer" containerID="b99269a2b9b31c986b38d1dbe871dd1a651163ca7117309dd9e35a7a0a79ca43" Dec 16 12:31:41.706630 containerd[1901]: time="2025-12-16T12:31:41.706609520Z" level=info msg="RemoveContainer for \"b99269a2b9b31c986b38d1dbe871dd1a651163ca7117309dd9e35a7a0a79ca43\"" Dec 16 12:31:41.715012 containerd[1901]: time="2025-12-16T12:31:41.714988153Z" level=info msg="RemoveContainer for \"b99269a2b9b31c986b38d1dbe871dd1a651163ca7117309dd9e35a7a0a79ca43\" returns successfully" Dec 16 12:31:41.715339 kubelet[3451]: I1216 12:31:41.715286 3451 scope.go:117] "RemoveContainer" containerID="c00ee0ee7a6b102b751bd3db02361bf3043b9f657f5848830841a6746dd8c26c" Dec 16 12:31:41.716639 containerd[1901]: time="2025-12-16T12:31:41.716586028Z" level=info msg="RemoveContainer for \"c00ee0ee7a6b102b751bd3db02361bf3043b9f657f5848830841a6746dd8c26c\"" Dec 16 12:31:41.726988 containerd[1901]: time="2025-12-16T12:31:41.726956323Z" level=info msg="RemoveContainer for \"c00ee0ee7a6b102b751bd3db02361bf3043b9f657f5848830841a6746dd8c26c\" returns successfully" Dec 16 12:31:41.727231 kubelet[3451]: I1216 12:31:41.727207 3451 scope.go:117] "RemoveContainer" containerID="06d2ca496cc9ea3dea0e19a9401000d41d25cbba98a913e39b445a0704172b87" Dec 16 12:31:41.728592 containerd[1901]: time="2025-12-16T12:31:41.728565926Z" level=info msg="RemoveContainer for \"06d2ca496cc9ea3dea0e19a9401000d41d25cbba98a913e39b445a0704172b87\"" Dec 16 12:31:41.737964 containerd[1901]: time="2025-12-16T12:31:41.737939706Z" level=info msg="RemoveContainer for \"06d2ca496cc9ea3dea0e19a9401000d41d25cbba98a913e39b445a0704172b87\" returns successfully" Dec 16 12:31:41.738151 kubelet[3451]: I1216 12:31:41.738126 3451 scope.go:117] "RemoveContainer" containerID="670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b" Dec 16 12:31:41.738365 containerd[1901]: time="2025-12-16T12:31:41.738287347Z" level=error msg="ContainerStatus for \"670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b\": not found" Dec 16 12:31:41.738422 kubelet[3451]: E1216 12:31:41.738399 3451 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b\": not found" containerID="670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b" Dec 16 12:31:41.738515 kubelet[3451]: I1216 12:31:41.738439 3451 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b"} err="failed to get container status \"670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b\": rpc error: code = NotFound desc = an error occurred when try to find container \"670f3d320118865c2aa8f5fd2added0c7ed0d53f0ee8830f491192718988381b\": not found" Dec 16 12:31:41.738515 kubelet[3451]: I1216 12:31:41.738510 3451 scope.go:117] "RemoveContainer" containerID="58f06fa953761209d6adb95f3429f9a5bc8932608bab4b71f957142ecad5a7cd" Dec 16 12:31:41.738712 containerd[1901]: time="2025-12-16T12:31:41.738652485Z" level=error msg="ContainerStatus for \"58f06fa953761209d6adb95f3429f9a5bc8932608bab4b71f957142ecad5a7cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"58f06fa953761209d6adb95f3429f9a5bc8932608bab4b71f957142ecad5a7cd\": not found" Dec 16 12:31:41.738878 kubelet[3451]: E1216 12:31:41.738860 3451 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"58f06fa953761209d6adb95f3429f9a5bc8932608bab4b71f957142ecad5a7cd\": not found" containerID="58f06fa953761209d6adb95f3429f9a5bc8932608bab4b71f957142ecad5a7cd" Dec 16 12:31:41.738925 kubelet[3451]: I1216 12:31:41.738879 3451 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"58f06fa953761209d6adb95f3429f9a5bc8932608bab4b71f957142ecad5a7cd"} err="failed to get container status \"58f06fa953761209d6adb95f3429f9a5bc8932608bab4b71f957142ecad5a7cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"58f06fa953761209d6adb95f3429f9a5bc8932608bab4b71f957142ecad5a7cd\": not found" Dec 16 12:31:41.738925 kubelet[3451]: I1216 12:31:41.738890 3451 scope.go:117] "RemoveContainer" containerID="b99269a2b9b31c986b38d1dbe871dd1a651163ca7117309dd9e35a7a0a79ca43" Dec 16 12:31:41.739158 containerd[1901]: time="2025-12-16T12:31:41.739111714Z" level=error msg="ContainerStatus for \"b99269a2b9b31c986b38d1dbe871dd1a651163ca7117309dd9e35a7a0a79ca43\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b99269a2b9b31c986b38d1dbe871dd1a651163ca7117309dd9e35a7a0a79ca43\": not found" Dec 16 12:31:41.739205 kubelet[3451]: E1216 12:31:41.739189 3451 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b99269a2b9b31c986b38d1dbe871dd1a651163ca7117309dd9e35a7a0a79ca43\": not found" containerID="b99269a2b9b31c986b38d1dbe871dd1a651163ca7117309dd9e35a7a0a79ca43" Dec 16 12:31:41.739243 kubelet[3451]: I1216 12:31:41.739222 3451 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b99269a2b9b31c986b38d1dbe871dd1a651163ca7117309dd9e35a7a0a79ca43"} err="failed to get container status \"b99269a2b9b31c986b38d1dbe871dd1a651163ca7117309dd9e35a7a0a79ca43\": rpc error: code = NotFound desc = an error occurred when try to find container \"b99269a2b9b31c986b38d1dbe871dd1a651163ca7117309dd9e35a7a0a79ca43\": not found" Dec 16 12:31:41.739243 kubelet[3451]: I1216 12:31:41.739239 3451 scope.go:117] "RemoveContainer" containerID="c00ee0ee7a6b102b751bd3db02361bf3043b9f657f5848830841a6746dd8c26c" Dec 16 12:31:41.739386 containerd[1901]: time="2025-12-16T12:31:41.739364232Z" level=error msg="ContainerStatus for \"c00ee0ee7a6b102b751bd3db02361bf3043b9f657f5848830841a6746dd8c26c\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"c00ee0ee7a6b102b751bd3db02361bf3043b9f657f5848830841a6746dd8c26c\": not found" Dec 16 12:31:41.739550 kubelet[3451]: E1216 12:31:41.739524 3451 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c00ee0ee7a6b102b751bd3db02361bf3043b9f657f5848830841a6746dd8c26c\": not found" containerID="c00ee0ee7a6b102b751bd3db02361bf3043b9f657f5848830841a6746dd8c26c" Dec 16 12:31:41.739550 kubelet[3451]: I1216 12:31:41.739545 3451 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c00ee0ee7a6b102b751bd3db02361bf3043b9f657f5848830841a6746dd8c26c"} err="failed to get container status \"c00ee0ee7a6b102b751bd3db02361bf3043b9f657f5848830841a6746dd8c26c\": rpc error: code = NotFound desc = an error occurred when try to find container \"c00ee0ee7a6b102b751bd3db02361bf3043b9f657f5848830841a6746dd8c26c\": not found" Dec 16 12:31:41.739618 kubelet[3451]: I1216 12:31:41.739556 3451 scope.go:117] "RemoveContainer" containerID="06d2ca496cc9ea3dea0e19a9401000d41d25cbba98a913e39b445a0704172b87" Dec 16 12:31:41.739804 containerd[1901]: time="2025-12-16T12:31:41.739773755Z" level=error msg="ContainerStatus for \"06d2ca496cc9ea3dea0e19a9401000d41d25cbba98a913e39b445a0704172b87\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"06d2ca496cc9ea3dea0e19a9401000d41d25cbba98a913e39b445a0704172b87\": not found" Dec 16 12:31:41.739907 kubelet[3451]: E1216 12:31:41.739883 3451 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"06d2ca496cc9ea3dea0e19a9401000d41d25cbba98a913e39b445a0704172b87\": not found" containerID="06d2ca496cc9ea3dea0e19a9401000d41d25cbba98a913e39b445a0704172b87" Dec 16 12:31:41.739933 kubelet[3451]: I1216 12:31:41.739906 3451 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"06d2ca496cc9ea3dea0e19a9401000d41d25cbba98a913e39b445a0704172b87"} err="failed to get container status \"06d2ca496cc9ea3dea0e19a9401000d41d25cbba98a913e39b445a0704172b87\": rpc error: code = NotFound desc = an error occurred when try to find container \"06d2ca496cc9ea3dea0e19a9401000d41d25cbba98a913e39b445a0704172b87\": not found" Dec 16 12:31:41.739933 kubelet[3451]: I1216 12:31:41.739918 3451 scope.go:117] "RemoveContainer" containerID="d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1" Dec 16 12:31:41.741095 containerd[1901]: time="2025-12-16T12:31:41.741075926Z" level=info msg="RemoveContainer for \"d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1\"" Dec 16 12:31:41.755389 containerd[1901]: time="2025-12-16T12:31:41.755313253Z" level=info msg="RemoveContainer for \"d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1\" returns successfully" Dec 16 12:31:41.755506 kubelet[3451]: I1216 12:31:41.755480 3451 scope.go:117] "RemoveContainer" containerID="d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1" Dec 16 12:31:41.755767 containerd[1901]: time="2025-12-16T12:31:41.755717152Z" level=error msg="ContainerStatus for \"d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1\": not found" Dec 16 12:31:41.755825 
kubelet[3451]: E1216 12:31:41.755799 3451 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1\": not found" containerID="d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1" Dec 16 12:31:41.755825 kubelet[3451]: I1216 12:31:41.755818 3451 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1"} err="failed to get container status \"d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1\": rpc error: code = NotFound desc = an error occurred when try to find container \"d0ff21469b85fc5f5c3624502a5d5817d803ed810cfe6eb3740e4813b8f1dfd1\": not found" Dec 16 12:31:42.146890 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1-shm.mount: Deactivated successfully. Dec 16 12:31:42.146995 systemd[1]: var-lib-kubelet-pods-6f038b8d\x2d8607\x2d4673\x2d98e7\x2d85030005a9e6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddlfjx.mount: Deactivated successfully. Dec 16 12:31:42.147393 systemd[1]: var-lib-kubelet-pods-591f4981\x2dbdad\x2d4905\x2d89a4\x2d2bb53e21dcb8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcmqn7.mount: Deactivated successfully. Dec 16 12:31:42.147457 systemd[1]: var-lib-kubelet-pods-591f4981\x2dbdad\x2d4905\x2d89a4\x2d2bb53e21dcb8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 16 12:31:42.147497 systemd[1]: var-lib-kubelet-pods-591f4981\x2dbdad\x2d4905\x2d89a4\x2d2bb53e21dcb8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 16 12:31:43.048607 sshd[4994]: Connection closed by 10.200.16.10 port 36834 Dec 16 12:31:43.048954 sshd-session[4991]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:43.052693 systemd[1]: sshd@21-10.200.20.37:22-10.200.16.10:36834.service: Deactivated successfully. Dec 16 12:31:43.055404 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 12:31:43.056193 systemd-logind[1878]: Session 24 logged out. Waiting for processes to exit. Dec 16 12:31:43.057650 systemd-logind[1878]: Removed session 24. Dec 16 12:31:43.128847 systemd[1]: Started sshd@22-10.200.20.37:22-10.200.16.10:53664.service - OpenSSH per-connection server daemon (10.200.16.10:53664). Dec 16 12:31:43.401979 kubelet[3451]: I1216 12:31:43.401867 3451 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="591f4981-bdad-4905-89a4-2bb53e21dcb8" path="/var/lib/kubelet/pods/591f4981-bdad-4905-89a4-2bb53e21dcb8/volumes" Dec 16 12:31:43.402317 kubelet[3451]: I1216 12:31:43.402279 3451 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f038b8d-8607-4673-98e7-85030005a9e6" path="/var/lib/kubelet/pods/6f038b8d-8607-4673-98e7-85030005a9e6/volumes" Dec 16 12:31:43.582846 sshd[5142]: Accepted publickey for core from 10.200.16.10 port 53664 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:31:43.584016 sshd-session[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:43.587579 systemd-logind[1878]: New session 25 of user core. Dec 16 12:31:43.598161 systemd[1]: Started session-25.scope - Session 25 of User core. 
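The repeated "ContainerStatus ... not found" errors above are benign: the kubelet queries the runtime for containers it has just removed, the CRI call returns a gRPC NotFound status, and the caller treats that as "already gone" rather than a failure. A minimal sketch of that pattern (an assumed helper, not kubelet code):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyGone reports whether a CRI call failed only because the container
// no longer exists, i.e. the runtime answered with a gRPC NotFound status.
func alreadyGone(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	// Stand-in for the runtime's response to ContainerStatus on a removed container ID.
	err := status.Error(codes.NotFound, "an error occurred when try to find container: not found")
	fmt.Println(alreadyGone(err)) // true: treat as already removed, not a hard failure
}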
Dec 16 12:31:44.486864 kubelet[3451]: E1216 12:31:44.486803 3451 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 16 12:31:44.521133 kubelet[3451]: I1216 12:31:44.521096 3451 memory_manager.go:355] "RemoveStaleState removing state" podUID="6f038b8d-8607-4673-98e7-85030005a9e6" containerName="cilium-operator" Dec 16 12:31:44.521133 kubelet[3451]: I1216 12:31:44.521124 3451 memory_manager.go:355] "RemoveStaleState removing state" podUID="591f4981-bdad-4905-89a4-2bb53e21dcb8" containerName="cilium-agent" Dec 16 12:31:44.529226 kubelet[3451]: W1216 12:31:44.528809 3451 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4459.2.2-a-7f44347f41" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459.2.2-a-7f44347f41' and this object Dec 16 12:31:44.529226 kubelet[3451]: E1216 12:31:44.528853 3451 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4459.2.2-a-7f44347f41\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.2.2-a-7f44347f41' and this object" logger="UnhandledError" Dec 16 12:31:44.529226 kubelet[3451]: W1216 12:31:44.528893 3451 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4459.2.2-a-7f44347f41" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459.2.2-a-7f44347f41' and this object Dec 16 12:31:44.529226 kubelet[3451]: E1216 12:31:44.528902 3451 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4459.2.2-a-7f44347f41\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.2.2-a-7f44347f41' and this object" logger="UnhandledError" Dec 16 12:31:44.529226 kubelet[3451]: W1216 12:31:44.528922 3451 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4459.2.2-a-7f44347f41" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459.2.2-a-7f44347f41' and this object Dec 16 12:31:44.529438 kubelet[3451]: E1216 12:31:44.528928 3451 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4459.2.2-a-7f44347f41\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.2.2-a-7f44347f41' and this object" logger="UnhandledError" Dec 16 12:31:44.529438 kubelet[3451]: W1216 12:31:44.528952 3451 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4459.2.2-a-7f44347f41" cannot list resource 
"configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459.2.2-a-7f44347f41' and this object Dec 16 12:31:44.529438 kubelet[3451]: E1216 12:31:44.528958 3451 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4459.2.2-a-7f44347f41\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.2.2-a-7f44347f41' and this object" logger="UnhandledError" Dec 16 12:31:44.529438 kubelet[3451]: I1216 12:31:44.529131 3451 status_manager.go:890] "Failed to get status for pod" podUID="149793f6-b28c-44e1-b95e-0e69e6f336eb" pod="kube-system/cilium-b2qsn" err="pods \"cilium-b2qsn\" is forbidden: User \"system:node:ci-4459.2.2-a-7f44347f41\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.2.2-a-7f44347f41' and this object" Dec 16 12:31:44.530178 systemd[1]: Created slice kubepods-burstable-pod149793f6_b28c_44e1_b95e_0e69e6f336eb.slice - libcontainer container kubepods-burstable-pod149793f6_b28c_44e1_b95e_0e69e6f336eb.slice. Dec 16 12:31:44.579099 sshd[5145]: Connection closed by 10.200.16.10 port 53664 Dec 16 12:31:44.579818 sshd-session[5142]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:44.582455 systemd[1]: sshd@22-10.200.20.37:22-10.200.16.10:53664.service: Deactivated successfully. Dec 16 12:31:44.584013 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 12:31:44.585202 systemd-logind[1878]: Session 25 logged out. Waiting for processes to exit. Dec 16 12:31:44.586829 systemd-logind[1878]: Removed session 25. 
Dec 16 12:31:44.619393 kubelet[3451]: I1216 12:31:44.619304 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/149793f6-b28c-44e1-b95e-0e69e6f336eb-xtables-lock\") pod \"cilium-b2qsn\" (UID: \"149793f6-b28c-44e1-b95e-0e69e6f336eb\") " pod="kube-system/cilium-b2qsn" Dec 16 12:31:44.619393 kubelet[3451]: I1216 12:31:44.619344 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/149793f6-b28c-44e1-b95e-0e69e6f336eb-host-proc-sys-net\") pod \"cilium-b2qsn\" (UID: \"149793f6-b28c-44e1-b95e-0e69e6f336eb\") " pod="kube-system/cilium-b2qsn" Dec 16 12:31:44.619393 kubelet[3451]: I1216 12:31:44.619359 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/149793f6-b28c-44e1-b95e-0e69e6f336eb-cilium-config-path\") pod \"cilium-b2qsn\" (UID: \"149793f6-b28c-44e1-b95e-0e69e6f336eb\") " pod="kube-system/cilium-b2qsn" Dec 16 12:31:44.619393 kubelet[3451]: I1216 12:31:44.619371 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/149793f6-b28c-44e1-b95e-0e69e6f336eb-bpf-maps\") pod \"cilium-b2qsn\" (UID: \"149793f6-b28c-44e1-b95e-0e69e6f336eb\") " pod="kube-system/cilium-b2qsn" Dec 16 12:31:44.619393 kubelet[3451]: I1216 12:31:44.619385 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/149793f6-b28c-44e1-b95e-0e69e6f336eb-cni-path\") pod \"cilium-b2qsn\" (UID: \"149793f6-b28c-44e1-b95e-0e69e6f336eb\") " pod="kube-system/cilium-b2qsn" Dec 16 12:31:44.619393 kubelet[3451]: I1216 12:31:44.619396 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/149793f6-b28c-44e1-b95e-0e69e6f336eb-hubble-tls\") pod \"cilium-b2qsn\" (UID: \"149793f6-b28c-44e1-b95e-0e69e6f336eb\") " pod="kube-system/cilium-b2qsn" Dec 16 12:31:44.619589 kubelet[3451]: I1216 12:31:44.619405 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/149793f6-b28c-44e1-b95e-0e69e6f336eb-cilium-run\") pod \"cilium-b2qsn\" (UID: \"149793f6-b28c-44e1-b95e-0e69e6f336eb\") " pod="kube-system/cilium-b2qsn" Dec 16 12:31:44.619589 kubelet[3451]: I1216 12:31:44.619416 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/149793f6-b28c-44e1-b95e-0e69e6f336eb-lib-modules\") pod \"cilium-b2qsn\" (UID: \"149793f6-b28c-44e1-b95e-0e69e6f336eb\") " pod="kube-system/cilium-b2qsn" Dec 16 12:31:44.619589 kubelet[3451]: I1216 12:31:44.619426 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/149793f6-b28c-44e1-b95e-0e69e6f336eb-clustermesh-secrets\") pod \"cilium-b2qsn\" (UID: \"149793f6-b28c-44e1-b95e-0e69e6f336eb\") " pod="kube-system/cilium-b2qsn" Dec 16 12:31:44.619589 kubelet[3451]: I1216 12:31:44.619435 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m7l4\" (UniqueName: 
\"kubernetes.io/projected/149793f6-b28c-44e1-b95e-0e69e6f336eb-kube-api-access-6m7l4\") pod \"cilium-b2qsn\" (UID: \"149793f6-b28c-44e1-b95e-0e69e6f336eb\") " pod="kube-system/cilium-b2qsn" Dec 16 12:31:44.619589 kubelet[3451]: I1216 12:31:44.619447 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/149793f6-b28c-44e1-b95e-0e69e6f336eb-etc-cni-netd\") pod \"cilium-b2qsn\" (UID: \"149793f6-b28c-44e1-b95e-0e69e6f336eb\") " pod="kube-system/cilium-b2qsn" Dec 16 12:31:44.619589 kubelet[3451]: I1216 12:31:44.619460 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/149793f6-b28c-44e1-b95e-0e69e6f336eb-hostproc\") pod \"cilium-b2qsn\" (UID: \"149793f6-b28c-44e1-b95e-0e69e6f336eb\") " pod="kube-system/cilium-b2qsn" Dec 16 12:31:44.619680 kubelet[3451]: I1216 12:31:44.619468 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/149793f6-b28c-44e1-b95e-0e69e6f336eb-cilium-cgroup\") pod \"cilium-b2qsn\" (UID: \"149793f6-b28c-44e1-b95e-0e69e6f336eb\") " pod="kube-system/cilium-b2qsn" Dec 16 12:31:44.619680 kubelet[3451]: I1216 12:31:44.619481 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/149793f6-b28c-44e1-b95e-0e69e6f336eb-cilium-ipsec-secrets\") pod \"cilium-b2qsn\" (UID: \"149793f6-b28c-44e1-b95e-0e69e6f336eb\") " pod="kube-system/cilium-b2qsn" Dec 16 12:31:44.619680 kubelet[3451]: I1216 12:31:44.619494 3451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/149793f6-b28c-44e1-b95e-0e69e6f336eb-host-proc-sys-kernel\") pod \"cilium-b2qsn\" (UID: \"149793f6-b28c-44e1-b95e-0e69e6f336eb\") " pod="kube-system/cilium-b2qsn" Dec 16 12:31:44.670249 systemd[1]: Started sshd@23-10.200.20.37:22-10.200.16.10:53668.service - OpenSSH per-connection server daemon (10.200.16.10:53668). Dec 16 12:31:45.154404 sshd[5156]: Accepted publickey for core from 10.200.16.10 port 53668 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:31:45.156347 sshd-session[5156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:45.159840 systemd-logind[1878]: New session 26 of user core. Dec 16 12:31:45.168338 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 16 12:31:45.500620 sshd[5160]: Connection closed by 10.200.16.10 port 53668 Dec 16 12:31:45.501209 sshd-session[5156]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:45.504713 systemd-logind[1878]: Session 26 logged out. Waiting for processes to exit. Dec 16 12:31:45.504814 systemd[1]: sshd@23-10.200.20.37:22-10.200.16.10:53668.service: Deactivated successfully. Dec 16 12:31:45.506662 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 12:31:45.508080 systemd-logind[1878]: Removed session 26. Dec 16 12:31:45.584214 systemd[1]: Started sshd@24-10.200.20.37:22-10.200.16.10:53684.service - OpenSSH per-connection server daemon (10.200.16.10:53684). 
Dec 16 12:31:45.720371 kubelet[3451]: E1216 12:31:45.720328 3451 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Dec 16 12:31:45.720891 kubelet[3451]: E1216 12:31:45.720434 3451 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/149793f6-b28c-44e1-b95e-0e69e6f336eb-cilium-config-path podName:149793f6-b28c-44e1-b95e-0e69e6f336eb nodeName:}" failed. No retries permitted until 2025-12-16 12:31:46.220400921 +0000 UTC m=+116.879977821 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/149793f6-b28c-44e1-b95e-0e69e6f336eb-cilium-config-path") pod "cilium-b2qsn" (UID: "149793f6-b28c-44e1-b95e-0e69e6f336eb") : failed to sync configmap cache: timed out waiting for the condition Dec 16 12:31:45.720891 kubelet[3451]: E1216 12:31:45.720342 3451 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Dec 16 12:31:45.720891 kubelet[3451]: E1216 12:31:45.720466 3451 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-b2qsn: failed to sync secret cache: timed out waiting for the condition Dec 16 12:31:45.720891 kubelet[3451]: E1216 12:31:45.720515 3451 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/149793f6-b28c-44e1-b95e-0e69e6f336eb-hubble-tls podName:149793f6-b28c-44e1-b95e-0e69e6f336eb nodeName:}" failed. No retries permitted until 2025-12-16 12:31:46.220504188 +0000 UTC m=+116.880081080 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/149793f6-b28c-44e1-b95e-0e69e6f336eb-hubble-tls") pod "cilium-b2qsn" (UID: "149793f6-b28c-44e1-b95e-0e69e6f336eb") : failed to sync secret cache: timed out waiting for the condition Dec 16 12:31:45.720891 kubelet[3451]: E1216 12:31:45.720451 3451 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Dec 16 12:31:45.721124 kubelet[3451]: E1216 12:31:45.720540 3451 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/149793f6-b28c-44e1-b95e-0e69e6f336eb-clustermesh-secrets podName:149793f6-b28c-44e1-b95e-0e69e6f336eb nodeName:}" failed. No retries permitted until 2025-12-16 12:31:46.220534509 +0000 UTC m=+116.880111401 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/149793f6-b28c-44e1-b95e-0e69e6f336eb-clustermesh-secrets") pod "cilium-b2qsn" (UID: "149793f6-b28c-44e1-b95e-0e69e6f336eb") : failed to sync secret cache: timed out waiting for the condition Dec 16 12:31:46.038675 sshd[5169]: Accepted publickey for core from 10.200.16.10 port 53684 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:31:46.039730 sshd-session[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:46.043984 systemd-logind[1878]: New session 27 of user core. Dec 16 12:31:46.049144 systemd[1]: Started session-27.scope - Session 27 of User core. 
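The nestedpendingoperations entries above defer each failed MountVolume.SetUp rather than retrying immediately: the "No retries permitted until" deadline is the recorded failure time plus the logged durationBeforeRetry of 500ms. A hedged sketch of that arithmetic, using the cilium-config-path values verbatim:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Copied from the nestedpendingoperations entry above.
	const noRetryUntil = "2025-12-16T12:31:46.220400921Z"
	backoff := 500 * time.Millisecond // "durationBeforeRetry 500ms"

	until, err := time.Parse(time.RFC3339Nano, noRetryUntil)
	if err != nil {
		panic(err)
	}
	// The retry deadline is the recorded failure time plus the backoff,
	// so subtracting the backoff recovers when the failure was recorded.
	failedAt := until.Add(-backoff)
	fmt.Println("failure recorded at:", failedAt.Format(time.RFC3339Nano))
	// Prints 2025-12-16T12:31:45.720400921Z, i.e. the same instant as the
	// 12:31:45.720 kubelet entries that reported the MountVolume.SetUp errors.
}
```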
Dec 16 12:31:46.339356 containerd[1901]: time="2025-12-16T12:31:46.336183250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b2qsn,Uid:149793f6-b28c-44e1-b95e-0e69e6f336eb,Namespace:kube-system,Attempt:0,}" Dec 16 12:31:46.372465 containerd[1901]: time="2025-12-16T12:31:46.372425240Z" level=info msg="connecting to shim 1ce1ebfe60e449087c16b320e8310e99cc27d6a3846c1926d274f9542245445f" address="unix:///run/containerd/s/c432322a56f399cbda2473b9ab9d61496426bd8d26f68895827b28b0b84eaf72" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:31:46.394222 systemd[1]: Started cri-containerd-1ce1ebfe60e449087c16b320e8310e99cc27d6a3846c1926d274f9542245445f.scope - libcontainer container 1ce1ebfe60e449087c16b320e8310e99cc27d6a3846c1926d274f9542245445f. Dec 16 12:31:46.416791 containerd[1901]: time="2025-12-16T12:31:46.416744103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b2qsn,Uid:149793f6-b28c-44e1-b95e-0e69e6f336eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ce1ebfe60e449087c16b320e8310e99cc27d6a3846c1926d274f9542245445f\"" Dec 16 12:31:46.420628 containerd[1901]: time="2025-12-16T12:31:46.420131306Z" level=info msg="CreateContainer within sandbox \"1ce1ebfe60e449087c16b320e8310e99cc27d6a3846c1926d274f9542245445f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 16 12:31:46.437876 containerd[1901]: time="2025-12-16T12:31:46.437845590Z" level=info msg="Container d52ade06cc3a4782296152c1df839691cae3aeb8722cb28aa06c88c61214fac6: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:31:46.450843 containerd[1901]: time="2025-12-16T12:31:46.450804531Z" level=info msg="CreateContainer within sandbox \"1ce1ebfe60e449087c16b320e8310e99cc27d6a3846c1926d274f9542245445f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d52ade06cc3a4782296152c1df839691cae3aeb8722cb28aa06c88c61214fac6\"" Dec 16 12:31:46.451634 containerd[1901]: time="2025-12-16T12:31:46.451601112Z" level=info msg="StartContainer for \"d52ade06cc3a4782296152c1df839691cae3aeb8722cb28aa06c88c61214fac6\"" Dec 16 12:31:46.452546 containerd[1901]: time="2025-12-16T12:31:46.452518801Z" level=info msg="connecting to shim d52ade06cc3a4782296152c1df839691cae3aeb8722cb28aa06c88c61214fac6" address="unix:///run/containerd/s/c432322a56f399cbda2473b9ab9d61496426bd8d26f68895827b28b0b84eaf72" protocol=ttrpc version=3 Dec 16 12:31:46.472147 systemd[1]: Started cri-containerd-d52ade06cc3a4782296152c1df839691cae3aeb8722cb28aa06c88c61214fac6.scope - libcontainer container d52ade06cc3a4782296152c1df839691cae3aeb8722cb28aa06c88c61214fac6. Dec 16 12:31:46.498503 containerd[1901]: time="2025-12-16T12:31:46.498457412Z" level=info msg="StartContainer for \"d52ade06cc3a4782296152c1df839691cae3aeb8722cb28aa06c88c61214fac6\" returns successfully" Dec 16 12:31:46.501898 systemd[1]: cri-containerd-d52ade06cc3a4782296152c1df839691cae3aeb8722cb28aa06c88c61214fac6.scope: Deactivated successfully. 
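Every "connecting to shim" entry above, for the sandbox and for the mount-cgroup container alike, dials the same unix socket under /run/containerd/s/, which suggests the pod's containers are all served by the sandbox's shim. A tiny sketch that checks this by extracting the address= field from those messages (plain string parsing of the log lines above, not a containerd API call):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Two "connecting to shim" messages copied from the containerd entries above:
	// the sandbox and the mount-cgroup container dial the same socket.
	lines := []string{
		`connecting to shim 1ce1ebfe60e449087c16b320e8310e99cc27d6a3846c1926d274f9542245445f address="unix:///run/containerd/s/c432322a56f399cbda2473b9ab9d61496426bd8d26f68895827b28b0b84eaf72"`,
		`connecting to shim d52ade06cc3a4782296152c1df839691cae3aeb8722cb28aa06c88c61214fac6 address="unix:///run/containerd/s/c432322a56f399cbda2473b9ab9d61496426bd8d26f68895827b28b0b84eaf72"`,
	}
	re := regexp.MustCompile(`address="(unix://[^"]+)"`)
	addrs := map[string]int{}
	for _, l := range lines {
		if m := re.FindStringSubmatch(l); m != nil {
			addrs[m[1]]++
		}
	}
	for addr, n := range addrs {
		fmt.Printf("%d shim connections share %s\n", n, addr)
	}
}
```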
Dec 16 12:31:46.505533 containerd[1901]: time="2025-12-16T12:31:46.505502513Z" level=info msg="received container exit event container_id:\"d52ade06cc3a4782296152c1df839691cae3aeb8722cb28aa06c88c61214fac6\" id:\"d52ade06cc3a4782296152c1df839691cae3aeb8722cb28aa06c88c61214fac6\" pid:5237 exited_at:{seconds:1765888306 nanos:505230586}" Dec 16 12:31:46.698962 containerd[1901]: time="2025-12-16T12:31:46.698341776Z" level=info msg="CreateContainer within sandbox \"1ce1ebfe60e449087c16b320e8310e99cc27d6a3846c1926d274f9542245445f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 16 12:31:46.718046 containerd[1901]: time="2025-12-16T12:31:46.717848277Z" level=info msg="Container 6bf37adcb60b8b930a8e4ddf98b092037a8382828edc17ccb7cc30d1b8e6a7f7: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:31:46.731528 containerd[1901]: time="2025-12-16T12:31:46.731488875Z" level=info msg="CreateContainer within sandbox \"1ce1ebfe60e449087c16b320e8310e99cc27d6a3846c1926d274f9542245445f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6bf37adcb60b8b930a8e4ddf98b092037a8382828edc17ccb7cc30d1b8e6a7f7\"" Dec 16 12:31:46.732882 containerd[1901]: time="2025-12-16T12:31:46.731946640Z" level=info msg="StartContainer for \"6bf37adcb60b8b930a8e4ddf98b092037a8382828edc17ccb7cc30d1b8e6a7f7\"" Dec 16 12:31:46.734678 containerd[1901]: time="2025-12-16T12:31:46.734637688Z" level=info msg="connecting to shim 6bf37adcb60b8b930a8e4ddf98b092037a8382828edc17ccb7cc30d1b8e6a7f7" address="unix:///run/containerd/s/c432322a56f399cbda2473b9ab9d61496426bd8d26f68895827b28b0b84eaf72" protocol=ttrpc version=3 Dec 16 12:31:46.753165 systemd[1]: Started cri-containerd-6bf37adcb60b8b930a8e4ddf98b092037a8382828edc17ccb7cc30d1b8e6a7f7.scope - libcontainer container 6bf37adcb60b8b930a8e4ddf98b092037a8382828edc17ccb7cc30d1b8e6a7f7. Dec 16 12:31:46.779182 containerd[1901]: time="2025-12-16T12:31:46.779145668Z" level=info msg="StartContainer for \"6bf37adcb60b8b930a8e4ddf98b092037a8382828edc17ccb7cc30d1b8e6a7f7\" returns successfully" Dec 16 12:31:46.779612 systemd[1]: cri-containerd-6bf37adcb60b8b930a8e4ddf98b092037a8382828edc17ccb7cc30d1b8e6a7f7.scope: Deactivated successfully. Dec 16 12:31:46.781667 containerd[1901]: time="2025-12-16T12:31:46.781624935Z" level=info msg="received container exit event container_id:\"6bf37adcb60b8b930a8e4ddf98b092037a8382828edc17ccb7cc30d1b8e6a7f7\" id:\"6bf37adcb60b8b930a8e4ddf98b092037a8382828edc17ccb7cc30d1b8e6a7f7\" pid:5281 exited_at:{seconds:1765888306 nanos:781354808}" Dec 16 12:31:47.702960 containerd[1901]: time="2025-12-16T12:31:47.702089236Z" level=info msg="CreateContainer within sandbox \"1ce1ebfe60e449087c16b320e8310e99cc27d6a3846c1926d274f9542245445f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 16 12:31:47.724323 containerd[1901]: time="2025-12-16T12:31:47.724257168Z" level=info msg="Container 73af9993864ec4ca2521fc28718794afc684ac7f78d4c6cd1bccdef36c9c0176: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:31:47.725754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3232564129.mount: Deactivated successfully. 
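The exited_at field in the exit events above is a Unix epoch split into seconds and nanos; converting it back yields the same wall-clock instant as the surrounding journal timestamps. A short sketch using the mount-cgroup values verbatim:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at:{seconds:1765888306 nanos:505230586} from the exit event above.
	exitedAt := time.Unix(1765888306, 505230586).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano))
	// Prints 2025-12-16T12:31:46.505230586Z, matching the 12:31:46.505 containerd entry.
}
```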
Dec 16 12:31:47.742856 containerd[1901]: time="2025-12-16T12:31:47.742817779Z" level=info msg="CreateContainer within sandbox \"1ce1ebfe60e449087c16b320e8310e99cc27d6a3846c1926d274f9542245445f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"73af9993864ec4ca2521fc28718794afc684ac7f78d4c6cd1bccdef36c9c0176\"" Dec 16 12:31:47.744263 containerd[1901]: time="2025-12-16T12:31:47.744179280Z" level=info msg="StartContainer for \"73af9993864ec4ca2521fc28718794afc684ac7f78d4c6cd1bccdef36c9c0176\"" Dec 16 12:31:47.745531 containerd[1901]: time="2025-12-16T12:31:47.745505403Z" level=info msg="connecting to shim 73af9993864ec4ca2521fc28718794afc684ac7f78d4c6cd1bccdef36c9c0176" address="unix:///run/containerd/s/c432322a56f399cbda2473b9ab9d61496426bd8d26f68895827b28b0b84eaf72" protocol=ttrpc version=3 Dec 16 12:31:47.763157 systemd[1]: Started cri-containerd-73af9993864ec4ca2521fc28718794afc684ac7f78d4c6cd1bccdef36c9c0176.scope - libcontainer container 73af9993864ec4ca2521fc28718794afc684ac7f78d4c6cd1bccdef36c9c0176. Dec 16 12:31:47.840106 systemd[1]: cri-containerd-73af9993864ec4ca2521fc28718794afc684ac7f78d4c6cd1bccdef36c9c0176.scope: Deactivated successfully. Dec 16 12:31:47.844000 containerd[1901]: time="2025-12-16T12:31:47.843889488Z" level=info msg="received container exit event container_id:\"73af9993864ec4ca2521fc28718794afc684ac7f78d4c6cd1bccdef36c9c0176\" id:\"73af9993864ec4ca2521fc28718794afc684ac7f78d4c6cd1bccdef36c9c0176\" pid:5324 exited_at:{seconds:1765888307 nanos:841849897}" Dec 16 12:31:47.845737 containerd[1901]: time="2025-12-16T12:31:47.845701993Z" level=info msg="StartContainer for \"73af9993864ec4ca2521fc28718794afc684ac7f78d4c6cd1bccdef36c9c0176\" returns successfully" Dec 16 12:31:48.234410 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73af9993864ec4ca2521fc28718794afc684ac7f78d4c6cd1bccdef36c9c0176-rootfs.mount: Deactivated successfully. Dec 16 12:31:48.707669 containerd[1901]: time="2025-12-16T12:31:48.707526483Z" level=info msg="CreateContainer within sandbox \"1ce1ebfe60e449087c16b320e8310e99cc27d6a3846c1926d274f9542245445f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 16 12:31:48.736687 containerd[1901]: time="2025-12-16T12:31:48.736191244Z" level=info msg="Container de47563a9c9c996626b7b6690df64048604ccac281a0a26fb2766d2ab308a5e6: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:31:48.773847 containerd[1901]: time="2025-12-16T12:31:48.773797795Z" level=info msg="CreateContainer within sandbox \"1ce1ebfe60e449087c16b320e8310e99cc27d6a3846c1926d274f9542245445f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"de47563a9c9c996626b7b6690df64048604ccac281a0a26fb2766d2ab308a5e6\"" Dec 16 12:31:48.775421 containerd[1901]: time="2025-12-16T12:31:48.774341545Z" level=info msg="StartContainer for \"de47563a9c9c996626b7b6690df64048604ccac281a0a26fb2766d2ab308a5e6\"" Dec 16 12:31:48.775760 containerd[1901]: time="2025-12-16T12:31:48.775725093Z" level=info msg="connecting to shim de47563a9c9c996626b7b6690df64048604ccac281a0a26fb2766d2ab308a5e6" address="unix:///run/containerd/s/c432322a56f399cbda2473b9ab9d61496426bd8d26f68895827b28b0b84eaf72" protocol=ttrpc version=3 Dec 16 12:31:48.793143 systemd[1]: Started cri-containerd-de47563a9c9c996626b7b6690df64048604ccac281a0a26fb2766d2ab308a5e6.scope - libcontainer container de47563a9c9c996626b7b6690df64048604ccac281a0a26fb2766d2ab308a5e6. 
Dec 16 12:31:48.818405 systemd[1]: cri-containerd-de47563a9c9c996626b7b6690df64048604ccac281a0a26fb2766d2ab308a5e6.scope: Deactivated successfully. Dec 16 12:31:48.823120 containerd[1901]: time="2025-12-16T12:31:48.823050813Z" level=info msg="received container exit event container_id:\"de47563a9c9c996626b7b6690df64048604ccac281a0a26fb2766d2ab308a5e6\" id:\"de47563a9c9c996626b7b6690df64048604ccac281a0a26fb2766d2ab308a5e6\" pid:5365 exited_at:{seconds:1765888308 nanos:819647798}" Dec 16 12:31:48.824729 containerd[1901]: time="2025-12-16T12:31:48.824650158Z" level=info msg="StartContainer for \"de47563a9c9c996626b7b6690df64048604ccac281a0a26fb2766d2ab308a5e6\" returns successfully" Dec 16 12:31:48.839218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de47563a9c9c996626b7b6690df64048604ccac281a0a26fb2766d2ab308a5e6-rootfs.mount: Deactivated successfully. Dec 16 12:31:49.411928 containerd[1901]: time="2025-12-16T12:31:49.411893735Z" level=info msg="StopPodSandbox for \"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\"" Dec 16 12:31:49.412565 containerd[1901]: time="2025-12-16T12:31:49.412298345Z" level=info msg="TearDown network for sandbox \"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\" successfully" Dec 16 12:31:49.412565 containerd[1901]: time="2025-12-16T12:31:49.412346923Z" level=info msg="StopPodSandbox for \"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\" returns successfully" Dec 16 12:31:49.413712 containerd[1901]: time="2025-12-16T12:31:49.412855120Z" level=info msg="RemovePodSandbox for \"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\"" Dec 16 12:31:49.413712 containerd[1901]: time="2025-12-16T12:31:49.412877064Z" level=info msg="Forcibly stopping sandbox \"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\"" Dec 16 12:31:49.413712 containerd[1901]: time="2025-12-16T12:31:49.412935378Z" level=info msg="TearDown network for sandbox \"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\" successfully" Dec 16 12:31:49.414039 containerd[1901]: time="2025-12-16T12:31:49.413996933Z" level=info msg="Ensure that sandbox e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553 in task-service has been cleanup successfully" Dec 16 12:31:49.424785 containerd[1901]: time="2025-12-16T12:31:49.424749962Z" level=info msg="RemovePodSandbox \"e149ac29a113268a2863f659cacb3d557859cb592734cdbbb1f52a815b19b553\" returns successfully" Dec 16 12:31:49.425275 containerd[1901]: time="2025-12-16T12:31:49.425255743Z" level=info msg="StopPodSandbox for \"8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1\"" Dec 16 12:31:49.425489 containerd[1901]: time="2025-12-16T12:31:49.425468316Z" level=info msg="TearDown network for sandbox \"8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1\" successfully" Dec 16 12:31:49.425572 containerd[1901]: time="2025-12-16T12:31:49.425557550Z" level=info msg="StopPodSandbox for \"8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1\" returns successfully" Dec 16 12:31:49.425872 containerd[1901]: time="2025-12-16T12:31:49.425847182Z" level=info msg="RemovePodSandbox for \"8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1\"" Dec 16 12:31:49.425872 containerd[1901]: time="2025-12-16T12:31:49.425873854Z" level=info msg="Forcibly stopping sandbox \"8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1\"" Dec 16 12:31:49.425991 containerd[1901]: time="2025-12-16T12:31:49.425961001Z" level=info 
msg="TearDown network for sandbox \"8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1\" successfully" Dec 16 12:31:49.427423 containerd[1901]: time="2025-12-16T12:31:49.427382949Z" level=info msg="Ensure that sandbox 8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1 in task-service has been cleanup successfully" Dec 16 12:31:49.437718 containerd[1901]: time="2025-12-16T12:31:49.437689950Z" level=info msg="RemovePodSandbox \"8992cb749353a1be64e8f20fa22a4fd564766933817e4c154b04750d804ad2f1\" returns successfully" Dec 16 12:31:49.487535 kubelet[3451]: E1216 12:31:49.487480 3451 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 16 12:31:49.710746 containerd[1901]: time="2025-12-16T12:31:49.710565901Z" level=info msg="CreateContainer within sandbox \"1ce1ebfe60e449087c16b320e8310e99cc27d6a3846c1926d274f9542245445f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 16 12:31:49.738059 containerd[1901]: time="2025-12-16T12:31:49.738004134Z" level=info msg="Container 2ecf79c6150b30f48f47d0ead676c6fe1fa2b62364cf855f3f3defc9e7421c77: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:31:49.753638 containerd[1901]: time="2025-12-16T12:31:49.753595919Z" level=info msg="CreateContainer within sandbox \"1ce1ebfe60e449087c16b320e8310e99cc27d6a3846c1926d274f9542245445f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2ecf79c6150b30f48f47d0ead676c6fe1fa2b62364cf855f3f3defc9e7421c77\"" Dec 16 12:31:49.754362 containerd[1901]: time="2025-12-16T12:31:49.754331314Z" level=info msg="StartContainer for \"2ecf79c6150b30f48f47d0ead676c6fe1fa2b62364cf855f3f3defc9e7421c77\"" Dec 16 12:31:49.756194 containerd[1901]: time="2025-12-16T12:31:49.756168857Z" level=info msg="connecting to shim 2ecf79c6150b30f48f47d0ead676c6fe1fa2b62364cf855f3f3defc9e7421c77" address="unix:///run/containerd/s/c432322a56f399cbda2473b9ab9d61496426bd8d26f68895827b28b0b84eaf72" protocol=ttrpc version=3 Dec 16 12:31:49.776178 systemd[1]: Started cri-containerd-2ecf79c6150b30f48f47d0ead676c6fe1fa2b62364cf855f3f3defc9e7421c77.scope - libcontainer container 2ecf79c6150b30f48f47d0ead676c6fe1fa2b62364cf855f3f3defc9e7421c77. 
Dec 16 12:31:49.820048 containerd[1901]: time="2025-12-16T12:31:49.819950833Z" level=info msg="StartContainer for \"2ecf79c6150b30f48f47d0ead676c6fe1fa2b62364cf855f3f3defc9e7421c77\" returns successfully" Dec 16 12:31:50.138041 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Dec 16 12:31:50.732580 kubelet[3451]: I1216 12:31:50.730516 3451 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b2qsn" podStartSLOduration=6.730501146 podStartE2EDuration="6.730501146s" podCreationTimestamp="2025-12-16 12:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:31:50.730351238 +0000 UTC m=+121.389928130" watchObservedRunningTime="2025-12-16 12:31:50.730501146 +0000 UTC m=+121.390078046" Dec 16 12:31:51.561840 kubelet[3451]: I1216 12:31:51.561792 3451 setters.go:602] "Node became not ready" node="ci-4459.2.2-a-7f44347f41" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T12:31:51Z","lastTransitionTime":"2025-12-16T12:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 16 12:31:52.505882 kubelet[3451]: E1216 12:31:52.505714 3451 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41626->127.0.0.1:34211: write tcp 127.0.0.1:41626->127.0.0.1:34211: write: broken pipe Dec 16 12:31:52.546432 systemd-networkd[1492]: lxc_health: Link UP Dec 16 12:31:52.554910 systemd-networkd[1492]: lxc_health: Gained carrier Dec 16 12:31:53.963202 systemd-networkd[1492]: lxc_health: Gained IPv6LL Dec 16 12:31:58.848736 sshd[5172]: Connection closed by 10.200.16.10 port 53684 Dec 16 12:31:58.849349 sshd-session[5169]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:58.852485 systemd[1]: sshd@24-10.200.20.37:22-10.200.16.10:53684.service: Deactivated successfully. Dec 16 12:31:58.853912 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 12:31:58.857073 systemd-logind[1878]: Session 27 logged out. Waiting for processes to exit. Dec 16 12:31:58.858688 systemd-logind[1878]: Removed session 27.
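The pod_startup_latency_tracker entry above reports podStartSLOduration for cilium-b2qsn; recomputing it from the podCreationTimestamp and watchObservedRunningTime in that entry reproduces the logged 6.730501146s. A small sketch of the arithmetic, with the two timestamps copied verbatim:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// podCreationTimestamp="2025-12-16 12:31:44 +0000 UTC" from the entry above.
	created := time.Date(2025, time.December, 16, 12, 31, 44, 0, time.UTC)
	// watchObservedRunningTime="2025-12-16 12:31:50.730501146 +0000 UTC" from the same entry.
	observed := time.Date(2025, time.December, 16, 12, 31, 50, 730501146, time.UTC)

	fmt.Println(observed.Sub(created))
	// Prints 6.730501146s, matching podStartSLOduration in the entry above.
}
```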