Sep 3 23:26:42.028765 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Sep 3 23:26:42.028783 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Sep 3 22:04:24 -00 2025 Sep 3 23:26:42.028789 kernel: KASLR enabled Sep 3 23:26:42.028793 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Sep 3 23:26:42.028798 kernel: printk: legacy bootconsole [pl11] enabled Sep 3 23:26:42.028801 kernel: efi: EFI v2.7 by EDK II Sep 3 23:26:42.028806 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead5018 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 Sep 3 23:26:42.028810 kernel: random: crng init done Sep 3 23:26:42.028814 kernel: secureboot: Secure boot disabled Sep 3 23:26:42.028818 kernel: ACPI: Early table checksum verification disabled Sep 3 23:26:42.028822 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Sep 3 23:26:42.028826 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 3 23:26:42.028830 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 3 23:26:42.028834 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Sep 3 23:26:42.028840 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 3 23:26:42.028844 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 3 23:26:42.028848 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 3 23:26:42.028853 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 3 23:26:42.028857 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 3 23:26:42.028861 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 3 23:26:42.028865 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Sep 3 23:26:42.028869 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 3 23:26:42.028873 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Sep 3 23:26:42.028877 kernel: ACPI: Use ACPI SPCR as default console: No Sep 3 23:26:42.028881 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Sep 3 23:26:42.028885 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Sep 3 23:26:42.028890 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Sep 3 23:26:42.028894 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Sep 3 23:26:42.028898 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Sep 3 23:26:42.028903 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Sep 3 23:26:42.028907 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Sep 3 23:26:42.028911 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Sep 3 23:26:42.028915 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Sep 3 23:26:42.028919 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Sep 3 23:26:42.028923 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Sep 3 23:26:42.028927 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] 
hotplug Sep 3 23:26:42.028931 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Sep 3 23:26:42.028935 kernel: NODE_DATA(0) allocated [mem 0x1bf7fda00-0x1bf804fff] Sep 3 23:26:42.028940 kernel: Zone ranges: Sep 3 23:26:42.028944 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Sep 3 23:26:42.028950 kernel: DMA32 empty Sep 3 23:26:42.028955 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Sep 3 23:26:42.028959 kernel: Device empty Sep 3 23:26:42.028963 kernel: Movable zone start for each node Sep 3 23:26:42.028968 kernel: Early memory node ranges Sep 3 23:26:42.028973 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Sep 3 23:26:42.028977 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] Sep 3 23:26:42.028981 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] Sep 3 23:26:42.028986 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] Sep 3 23:26:42.028990 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Sep 3 23:26:42.028994 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Sep 3 23:26:42.028999 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Sep 3 23:26:42.029003 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Sep 3 23:26:42.029007 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Sep 3 23:26:42.029011 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Sep 3 23:26:42.029016 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Sep 3 23:26:42.029020 kernel: cma: Reserved 16 MiB at 0x000000003ec00000 on node -1 Sep 3 23:26:42.029025 kernel: psci: probing for conduit method from ACPI. Sep 3 23:26:42.029030 kernel: psci: PSCIv1.1 detected in firmware. Sep 3 23:26:42.029034 kernel: psci: Using standard PSCI v0.2 function IDs Sep 3 23:26:42.029038 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Sep 3 23:26:42.029043 kernel: psci: SMC Calling Convention v1.4 Sep 3 23:26:42.029047 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Sep 3 23:26:42.029051 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Sep 3 23:26:42.029056 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Sep 3 23:26:42.029060 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Sep 3 23:26:42.029069 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 3 23:26:42.029074 kernel: Detected PIPT I-cache on CPU0 Sep 3 23:26:42.029079 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Sep 3 23:26:42.029083 kernel: CPU features: detected: GIC system register CPU interface Sep 3 23:26:42.029088 kernel: CPU features: detected: Spectre-v4 Sep 3 23:26:42.029092 kernel: CPU features: detected: Spectre-BHB Sep 3 23:26:42.029096 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 3 23:26:42.029101 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 3 23:26:42.029105 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Sep 3 23:26:42.029109 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 3 23:26:42.029114 kernel: alternatives: applying boot alternatives Sep 3 23:26:42.029119 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=cb633bb0c889435b58a5c40c9c9bc9d5899ece5018569c9fa08f911265d3f18e Sep 3 23:26:42.029124 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 3 23:26:42.029129 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 3 23:26:42.029133 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 3 23:26:42.029137 kernel: Fallback order for Node 0: 0 Sep 3 23:26:42.029142 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Sep 3 23:26:42.029146 kernel: Policy zone: Normal Sep 3 23:26:42.029150 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 3 23:26:42.029155 kernel: software IO TLB: area num 2. Sep 3 23:26:42.029159 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB) Sep 3 23:26:42.029163 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 3 23:26:42.029168 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 3 23:26:42.029173 kernel: rcu: RCU event tracing is enabled. Sep 3 23:26:42.029178 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 3 23:26:42.029182 kernel: Trampoline variant of Tasks RCU enabled. Sep 3 23:26:42.029187 kernel: Tracing variant of Tasks RCU enabled. Sep 3 23:26:42.029191 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 3 23:26:42.029195 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 3 23:26:42.029200 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 3 23:26:42.029204 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
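The "Kernel command line:" entry above is the contract between the bootloader and the initrd: root=LABEL=ROOT selects the root filesystem, mount.usr=/dev/mapper/usr together with verity.usr=PARTUUID=... and verity.usrhash=... drives the dm-verity setup of /usr seen later in this log, and flatcar.oem.id=azure selects the Ignition/Afterburn platform. As a minimal illustrative sketch (not Flatcar's or dracut's own parser), such a line can be split from /proc/cmdline into bare flags and repeatable key=value options:

#!/usr/bin/env python3
# Minimal sketch: split a kernel command line (like the one logged above, or
# read from /proc/cmdline) into bare flags and repeatable key=value options.
from collections import defaultdict

def parse_cmdline(cmdline: str):
    opts, flags = defaultdict(list), []
    for token in cmdline.split():
        if "=" in token:
            key, value = token.split("=", 1)
            opts[key].append(value)   # console= appears twice in the log above
        else:
            flags.append(token)       # e.g. flatcar.autologin
    return dict(opts), flags

with open("/proc/cmdline") as f:      # run on the booted machine itself
    opts, flags = parse_cmdline(f.read())
print(opts.get("root"), opts.get("mount.usr"), opts.get("verity.usrhash"))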
Sep 3 23:26:42.029209 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 3 23:26:42.029213 kernel: GICv3: 960 SPIs implemented Sep 3 23:26:42.029217 kernel: GICv3: 0 Extended SPIs implemented Sep 3 23:26:42.029221 kernel: Root IRQ handler: gic_handle_irq Sep 3 23:26:42.029226 kernel: GICv3: GICv3 features: 16 PPIs, RSS Sep 3 23:26:42.029231 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Sep 3 23:26:42.029235 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Sep 3 23:26:42.029239 kernel: ITS: No ITS available, not enabling LPIs Sep 3 23:26:42.029244 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 3 23:26:42.029248 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Sep 3 23:26:42.029253 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 3 23:26:42.029257 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Sep 3 23:26:42.029262 kernel: Console: colour dummy device 80x25 Sep 3 23:26:42.029266 kernel: printk: legacy console [tty1] enabled Sep 3 23:26:42.029271 kernel: ACPI: Core revision 20240827 Sep 3 23:26:42.029276 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Sep 3 23:26:42.029281 kernel: pid_max: default: 32768 minimum: 301 Sep 3 23:26:42.029285 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 3 23:26:42.029290 kernel: landlock: Up and running. Sep 3 23:26:42.029294 kernel: SELinux: Initializing. Sep 3 23:26:42.029299 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 3 23:26:42.029307 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 3 23:26:42.029312 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1 Sep 3 23:26:42.029317 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0 Sep 3 23:26:42.029322 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 3 23:26:42.029327 kernel: rcu: Hierarchical SRCU implementation. Sep 3 23:26:42.029331 kernel: rcu: Max phase no-delay instances is 400. Sep 3 23:26:42.029337 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 3 23:26:42.029342 kernel: Remapping and enabling EFI services. Sep 3 23:26:42.029346 kernel: smp: Bringing up secondary CPUs ... Sep 3 23:26:42.029351 kernel: Detected PIPT I-cache on CPU1 Sep 3 23:26:42.029355 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Sep 3 23:26:42.029361 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Sep 3 23:26:42.029366 kernel: smp: Brought up 1 node, 2 CPUs Sep 3 23:26:42.029370 kernel: SMP: Total of 2 processors activated. 
Sep 3 23:26:42.029375 kernel: CPU: All CPU(s) started at EL1 Sep 3 23:26:42.029380 kernel: CPU features: detected: 32-bit EL0 Support Sep 3 23:26:42.029384 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Sep 3 23:26:42.029389 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 3 23:26:42.029394 kernel: CPU features: detected: Common not Private translations Sep 3 23:26:42.029399 kernel: CPU features: detected: CRC32 instructions Sep 3 23:26:42.029404 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Sep 3 23:26:42.029409 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 3 23:26:42.029414 kernel: CPU features: detected: LSE atomic instructions Sep 3 23:26:42.029418 kernel: CPU features: detected: Privileged Access Never Sep 3 23:26:42.029423 kernel: CPU features: detected: Speculation barrier (SB) Sep 3 23:26:42.029427 kernel: CPU features: detected: TLB range maintenance instructions Sep 3 23:26:42.029432 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 3 23:26:42.029437 kernel: CPU features: detected: Scalable Vector Extension Sep 3 23:26:42.029441 kernel: alternatives: applying system-wide alternatives Sep 3 23:26:42.029447 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Sep 3 23:26:42.029452 kernel: SVE: maximum available vector length 16 bytes per vector Sep 3 23:26:42.029456 kernel: SVE: default vector length 16 bytes per vector Sep 3 23:26:42.029461 kernel: Memory: 3959604K/4194160K available (11136K kernel code, 2436K rwdata, 9076K rodata, 38976K init, 1038K bss, 213368K reserved, 16384K cma-reserved) Sep 3 23:26:42.029482 kernel: devtmpfs: initialized Sep 3 23:26:42.029487 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 3 23:26:42.029492 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 3 23:26:42.029496 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 3 23:26:42.029501 kernel: 0 pages in range for non-PLT usage Sep 3 23:26:42.029507 kernel: 508560 pages in range for PLT usage Sep 3 23:26:42.029512 kernel: pinctrl core: initialized pinctrl subsystem Sep 3 23:26:42.029516 kernel: SMBIOS 3.1.0 present. Sep 3 23:26:42.029521 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Sep 3 23:26:42.029526 kernel: DMI: Memory slots populated: 2/2 Sep 3 23:26:42.029531 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 3 23:26:42.029535 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 3 23:26:42.029540 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 3 23:26:42.029545 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 3 23:26:42.029551 kernel: audit: initializing netlink subsys (disabled) Sep 3 23:26:42.029555 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Sep 3 23:26:42.029560 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 3 23:26:42.029565 kernel: cpuidle: using governor menu Sep 3 23:26:42.029569 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 3 23:26:42.029574 kernel: ASID allocator initialised with 32768 entries Sep 3 23:26:42.029579 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 3 23:26:42.029583 kernel: Serial: AMBA PL011 UART driver Sep 3 23:26:42.029588 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 3 23:26:42.029593 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 3 23:26:42.029598 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 3 23:26:42.029603 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 3 23:26:42.029608 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 3 23:26:42.029612 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 3 23:26:42.029617 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 3 23:26:42.029622 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 3 23:26:42.029626 kernel: ACPI: Added _OSI(Module Device) Sep 3 23:26:42.029631 kernel: ACPI: Added _OSI(Processor Device) Sep 3 23:26:42.029636 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 3 23:26:42.029641 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 3 23:26:42.029646 kernel: ACPI: Interpreter enabled Sep 3 23:26:42.029650 kernel: ACPI: Using GIC for interrupt routing Sep 3 23:26:42.029655 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Sep 3 23:26:42.029660 kernel: printk: legacy console [ttyAMA0] enabled Sep 3 23:26:42.029664 kernel: printk: legacy bootconsole [pl11] disabled Sep 3 23:26:42.029669 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Sep 3 23:26:42.029674 kernel: ACPI: CPU0 has been hot-added Sep 3 23:26:42.029679 kernel: ACPI: CPU1 has been hot-added Sep 3 23:26:42.029684 kernel: iommu: Default domain type: Translated Sep 3 23:26:42.029689 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 3 23:26:42.029693 kernel: efivars: Registered efivars operations Sep 3 23:26:42.029698 kernel: vgaarb: loaded Sep 3 23:26:42.029703 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 3 23:26:42.029708 kernel: VFS: Disk quotas dquot_6.6.0 Sep 3 23:26:42.029712 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 3 23:26:42.029717 kernel: pnp: PnP ACPI init Sep 3 23:26:42.029722 kernel: pnp: PnP ACPI: found 0 devices Sep 3 23:26:42.029727 kernel: NET: Registered PF_INET protocol family Sep 3 23:26:42.029732 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 3 23:26:42.029737 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 3 23:26:42.029741 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 3 23:26:42.029746 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 3 23:26:42.029751 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 3 23:26:42.029755 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 3 23:26:42.029760 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 3 23:26:42.029766 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 3 23:26:42.029770 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 3 23:26:42.029775 kernel: PCI: CLS 0 bytes, default 64 Sep 3 23:26:42.029780 kernel: kvm [1]: HYP mode not available Sep 3 23:26:42.029784 kernel: Initialise system 
trusted keyrings Sep 3 23:26:42.029789 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 3 23:26:42.029794 kernel: Key type asymmetric registered Sep 3 23:26:42.029798 kernel: Asymmetric key parser 'x509' registered Sep 3 23:26:42.029803 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 3 23:26:42.029808 kernel: io scheduler mq-deadline registered Sep 3 23:26:42.029813 kernel: io scheduler kyber registered Sep 3 23:26:42.029818 kernel: io scheduler bfq registered Sep 3 23:26:42.029822 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 3 23:26:42.029827 kernel: thunder_xcv, ver 1.0 Sep 3 23:26:42.029831 kernel: thunder_bgx, ver 1.0 Sep 3 23:26:42.029836 kernel: nicpf, ver 1.0 Sep 3 23:26:42.029841 kernel: nicvf, ver 1.0 Sep 3 23:26:42.029947 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 3 23:26:42.029998 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-03T23:26:41 UTC (1756942001) Sep 3 23:26:42.030005 kernel: efifb: probing for efifb Sep 3 23:26:42.030010 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 3 23:26:42.030014 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 3 23:26:42.030019 kernel: efifb: scrolling: redraw Sep 3 23:26:42.030024 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 3 23:26:42.030029 kernel: Console: switching to colour frame buffer device 128x48 Sep 3 23:26:42.030033 kernel: fb0: EFI VGA frame buffer device Sep 3 23:26:42.030039 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Sep 3 23:26:42.030044 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 3 23:26:42.030049 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Sep 3 23:26:42.030053 kernel: NET: Registered PF_INET6 protocol family Sep 3 23:26:42.030058 kernel: watchdog: NMI not fully supported Sep 3 23:26:42.030063 kernel: watchdog: Hard watchdog permanently disabled Sep 3 23:26:42.030067 kernel: Segment Routing with IPv6 Sep 3 23:26:42.030072 kernel: In-situ OAM (IOAM) with IPv6 Sep 3 23:26:42.030077 kernel: NET: Registered PF_PACKET protocol family Sep 3 23:26:42.030082 kernel: Key type dns_resolver registered Sep 3 23:26:42.030087 kernel: registered taskstats version 1 Sep 3 23:26:42.030091 kernel: Loading compiled-in X.509 certificates Sep 3 23:26:42.030096 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 08fc774dab168e64ce30c382a4517d40e72c4744' Sep 3 23:26:42.030101 kernel: Demotion targets for Node 0: null Sep 3 23:26:42.030106 kernel: Key type .fscrypt registered Sep 3 23:26:42.030110 kernel: Key type fscrypt-provisioning registered Sep 3 23:26:42.030115 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 3 23:26:42.030120 kernel: ima: Allocated hash algorithm: sha1 Sep 3 23:26:42.030125 kernel: ima: No architecture policies found Sep 3 23:26:42.030130 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 3 23:26:42.030134 kernel: clk: Disabling unused clocks Sep 3 23:26:42.030139 kernel: PM: genpd: Disabling unused power domains Sep 3 23:26:42.030144 kernel: Warning: unable to open an initial console. 
Sep 3 23:26:42.030148 kernel: Freeing unused kernel memory: 38976K Sep 3 23:26:42.030153 kernel: Run /init as init process Sep 3 23:26:42.030158 kernel: with arguments: Sep 3 23:26:42.030162 kernel: /init Sep 3 23:26:42.030168 kernel: with environment: Sep 3 23:26:42.030172 kernel: HOME=/ Sep 3 23:26:42.030177 kernel: TERM=linux Sep 3 23:26:42.030181 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 3 23:26:42.030187 systemd[1]: Successfully made /usr/ read-only. Sep 3 23:26:42.030194 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 3 23:26:42.030199 systemd[1]: Detected virtualization microsoft. Sep 3 23:26:42.030205 systemd[1]: Detected architecture arm64. Sep 3 23:26:42.030210 systemd[1]: Running in initrd. Sep 3 23:26:42.030215 systemd[1]: No hostname configured, using default hostname. Sep 3 23:26:42.030221 systemd[1]: Hostname set to . Sep 3 23:26:42.030226 systemd[1]: Initializing machine ID from random generator. Sep 3 23:26:42.030231 systemd[1]: Queued start job for default target initrd.target. Sep 3 23:26:42.030236 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 3 23:26:42.030241 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 3 23:26:42.030246 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 3 23:26:42.030252 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 3 23:26:42.030257 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 3 23:26:42.030263 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 3 23:26:42.030269 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 3 23:26:42.030274 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 3 23:26:42.030279 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 3 23:26:42.030285 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 3 23:26:42.030290 systemd[1]: Reached target paths.target - Path Units. Sep 3 23:26:42.030295 systemd[1]: Reached target slices.target - Slice Units. Sep 3 23:26:42.030300 systemd[1]: Reached target swap.target - Swaps. Sep 3 23:26:42.030305 systemd[1]: Reached target timers.target - Timer Units. Sep 3 23:26:42.030310 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 3 23:26:42.030315 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 3 23:26:42.030320 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 3 23:26:42.030325 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 3 23:26:42.030331 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 3 23:26:42.030336 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 3 23:26:42.030341 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
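The "Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device ..." entries above show how systemd derives .device unit names from block-device paths: leading slashes are dropped, remaining slashes become dashes, and other special characters are hex-escaped as \xNN. A simplified re-implementation for illustration (the real tool is systemd-escape --path), checked against the unit names in this journal:

#!/usr/bin/env python3
# Simplified sketch of systemd's path-to-unit-name escaping; the real tool is
# "systemd-escape --path". Good enough to reproduce the .device names above.
def path_to_device_unit(path: str) -> str:
    trimmed = path.strip("/")
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")                    # path separator becomes a dash
        elif ch.isalnum() or ch in "_:" or (ch == "." and i > 0):
            out.append(ch)                     # allowed unit-name characters
        else:
            out.append("\\x%02x" % ord(ch))    # e.g. "-" becomes \x2d
    return "".join(out) + ".device"

print(path_to_device_unit("/dev/disk/by-label/EFI-SYSTEM"))
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device  (matches the journal above)
print(path_to_device_unit("/dev/mapper/usr"))
# dev-mapper-usr.device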
Sep 3 23:26:42.030346 systemd[1]: Reached target sockets.target - Socket Units. Sep 3 23:26:42.030351 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 3 23:26:42.030357 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 3 23:26:42.030362 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 3 23:26:42.030367 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 3 23:26:42.030373 systemd[1]: Starting systemd-fsck-usr.service... Sep 3 23:26:42.030378 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 3 23:26:42.030383 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 3 23:26:42.030399 systemd-journald[225]: Collecting audit messages is disabled. Sep 3 23:26:42.030413 systemd-journald[225]: Journal started Sep 3 23:26:42.030427 systemd-journald[225]: Runtime Journal (/run/log/journal/835ce22f639943788ccda6730435f3ff) is 8M, max 78.5M, 70.5M free. Sep 3 23:26:42.032497 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 3 23:26:42.037023 systemd-modules-load[227]: Inserted module 'overlay' Sep 3 23:26:42.058525 systemd[1]: Started systemd-journald.service - Journal Service. Sep 3 23:26:42.058563 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 3 23:26:42.064759 systemd-modules-load[227]: Inserted module 'br_netfilter' Sep 3 23:26:42.069541 kernel: Bridge firewalling registered Sep 3 23:26:42.065113 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 3 23:26:42.074022 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 3 23:26:42.084736 systemd[1]: Finished systemd-fsck-usr.service. Sep 3 23:26:42.088950 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 3 23:26:42.098824 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 3 23:26:42.111510 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 3 23:26:42.126919 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 3 23:26:42.131832 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 3 23:26:42.151605 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 3 23:26:42.166891 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 3 23:26:42.174394 systemd-tmpfiles[254]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 3 23:26:42.178848 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 3 23:26:42.189874 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 3 23:26:42.198105 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 3 23:26:42.210003 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 3 23:26:42.225174 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 3 23:26:42.235641 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Sep 3 23:26:42.249241 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=cb633bb0c889435b58a5c40c9c9bc9d5899ece5018569c9fa08f911265d3f18e Sep 3 23:26:42.280685 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 3 23:26:42.282461 systemd-resolved[264]: Positive Trust Anchors: Sep 3 23:26:42.282501 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 3 23:26:42.282520 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 3 23:26:42.284112 systemd-resolved[264]: Defaulting to hostname 'linux'. Sep 3 23:26:42.289690 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 3 23:26:42.299601 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 3 23:26:42.395482 kernel: SCSI subsystem initialized Sep 3 23:26:42.401476 kernel: Loading iSCSI transport class v2.0-870. Sep 3 23:26:42.409492 kernel: iscsi: registered transport (tcp) Sep 3 23:26:42.419476 kernel: iscsi: registered transport (qla4xxx) Sep 3 23:26:42.419486 kernel: QLogic iSCSI HBA Driver Sep 3 23:26:42.434337 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 3 23:26:42.456509 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 3 23:26:42.463719 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 3 23:26:42.505447 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 3 23:26:42.511178 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 3 23:26:42.575480 kernel: raid6: neonx8 gen() 18540 MB/s Sep 3 23:26:42.592474 kernel: raid6: neonx4 gen() 18571 MB/s Sep 3 23:26:42.611472 kernel: raid6: neonx2 gen() 17093 MB/s Sep 3 23:26:42.631478 kernel: raid6: neonx1 gen() 15032 MB/s Sep 3 23:26:42.650472 kernel: raid6: int64x8 gen() 10536 MB/s Sep 3 23:26:42.669471 kernel: raid6: int64x4 gen() 10612 MB/s Sep 3 23:26:42.689486 kernel: raid6: int64x2 gen() 8978 MB/s Sep 3 23:26:42.711137 kernel: raid6: int64x1 gen() 7023 MB/s Sep 3 23:26:42.711145 kernel: raid6: using algorithm neonx4 gen() 18571 MB/s Sep 3 23:26:42.733054 kernel: raid6: .... 
xor() 15150 MB/s, rmw enabled Sep 3 23:26:42.733092 kernel: raid6: using neon recovery algorithm Sep 3 23:26:42.742473 kernel: xor: measuring software checksum speed Sep 3 23:26:42.742506 kernel: 8regs : 28667 MB/sec Sep 3 23:26:42.745108 kernel: 32regs : 28832 MB/sec Sep 3 23:26:42.747764 kernel: arm64_neon : 37690 MB/sec Sep 3 23:26:42.750905 kernel: xor: using function: arm64_neon (37690 MB/sec) Sep 3 23:26:42.789491 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 3 23:26:42.794791 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 3 23:26:42.805338 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 3 23:26:42.829125 systemd-udevd[475]: Using default interface naming scheme 'v255'. Sep 3 23:26:42.832027 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 3 23:26:42.845042 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 3 23:26:42.868480 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation Sep 3 23:26:42.888905 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 3 23:26:42.895611 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 3 23:26:42.934806 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 3 23:26:42.946942 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 3 23:26:43.006507 kernel: hv_vmbus: Vmbus version:5.3 Sep 3 23:26:43.025181 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 3 23:26:43.025228 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Sep 3 23:26:43.033114 kernel: hv_vmbus: registering driver hv_netvsc Sep 3 23:26:43.033158 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 3 23:26:43.041161 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 3 23:26:43.045381 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 3 23:26:43.058418 kernel: hv_vmbus: registering driver hid_hyperv Sep 3 23:26:43.058432 kernel: hv_vmbus: registering driver hv_storvsc Sep 3 23:26:43.058839 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 3 23:26:43.078520 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Sep 3 23:26:43.078540 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 3 23:26:43.078547 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 3 23:26:43.088699 kernel: scsi host1: storvsc_host_t Sep 3 23:26:43.088832 kernel: scsi host0: storvsc_host_t Sep 3 23:26:43.089005 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 3 23:26:43.107616 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 3 23:26:43.112101 kernel: hv_netvsc 000d3ac4-0758-000d-3ac4-0758000d3ac4 eth0: VF slot 1 added Sep 3 23:26:43.112235 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Sep 3 23:26:43.102779 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 3 23:26:43.108000 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 3 23:26:43.110137 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 3 23:26:43.118218 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 3 23:26:43.165514 kernel: hv_vmbus: registering driver hv_pci Sep 3 23:26:43.165548 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 3 23:26:43.165680 kernel: PTP clock support registered Sep 3 23:26:43.165688 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 3 23:26:43.165760 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 3 23:26:43.165822 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 3 23:26:43.165883 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 3 23:26:43.165941 kernel: hv_pci e082094b-7ae2-4c99-8eb2-a1533fc61d8f: PCI VMBus probing: Using version 0x10004 Sep 3 23:26:43.166008 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#61 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Sep 3 23:26:43.181075 kernel: hv_pci e082094b-7ae2-4c99-8eb2-a1533fc61d8f: PCI host bridge to bus 7ae2:00 Sep 3 23:26:43.181455 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 3 23:26:43.211493 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#70 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Sep 3 23:26:43.211643 kernel: pci_bus 7ae2:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Sep 3 23:26:43.211731 kernel: pci_bus 7ae2:00: No busn resource found for root bus, will use [bus 00-ff] Sep 3 23:26:43.211788 kernel: pci 7ae2:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Sep 3 23:26:43.211804 kernel: pci 7ae2:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 3 23:26:43.219493 kernel: pci 7ae2:00:02.0: enabling Extended Tags Sep 3 23:26:43.233512 kernel: pci 7ae2:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 7ae2:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Sep 3 23:26:43.241872 kernel: pci_bus 7ae2:00: busn_res: [bus 00-ff] end is updated to 00 Sep 3 23:26:43.241995 kernel: pci 7ae2:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Sep 3 23:26:43.254719 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 3 23:26:43.254758 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 3 23:26:43.265167 kernel: hv_utils: Registering HyperV Utility Driver Sep 3 23:26:43.265194 kernel: hv_vmbus: registering driver hv_utils Sep 3 23:26:43.270614 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 3 23:26:43.270748 kernel: hv_utils: Heartbeat IC version 3.0 Sep 3 23:26:43.275489 kernel: hv_utils: Shutdown IC version 3.2 Sep 3 23:26:43.275516 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 3 23:26:43.279485 kernel: hv_utils: TimeSync IC version 4.0 Sep 3 23:26:43.008785 systemd-resolved[264]: Clock change detected. Flushing caches. Sep 3 23:26:43.017590 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 3 23:26:43.017696 systemd-journald[225]: Time jumped backwards, rotating. 
Sep 3 23:26:43.031927 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#120 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 3 23:26:43.050919 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#94 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 3 23:26:43.076692 kernel: mlx5_core 7ae2:00:02.0: enabling device (0000 -> 0002) Sep 3 23:26:43.081919 kernel: mlx5_core 7ae2:00:02.0: PTM is not supported by PCIe Sep 3 23:26:43.082013 kernel: mlx5_core 7ae2:00:02.0: firmware version: 16.30.5006 Sep 3 23:26:43.249138 kernel: hv_netvsc 000d3ac4-0758-000d-3ac4-0758000d3ac4 eth0: VF registering: eth1 Sep 3 23:26:43.249299 kernel: mlx5_core 7ae2:00:02.0 eth1: joined to eth0 Sep 3 23:26:43.254135 kernel: mlx5_core 7ae2:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Sep 3 23:26:43.263929 kernel: mlx5_core 7ae2:00:02.0 enP31458s1: renamed from eth1 Sep 3 23:26:44.103846 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 3 23:26:44.134421 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 3 23:26:44.145284 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 3 23:26:44.253131 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 3 23:26:44.258327 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 3 23:26:44.270209 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 3 23:26:44.279292 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 3 23:26:44.287035 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 3 23:26:44.295927 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 3 23:26:44.304627 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 3 23:26:44.331454 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 3 23:26:44.349937 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Sep 3 23:26:44.349088 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 3 23:26:44.365921 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 3 23:26:45.376372 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#118 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Sep 3 23:26:45.390354 disk-uuid[658]: The operation has completed successfully. Sep 3 23:26:45.394772 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 3 23:26:45.459759 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 3 23:26:45.463127 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 3 23:26:45.492443 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 3 23:26:45.512899 sh[823]: Success Sep 3 23:26:45.560522 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 3 23:26:45.560553 kernel: device-mapper: uevent: version 1.0.3 Sep 3 23:26:45.565968 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 3 23:26:45.573936 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 3 23:26:46.403223 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Sep 3 23:26:46.408007 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 3 23:26:46.425439 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 3 23:26:46.451117 kernel: BTRFS: device fsid e8b97e78-d30f-4a41-b431-d82f3afef949 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (841) Sep 3 23:26:46.451144 kernel: BTRFS info (device dm-0): first mount of filesystem e8b97e78-d30f-4a41-b431-d82f3afef949 Sep 3 23:26:46.455493 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 3 23:26:47.285365 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 3 23:26:47.285439 kernel: BTRFS info (device dm-0): enabling free space tree Sep 3 23:26:47.362196 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 3 23:26:47.366137 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 3 23:26:47.373440 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 3 23:26:47.374277 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 3 23:26:47.397450 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 3 23:26:47.426929 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (876) Sep 3 23:26:47.436052 kernel: BTRFS info (device sda6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4 Sep 3 23:26:47.436080 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 3 23:26:47.481867 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 3 23:26:47.492001 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 3 23:26:47.518193 systemd-networkd[1004]: lo: Link UP Sep 3 23:26:47.518202 systemd-networkd[1004]: lo: Gained carrier Sep 3 23:26:47.518876 systemd-networkd[1004]: Enumeration completed Sep 3 23:26:47.520858 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 3 23:26:47.521078 systemd-networkd[1004]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 3 23:26:47.553508 kernel: BTRFS info (device sda6): turning on async discard Sep 3 23:26:47.553520 kernel: BTRFS info (device sda6): enabling free space tree Sep 3 23:26:47.521081 systemd-networkd[1004]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 3 23:26:47.527951 systemd[1]: Reached target network.target - Network. Sep 3 23:26:47.567219 kernel: BTRFS info (device sda6): last unmount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4 Sep 3 23:26:47.567790 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 3 23:26:47.576121 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 3 23:26:47.602920 kernel: mlx5_core 7ae2:00:02.0 enP31458s1: Link up Sep 3 23:26:47.634714 systemd-networkd[1004]: enP31458s1: Link UP Sep 3 23:26:47.638238 kernel: hv_netvsc 000d3ac4-0758-000d-3ac4-0758000d3ac4 eth0: Data path switched to VF: enP31458s1 Sep 3 23:26:47.637562 systemd-networkd[1004]: eth0: Link UP Sep 3 23:26:47.637688 systemd-networkd[1004]: eth0: Gained carrier Sep 3 23:26:47.637697 systemd-networkd[1004]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 3 23:26:47.657267 systemd-networkd[1004]: enP31458s1: Gained carrier Sep 3 23:26:47.670935 systemd-networkd[1004]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 3 23:26:49.202096 systemd-networkd[1004]: eth0: Gained IPv6LL Sep 3 23:26:49.680095 ignition[1012]: Ignition 2.21.0 Sep 3 23:26:49.680108 ignition[1012]: Stage: fetch-offline Sep 3 23:26:49.680175 ignition[1012]: no configs at "/usr/lib/ignition/base.d" Sep 3 23:26:49.680181 ignition[1012]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 3 23:26:49.690151 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 3 23:26:49.680255 ignition[1012]: parsed url from cmdline: "" Sep 3 23:26:49.698936 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 3 23:26:49.680257 ignition[1012]: no config URL provided Sep 3 23:26:49.680260 ignition[1012]: reading system config file "/usr/lib/ignition/user.ign" Sep 3 23:26:49.680265 ignition[1012]: no config at "/usr/lib/ignition/user.ign" Sep 3 23:26:49.680268 ignition[1012]: failed to fetch config: resource requires networking Sep 3 23:26:49.680466 ignition[1012]: Ignition finished successfully Sep 3 23:26:49.733735 ignition[1019]: Ignition 2.21.0 Sep 3 23:26:49.733748 ignition[1019]: Stage: fetch Sep 3 23:26:49.733895 ignition[1019]: no configs at "/usr/lib/ignition/base.d" Sep 3 23:26:49.733903 ignition[1019]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 3 23:26:49.733984 ignition[1019]: parsed url from cmdline: "" Sep 3 23:26:49.733987 ignition[1019]: no config URL provided Sep 3 23:26:49.733990 ignition[1019]: reading system config file "/usr/lib/ignition/user.ign" Sep 3 23:26:49.733995 ignition[1019]: no config at "/usr/lib/ignition/user.ign" Sep 3 23:26:49.734021 ignition[1019]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 3 23:26:49.806417 ignition[1019]: GET result: OK Sep 3 23:26:49.806496 ignition[1019]: config has been read from IMDS userdata Sep 3 23:26:49.806516 ignition[1019]: parsing config with SHA512: 5abe1dbe77fdd09aabef2f7bbd29fdfc7690991370ba64a7bae1133fe0e7b05df9df1efc811e9cd9192a2bc50763b8a791a5a19ffb4656f1c9a473e0055b3d5b Sep 3 23:26:49.813307 unknown[1019]: fetched base config from "system" Sep 3 23:26:49.813313 unknown[1019]: fetched base config from "system" Sep 3 23:26:49.813533 ignition[1019]: fetch: fetch complete Sep 3 23:26:49.813317 unknown[1019]: fetched user config from "azure" Sep 3 23:26:49.813537 ignition[1019]: fetch: fetch passed Sep 3 23:26:49.820209 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 3 23:26:49.813562 ignition[1019]: Ignition finished successfully Sep 3 23:26:49.825072 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 3 23:26:49.854210 ignition[1025]: Ignition 2.21.0 Sep 3 23:26:49.854222 ignition[1025]: Stage: kargs Sep 3 23:26:49.859475 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 3 23:26:49.854344 ignition[1025]: no configs at "/usr/lib/ignition/base.d" Sep 3 23:26:49.854351 ignition[1025]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 3 23:26:49.868211 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
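The Ignition fetch stage above shows where this machine's config actually comes from on Azure: no config URL is present on the kernel command line, so Ignition falls back to the IMDS userData endpoint (the GET http://169.254.169.254/metadata/instance/compute/userData?... lines) and then parses the result, logging its SHA512. A rough illustrative equivalent of that request, assuming the usual Azure IMDS behaviour (a "Metadata: true" request header and base64-encoded userData); this is a sketch, not Ignition's own fetcher:

#!/usr/bin/env python3
# Rough sketch of the IMDS userData fetch that Ignition logs above. Assumes
# the standard Azure IMDS "Metadata: true" header and base64-encoded userData.
import base64
import hashlib
import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/userData"
       "?api-version=2021-01-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    config = base64.b64decode(resp.read())

# Ignition logs a SHA512 of the config it parses; hashing the decoded bytes
# the same way gives a digest to compare against (assumption: the logged
# digest is taken over these raw bytes).
print(hashlib.sha512(config).hexdigest())
print(config.decode("utf-8", errors="replace"))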
Sep 3 23:26:49.856441 ignition[1025]: kargs: kargs passed Sep 3 23:26:49.856486 ignition[1025]: Ignition finished successfully Sep 3 23:26:49.894821 ignition[1032]: Ignition 2.21.0 Sep 3 23:26:49.894835 ignition[1032]: Stage: disks Sep 3 23:26:49.900211 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 3 23:26:49.894973 ignition[1032]: no configs at "/usr/lib/ignition/base.d" Sep 3 23:26:49.904670 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 3 23:26:49.894980 ignition[1032]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 3 23:26:49.913680 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 3 23:26:49.895838 ignition[1032]: disks: disks passed Sep 3 23:26:49.921796 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 3 23:26:49.895878 ignition[1032]: Ignition finished successfully Sep 3 23:26:49.930056 systemd[1]: Reached target sysinit.target - System Initialization. Sep 3 23:26:49.938398 systemd[1]: Reached target basic.target - Basic System. Sep 3 23:26:49.946988 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 3 23:26:50.085427 systemd-fsck[1041]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Sep 3 23:26:50.092360 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 3 23:26:50.102000 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 3 23:26:54.366922 kernel: EXT4-fs (sda9): mounted filesystem d953e3b7-a0cb-45f7-b3a7-216a9a578dda r/w with ordered data mode. Quota mode: none. Sep 3 23:26:54.367064 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 3 23:26:54.371021 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 3 23:26:54.438712 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 3 23:26:54.465927 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1055) Sep 3 23:26:54.476119 kernel: BTRFS info (device sda6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4 Sep 3 23:26:54.476151 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 3 23:26:54.476853 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 3 23:26:54.491954 kernel: BTRFS info (device sda6): turning on async discard Sep 3 23:26:54.491972 kernel: BTRFS info (device sda6): enabling free space tree Sep 3 23:26:54.492828 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 3 23:26:54.498109 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 3 23:26:54.498134 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 3 23:26:54.505984 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 3 23:26:54.511613 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 3 23:26:54.524244 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 3 23:26:55.729618 coreos-metadata[1073]: Sep 03 23:26:55.729 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 3 23:26:55.736143 coreos-metadata[1073]: Sep 03 23:26:55.736 INFO Fetch successful Sep 3 23:26:55.739751 coreos-metadata[1073]: Sep 03 23:26:55.739 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 3 23:26:55.754081 coreos-metadata[1073]: Sep 03 23:26:55.754 INFO Fetch successful Sep 3 23:26:55.758468 coreos-metadata[1073]: Sep 03 23:26:55.757 INFO wrote hostname ci-4372.1.0-n-e4e1aff60f to /sysroot/etc/hostname Sep 3 23:26:55.764297 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 3 23:26:56.480330 initrd-setup-root[1087]: cut: /sysroot/etc/passwd: No such file or directory Sep 3 23:26:56.584318 initrd-setup-root[1094]: cut: /sysroot/etc/group: No such file or directory Sep 3 23:26:56.588637 initrd-setup-root[1101]: cut: /sysroot/etc/shadow: No such file or directory Sep 3 23:26:56.623058 initrd-setup-root[1108]: cut: /sysroot/etc/gshadow: No such file or directory Sep 3 23:26:58.643431 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 3 23:26:58.654121 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 3 23:26:58.666368 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 3 23:26:58.678823 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 3 23:26:58.687690 kernel: BTRFS info (device sda6): last unmount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4 Sep 3 23:26:58.707391 ignition[1175]: INFO : Ignition 2.21.0 Sep 3 23:26:58.707391 ignition[1175]: INFO : Stage: mount Sep 3 23:26:58.714092 ignition[1175]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 3 23:26:58.714092 ignition[1175]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 3 23:26:58.714092 ignition[1175]: INFO : mount: mount passed Sep 3 23:26:58.714092 ignition[1175]: INFO : Ignition finished successfully Sep 3 23:26:58.711434 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 3 23:26:58.718405 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 3 23:26:58.727315 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 3 23:26:58.749008 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 3 23:26:58.774926 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1189) Sep 3 23:26:58.779835 kernel: BTRFS info (device sda6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4 Sep 3 23:26:58.784391 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 3 23:26:58.792691 kernel: BTRFS info (device sda6): turning on async discard Sep 3 23:26:58.792722 kernel: BTRFS info (device sda6): enabling free space tree Sep 3 23:26:58.793802 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 3 23:26:58.817893 ignition[1207]: INFO : Ignition 2.21.0 Sep 3 23:26:58.817893 ignition[1207]: INFO : Stage: files Sep 3 23:26:58.827106 ignition[1207]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 3 23:26:58.827106 ignition[1207]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 3 23:26:58.827106 ignition[1207]: DEBUG : files: compiled without relabeling support, skipping Sep 3 23:26:58.880088 ignition[1207]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 3 23:26:58.880088 ignition[1207]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 3 23:26:59.001996 ignition[1207]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 3 23:26:59.007243 ignition[1207]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 3 23:26:59.007243 ignition[1207]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 3 23:26:59.002303 unknown[1207]: wrote ssh authorized keys file for user: core Sep 3 23:26:59.131655 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 3 23:26:59.139476 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Sep 3 23:26:59.481797 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 3 23:26:59.910589 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 3 23:26:59.910589 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 3 23:26:59.910589 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 3 23:27:00.114348 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 3 23:27:00.179374 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 3 23:27:00.186366 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 3 23:27:00.186366 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 3 23:27:00.186366 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 3 23:27:00.186366 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 3 23:27:00.186366 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 3 23:27:00.186366 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 3 23:27:00.186366 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 3 23:27:00.186366 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 3 23:27:00.186366 
ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 3 23:27:00.186366 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 3 23:27:00.186366 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 3 23:27:00.264470 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 3 23:27:00.264470 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 3 23:27:00.264470 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Sep 3 23:27:00.650620 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 3 23:27:00.885898 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 3 23:27:00.885898 ignition[1207]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 3 23:27:00.983726 ignition[1207]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 3 23:27:00.991477 ignition[1207]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 3 23:27:00.991477 ignition[1207]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 3 23:27:00.991477 ignition[1207]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 3 23:27:00.991477 ignition[1207]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 3 23:27:01.023594 ignition[1207]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 3 23:27:01.023594 ignition[1207]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 3 23:27:01.023594 ignition[1207]: INFO : files: files passed Sep 3 23:27:01.023594 ignition[1207]: INFO : Ignition finished successfully Sep 3 23:27:00.999123 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 3 23:27:01.009406 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 3 23:27:01.032479 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 3 23:27:01.045205 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 3 23:27:01.049142 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
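The Ignition files stage above downloads each remote artifact (the helm tarball, the cilium-cli tarball, the kubernetes sysext image) and writes it under /sysroot before the system switches to the real root. A rough sketch of that fetch-and-write pattern, assuming a plain retry loop and standard-library HTTP rather than Ignition's real fetch logic, looks like this:

# Hedged sketch of the "GET ... attempt #N" / "writing file" steps above;
# not Ignition's implementation, just the same fetch-then-write idea.
import time
import urllib.request
from pathlib import Path

def fetch_to_sysroot(url: str, dest: str, sysroot: str = "/sysroot", attempts: int = 3) -> None:
    target = Path(sysroot) / dest.lstrip("/")
    target.parent.mkdir(parents=True, exist_ok=True)
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                target.write_bytes(resp.read())
            return
        except OSError:
            if attempt == attempts:
                raise
            time.sleep(2 ** attempt)  # back off between attempts

# For example, op(3) above corresponds roughly to:
# fetch_to_sysroot("https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz",
#                  "/opt/helm-v3.17.0-linux-arm64.tar.gz")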
Sep 3 23:27:01.087892 initrd-setup-root-after-ignition[1235]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 3 23:27:01.087892 initrd-setup-root-after-ignition[1235]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 3 23:27:01.100091 initrd-setup-root-after-ignition[1239]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 3 23:27:01.100322 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 3 23:27:01.111671 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 3 23:27:01.120655 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 3 23:27:01.164226 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 3 23:27:01.164322 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 3 23:27:01.173071 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 3 23:27:01.181481 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 3 23:27:01.189302 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 3 23:27:01.189867 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 3 23:27:01.220959 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 3 23:27:01.228263 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 3 23:27:01.252688 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 3 23:27:01.257270 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 3 23:27:01.265740 systemd[1]: Stopped target timers.target - Timer Units. Sep 3 23:27:01.273968 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 3 23:27:01.274049 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 3 23:27:01.285315 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 3 23:27:01.289247 systemd[1]: Stopped target basic.target - Basic System. Sep 3 23:27:01.296840 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 3 23:27:01.304747 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 3 23:27:01.312344 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 3 23:27:01.320741 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 3 23:27:01.329127 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 3 23:27:01.337120 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 3 23:27:01.345976 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 3 23:27:01.353633 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 3 23:27:01.361815 systemd[1]: Stopped target swap.target - Swaps. Sep 3 23:27:01.368576 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 3 23:27:01.368657 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 3 23:27:01.378958 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 3 23:27:01.383349 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 3 23:27:01.391512 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Sep 3 23:27:01.395373 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 3 23:27:01.400174 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 3 23:27:01.400254 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 3 23:27:01.413283 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 3 23:27:01.413360 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 3 23:27:01.418239 systemd[1]: ignition-files.service: Deactivated successfully. Sep 3 23:27:01.418303 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 3 23:27:01.425618 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 3 23:27:01.483792 ignition[1259]: INFO : Ignition 2.21.0 Sep 3 23:27:01.483792 ignition[1259]: INFO : Stage: umount Sep 3 23:27:01.483792 ignition[1259]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 3 23:27:01.483792 ignition[1259]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 3 23:27:01.483792 ignition[1259]: INFO : umount: umount passed Sep 3 23:27:01.483792 ignition[1259]: INFO : Ignition finished successfully Sep 3 23:27:01.425679 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 3 23:27:01.440089 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 3 23:27:01.451552 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 3 23:27:01.451653 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 3 23:27:01.468090 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 3 23:27:01.478733 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 3 23:27:01.478840 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 3 23:27:01.486761 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 3 23:27:01.486837 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 3 23:27:01.498794 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 3 23:27:01.498856 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 3 23:27:01.512199 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 3 23:27:01.512905 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 3 23:27:01.512993 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 3 23:27:01.524870 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 3 23:27:01.524905 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 3 23:27:01.532332 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 3 23:27:01.532357 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 3 23:27:01.539833 systemd[1]: Stopped target network.target - Network. Sep 3 23:27:01.547577 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 3 23:27:01.547608 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 3 23:27:01.556455 systemd[1]: Stopped target paths.target - Path Units. Sep 3 23:27:01.559942 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 3 23:27:01.563946 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 3 23:27:01.569362 systemd[1]: Stopped target slices.target - Slice Units. 
Sep 3 23:27:01.577018 systemd[1]: Stopped target sockets.target - Socket Units. Sep 3 23:27:01.584383 systemd[1]: iscsid.socket: Deactivated successfully. Sep 3 23:27:01.584408 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 3 23:27:01.592329 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 3 23:27:01.592350 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 3 23:27:01.600106 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 3 23:27:01.600138 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 3 23:27:01.608078 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 3 23:27:01.608101 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 3 23:27:01.616171 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 3 23:27:01.623244 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 3 23:27:01.634915 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 3 23:27:01.634981 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 3 23:27:01.644065 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 3 23:27:01.644142 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 3 23:27:01.656012 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 3 23:27:01.656155 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 3 23:27:01.656237 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 3 23:27:01.667638 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 3 23:27:01.667871 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 3 23:27:01.830678 kernel: hv_netvsc 000d3ac4-0758-000d-3ac4-0758000d3ac4 eth0: Data path switched from VF: enP31458s1 Sep 3 23:27:01.667973 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 3 23:27:01.677964 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 3 23:27:01.686861 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 3 23:27:01.686905 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 3 23:27:01.696650 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 3 23:27:01.696693 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 3 23:27:01.708019 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 3 23:27:01.718561 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 3 23:27:01.718613 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 3 23:27:01.726504 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 3 23:27:01.726536 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 3 23:27:01.737666 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 3 23:27:01.737703 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 3 23:27:01.742001 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 3 23:27:01.742031 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 3 23:27:01.753551 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 3 23:27:01.758935 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Sep 3 23:27:01.758982 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 3 23:27:01.787567 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 3 23:27:01.789769 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 3 23:27:01.797098 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 3 23:27:01.797129 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 3 23:27:01.804810 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 3 23:27:01.804828 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 3 23:27:01.812685 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 3 23:27:01.812719 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 3 23:27:01.830734 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 3 23:27:01.830775 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 3 23:27:01.838437 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 3 23:27:01.838467 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 3 23:27:01.851368 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 3 23:27:01.862960 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 3 23:27:01.863013 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 3 23:27:01.872673 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 3 23:27:01.872717 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 3 23:27:01.878305 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 3 23:27:01.878343 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 3 23:27:01.887539 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 3 23:27:01.887581 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 3 23:27:01.887614 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 3 23:27:01.887787 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 3 23:27:01.889036 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 3 23:27:01.921688 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 3 23:27:01.921789 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 3 23:27:01.930763 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 3 23:27:01.940730 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 3 23:27:02.002078 systemd[1]: Switching root. Sep 3 23:27:02.155170 systemd-journald[225]: Journal stopped Sep 3 23:27:15.656486 systemd-journald[225]: Received SIGTERM from PID 1 (systemd). 
Sep 3 23:27:15.656509 kernel: SELinux: policy capability network_peer_controls=1 Sep 3 23:27:15.656518 kernel: SELinux: policy capability open_perms=1 Sep 3 23:27:15.656524 kernel: SELinux: policy capability extended_socket_class=1 Sep 3 23:27:15.656530 kernel: SELinux: policy capability always_check_network=0 Sep 3 23:27:15.656535 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 3 23:27:15.656541 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 3 23:27:15.656546 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 3 23:27:15.656551 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 3 23:27:15.656556 kernel: SELinux: policy capability userspace_initial_context=0 Sep 3 23:27:15.656563 systemd[1]: Successfully loaded SELinux policy in 237.378ms. Sep 3 23:27:15.656569 kernel: audit: type=1403 audit(1756942024.115:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 3 23:27:15.656575 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.722ms. Sep 3 23:27:15.656581 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 3 23:27:15.656588 systemd[1]: Detected virtualization microsoft. Sep 3 23:27:15.656594 systemd[1]: Detected architecture arm64. Sep 3 23:27:15.656600 systemd[1]: Detected first boot. Sep 3 23:27:15.656606 systemd[1]: Hostname set to <ci-4372.1.0-n-e4e1aff60f>. Sep 3 23:27:15.656612 systemd[1]: Initializing machine ID from random generator. Sep 3 23:27:15.656618 zram_generator::config[1302]: No configuration found. Sep 3 23:27:15.656624 kernel: NET: Registered PF_VSOCK protocol family Sep 3 23:27:15.656630 systemd[1]: Populated /etc with preset unit settings. Sep 3 23:27:15.656637 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 3 23:27:15.656644 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 3 23:27:15.656651 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 3 23:27:15.656657 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 3 23:27:15.656663 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 3 23:27:15.656669 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 3 23:27:15.656675 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 3 23:27:15.656681 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 3 23:27:15.656687 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 3 23:27:15.656693 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 3 23:27:15.656699 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 3 23:27:15.656705 systemd[1]: Created slice user.slice - User and Session Slice. Sep 3 23:27:15.656711 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 3 23:27:15.656717 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 3 23:27:15.656723 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 3 23:27:15.656730 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 3 23:27:15.656736 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 3 23:27:15.656742 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 3 23:27:15.656750 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 3 23:27:15.656756 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 3 23:27:15.656762 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 3 23:27:15.656768 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 3 23:27:15.656774 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 3 23:27:15.656781 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 3 23:27:15.656787 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 3 23:27:15.656794 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 3 23:27:15.656799 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 3 23:27:15.656806 systemd[1]: Reached target slices.target - Slice Units. Sep 3 23:27:15.656812 systemd[1]: Reached target swap.target - Swaps. Sep 3 23:27:15.656818 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 3 23:27:15.656823 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 3 23:27:15.656831 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 3 23:27:15.656837 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 3 23:27:15.656843 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 3 23:27:15.656849 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 3 23:27:15.656855 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 3 23:27:15.656862 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 3 23:27:15.656868 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 3 23:27:15.656874 systemd[1]: Mounting media.mount - External Media Directory... Sep 3 23:27:15.656880 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 3 23:27:15.656886 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 3 23:27:15.656892 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 3 23:27:15.656898 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 3 23:27:15.656904 systemd[1]: Reached target machines.target - Containers. Sep 3 23:27:15.656920 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 3 23:27:15.656927 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 3 23:27:15.656933 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 3 23:27:15.656939 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 3 23:27:15.656945 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Sep 3 23:27:15.656951 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 3 23:27:15.656957 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 3 23:27:15.656963 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 3 23:27:15.656970 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 3 23:27:15.656976 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 3 23:27:15.656982 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 3 23:27:15.656988 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 3 23:27:15.656994 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 3 23:27:15.657000 systemd[1]: Stopped systemd-fsck-usr.service. Sep 3 23:27:15.657007 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 3 23:27:15.657013 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 3 23:27:15.657020 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 3 23:27:15.657026 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 3 23:27:15.657032 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 3 23:27:15.657038 kernel: fuse: init (API version 7.41) Sep 3 23:27:15.657044 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 3 23:27:15.657050 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 3 23:27:15.657057 systemd[1]: verity-setup.service: Deactivated successfully. Sep 3 23:27:15.657063 systemd[1]: Stopped verity-setup.service. Sep 3 23:27:15.657069 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 3 23:27:15.657076 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 3 23:27:15.657081 kernel: loop: module loaded Sep 3 23:27:15.657087 systemd[1]: Mounted media.mount - External Media Directory. Sep 3 23:27:15.657093 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 3 23:27:15.657099 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 3 23:27:15.657115 systemd-journald[1382]: Collecting audit messages is disabled. Sep 3 23:27:15.657130 systemd-journald[1382]: Journal started Sep 3 23:27:15.657144 systemd-journald[1382]: Runtime Journal (/run/log/journal/bb273088adf44370a56d35d6a6be8fbb) is 8M, max 78.5M, 70.5M free. Sep 3 23:27:14.749657 systemd[1]: Queued start job for default target multi-user.target. Sep 3 23:27:14.754300 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 3 23:27:14.754644 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 3 23:27:14.754868 systemd[1]: systemd-journald.service: Consumed 2.278s CPU time. Sep 3 23:27:15.666098 systemd[1]: Started systemd-journald.service - Journal Service. Sep 3 23:27:15.667338 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 3 23:27:15.671314 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 3 23:27:15.676221 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Sep 3 23:27:15.676340 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 3 23:27:15.681597 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 3 23:27:15.683079 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 3 23:27:15.688029 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 3 23:27:15.688153 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 3 23:27:15.693543 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 3 23:27:15.693651 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 3 23:27:15.698274 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 3 23:27:15.698399 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 3 23:27:15.705334 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 3 23:27:15.706921 kernel: ACPI: bus type drm_connector registered Sep 3 23:27:15.710559 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 3 23:27:15.710679 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 3 23:27:15.715423 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 3 23:27:15.721222 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 3 23:27:15.725993 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 3 23:27:15.735679 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 3 23:27:15.746862 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 3 23:27:15.754993 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 3 23:27:15.770588 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 3 23:27:15.775097 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 3 23:27:15.775124 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 3 23:27:15.779669 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 3 23:27:15.786650 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 3 23:27:15.790572 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 3 23:27:15.822527 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 3 23:27:15.833583 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 3 23:27:15.838016 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 3 23:27:15.838919 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 3 23:27:15.843226 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 3 23:27:15.844108 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 3 23:27:15.850503 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 3 23:27:15.855837 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Sep 3 23:27:15.861295 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 3 23:27:15.866169 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 3 23:27:15.871465 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 3 23:27:15.881453 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 3 23:27:15.886227 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 3 23:27:15.894575 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 3 23:27:15.969018 systemd-journald[1382]: Time spent on flushing to /var/log/journal/bb273088adf44370a56d35d6a6be8fbb is 44.039ms for 946 entries. Sep 3 23:27:15.969018 systemd-journald[1382]: System Journal (/var/log/journal/bb273088adf44370a56d35d6a6be8fbb) is 11.8M, max 2.6G, 2.6G free. Sep 3 23:27:16.078979 systemd-journald[1382]: Received client request to flush runtime journal. Sep 3 23:27:16.079016 systemd-journald[1382]: /var/log/journal/bb273088adf44370a56d35d6a6be8fbb/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Sep 3 23:27:16.079034 systemd-journald[1382]: Rotating system journal. Sep 3 23:27:16.079049 kernel: loop0: detected capacity change from 0 to 138376 Sep 3 23:27:16.015501 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 3 23:27:16.016023 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 3 23:27:16.080271 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 3 23:27:16.112520 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 3 23:27:17.161343 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 3 23:27:17.167058 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 3 23:27:17.188928 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 3 23:27:17.258924 kernel: loop1: detected capacity change from 0 to 207008 Sep 3 23:27:17.353927 kernel: loop2: detected capacity change from 0 to 107312 Sep 3 23:27:17.490522 systemd-tmpfiles[1459]: ACLs are not supported, ignoring. Sep 3 23:27:17.490533 systemd-tmpfiles[1459]: ACLs are not supported, ignoring. Sep 3 23:27:17.524462 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 3 23:27:18.354935 kernel: loop3: detected capacity change from 0 to 28936 Sep 3 23:27:19.046293 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 3 23:27:19.052529 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 3 23:27:19.083384 systemd-udevd[1465]: Using default interface naming scheme 'v255'. Sep 3 23:27:19.445927 kernel: loop4: detected capacity change from 0 to 138376 Sep 3 23:27:19.461922 kernel: loop5: detected capacity change from 0 to 207008 Sep 3 23:27:19.475919 kernel: loop6: detected capacity change from 0 to 107312 Sep 3 23:27:19.487924 kernel: loop7: detected capacity change from 0 to 28936 Sep 3 23:27:19.495767 (sd-merge)[1467]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Sep 3 23:27:19.496128 (sd-merge)[1467]: Merged extensions into '/usr'. Sep 3 23:27:19.499538 systemd[1]: Reload requested from client PID 1440 ('systemd-sysext') (unit systemd-sysext.service)... Sep 3 23:27:19.499635 systemd[1]: Reloading... 
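The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-azure extension images onto /usr. As an illustrative sketch, assuming the conventional extension-release layout rather than anything taken from this boot, the merged extensions can be enumerated from /usr/lib/extension-release.d:

# List merged sysext images by reading their extension-release files
# (conventional systemd-sysext location; an assumption, not from this log).
from pathlib import Path

release_dir = Path("/usr/lib/extension-release.d")
for f in sorted(release_dir.glob("extension-release.*")):
    name = f.name.removeprefix("extension-release.")
    fields = dict(
        line.split("=", 1)
        for line in f.read_text().splitlines()
        if "=" in line and not line.startswith("#")
    )
    print(name, fields.get("ID", ""), fields.get("SYSEXT_LEVEL", ""))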
Sep 3 23:27:19.558269 zram_generator::config[1495]: No configuration found. Sep 3 23:27:19.678920 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 3 23:27:19.773734 systemd[1]: Reloading finished in 273 ms. Sep 3 23:27:19.802814 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 3 23:27:19.812808 systemd[1]: Starting ensure-sysext.service... Sep 3 23:27:19.816675 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 3 23:27:19.861748 systemd-tmpfiles[1549]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 3 23:27:19.861776 systemd-tmpfiles[1549]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 3 23:27:19.861978 systemd-tmpfiles[1549]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 3 23:27:19.862111 systemd-tmpfiles[1549]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 3 23:27:19.862528 systemd-tmpfiles[1549]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 3 23:27:19.862671 systemd-tmpfiles[1549]: ACLs are not supported, ignoring. Sep 3 23:27:19.862702 systemd-tmpfiles[1549]: ACLs are not supported, ignoring. Sep 3 23:27:19.890811 systemd[1]: Reload requested from client PID 1548 ('systemctl') (unit ensure-sysext.service)... Sep 3 23:27:19.890822 systemd[1]: Reloading... Sep 3 23:27:19.943952 zram_generator::config[1574]: No configuration found. Sep 3 23:27:19.987292 systemd-tmpfiles[1549]: Detected autofs mount point /boot during canonicalization of boot. Sep 3 23:27:19.987303 systemd-tmpfiles[1549]: Skipping /boot Sep 3 23:27:19.994512 systemd-tmpfiles[1549]: Detected autofs mount point /boot during canonicalization of boot. Sep 3 23:27:19.994523 systemd-tmpfiles[1549]: Skipping /boot Sep 3 23:27:20.008187 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 3 23:27:20.068456 systemd[1]: Reloading finished in 177 ms. Sep 3 23:27:20.093693 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 3 23:27:20.111295 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 3 23:27:20.226101 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 3 23:27:20.240327 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 3 23:27:20.246821 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 3 23:27:20.251721 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 3 23:27:20.257790 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 3 23:27:20.259660 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 3 23:27:20.266081 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 3 23:27:20.273074 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 3 23:27:20.278370 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 3 23:27:20.278540 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 3 23:27:20.279441 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 3 23:27:20.279659 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 3 23:27:20.284506 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 3 23:27:20.284687 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 3 23:27:20.290040 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 3 23:27:20.290148 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 3 23:27:20.298547 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 3 23:27:20.299467 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 3 23:27:20.308089 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 3 23:27:20.316084 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 3 23:27:20.320244 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 3 23:27:20.320325 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 3 23:27:20.323013 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 3 23:27:20.328528 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 3 23:27:20.328638 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 3 23:27:20.333455 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 3 23:27:20.333562 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 3 23:27:20.339011 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 3 23:27:20.339114 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 3 23:27:20.352885 systemd[1]: Finished ensure-sysext.service. Sep 3 23:27:20.357586 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv... Sep 3 23:27:20.363385 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 3 23:27:20.364458 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 3 23:27:20.375021 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 3 23:27:20.381956 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 3 23:27:20.387970 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 3 23:27:20.391845 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 3 23:27:20.391875 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 3 23:27:20.391926 systemd[1]: Reached target time-set.target - System Time Set. Sep 3 23:27:20.396144 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 3 23:27:20.402025 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 3 23:27:20.406665 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 3 23:27:20.406778 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 3 23:27:20.410903 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 3 23:27:20.411026 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 3 23:27:20.415873 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 3 23:27:20.416584 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 3 23:27:20.424148 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 3 23:27:20.424195 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 3 23:27:20.425385 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 3 23:27:20.483312 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 3 23:27:20.636487 systemd-resolved[1638]: Positive Trust Anchors: Sep 3 23:27:20.636753 systemd-resolved[1638]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 3 23:27:20.636776 systemd-resolved[1638]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 3 23:27:20.733716 systemd-resolved[1638]: Using system hostname 'ci-4372.1.0-n-e4e1aff60f'. Sep 3 23:27:20.735071 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 3 23:27:20.739817 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 3 23:27:20.862996 augenrules[1686]: No rules Sep 3 23:27:20.864205 systemd[1]: audit-rules.service: Deactivated successfully. Sep 3 23:27:20.864388 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 3 23:27:20.931773 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 3 23:27:20.974348 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 3 23:27:20.983053 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 3 23:27:21.038468 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 3 23:27:21.187509 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped. Sep 3 23:27:21.192826 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 3 23:27:21.221055 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 3 23:27:21.223060 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 3 23:27:21.235696 kernel: mousedev: PS/2 mouse device common for all mice Sep 3 23:27:21.235757 kernel: hv_vmbus: registering driver hv_balloon Sep 3 23:27:21.235771 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#115 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 3 23:27:21.239614 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 3 23:27:21.247440 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 3 23:27:21.247500 kernel: hv_balloon: Memory hot add disabled on ARM64 Sep 3 23:27:21.246142 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 3 23:27:21.274612 kernel: hv_vmbus: registering driver hyperv_fb Sep 3 23:27:21.274672 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 3 23:27:21.280757 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 3 23:27:21.284056 kernel: Console: switching to colour dummy device 80x25 Sep 3 23:27:21.290267 kernel: Console: switching to colour frame buffer device 128x48 Sep 3 23:27:21.329729 systemd-networkd[1700]: lo: Link UP Sep 3 23:27:21.329740 systemd-networkd[1700]: lo: Gained carrier Sep 3 23:27:21.330851 systemd-networkd[1700]: Enumeration completed Sep 3 23:27:21.330947 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 3 23:27:21.331978 systemd-networkd[1700]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 3 23:27:21.332049 systemd-networkd[1700]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 3 23:27:21.335697 systemd[1]: Reached target network.target - Network. Sep 3 23:27:21.340412 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 3 23:27:21.347029 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 3 23:27:21.357830 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 3 23:27:21.358573 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 3 23:27:21.365410 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 3 23:27:21.400928 kernel: mlx5_core 7ae2:00:02.0 enP31458s1: Link up Sep 3 23:27:21.422367 kernel: hv_netvsc 000d3ac4-0758-000d-3ac4-0758000d3ac4 eth0: Data path switched to VF: enP31458s1 Sep 3 23:27:21.422410 systemd-networkd[1700]: enP31458s1: Link UP Sep 3 23:27:21.422521 systemd-networkd[1700]: eth0: Link UP Sep 3 23:27:21.422523 systemd-networkd[1700]: eth0: Gained carrier Sep 3 23:27:21.422537 systemd-networkd[1700]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 3 23:27:21.426113 systemd-networkd[1700]: enP31458s1: Gained carrier Sep 3 23:27:21.435951 systemd-networkd[1700]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 3 23:27:21.504462 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 3 23:27:21.541824 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Sep 3 23:27:21.547452 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 3 23:27:21.676928 kernel: MACsec IEEE 802.1AE Sep 3 23:27:21.766954 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 3 23:27:22.674058 systemd-networkd[1700]: eth0: Gained IPv6LL Sep 3 23:27:22.676982 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 3 23:27:22.683438 systemd[1]: Reached target network-online.target - Network is Online. Sep 3 23:27:24.190446 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 3 23:27:25.352954 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 3 23:27:25.358375 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 3 23:27:34.705714 ldconfig[1435]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 3 23:27:34.725681 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 3 23:27:34.732484 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 3 23:27:34.780588 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 3 23:27:34.785028 systemd[1]: Reached target sysinit.target - System Initialization. Sep 3 23:27:34.789215 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 3 23:27:34.793741 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 3 23:27:34.798515 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 3 23:27:34.802455 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 3 23:27:34.807352 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 3 23:27:34.812171 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 3 23:27:34.812192 systemd[1]: Reached target paths.target - Path Units. Sep 3 23:27:34.815584 systemd[1]: Reached target timers.target - Timer Units. Sep 3 23:27:34.883388 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 3 23:27:34.888391 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 3 23:27:34.893706 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 3 23:27:34.898936 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 3 23:27:34.903846 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 3 23:27:34.916412 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 3 23:27:34.921098 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 3 23:27:34.926686 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 3 23:27:34.931502 systemd[1]: Reached target sockets.target - Socket Units. Sep 3 23:27:34.935159 systemd[1]: Reached target basic.target - Basic System. 
Sep 3 23:27:34.938786 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 3 23:27:34.938879 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 3 23:27:35.004878 systemd[1]: Starting chronyd.service - NTP client/server... Sep 3 23:27:35.014573 systemd[1]: Starting containerd.service - containerd container runtime... Sep 3 23:27:35.021551 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 3 23:27:35.031303 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 3 23:27:35.038076 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 3 23:27:35.044508 (chronyd)[1843]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Sep 3 23:27:35.046626 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 3 23:27:35.051351 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 3 23:27:35.055209 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 3 23:27:35.056136 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Sep 3 23:27:35.061640 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Sep 3 23:27:35.062629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:27:35.069020 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 3 23:27:35.073533 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 3 23:27:35.078811 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 3 23:27:35.084686 jq[1851]: false Sep 3 23:27:35.086804 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 3 23:27:35.093265 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 3 23:27:35.110125 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 3 23:27:35.114591 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 3 23:27:35.114949 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 3 23:27:35.121087 KVP[1853]: KVP starting; pid is:1853 Sep 3 23:27:35.122034 chronyd[1868]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Sep 3 23:27:35.124864 systemd[1]: Starting update-engine.service - Update Engine... Sep 3 23:27:35.128006 kernel: hv_utils: KVP IC version 4.0 Sep 3 23:27:35.129004 KVP[1853]: KVP LIC Version: 3.1 Sep 3 23:27:35.131057 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 3 23:27:35.138950 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 3 23:27:35.139831 jq[1870]: true Sep 3 23:27:35.146015 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 3 23:27:35.146158 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Sep 3 23:27:35.148606 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 3 23:27:35.148755 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 3 23:27:35.168664 jq[1878]: true Sep 3 23:27:35.194000 extend-filesystems[1852]: Found /dev/sda6 Sep 3 23:27:35.210160 chronyd[1868]: Timezone right/UTC failed leap second check, ignoring Sep 3 23:27:35.210287 chronyd[1868]: Loaded seccomp filter (level 2) Sep 3 23:27:35.212508 systemd[1]: Started chronyd.service - NTP client/server. Sep 3 23:27:35.217724 systemd[1]: motdgen.service: Deactivated successfully. Sep 3 23:27:35.219305 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 3 23:27:35.219700 (ntainerd)[1905]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 3 23:27:35.231941 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 3 23:27:35.245178 systemd-logind[1864]: New seat seat0. Sep 3 23:27:35.247373 systemd-logind[1864]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 3 23:27:35.247586 systemd[1]: Started systemd-logind.service - User Login Management. Sep 3 23:27:35.258502 extend-filesystems[1852]: Found /dev/sda9 Sep 3 23:27:35.264236 extend-filesystems[1852]: Checking size of /dev/sda9 Sep 3 23:27:35.269145 update_engine[1866]: I20250903 23:27:35.266047 1866 main.cc:92] Flatcar Update Engine starting Sep 3 23:27:35.312170 bash[1901]: Updated "/home/core/.ssh/authorized_keys" Sep 3 23:27:35.315418 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 3 23:27:35.322563 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 3 23:27:35.329950 extend-filesystems[1852]: Old size kept for /dev/sda9 Sep 3 23:27:35.334055 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 3 23:27:35.334212 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 3 23:27:35.342593 tar[1876]: linux-arm64/LICENSE Sep 3 23:27:35.343832 tar[1876]: linux-arm64/helm Sep 3 23:27:35.764477 tar[1876]: linux-arm64/README.md Sep 3 23:27:35.781239 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 3 23:27:35.905323 sshd_keygen[1915]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 3 23:27:35.920513 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 3 23:27:35.930975 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 3 23:27:35.934789 dbus-daemon[1846]: [system] SELinux support is enabled Sep 3 23:27:35.936142 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 3 23:27:35.942684 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 3 23:27:35.946917 update_engine[1866]: I20250903 23:27:35.946861 1866 update_check_scheduler.cc:74] Next update check in 3m25s Sep 3 23:27:35.951305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:27:35.956258 systemd[1]: issuegen.service: Deactivated successfully. Sep 3 23:27:35.956414 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Sep 3 23:27:35.960523 (kubelet)[2007]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 3 23:27:35.965970 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 3 23:27:35.966013 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 3 23:27:35.967565 dbus-daemon[1846]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 3 23:27:35.976427 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 3 23:27:35.981593 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 3 23:27:35.981620 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 3 23:27:35.988052 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 3 23:27:35.996633 systemd[1]: Started update-engine.service - Update Engine. Sep 3 23:27:36.003125 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 3 23:27:36.052305 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 3 23:27:36.061846 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 3 23:27:36.070128 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 3 23:27:36.075424 systemd[1]: Reached target getty.target - Login Prompts. Sep 3 23:27:36.099362 coreos-metadata[1845]: Sep 03 23:27:36.099 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 3 23:27:36.101929 coreos-metadata[1845]: Sep 03 23:27:36.101 INFO Fetch successful Sep 3 23:27:36.102109 coreos-metadata[1845]: Sep 03 23:27:36.102 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 3 23:27:36.105354 coreos-metadata[1845]: Sep 03 23:27:36.105 INFO Fetch successful Sep 3 23:27:36.105610 coreos-metadata[1845]: Sep 03 23:27:36.105 INFO Fetching http://168.63.129.16/machine/ca25d869-59eb-45ae-834f-7dde6b276c05/b0f3e605%2D5c02%2D4c94%2Da5d8%2D9d5fa1aeafd2.%5Fci%2D4372.1.0%2Dn%2De4e1aff60f?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 3 23:27:36.106611 coreos-metadata[1845]: Sep 03 23:27:36.106 INFO Fetch successful Sep 3 23:27:36.106806 coreos-metadata[1845]: Sep 03 23:27:36.106 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 3 23:27:36.114047 coreos-metadata[1845]: Sep 03 23:27:36.114 INFO Fetch successful Sep 3 23:27:36.130434 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 3 23:27:36.135835 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
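The metadata fetches above touch two Azure endpoints: the WireServer at 168.63.129.16 and the Instance Metadata Service at 169.254.169.254. A minimal sketch of the vmSize query that coreos-metadata reports, assuming it runs on the Azure VM itself (IMDS answers only link-local requests that carry the Metadata: true header):

import urllib.request

# Same request coreos-metadata logs above; IMDS requires the "Metadata: true" header.
IMDS_VMSIZE = ("http://169.254.169.254/metadata/instance/compute/vmSize"
               "?api-version=2017-08-01&format=text")

req = urllib.request.Request(IMDS_VMSIZE, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read().decode())  # plain-text VM size of this instance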
Sep 3 23:27:36.308064 locksmithd[2017]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 3 23:27:36.326926 kubelet[2007]: E0903 23:27:36.326862 2007 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 3 23:27:36.329959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 3 23:27:36.330069 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 3 23:27:36.330498 systemd[1]: kubelet.service: Consumed 537ms CPU time, 255.5M memory peak. Sep 3 23:27:36.489125 containerd[1905]: time="2025-09-03T23:27:36Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 3 23:27:36.489702 containerd[1905]: time="2025-09-03T23:27:36.489670088Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 3 23:27:36.495440 containerd[1905]: time="2025-09-03T23:27:36.495408816Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.776µs" Sep 3 23:27:36.496526 containerd[1905]: time="2025-09-03T23:27:36.495514704Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 3 23:27:36.496526 containerd[1905]: time="2025-09-03T23:27:36.495539776Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 3 23:27:36.496526 containerd[1905]: time="2025-09-03T23:27:36.495673504Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 3 23:27:36.496526 containerd[1905]: time="2025-09-03T23:27:36.495685288Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 3 23:27:36.496526 containerd[1905]: time="2025-09-03T23:27:36.495702256Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 3 23:27:36.496526 containerd[1905]: time="2025-09-03T23:27:36.495736576Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 3 23:27:36.496526 containerd[1905]: time="2025-09-03T23:27:36.495745736Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 3 23:27:36.496526 containerd[1905]: time="2025-09-03T23:27:36.495886216Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 3 23:27:36.496526 containerd[1905]: time="2025-09-03T23:27:36.495894664Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 3 23:27:36.496526 containerd[1905]: time="2025-09-03T23:27:36.495901576Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 3 23:27:36.496526 containerd[1905]: 
time="2025-09-03T23:27:36.495929784Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 3 23:27:36.496526 containerd[1905]: time="2025-09-03T23:27:36.495992496Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 3 23:27:36.496706 containerd[1905]: time="2025-09-03T23:27:36.496140152Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 3 23:27:36.496706 containerd[1905]: time="2025-09-03T23:27:36.496158856Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 3 23:27:36.496706 containerd[1905]: time="2025-09-03T23:27:36.496165592Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 3 23:27:36.496706 containerd[1905]: time="2025-09-03T23:27:36.496195208Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 3 23:27:36.496706 containerd[1905]: time="2025-09-03T23:27:36.496354656Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 3 23:27:36.496706 containerd[1905]: time="2025-09-03T23:27:36.496403512Z" level=info msg="metadata content store policy set" policy=shared Sep 3 23:27:36.513835 containerd[1905]: time="2025-09-03T23:27:36.513812384Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 3 23:27:36.513967 containerd[1905]: time="2025-09-03T23:27:36.513953912Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 3 23:27:36.514093 containerd[1905]: time="2025-09-03T23:27:36.514078880Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 3 23:27:36.514157 containerd[1905]: time="2025-09-03T23:27:36.514145952Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 3 23:27:36.514201 containerd[1905]: time="2025-09-03T23:27:36.514191376Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 3 23:27:36.514253 containerd[1905]: time="2025-09-03T23:27:36.514244248Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 3 23:27:36.514313 containerd[1905]: time="2025-09-03T23:27:36.514293992Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 3 23:27:36.514360 containerd[1905]: time="2025-09-03T23:27:36.514349624Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 3 23:27:36.514413 containerd[1905]: time="2025-09-03T23:27:36.514402472Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 3 23:27:36.514451 containerd[1905]: time="2025-09-03T23:27:36.514441872Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 3 23:27:36.514496 containerd[1905]: time="2025-09-03T23:27:36.514487192Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 3 23:27:36.514531 containerd[1905]: time="2025-09-03T23:27:36.514524624Z" level=info 
msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 3 23:27:36.514689 containerd[1905]: time="2025-09-03T23:27:36.514675896Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 3 23:27:36.514770 containerd[1905]: time="2025-09-03T23:27:36.514756192Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 3 23:27:36.514820 containerd[1905]: time="2025-09-03T23:27:36.514810816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 3 23:27:36.514863 containerd[1905]: time="2025-09-03T23:27:36.514853208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 3 23:27:36.514931 containerd[1905]: time="2025-09-03T23:27:36.514919744Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 3 23:27:36.514988 containerd[1905]: time="2025-09-03T23:27:36.514979800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 3 23:27:36.515040 containerd[1905]: time="2025-09-03T23:27:36.515029360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 3 23:27:36.515092 containerd[1905]: time="2025-09-03T23:27:36.515081984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 3 23:27:36.515148 containerd[1905]: time="2025-09-03T23:27:36.515133656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 3 23:27:36.515195 containerd[1905]: time="2025-09-03T23:27:36.515185808Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 3 23:27:36.515242 containerd[1905]: time="2025-09-03T23:27:36.515231304Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 3 23:27:36.515347 containerd[1905]: time="2025-09-03T23:27:36.515330360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 3 23:27:36.515409 containerd[1905]: time="2025-09-03T23:27:36.515400368Z" level=info msg="Start snapshots syncer" Sep 3 23:27:36.515481 containerd[1905]: time="2025-09-03T23:27:36.515469424Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 3 23:27:36.515709 containerd[1905]: time="2025-09-03T23:27:36.515675904Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 3 23:27:36.515851 containerd[1905]: time="2025-09-03T23:27:36.515837128Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 3 23:27:36.515998 containerd[1905]: time="2025-09-03T23:27:36.515985192Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 3 23:27:36.516168 containerd[1905]: time="2025-09-03T23:27:36.516154232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 3 23:27:36.516247 containerd[1905]: time="2025-09-03T23:27:36.516236792Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 3 23:27:36.516284 containerd[1905]: time="2025-09-03T23:27:36.516277664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 3 23:27:36.516341 containerd[1905]: time="2025-09-03T23:27:36.516330504Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 3 23:27:36.516380 containerd[1905]: time="2025-09-03T23:27:36.516371016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 3 23:27:36.516433 containerd[1905]: time="2025-09-03T23:27:36.516422680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 3 23:27:36.516475 containerd[1905]: time="2025-09-03T23:27:36.516465024Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 3 23:27:36.516547 containerd[1905]: time="2025-09-03T23:27:36.516537360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 3 23:27:36.516603 containerd[1905]: 
time="2025-09-03T23:27:36.516594000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 3 23:27:36.516644 containerd[1905]: time="2025-09-03T23:27:36.516634240Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 3 23:27:36.516739 containerd[1905]: time="2025-09-03T23:27:36.516724880Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 3 23:27:36.516799 containerd[1905]: time="2025-09-03T23:27:36.516788384Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 3 23:27:36.516832 containerd[1905]: time="2025-09-03T23:27:36.516824040Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 3 23:27:36.516930 containerd[1905]: time="2025-09-03T23:27:36.516861936Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 3 23:27:36.516930 containerd[1905]: time="2025-09-03T23:27:36.516870144Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 3 23:27:36.516930 containerd[1905]: time="2025-09-03T23:27:36.516879264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 3 23:27:36.516930 containerd[1905]: time="2025-09-03T23:27:36.516887488Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 3 23:27:36.516930 containerd[1905]: time="2025-09-03T23:27:36.516900560Z" level=info msg="runtime interface created" Sep 3 23:27:36.516930 containerd[1905]: time="2025-09-03T23:27:36.516904136Z" level=info msg="created NRI interface" Sep 3 23:27:36.517040 containerd[1905]: time="2025-09-03T23:27:36.517029008Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 3 23:27:36.517081 containerd[1905]: time="2025-09-03T23:27:36.517071640Z" level=info msg="Connect containerd service" Sep 3 23:27:36.517148 containerd[1905]: time="2025-09-03T23:27:36.517139144Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 3 23:27:36.518305 containerd[1905]: time="2025-09-03T23:27:36.518262416Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 3 23:27:38.603368 containerd[1905]: time="2025-09-03T23:27:38.602924664Z" level=info msg="Start subscribing containerd event" Sep 3 23:27:38.603368 containerd[1905]: time="2025-09-03T23:27:38.602980448Z" level=info msg="Start recovering state" Sep 3 23:27:38.603368 containerd[1905]: time="2025-09-03T23:27:38.603061184Z" level=info msg="Start event monitor" Sep 3 23:27:38.603368 containerd[1905]: time="2025-09-03T23:27:38.603071216Z" level=info msg="Start cni network conf syncer for default" Sep 3 23:27:38.603368 containerd[1905]: time="2025-09-03T23:27:38.603076376Z" level=info msg="Start streaming server" Sep 3 23:27:38.603368 containerd[1905]: time="2025-09-03T23:27:38.603084088Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 3 23:27:38.603368 containerd[1905]: time="2025-09-03T23:27:38.603089384Z" level=info 
msg="runtime interface starting up..." Sep 3 23:27:38.603368 containerd[1905]: time="2025-09-03T23:27:38.603092968Z" level=info msg="starting plugins..." Sep 3 23:27:38.603368 containerd[1905]: time="2025-09-03T23:27:38.603103704Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 3 23:27:38.603975 containerd[1905]: time="2025-09-03T23:27:38.603953616Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 3 23:27:38.604080 containerd[1905]: time="2025-09-03T23:27:38.604067016Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 3 23:27:38.604176 containerd[1905]: time="2025-09-03T23:27:38.604164032Z" level=info msg="containerd successfully booted in 2.115346s" Sep 3 23:27:38.604323 systemd[1]: Started containerd.service - containerd container runtime. Sep 3 23:27:38.609845 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 3 23:27:38.615455 systemd[1]: Startup finished in 1.580s (kernel) + 22.535s (initrd) + 34.736s (userspace) = 58.852s. Sep 3 23:27:39.913414 login[2023]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Sep 3 23:27:39.914412 login[2024]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:27:39.924965 systemd-logind[1864]: New session 2 of user core. Sep 3 23:27:39.925873 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 3 23:27:39.927476 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 3 23:27:40.007043 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 3 23:27:40.011285 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 3 23:27:40.056034 (systemd)[2064]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 3 23:27:40.058011 systemd-logind[1864]: New session c1 of user core. 
Sep 3 23:27:40.074379 waagent[2015]: 2025-09-03T23:27:40.074319Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Sep 3 23:27:40.078330 waagent[2015]: 2025-09-03T23:27:40.078293Z INFO Daemon Daemon OS: flatcar 4372.1.0 Sep 3 23:27:40.081450 waagent[2015]: 2025-09-03T23:27:40.081424Z INFO Daemon Daemon Python: 3.11.12 Sep 3 23:27:40.084375 waagent[2015]: 2025-09-03T23:27:40.084330Z INFO Daemon Daemon Run daemon Sep 3 23:27:40.087205 waagent[2015]: 2025-09-03T23:27:40.087179Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4372.1.0' Sep 3 23:27:40.093341 waagent[2015]: 2025-09-03T23:27:40.093278Z INFO Daemon Daemon Using waagent for provisioning Sep 3 23:27:40.096982 waagent[2015]: 2025-09-03T23:27:40.096952Z INFO Daemon Daemon Activate resource disk Sep 3 23:27:40.100431 waagent[2015]: 2025-09-03T23:27:40.100404Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 3 23:27:40.107930 waagent[2015]: 2025-09-03T23:27:40.107889Z INFO Daemon Daemon Found device: None Sep 3 23:27:40.110904 waagent[2015]: 2025-09-03T23:27:40.110878Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 3 23:27:40.117031 waagent[2015]: 2025-09-03T23:27:40.116924Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 3 23:27:40.124993 waagent[2015]: 2025-09-03T23:27:40.124956Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 3 23:27:40.129029 waagent[2015]: 2025-09-03T23:27:40.129002Z INFO Daemon Daemon Running default provisioning handler Sep 3 23:27:40.136390 waagent[2015]: 2025-09-03T23:27:40.136348Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Sep 3 23:27:40.147679 waagent[2015]: 2025-09-03T23:27:40.146033Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 3 23:27:40.152782 waagent[2015]: 2025-09-03T23:27:40.152748Z INFO Daemon Daemon cloud-init is enabled: False Sep 3 23:27:40.156245 waagent[2015]: 2025-09-03T23:27:40.156221Z INFO Daemon Daemon Copying ovf-env.xml Sep 3 23:27:40.618212 waagent[2015]: 2025-09-03T23:27:40.618137Z INFO Daemon Daemon Successfully mounted dvd Sep 3 23:27:40.661834 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 3 23:27:40.662579 waagent[2015]: 2025-09-03T23:27:40.662314Z INFO Daemon Daemon Detect protocol endpoint Sep 3 23:27:40.665996 waagent[2015]: 2025-09-03T23:27:40.665962Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 3 23:27:40.670431 waagent[2015]: 2025-09-03T23:27:40.670406Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Sep 3 23:27:40.675233 waagent[2015]: 2025-09-03T23:27:40.675206Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 3 23:27:40.678901 waagent[2015]: 2025-09-03T23:27:40.678875Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 3 23:27:40.682630 waagent[2015]: 2025-09-03T23:27:40.682603Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 3 23:27:40.824718 waagent[2015]: 2025-09-03T23:27:40.824684Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 3 23:27:40.829600 waagent[2015]: 2025-09-03T23:27:40.829578Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 3 23:27:40.833086 waagent[2015]: 2025-09-03T23:27:40.833064Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 3 23:27:40.913726 login[2023]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:27:40.915167 systemd[2064]: Queued start job for default target default.target. Sep 3 23:27:40.918747 systemd-logind[1864]: New session 1 of user core. Sep 3 23:27:40.921348 systemd[2064]: Created slice app.slice - User Application Slice. Sep 3 23:27:40.921373 systemd[2064]: Reached target paths.target - Paths. Sep 3 23:27:40.921399 systemd[2064]: Reached target timers.target - Timers. Sep 3 23:27:40.922381 systemd[2064]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 3 23:27:40.930343 systemd[2064]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 3 23:27:40.930394 systemd[2064]: Reached target sockets.target - Sockets. Sep 3 23:27:40.930427 systemd[2064]: Reached target basic.target - Basic System. Sep 3 23:27:40.930447 systemd[2064]: Reached target default.target - Main User Target. Sep 3 23:27:40.930465 systemd[2064]: Startup finished in 868ms. Sep 3 23:27:40.930525 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 3 23:27:40.931513 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 3 23:27:40.932023 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 3 23:27:41.213649 waagent[2015]: 2025-09-03T23:27:41.213531Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 3 23:27:41.218031 waagent[2015]: 2025-09-03T23:27:41.217994Z INFO Daemon Daemon Forcing an update of the goal state. Sep 3 23:27:41.224472 waagent[2015]: 2025-09-03T23:27:41.224436Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 3 23:27:41.292296 waagent[2015]: 2025-09-03T23:27:41.292254Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 3 23:27:41.296549 waagent[2015]: 2025-09-03T23:27:41.296516Z INFO Daemon Sep 3 23:27:41.298511 waagent[2015]: 2025-09-03T23:27:41.298486Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 4f96ca73-a7a6-4bcf-9342-794d040709d6 eTag: 4460048496565548156 source: Fabric] Sep 3 23:27:41.306845 waagent[2015]: 2025-09-03T23:27:41.306815Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
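The wire-protocol detection above is an unauthenticated HTTP probe of the WireServer; the versions endpoint is the same URL coreos-metadata fetched earlier. A minimal sketch, assuming it runs inside the VM's Azure virtual network where 168.63.129.16 is reachable:

import urllib.request

# Returns an XML list of supported goal-state protocol versions; the daemon above
# reports a preferred version of 2015-04-05 and settles on 2012-11-30.
WIRESERVER_VERSIONS = "http://168.63.129.16/?comp=versions"

with urllib.request.urlopen(WIRESERVER_VERSIONS, timeout=5) as resp:
    print(resp.read().decode())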
Sep 3 23:27:41.311527 waagent[2015]: 2025-09-03T23:27:41.311501Z INFO Daemon Sep 3 23:27:41.313534 waagent[2015]: 2025-09-03T23:27:41.313511Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 3 23:27:41.326542 waagent[2015]: 2025-09-03T23:27:41.326306Z INFO Daemon Daemon Downloading artifacts profile blob Sep 3 23:27:41.389391 waagent[2015]: 2025-09-03T23:27:41.389340Z INFO Daemon Downloaded certificate {'thumbprint': '90BE7C1AE18601C6559AF1924531B351D009440E', 'hasPrivateKey': True} Sep 3 23:27:41.396728 waagent[2015]: 2025-09-03T23:27:41.396691Z INFO Daemon Fetch goal state completed Sep 3 23:27:41.408629 waagent[2015]: 2025-09-03T23:27:41.408582Z INFO Daemon Daemon Starting provisioning Sep 3 23:27:41.412595 waagent[2015]: 2025-09-03T23:27:41.412565Z INFO Daemon Daemon Handle ovf-env.xml. Sep 3 23:27:41.416003 waagent[2015]: 2025-09-03T23:27:41.415981Z INFO Daemon Daemon Set hostname [ci-4372.1.0-n-e4e1aff60f] Sep 3 23:27:41.486936 waagent[2015]: 2025-09-03T23:27:41.486506Z INFO Daemon Daemon Publish hostname [ci-4372.1.0-n-e4e1aff60f] Sep 3 23:27:41.491235 waagent[2015]: 2025-09-03T23:27:41.491196Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 3 23:27:41.495407 waagent[2015]: 2025-09-03T23:27:41.495377Z INFO Daemon Daemon Primary interface is [eth0] Sep 3 23:27:41.504953 systemd-networkd[1700]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 3 23:27:41.504959 systemd-networkd[1700]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 3 23:27:41.504983 systemd-networkd[1700]: eth0: DHCP lease lost Sep 3 23:27:41.509179 waagent[2015]: 2025-09-03T23:27:41.505516Z INFO Daemon Daemon Create user account if not exists Sep 3 23:27:41.509467 waagent[2015]: 2025-09-03T23:27:41.509438Z INFO Daemon Daemon User core already exists, skip useradd Sep 3 23:27:41.513484 waagent[2015]: 2025-09-03T23:27:41.513456Z INFO Daemon Daemon Configure sudoer Sep 3 23:27:41.521776 waagent[2015]: 2025-09-03T23:27:41.521743Z INFO Daemon Daemon Configure sshd Sep 3 23:27:41.529029 waagent[2015]: 2025-09-03T23:27:41.528995Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 3 23:27:41.537677 waagent[2015]: 2025-09-03T23:27:41.537650Z INFO Daemon Daemon Deploy ssh public key. Sep 3 23:27:41.537941 systemd-networkd[1700]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 3 23:27:42.734505 waagent[2015]: 2025-09-03T23:27:42.731337Z INFO Daemon Daemon Provisioning complete Sep 3 23:27:42.742897 waagent[2015]: 2025-09-03T23:27:42.742868Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 3 23:27:42.747559 waagent[2015]: 2025-09-03T23:27:42.747530Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Sep 3 23:27:42.754351 waagent[2015]: 2025-09-03T23:27:42.754326Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Sep 3 23:27:42.847663 waagent[2114]: 2025-09-03T23:27:42.847301Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Sep 3 23:27:42.847663 waagent[2114]: 2025-09-03T23:27:42.847395Z INFO ExtHandler ExtHandler OS: flatcar 4372.1.0 Sep 3 23:27:42.847663 waagent[2114]: 2025-09-03T23:27:42.847429Z INFO ExtHandler ExtHandler Python: 3.11.12 Sep 3 23:27:42.847663 waagent[2114]: 2025-09-03T23:27:42.847459Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Sep 3 23:27:42.988787 waagent[2114]: 2025-09-03T23:27:42.988667Z INFO ExtHandler ExtHandler Distro: flatcar-4372.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Sep 3 23:27:42.989106 waagent[2114]: 2025-09-03T23:27:42.989073Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 3 23:27:42.989238 waagent[2114]: 2025-09-03T23:27:42.989212Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 3 23:27:42.994404 waagent[2114]: 2025-09-03T23:27:42.994357Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 3 23:27:42.999936 waagent[2114]: 2025-09-03T23:27:42.998592Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 3 23:27:42.999936 waagent[2114]: 2025-09-03T23:27:42.998950Z INFO ExtHandler Sep 3 23:27:42.999936 waagent[2114]: 2025-09-03T23:27:42.999009Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 171ce0a2-0b25-4029-9dce-43a009d2cbc0 eTag: 4460048496565548156 source: Fabric] Sep 3 23:27:42.999936 waagent[2114]: 2025-09-03T23:27:42.999207Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Sep 3 23:27:42.999936 waagent[2114]: 2025-09-03T23:27:42.999565Z INFO ExtHandler Sep 3 23:27:42.999936 waagent[2114]: 2025-09-03T23:27:42.999603Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 3 23:27:43.002269 waagent[2114]: 2025-09-03T23:27:43.002242Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 3 23:27:43.050348 waagent[2114]: 2025-09-03T23:27:43.050312Z INFO ExtHandler Downloaded certificate {'thumbprint': '90BE7C1AE18601C6559AF1924531B351D009440E', 'hasPrivateKey': True} Sep 3 23:27:43.050757 waagent[2114]: 2025-09-03T23:27:43.050725Z INFO ExtHandler Fetch goal state completed Sep 3 23:27:43.060759 waagent[2114]: 2025-09-03T23:27:43.060730Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Sep 3 23:27:43.063928 waagent[2114]: 2025-09-03T23:27:43.063878Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2114 Sep 3 23:27:43.064121 waagent[2114]: 2025-09-03T23:27:43.064093Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 3 23:27:43.064434 waagent[2114]: 2025-09-03T23:27:43.064406Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Sep 3 23:27:43.065564 waagent[2114]: 2025-09-03T23:27:43.065530Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4372.1.0', '', 'Flatcar Container Linux by Kinvolk'] Sep 3 23:27:43.065973 waagent[2114]: 2025-09-03T23:27:43.065943Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4372.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Sep 3 23:27:43.066177 waagent[2114]: 2025-09-03T23:27:43.066149Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 3 23:27:43.066676 waagent[2114]: 2025-09-03T23:27:43.066645Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 3 23:27:43.383405 waagent[2114]: 2025-09-03T23:27:43.383321Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 3 23:27:43.383514 waagent[2114]: 2025-09-03T23:27:43.383488Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 3 23:27:43.387594 waagent[2114]: 2025-09-03T23:27:43.387568Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 3 23:27:43.392287 systemd[1]: Reload requested from client PID 2131 ('systemctl') (unit waagent.service)... Sep 3 23:27:43.392516 systemd[1]: Reloading... Sep 3 23:27:43.456944 zram_generator::config[2165]: No configuration found. Sep 3 23:27:43.522874 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 3 23:27:43.601240 systemd[1]: Reloading finished in 208 ms. Sep 3 23:27:43.611674 waagent[2114]: 2025-09-03T23:27:43.609633Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 3 23:27:43.611674 waagent[2114]: 2025-09-03T23:27:43.609738Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 3 23:27:44.570793 waagent[2114]: 2025-09-03T23:27:44.570716Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Sep 3 23:27:44.571095 waagent[2114]: 2025-09-03T23:27:44.571042Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Sep 3 23:27:44.571721 waagent[2114]: 2025-09-03T23:27:44.571660Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 3 23:27:44.571859 waagent[2114]: 2025-09-03T23:27:44.571768Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 3 23:27:44.571929 waagent[2114]: 2025-09-03T23:27:44.571894Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 3 23:27:44.572214 waagent[2114]: 2025-09-03T23:27:44.572075Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 3 23:27:44.572363 waagent[2114]: 2025-09-03T23:27:44.572325Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 3 23:27:44.572609 waagent[2114]: 2025-09-03T23:27:44.572537Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 3 23:27:44.572826 waagent[2114]: 2025-09-03T23:27:44.572798Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 3 23:27:44.572873 waagent[2114]: 2025-09-03T23:27:44.572856Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 3 23:27:44.572873 waagent[2114]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 3 23:27:44.572873 waagent[2114]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 3 23:27:44.572873 waagent[2114]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 3 23:27:44.572873 waagent[2114]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 3 23:27:44.572873 waagent[2114]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 3 23:27:44.572873 waagent[2114]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 3 23:27:44.573043 waagent[2114]: 2025-09-03T23:27:44.573001Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 3 23:27:44.573201 waagent[2114]: 2025-09-03T23:27:44.573110Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 3 23:27:44.573435 waagent[2114]: 2025-09-03T23:27:44.573407Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 3 23:27:44.573540 waagent[2114]: 2025-09-03T23:27:44.573505Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
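The routing table MonitorHandler prints comes straight from /proc/net/route, where destinations and gateways are little-endian hex (0114C80A decodes to 10.200.20.1, the DHCP gateway seen earlier). A small standard-library sketch that decodes the same table on any Linux host:

import socket
import struct

def hex_to_ip(hex_addr: str) -> str:
    # /proc/net/route stores IPv4 addresses little-endian; repack for inet_ntoa
    return socket.inet_ntoa(struct.pack("<I", int(hex_addr, 16)))

with open("/proc/net/route") as table:
    next(table)  # skip the header row
    for line in table:
        iface, dest, gateway = line.split()[:3]
        print(f"{iface}: destination={hex_to_ip(dest)} gateway={hex_to_ip(gateway)}")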
Sep 3 23:27:44.573725 waagent[2114]: 2025-09-03T23:27:44.573595Z INFO EnvHandler ExtHandler Configure routes Sep 3 23:27:44.574014 waagent[2114]: 2025-09-03T23:27:44.573985Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 3 23:27:44.574611 waagent[2114]: 2025-09-03T23:27:44.574538Z INFO EnvHandler ExtHandler Gateway:None Sep 3 23:27:44.574611 waagent[2114]: 2025-09-03T23:27:44.574588Z INFO EnvHandler ExtHandler Routes:None Sep 3 23:27:44.578864 waagent[2114]: 2025-09-03T23:27:44.578831Z INFO ExtHandler ExtHandler Sep 3 23:27:44.579231 waagent[2114]: 2025-09-03T23:27:44.579200Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 42d6cda8-149e-4973-9e0b-b4fe956c001f correlation ae0e1d41-02ff-4755-831b-8e563776e95d created: 2025-09-03T23:25:21.656204Z] Sep 3 23:27:44.579746 waagent[2114]: 2025-09-03T23:27:44.579715Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 3 23:27:44.580247 waagent[2114]: 2025-09-03T23:27:44.580218Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Sep 3 23:27:44.655132 waagent[2114]: 2025-09-03T23:27:44.655086Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Sep 3 23:27:44.655132 waagent[2114]: Try `iptables -h' or 'iptables --help' for more information.) Sep 3 23:27:44.655419 waagent[2114]: 2025-09-03T23:27:44.655384Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 6D837E4B-4184-4B27-8928-3C1ABCC95C6E;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Sep 3 23:27:44.828967 waagent[2114]: 2025-09-03T23:27:44.828871Z INFO MonitorHandler ExtHandler Network interfaces: Sep 3 23:27:44.828967 waagent[2114]: Executing ['ip', '-a', '-o', 'link']: Sep 3 23:27:44.828967 waagent[2114]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 3 23:27:44.828967 waagent[2114]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c4:07:58 brd ff:ff:ff:ff:ff:ff Sep 3 23:27:44.828967 waagent[2114]: 3: enP31458s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c4:07:58 brd ff:ff:ff:ff:ff:ff\ altname enP31458p0s2 Sep 3 23:27:44.828967 waagent[2114]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 3 23:27:44.828967 waagent[2114]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 3 23:27:44.828967 waagent[2114]: 2: eth0 inet 10.200.20.24/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 3 23:27:44.828967 waagent[2114]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 3 23:27:44.828967 waagent[2114]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 3 23:27:44.828967 waagent[2114]: 2: eth0 inet6 fe80::20d:3aff:fec4:758/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 3 23:27:44.897797 waagent[2114]: 2025-09-03T23:27:44.897765Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Sep 3 23:27:44.897797 waagent[2114]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 3 23:27:44.897797 
waagent[2114]: pkts bytes target prot opt in out source destination Sep 3 23:27:44.897797 waagent[2114]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 3 23:27:44.897797 waagent[2114]: pkts bytes target prot opt in out source destination Sep 3 23:27:44.897797 waagent[2114]: Chain OUTPUT (policy ACCEPT 4 packets, 406 bytes) Sep 3 23:27:44.897797 waagent[2114]: pkts bytes target prot opt in out source destination Sep 3 23:27:44.897797 waagent[2114]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 3 23:27:44.897797 waagent[2114]: 4 594 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 3 23:27:44.897797 waagent[2114]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 3 23:27:44.900271 waagent[2114]: 2025-09-03T23:27:44.900242Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 3 23:27:44.900271 waagent[2114]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 3 23:27:44.900271 waagent[2114]: pkts bytes target prot opt in out source destination Sep 3 23:27:44.900271 waagent[2114]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 3 23:27:44.900271 waagent[2114]: pkts bytes target prot opt in out source destination Sep 3 23:27:44.900271 waagent[2114]: Chain OUTPUT (policy ACCEPT 4 packets, 406 bytes) Sep 3 23:27:44.900271 waagent[2114]: pkts bytes target prot opt in out source destination Sep 3 23:27:44.900271 waagent[2114]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 3 23:27:44.900271 waagent[2114]: 10 1114 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 3 23:27:44.900271 waagent[2114]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 3 23:27:44.900660 waagent[2114]: 2025-09-03T23:27:44.900637Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 3 23:27:46.411413 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 3 23:27:46.412763 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:27:46.507132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:27:46.511292 (kubelet)[2264]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 3 23:27:46.626158 kubelet[2264]: E0903 23:27:46.626107 2264 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 3 23:27:46.628548 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 3 23:27:46.628645 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 3 23:27:46.629981 systemd[1]: kubelet.service: Consumed 108ms CPU time, 106.7M memory peak. Sep 3 23:27:56.661903 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 3 23:27:56.663219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:27:56.772545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
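The "Created firewall rules for the Azure Fabric" listing above shows the OUTPUT chain of iptables' security table, the same table targeted by the probe that failed earlier: DNS to 168.63.129.16 and root-owned agent traffic are accepted, while other new connections to that address are dropped. A read-only sketch that reprints the chain, assuming root and an iptables binary on the host:

import subprocess

# Equivalent to `iptables -w -t security -L OUTPUT -nxv` (listing only, without the
# --zero flag that made the agent's combined command fail above).
result = subprocess.run(
    ["iptables", "-w", "-t", "security", "-L", "OUTPUT", "-nxv"],
    check=True, capture_output=True, text=True,
)
print(result.stdout)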
Sep 3 23:27:56.774780 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 3 23:27:56.875533 kubelet[2278]: E0903 23:27:56.875494 2278 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 3 23:27:56.877448 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 3 23:27:56.877544 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 3 23:27:56.878087 systemd[1]: kubelet.service: Consumed 98ms CPU time, 105.7M memory peak. Sep 3 23:27:59.009232 chronyd[1868]: Selected source PHC0 Sep 3 23:28:06.911438 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 3 23:28:06.912761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:28:07.000453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:28:07.002674 (kubelet)[2292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 3 23:28:07.120638 kubelet[2292]: E0903 23:28:07.120588 2292 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 3 23:28:07.122538 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 3 23:28:07.122752 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 3 23:28:07.125034 systemd[1]: kubelet.service: Consumed 100ms CPU time, 106.8M memory peak. Sep 3 23:28:08.851613 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 3 23:28:08.852569 systemd[1]: Started sshd@0-10.200.20.24:22-10.200.16.10:52868.service - OpenSSH per-connection server daemon (10.200.16.10:52868). Sep 3 23:28:09.373789 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Sep 3 23:28:09.686919 sshd[2299]: Accepted publickey for core from 10.200.16.10 port 52868 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:28:09.687960 sshd-session[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:28:09.691499 systemd-logind[1864]: New session 3 of user core. Sep 3 23:28:09.706155 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 3 23:28:10.155818 systemd[1]: Started sshd@1-10.200.20.24:22-10.200.16.10:36666.service - OpenSSH per-connection server daemon (10.200.16.10:36666). Sep 3 23:28:10.649008 sshd[2304]: Accepted publickey for core from 10.200.16.10 port 36666 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:28:10.650063 sshd-session[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:28:10.653525 systemd-logind[1864]: New session 4 of user core. Sep 3 23:28:10.672001 systemd[1]: Started session-4.scope - Session 4 of User core. 
Sep 3 23:28:11.010876 sshd[2306]: Connection closed by 10.200.16.10 port 36666 Sep 3 23:28:11.011218 sshd-session[2304]: pam_unix(sshd:session): session closed for user core Sep 3 23:28:11.013606 systemd[1]: sshd@1-10.200.20.24:22-10.200.16.10:36666.service: Deactivated successfully. Sep 3 23:28:11.014687 systemd[1]: session-4.scope: Deactivated successfully. Sep 3 23:28:11.015522 systemd-logind[1864]: Session 4 logged out. Waiting for processes to exit. Sep 3 23:28:11.016428 systemd-logind[1864]: Removed session 4. Sep 3 23:28:11.104014 systemd[1]: Started sshd@2-10.200.20.24:22-10.200.16.10:36678.service - OpenSSH per-connection server daemon (10.200.16.10:36678). Sep 3 23:28:11.584074 sshd[2312]: Accepted publickey for core from 10.200.16.10 port 36678 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:28:11.585079 sshd-session[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:28:11.588814 systemd-logind[1864]: New session 5 of user core. Sep 3 23:28:11.595067 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 3 23:28:11.920744 sshd[2314]: Connection closed by 10.200.16.10 port 36678 Sep 3 23:28:11.921135 sshd-session[2312]: pam_unix(sshd:session): session closed for user core Sep 3 23:28:11.923659 systemd[1]: sshd@2-10.200.20.24:22-10.200.16.10:36678.service: Deactivated successfully. Sep 3 23:28:11.924920 systemd[1]: session-5.scope: Deactivated successfully. Sep 3 23:28:11.925468 systemd-logind[1864]: Session 5 logged out. Waiting for processes to exit. Sep 3 23:28:11.926402 systemd-logind[1864]: Removed session 5. Sep 3 23:28:12.011033 systemd[1]: Started sshd@3-10.200.20.24:22-10.200.16.10:36690.service - OpenSSH per-connection server daemon (10.200.16.10:36690). Sep 3 23:28:12.501846 sshd[2320]: Accepted publickey for core from 10.200.16.10 port 36690 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:28:12.502886 sshd-session[2320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:28:12.506941 systemd-logind[1864]: New session 6 of user core. Sep 3 23:28:12.513035 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 3 23:28:12.852485 sshd[2322]: Connection closed by 10.200.16.10 port 36690 Sep 3 23:28:12.851967 sshd-session[2320]: pam_unix(sshd:session): session closed for user core Sep 3 23:28:12.854331 systemd-logind[1864]: Session 6 logged out. Waiting for processes to exit. Sep 3 23:28:12.854540 systemd[1]: sshd@3-10.200.20.24:22-10.200.16.10:36690.service: Deactivated successfully. Sep 3 23:28:12.855901 systemd[1]: session-6.scope: Deactivated successfully. Sep 3 23:28:12.857610 systemd-logind[1864]: Removed session 6. Sep 3 23:28:12.933183 systemd[1]: Started sshd@4-10.200.20.24:22-10.200.16.10:36700.service - OpenSSH per-connection server daemon (10.200.16.10:36700). Sep 3 23:28:13.387856 sshd[2328]: Accepted publickey for core from 10.200.16.10 port 36700 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:28:13.388789 sshd-session[2328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:28:13.392895 systemd-logind[1864]: New session 7 of user core. Sep 3 23:28:13.399028 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 3 23:28:14.095529 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 3 23:28:14.095744 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 3 23:28:14.139558 sudo[2331]: pam_unix(sudo:session): session closed for user root Sep 3 23:28:14.225065 sshd[2330]: Connection closed by 10.200.16.10 port 36700 Sep 3 23:28:14.225543 sshd-session[2328]: pam_unix(sshd:session): session closed for user core Sep 3 23:28:14.228425 systemd[1]: sshd@4-10.200.20.24:22-10.200.16.10:36700.service: Deactivated successfully. Sep 3 23:28:14.229704 systemd[1]: session-7.scope: Deactivated successfully. Sep 3 23:28:14.230247 systemd-logind[1864]: Session 7 logged out. Waiting for processes to exit. Sep 3 23:28:14.231308 systemd-logind[1864]: Removed session 7. Sep 3 23:28:14.320970 systemd[1]: Started sshd@5-10.200.20.24:22-10.200.16.10:36704.service - OpenSSH per-connection server daemon (10.200.16.10:36704). Sep 3 23:28:14.775177 sshd[2337]: Accepted publickey for core from 10.200.16.10 port 36704 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:28:14.776095 sshd-session[2337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:28:14.779390 systemd-logind[1864]: New session 8 of user core. Sep 3 23:28:14.790019 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 3 23:28:15.031206 sudo[2341]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 3 23:28:15.031397 sudo[2341]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 3 23:28:15.038185 sudo[2341]: pam_unix(sudo:session): session closed for user root Sep 3 23:28:15.041317 sudo[2340]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 3 23:28:15.041496 sudo[2340]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 3 23:28:15.047302 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 3 23:28:15.075020 augenrules[2363]: No rules Sep 3 23:28:15.075979 systemd[1]: audit-rules.service: Deactivated successfully. Sep 3 23:28:15.076260 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 3 23:28:15.077463 sudo[2340]: pam_unix(sudo:session): session closed for user root Sep 3 23:28:15.162934 sshd[2339]: Connection closed by 10.200.16.10 port 36704 Sep 3 23:28:15.163263 sshd-session[2337]: pam_unix(sshd:session): session closed for user core Sep 3 23:28:15.165799 systemd[1]: sshd@5-10.200.20.24:22-10.200.16.10:36704.service: Deactivated successfully. Sep 3 23:28:15.166862 systemd[1]: session-8.scope: Deactivated successfully. Sep 3 23:28:15.168214 systemd-logind[1864]: Session 8 logged out. Waiting for processes to exit. Sep 3 23:28:15.169417 systemd-logind[1864]: Removed session 8. Sep 3 23:28:15.249795 systemd[1]: Started sshd@6-10.200.20.24:22-10.200.16.10:36712.service - OpenSSH per-connection server daemon (10.200.16.10:36712). Sep 3 23:28:15.743042 sshd[2372]: Accepted publickey for core from 10.200.16.10 port 36712 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:28:15.743981 sshd-session[2372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:28:15.747179 systemd-logind[1864]: New session 9 of user core. Sep 3 23:28:15.758012 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 3 23:28:16.017506 sudo[2375]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 3 23:28:16.017720 sudo[2375]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 3 23:28:17.161494 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 3 23:28:17.162683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:28:17.729645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:28:17.731880 (kubelet)[2395]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 3 23:28:17.756581 kubelet[2395]: E0903 23:28:17.756532 2395 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 3 23:28:17.758423 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 3 23:28:17.758581 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 3 23:28:17.759039 systemd[1]: kubelet.service: Consumed 98ms CPU time, 104.9M memory peak. Sep 3 23:28:18.662092 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 3 23:28:18.677273 (dockerd)[2407]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 3 23:28:20.221545 dockerd[2407]: time="2025-09-03T23:28:20.221285982Z" level=info msg="Starting up" Sep 3 23:28:20.222766 dockerd[2407]: time="2025-09-03T23:28:20.222711652Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 3 23:28:20.290126 dockerd[2407]: time="2025-09-03T23:28:20.290074532Z" level=info msg="Loading containers: start." Sep 3 23:28:20.446932 kernel: Initializing XFRM netlink socket Sep 3 23:28:20.763107 update_engine[1866]: I20250903 23:28:20.763057 1866 update_attempter.cc:509] Updating boot flags... Sep 3 23:28:21.265088 systemd-networkd[1700]: docker0: Link UP Sep 3 23:28:21.299026 dockerd[2407]: time="2025-09-03T23:28:21.298991788Z" level=info msg="Loading containers: done." Sep 3 23:28:21.307831 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2476753259-merged.mount: Deactivated successfully. Sep 3 23:28:21.319489 dockerd[2407]: time="2025-09-03T23:28:21.319461517Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 3 23:28:21.319574 dockerd[2407]: time="2025-09-03T23:28:21.319521880Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 3 23:28:21.319620 dockerd[2407]: time="2025-09-03T23:28:21.319605964Z" level=info msg="Initializing buildkit" Sep 3 23:28:21.364094 dockerd[2407]: time="2025-09-03T23:28:21.364025207Z" level=info msg="Completed buildkit initialization" Sep 3 23:28:21.369609 dockerd[2407]: time="2025-09-03T23:28:21.369574232Z" level=info msg="Daemon has completed initialization" Sep 3 23:28:21.369783 systemd[1]: Started docker.service - Docker Application Container Engine. 
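Once docker.service is up, the engine serves its API on the default Unix socket /run/docker.sock (the docker.socket unit was already rewritten to that path earlier in the log). A standard-library sketch that queries the daemon version, assuming permission to read the socket:

import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection variant that connects to a local Unix socket instead of TCP."""

    def __init__(self, path: str):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.unix_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/version")
print(conn.getresponse().read().decode())  # JSON; the daemon above reports version 28.0.1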
Sep 3 23:28:21.371139 dockerd[2407]: time="2025-09-03T23:28:21.369687909Z" level=info msg="API listen on /run/docker.sock" Sep 3 23:28:22.215867 containerd[1905]: time="2025-09-03T23:28:22.215830614Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 3 23:28:23.319119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2082864643.mount: Deactivated successfully. Sep 3 23:28:24.283948 containerd[1905]: time="2025-09-03T23:28:24.283689422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:28:24.286586 containerd[1905]: time="2025-09-03T23:28:24.286552650Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328357" Sep 3 23:28:24.289797 containerd[1905]: time="2025-09-03T23:28:24.289767594Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:28:24.294055 containerd[1905]: time="2025-09-03T23:28:24.294008113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:28:24.295940 containerd[1905]: time="2025-09-03T23:28:24.295619685Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 2.07975175s" Sep 3 23:28:24.295940 containerd[1905]: time="2025-09-03T23:28:24.295647999Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\"" Sep 3 23:28:24.296972 containerd[1905]: time="2025-09-03T23:28:24.296954743Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 3 23:28:25.611949 containerd[1905]: time="2025-09-03T23:28:25.611692537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:28:25.614819 containerd[1905]: time="2025-09-03T23:28:25.614795509Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528552" Sep 3 23:28:25.617844 containerd[1905]: time="2025-09-03T23:28:25.617823151Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:28:25.622536 containerd[1905]: time="2025-09-03T23:28:25.622511263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:28:25.623041 containerd[1905]: time="2025-09-03T23:28:25.622825202Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.325785256s" Sep 3 23:28:25.623041 containerd[1905]: time="2025-09-03T23:28:25.622847963Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\"" Sep 3 23:28:25.623249 containerd[1905]: time="2025-09-03T23:28:25.623230250Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 3 23:28:27.002350 containerd[1905]: time="2025-09-03T23:28:27.002248429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:28:27.005454 containerd[1905]: time="2025-09-03T23:28:27.005423996Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483527" Sep 3 23:28:27.009564 containerd[1905]: time="2025-09-03T23:28:27.009516854Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:28:27.014092 containerd[1905]: time="2025-09-03T23:28:27.014055104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:28:27.014678 containerd[1905]: time="2025-09-03T23:28:27.014585164Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.391333233s" Sep 3 23:28:27.014678 containerd[1905]: time="2025-09-03T23:28:27.014610285Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\"" Sep 3 23:28:27.015084 containerd[1905]: time="2025-09-03T23:28:27.015063494Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 3 23:28:27.890515 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 3 23:28:27.893155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:28:27.987033 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:28:27.994429 (kubelet)[2741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 3 23:28:28.101570 kubelet[2741]: E0903 23:28:28.101073 2741 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 3 23:28:28.103777 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 3 23:28:28.104003 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 3 23:28:28.105989 systemd[1]: kubelet.service: Consumed 102ms CPU time, 104.6M memory peak. 
Sep 3 23:28:28.144222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1831041513.mount: Deactivated successfully. Sep 3 23:28:28.753098 containerd[1905]: time="2025-09-03T23:28:28.753052743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:28:28.756544 containerd[1905]: time="2025-09-03T23:28:28.756514401Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376724" Sep 3 23:28:28.760090 containerd[1905]: time="2025-09-03T23:28:28.760057054Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:28:28.763726 containerd[1905]: time="2025-09-03T23:28:28.763691966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:28:28.764038 containerd[1905]: time="2025-09-03T23:28:28.763923239Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.748836889s" Sep 3 23:28:28.764038 containerd[1905]: time="2025-09-03T23:28:28.763955696Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\"" Sep 3 23:28:28.764398 containerd[1905]: time="2025-09-03T23:28:28.764373616Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 3 23:28:29.528732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1738916843.mount: Deactivated successfully. 
Sep 3 23:28:31.090649 containerd[1905]: time="2025-09-03T23:28:31.090304552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:28:31.099004 containerd[1905]: time="2025-09-03T23:28:31.098978859Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Sep 3 23:28:31.102933 containerd[1905]: time="2025-09-03T23:28:31.102314580Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:28:31.106544 containerd[1905]: time="2025-09-03T23:28:31.106509896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:28:31.107183 containerd[1905]: time="2025-09-03T23:28:31.107085591Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.342397684s" Sep 3 23:28:31.107183 containerd[1905]: time="2025-09-03T23:28:31.107111208Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 3 23:28:31.107792 containerd[1905]: time="2025-09-03T23:28:31.107691952Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 3 23:28:31.670045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4130352666.mount: Deactivated successfully. 
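(Aside: the mount units being cleaned up above, such as "var-lib-containerd-tmpmounts-containerd\x2dmount1738916843.mount", are systemd-escaped paths: "/" becomes "-" and a literal "-" becomes "\x2d". A small decoder that turns such unit names back into paths — a sketch of the systemd-escape convention, not a call into systemd itself:)

def unescape_unit_path(unit: str) -> str:
    """Decode a systemd-escaped mount unit name back into a filesystem path.

    Example: "var-lib-containerd-tmpmounts-containerd\\x2dmount1738916843.mount"
             -> "/var/lib/containerd/tmpmounts/containerd-mount1738916843"
    """
    name = unit.rsplit(".", 1)[0]  # strip the ".mount" suffix
    out, i = [], 0
    while i < len(name):
        if name.startswith("\\x", i) and i + 4 <= len(name):
            out.append(chr(int(name[i + 2:i + 4], 16)))  # "\x2d" decodes to "-"
            i += 4
        elif name[i] == "-":
            out.append("/")  # an unescaped "-" separates path components
            i += 1
        else:
            out.append(name[i])
            i += 1
    return "/" + "".join(out)

if __name__ == "__main__":
    print(unescape_unit_path("var-lib-containerd-tmpmounts-containerd\\x2dmount1738916843.mount"))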
Sep 3 23:28:31.692437 containerd[1905]: time="2025-09-03T23:28:31.692388667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 3 23:28:31.695498 containerd[1905]: time="2025-09-03T23:28:31.695474265Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 3 23:28:31.698606 containerd[1905]: time="2025-09-03T23:28:31.698583945Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 3 23:28:31.702910 containerd[1905]: time="2025-09-03T23:28:31.702884209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 3 23:28:31.703395 containerd[1905]: time="2025-09-03T23:28:31.703252712Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 595.527895ms" Sep 3 23:28:31.703395 containerd[1905]: time="2025-09-03T23:28:31.703276777Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 3 23:28:31.703647 containerd[1905]: time="2025-09-03T23:28:31.703630007Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 3 23:28:32.335236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3354980886.mount: Deactivated successfully. 
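(Aside: every entry in this dump carries a journal-style prefix such as "Sep 3 23:28:31.703395", and the containerd/dockerd messages additionally embed an RFC 3339 timestamp. A sketch for parsing the journal prefix so gaps between steps can be measured, e.g. the ~2.8 s etcd pull that follows; the year is not part of the prefix, so it is assumed here from the embedded timestamps:)

import re
from datetime import datetime

# Journal-style prefix used throughout this log, e.g. "Sep 3 23:28:31.703395".
STAMP_RE = re.compile(r"\b([A-Z][a-z]{2})\s+(\d{1,2})\s+(\d{2}:\d{2}:\d{2}\.\d{6})\b")
ASSUMED_YEAR = 2025  # not in the prefix; taken from the containerd timestamps above

def journal_timestamps(log_text: str):
    """Yield each journal prefix in the dump as a datetime (with the assumed year)."""
    for mon, day, clock in STAMP_RE.findall(log_text):
        try:
            yield datetime.strptime(f"{ASSUMED_YEAR} {mon} {day} {clock}",
                                    "%Y %b %d %H:%M:%S.%f")
        except ValueError:
            continue  # three-letter token that is not a month name

def longest_pause(log_text: str) -> float:
    """Return the longest gap, in seconds, between consecutive entries."""
    stamps = list(journal_timestamps(log_text))
    return max(((b - a).total_seconds() for a, b in zip(stamps, stamps[1:])), default=0.0)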
Sep 3 23:28:34.535931 containerd[1905]: time="2025-09-03T23:28:34.535798836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:28:34.539375 containerd[1905]: time="2025-09-03T23:28:34.539348742Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Sep 3 23:28:34.542421 containerd[1905]: time="2025-09-03T23:28:34.542389298Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:28:34.546648 containerd[1905]: time="2025-09-03T23:28:34.546615567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:28:34.547875 containerd[1905]: time="2025-09-03T23:28:34.547774527Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.844073413s" Sep 3 23:28:34.547875 containerd[1905]: time="2025-09-03T23:28:34.547800320Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 3 23:28:37.495553 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:28:37.495964 systemd[1]: kubelet.service: Consumed 102ms CPU time, 104.6M memory peak. Sep 3 23:28:37.497618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:28:37.516652 systemd[1]: Reload requested from client PID 2891 ('systemctl') (unit session-9.scope)... Sep 3 23:28:37.516748 systemd[1]: Reloading... Sep 3 23:28:37.609932 zram_generator::config[2934]: No configuration found. Sep 3 23:28:37.677438 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 3 23:28:37.759334 systemd[1]: Reloading finished in 242 ms. Sep 3 23:28:37.799259 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 3 23:28:37.799414 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 3 23:28:37.799678 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:28:37.799785 systemd[1]: kubelet.service: Consumed 71ms CPU time, 95M memory peak. Sep 3 23:28:37.800887 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:28:38.057933 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:28:38.060848 (kubelet)[3004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 3 23:28:38.215932 kubelet[3004]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 3 23:28:38.215932 kubelet[3004]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Sep 3 23:28:38.215932 kubelet[3004]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 3 23:28:38.215932 kubelet[3004]: I0903 23:28:38.215713 3004 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 3 23:28:38.589936 kubelet[3004]: I0903 23:28:38.588889 3004 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 3 23:28:38.589936 kubelet[3004]: I0903 23:28:38.588923 3004 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 3 23:28:38.589936 kubelet[3004]: I0903 23:28:38.589214 3004 server.go:954] "Client rotation is on, will bootstrap in background" Sep 3 23:28:38.608872 kubelet[3004]: E0903 23:28:38.608852 3004 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Sep 3 23:28:38.610196 kubelet[3004]: I0903 23:28:38.610178 3004 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 3 23:28:38.614651 kubelet[3004]: I0903 23:28:38.614638 3004 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 3 23:28:38.617132 kubelet[3004]: I0903 23:28:38.617118 3004 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 3 23:28:38.617796 kubelet[3004]: I0903 23:28:38.617768 3004 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 3 23:28:38.618001 kubelet[3004]: I0903 23:28:38.617857 3004 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.1.0-n-e4e1aff60f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 3 23:28:38.618135 kubelet[3004]: I0903 23:28:38.618124 3004 topology_manager.go:138] "Creating topology manager with none policy" Sep 3 23:28:38.618179 kubelet[3004]: I0903 23:28:38.618172 3004 container_manager_linux.go:304] "Creating device plugin manager" Sep 3 23:28:38.618312 kubelet[3004]: I0903 23:28:38.618301 3004 state_mem.go:36] "Initialized new in-memory state store" Sep 3 23:28:38.620731 kubelet[3004]: I0903 23:28:38.620717 3004 kubelet.go:446] "Attempting to sync node with API server" Sep 3 23:28:38.620812 kubelet[3004]: I0903 23:28:38.620803 3004 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 3 23:28:38.620871 kubelet[3004]: I0903 23:28:38.620864 3004 kubelet.go:352] "Adding apiserver pod source" Sep 3 23:28:38.620937 kubelet[3004]: I0903 23:28:38.620929 3004 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 3 23:28:38.624699 kubelet[3004]: W0903 23:28:38.624660 3004 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-n-e4e1aff60f&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Sep 3 23:28:38.624699 kubelet[3004]: E0903 23:28:38.624701 3004 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-n-e4e1aff60f&limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Sep 3 23:28:38.625516 kubelet[3004]: 
W0903 23:28:38.625471 3004 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Sep 3 23:28:38.625516 kubelet[3004]: E0903 23:28:38.625515 3004 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Sep 3 23:28:38.626070 kubelet[3004]: I0903 23:28:38.625569 3004 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 3 23:28:38.626070 kubelet[3004]: I0903 23:28:38.625833 3004 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 3 23:28:38.626070 kubelet[3004]: W0903 23:28:38.625886 3004 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 3 23:28:38.626944 kubelet[3004]: I0903 23:28:38.626929 3004 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 3 23:28:38.627132 kubelet[3004]: I0903 23:28:38.627057 3004 server.go:1287] "Started kubelet" Sep 3 23:28:38.630877 kubelet[3004]: E0903 23:28:38.630796 3004 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.24:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.24:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.1.0-n-e4e1aff60f.1861e98c57ae5be2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.1.0-n-e4e1aff60f,UID:ci-4372.1.0-n-e4e1aff60f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.1.0-n-e4e1aff60f,},FirstTimestamp:2025-09-03 23:28:38.626941922 +0000 UTC m=+0.563264711,LastTimestamp:2025-09-03 23:28:38.626941922 +0000 UTC m=+0.563264711,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.1.0-n-e4e1aff60f,}" Sep 3 23:28:38.631727 kubelet[3004]: I0903 23:28:38.631704 3004 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 3 23:28:38.634677 kubelet[3004]: I0903 23:28:38.634634 3004 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 3 23:28:38.635843 kubelet[3004]: I0903 23:28:38.635791 3004 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 3 23:28:38.636057 kubelet[3004]: I0903 23:28:38.636041 3004 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 3 23:28:38.636208 kubelet[3004]: I0903 23:28:38.636194 3004 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 3 23:28:38.636969 kubelet[3004]: I0903 23:28:38.636956 3004 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 3 23:28:38.637100 kubelet[3004]: E0903 23:28:38.637086 3004 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.1.0-n-e4e1aff60f\" not found" Sep 3 23:28:38.637926 
kubelet[3004]: I0903 23:28:38.637742 3004 server.go:479] "Adding debug handlers to kubelet server" Sep 3 23:28:38.638743 kubelet[3004]: I0903 23:28:38.638723 3004 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 3 23:28:38.638799 kubelet[3004]: I0903 23:28:38.638757 3004 reconciler.go:26] "Reconciler: start to sync state" Sep 3 23:28:38.639291 kubelet[3004]: W0903 23:28:38.639259 3004 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Sep 3 23:28:38.639291 kubelet[3004]: E0903 23:28:38.639290 3004 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Sep 3 23:28:38.639454 kubelet[3004]: E0903 23:28:38.639424 3004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-e4e1aff60f?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="200ms" Sep 3 23:28:38.639585 kubelet[3004]: I0903 23:28:38.639573 3004 factory.go:221] Registration of the systemd container factory successfully Sep 3 23:28:38.639699 kubelet[3004]: I0903 23:28:38.639622 3004 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 3 23:28:38.640218 kubelet[3004]: E0903 23:28:38.640197 3004 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 3 23:28:38.640459 kubelet[3004]: I0903 23:28:38.640444 3004 factory.go:221] Registration of the containerd container factory successfully Sep 3 23:28:38.656410 kubelet[3004]: I0903 23:28:38.656393 3004 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 3 23:28:38.656410 kubelet[3004]: I0903 23:28:38.656405 3004 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 3 23:28:38.656492 kubelet[3004]: I0903 23:28:38.656418 3004 state_mem.go:36] "Initialized new in-memory state store" Sep 3 23:28:38.662205 kubelet[3004]: I0903 23:28:38.662189 3004 policy_none.go:49] "None policy: Start" Sep 3 23:28:38.662205 kubelet[3004]: I0903 23:28:38.662204 3004 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 3 23:28:38.662274 kubelet[3004]: I0903 23:28:38.662212 3004 state_mem.go:35] "Initializing new in-memory state store" Sep 3 23:28:38.670452 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 3 23:28:38.682684 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 3 23:28:38.685486 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
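(Aside: from here until the static control-plane pods are actually running, every list/watch, lease renewal, event post, and node registration fails with "dial tcp 10.200.20.24:6443: connect: connection refused" — this kubelet starts before the kube-apiserver it is about to launch from /etc/kubernetes/manifests. A minimal reachability probe, with the endpoint taken from those error messages:)

import socket

# API server endpoint as it appears in the connection-refused errors above.
APISERVER = ("10.200.20.24", 6443)

def apiserver_reachable(addr=APISERVER, timeout=2.0) -> bool:
    """Return True once a TCP connection to the API server port succeeds.

    While the static kube-apiserver pod is still being created, the connect
    fails with ECONNREFUSED and this returns False, matching the log above.
    """
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("reachable" if apiserver_reachable() else "not reachable (connection refused or timeout)")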
Sep 3 23:28:38.703398 kubelet[3004]: I0903 23:28:38.703381 3004 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 3 23:28:38.703398 kubelet[3004]: I0903 23:28:38.703636 3004 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 3 23:28:38.703724 kubelet[3004]: I0903 23:28:38.703656 3004 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 3 23:28:38.703841 kubelet[3004]: I0903 23:28:38.703827 3004 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 3 23:28:38.705958 kubelet[3004]: E0903 23:28:38.705569 3004 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 3 23:28:38.705958 kubelet[3004]: E0903 23:28:38.705827 3004 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.1.0-n-e4e1aff60f\" not found" Sep 3 23:28:38.774077 kubelet[3004]: I0903 23:28:38.774049 3004 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 3 23:28:38.775505 kubelet[3004]: I0903 23:28:38.775479 3004 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 3 23:28:38.775505 kubelet[3004]: I0903 23:28:38.775501 3004 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 3 23:28:38.775697 kubelet[3004]: I0903 23:28:38.775515 3004 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 3 23:28:38.775697 kubelet[3004]: I0903 23:28:38.775521 3004 kubelet.go:2382] "Starting kubelet main sync loop" Sep 3 23:28:38.775697 kubelet[3004]: E0903 23:28:38.775561 3004 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 3 23:28:38.777340 kubelet[3004]: W0903 23:28:38.777277 3004 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Sep 3 23:28:38.777340 kubelet[3004]: E0903 23:28:38.777310 3004 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Sep 3 23:28:38.805169 kubelet[3004]: I0903 23:28:38.804992 3004 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:38.805243 kubelet[3004]: E0903 23:28:38.805215 3004 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:38.839663 kubelet[3004]: E0903 23:28:38.839634 3004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-e4e1aff60f?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="400ms" Sep 3 23:28:38.882461 systemd[1]: Created slice kubepods-burstable-pod66e185b9b6262122b4eb0e6bf804e535.slice - libcontainer container 
kubepods-burstable-pod66e185b9b6262122b4eb0e6bf804e535.slice. Sep 3 23:28:38.891902 kubelet[3004]: E0903 23:28:38.891882 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-e4e1aff60f\" not found" node="ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:38.893626 systemd[1]: Created slice kubepods-burstable-podacaf686ae1cc065812528c32da9e0979.slice - libcontainer container kubepods-burstable-podacaf686ae1cc065812528c32da9e0979.slice. Sep 3 23:28:38.911871 kubelet[3004]: E0903 23:28:38.911851 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-e4e1aff60f\" not found" node="ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:38.912847 systemd[1]: Created slice kubepods-burstable-pod4b7ebffaad841f4d6fc4a47690a9500d.slice - libcontainer container kubepods-burstable-pod4b7ebffaad841f4d6fc4a47690a9500d.slice. Sep 3 23:28:38.914342 kubelet[3004]: E0903 23:28:38.914212 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-e4e1aff60f\" not found" node="ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:38.940447 kubelet[3004]: I0903 23:28:38.940426 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/acaf686ae1cc065812528c32da9e0979-ca-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-e4e1aff60f\" (UID: \"acaf686ae1cc065812528c32da9e0979\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:38.940575 kubelet[3004]: I0903 23:28:38.940451 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/acaf686ae1cc065812528c32da9e0979-k8s-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-e4e1aff60f\" (UID: \"acaf686ae1cc065812528c32da9e0979\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:38.940575 kubelet[3004]: I0903 23:28:38.940464 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/acaf686ae1cc065812528c32da9e0979-kubeconfig\") pod \"kube-controller-manager-ci-4372.1.0-n-e4e1aff60f\" (UID: \"acaf686ae1cc065812528c32da9e0979\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:38.940575 kubelet[3004]: I0903 23:28:38.940474 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b7ebffaad841f4d6fc4a47690a9500d-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-n-e4e1aff60f\" (UID: \"4b7ebffaad841f4d6fc4a47690a9500d\") " pod="kube-system/kube-scheduler-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:38.940575 kubelet[3004]: I0903 23:28:38.940484 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e185b9b6262122b4eb0e6bf804e535-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-n-e4e1aff60f\" (UID: \"66e185b9b6262122b4eb0e6bf804e535\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:38.940575 kubelet[3004]: I0903 23:28:38.940494 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/66e185b9b6262122b4eb0e6bf804e535-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.1.0-n-e4e1aff60f\" (UID: \"66e185b9b6262122b4eb0e6bf804e535\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:38.940663 kubelet[3004]: I0903 23:28:38.940505 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/acaf686ae1cc065812528c32da9e0979-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.1.0-n-e4e1aff60f\" (UID: \"acaf686ae1cc065812528c32da9e0979\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:38.940663 kubelet[3004]: I0903 23:28:38.940515 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/acaf686ae1cc065812528c32da9e0979-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-n-e4e1aff60f\" (UID: \"acaf686ae1cc065812528c32da9e0979\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:38.940663 kubelet[3004]: I0903 23:28:38.940525 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e185b9b6262122b4eb0e6bf804e535-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-n-e4e1aff60f\" (UID: \"66e185b9b6262122b4eb0e6bf804e535\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:39.006605 kubelet[3004]: I0903 23:28:39.006585 3004 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:39.006872 kubelet[3004]: E0903 23:28:39.006846 3004 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:39.196236 containerd[1905]: time="2025-09-03T23:28:39.196162349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-n-e4e1aff60f,Uid:66e185b9b6262122b4eb0e6bf804e535,Namespace:kube-system,Attempt:0,}" Sep 3 23:28:39.212542 containerd[1905]: time="2025-09-03T23:28:39.212520282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-n-e4e1aff60f,Uid:acaf686ae1cc065812528c32da9e0979,Namespace:kube-system,Attempt:0,}" Sep 3 23:28:39.215140 containerd[1905]: time="2025-09-03T23:28:39.215110775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-n-e4e1aff60f,Uid:4b7ebffaad841f4d6fc4a47690a9500d,Namespace:kube-system,Attempt:0,}" Sep 3 23:28:39.240927 kubelet[3004]: E0903 23:28:39.240872 3004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-e4e1aff60f?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="800ms" Sep 3 23:28:39.408934 kubelet[3004]: I0903 23:28:39.408810 3004 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:39.409150 kubelet[3004]: E0903 23:28:39.409126 3004 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:39.504883 kubelet[3004]: W0903 
23:28:39.504783 3004 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Sep 3 23:28:39.504883 kubelet[3004]: E0903 23:28:39.504836 3004 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Sep 3 23:28:39.680740 kubelet[3004]: W0903 23:28:39.680679 3004 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Sep 3 23:28:39.680740 kubelet[3004]: E0903 23:28:39.680740 3004 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Sep 3 23:28:39.935057 kubelet[3004]: W0903 23:28:39.934979 3004 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-n-e4e1aff60f&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Sep 3 23:28:39.935057 kubelet[3004]: E0903 23:28:39.935034 3004 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-n-e4e1aff60f&limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Sep 3 23:28:40.041175 kubelet[3004]: E0903 23:28:40.041145 3004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-e4e1aff60f?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="1.6s" Sep 3 23:28:40.084643 kubelet[3004]: W0903 23:28:40.084578 3004 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Sep 3 23:28:40.084643 kubelet[3004]: E0903 23:28:40.084623 3004 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Sep 3 23:28:40.211046 kubelet[3004]: I0903 23:28:40.210635 3004 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:40.211046 kubelet[3004]: E0903 23:28:40.210928 3004 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: 
connection refused" node="ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:40.337673 containerd[1905]: time="2025-09-03T23:28:40.337638780Z" level=info msg="connecting to shim 9c184a6e73c781b5939132f0e833af4e66ba244231dccaba524344edbe1ecaf5" address="unix:///run/containerd/s/8dbe3e0c124954d1b96c5b0627780dd02b6e29404beb78eff97d4e4293739573" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:28:40.362225 containerd[1905]: time="2025-09-03T23:28:40.362193105Z" level=info msg="connecting to shim f5a2508648df110a7fe4a89e6dad1737b095bc95f0ef7ae0794bfdf2b228416c" address="unix:///run/containerd/s/2c340183a90135adfe28dae2838bc4bf96722d92c68b3b9a1385632723946e35" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:28:40.363123 systemd[1]: Started cri-containerd-9c184a6e73c781b5939132f0e833af4e66ba244231dccaba524344edbe1ecaf5.scope - libcontainer container 9c184a6e73c781b5939132f0e833af4e66ba244231dccaba524344edbe1ecaf5. Sep 3 23:28:40.363893 containerd[1905]: time="2025-09-03T23:28:40.363868061Z" level=info msg="connecting to shim 4d9f147db46d7de99e994d2c8cb96a64bd06a4d10d9da5e1a6cca7a490232b7f" address="unix:///run/containerd/s/88d28be2701b94db3cbfd7da4e58e8a3c682e1bfd8124f49fe7e3910e287082f" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:28:40.389021 systemd[1]: Started cri-containerd-f5a2508648df110a7fe4a89e6dad1737b095bc95f0ef7ae0794bfdf2b228416c.scope - libcontainer container f5a2508648df110a7fe4a89e6dad1737b095bc95f0ef7ae0794bfdf2b228416c. Sep 3 23:28:40.392048 systemd[1]: Started cri-containerd-4d9f147db46d7de99e994d2c8cb96a64bd06a4d10d9da5e1a6cca7a490232b7f.scope - libcontainer container 4d9f147db46d7de99e994d2c8cb96a64bd06a4d10d9da5e1a6cca7a490232b7f. Sep 3 23:28:40.413628 containerd[1905]: time="2025-09-03T23:28:40.413521811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-n-e4e1aff60f,Uid:66e185b9b6262122b4eb0e6bf804e535,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c184a6e73c781b5939132f0e833af4e66ba244231dccaba524344edbe1ecaf5\"" Sep 3 23:28:40.417182 containerd[1905]: time="2025-09-03T23:28:40.417156382Z" level=info msg="CreateContainer within sandbox \"9c184a6e73c781b5939132f0e833af4e66ba244231dccaba524344edbe1ecaf5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 3 23:28:40.439381 containerd[1905]: time="2025-09-03T23:28:40.439347614Z" level=info msg="Container 4e6659ca0b27704c6eb81c129fb452c2f5199b3849a87453a441a3a0f6d27ec8: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:28:40.441701 containerd[1905]: time="2025-09-03T23:28:40.441667274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-n-e4e1aff60f,Uid:4b7ebffaad841f4d6fc4a47690a9500d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d9f147db46d7de99e994d2c8cb96a64bd06a4d10d9da5e1a6cca7a490232b7f\"" Sep 3 23:28:40.443372 containerd[1905]: time="2025-09-03T23:28:40.443342558Z" level=info msg="CreateContainer within sandbox \"4d9f147db46d7de99e994d2c8cb96a64bd06a4d10d9da5e1a6cca7a490232b7f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 3 23:28:40.445809 containerd[1905]: time="2025-09-03T23:28:40.445779446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-n-e4e1aff60f,Uid:acaf686ae1cc065812528c32da9e0979,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5a2508648df110a7fe4a89e6dad1737b095bc95f0ef7ae0794bfdf2b228416c\"" Sep 3 23:28:40.447537 containerd[1905]: time="2025-09-03T23:28:40.447514572Z" level=info msg="CreateContainer within sandbox 
\"f5a2508648df110a7fe4a89e6dad1737b095bc95f0ef7ae0794bfdf2b228416c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 3 23:28:40.463172 containerd[1905]: time="2025-09-03T23:28:40.462852117Z" level=info msg="CreateContainer within sandbox \"9c184a6e73c781b5939132f0e833af4e66ba244231dccaba524344edbe1ecaf5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4e6659ca0b27704c6eb81c129fb452c2f5199b3849a87453a441a3a0f6d27ec8\"" Sep 3 23:28:40.463637 containerd[1905]: time="2025-09-03T23:28:40.463623297Z" level=info msg="StartContainer for \"4e6659ca0b27704c6eb81c129fb452c2f5199b3849a87453a441a3a0f6d27ec8\"" Sep 3 23:28:40.464805 containerd[1905]: time="2025-09-03T23:28:40.464787059Z" level=info msg="connecting to shim 4e6659ca0b27704c6eb81c129fb452c2f5199b3849a87453a441a3a0f6d27ec8" address="unix:///run/containerd/s/8dbe3e0c124954d1b96c5b0627780dd02b6e29404beb78eff97d4e4293739573" protocol=ttrpc version=3 Sep 3 23:28:40.483006 systemd[1]: Started cri-containerd-4e6659ca0b27704c6eb81c129fb452c2f5199b3849a87453a441a3a0f6d27ec8.scope - libcontainer container 4e6659ca0b27704c6eb81c129fb452c2f5199b3849a87453a441a3a0f6d27ec8. Sep 3 23:28:40.488153 containerd[1905]: time="2025-09-03T23:28:40.488133436Z" level=info msg="Container 6f6c3c7831f3162249dd16f05f335475d43ce250a579d848ea062beb7fde71d8: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:28:40.495927 containerd[1905]: time="2025-09-03T23:28:40.495880332Z" level=info msg="Container 6d49b80055c0441680d9b497c39fc96ac8921e587729223ee23f6390e0e8f28d: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:28:40.518555 containerd[1905]: time="2025-09-03T23:28:40.518532492Z" level=info msg="StartContainer for \"4e6659ca0b27704c6eb81c129fb452c2f5199b3849a87453a441a3a0f6d27ec8\" returns successfully" Sep 3 23:28:40.523647 containerd[1905]: time="2025-09-03T23:28:40.523551145Z" level=info msg="CreateContainer within sandbox \"4d9f147db46d7de99e994d2c8cb96a64bd06a4d10d9da5e1a6cca7a490232b7f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6f6c3c7831f3162249dd16f05f335475d43ce250a579d848ea062beb7fde71d8\"" Sep 3 23:28:40.524135 containerd[1905]: time="2025-09-03T23:28:40.524107981Z" level=info msg="StartContainer for \"6f6c3c7831f3162249dd16f05f335475d43ce250a579d848ea062beb7fde71d8\"" Sep 3 23:28:40.526918 containerd[1905]: time="2025-09-03T23:28:40.526175896Z" level=info msg="CreateContainer within sandbox \"f5a2508648df110a7fe4a89e6dad1737b095bc95f0ef7ae0794bfdf2b228416c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6d49b80055c0441680d9b497c39fc96ac8921e587729223ee23f6390e0e8f28d\"" Sep 3 23:28:40.526918 containerd[1905]: time="2025-09-03T23:28:40.526596623Z" level=info msg="connecting to shim 6f6c3c7831f3162249dd16f05f335475d43ce250a579d848ea062beb7fde71d8" address="unix:///run/containerd/s/88d28be2701b94db3cbfd7da4e58e8a3c682e1bfd8124f49fe7e3910e287082f" protocol=ttrpc version=3 Sep 3 23:28:40.528412 containerd[1905]: time="2025-09-03T23:28:40.528139782Z" level=info msg="StartContainer for \"6d49b80055c0441680d9b497c39fc96ac8921e587729223ee23f6390e0e8f28d\"" Sep 3 23:28:40.532299 containerd[1905]: time="2025-09-03T23:28:40.532075780Z" level=info msg="connecting to shim 6d49b80055c0441680d9b497c39fc96ac8921e587729223ee23f6390e0e8f28d" address="unix:///run/containerd/s/2c340183a90135adfe28dae2838bc4bf96722d92c68b3b9a1385632723946e35" protocol=ttrpc version=3 Sep 3 23:28:40.551031 systemd[1]: Started 
cri-containerd-6d49b80055c0441680d9b497c39fc96ac8921e587729223ee23f6390e0e8f28d.scope - libcontainer container 6d49b80055c0441680d9b497c39fc96ac8921e587729223ee23f6390e0e8f28d. Sep 3 23:28:40.554710 systemd[1]: Started cri-containerd-6f6c3c7831f3162249dd16f05f335475d43ce250a579d848ea062beb7fde71d8.scope - libcontainer container 6f6c3c7831f3162249dd16f05f335475d43ce250a579d848ea062beb7fde71d8. Sep 3 23:28:40.597446 containerd[1905]: time="2025-09-03T23:28:40.597413456Z" level=info msg="StartContainer for \"6d49b80055c0441680d9b497c39fc96ac8921e587729223ee23f6390e0e8f28d\" returns successfully" Sep 3 23:28:40.601729 containerd[1905]: time="2025-09-03T23:28:40.601705866Z" level=info msg="StartContainer for \"6f6c3c7831f3162249dd16f05f335475d43ce250a579d848ea062beb7fde71d8\" returns successfully" Sep 3 23:28:40.784700 kubelet[3004]: E0903 23:28:40.784312 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-e4e1aff60f\" not found" node="ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:40.788390 kubelet[3004]: E0903 23:28:40.788374 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-e4e1aff60f\" not found" node="ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:40.789583 kubelet[3004]: E0903 23:28:40.789567 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-e4e1aff60f\" not found" node="ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:41.759274 kubelet[3004]: E0903 23:28:41.759182 3004 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.1.0-n-e4e1aff60f\" not found" node="ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:41.791797 kubelet[3004]: E0903 23:28:41.791768 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-e4e1aff60f\" not found" node="ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:41.792393 kubelet[3004]: E0903 23:28:41.792194 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-e4e1aff60f\" not found" node="ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:41.812056 kubelet[3004]: I0903 23:28:41.812037 3004 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:41.853935 kubelet[3004]: E0903 23:28:41.853902 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-e4e1aff60f\" not found" node="ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:41.913464 kubelet[3004]: I0903 23:28:41.913434 3004 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:41.913464 kubelet[3004]: E0903 23:28:41.913457 3004 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4372.1.0-n-e4e1aff60f\": node \"ci-4372.1.0-n-e4e1aff60f\" not found" Sep 3 23:28:42.099444 kubelet[3004]: E0903 23:28:42.099113 3004 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.1.0-n-e4e1aff60f\" not found" Sep 3 23:28:42.199759 kubelet[3004]: E0903 23:28:42.199734 3004 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.1.0-n-e4e1aff60f\" not found" Sep 3 23:28:42.300188 kubelet[3004]: E0903 23:28:42.300162 3004 kubelet_node_status.go:466] "Error getting the current node from lister" 
err="node \"ci-4372.1.0-n-e4e1aff60f\" not found" Sep 3 23:28:42.401266 kubelet[3004]: E0903 23:28:42.401057 3004 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.1.0-n-e4e1aff60f\" not found" Sep 3 23:28:42.437983 kubelet[3004]: I0903 23:28:42.437962 3004 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:42.452764 kubelet[3004]: E0903 23:28:42.452687 3004 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.1.0-n-e4e1aff60f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:42.452764 kubelet[3004]: I0903 23:28:42.452704 3004 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:42.454004 kubelet[3004]: E0903 23:28:42.453947 3004 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.1.0-n-e4e1aff60f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:42.454004 kubelet[3004]: I0903 23:28:42.453965 3004 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:42.456316 kubelet[3004]: E0903 23:28:42.456286 3004 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.1.0-n-e4e1aff60f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:42.627616 kubelet[3004]: I0903 23:28:42.627574 3004 apiserver.go:52] "Watching apiserver" Sep 3 23:28:42.639678 kubelet[3004]: I0903 23:28:42.639642 3004 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 3 23:28:44.133035 systemd[1]: Reload requested from client PID 3276 ('systemctl') (unit session-9.scope)... Sep 3 23:28:44.133050 systemd[1]: Reloading... Sep 3 23:28:44.204167 zram_generator::config[3319]: No configuration found. Sep 3 23:28:44.276532 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 3 23:28:44.368357 systemd[1]: Reloading finished in 235 ms. Sep 3 23:28:44.407440 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:28:44.421452 systemd[1]: kubelet.service: Deactivated successfully. Sep 3 23:28:44.421706 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:28:44.421748 systemd[1]: kubelet.service: Consumed 658ms CPU time, 125.5M memory peak. Sep 3 23:28:44.423369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:28:44.519787 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:28:44.526138 (kubelet)[3386]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 3 23:28:44.558539 kubelet[3386]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 3 23:28:44.558539 kubelet[3386]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 3 23:28:44.558539 kubelet[3386]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 3 23:28:44.558539 kubelet[3386]: I0903 23:28:44.557257 3386 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 3 23:28:44.563528 kubelet[3386]: I0903 23:28:44.563501 3386 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 3 23:28:44.563528 kubelet[3386]: I0903 23:28:44.563521 3386 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 3 23:28:44.563726 kubelet[3386]: I0903 23:28:44.563692 3386 server.go:954] "Client rotation is on, will bootstrap in background" Sep 3 23:28:44.564621 kubelet[3386]: I0903 23:28:44.564586 3386 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 3 23:28:44.566563 kubelet[3386]: I0903 23:28:44.566433 3386 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 3 23:28:44.570335 kubelet[3386]: I0903 23:28:44.570317 3386 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 3 23:28:44.573138 kubelet[3386]: I0903 23:28:44.573058 3386 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 3 23:28:44.573301 kubelet[3386]: I0903 23:28:44.573208 3386 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 3 23:28:44.573343 kubelet[3386]: I0903 23:28:44.573228 3386 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4372.1.0-n-e4e1aff60f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 3 23:28:44.573343 kubelet[3386]: I0903 23:28:44.573336 3386 topology_manager.go:138] "Creating topology manager with none policy" Sep 3 23:28:44.573343 kubelet[3386]: I0903 23:28:44.573342 3386 container_manager_linux.go:304] "Creating device plugin manager" Sep 3 23:28:44.573981 kubelet[3386]: I0903 23:28:44.573371 3386 state_mem.go:36] "Initialized new in-memory state store" Sep 3 23:28:44.573981 kubelet[3386]: I0903 23:28:44.573460 3386 kubelet.go:446] "Attempting to sync node with API server" Sep 3 23:28:44.573981 kubelet[3386]: I0903 23:28:44.573468 3386 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 3 23:28:44.573981 kubelet[3386]: I0903 23:28:44.573482 3386 kubelet.go:352] "Adding apiserver pod source" Sep 3 23:28:44.573981 kubelet[3386]: I0903 23:28:44.573489 3386 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 3 23:28:44.581607 kubelet[3386]: I0903 23:28:44.580142 3386 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 3 23:28:44.581607 kubelet[3386]: I0903 23:28:44.580470 3386 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 3 23:28:44.581607 kubelet[3386]: I0903 23:28:44.580776 3386 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 3 23:28:44.581607 kubelet[3386]: I0903 23:28:44.580797 3386 server.go:1287] "Started kubelet" Sep 3 23:28:44.581607 kubelet[3386]: I0903 23:28:44.581529 3386 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 3 23:28:44.582346 kubelet[3386]: I0903 23:28:44.582157 3386 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 3 23:28:44.582916 kubelet[3386]: I0903 23:28:44.582667 3386 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 3 23:28:44.583383 kubelet[3386]: I0903 23:28:44.583283 3386 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 3 23:28:44.584409 kubelet[3386]: I0903 23:28:44.582375 3386 server.go:479] "Adding debug handlers to kubelet server" Sep 3 23:28:44.585103 kubelet[3386]: I0903 23:28:44.583214 3386 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 3 23:28:44.585995 kubelet[3386]: I0903 23:28:44.585788 3386 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 3 23:28:44.586626 kubelet[3386]: I0903 23:28:44.586533 3386 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 3 23:28:44.586991 kubelet[3386]: I0903 23:28:44.586855 3386 reconciler.go:26] "Reconciler: start to sync state" Sep 3 23:28:44.590479 kubelet[3386]: E0903 23:28:44.590349 3386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.1.0-n-e4e1aff60f\" not found" Sep 3 23:28:44.592570 kubelet[3386]: I0903 23:28:44.592157 3386 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 3 23:28:44.592973 kubelet[3386]: I0903 23:28:44.592807 3386 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 3 23:28:44.592973 kubelet[3386]: I0903 23:28:44.592827 3386 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 3 23:28:44.592973 kubelet[3386]: I0903 23:28:44.592839 3386 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 3 23:28:44.592973 kubelet[3386]: I0903 23:28:44.592843 3386 kubelet.go:2382] "Starting kubelet main sync loop" Sep 3 23:28:44.592973 kubelet[3386]: E0903 23:28:44.592873 3386 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 3 23:28:44.600944 kubelet[3386]: I0903 23:28:44.599626 3386 factory.go:221] Registration of the systemd container factory successfully Sep 3 23:28:44.600944 kubelet[3386]: I0903 23:28:44.599696 3386 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 3 23:28:44.603084 kubelet[3386]: I0903 23:28:44.603057 3386 factory.go:221] Registration of the containerd container factory successfully Sep 3 23:28:44.660885 kubelet[3386]: I0903 23:28:44.658955 3386 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 3 23:28:44.660885 kubelet[3386]: I0903 23:28:44.658970 3386 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 3 23:28:44.660885 kubelet[3386]: I0903 23:28:44.658985 3386 state_mem.go:36] "Initialized new in-memory state store" Sep 3 23:28:44.660885 kubelet[3386]: I0903 23:28:44.659089 3386 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 3 23:28:44.660885 kubelet[3386]: I0903 23:28:44.659096 3386 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 3 23:28:44.660885 kubelet[3386]: I0903 23:28:44.659107 3386 policy_none.go:49] "None policy: Start" Sep 3 23:28:44.660885 kubelet[3386]: I0903 23:28:44.659113 3386 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 3 23:28:44.660885 kubelet[3386]: I0903 23:28:44.659120 3386 state_mem.go:35] "Initializing new in-memory state store" Sep 3 23:28:44.660885 kubelet[3386]: I0903 23:28:44.659181 3386 state_mem.go:75] "Updated machine memory state" Sep 3 23:28:44.666166 kubelet[3386]: I0903 
23:28:44.666150 3386 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 3 23:28:44.666880 kubelet[3386]: I0903 23:28:44.666346 3386 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 3 23:28:44.667012 kubelet[3386]: I0903 23:28:44.666985 3386 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 3 23:28:44.667267 kubelet[3386]: I0903 23:28:44.667180 3386 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 3 23:28:44.669952 kubelet[3386]: E0903 23:28:44.669903 3386 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 3 23:28:44.693652 kubelet[3386]: I0903 23:28:44.693630 3386 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:44.694085 kubelet[3386]: I0903 23:28:44.693836 3386 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:44.695244 kubelet[3386]: I0903 23:28:44.693906 3386 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:44.701206 kubelet[3386]: W0903 23:28:44.701188 3386 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 3 23:28:44.706135 kubelet[3386]: W0903 23:28:44.705985 3386 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 3 23:28:44.706135 kubelet[3386]: W0903 23:28:44.706010 3386 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 3 23:28:44.771019 kubelet[3386]: I0903 23:28:44.770985 3386 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:44.783184 kubelet[3386]: I0903 23:28:44.783153 3386 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:44.783360 kubelet[3386]: I0903 23:28:44.783211 3386 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:44.788365 kubelet[3386]: I0903 23:28:44.788339 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/acaf686ae1cc065812528c32da9e0979-k8s-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-e4e1aff60f\" (UID: \"acaf686ae1cc065812528c32da9e0979\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:44.788365 kubelet[3386]: I0903 23:28:44.788367 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/acaf686ae1cc065812528c32da9e0979-kubeconfig\") pod \"kube-controller-manager-ci-4372.1.0-n-e4e1aff60f\" (UID: \"acaf686ae1cc065812528c32da9e0979\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:44.788627 kubelet[3386]: I0903 23:28:44.788381 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/4b7ebffaad841f4d6fc4a47690a9500d-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-n-e4e1aff60f\" (UID: \"4b7ebffaad841f4d6fc4a47690a9500d\") " pod="kube-system/kube-scheduler-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:44.788627 kubelet[3386]: I0903 23:28:44.788400 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/acaf686ae1cc065812528c32da9e0979-ca-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-e4e1aff60f\" (UID: \"acaf686ae1cc065812528c32da9e0979\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:44.788627 kubelet[3386]: I0903 23:28:44.788413 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/acaf686ae1cc065812528c32da9e0979-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.1.0-n-e4e1aff60f\" (UID: \"acaf686ae1cc065812528c32da9e0979\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:44.788627 kubelet[3386]: I0903 23:28:44.788423 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/acaf686ae1cc065812528c32da9e0979-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-n-e4e1aff60f\" (UID: \"acaf686ae1cc065812528c32da9e0979\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:44.788627 kubelet[3386]: I0903 23:28:44.788435 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e185b9b6262122b4eb0e6bf804e535-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-n-e4e1aff60f\" (UID: \"66e185b9b6262122b4eb0e6bf804e535\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:44.788838 kubelet[3386]: I0903 23:28:44.788443 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e185b9b6262122b4eb0e6bf804e535-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-n-e4e1aff60f\" (UID: \"66e185b9b6262122b4eb0e6bf804e535\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:44.788838 kubelet[3386]: I0903 23:28:44.788453 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e185b9b6262122b4eb0e6bf804e535-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.1.0-n-e4e1aff60f\" (UID: \"66e185b9b6262122b4eb0e6bf804e535\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:45.575924 kubelet[3386]: I0903 23:28:45.575819 3386 apiserver.go:52] "Watching apiserver" Sep 3 23:28:45.587311 kubelet[3386]: I0903 23:28:45.587283 3386 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 3 23:28:45.638256 kubelet[3386]: I0903 23:28:45.638222 3386 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:45.704398 kubelet[3386]: I0903 23:28:45.638425 3386 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:45.704398 kubelet[3386]: W0903 23:28:45.651365 3386 warnings.go:70] metadata.name: this is used in the Pod's 
hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 3 23:28:45.704398 kubelet[3386]: E0903 23:28:45.651491 3386 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.1.0-n-e4e1aff60f\" already exists" pod="kube-system/kube-apiserver-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:45.704398 kubelet[3386]: W0903 23:28:45.651828 3386 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 3 23:28:45.704398 kubelet[3386]: E0903 23:28:45.651903 3386 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.1.0-n-e4e1aff60f\" already exists" pod="kube-system/kube-scheduler-ci-4372.1.0-n-e4e1aff60f" Sep 3 23:28:45.704398 kubelet[3386]: I0903 23:28:45.678660 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.1.0-n-e4e1aff60f" podStartSLOduration=1.678649176 podStartE2EDuration="1.678649176s" podCreationTimestamp="2025-09-03 23:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:28:45.666244297 +0000 UTC m=+1.137170445" watchObservedRunningTime="2025-09-03 23:28:45.678649176 +0000 UTC m=+1.149575316" Sep 3 23:28:45.704398 kubelet[3386]: I0903 23:28:45.688604 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.1.0-n-e4e1aff60f" podStartSLOduration=1.688593727 podStartE2EDuration="1.688593727s" podCreationTimestamp="2025-09-03 23:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:28:45.678842247 +0000 UTC m=+1.149768387" watchObservedRunningTime="2025-09-03 23:28:45.688593727 +0000 UTC m=+1.159519867" Sep 3 23:28:45.704563 kubelet[3386]: I0903 23:28:45.699309 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-e4e1aff60f" podStartSLOduration=1.699299777 podStartE2EDuration="1.699299777s" podCreationTimestamp="2025-09-03 23:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:28:45.689045447 +0000 UTC m=+1.159971595" watchObservedRunningTime="2025-09-03 23:28:45.699299777 +0000 UTC m=+1.170225925" Sep 3 23:28:45.709044 sudo[3420]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 3 23:28:45.709251 sudo[3420]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 3 23:28:46.083801 sudo[3420]: pam_unix(sudo:session): session closed for user root Sep 3 23:28:47.087324 sudo[2375]: pam_unix(sudo:session): session closed for user root Sep 3 23:28:47.175492 sshd[2374]: Connection closed by 10.200.16.10 port 36712 Sep 3 23:28:47.175799 sshd-session[2372]: pam_unix(sshd:session): session closed for user core Sep 3 23:28:47.179259 systemd-logind[1864]: Session 9 logged out. Waiting for processes to exit. Sep 3 23:28:47.179892 systemd[1]: sshd@6-10.200.20.24:22-10.200.16.10:36712.service: Deactivated successfully. Sep 3 23:28:47.182148 systemd[1]: session-9.scope: Deactivated successfully. Sep 3 23:28:47.182984 systemd[1]: session-9.scope: Consumed 3.466s CPU time, 270.6M memory peak. 
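The flag-deprecation warnings and the nodeConfig dump in the kubelet restart above map onto fields of a KubeletConfiguration file, the target of the --config flag those warnings point to. A minimal sketch using only the values visible in this log; the runtime endpoint value is an assumption, not something read from this host:

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd                     # "CgroupDriver":"systemd" in the nodeConfig dump
  staticPodPath: /etc/kubernetes/manifests  # "Adding static pod path" above
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock  # assumed; set via the deprecated flag here
  evictionHard:                             # from HardEvictionThresholds in the nodeConfig dump
    memory.available: "100Mi"
    nodefs.available: "10%"
    nodefs.inodesFree: "5%"
    imagefs.available: "15%"
    imagefs.inodesFree: "5%"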
Sep 3 23:28:47.185096 systemd-logind[1864]: Removed session 9. Sep 3 23:28:49.188938 kubelet[3386]: I0903 23:28:49.188836 3386 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 3 23:28:49.189947 kubelet[3386]: I0903 23:28:49.189690 3386 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 3 23:28:49.189990 containerd[1905]: time="2025-09-03T23:28:49.189448535Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 3 23:28:49.874901 systemd[1]: Created slice kubepods-besteffort-pod38753954_e8c0_4963_ab13_49ae7de8ce86.slice - libcontainer container kubepods-besteffort-pod38753954_e8c0_4963_ab13_49ae7de8ce86.slice. Sep 3 23:28:49.887162 systemd[1]: Created slice kubepods-burstable-pod89cc364d_d6db_40fb_8f31_457c13967201.slice - libcontainer container kubepods-burstable-pod89cc364d_d6db_40fb_8f31_457c13967201.slice. Sep 3 23:28:49.925616 kubelet[3386]: I0903 23:28:49.925452 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38753954-e8c0-4963-ab13-49ae7de8ce86-xtables-lock\") pod \"kube-proxy-xvchx\" (UID: \"38753954-e8c0-4963-ab13-49ae7de8ce86\") " pod="kube-system/kube-proxy-xvchx" Sep 3 23:28:49.925616 kubelet[3386]: I0903 23:28:49.925487 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-cilium-run\") pod \"cilium-w5vrl\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " pod="kube-system/cilium-w5vrl" Sep 3 23:28:49.925616 kubelet[3386]: I0903 23:28:49.925500 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-bpf-maps\") pod \"cilium-w5vrl\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " pod="kube-system/cilium-w5vrl" Sep 3 23:28:49.925616 kubelet[3386]: I0903 23:28:49.925509 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-hostproc\") pod \"cilium-w5vrl\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " pod="kube-system/cilium-w5vrl" Sep 3 23:28:49.925616 kubelet[3386]: I0903 23:28:49.925518 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-host-proc-sys-net\") pod \"cilium-w5vrl\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " pod="kube-system/cilium-w5vrl" Sep 3 23:28:49.925616 kubelet[3386]: I0903 23:28:49.925527 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsg5c\" (UniqueName: \"kubernetes.io/projected/38753954-e8c0-4963-ab13-49ae7de8ce86-kube-api-access-bsg5c\") pod \"kube-proxy-xvchx\" (UID: \"38753954-e8c0-4963-ab13-49ae7de8ce86\") " pod="kube-system/kube-proxy-xvchx" Sep 3 23:28:49.925782 kubelet[3386]: I0903 23:28:49.925537 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-etc-cni-netd\") pod \"cilium-w5vrl\" (UID: 
\"89cc364d-d6db-40fb-8f31-457c13967201\") " pod="kube-system/cilium-w5vrl" Sep 3 23:28:49.925782 kubelet[3386]: I0903 23:28:49.925547 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/89cc364d-d6db-40fb-8f31-457c13967201-clustermesh-secrets\") pod \"cilium-w5vrl\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " pod="kube-system/cilium-w5vrl" Sep 3 23:28:49.925782 kubelet[3386]: I0903 23:28:49.925557 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-cilium-cgroup\") pod \"cilium-w5vrl\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " pod="kube-system/cilium-w5vrl" Sep 3 23:28:49.925782 kubelet[3386]: I0903 23:28:49.925567 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/38753954-e8c0-4963-ab13-49ae7de8ce86-kube-proxy\") pod \"kube-proxy-xvchx\" (UID: \"38753954-e8c0-4963-ab13-49ae7de8ce86\") " pod="kube-system/kube-proxy-xvchx" Sep 3 23:28:49.925782 kubelet[3386]: I0903 23:28:49.925577 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89m87\" (UniqueName: \"kubernetes.io/projected/89cc364d-d6db-40fb-8f31-457c13967201-kube-api-access-89m87\") pod \"cilium-w5vrl\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " pod="kube-system/cilium-w5vrl" Sep 3 23:28:49.925855 kubelet[3386]: I0903 23:28:49.925587 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/89cc364d-d6db-40fb-8f31-457c13967201-cilium-config-path\") pod \"cilium-w5vrl\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " pod="kube-system/cilium-w5vrl" Sep 3 23:28:49.926043 kubelet[3386]: I0903 23:28:49.926001 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-host-proc-sys-kernel\") pod \"cilium-w5vrl\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " pod="kube-system/cilium-w5vrl" Sep 3 23:28:49.926043 kubelet[3386]: I0903 23:28:49.926023 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/89cc364d-d6db-40fb-8f31-457c13967201-hubble-tls\") pod \"cilium-w5vrl\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " pod="kube-system/cilium-w5vrl" Sep 3 23:28:49.926190 kubelet[3386]: I0903 23:28:49.926137 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38753954-e8c0-4963-ab13-49ae7de8ce86-lib-modules\") pod \"kube-proxy-xvchx\" (UID: \"38753954-e8c0-4963-ab13-49ae7de8ce86\") " pod="kube-system/kube-proxy-xvchx" Sep 3 23:28:49.926190 kubelet[3386]: I0903 23:28:49.926156 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-cni-path\") pod \"cilium-w5vrl\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " pod="kube-system/cilium-w5vrl" Sep 3 23:28:49.926190 kubelet[3386]: I0903 23:28:49.926166 3386 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-lib-modules\") pod \"cilium-w5vrl\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " pod="kube-system/cilium-w5vrl" Sep 3 23:28:49.926190 kubelet[3386]: I0903 23:28:49.926175 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-xtables-lock\") pod \"cilium-w5vrl\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " pod="kube-system/cilium-w5vrl" Sep 3 23:28:50.040778 kubelet[3386]: E0903 23:28:50.040755 3386 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 3 23:28:50.040962 kubelet[3386]: E0903 23:28:50.040945 3386 projected.go:194] Error preparing data for projected volume kube-api-access-bsg5c for pod kube-system/kube-proxy-xvchx: configmap "kube-root-ca.crt" not found Sep 3 23:28:50.041023 kubelet[3386]: E0903 23:28:50.041011 3386 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/38753954-e8c0-4963-ab13-49ae7de8ce86-kube-api-access-bsg5c podName:38753954-e8c0-4963-ab13-49ae7de8ce86 nodeName:}" failed. No retries permitted until 2025-09-03 23:28:50.540987991 +0000 UTC m=+6.011914131 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bsg5c" (UniqueName: "kubernetes.io/projected/38753954-e8c0-4963-ab13-49ae7de8ce86-kube-api-access-bsg5c") pod "kube-proxy-xvchx" (UID: "38753954-e8c0-4963-ab13-49ae7de8ce86") : configmap "kube-root-ca.crt" not found Sep 3 23:28:50.042587 kubelet[3386]: E0903 23:28:50.042569 3386 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 3 23:28:50.042587 kubelet[3386]: E0903 23:28:50.042585 3386 projected.go:194] Error preparing data for projected volume kube-api-access-89m87 for pod kube-system/cilium-w5vrl: configmap "kube-root-ca.crt" not found Sep 3 23:28:50.042757 kubelet[3386]: E0903 23:28:50.042609 3386 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89cc364d-d6db-40fb-8f31-457c13967201-kube-api-access-89m87 podName:89cc364d-d6db-40fb-8f31-457c13967201 nodeName:}" failed. No retries permitted until 2025-09-03 23:28:50.542600616 +0000 UTC m=+6.013526756 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-89m87" (UniqueName: "kubernetes.io/projected/89cc364d-d6db-40fb-8f31-457c13967201-kube-api-access-89m87") pod "cilium-w5vrl" (UID: "89cc364d-d6db-40fb-8f31-457c13967201") : configmap "kube-root-ca.crt" not found Sep 3 23:28:50.298110 systemd[1]: Created slice kubepods-besteffort-podf0756153_c1ae_4b65_9224_2175f0918895.slice - libcontainer container kubepods-besteffort-podf0756153_c1ae_4b65_9224_2175f0918895.slice. 
Sep 3 23:28:50.328611 kubelet[3386]: I0903 23:28:50.328552 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0756153-c1ae-4b65-9224-2175f0918895-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-sjkzh\" (UID: \"f0756153-c1ae-4b65-9224-2175f0918895\") " pod="kube-system/cilium-operator-6c4d7847fc-sjkzh" Sep 3 23:28:50.328611 kubelet[3386]: I0903 23:28:50.328585 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8qpz\" (UniqueName: \"kubernetes.io/projected/f0756153-c1ae-4b65-9224-2175f0918895-kube-api-access-l8qpz\") pod \"cilium-operator-6c4d7847fc-sjkzh\" (UID: \"f0756153-c1ae-4b65-9224-2175f0918895\") " pod="kube-system/cilium-operator-6c4d7847fc-sjkzh" Sep 3 23:28:50.602382 containerd[1905]: time="2025-09-03T23:28:50.602250857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-sjkzh,Uid:f0756153-c1ae-4b65-9224-2175f0918895,Namespace:kube-system,Attempt:0,}" Sep 3 23:28:50.644599 containerd[1905]: time="2025-09-03T23:28:50.644568588Z" level=info msg="connecting to shim ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243" address="unix:///run/containerd/s/7555a013d384d93cd8a7adae4cc8f5d06adf6f0e602eef88a94cfbe59f9ea80a" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:28:50.663022 systemd[1]: Started cri-containerd-ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243.scope - libcontainer container ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243. Sep 3 23:28:50.689452 containerd[1905]: time="2025-09-03T23:28:50.689423681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-sjkzh,Uid:f0756153-c1ae-4b65-9224-2175f0918895,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243\"" Sep 3 23:28:50.692013 containerd[1905]: time="2025-09-03T23:28:50.691976691Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 3 23:28:50.784105 containerd[1905]: time="2025-09-03T23:28:50.784076785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xvchx,Uid:38753954-e8c0-4963-ab13-49ae7de8ce86,Namespace:kube-system,Attempt:0,}" Sep 3 23:28:50.792026 containerd[1905]: time="2025-09-03T23:28:50.791934197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w5vrl,Uid:89cc364d-d6db-40fb-8f31-457c13967201,Namespace:kube-system,Attempt:0,}" Sep 3 23:28:50.881934 containerd[1905]: time="2025-09-03T23:28:50.881639127Z" level=info msg="connecting to shim 75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974" address="unix:///run/containerd/s/8281d6c6a394de90cf1f35009959accd57b8a7e540918cd491ac3c3aa0bd919b" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:28:50.882063 containerd[1905]: time="2025-09-03T23:28:50.882043037Z" level=info msg="connecting to shim 6bbf76c9ae73731907f8e7fa69faa431b3cd1b4729569334ac07d80fc398a6b6" address="unix:///run/containerd/s/2b2561be81e599f1ed432adf1cd854aebb241ec550f755bdf9c22dd65d23fac6" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:28:50.902013 systemd[1]: Started cri-containerd-75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974.scope - libcontainer container 75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974. 
Sep 3 23:28:50.904410 systemd[1]: Started cri-containerd-6bbf76c9ae73731907f8e7fa69faa431b3cd1b4729569334ac07d80fc398a6b6.scope - libcontainer container 6bbf76c9ae73731907f8e7fa69faa431b3cd1b4729569334ac07d80fc398a6b6. Sep 3 23:28:50.927633 containerd[1905]: time="2025-09-03T23:28:50.927606202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w5vrl,Uid:89cc364d-d6db-40fb-8f31-457c13967201,Namespace:kube-system,Attempt:0,} returns sandbox id \"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\"" Sep 3 23:28:50.936088 containerd[1905]: time="2025-09-03T23:28:50.936060412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xvchx,Uid:38753954-e8c0-4963-ab13-49ae7de8ce86,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bbf76c9ae73731907f8e7fa69faa431b3cd1b4729569334ac07d80fc398a6b6\"" Sep 3 23:28:50.938894 containerd[1905]: time="2025-09-03T23:28:50.938545700Z" level=info msg="CreateContainer within sandbox \"6bbf76c9ae73731907f8e7fa69faa431b3cd1b4729569334ac07d80fc398a6b6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 3 23:28:50.959303 containerd[1905]: time="2025-09-03T23:28:50.959281871Z" level=info msg="Container d5e1954c635fa12690ede99a71b685824f43493bfa9f25691f1ea4e05aa83c3b: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:28:50.975390 containerd[1905]: time="2025-09-03T23:28:50.975365461Z" level=info msg="CreateContainer within sandbox \"6bbf76c9ae73731907f8e7fa69faa431b3cd1b4729569334ac07d80fc398a6b6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d5e1954c635fa12690ede99a71b685824f43493bfa9f25691f1ea4e05aa83c3b\"" Sep 3 23:28:50.976092 containerd[1905]: time="2025-09-03T23:28:50.975984475Z" level=info msg="StartContainer for \"d5e1954c635fa12690ede99a71b685824f43493bfa9f25691f1ea4e05aa83c3b\"" Sep 3 23:28:50.977007 containerd[1905]: time="2025-09-03T23:28:50.976978374Z" level=info msg="connecting to shim d5e1954c635fa12690ede99a71b685824f43493bfa9f25691f1ea4e05aa83c3b" address="unix:///run/containerd/s/2b2561be81e599f1ed432adf1cd854aebb241ec550f755bdf9c22dd65d23fac6" protocol=ttrpc version=3 Sep 3 23:28:50.992011 systemd[1]: Started cri-containerd-d5e1954c635fa12690ede99a71b685824f43493bfa9f25691f1ea4e05aa83c3b.scope - libcontainer container d5e1954c635fa12690ede99a71b685824f43493bfa9f25691f1ea4e05aa83c3b. Sep 3 23:28:51.020348 containerd[1905]: time="2025-09-03T23:28:51.020323182Z" level=info msg="StartContainer for \"d5e1954c635fa12690ede99a71b685824f43493bfa9f25691f1ea4e05aa83c3b\" returns successfully" Sep 3 23:28:51.669298 kubelet[3386]: I0903 23:28:51.669248 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xvchx" podStartSLOduration=2.669144149 podStartE2EDuration="2.669144149s" podCreationTimestamp="2025-09-03 23:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:28:51.668633419 +0000 UTC m=+7.139559559" watchObservedRunningTime="2025-09-03 23:28:51.669144149 +0000 UTC m=+7.140070289" Sep 3 23:28:52.287311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2764608603.mount: Deactivated successfully. 
Sep 3 23:28:52.675426 containerd[1905]: time="2025-09-03T23:28:52.675389162Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:28:52.678350 containerd[1905]: time="2025-09-03T23:28:52.678244838Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 3 23:28:52.681843 containerd[1905]: time="2025-09-03T23:28:52.681821948Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:28:52.682815 containerd[1905]: time="2025-09-03T23:28:52.682692443Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.990583507s" Sep 3 23:28:52.682815 containerd[1905]: time="2025-09-03T23:28:52.682715700Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 3 23:28:52.683877 containerd[1905]: time="2025-09-03T23:28:52.683854292Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 3 23:28:52.686017 containerd[1905]: time="2025-09-03T23:28:52.685998976Z" level=info msg="CreateContainer within sandbox \"ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 3 23:28:52.705847 containerd[1905]: time="2025-09-03T23:28:52.705289687Z" level=info msg="Container 048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:28:52.718732 containerd[1905]: time="2025-09-03T23:28:52.718701520Z" level=info msg="CreateContainer within sandbox \"ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d\"" Sep 3 23:28:52.719365 containerd[1905]: time="2025-09-03T23:28:52.719240227Z" level=info msg="StartContainer for \"048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d\"" Sep 3 23:28:52.720289 containerd[1905]: time="2025-09-03T23:28:52.720108522Z" level=info msg="connecting to shim 048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d" address="unix:///run/containerd/s/7555a013d384d93cd8a7adae4cc8f5d06adf6f0e602eef88a94cfbe59f9ea80a" protocol=ttrpc version=3 Sep 3 23:28:52.740026 systemd[1]: Started cri-containerd-048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d.scope - libcontainer container 048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d. 
Sep 3 23:28:52.763976 containerd[1905]: time="2025-09-03T23:28:52.763821590Z" level=info msg="StartContainer for \"048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d\" returns successfully" Sep 3 23:28:53.669247 kubelet[3386]: I0903 23:28:53.669181 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-sjkzh" podStartSLOduration=1.675971926 podStartE2EDuration="3.669166942s" podCreationTimestamp="2025-09-03 23:28:50 +0000 UTC" firstStartedPulling="2025-09-03 23:28:50.690534752 +0000 UTC m=+6.161460900" lastFinishedPulling="2025-09-03 23:28:52.683729776 +0000 UTC m=+8.154655916" observedRunningTime="2025-09-03 23:28:53.669008968 +0000 UTC m=+9.139935108" watchObservedRunningTime="2025-09-03 23:28:53.669166942 +0000 UTC m=+9.140093082" Sep 3 23:28:56.368319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount651949027.mount: Deactivated successfully. Sep 3 23:28:57.852088 containerd[1905]: time="2025-09-03T23:28:57.851976832Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:28:57.855663 containerd[1905]: time="2025-09-03T23:28:57.855637040Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 3 23:28:57.858917 containerd[1905]: time="2025-09-03T23:28:57.858875778Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:28:57.859885 containerd[1905]: time="2025-09-03T23:28:57.859857493Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.175978287s" Sep 3 23:28:57.859885 containerd[1905]: time="2025-09-03T23:28:57.859881237Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 3 23:28:57.863328 containerd[1905]: time="2025-09-03T23:28:57.862499033Z" level=info msg="CreateContainer within sandbox \"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 3 23:28:57.926245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount742919277.mount: Deactivated successfully. Sep 3 23:28:57.928467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount370094944.mount: Deactivated successfully. 
Sep 3 23:28:57.930489 containerd[1905]: time="2025-09-03T23:28:57.928851514Z" level=info msg="Container 70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:28:58.213566 containerd[1905]: time="2025-09-03T23:28:58.213531392Z" level=info msg="CreateContainer within sandbox \"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe\"" Sep 3 23:28:58.214888 containerd[1905]: time="2025-09-03T23:28:58.214059970Z" level=info msg="StartContainer for \"70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe\"" Sep 3 23:28:58.214888 containerd[1905]: time="2025-09-03T23:28:58.214634918Z" level=info msg="connecting to shim 70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe" address="unix:///run/containerd/s/8281d6c6a394de90cf1f35009959accd57b8a7e540918cd491ac3c3aa0bd919b" protocol=ttrpc version=3 Sep 3 23:28:58.232031 systemd[1]: Started cri-containerd-70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe.scope - libcontainer container 70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe. Sep 3 23:28:58.258326 containerd[1905]: time="2025-09-03T23:28:58.258301179Z" level=info msg="StartContainer for \"70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe\" returns successfully" Sep 3 23:28:58.261185 systemd[1]: cri-containerd-70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe.scope: Deactivated successfully. Sep 3 23:28:58.265763 containerd[1905]: time="2025-09-03T23:28:58.265720487Z" level=info msg="received exit event container_id:\"70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe\" id:\"70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe\" pid:3850 exited_at:{seconds:1756942138 nanos:265465910}" Sep 3 23:28:58.266657 containerd[1905]: time="2025-09-03T23:28:58.265919526Z" level=info msg="TaskExit event in podsandbox handler container_id:\"70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe\" id:\"70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe\" pid:3850 exited_at:{seconds:1756942138 nanos:265465910}" Sep 3 23:28:58.924119 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe-rootfs.mount: Deactivated successfully. 
Sep 3 23:29:00.673572 containerd[1905]: time="2025-09-03T23:29:00.673276616Z" level=info msg="CreateContainer within sandbox \"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 3 23:29:00.696582 containerd[1905]: time="2025-09-03T23:29:00.695129447Z" level=info msg="Container ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:29:00.710239 containerd[1905]: time="2025-09-03T23:29:00.710216192Z" level=info msg="CreateContainer within sandbox \"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1\"" Sep 3 23:29:00.710686 containerd[1905]: time="2025-09-03T23:29:00.710668592Z" level=info msg="StartContainer for \"ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1\"" Sep 3 23:29:00.711782 containerd[1905]: time="2025-09-03T23:29:00.711762046Z" level=info msg="connecting to shim ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1" address="unix:///run/containerd/s/8281d6c6a394de90cf1f35009959accd57b8a7e540918cd491ac3c3aa0bd919b" protocol=ttrpc version=3 Sep 3 23:29:00.730042 systemd[1]: Started cri-containerd-ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1.scope - libcontainer container ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1. Sep 3 23:29:00.754304 containerd[1905]: time="2025-09-03T23:29:00.754276674Z" level=info msg="StartContainer for \"ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1\" returns successfully" Sep 3 23:29:00.761780 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 3 23:29:00.762085 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 3 23:29:00.762985 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 3 23:29:00.764949 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 3 23:29:00.767822 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 3 23:29:00.768531 systemd[1]: cri-containerd-ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1.scope: Deactivated successfully. Sep 3 23:29:00.769405 containerd[1905]: time="2025-09-03T23:29:00.769129644Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1\" id:\"ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1\" pid:3895 exited_at:{seconds:1756942140 nanos:768373593}" Sep 3 23:29:00.769585 containerd[1905]: time="2025-09-03T23:29:00.769563827Z" level=info msg="received exit event container_id:\"ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1\" id:\"ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1\" pid:3895 exited_at:{seconds:1756942140 nanos:768373593}" Sep 3 23:29:00.780050 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 3 23:29:01.675134 containerd[1905]: time="2025-09-03T23:29:01.675092773Z" level=info msg="CreateContainer within sandbox \"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 3 23:29:01.694947 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1-rootfs.mount: Deactivated successfully. Sep 3 23:29:01.704016 containerd[1905]: time="2025-09-03T23:29:01.703956938Z" level=info msg="Container c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:29:01.726797 containerd[1905]: time="2025-09-03T23:29:01.726754970Z" level=info msg="CreateContainer within sandbox \"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a\"" Sep 3 23:29:01.727377 containerd[1905]: time="2025-09-03T23:29:01.727147879Z" level=info msg="StartContainer for \"c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a\"" Sep 3 23:29:01.728548 containerd[1905]: time="2025-09-03T23:29:01.728410068Z" level=info msg="connecting to shim c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a" address="unix:///run/containerd/s/8281d6c6a394de90cf1f35009959accd57b8a7e540918cd491ac3c3aa0bd919b" protocol=ttrpc version=3 Sep 3 23:29:01.744022 systemd[1]: Started cri-containerd-c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a.scope - libcontainer container c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a. Sep 3 23:29:01.767904 systemd[1]: cri-containerd-c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a.scope: Deactivated successfully. Sep 3 23:29:01.771030 containerd[1905]: time="2025-09-03T23:29:01.770983010Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a\" id:\"c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a\" pid:3942 exited_at:{seconds:1756942141 nanos:770722456}" Sep 3 23:29:01.771251 containerd[1905]: time="2025-09-03T23:29:01.771201545Z" level=info msg="received exit event container_id:\"c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a\" id:\"c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a\" pid:3942 exited_at:{seconds:1756942141 nanos:770722456}" Sep 3 23:29:01.772480 containerd[1905]: time="2025-09-03T23:29:01.772452445Z" level=info msg="StartContainer for \"c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a\" returns successfully" Sep 3 23:29:02.680374 containerd[1905]: time="2025-09-03T23:29:02.680315049Z" level=info msg="CreateContainer within sandbox \"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 3 23:29:02.693691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a-rootfs.mount: Deactivated successfully. 
Sep 3 23:29:02.702517 containerd[1905]: time="2025-09-03T23:29:02.702114686Z" level=info msg="Container 1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:29:02.716070 containerd[1905]: time="2025-09-03T23:29:02.716043991Z" level=info msg="CreateContainer within sandbox \"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503\"" Sep 3 23:29:02.716676 containerd[1905]: time="2025-09-03T23:29:02.716356178Z" level=info msg="StartContainer for \"1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503\"" Sep 3 23:29:02.716871 containerd[1905]: time="2025-09-03T23:29:02.716847051Z" level=info msg="connecting to shim 1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503" address="unix:///run/containerd/s/8281d6c6a394de90cf1f35009959accd57b8a7e540918cd491ac3c3aa0bd919b" protocol=ttrpc version=3 Sep 3 23:29:02.734016 systemd[1]: Started cri-containerd-1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503.scope - libcontainer container 1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503. Sep 3 23:29:02.752807 systemd[1]: cri-containerd-1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503.scope: Deactivated successfully. Sep 3 23:29:02.754045 containerd[1905]: time="2025-09-03T23:29:02.753920752Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503\" id:\"1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503\" pid:3981 exited_at:{seconds:1756942142 nanos:753494337}" Sep 3 23:29:02.758318 containerd[1905]: time="2025-09-03T23:29:02.758231848Z" level=info msg="received exit event container_id:\"1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503\" id:\"1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503\" pid:3981 exited_at:{seconds:1756942142 nanos:753494337}" Sep 3 23:29:02.759383 containerd[1905]: time="2025-09-03T23:29:02.759360223Z" level=info msg="StartContainer for \"1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503\" returns successfully" Sep 3 23:29:02.772998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503-rootfs.mount: Deactivated successfully. 
Sep 3 23:29:03.685227 containerd[1905]: time="2025-09-03T23:29:03.685036584Z" level=info msg="CreateContainer within sandbox \"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 3 23:29:03.715516 containerd[1905]: time="2025-09-03T23:29:03.714539477Z" level=info msg="Container e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:29:03.729379 containerd[1905]: time="2025-09-03T23:29:03.729352573Z" level=info msg="CreateContainer within sandbox \"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215\"" Sep 3 23:29:03.730030 containerd[1905]: time="2025-09-03T23:29:03.729906296Z" level=info msg="StartContainer for \"e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215\"" Sep 3 23:29:03.731443 containerd[1905]: time="2025-09-03T23:29:03.731408684Z" level=info msg="connecting to shim e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215" address="unix:///run/containerd/s/8281d6c6a394de90cf1f35009959accd57b8a7e540918cd491ac3c3aa0bd919b" protocol=ttrpc version=3 Sep 3 23:29:03.755020 systemd[1]: Started cri-containerd-e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215.scope - libcontainer container e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215. Sep 3 23:29:03.784792 containerd[1905]: time="2025-09-03T23:29:03.784714432Z" level=info msg="StartContainer for \"e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215\" returns successfully" Sep 3 23:29:03.840863 containerd[1905]: time="2025-09-03T23:29:03.840836462Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215\" id:\"b636058deb6ef98053d623be475ac7efc1076df4c6cdc1b9a95546bdb1b90c05\" pid:4052 exited_at:{seconds:1756942143 nanos:839890589}" Sep 3 23:29:03.943797 kubelet[3386]: I0903 23:29:03.943666 3386 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 3 23:29:03.986743 systemd[1]: Created slice kubepods-burstable-podb3afd9d6_8bd2_4ec6_b51c_b2a364bf1dc6.slice - libcontainer container kubepods-burstable-podb3afd9d6_8bd2_4ec6_b51c_b2a364bf1dc6.slice. Sep 3 23:29:03.994927 systemd[1]: Created slice kubepods-burstable-pod1de02790_4c54_4caf_9d28_9614c2aa7c38.slice - libcontainer container kubepods-burstable-pod1de02790_4c54_4caf_9d28_9614c2aa7c38.slice. 
Sep 3 23:29:04.005151 kubelet[3386]: I0903 23:29:04.005122 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3afd9d6-8bd2-4ec6-b51c-b2a364bf1dc6-config-volume\") pod \"coredns-668d6bf9bc-jqbv7\" (UID: \"b3afd9d6-8bd2-4ec6-b51c-b2a364bf1dc6\") " pod="kube-system/coredns-668d6bf9bc-jqbv7" Sep 3 23:29:04.005151 kubelet[3386]: I0903 23:29:04.005151 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1de02790-4c54-4caf-9d28-9614c2aa7c38-config-volume\") pod \"coredns-668d6bf9bc-bbffz\" (UID: \"1de02790-4c54-4caf-9d28-9614c2aa7c38\") " pod="kube-system/coredns-668d6bf9bc-bbffz" Sep 3 23:29:04.005270 kubelet[3386]: I0903 23:29:04.005166 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn54r\" (UniqueName: \"kubernetes.io/projected/b3afd9d6-8bd2-4ec6-b51c-b2a364bf1dc6-kube-api-access-jn54r\") pod \"coredns-668d6bf9bc-jqbv7\" (UID: \"b3afd9d6-8bd2-4ec6-b51c-b2a364bf1dc6\") " pod="kube-system/coredns-668d6bf9bc-jqbv7" Sep 3 23:29:04.005270 kubelet[3386]: I0903 23:29:04.005176 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr45v\" (UniqueName: \"kubernetes.io/projected/1de02790-4c54-4caf-9d28-9614c2aa7c38-kube-api-access-qr45v\") pod \"coredns-668d6bf9bc-bbffz\" (UID: \"1de02790-4c54-4caf-9d28-9614c2aa7c38\") " pod="kube-system/coredns-668d6bf9bc-bbffz" Sep 3 23:29:04.291661 containerd[1905]: time="2025-09-03T23:29:04.291409826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jqbv7,Uid:b3afd9d6-8bd2-4ec6-b51c-b2a364bf1dc6,Namespace:kube-system,Attempt:0,}" Sep 3 23:29:04.303348 containerd[1905]: time="2025-09-03T23:29:04.303267676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bbffz,Uid:1de02790-4c54-4caf-9d28-9614c2aa7c38,Namespace:kube-system,Attempt:0,}" Sep 3 23:29:04.703543 kubelet[3386]: I0903 23:29:04.703438 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w5vrl" podStartSLOduration=8.771352261 podStartE2EDuration="15.703422736s" podCreationTimestamp="2025-09-03 23:28:49 +0000 UTC" firstStartedPulling="2025-09-03 23:28:50.928721066 +0000 UTC m=+6.399647206" lastFinishedPulling="2025-09-03 23:28:57.860791541 +0000 UTC m=+13.331717681" observedRunningTime="2025-09-03 23:29:04.702866413 +0000 UTC m=+20.173792553" watchObservedRunningTime="2025-09-03 23:29:04.703422736 +0000 UTC m=+20.174348884" Sep 3 23:29:06.123483 systemd-networkd[1700]: cilium_host: Link UP Sep 3 23:29:06.124080 systemd-networkd[1700]: cilium_net: Link UP Sep 3 23:29:06.124173 systemd-networkd[1700]: cilium_net: Gained carrier Sep 3 23:29:06.124243 systemd-networkd[1700]: cilium_host: Gained carrier Sep 3 23:29:06.493620 systemd-networkd[1700]: cilium_vxlan: Link UP Sep 3 23:29:06.493627 systemd-networkd[1700]: cilium_vxlan: Gained carrier Sep 3 23:29:06.834935 kernel: NET: Registered PF_ALG protocol family Sep 3 23:29:06.867030 systemd-networkd[1700]: cilium_net: Gained IPv6LL Sep 3 23:29:07.122082 systemd-networkd[1700]: cilium_host: Gained IPv6LL Sep 3 23:29:07.520428 systemd-networkd[1700]: lxc_health: Link UP Sep 3 23:29:07.531163 systemd-networkd[1700]: lxc_health: Gained carrier Sep 3 23:29:07.822934 kernel: eth0: renamed from tmp994b2 Sep 3 23:29:07.824088 
systemd-networkd[1700]: lxc96170b13d1c5: Link UP Sep 3 23:29:07.824237 systemd-networkd[1700]: lxc96170b13d1c5: Gained carrier Sep 3 23:29:07.826040 systemd-networkd[1700]: cilium_vxlan: Gained IPv6LL Sep 3 23:29:07.847014 systemd-networkd[1700]: lxcc4631a81e98b: Link UP Sep 3 23:29:07.848954 kernel: eth0: renamed from tmpd2ec9 Sep 3 23:29:07.849741 systemd-networkd[1700]: lxcc4631a81e98b: Gained carrier Sep 3 23:29:08.978079 systemd-networkd[1700]: lxc96170b13d1c5: Gained IPv6LL Sep 3 23:29:09.362552 systemd-networkd[1700]: lxcc4631a81e98b: Gained IPv6LL Sep 3 23:29:09.554065 systemd-networkd[1700]: lxc_health: Gained IPv6LL Sep 3 23:29:10.284542 containerd[1905]: time="2025-09-03T23:29:10.284505020Z" level=info msg="connecting to shim 994b21d8da339950283f6b0bc2589db73a14913c44d5de8218005321c4312c1e" address="unix:///run/containerd/s/018ca03a7b984063b8a640810ed50a4ecabf3f91055d70e691694cc8c5bc7f9d" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:29:10.285133 containerd[1905]: time="2025-09-03T23:29:10.285110144Z" level=info msg="connecting to shim d2ec93242c57352e040deafcda6dd368de907bfc7b8459d63ca5129107976077" address="unix:///run/containerd/s/bed8b41ee383ff6247c6185a3bca9c78ec0dbfd9e226cab1df7f9df8c58150fa" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:29:10.304029 systemd[1]: Started cri-containerd-994b21d8da339950283f6b0bc2589db73a14913c44d5de8218005321c4312c1e.scope - libcontainer container 994b21d8da339950283f6b0bc2589db73a14913c44d5de8218005321c4312c1e. Sep 3 23:29:10.306504 systemd[1]: Started cri-containerd-d2ec93242c57352e040deafcda6dd368de907bfc7b8459d63ca5129107976077.scope - libcontainer container d2ec93242c57352e040deafcda6dd368de907bfc7b8459d63ca5129107976077. Sep 3 23:29:10.340166 containerd[1905]: time="2025-09-03T23:29:10.340098111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jqbv7,Uid:b3afd9d6-8bd2-4ec6-b51c-b2a364bf1dc6,Namespace:kube-system,Attempt:0,} returns sandbox id \"994b21d8da339950283f6b0bc2589db73a14913c44d5de8218005321c4312c1e\"" Sep 3 23:29:10.344756 containerd[1905]: time="2025-09-03T23:29:10.344675445Z" level=info msg="CreateContainer within sandbox \"994b21d8da339950283f6b0bc2589db73a14913c44d5de8218005321c4312c1e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 3 23:29:10.347426 containerd[1905]: time="2025-09-03T23:29:10.347403436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bbffz,Uid:1de02790-4c54-4caf-9d28-9614c2aa7c38,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2ec93242c57352e040deafcda6dd368de907bfc7b8459d63ca5129107976077\"" Sep 3 23:29:10.351447 containerd[1905]: time="2025-09-03T23:29:10.351219480Z" level=info msg="CreateContainer within sandbox \"d2ec93242c57352e040deafcda6dd368de907bfc7b8459d63ca5129107976077\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 3 23:29:10.374536 containerd[1905]: time="2025-09-03T23:29:10.374515838Z" level=info msg="Container c49fe90466159508c6916d2231b251acb3e4f91933b40edde94f1522939f7aeb: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:29:10.383036 containerd[1905]: time="2025-09-03T23:29:10.383011683Z" level=info msg="Container 87da5848ec31a5e86ac18b9f88a2d64a0a6bd83519c4740ace9ec74745bb8d87: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:29:10.396406 containerd[1905]: time="2025-09-03T23:29:10.396382186Z" level=info msg="CreateContainer within sandbox \"994b21d8da339950283f6b0bc2589db73a14913c44d5de8218005321c4312c1e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns 
container id \"c49fe90466159508c6916d2231b251acb3e4f91933b40edde94f1522939f7aeb\"" Sep 3 23:29:10.396949 containerd[1905]: time="2025-09-03T23:29:10.396930917Z" level=info msg="StartContainer for \"c49fe90466159508c6916d2231b251acb3e4f91933b40edde94f1522939f7aeb\"" Sep 3 23:29:10.397517 containerd[1905]: time="2025-09-03T23:29:10.397494489Z" level=info msg="connecting to shim c49fe90466159508c6916d2231b251acb3e4f91933b40edde94f1522939f7aeb" address="unix:///run/containerd/s/018ca03a7b984063b8a640810ed50a4ecabf3f91055d70e691694cc8c5bc7f9d" protocol=ttrpc version=3 Sep 3 23:29:10.405988 containerd[1905]: time="2025-09-03T23:29:10.405961197Z" level=info msg="CreateContainer within sandbox \"d2ec93242c57352e040deafcda6dd368de907bfc7b8459d63ca5129107976077\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"87da5848ec31a5e86ac18b9f88a2d64a0a6bd83519c4740ace9ec74745bb8d87\"" Sep 3 23:29:10.407232 containerd[1905]: time="2025-09-03T23:29:10.406329266Z" level=info msg="StartContainer for \"87da5848ec31a5e86ac18b9f88a2d64a0a6bd83519c4740ace9ec74745bb8d87\"" Sep 3 23:29:10.407232 containerd[1905]: time="2025-09-03T23:29:10.406816051Z" level=info msg="connecting to shim 87da5848ec31a5e86ac18b9f88a2d64a0a6bd83519c4740ace9ec74745bb8d87" address="unix:///run/containerd/s/bed8b41ee383ff6247c6185a3bca9c78ec0dbfd9e226cab1df7f9df8c58150fa" protocol=ttrpc version=3 Sep 3 23:29:10.414091 systemd[1]: Started cri-containerd-c49fe90466159508c6916d2231b251acb3e4f91933b40edde94f1522939f7aeb.scope - libcontainer container c49fe90466159508c6916d2231b251acb3e4f91933b40edde94f1522939f7aeb. Sep 3 23:29:10.428020 systemd[1]: Started cri-containerd-87da5848ec31a5e86ac18b9f88a2d64a0a6bd83519c4740ace9ec74745bb8d87.scope - libcontainer container 87da5848ec31a5e86ac18b9f88a2d64a0a6bd83519c4740ace9ec74745bb8d87. Sep 3 23:29:10.454309 containerd[1905]: time="2025-09-03T23:29:10.454256476Z" level=info msg="StartContainer for \"c49fe90466159508c6916d2231b251acb3e4f91933b40edde94f1522939f7aeb\" returns successfully" Sep 3 23:29:10.459885 containerd[1905]: time="2025-09-03T23:29:10.459849254Z" level=info msg="StartContainer for \"87da5848ec31a5e86ac18b9f88a2d64a0a6bd83519c4740ace9ec74745bb8d87\" returns successfully" Sep 3 23:29:10.711880 kubelet[3386]: I0903 23:29:10.711837 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bbffz" podStartSLOduration=20.711825619 podStartE2EDuration="20.711825619s" podCreationTimestamp="2025-09-03 23:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:29:10.710710484 +0000 UTC m=+26.181636632" watchObservedRunningTime="2025-09-03 23:29:10.711825619 +0000 UTC m=+26.182751767" Sep 3 23:29:10.727230 kubelet[3386]: I0903 23:29:10.727058 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jqbv7" podStartSLOduration=20.727048001 podStartE2EDuration="20.727048001s" podCreationTimestamp="2025-09-03 23:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:29:10.725865937 +0000 UTC m=+26.196792077" watchObservedRunningTime="2025-09-03 23:29:10.727048001 +0000 UTC m=+26.197974141" Sep 3 23:29:11.266546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1877978960.mount: Deactivated successfully. 
Sep 3 23:30:51.383472 systemd[1]: Started sshd@7-10.200.20.24:22-10.200.16.10:49680.service - OpenSSH per-connection server daemon (10.200.16.10:49680). Sep 3 23:30:51.839518 sshd[4707]: Accepted publickey for core from 10.200.16.10 port 49680 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:30:51.840969 sshd-session[4707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:30:51.844742 systemd-logind[1864]: New session 10 of user core. Sep 3 23:30:51.857216 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 3 23:30:52.323441 sshd[4709]: Connection closed by 10.200.16.10 port 49680 Sep 3 23:30:52.322961 sshd-session[4707]: pam_unix(sshd:session): session closed for user core Sep 3 23:30:52.325403 systemd-logind[1864]: Session 10 logged out. Waiting for processes to exit. Sep 3 23:30:52.326003 systemd[1]: sshd@7-10.200.20.24:22-10.200.16.10:49680.service: Deactivated successfully. Sep 3 23:30:52.327223 systemd[1]: session-10.scope: Deactivated successfully. Sep 3 23:30:52.330120 systemd-logind[1864]: Removed session 10. Sep 3 23:30:57.412420 systemd[1]: Started sshd@8-10.200.20.24:22-10.200.16.10:49694.service - OpenSSH per-connection server daemon (10.200.16.10:49694). Sep 3 23:30:57.906293 sshd[4722]: Accepted publickey for core from 10.200.16.10 port 49694 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:30:57.907289 sshd-session[4722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:30:57.910686 systemd-logind[1864]: New session 11 of user core. Sep 3 23:30:57.915030 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 3 23:30:58.296050 sshd[4724]: Connection closed by 10.200.16.10 port 49694 Sep 3 23:30:58.296502 sshd-session[4722]: pam_unix(sshd:session): session closed for user core Sep 3 23:30:58.299107 systemd[1]: sshd@8-10.200.20.24:22-10.200.16.10:49694.service: Deactivated successfully. Sep 3 23:30:58.300761 systemd[1]: session-11.scope: Deactivated successfully. Sep 3 23:30:58.301502 systemd-logind[1864]: Session 11 logged out. Waiting for processes to exit. Sep 3 23:30:58.302658 systemd-logind[1864]: Removed session 11. Sep 3 23:31:00.763984 update_engine[1866]: I20250903 23:31:00.763933 1866 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 3 23:31:00.763984 update_engine[1866]: I20250903 23:31:00.763976 1866 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 3 23:31:00.764318 update_engine[1866]: I20250903 23:31:00.764124 1866 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 3 23:31:00.764558 update_engine[1866]: I20250903 23:31:00.764406 1866 omaha_request_params.cc:62] Current group set to beta Sep 3 23:31:00.764558 update_engine[1866]: I20250903 23:31:00.764492 1866 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 3 23:31:00.764558 update_engine[1866]: I20250903 23:31:00.764498 1866 update_attempter.cc:643] Scheduling an action processor start. 
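The SSH activity above follows a per-connection pattern: each accepted connection gets its own sshd@<n>-<local>:<port>-<peer>:<port>.service instance (typically the sshd@.service template behind a socket unit; that template detail is an inference, not something these lines state), and each PAM login for core gets a numbered session-N.scope that is deactivated when the connection closes. A throwaway parser for the unit-name convention seen in these entries, included purely as an illustration:

import re

# e.g. "sshd@7-10.200.20.24:22-10.200.16.10:49680.service"; the second address:port pair
# is the peer, matching "Accepted publickey ... from 10.200.16.10 port 49680" above.
UNIT_RE = re.compile(
    r"sshd@(?P<seq>\d+)-(?P<local>[\d.]+):(?P<lport>\d+)-(?P<peer>[\d.]+):(?P<pport>\d+)\.service"
)

def parse_sshd_unit(name):
    m = UNIT_RE.fullmatch(name)
    return m.groupdict() if m else None

print(parse_sshd_unit("sshd@7-10.200.20.24:22-10.200.16.10:49680.service"))
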
Sep 3 23:31:00.764558 update_engine[1866]: I20250903 23:31:00.764513 1866 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 3 23:31:00.764558 update_engine[1866]: I20250903 23:31:00.764533 1866 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 3 23:31:00.764651 update_engine[1866]: I20250903 23:31:00.764570 1866 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 3 23:31:00.764651 update_engine[1866]: I20250903 23:31:00.764575 1866 omaha_request_action.cc:272] Request: Sep 3 23:31:00.764651 update_engine[1866]: Sep 3 23:31:00.764651 update_engine[1866]: Sep 3 23:31:00.764651 update_engine[1866]: Sep 3 23:31:00.764651 update_engine[1866]: Sep 3 23:31:00.764651 update_engine[1866]: Sep 3 23:31:00.764651 update_engine[1866]: Sep 3 23:31:00.764651 update_engine[1866]: Sep 3 23:31:00.764651 update_engine[1866]: Sep 3 23:31:00.764651 update_engine[1866]: I20250903 23:31:00.764578 1866 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 3 23:31:00.765257 locksmithd[2017]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 3 23:31:00.765417 update_engine[1866]: I20250903 23:31:00.765372 1866 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 3 23:31:00.765649 update_engine[1866]: I20250903 23:31:00.765620 1866 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 3 23:31:00.884403 update_engine[1866]: E20250903 23:31:00.884366 1866 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 3 23:31:00.884477 update_engine[1866]: I20250903 23:31:00.884433 1866 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 3 23:31:03.390028 systemd[1]: Started sshd@9-10.200.20.24:22-10.200.16.10:44232.service - OpenSSH per-connection server daemon (10.200.16.10:44232). Sep 3 23:31:03.882175 sshd[4737]: Accepted publickey for core from 10.200.16.10 port 44232 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:31:03.883206 sshd-session[4737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:31:03.886661 systemd-logind[1864]: New session 12 of user core. Sep 3 23:31:03.894022 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 3 23:31:04.296159 sshd[4739]: Connection closed by 10.200.16.10 port 44232 Sep 3 23:31:04.296496 sshd-session[4737]: pam_unix(sshd:session): session closed for user core Sep 3 23:31:04.300228 systemd[1]: sshd@9-10.200.20.24:22-10.200.16.10:44232.service: Deactivated successfully. Sep 3 23:31:04.302176 systemd[1]: session-12.scope: Deactivated successfully. Sep 3 23:31:04.303029 systemd-logind[1864]: Session 12 logged out. Waiting for processes to exit. Sep 3 23:31:04.304531 systemd-logind[1864]: Removed session 12. Sep 3 23:31:09.391221 systemd[1]: Started sshd@10-10.200.20.24:22-10.200.16.10:44246.service - OpenSSH per-connection server daemon (10.200.16.10:44246). Sep 3 23:31:09.844336 sshd[4751]: Accepted publickey for core from 10.200.16.10 port 44246 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:31:09.845743 sshd-session[4751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:31:09.849752 systemd-logind[1864]: New session 13 of user core. Sep 3 23:31:09.859007 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 3 23:31:10.225000 sshd[4753]: Connection closed by 10.200.16.10 port 44246 Sep 3 23:31:10.224348 sshd-session[4751]: pam_unix(sshd:session): session closed for user core Sep 3 23:31:10.227465 systemd-logind[1864]: Session 13 logged out. Waiting for processes to exit. Sep 3 23:31:10.227880 systemd[1]: sshd@10-10.200.20.24:22-10.200.16.10:44246.service: Deactivated successfully. Sep 3 23:31:10.231024 systemd[1]: session-13.scope: Deactivated successfully. Sep 3 23:31:10.232396 systemd-logind[1864]: Removed session 13. Sep 3 23:31:10.311169 systemd[1]: Started sshd@11-10.200.20.24:22-10.200.16.10:46276.service - OpenSSH per-connection server daemon (10.200.16.10:46276). Sep 3 23:31:10.763517 update_engine[1866]: I20250903 23:31:10.763470 1866 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 3 23:31:10.763782 update_engine[1866]: I20250903 23:31:10.763648 1866 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 3 23:31:10.763856 update_engine[1866]: I20250903 23:31:10.763828 1866 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 3 23:31:10.806901 sshd[4765]: Accepted publickey for core from 10.200.16.10 port 46276 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:31:10.807938 sshd-session[4765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:31:10.811182 systemd-logind[1864]: New session 14 of user core. Sep 3 23:31:10.815111 update_engine[1866]: E20250903 23:31:10.815075 1866 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 3 23:31:10.815178 update_engine[1866]: I20250903 23:31:10.815126 1866 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 3 23:31:10.817022 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 3 23:31:11.245159 sshd[4767]: Connection closed by 10.200.16.10 port 46276 Sep 3 23:31:11.245641 sshd-session[4765]: pam_unix(sshd:session): session closed for user core Sep 3 23:31:11.248802 systemd[1]: sshd@11-10.200.20.24:22-10.200.16.10:46276.service: Deactivated successfully. Sep 3 23:31:11.251311 systemd[1]: session-14.scope: Deactivated successfully. Sep 3 23:31:11.252702 systemd-logind[1864]: Session 14 logged out. Waiting for processes to exit. Sep 3 23:31:11.254215 systemd-logind[1864]: Removed session 14. Sep 3 23:31:11.335204 systemd[1]: Started sshd@12-10.200.20.24:22-10.200.16.10:46290.service - OpenSSH per-connection server daemon (10.200.16.10:46290). Sep 3 23:31:11.827747 sshd[4777]: Accepted publickey for core from 10.200.16.10 port 46290 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:31:11.828749 sshd-session[4777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:31:11.832222 systemd-logind[1864]: New session 15 of user core. Sep 3 23:31:11.844025 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 3 23:31:12.219763 sshd[4779]: Connection closed by 10.200.16.10 port 46290 Sep 3 23:31:12.220439 sshd-session[4777]: pam_unix(sshd:session): session closed for user core Sep 3 23:31:12.223552 systemd-logind[1864]: Session 15 logged out. Waiting for processes to exit. Sep 3 23:31:12.223692 systemd[1]: sshd@12-10.200.20.24:22-10.200.16.10:46290.service: Deactivated successfully. Sep 3 23:31:12.224883 systemd[1]: session-15.scope: Deactivated successfully. Sep 3 23:31:12.226306 systemd-logind[1864]: Removed session 15. 
Sep 3 23:31:17.300735 systemd[1]: Started sshd@13-10.200.20.24:22-10.200.16.10:46302.service - OpenSSH per-connection server daemon (10.200.16.10:46302). Sep 3 23:31:17.763298 sshd[4790]: Accepted publickey for core from 10.200.16.10 port 46302 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:31:17.764306 sshd-session[4790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:31:17.767651 systemd-logind[1864]: New session 16 of user core. Sep 3 23:31:17.776177 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 3 23:31:18.128779 sshd[4792]: Connection closed by 10.200.16.10 port 46302 Sep 3 23:31:18.128176 sshd-session[4790]: pam_unix(sshd:session): session closed for user core Sep 3 23:31:18.130401 systemd-logind[1864]: Session 16 logged out. Waiting for processes to exit. Sep 3 23:31:18.131635 systemd[1]: sshd@13-10.200.20.24:22-10.200.16.10:46302.service: Deactivated successfully. Sep 3 23:31:18.133138 systemd[1]: session-16.scope: Deactivated successfully. Sep 3 23:31:18.135247 systemd-logind[1864]: Removed session 16. Sep 3 23:31:18.232865 systemd[1]: Started sshd@14-10.200.20.24:22-10.200.16.10:46318.service - OpenSSH per-connection server daemon (10.200.16.10:46318). Sep 3 23:31:18.685657 sshd[4803]: Accepted publickey for core from 10.200.16.10 port 46318 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:31:18.686609 sshd-session[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:31:18.690037 systemd-logind[1864]: New session 17 of user core. Sep 3 23:31:18.695018 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 3 23:31:19.110333 sshd[4805]: Connection closed by 10.200.16.10 port 46318 Sep 3 23:31:19.110845 sshd-session[4803]: pam_unix(sshd:session): session closed for user core Sep 3 23:31:19.113575 systemd[1]: sshd@14-10.200.20.24:22-10.200.16.10:46318.service: Deactivated successfully. Sep 3 23:31:19.114782 systemd[1]: session-17.scope: Deactivated successfully. Sep 3 23:31:19.115865 systemd-logind[1864]: Session 17 logged out. Waiting for processes to exit. Sep 3 23:31:19.116950 systemd-logind[1864]: Removed session 17. Sep 3 23:31:19.197218 systemd[1]: Started sshd@15-10.200.20.24:22-10.200.16.10:46324.service - OpenSSH per-connection server daemon (10.200.16.10:46324). Sep 3 23:31:19.654787 sshd[4815]: Accepted publickey for core from 10.200.16.10 port 46324 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:31:19.655818 sshd-session[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:31:19.659308 systemd-logind[1864]: New session 18 of user core. Sep 3 23:31:19.666020 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 3 23:31:20.457229 sshd[4817]: Connection closed by 10.200.16.10 port 46324 Sep 3 23:31:20.457717 sshd-session[4815]: pam_unix(sshd:session): session closed for user core Sep 3 23:31:20.460750 systemd[1]: sshd@15-10.200.20.24:22-10.200.16.10:46324.service: Deactivated successfully. Sep 3 23:31:20.463159 systemd[1]: session-18.scope: Deactivated successfully. Sep 3 23:31:20.464540 systemd-logind[1864]: Session 18 logged out. Waiting for processes to exit. Sep 3 23:31:20.465819 systemd-logind[1864]: Removed session 18. Sep 3 23:31:20.548977 systemd[1]: Started sshd@16-10.200.20.24:22-10.200.16.10:56302.service - OpenSSH per-connection server daemon (10.200.16.10:56302). 
Sep 3 23:31:20.763145 update_engine[1866]: I20250903 23:31:20.763015 1866 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 3 23:31:20.763367 update_engine[1866]: I20250903 23:31:20.763233 1866 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 3 23:31:20.763458 update_engine[1866]: I20250903 23:31:20.763430 1866 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 3 23:31:20.841740 update_engine[1866]: E20250903 23:31:20.841696 1866 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 3 23:31:20.841827 update_engine[1866]: I20250903 23:31:20.841758 1866 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 3 23:31:21.042711 sshd[4835]: Accepted publickey for core from 10.200.16.10 port 56302 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:31:21.043749 sshd-session[4835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:31:21.047058 systemd-logind[1864]: New session 19 of user core. Sep 3 23:31:21.054221 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 3 23:31:21.510439 sshd[4837]: Connection closed by 10.200.16.10 port 56302 Sep 3 23:31:21.510725 sshd-session[4835]: pam_unix(sshd:session): session closed for user core Sep 3 23:31:21.514182 systemd[1]: sshd@16-10.200.20.24:22-10.200.16.10:56302.service: Deactivated successfully. Sep 3 23:31:21.515470 systemd[1]: session-19.scope: Deactivated successfully. Sep 3 23:31:21.516086 systemd-logind[1864]: Session 19 logged out. Waiting for processes to exit. Sep 3 23:31:21.519455 systemd-logind[1864]: Removed session 19. Sep 3 23:31:21.597147 systemd[1]: Started sshd@17-10.200.20.24:22-10.200.16.10:56306.service - OpenSSH per-connection server daemon (10.200.16.10:56306). Sep 3 23:31:22.084194 sshd[4848]: Accepted publickey for core from 10.200.16.10 port 56306 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:31:22.085272 sshd-session[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:31:22.088711 systemd-logind[1864]: New session 20 of user core. Sep 3 23:31:22.100013 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 3 23:31:22.472072 sshd[4850]: Connection closed by 10.200.16.10 port 56306 Sep 3 23:31:22.472533 sshd-session[4848]: pam_unix(sshd:session): session closed for user core Sep 3 23:31:22.475200 systemd[1]: sshd@17-10.200.20.24:22-10.200.16.10:56306.service: Deactivated successfully. Sep 3 23:31:22.476447 systemd[1]: session-20.scope: Deactivated successfully. Sep 3 23:31:22.477842 systemd-logind[1864]: Session 20 logged out. Waiting for processes to exit. Sep 3 23:31:22.479116 systemd-logind[1864]: Removed session 20. Sep 3 23:31:27.554462 systemd[1]: Started sshd@18-10.200.20.24:22-10.200.16.10:56314.service - OpenSSH per-connection server daemon (10.200.16.10:56314). Sep 3 23:31:28.010885 sshd[4864]: Accepted publickey for core from 10.200.16.10 port 56314 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:31:28.011888 sshd-session[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:31:28.015228 systemd-logind[1864]: New session 21 of user core. Sep 3 23:31:28.024192 systemd[1]: Started session-21.scope - Session 21 of User core. 
Sep 3 23:31:28.387478 sshd[4866]: Connection closed by 10.200.16.10 port 56314 Sep 3 23:31:28.387351 sshd-session[4864]: pam_unix(sshd:session): session closed for user core Sep 3 23:31:28.389762 systemd-logind[1864]: Session 21 logged out. Waiting for processes to exit. Sep 3 23:31:28.389877 systemd[1]: sshd@18-10.200.20.24:22-10.200.16.10:56314.service: Deactivated successfully. Sep 3 23:31:28.391283 systemd[1]: session-21.scope: Deactivated successfully. Sep 3 23:31:28.393559 systemd-logind[1864]: Removed session 21. Sep 3 23:31:30.764919 update_engine[1866]: I20250903 23:31:30.764855 1866 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 3 23:31:30.765258 update_engine[1866]: I20250903 23:31:30.765089 1866 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 3 23:31:30.765301 update_engine[1866]: I20250903 23:31:30.765277 1866 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 3 23:31:30.770448 update_engine[1866]: E20250903 23:31:30.770418 1866 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 3 23:31:30.770518 update_engine[1866]: I20250903 23:31:30.770460 1866 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 3 23:31:30.770518 update_engine[1866]: I20250903 23:31:30.770466 1866 omaha_request_action.cc:617] Omaha request response: Sep 3 23:31:30.770565 update_engine[1866]: E20250903 23:31:30.770548 1866 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 3 23:31:30.770581 update_engine[1866]: I20250903 23:31:30.770565 1866 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 3 23:31:30.770581 update_engine[1866]: I20250903 23:31:30.770568 1866 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 3 23:31:30.770581 update_engine[1866]: I20250903 23:31:30.770572 1866 update_attempter.cc:306] Processing Done. Sep 3 23:31:30.770624 update_engine[1866]: E20250903 23:31:30.770583 1866 update_attempter.cc:619] Update failed. Sep 3 23:31:30.770624 update_engine[1866]: I20250903 23:31:30.770587 1866 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 3 23:31:30.770624 update_engine[1866]: I20250903 23:31:30.770591 1866 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 3 23:31:30.770624 update_engine[1866]: I20250903 23:31:30.770593 1866 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Sep 3 23:31:30.770905 update_engine[1866]: I20250903 23:31:30.770792 1866 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 3 23:31:30.770905 update_engine[1866]: I20250903 23:31:30.770823 1866 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 3 23:31:30.770905 update_engine[1866]: I20250903 23:31:30.770828 1866 omaha_request_action.cc:272] Request: Sep 3 23:31:30.770905 update_engine[1866]: Sep 3 23:31:30.770905 update_engine[1866]: Sep 3 23:31:30.770905 update_engine[1866]: Sep 3 23:31:30.770905 update_engine[1866]: Sep 3 23:31:30.770905 update_engine[1866]: Sep 3 23:31:30.770905 update_engine[1866]: Sep 3 23:31:30.770905 update_engine[1866]: I20250903 23:31:30.770831 1866 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 3 23:31:30.771071 locksmithd[2017]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 3 23:31:30.771245 update_engine[1866]: I20250903 23:31:30.770951 1866 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 3 23:31:30.771245 update_engine[1866]: I20250903 23:31:30.771080 1866 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 3 23:31:30.775022 update_engine[1866]: E20250903 23:31:30.774997 1866 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 3 23:31:30.775070 update_engine[1866]: I20250903 23:31:30.775029 1866 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 3 23:31:30.775070 update_engine[1866]: I20250903 23:31:30.775034 1866 omaha_request_action.cc:617] Omaha request response: Sep 3 23:31:30.775070 update_engine[1866]: I20250903 23:31:30.775039 1866 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 3 23:31:30.775070 update_engine[1866]: I20250903 23:31:30.775043 1866 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 3 23:31:30.775070 update_engine[1866]: I20250903 23:31:30.775045 1866 update_attempter.cc:306] Processing Done. Sep 3 23:31:30.775070 update_engine[1866]: I20250903 23:31:30.775048 1866 update_attempter.cc:310] Error event sent. Sep 3 23:31:30.775070 update_engine[1866]: I20250903 23:31:30.775054 1866 update_check_scheduler.cc:74] Next update check in 47m15s Sep 3 23:31:30.775291 locksmithd[2017]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 3 23:31:33.477114 systemd[1]: Started sshd@19-10.200.20.24:22-10.200.16.10:50196.service - OpenSSH per-connection server daemon (10.200.16.10:50196). Sep 3 23:31:33.967542 sshd[4877]: Accepted publickey for core from 10.200.16.10 port 50196 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:31:33.968612 sshd-session[4877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:31:33.971938 systemd-logind[1864]: New session 22 of user core. Sep 3 23:31:33.974013 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 3 23:31:34.355960 sshd[4879]: Connection closed by 10.200.16.10 port 50196 Sep 3 23:31:34.355758 sshd-session[4877]: pam_unix(sshd:session): session closed for user core Sep 3 23:31:34.359146 systemd-logind[1864]: Session 22 logged out. Waiting for processes to exit. Sep 3 23:31:34.359626 systemd[1]: sshd@19-10.200.20.24:22-10.200.16.10:50196.service: Deactivated successfully. 
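The update_engine entries interleaved above record one complete failed Omaha check: the request is posted to the literal host "disabled" (which usually indicates the update server was deliberately switched off in the image's update configuration; that is an inference, not something the log states), DNS resolution fails, libcurl retries roughly every ten seconds, and after three retries the attempt is recorded as kActionCodeOmahaErrorInHTTPResponse, an error event is sent, and the next check is scheduled in 47m15s, with locksmithd mirroring the UPDATE_STATUS_* transitions. A throwaway helper that condenses such a burst of journal lines, assuming they are available as plain text (illustrative only):

import re

RETRY_RE = re.compile(r"No HTTP response, retry (\d+)")
NEXT_RE  = re.compile(r"Next update check in (\S+)")
STATE_RE = re.compile(r'CurrentOperation="([A-Z_]+)"')

def summarize(lines):
    """Condense an update_engine/locksmithd burst like the one above."""
    retries, next_check, states = 0, None, []
    for line in lines:
        if (m := RETRY_RE.search(line)):
            retries = max(retries, int(m.group(1)))
        if (m := NEXT_RE.search(line)):
            next_check = m.group(1)
        if (m := STATE_RE.search(line)) and (not states or states[-1] != m.group(1)):
            states.append(m.group(1))
    return {"retries": retries, "next_check": next_check, "states": states}

# Fed the entries above, this would report 3 retries, next_check "47m15s", and the
# locksmithd progression UPDATE_STATUS_CHECKING_FOR_UPDATE ->
# UPDATE_STATUS_REPORTING_ERROR_EVENT -> UPDATE_STATUS_IDLE.
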
Sep 3 23:31:34.362100 systemd[1]: session-22.scope: Deactivated successfully. Sep 3 23:31:34.363820 systemd-logind[1864]: Removed session 22. Sep 3 23:31:39.440568 systemd[1]: Started sshd@20-10.200.20.24:22-10.200.16.10:50210.service - OpenSSH per-connection server daemon (10.200.16.10:50210). Sep 3 23:31:39.900742 sshd[4890]: Accepted publickey for core from 10.200.16.10 port 50210 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:31:39.901687 sshd-session[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:31:39.905403 systemd-logind[1864]: New session 23 of user core. Sep 3 23:31:39.913040 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 3 23:31:40.280049 sshd[4892]: Connection closed by 10.200.16.10 port 50210 Sep 3 23:31:40.280551 sshd-session[4890]: pam_unix(sshd:session): session closed for user core Sep 3 23:31:40.283599 systemd-logind[1864]: Session 23 logged out. Waiting for processes to exit. Sep 3 23:31:40.283826 systemd[1]: sshd@20-10.200.20.24:22-10.200.16.10:50210.service: Deactivated successfully. Sep 3 23:31:40.285112 systemd[1]: session-23.scope: Deactivated successfully. Sep 3 23:31:40.289295 systemd-logind[1864]: Removed session 23. Sep 3 23:31:40.370748 systemd[1]: Started sshd@21-10.200.20.24:22-10.200.16.10:57472.service - OpenSSH per-connection server daemon (10.200.16.10:57472). Sep 3 23:31:40.878550 sshd[4903]: Accepted publickey for core from 10.200.16.10 port 57472 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:31:40.879481 sshd-session[4903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:31:40.883203 systemd-logind[1864]: New session 24 of user core. Sep 3 23:31:40.886007 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 3 23:31:42.433177 containerd[1905]: time="2025-09-03T23:31:42.433132712Z" level=info msg="StopContainer for \"048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d\" with timeout 30 (s)" Sep 3 23:31:42.434690 containerd[1905]: time="2025-09-03T23:31:42.434633316Z" level=info msg="Stop container \"048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d\" with signal terminated" Sep 3 23:31:42.446242 containerd[1905]: time="2025-09-03T23:31:42.446209271Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 3 23:31:42.447207 systemd[1]: cri-containerd-048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d.scope: Deactivated successfully. 
Sep 3 23:31:42.449212 containerd[1905]: time="2025-09-03T23:31:42.449184007Z" level=info msg="received exit event container_id:\"048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d\" id:\"048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d\" pid:3788 exited_at:{seconds:1756942302 nanos:448715998}" Sep 3 23:31:42.449455 containerd[1905]: time="2025-09-03T23:31:42.449431959Z" level=info msg="TaskExit event in podsandbox handler container_id:\"048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d\" id:\"048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d\" pid:3788 exited_at:{seconds:1756942302 nanos:448715998}" Sep 3 23:31:42.454892 containerd[1905]: time="2025-09-03T23:31:42.454861468Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215\" id:\"027776942e6cbc3becfe382150413b9f16082c26571a2d372e19610627f6772b\" pid:4923 exited_at:{seconds:1756942302 nanos:454032720}" Sep 3 23:31:42.456880 containerd[1905]: time="2025-09-03T23:31:42.456761727Z" level=info msg="StopContainer for \"e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215\" with timeout 2 (s)" Sep 3 23:31:42.457061 containerd[1905]: time="2025-09-03T23:31:42.457043840Z" level=info msg="Stop container \"e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215\" with signal terminated" Sep 3 23:31:42.464008 systemd-networkd[1700]: lxc_health: Link DOWN Sep 3 23:31:42.464287 systemd-networkd[1700]: lxc_health: Lost carrier Sep 3 23:31:42.474755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d-rootfs.mount: Deactivated successfully. Sep 3 23:31:42.476940 systemd[1]: cri-containerd-e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215.scope: Deactivated successfully. Sep 3 23:31:42.477027 containerd[1905]: time="2025-09-03T23:31:42.476900548Z" level=info msg="received exit event container_id:\"e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215\" id:\"e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215\" pid:4017 exited_at:{seconds:1756942302 nanos:476714389}" Sep 3 23:31:42.477114 containerd[1905]: time="2025-09-03T23:31:42.476987695Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215\" id:\"e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215\" pid:4017 exited_at:{seconds:1756942302 nanos:476714389}" Sep 3 23:31:42.477528 systemd[1]: cri-containerd-e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215.scope: Consumed 4.200s CPU time, 125.8M memory peak, 128K read from disk, 12.9M written to disk. Sep 3 23:31:42.493428 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215-rootfs.mount: Deactivated successfully. 
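The exited_at fields in the exit and TaskExit events above are plain epoch seconds plus nanoseconds; converting one confirms it lines up with the surrounding journal timestamps (illustrative only):

from datetime import datetime, timezone

def exited_at(seconds, nanos):
    # containerd reports exit times as epoch seconds + nanoseconds.
    return datetime.fromtimestamp(seconds, tz=timezone.utc).replace(microsecond=nanos // 1000)

print(exited_at(1756942302, 448715998).isoformat())
# 2025-09-03T23:31:42.448715+00:00 -- consistent with the 23:31:42.44... entries above
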
Sep 3 23:31:42.543163 containerd[1905]: time="2025-09-03T23:31:42.543052859Z" level=info msg="StopContainer for \"e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215\" returns successfully" Sep 3 23:31:42.543699 containerd[1905]: time="2025-09-03T23:31:42.543672648Z" level=info msg="StopPodSandbox for \"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\"" Sep 3 23:31:42.543748 containerd[1905]: time="2025-09-03T23:31:42.543721898Z" level=info msg="Container to stop \"70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 3 23:31:42.543748 containerd[1905]: time="2025-09-03T23:31:42.543730362Z" level=info msg="Container to stop \"ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 3 23:31:42.543748 containerd[1905]: time="2025-09-03T23:31:42.543735875Z" level=info msg="Container to stop \"c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 3 23:31:42.543748 containerd[1905]: time="2025-09-03T23:31:42.543741499Z" level=info msg="Container to stop \"1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 3 23:31:42.543827 containerd[1905]: time="2025-09-03T23:31:42.543750499Z" level=info msg="Container to stop \"e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 3 23:31:42.547118 containerd[1905]: time="2025-09-03T23:31:42.547050214Z" level=info msg="StopContainer for \"048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d\" returns successfully" Sep 3 23:31:42.547343 containerd[1905]: time="2025-09-03T23:31:42.547323887Z" level=info msg="StopPodSandbox for \"ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243\"" Sep 3 23:31:42.547465 containerd[1905]: time="2025-09-03T23:31:42.547452228Z" level=info msg="Container to stop \"048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 3 23:31:42.549942 systemd[1]: cri-containerd-75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974.scope: Deactivated successfully. Sep 3 23:31:42.551945 containerd[1905]: time="2025-09-03T23:31:42.551886190Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\" id:\"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\" pid:3570 exit_status:137 exited_at:{seconds:1756942302 nanos:551618485}" Sep 3 23:31:42.556011 systemd[1]: cri-containerd-ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243.scope: Deactivated successfully. Sep 3 23:31:42.579581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974-rootfs.mount: Deactivated successfully. Sep 3 23:31:42.590168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243-rootfs.mount: Deactivated successfully. 
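The sandbox tasks above report exit_status:137, the conventional 128-plus-signal encoding for a process terminated by SIGKILL (signal 9). A one-line check:

import signal

status = 137                                  # exit_status reported for the sandbox tasks above
if status > 128:
    print(signal.Signals(status - 128).name)  # SIGKILL
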
Sep 3 23:31:42.597148 containerd[1905]: time="2025-09-03T23:31:42.596941767Z" level=info msg="shim disconnected" id=75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974 namespace=k8s.io Sep 3 23:31:42.597148 containerd[1905]: time="2025-09-03T23:31:42.596970928Z" level=warning msg="cleaning up after shim disconnected" id=75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974 namespace=k8s.io Sep 3 23:31:42.597148 containerd[1905]: time="2025-09-03T23:31:42.596994337Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 3 23:31:42.604229 containerd[1905]: time="2025-09-03T23:31:42.604061439Z" level=info msg="shim disconnected" id=ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243 namespace=k8s.io Sep 3 23:31:42.604229 containerd[1905]: time="2025-09-03T23:31:42.604085944Z" level=warning msg="cleaning up after shim disconnected" id=ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243 namespace=k8s.io Sep 3 23:31:42.604229 containerd[1905]: time="2025-09-03T23:31:42.604104864Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 3 23:31:42.608175 containerd[1905]: time="2025-09-03T23:31:42.606891993Z" level=info msg="received exit event sandbox_id:\"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\" exit_status:137 exited_at:{seconds:1756942302 nanos:551618485}" Sep 3 23:31:42.608177 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974-shm.mount: Deactivated successfully. Sep 3 23:31:42.608736 containerd[1905]: time="2025-09-03T23:31:42.606898962Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243\" id:\"ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243\" pid:3494 exit_status:137 exited_at:{seconds:1756942302 nanos:565147276}" Sep 3 23:31:42.608800 containerd[1905]: time="2025-09-03T23:31:42.608755650Z" level=info msg="received exit event sandbox_id:\"ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243\" exit_status:137 exited_at:{seconds:1756942302 nanos:565147276}" Sep 3 23:31:42.610078 containerd[1905]: time="2025-09-03T23:31:42.610016062Z" level=info msg="TearDown network for sandbox \"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\" successfully" Sep 3 23:31:42.610391 containerd[1905]: time="2025-09-03T23:31:42.610331729Z" level=info msg="StopPodSandbox for \"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\" returns successfully" Sep 3 23:31:42.611097 containerd[1905]: time="2025-09-03T23:31:42.610977464Z" level=info msg="TearDown network for sandbox \"ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243\" successfully" Sep 3 23:31:42.611097 containerd[1905]: time="2025-09-03T23:31:42.611001160Z" level=info msg="StopPodSandbox for \"ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243\" returns successfully" Sep 3 23:31:42.728775 kubelet[3386]: I0903 23:31:42.728384 3386 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/89cc364d-d6db-40fb-8f31-457c13967201-hubble-tls\") pod \"89cc364d-d6db-40fb-8f31-457c13967201\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " Sep 3 23:31:42.728775 kubelet[3386]: I0903 23:31:42.728430 3386 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-xtables-lock\") pod \"89cc364d-d6db-40fb-8f31-457c13967201\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " Sep 3 23:31:42.728775 kubelet[3386]: I0903 23:31:42.728444 3386 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89m87\" (UniqueName: \"kubernetes.io/projected/89cc364d-d6db-40fb-8f31-457c13967201-kube-api-access-89m87\") pod \"89cc364d-d6db-40fb-8f31-457c13967201\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " Sep 3 23:31:42.728775 kubelet[3386]: I0903 23:31:42.728457 3386 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-host-proc-sys-kernel\") pod \"89cc364d-d6db-40fb-8f31-457c13967201\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " Sep 3 23:31:42.728775 kubelet[3386]: I0903 23:31:42.728471 3386 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0756153-c1ae-4b65-9224-2175f0918895-cilium-config-path\") pod \"f0756153-c1ae-4b65-9224-2175f0918895\" (UID: \"f0756153-c1ae-4b65-9224-2175f0918895\") " Sep 3 23:31:42.728775 kubelet[3386]: I0903 23:31:42.728481 3386 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-cilium-run\") pod \"89cc364d-d6db-40fb-8f31-457c13967201\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " Sep 3 23:31:42.729230 kubelet[3386]: I0903 23:31:42.728491 3386 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-hostproc\") pod \"89cc364d-d6db-40fb-8f31-457c13967201\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " Sep 3 23:31:42.729230 kubelet[3386]: I0903 23:31:42.728504 3386 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-host-proc-sys-net\") pod \"89cc364d-d6db-40fb-8f31-457c13967201\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " Sep 3 23:31:42.729230 kubelet[3386]: I0903 23:31:42.728515 3386 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/89cc364d-d6db-40fb-8f31-457c13967201-cilium-config-path\") pod \"89cc364d-d6db-40fb-8f31-457c13967201\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " Sep 3 23:31:42.729230 kubelet[3386]: I0903 23:31:42.728524 3386 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-cni-path\") pod \"89cc364d-d6db-40fb-8f31-457c13967201\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " Sep 3 23:31:42.729230 kubelet[3386]: I0903 23:31:42.728538 3386 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/89cc364d-d6db-40fb-8f31-457c13967201-clustermesh-secrets\") pod \"89cc364d-d6db-40fb-8f31-457c13967201\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " Sep 3 23:31:42.729230 kubelet[3386]: I0903 23:31:42.728550 3386 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-bpf-maps\") pod \"89cc364d-d6db-40fb-8f31-457c13967201\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " Sep 3 23:31:42.729323 kubelet[3386]: I0903 23:31:42.728561 3386 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-etc-cni-netd\") pod \"89cc364d-d6db-40fb-8f31-457c13967201\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " Sep 3 23:31:42.729323 kubelet[3386]: I0903 23:31:42.728575 3386 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-cilium-cgroup\") pod \"89cc364d-d6db-40fb-8f31-457c13967201\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " Sep 3 23:31:42.729323 kubelet[3386]: I0903 23:31:42.728586 3386 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-lib-modules\") pod \"89cc364d-d6db-40fb-8f31-457c13967201\" (UID: \"89cc364d-d6db-40fb-8f31-457c13967201\") " Sep 3 23:31:42.729323 kubelet[3386]: I0903 23:31:42.728596 3386 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8qpz\" (UniqueName: \"kubernetes.io/projected/f0756153-c1ae-4b65-9224-2175f0918895-kube-api-access-l8qpz\") pod \"f0756153-c1ae-4b65-9224-2175f0918895\" (UID: \"f0756153-c1ae-4b65-9224-2175f0918895\") " Sep 3 23:31:42.729427 kubelet[3386]: I0903 23:31:42.729391 3386 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "89cc364d-d6db-40fb-8f31-457c13967201" (UID: "89cc364d-d6db-40fb-8f31-457c13967201"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 3 23:31:42.730469 kubelet[3386]: I0903 23:31:42.730427 3386 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "89cc364d-d6db-40fb-8f31-457c13967201" (UID: "89cc364d-d6db-40fb-8f31-457c13967201"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 3 23:31:42.731923 kubelet[3386]: I0903 23:31:42.731723 3386 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-cni-path" (OuterVolumeSpecName: "cni-path") pod "89cc364d-d6db-40fb-8f31-457c13967201" (UID: "89cc364d-d6db-40fb-8f31-457c13967201"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 3 23:31:42.731923 kubelet[3386]: I0903 23:31:42.731748 3386 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "89cc364d-d6db-40fb-8f31-457c13967201" (UID: "89cc364d-d6db-40fb-8f31-457c13967201"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 3 23:31:42.731923 kubelet[3386]: I0903 23:31:42.731757 3386 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-hostproc" (OuterVolumeSpecName: "hostproc") pod "89cc364d-d6db-40fb-8f31-457c13967201" (UID: "89cc364d-d6db-40fb-8f31-457c13967201"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 3 23:31:42.731923 kubelet[3386]: I0903 23:31:42.731765 3386 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "89cc364d-d6db-40fb-8f31-457c13967201" (UID: "89cc364d-d6db-40fb-8f31-457c13967201"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 3 23:31:42.733368 kubelet[3386]: I0903 23:31:42.733346 3386 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89cc364d-d6db-40fb-8f31-457c13967201-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "89cc364d-d6db-40fb-8f31-457c13967201" (UID: "89cc364d-d6db-40fb-8f31-457c13967201"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 3 23:31:42.733458 kubelet[3386]: I0903 23:31:42.733439 3386 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "89cc364d-d6db-40fb-8f31-457c13967201" (UID: "89cc364d-d6db-40fb-8f31-457c13967201"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 3 23:31:42.733515 kubelet[3386]: I0903 23:31:42.733400 3386 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "89cc364d-d6db-40fb-8f31-457c13967201" (UID: "89cc364d-d6db-40fb-8f31-457c13967201"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 3 23:31:42.733557 kubelet[3386]: I0903 23:31:42.733421 3386 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "89cc364d-d6db-40fb-8f31-457c13967201" (UID: "89cc364d-d6db-40fb-8f31-457c13967201"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 3 23:31:42.733590 kubelet[3386]: I0903 23:31:42.733428 3386 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "89cc364d-d6db-40fb-8f31-457c13967201" (UID: "89cc364d-d6db-40fb-8f31-457c13967201"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 3 23:31:42.733701 kubelet[3386]: I0903 23:31:42.733684 3386 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89cc364d-d6db-40fb-8f31-457c13967201-kube-api-access-89m87" (OuterVolumeSpecName: "kube-api-access-89m87") pod "89cc364d-d6db-40fb-8f31-457c13967201" (UID: "89cc364d-d6db-40fb-8f31-457c13967201"). InnerVolumeSpecName "kube-api-access-89m87". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 3 23:31:42.733836 kubelet[3386]: I0903 23:31:42.733823 3386 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89cc364d-d6db-40fb-8f31-457c13967201-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "89cc364d-d6db-40fb-8f31-457c13967201" (UID: "89cc364d-d6db-40fb-8f31-457c13967201"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 3 23:31:42.733905 kubelet[3386]: I0903 23:31:42.733881 3386 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0756153-c1ae-4b65-9224-2175f0918895-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f0756153-c1ae-4b65-9224-2175f0918895" (UID: "f0756153-c1ae-4b65-9224-2175f0918895"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 3 23:31:42.734728 kubelet[3386]: I0903 23:31:42.734687 3386 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89cc364d-d6db-40fb-8f31-457c13967201-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "89cc364d-d6db-40fb-8f31-457c13967201" (UID: "89cc364d-d6db-40fb-8f31-457c13967201"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 3 23:31:42.734905 kubelet[3386]: I0903 23:31:42.734875 3386 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0756153-c1ae-4b65-9224-2175f0918895-kube-api-access-l8qpz" (OuterVolumeSpecName: "kube-api-access-l8qpz") pod "f0756153-c1ae-4b65-9224-2175f0918895" (UID: "f0756153-c1ae-4b65-9224-2175f0918895"). InnerVolumeSpecName "kube-api-access-l8qpz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 3 23:31:42.829405 kubelet[3386]: I0903 23:31:42.829273 3386 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-cilium-run\") on node \"ci-4372.1.0-n-e4e1aff60f\" DevicePath \"\"" Sep 3 23:31:42.829405 kubelet[3386]: I0903 23:31:42.829300 3386 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-hostproc\") on node \"ci-4372.1.0-n-e4e1aff60f\" DevicePath \"\"" Sep 3 23:31:42.829405 kubelet[3386]: I0903 23:31:42.829307 3386 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-host-proc-sys-net\") on node \"ci-4372.1.0-n-e4e1aff60f\" DevicePath \"\"" Sep 3 23:31:42.829405 kubelet[3386]: I0903 23:31:42.829314 3386 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/89cc364d-d6db-40fb-8f31-457c13967201-cilium-config-path\") on node \"ci-4372.1.0-n-e4e1aff60f\" DevicePath \"\"" Sep 3 23:31:42.829405 kubelet[3386]: I0903 23:31:42.829321 3386 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-cni-path\") on node \"ci-4372.1.0-n-e4e1aff60f\" DevicePath \"\"" Sep 3 23:31:42.829405 kubelet[3386]: I0903 23:31:42.829327 3386 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0756153-c1ae-4b65-9224-2175f0918895-cilium-config-path\") on node \"ci-4372.1.0-n-e4e1aff60f\" DevicePath \"\"" Sep 3 23:31:42.829405 kubelet[3386]: I0903 23:31:42.829333 3386 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/89cc364d-d6db-40fb-8f31-457c13967201-clustermesh-secrets\") on node \"ci-4372.1.0-n-e4e1aff60f\" DevicePath \"\"" Sep 3 23:31:42.829405 kubelet[3386]: I0903 23:31:42.829338 3386 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-bpf-maps\") on node \"ci-4372.1.0-n-e4e1aff60f\" DevicePath \"\"" Sep 3 23:31:42.829608 kubelet[3386]: I0903 23:31:42.829344 3386 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-etc-cni-netd\") on node \"ci-4372.1.0-n-e4e1aff60f\" DevicePath \"\"" Sep 3 23:31:42.829608 kubelet[3386]: I0903 23:31:42.829349 3386 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-cilium-cgroup\") on node \"ci-4372.1.0-n-e4e1aff60f\" DevicePath \"\"" Sep 3 23:31:42.829608 kubelet[3386]: I0903 23:31:42.829355 3386 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-lib-modules\") on node \"ci-4372.1.0-n-e4e1aff60f\" DevicePath \"\"" Sep 3 23:31:42.829608 kubelet[3386]: I0903 23:31:42.829362 3386 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l8qpz\" (UniqueName: \"kubernetes.io/projected/f0756153-c1ae-4b65-9224-2175f0918895-kube-api-access-l8qpz\") on node \"ci-4372.1.0-n-e4e1aff60f\" DevicePath \"\"" Sep 3 23:31:42.829608 kubelet[3386]: I0903 23:31:42.829368 3386 reconciler_common.go:299] 
"Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/89cc364d-d6db-40fb-8f31-457c13967201-hubble-tls\") on node \"ci-4372.1.0-n-e4e1aff60f\" DevicePath \"\"" Sep 3 23:31:42.829608 kubelet[3386]: I0903 23:31:42.829373 3386 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-xtables-lock\") on node \"ci-4372.1.0-n-e4e1aff60f\" DevicePath \"\"" Sep 3 23:31:42.829608 kubelet[3386]: I0903 23:31:42.829378 3386 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-89m87\" (UniqueName: \"kubernetes.io/projected/89cc364d-d6db-40fb-8f31-457c13967201-kube-api-access-89m87\") on node \"ci-4372.1.0-n-e4e1aff60f\" DevicePath \"\"" Sep 3 23:31:42.829608 kubelet[3386]: I0903 23:31:42.829386 3386 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/89cc364d-d6db-40fb-8f31-457c13967201-host-proc-sys-kernel\") on node \"ci-4372.1.0-n-e4e1aff60f\" DevicePath \"\"" Sep 3 23:31:42.944976 kubelet[3386]: I0903 23:31:42.944939 3386 scope.go:117] "RemoveContainer" containerID="e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215" Sep 3 23:31:42.953919 containerd[1905]: time="2025-09-03T23:31:42.953120287Z" level=info msg="RemoveContainer for \"e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215\"" Sep 3 23:31:42.953212 systemd[1]: Removed slice kubepods-burstable-pod89cc364d_d6db_40fb_8f31_457c13967201.slice - libcontainer container kubepods-burstable-pod89cc364d_d6db_40fb_8f31_457c13967201.slice. Sep 3 23:31:42.953295 systemd[1]: kubepods-burstable-pod89cc364d_d6db_40fb_8f31_457c13967201.slice: Consumed 4.255s CPU time, 126.2M memory peak, 128K read from disk, 12.9M written to disk. Sep 3 23:31:42.954954 systemd[1]: Removed slice kubepods-besteffort-podf0756153_c1ae_4b65_9224_2175f0918895.slice - libcontainer container kubepods-besteffort-podf0756153_c1ae_4b65_9224_2175f0918895.slice. 
Sep 3 23:31:42.967550 containerd[1905]: time="2025-09-03T23:31:42.967518636Z" level=info msg="RemoveContainer for \"e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215\" returns successfully" Sep 3 23:31:42.967821 kubelet[3386]: I0903 23:31:42.967800 3386 scope.go:117] "RemoveContainer" containerID="1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503" Sep 3 23:31:42.968987 containerd[1905]: time="2025-09-03T23:31:42.968966591Z" level=info msg="RemoveContainer for \"1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503\"" Sep 3 23:31:42.978318 containerd[1905]: time="2025-09-03T23:31:42.977729888Z" level=info msg="RemoveContainer for \"1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503\" returns successfully" Sep 3 23:31:42.978386 kubelet[3386]: I0903 23:31:42.977858 3386 scope.go:117] "RemoveContainer" containerID="c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a" Sep 3 23:31:42.980645 containerd[1905]: time="2025-09-03T23:31:42.980167428Z" level=info msg="RemoveContainer for \"c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a\"" Sep 3 23:31:42.990812 containerd[1905]: time="2025-09-03T23:31:42.990778438Z" level=info msg="RemoveContainer for \"c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a\" returns successfully" Sep 3 23:31:42.992459 kubelet[3386]: I0903 23:31:42.992438 3386 scope.go:117] "RemoveContainer" containerID="ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1" Sep 3 23:31:42.997510 containerd[1905]: time="2025-09-03T23:31:42.997483295Z" level=info msg="RemoveContainer for \"ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1\"" Sep 3 23:31:43.004988 containerd[1905]: time="2025-09-03T23:31:43.004966828Z" level=info msg="RemoveContainer for \"ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1\" returns successfully" Sep 3 23:31:43.005128 kubelet[3386]: I0903 23:31:43.005106 3386 scope.go:117] "RemoveContainer" containerID="70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe" Sep 3 23:31:43.006189 containerd[1905]: time="2025-09-03T23:31:43.006155525Z" level=info msg="RemoveContainer for \"70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe\"" Sep 3 23:31:43.014434 containerd[1905]: time="2025-09-03T23:31:43.014413749Z" level=info msg="RemoveContainer for \"70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe\" returns successfully" Sep 3 23:31:43.014636 kubelet[3386]: I0903 23:31:43.014622 3386 scope.go:117] "RemoveContainer" containerID="e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215" Sep 3 23:31:43.015001 containerd[1905]: time="2025-09-03T23:31:43.014865812Z" level=error msg="ContainerStatus for \"e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215\": not found" Sep 3 23:31:43.015190 kubelet[3386]: E0903 23:31:43.015167 3386 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215\": not found" containerID="e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215" Sep 3 23:31:43.015245 kubelet[3386]: I0903 23:31:43.015192 3386 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215"} err="failed to get container status \"e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215\": rpc error: code = NotFound desc = an error occurred when try to find container \"e60989499cdd8bf6804d18e4d9c41f4dbfc66e4f8ca858e2105211d9876d9215\": not found" Sep 3 23:31:43.015245 kubelet[3386]: I0903 23:31:43.015236 3386 scope.go:117] "RemoveContainer" containerID="1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503" Sep 3 23:31:43.015567 containerd[1905]: time="2025-09-03T23:31:43.015502043Z" level=error msg="ContainerStatus for \"1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503\": not found" Sep 3 23:31:43.015695 kubelet[3386]: E0903 23:31:43.015681 3386 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503\": not found" containerID="1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503" Sep 3 23:31:43.015785 kubelet[3386]: I0903 23:31:43.015770 3386 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503"} err="failed to get container status \"1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503\": rpc error: code = NotFound desc = an error occurred when try to find container \"1931a4a05095b217db313a3d24f21887de6990192a5431601d6c1199db3ca503\": not found" Sep 3 23:31:43.015897 kubelet[3386]: I0903 23:31:43.015837 3386 scope.go:117] "RemoveContainer" containerID="c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a" Sep 3 23:31:43.016219 containerd[1905]: time="2025-09-03T23:31:43.016053646Z" level=error msg="ContainerStatus for \"c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a\": not found" Sep 3 23:31:43.016273 kubelet[3386]: E0903 23:31:43.016146 3386 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a\": not found" containerID="c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a" Sep 3 23:31:43.016273 kubelet[3386]: I0903 23:31:43.016166 3386 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a"} err="failed to get container status \"c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a\": rpc error: code = NotFound desc = an error occurred when try to find container \"c208bf9ca565ccd7135aec2218730225e3dad8c42c9fdae090c0f57b862a918a\": not found" Sep 3 23:31:43.016273 kubelet[3386]: I0903 23:31:43.016180 3386 scope.go:117] "RemoveContainer" containerID="ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1" Sep 3 23:31:43.016481 containerd[1905]: time="2025-09-03T23:31:43.016450324Z" level=error msg="ContainerStatus for \"ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1\" failed" error="rpc error: 
code = NotFound desc = an error occurred when try to find container \"ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1\": not found" Sep 3 23:31:43.016629 kubelet[3386]: E0903 23:31:43.016557 3386 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1\": not found" containerID="ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1" Sep 3 23:31:43.016629 kubelet[3386]: I0903 23:31:43.016574 3386 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1"} err="failed to get container status \"ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"ed0fa52cb70133a7b4526cce4da860a44c0200d61c47099254388c3be08568c1\": not found" Sep 3 23:31:43.016741 kubelet[3386]: I0903 23:31:43.016586 3386 scope.go:117] "RemoveContainer" containerID="70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe" Sep 3 23:31:43.017026 containerd[1905]: time="2025-09-03T23:31:43.016897987Z" level=error msg="ContainerStatus for \"70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe\": not found" Sep 3 23:31:43.017198 kubelet[3386]: E0903 23:31:43.017184 3386 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe\": not found" containerID="70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe" Sep 3 23:31:43.017303 kubelet[3386]: I0903 23:31:43.017286 3386 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe"} err="failed to get container status \"70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe\": rpc error: code = NotFound desc = an error occurred when try to find container \"70bbfdb414e3b535736ce6412241ab23ee964ad3adfe14172bb90b69c805babe\": not found" Sep 3 23:31:43.017412 kubelet[3386]: I0903 23:31:43.017354 3386 scope.go:117] "RemoveContainer" containerID="048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d" Sep 3 23:31:43.018520 containerd[1905]: time="2025-09-03T23:31:43.018463474Z" level=info msg="RemoveContainer for \"048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d\"" Sep 3 23:31:43.025834 containerd[1905]: time="2025-09-03T23:31:43.025645084Z" level=info msg="RemoveContainer for \"048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d\" returns successfully" Sep 3 23:31:43.025897 kubelet[3386]: I0903 23:31:43.025765 3386 scope.go:117] "RemoveContainer" containerID="048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d" Sep 3 23:31:43.026137 kubelet[3386]: E0903 23:31:43.026094 3386 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d\": not found" containerID="048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d" Sep 3 23:31:43.026137 kubelet[3386]: 
I0903 23:31:43.026109 3386 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d"} err="failed to get container status \"048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d\": rpc error: code = NotFound desc = an error occurred when try to find container \"048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d\": not found" Sep 3 23:31:43.026180 containerd[1905]: time="2025-09-03T23:31:43.026018393Z" level=error msg="ContainerStatus for \"048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"048bc212eed58d2ea585879a46d66460c3441243108edcf20d8747b328c0760d\": not found" Sep 3 23:31:43.473720 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243-shm.mount: Deactivated successfully. Sep 3 23:31:43.473813 systemd[1]: var-lib-kubelet-pods-89cc364d\x2dd6db\x2d40fb\x2d8f31\x2d457c13967201-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d89m87.mount: Deactivated successfully. Sep 3 23:31:43.473861 systemd[1]: var-lib-kubelet-pods-f0756153\x2dc1ae\x2d4b65\x2d9224\x2d2175f0918895-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl8qpz.mount: Deactivated successfully. Sep 3 23:31:43.474282 systemd[1]: var-lib-kubelet-pods-89cc364d\x2dd6db\x2d40fb\x2d8f31\x2d457c13967201-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 3 23:31:43.474354 systemd[1]: var-lib-kubelet-pods-89cc364d\x2dd6db\x2d40fb\x2d8f31\x2d457c13967201-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 3 23:31:44.455465 sshd[4905]: Connection closed by 10.200.16.10 port 57472 Sep 3 23:31:44.456020 sshd-session[4903]: pam_unix(sshd:session): session closed for user core Sep 3 23:31:44.458552 systemd-logind[1864]: Session 24 logged out. Waiting for processes to exit. Sep 3 23:31:44.460136 systemd[1]: sshd@21-10.200.20.24:22-10.200.16.10:57472.service: Deactivated successfully. Sep 3 23:31:44.462117 systemd[1]: session-24.scope: Deactivated successfully. Sep 3 23:31:44.463367 systemd-logind[1864]: Removed session 24. Sep 3 23:31:44.547180 systemd[1]: Started sshd@22-10.200.20.24:22-10.200.16.10:57488.service - OpenSSH per-connection server daemon (10.200.16.10:57488). 
Sep 3 23:31:44.595989 kubelet[3386]: I0903 23:31:44.595960 3386 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89cc364d-d6db-40fb-8f31-457c13967201" path="/var/lib/kubelet/pods/89cc364d-d6db-40fb-8f31-457c13967201/volumes" Sep 3 23:31:44.596338 kubelet[3386]: I0903 23:31:44.596319 3386 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0756153-c1ae-4b65-9224-2175f0918895" path="/var/lib/kubelet/pods/f0756153-c1ae-4b65-9224-2175f0918895/volumes" Sep 3 23:31:44.610978 containerd[1905]: time="2025-09-03T23:31:44.610947096Z" level=info msg="StopPodSandbox for \"ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243\"" Sep 3 23:31:44.611166 containerd[1905]: time="2025-09-03T23:31:44.611081484Z" level=info msg="TearDown network for sandbox \"ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243\" successfully" Sep 3 23:31:44.611166 containerd[1905]: time="2025-09-03T23:31:44.611090572Z" level=info msg="StopPodSandbox for \"ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243\" returns successfully" Sep 3 23:31:44.611538 containerd[1905]: time="2025-09-03T23:31:44.611520195Z" level=info msg="RemovePodSandbox for \"ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243\"" Sep 3 23:31:44.611679 containerd[1905]: time="2025-09-03T23:31:44.611615854Z" level=info msg="Forcibly stopping sandbox \"ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243\"" Sep 3 23:31:44.611754 containerd[1905]: time="2025-09-03T23:31:44.611741859Z" level=info msg="TearDown network for sandbox \"ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243\" successfully" Sep 3 23:31:44.612594 containerd[1905]: time="2025-09-03T23:31:44.612562574Z" level=info msg="Ensure that sandbox ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243 in task-service has been cleanup successfully" Sep 3 23:31:44.623868 containerd[1905]: time="2025-09-03T23:31:44.623808044Z" level=info msg="RemovePodSandbox \"ae28248f7cab9aab8fb17c7066c3c22cd508056f3d0dd970c8929f19c74cf243\" returns successfully" Sep 3 23:31:44.624227 containerd[1905]: time="2025-09-03T23:31:44.624205385Z" level=info msg="StopPodSandbox for \"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\"" Sep 3 23:31:44.624293 containerd[1905]: time="2025-09-03T23:31:44.624275124Z" level=info msg="TearDown network for sandbox \"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\" successfully" Sep 3 23:31:44.624293 containerd[1905]: time="2025-09-03T23:31:44.624288468Z" level=info msg="StopPodSandbox for \"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\" returns successfully" Sep 3 23:31:44.624557 containerd[1905]: time="2025-09-03T23:31:44.624529052Z" level=info msg="RemovePodSandbox for \"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\"" Sep 3 23:31:44.624557 containerd[1905]: time="2025-09-03T23:31:44.624547837Z" level=info msg="Forcibly stopping sandbox \"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\"" Sep 3 23:31:44.624611 containerd[1905]: time="2025-09-03T23:31:44.624591215Z" level=info msg="TearDown network for sandbox \"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\" successfully" Sep 3 23:31:44.625209 containerd[1905]: time="2025-09-03T23:31:44.625190699Z" level=info msg="Ensure that sandbox 75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974 in task-service has been cleanup successfully" Sep 3 23:31:44.635028 containerd[1905]: 
time="2025-09-03T23:31:44.635000832Z" level=info msg="RemovePodSandbox \"75eedc8befe3b38d801240ed4445e18bc5a978179d101d851400e41cdfbbd974\" returns successfully" Sep 3 23:31:44.703492 kubelet[3386]: E0903 23:31:44.703446 3386 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 3 23:31:45.044004 sshd[5054]: Accepted publickey for core from 10.200.16.10 port 57488 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:31:45.044700 sshd-session[5054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:31:45.047956 systemd-logind[1864]: New session 25 of user core. Sep 3 23:31:45.057015 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 3 23:31:45.803971 kubelet[3386]: I0903 23:31:45.803402 3386 memory_manager.go:355] "RemoveStaleState removing state" podUID="f0756153-c1ae-4b65-9224-2175f0918895" containerName="cilium-operator" Sep 3 23:31:45.803971 kubelet[3386]: I0903 23:31:45.803428 3386 memory_manager.go:355] "RemoveStaleState removing state" podUID="89cc364d-d6db-40fb-8f31-457c13967201" containerName="cilium-agent" Sep 3 23:31:45.811295 systemd[1]: Created slice kubepods-burstable-poda009aacb_f707_4e8c_9861_0d2ea7a382bd.slice - libcontainer container kubepods-burstable-poda009aacb_f707_4e8c_9861_0d2ea7a382bd.slice. Sep 3 23:31:45.888710 sshd[5058]: Connection closed by 10.200.16.10 port 57488 Sep 3 23:31:45.889104 sshd-session[5054]: pam_unix(sshd:session): session closed for user core Sep 3 23:31:45.891884 systemd-logind[1864]: Session 25 logged out. Waiting for processes to exit. Sep 3 23:31:45.892306 systemd[1]: sshd@22-10.200.20.24:22-10.200.16.10:57488.service: Deactivated successfully. Sep 3 23:31:45.893719 systemd[1]: session-25.scope: Deactivated successfully. Sep 3 23:31:45.895428 systemd-logind[1864]: Removed session 25. 
Sep 3 23:31:45.944232 kubelet[3386]: I0903 23:31:45.944185 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a009aacb-f707-4e8c-9861-0d2ea7a382bd-cilium-cgroup\") pod \"cilium-ks2sc\" (UID: \"a009aacb-f707-4e8c-9861-0d2ea7a382bd\") " pod="kube-system/cilium-ks2sc" Sep 3 23:31:45.944232 kubelet[3386]: I0903 23:31:45.944213 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a009aacb-f707-4e8c-9861-0d2ea7a382bd-cni-path\") pod \"cilium-ks2sc\" (UID: \"a009aacb-f707-4e8c-9861-0d2ea7a382bd\") " pod="kube-system/cilium-ks2sc" Sep 3 23:31:45.944458 kubelet[3386]: I0903 23:31:45.944251 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a009aacb-f707-4e8c-9861-0d2ea7a382bd-cilium-config-path\") pod \"cilium-ks2sc\" (UID: \"a009aacb-f707-4e8c-9861-0d2ea7a382bd\") " pod="kube-system/cilium-ks2sc" Sep 3 23:31:45.944458 kubelet[3386]: I0903 23:31:45.944290 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a009aacb-f707-4e8c-9861-0d2ea7a382bd-host-proc-sys-net\") pod \"cilium-ks2sc\" (UID: \"a009aacb-f707-4e8c-9861-0d2ea7a382bd\") " pod="kube-system/cilium-ks2sc" Sep 3 23:31:45.944458 kubelet[3386]: I0903 23:31:45.944308 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w95sg\" (UniqueName: \"kubernetes.io/projected/a009aacb-f707-4e8c-9861-0d2ea7a382bd-kube-api-access-w95sg\") pod \"cilium-ks2sc\" (UID: \"a009aacb-f707-4e8c-9861-0d2ea7a382bd\") " pod="kube-system/cilium-ks2sc" Sep 3 23:31:45.944458 kubelet[3386]: I0903 23:31:45.944326 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a009aacb-f707-4e8c-9861-0d2ea7a382bd-bpf-maps\") pod \"cilium-ks2sc\" (UID: \"a009aacb-f707-4e8c-9861-0d2ea7a382bd\") " pod="kube-system/cilium-ks2sc" Sep 3 23:31:45.944458 kubelet[3386]: I0903 23:31:45.944347 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a009aacb-f707-4e8c-9861-0d2ea7a382bd-etc-cni-netd\") pod \"cilium-ks2sc\" (UID: \"a009aacb-f707-4e8c-9861-0d2ea7a382bd\") " pod="kube-system/cilium-ks2sc" Sep 3 23:31:45.944458 kubelet[3386]: I0903 23:31:45.944363 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a009aacb-f707-4e8c-9861-0d2ea7a382bd-cilium-run\") pod \"cilium-ks2sc\" (UID: \"a009aacb-f707-4e8c-9861-0d2ea7a382bd\") " pod="kube-system/cilium-ks2sc" Sep 3 23:31:45.944582 kubelet[3386]: I0903 23:31:45.944375 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a009aacb-f707-4e8c-9861-0d2ea7a382bd-cilium-ipsec-secrets\") pod \"cilium-ks2sc\" (UID: \"a009aacb-f707-4e8c-9861-0d2ea7a382bd\") " pod="kube-system/cilium-ks2sc" Sep 3 23:31:45.944582 kubelet[3386]: I0903 23:31:45.944385 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/a009aacb-f707-4e8c-9861-0d2ea7a382bd-clustermesh-secrets\") pod \"cilium-ks2sc\" (UID: \"a009aacb-f707-4e8c-9861-0d2ea7a382bd\") " pod="kube-system/cilium-ks2sc" Sep 3 23:31:45.944582 kubelet[3386]: I0903 23:31:45.944394 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a009aacb-f707-4e8c-9861-0d2ea7a382bd-hubble-tls\") pod \"cilium-ks2sc\" (UID: \"a009aacb-f707-4e8c-9861-0d2ea7a382bd\") " pod="kube-system/cilium-ks2sc" Sep 3 23:31:45.944582 kubelet[3386]: I0903 23:31:45.944404 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a009aacb-f707-4e8c-9861-0d2ea7a382bd-host-proc-sys-kernel\") pod \"cilium-ks2sc\" (UID: \"a009aacb-f707-4e8c-9861-0d2ea7a382bd\") " pod="kube-system/cilium-ks2sc" Sep 3 23:31:45.944582 kubelet[3386]: I0903 23:31:45.944416 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a009aacb-f707-4e8c-9861-0d2ea7a382bd-lib-modules\") pod \"cilium-ks2sc\" (UID: \"a009aacb-f707-4e8c-9861-0d2ea7a382bd\") " pod="kube-system/cilium-ks2sc" Sep 3 23:31:45.944582 kubelet[3386]: I0903 23:31:45.944439 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a009aacb-f707-4e8c-9861-0d2ea7a382bd-xtables-lock\") pod \"cilium-ks2sc\" (UID: \"a009aacb-f707-4e8c-9861-0d2ea7a382bd\") " pod="kube-system/cilium-ks2sc" Sep 3 23:31:45.944697 kubelet[3386]: I0903 23:31:45.944448 3386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a009aacb-f707-4e8c-9861-0d2ea7a382bd-hostproc\") pod \"cilium-ks2sc\" (UID: \"a009aacb-f707-4e8c-9861-0d2ea7a382bd\") " pod="kube-system/cilium-ks2sc" Sep 3 23:31:45.976358 systemd[1]: Started sshd@23-10.200.20.24:22-10.200.16.10:57490.service - OpenSSH per-connection server daemon (10.200.16.10:57490). Sep 3 23:31:46.115798 containerd[1905]: time="2025-09-03T23:31:46.115713182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ks2sc,Uid:a009aacb-f707-4e8c-9861-0d2ea7a382bd,Namespace:kube-system,Attempt:0,}" Sep 3 23:31:46.150513 containerd[1905]: time="2025-09-03T23:31:46.150478946Z" level=info msg="connecting to shim c3a4ba92f6c175c345b01104710362244964e0ccfb0b67ba4d45487d1d9da041" address="unix:///run/containerd/s/13e3a66341b5f911b9970b9a02c15e983991f0b307cf8b2bda067d857b62d652" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:31:46.170019 systemd[1]: Started cri-containerd-c3a4ba92f6c175c345b01104710362244964e0ccfb0b67ba4d45487d1d9da041.scope - libcontainer container c3a4ba92f6c175c345b01104710362244964e0ccfb0b67ba4d45487d1d9da041. 
Sep 3 23:31:46.192763 containerd[1905]: time="2025-09-03T23:31:46.192730885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ks2sc,Uid:a009aacb-f707-4e8c-9861-0d2ea7a382bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3a4ba92f6c175c345b01104710362244964e0ccfb0b67ba4d45487d1d9da041\"" Sep 3 23:31:46.195458 containerd[1905]: time="2025-09-03T23:31:46.195425217Z" level=info msg="CreateContainer within sandbox \"c3a4ba92f6c175c345b01104710362244964e0ccfb0b67ba4d45487d1d9da041\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 3 23:31:46.210827 containerd[1905]: time="2025-09-03T23:31:46.210801195Z" level=info msg="Container 46c9ffadf67e6149d4dbfc6409be5101bec6429638388b21661be8cefdc35fb4: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:31:46.224960 containerd[1905]: time="2025-09-03T23:31:46.224934106Z" level=info msg="CreateContainer within sandbox \"c3a4ba92f6c175c345b01104710362244964e0ccfb0b67ba4d45487d1d9da041\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"46c9ffadf67e6149d4dbfc6409be5101bec6429638388b21661be8cefdc35fb4\"" Sep 3 23:31:46.226123 containerd[1905]: time="2025-09-03T23:31:46.225425955Z" level=info msg="StartContainer for \"46c9ffadf67e6149d4dbfc6409be5101bec6429638388b21661be8cefdc35fb4\"" Sep 3 23:31:46.226200 containerd[1905]: time="2025-09-03T23:31:46.226150236Z" level=info msg="connecting to shim 46c9ffadf67e6149d4dbfc6409be5101bec6429638388b21661be8cefdc35fb4" address="unix:///run/containerd/s/13e3a66341b5f911b9970b9a02c15e983991f0b307cf8b2bda067d857b62d652" protocol=ttrpc version=3 Sep 3 23:31:46.238035 systemd[1]: Started cri-containerd-46c9ffadf67e6149d4dbfc6409be5101bec6429638388b21661be8cefdc35fb4.scope - libcontainer container 46c9ffadf67e6149d4dbfc6409be5101bec6429638388b21661be8cefdc35fb4. Sep 3 23:31:46.264864 containerd[1905]: time="2025-09-03T23:31:46.264831613Z" level=info msg="StartContainer for \"46c9ffadf67e6149d4dbfc6409be5101bec6429638388b21661be8cefdc35fb4\" returns successfully" Sep 3 23:31:46.270180 systemd[1]: cri-containerd-46c9ffadf67e6149d4dbfc6409be5101bec6429638388b21661be8cefdc35fb4.scope: Deactivated successfully. Sep 3 23:31:46.274337 containerd[1905]: time="2025-09-03T23:31:46.274310039Z" level=info msg="received exit event container_id:\"46c9ffadf67e6149d4dbfc6409be5101bec6429638388b21661be8cefdc35fb4\" id:\"46c9ffadf67e6149d4dbfc6409be5101bec6429638388b21661be8cefdc35fb4\" pid:5132 exited_at:{seconds:1756942306 nanos:273905545}" Sep 3 23:31:46.274475 containerd[1905]: time="2025-09-03T23:31:46.274454308Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46c9ffadf67e6149d4dbfc6409be5101bec6429638388b21661be8cefdc35fb4\" id:\"46c9ffadf67e6149d4dbfc6409be5101bec6429638388b21661be8cefdc35fb4\" pid:5132 exited_at:{seconds:1756942306 nanos:273905545}" Sep 3 23:31:46.442872 sshd[5069]: Accepted publickey for core from 10.200.16.10 port 57490 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:31:46.444243 sshd-session[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:31:46.447433 systemd-logind[1864]: New session 26 of user core. Sep 3 23:31:46.453020 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 3 23:31:46.768965 sshd[5165]: Connection closed by 10.200.16.10 port 57490 Sep 3 23:31:46.768279 sshd-session[5069]: pam_unix(sshd:session): session closed for user core Sep 3 23:31:46.771885 systemd[1]: sshd@23-10.200.20.24:22-10.200.16.10:57490.service: Deactivated successfully. Sep 3 23:31:46.773946 systemd[1]: session-26.scope: Deactivated successfully. Sep 3 23:31:46.775170 systemd-logind[1864]: Session 26 logged out. Waiting for processes to exit. Sep 3 23:31:46.776300 systemd-logind[1864]: Removed session 26. Sep 3 23:31:46.851099 systemd[1]: Started sshd@24-10.200.20.24:22-10.200.16.10:57502.service - OpenSSH per-connection server daemon (10.200.16.10:57502). Sep 3 23:31:46.960928 containerd[1905]: time="2025-09-03T23:31:46.960852367Z" level=info msg="CreateContainer within sandbox \"c3a4ba92f6c175c345b01104710362244964e0ccfb0b67ba4d45487d1d9da041\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 3 23:31:46.976968 containerd[1905]: time="2025-09-03T23:31:46.976541684Z" level=info msg="Container 4655161dbd0ddf223169b8519c01f30cff201bb64c4435bf194135d1db6227fb: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:31:46.990065 containerd[1905]: time="2025-09-03T23:31:46.989976404Z" level=info msg="CreateContainer within sandbox \"c3a4ba92f6c175c345b01104710362244964e0ccfb0b67ba4d45487d1d9da041\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4655161dbd0ddf223169b8519c01f30cff201bb64c4435bf194135d1db6227fb\"" Sep 3 23:31:46.991825 containerd[1905]: time="2025-09-03T23:31:46.991795946Z" level=info msg="StartContainer for \"4655161dbd0ddf223169b8519c01f30cff201bb64c4435bf194135d1db6227fb\"" Sep 3 23:31:46.993241 containerd[1905]: time="2025-09-03T23:31:46.993040340Z" level=info msg="connecting to shim 4655161dbd0ddf223169b8519c01f30cff201bb64c4435bf194135d1db6227fb" address="unix:///run/containerd/s/13e3a66341b5f911b9970b9a02c15e983991f0b307cf8b2bda067d857b62d652" protocol=ttrpc version=3 Sep 3 23:31:47.017037 systemd[1]: Started cri-containerd-4655161dbd0ddf223169b8519c01f30cff201bb64c4435bf194135d1db6227fb.scope - libcontainer container 4655161dbd0ddf223169b8519c01f30cff201bb64c4435bf194135d1db6227fb. Sep 3 23:31:47.041018 containerd[1905]: time="2025-09-03T23:31:47.040935622Z" level=info msg="StartContainer for \"4655161dbd0ddf223169b8519c01f30cff201bb64c4435bf194135d1db6227fb\" returns successfully" Sep 3 23:31:47.042966 systemd[1]: cri-containerd-4655161dbd0ddf223169b8519c01f30cff201bb64c4435bf194135d1db6227fb.scope: Deactivated successfully. Sep 3 23:31:47.043957 containerd[1905]: time="2025-09-03T23:31:47.043933460Z" level=info msg="received exit event container_id:\"4655161dbd0ddf223169b8519c01f30cff201bb64c4435bf194135d1db6227fb\" id:\"4655161dbd0ddf223169b8519c01f30cff201bb64c4435bf194135d1db6227fb\" pid:5186 exited_at:{seconds:1756942307 nanos:43665083}" Sep 3 23:31:47.044143 containerd[1905]: time="2025-09-03T23:31:47.044120906Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4655161dbd0ddf223169b8519c01f30cff201bb64c4435bf194135d1db6227fb\" id:\"4655161dbd0ddf223169b8519c01f30cff201bb64c4435bf194135d1db6227fb\" pid:5186 exited_at:{seconds:1756942307 nanos:43665083}" Sep 3 23:31:47.062754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4655161dbd0ddf223169b8519c01f30cff201bb64c4435bf194135d1db6227fb-rootfs.mount: Deactivated successfully. 
Sep 3 23:31:47.302116 sshd[5173]: Accepted publickey for core from 10.200.16.10 port 57502 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:31:47.303181 sshd-session[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:31:47.307086 systemd-logind[1864]: New session 27 of user core. Sep 3 23:31:47.314040 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 3 23:31:47.961474 containerd[1905]: time="2025-09-03T23:31:47.961430142Z" level=info msg="CreateContainer within sandbox \"c3a4ba92f6c175c345b01104710362244964e0ccfb0b67ba4d45487d1d9da041\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 3 23:31:47.983160 containerd[1905]: time="2025-09-03T23:31:47.983127534Z" level=info msg="Container 91c65202d2025ab93bcc3940ead5764de88041d2eafe22cab9dd2ea632856574: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:31:48.001140 containerd[1905]: time="2025-09-03T23:31:48.001111449Z" level=info msg="CreateContainer within sandbox \"c3a4ba92f6c175c345b01104710362244964e0ccfb0b67ba4d45487d1d9da041\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"91c65202d2025ab93bcc3940ead5764de88041d2eafe22cab9dd2ea632856574\"" Sep 3 23:31:48.001737 containerd[1905]: time="2025-09-03T23:31:48.001710749Z" level=info msg="StartContainer for \"91c65202d2025ab93bcc3940ead5764de88041d2eafe22cab9dd2ea632856574\"" Sep 3 23:31:48.002724 containerd[1905]: time="2025-09-03T23:31:48.002697143Z" level=info msg="connecting to shim 91c65202d2025ab93bcc3940ead5764de88041d2eafe22cab9dd2ea632856574" address="unix:///run/containerd/s/13e3a66341b5f911b9970b9a02c15e983991f0b307cf8b2bda067d857b62d652" protocol=ttrpc version=3 Sep 3 23:31:48.019050 systemd[1]: Started cri-containerd-91c65202d2025ab93bcc3940ead5764de88041d2eafe22cab9dd2ea632856574.scope - libcontainer container 91c65202d2025ab93bcc3940ead5764de88041d2eafe22cab9dd2ea632856574. Sep 3 23:31:48.043706 systemd[1]: cri-containerd-91c65202d2025ab93bcc3940ead5764de88041d2eafe22cab9dd2ea632856574.scope: Deactivated successfully. Sep 3 23:31:48.046278 containerd[1905]: time="2025-09-03T23:31:48.046253382Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91c65202d2025ab93bcc3940ead5764de88041d2eafe22cab9dd2ea632856574\" id:\"91c65202d2025ab93bcc3940ead5764de88041d2eafe22cab9dd2ea632856574\" pid:5243 exited_at:{seconds:1756942308 nanos:45711939}" Sep 3 23:31:48.047000 containerd[1905]: time="2025-09-03T23:31:48.046962726Z" level=info msg="received exit event container_id:\"91c65202d2025ab93bcc3940ead5764de88041d2eafe22cab9dd2ea632856574\" id:\"91c65202d2025ab93bcc3940ead5764de88041d2eafe22cab9dd2ea632856574\" pid:5243 exited_at:{seconds:1756942308 nanos:45711939}" Sep 3 23:31:48.052409 containerd[1905]: time="2025-09-03T23:31:48.052387774Z" level=info msg="StartContainer for \"91c65202d2025ab93bcc3940ead5764de88041d2eafe22cab9dd2ea632856574\" returns successfully" Sep 3 23:31:48.062721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91c65202d2025ab93bcc3940ead5764de88041d2eafe22cab9dd2ea632856574-rootfs.mount: Deactivated successfully. 
Sep 3 23:31:48.690978 kubelet[3386]: I0903 23:31:48.690936 3386 setters.go:602] "Node became not ready" node="ci-4372.1.0-n-e4e1aff60f" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-03T23:31:48Z","lastTransitionTime":"2025-09-03T23:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 3 23:31:48.968033 containerd[1905]: time="2025-09-03T23:31:48.967757704Z" level=info msg="CreateContainer within sandbox \"c3a4ba92f6c175c345b01104710362244964e0ccfb0b67ba4d45487d1d9da041\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 3 23:31:48.991886 containerd[1905]: time="2025-09-03T23:31:48.991258126Z" level=info msg="Container 6787444b5591713373e09847fb95b97566b18f9b61665094cf8c25b376daf0fc: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:31:49.006018 containerd[1905]: time="2025-09-03T23:31:49.005990010Z" level=info msg="CreateContainer within sandbox \"c3a4ba92f6c175c345b01104710362244964e0ccfb0b67ba4d45487d1d9da041\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6787444b5591713373e09847fb95b97566b18f9b61665094cf8c25b376daf0fc\"" Sep 3 23:31:49.006934 containerd[1905]: time="2025-09-03T23:31:49.006414616Z" level=info msg="StartContainer for \"6787444b5591713373e09847fb95b97566b18f9b61665094cf8c25b376daf0fc\"" Sep 3 23:31:49.007699 containerd[1905]: time="2025-09-03T23:31:49.007676635Z" level=info msg="connecting to shim 6787444b5591713373e09847fb95b97566b18f9b61665094cf8c25b376daf0fc" address="unix:///run/containerd/s/13e3a66341b5f911b9970b9a02c15e983991f0b307cf8b2bda067d857b62d652" protocol=ttrpc version=3 Sep 3 23:31:49.023038 systemd[1]: Started cri-containerd-6787444b5591713373e09847fb95b97566b18f9b61665094cf8c25b376daf0fc.scope - libcontainer container 6787444b5591713373e09847fb95b97566b18f9b61665094cf8c25b376daf0fc. Sep 3 23:31:49.041085 systemd[1]: cri-containerd-6787444b5591713373e09847fb95b97566b18f9b61665094cf8c25b376daf0fc.scope: Deactivated successfully. Sep 3 23:31:49.043955 containerd[1905]: time="2025-09-03T23:31:49.043930570Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6787444b5591713373e09847fb95b97566b18f9b61665094cf8c25b376daf0fc\" id:\"6787444b5591713373e09847fb95b97566b18f9b61665094cf8c25b376daf0fc\" pid:5282 exited_at:{seconds:1756942309 nanos:43457034}" Sep 3 23:31:49.051757 containerd[1905]: time="2025-09-03T23:31:49.051694058Z" level=info msg="received exit event container_id:\"6787444b5591713373e09847fb95b97566b18f9b61665094cf8c25b376daf0fc\" id:\"6787444b5591713373e09847fb95b97566b18f9b61665094cf8c25b376daf0fc\" pid:5282 exited_at:{seconds:1756942309 nanos:43457034}" Sep 3 23:31:49.052507 containerd[1905]: time="2025-09-03T23:31:49.052484893Z" level=info msg="StartContainer for \"6787444b5591713373e09847fb95b97566b18f9b61665094cf8c25b376daf0fc\" returns successfully" Sep 3 23:31:49.066609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6787444b5591713373e09847fb95b97566b18f9b61665094cf8c25b376daf0fc-rootfs.mount: Deactivated successfully. 
Sep 3 23:31:49.704148 kubelet[3386]: E0903 23:31:49.704108 3386 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 3 23:31:49.975138 containerd[1905]: time="2025-09-03T23:31:49.975043402Z" level=info msg="CreateContainer within sandbox \"c3a4ba92f6c175c345b01104710362244964e0ccfb0b67ba4d45487d1d9da041\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 3 23:31:49.996571 containerd[1905]: time="2025-09-03T23:31:49.995490553Z" level=info msg="Container 80762a7e6c0715f5fbfcc81bd0751cb605d4f71f39f01b446335bae288cfaa14: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:31:50.014733 containerd[1905]: time="2025-09-03T23:31:50.014705317Z" level=info msg="CreateContainer within sandbox \"c3a4ba92f6c175c345b01104710362244964e0ccfb0b67ba4d45487d1d9da041\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"80762a7e6c0715f5fbfcc81bd0751cb605d4f71f39f01b446335bae288cfaa14\"" Sep 3 23:31:50.015292 containerd[1905]: time="2025-09-03T23:31:50.015269568Z" level=info msg="StartContainer for \"80762a7e6c0715f5fbfcc81bd0751cb605d4f71f39f01b446335bae288cfaa14\"" Sep 3 23:31:50.015845 containerd[1905]: time="2025-09-03T23:31:50.015822667Z" level=info msg="connecting to shim 80762a7e6c0715f5fbfcc81bd0751cb605d4f71f39f01b446335bae288cfaa14" address="unix:///run/containerd/s/13e3a66341b5f911b9970b9a02c15e983991f0b307cf8b2bda067d857b62d652" protocol=ttrpc version=3 Sep 3 23:31:50.033029 systemd[1]: Started cri-containerd-80762a7e6c0715f5fbfcc81bd0751cb605d4f71f39f01b446335bae288cfaa14.scope - libcontainer container 80762a7e6c0715f5fbfcc81bd0751cb605d4f71f39f01b446335bae288cfaa14. Sep 3 23:31:50.060181 containerd[1905]: time="2025-09-03T23:31:50.060106019Z" level=info msg="StartContainer for \"80762a7e6c0715f5fbfcc81bd0751cb605d4f71f39f01b446335bae288cfaa14\" returns successfully" Sep 3 23:31:50.124157 containerd[1905]: time="2025-09-03T23:31:50.124127585Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80762a7e6c0715f5fbfcc81bd0751cb605d4f71f39f01b446335bae288cfaa14\" id:\"af88e150b75f72d316e8ecf22acd4fa05beb2c9435be6294448a89ec3c46fc91\" pid:5352 exited_at:{seconds:1756942310 nanos:123681530}" Sep 3 23:31:50.463928 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 3 23:31:50.990210 kubelet[3386]: I0903 23:31:50.990147 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ks2sc" podStartSLOduration=5.99012279 podStartE2EDuration="5.99012279s" podCreationTimestamp="2025-09-03 23:31:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:31:50.989999434 +0000 UTC m=+186.460925574" watchObservedRunningTime="2025-09-03 23:31:50.99012279 +0000 UTC m=+186.461048930" Sep 3 23:31:51.696795 containerd[1905]: time="2025-09-03T23:31:51.696747139Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80762a7e6c0715f5fbfcc81bd0751cb605d4f71f39f01b446335bae288cfaa14\" id:\"6ce96511121079d3e46ff49d95cf72660a37f8ffa4d34e71f528299191a3ebd1\" pid:5431 exit_status:1 exited_at:{seconds:1756942311 nanos:696574869}" Sep 3 23:31:52.813056 systemd-networkd[1700]: lxc_health: Link UP Sep 3 23:31:52.816820 systemd-networkd[1700]: lxc_health: Gained carrier Sep 3 23:31:53.594926 kubelet[3386]: E0903 23:31:53.594793 3386 pod_workers.go:1301] "Error syncing 
pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-bbffz" podUID="1de02790-4c54-4caf-9d28-9614c2aa7c38" Sep 3 23:31:53.785869 containerd[1905]: time="2025-09-03T23:31:53.785832391Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80762a7e6c0715f5fbfcc81bd0751cb605d4f71f39f01b446335bae288cfaa14\" id:\"56c2bf2a9fbd4a5593efead354e6bb144be33396b66664c92fd27f0c2b9a6512\" pid:5871 exited_at:{seconds:1756942313 nanos:785574222}" Sep 3 23:31:54.034134 systemd-networkd[1700]: lxc_health: Gained IPv6LL Sep 3 23:31:55.876898 containerd[1905]: time="2025-09-03T23:31:55.876849166Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80762a7e6c0715f5fbfcc81bd0751cb605d4f71f39f01b446335bae288cfaa14\" id:\"5f9c14d7fee133c086f181d82da2017321f9b9f54bb5d949402963aa6ec16cba\" pid:5916 exited_at:{seconds:1756942315 nanos:876657399}" Sep 3 23:31:57.951339 containerd[1905]: time="2025-09-03T23:31:57.951302007Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80762a7e6c0715f5fbfcc81bd0751cb605d4f71f39f01b446335bae288cfaa14\" id:\"a4b5fad6d1d7946b699d02f313946d7ac8bc933a5877a71cdc7774024cf498fa\" pid:5942 exited_at:{seconds:1756942317 nanos:950820470}" Sep 3 23:31:58.069941 sshd[5222]: Connection closed by 10.200.16.10 port 57502 Sep 3 23:31:58.070903 sshd-session[5173]: pam_unix(sshd:session): session closed for user core Sep 3 23:31:58.073556 systemd[1]: sshd@24-10.200.20.24:22-10.200.16.10:57502.service: Deactivated successfully. Sep 3 23:31:58.075129 systemd[1]: session-27.scope: Deactivated successfully. Sep 3 23:31:58.075752 systemd-logind[1864]: Session 27 logged out. Waiting for processes to exit. Sep 3 23:31:58.076893 systemd-logind[1864]: Removed session 27.