Sep 9 23:41:24.028499 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Sep 9 23:41:24.028516 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 9 22:10:22 -00 2025
Sep 9 23:41:24.028523 kernel: KASLR enabled
Sep 9 23:41:24.028527 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Sep 9 23:41:24.028532 kernel: printk: legacy bootconsole [pl11] enabled
Sep 9 23:41:24.028536 kernel: efi: EFI v2.7 by EDK II
Sep 9 23:41:24.028541 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e018 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Sep 9 23:41:24.028545 kernel: random: crng init done
Sep 9 23:41:24.028549 kernel: secureboot: Secure boot disabled
Sep 9 23:41:24.028553 kernel: ACPI: Early table checksum verification disabled
Sep 9 23:41:24.028557 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Sep 9 23:41:24.028561 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 23:41:24.028565 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 23:41:24.028569 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 9 23:41:24.028575 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 23:41:24.028579 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 23:41:24.028583 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 23:41:24.028587 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 23:41:24.028592 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 23:41:24.028596 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 23:41:24.028600 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Sep 9 23:41:24.028605 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 23:41:24.028609 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Sep 9 23:41:24.028613 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 9 23:41:24.028617 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Sep 9 23:41:24.028621 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Sep 9 23:41:24.028625 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Sep 9 23:41:24.028629 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Sep 9 23:41:24.028634 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Sep 9 23:41:24.028639 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Sep 9 23:41:24.028643 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Sep 9 23:41:24.028647 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Sep 9 23:41:24.028651 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Sep 9 23:41:24.028655 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Sep 9 23:41:24.028659 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Sep 9 23:41:24.028664 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Sep 9 23:41:24.028668 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Sep 9 23:41:24.028672 kernel: NODE_DATA(0) allocated [mem 0x1bf7fda00-0x1bf804fff]
Sep 9 23:41:24.028676 kernel: Zone ranges:
Sep 9 23:41:24.028680 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Sep 9 23:41:24.028687 kernel: DMA32 empty
Sep 9 23:41:24.028692 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Sep 9 23:41:24.028696 kernel: Device empty
Sep 9 23:41:24.028700 kernel: Movable zone start for each node
Sep 9 23:41:24.028705 kernel: Early memory node ranges
Sep 9 23:41:24.028710 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Sep 9 23:41:24.028714 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Sep 9 23:41:24.028719 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Sep 9 23:41:24.028723 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Sep 9 23:41:24.028728 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Sep 9 23:41:24.028732 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Sep 9 23:41:24.028736 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Sep 9 23:41:24.028741 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Sep 9 23:41:24.028745 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Sep 9 23:41:24.028750 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Sep 9 23:41:24.028754 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Sep 9 23:41:24.028758 kernel: cma: Reserved 16 MiB at 0x000000003d400000 on node -1
Sep 9 23:41:24.028763 kernel: psci: probing for conduit method from ACPI.
Sep 9 23:41:24.028768 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 23:41:24.028772 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 23:41:24.028777 kernel: psci: MIGRATE_INFO_TYPE not supported.
Sep 9 23:41:24.028781 kernel: psci: SMC Calling Convention v1.4
Sep 9 23:41:24.028785 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Sep 9 23:41:24.028790 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Sep 9 23:41:24.028794 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 9 23:41:24.028799 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 9 23:41:24.028803 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 9 23:41:24.028808 kernel: Detected PIPT I-cache on CPU0
Sep 9 23:41:24.028813 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Sep 9 23:41:24.028817 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 23:41:24.028822 kernel: CPU features: detected: Spectre-v4
Sep 9 23:41:24.028826 kernel: CPU features: detected: Spectre-BHB
Sep 9 23:41:24.028831 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 23:41:24.028835 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 23:41:24.028839 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Sep 9 23:41:24.028844 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 23:41:24.028848 kernel: alternatives: applying boot alternatives
Sep 9 23:41:24.028853 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=fc7b279c2d918629032c01551b74c66c198cf923a976f9b3bc0d959e7c2302db
Sep 9 23:41:24.028858 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 23:41:24.028863 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 23:41:24.028868 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 23:41:24.028872 kernel: Fallback order for Node 0: 0
Sep 9 23:41:24.028876 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Sep 9 23:41:24.028881 kernel: Policy zone: Normal
Sep 9 23:41:24.028885 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 23:41:24.028889 kernel: software IO TLB: area num 2.
Sep 9 23:41:24.028894 kernel: software IO TLB: mapped [mem 0x0000000036290000-0x000000003a290000] (64MB)
Sep 9 23:41:24.028898 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 9 23:41:24.028903 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 23:41:24.028908 kernel: rcu: RCU event tracing is enabled.
Sep 9 23:41:24.028913 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 9 23:41:24.028917 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 23:41:24.028922 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 23:41:24.028926 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 23:41:24.028931 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 9 23:41:24.028935 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 23:41:24.028940 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 23:41:24.028944 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 23:41:24.028949 kernel: GICv3: 960 SPIs implemented
Sep 9 23:41:24.028953 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 23:41:24.028957 kernel: Root IRQ handler: gic_handle_irq
Sep 9 23:41:24.028962 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Sep 9 23:41:24.028967 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Sep 9 23:41:24.028971 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Sep 9 23:41:24.028976 kernel: ITS: No ITS available, not enabling LPIs
Sep 9 23:41:24.028980 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 23:41:24.028985 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Sep 9 23:41:24.028989 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 9 23:41:24.028994 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Sep 9 23:41:24.028998 kernel: Console: colour dummy device 80x25
Sep 9 23:41:24.029003 kernel: printk: legacy console [tty1] enabled
Sep 9 23:41:24.029008 kernel: ACPI: Core revision 20240827
Sep 9 23:41:24.029012 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Sep 9 23:41:24.029017 kernel: pid_max: default: 32768 minimum: 301
Sep 9 23:41:24.029022 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 23:41:24.029027 kernel: landlock: Up and running.
Sep 9 23:41:24.029031 kernel: SELinux: Initializing.
Sep 9 23:41:24.029036 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:41:24.029043 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:41:24.029049 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1
Sep 9 23:41:24.029054 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Sep 9 23:41:24.029059 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 9 23:41:24.029063 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 23:41:24.029068 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 23:41:24.029074 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 23:41:24.029079 kernel: Remapping and enabling EFI services.
Sep 9 23:41:24.029083 kernel: smp: Bringing up secondary CPUs ...
Sep 9 23:41:24.029088 kernel: Detected PIPT I-cache on CPU1
Sep 9 23:41:24.029093 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Sep 9 23:41:24.029098 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Sep 9 23:41:24.029103 kernel: smp: Brought up 1 node, 2 CPUs
Sep 9 23:41:24.029108 kernel: SMP: Total of 2 processors activated.
Sep 9 23:41:24.029113 kernel: CPU: All CPU(s) started at EL1
Sep 9 23:41:24.029117 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 23:41:24.029122 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Sep 9 23:41:24.029127 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 23:41:24.029132 kernel: CPU features: detected: Common not Private translations
Sep 9 23:41:24.029137 kernel: CPU features: detected: CRC32 instructions
Sep 9 23:41:24.029142 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Sep 9 23:41:24.029147 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 23:41:24.029152 kernel: CPU features: detected: LSE atomic instructions
Sep 9 23:41:24.029157 kernel: CPU features: detected: Privileged Access Never
Sep 9 23:41:24.029162 kernel: CPU features: detected: Speculation barrier (SB)
Sep 9 23:41:24.029166 kernel: CPU features: detected: TLB range maintenance instructions
Sep 9 23:41:24.029171 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 23:41:24.029176 kernel: CPU features: detected: Scalable Vector Extension
Sep 9 23:41:24.029181 kernel: alternatives: applying system-wide alternatives
Sep 9 23:41:24.029186 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Sep 9 23:41:24.029191 kernel: SVE: maximum available vector length 16 bytes per vector
Sep 9 23:41:24.029196 kernel: SVE: default vector length 16 bytes per vector
Sep 9 23:41:24.029201 kernel: Memory: 3959668K/4194160K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38912K init, 1038K bss, 213304K reserved, 16384K cma-reserved)
Sep 9 23:41:24.029206 kernel: devtmpfs: initialized
Sep 9 23:41:24.029210 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 23:41:24.029230 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 9 23:41:24.029234 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 9 23:41:24.029239 kernel: 0 pages in range for non-PLT usage
Sep 9 23:41:24.029245 kernel: 508576 pages in range for PLT usage
Sep 9 23:41:24.029250 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 23:41:24.029254 kernel: SMBIOS 3.1.0 present.
Sep 9 23:41:24.029259 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 9 23:41:24.029264 kernel: DMI: Memory slots populated: 2/2
Sep 9 23:41:24.029269 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 23:41:24.029274 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 23:41:24.029279 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 23:41:24.029283 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 23:41:24.029289 kernel: audit: initializing netlink subsys (disabled)
Sep 9 23:41:24.029294 kernel: audit: type=2000 audit(0.058:1): state=initialized audit_enabled=0 res=1
Sep 9 23:41:24.029299 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 23:41:24.029303 kernel: cpuidle: using governor menu
Sep 9 23:41:24.029308 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 23:41:24.029313 kernel: ASID allocator initialised with 32768 entries
Sep 9 23:41:24.029318 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 23:41:24.029323 kernel: Serial: AMBA PL011 UART driver
Sep 9 23:41:24.029327 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 23:41:24.029333 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 23:41:24.029338 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 23:41:24.029343 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 23:41:24.029347 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 23:41:24.029352 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 23:41:24.029357 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 23:41:24.029362 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 23:41:24.029366 kernel: ACPI: Added _OSI(Module Device)
Sep 9 23:41:24.029371 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 23:41:24.029377 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 23:41:24.029381 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 23:41:24.029386 kernel: ACPI: Interpreter enabled
Sep 9 23:41:24.029391 kernel: ACPI: Using GIC for interrupt routing
Sep 9 23:41:24.029396 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 23:41:24.029401 kernel: printk: legacy console [ttyAMA0] enabled
Sep 9 23:41:24.029405 kernel: printk: legacy bootconsole [pl11] disabled
Sep 9 23:41:24.029410 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Sep 9 23:41:24.029415 kernel: ACPI: CPU0 has been hot-added
Sep 9 23:41:24.029420 kernel: ACPI: CPU1 has been hot-added
Sep 9 23:41:24.029425 kernel: iommu: Default domain type: Translated
Sep 9 23:41:24.029430 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 23:41:24.029435 kernel: efivars: Registered efivars operations
Sep 9 23:41:24.029439 kernel: vgaarb: loaded
Sep 9 23:41:24.029444 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 23:41:24.029449 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 23:41:24.029454 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 23:41:24.029458 kernel: pnp: PnP ACPI init
Sep 9 23:41:24.029464 kernel: pnp: PnP ACPI: found 0 devices
Sep 9 23:41:24.029469 kernel: NET: Registered PF_INET protocol family
Sep 9 23:41:24.029473 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 23:41:24.029478 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 23:41:24.029483 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 23:41:24.029488 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 23:41:24.029493 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 23:41:24.029497 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 23:41:24.029502 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:41:24.029508 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:41:24.029513 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 23:41:24.029517 kernel: PCI: CLS 0 bytes, default 64
Sep 9 23:41:24.029522 kernel: kvm [1]: HYP mode not available
Sep 9 23:41:24.029527 kernel: Initialise system trusted keyrings
Sep 9 23:41:24.029531 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 23:41:24.029536 kernel: Key type asymmetric registered
Sep 9 23:41:24.029541 kernel: Asymmetric key parser 'x509' registered
Sep 9 23:41:24.029546 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 9 23:41:24.029551 kernel: io scheduler mq-deadline registered
Sep 9 23:41:24.029556 kernel: io scheduler kyber registered
Sep 9 23:41:24.029561 kernel: io scheduler bfq registered
Sep 9 23:41:24.029565 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 23:41:24.029570 kernel: thunder_xcv, ver 1.0
Sep 9 23:41:24.029575 kernel: thunder_bgx, ver 1.0
Sep 9 23:41:24.029580 kernel: nicpf, ver 1.0
Sep 9 23:41:24.029584 kernel: nicvf, ver 1.0
Sep 9 23:41:24.029689 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 23:41:24.029740 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T23:41:23 UTC (1757461283)
Sep 9 23:41:24.029746 kernel: efifb: probing for efifb
Sep 9 23:41:24.029751 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 9 23:41:24.029756 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 9 23:41:24.029761 kernel: efifb: scrolling: redraw
Sep 9 23:41:24.029765 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 9 23:41:24.029770 kernel: Console: switching to colour frame buffer device 128x48
Sep 9 23:41:24.029775 kernel: fb0: EFI VGA frame buffer device
Sep 9 23:41:24.029781 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Sep 9 23:41:24.029786 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 23:41:24.029790 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 9 23:41:24.029795 kernel: watchdog: NMI not fully supported
Sep 9 23:41:24.029800 kernel: watchdog: Hard watchdog permanently disabled
Sep 9 23:41:24.029805 kernel: NET: Registered PF_INET6 protocol family
Sep 9 23:41:24.029810 kernel: Segment Routing with IPv6
Sep 9 23:41:24.029814 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 23:41:24.029819 kernel: NET: Registered PF_PACKET protocol family
Sep 9 23:41:24.029824 kernel: Key type dns_resolver registered
Sep 9 23:41:24.029829 kernel: registered taskstats version 1
Sep 9 23:41:24.029834 kernel: Loading compiled-in X.509 certificates
Sep 9 23:41:24.029839 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 61217a1897415238555e2058a4e44c51622b0f87'
Sep 9 23:41:24.029844 kernel: Demotion targets for Node 0: null
Sep 9 23:41:24.029848 kernel: Key type .fscrypt registered
Sep 9 23:41:24.029853 kernel: Key type fscrypt-provisioning registered
Sep 9 23:41:24.029858 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 23:41:24.029863 kernel: ima: Allocated hash algorithm: sha1
Sep 9 23:41:24.029868 kernel: ima: No architecture policies found
Sep 9 23:41:24.029873 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 23:41:24.029878 kernel: clk: Disabling unused clocks
Sep 9 23:41:24.029883 kernel: PM: genpd: Disabling unused power domains
Sep 9 23:41:24.029888 kernel: Warning: unable to open an initial console.
Sep 9 23:41:24.029892 kernel: Freeing unused kernel memory: 38912K
Sep 9 23:41:24.029897 kernel: Run /init as init process
Sep 9 23:41:24.029902 kernel: with arguments:
Sep 9 23:41:24.029906 kernel: /init
Sep 9 23:41:24.029912 kernel: with environment:
Sep 9 23:41:24.029917 kernel: HOME=/
Sep 9 23:41:24.029921 kernel: TERM=linux
Sep 9 23:41:24.029926 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 23:41:24.029932 systemd[1]: Successfully made /usr/ read-only.
Sep 9 23:41:24.029938 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 23:41:24.029944 systemd[1]: Detected virtualization microsoft.
Sep 9 23:41:24.029950 systemd[1]: Detected architecture arm64.
Sep 9 23:41:24.029955 systemd[1]: Running in initrd.
Sep 9 23:41:24.029960 systemd[1]: No hostname configured, using default hostname.
Sep 9 23:41:24.029965 systemd[1]: Hostname set to .
Sep 9 23:41:24.029970 systemd[1]: Initializing machine ID from random generator.
Sep 9 23:41:24.029975 systemd[1]: Queued start job for default target initrd.target.
Sep 9 23:41:24.029980 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:41:24.029986 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:41:24.029991 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 23:41:24.029997 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 23:41:24.030002 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 23:41:24.030008 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 23:41:24.030014 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 23:41:24.030019 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 23:41:24.030025 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:41:24.030031 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:41:24.030036 systemd[1]: Reached target paths.target - Path Units.
Sep 9 23:41:24.030041 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 23:41:24.030046 systemd[1]: Reached target swap.target - Swaps.
Sep 9 23:41:24.030051 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 23:41:24.030056 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 23:41:24.030061 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 23:41:24.030067 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 23:41:24.030072 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 23:41:24.030078 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 23:41:24.030083 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 23:41:24.030089 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 23:41:24.030094 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 23:41:24.030099 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 23:41:24.030104 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 23:41:24.030109 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 23:41:24.030115 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 23:41:24.030121 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 23:41:24.030126 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 23:41:24.030131 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 23:41:24.030146 systemd-journald[224]: Collecting audit messages is disabled.
Sep 9 23:41:24.030160 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:41:24.030166 systemd-journald[224]: Journal started
Sep 9 23:41:24.030179 systemd-journald[224]: Runtime Journal (/run/log/journal/1f5d5365e6a24d4a9051853987c2e897) is 8M, max 78.5M, 70.5M free.
Sep 9 23:41:24.031869 systemd-modules-load[226]: Inserted module 'overlay'
Sep 9 23:41:24.042227 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 23:41:24.048509 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 23:41:24.066311 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 23:41:24.066330 kernel: Bridge firewalling registered
Sep 9 23:41:24.061843 systemd-modules-load[226]: Inserted module 'br_netfilter'
Sep 9 23:41:24.064973 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:41:24.073098 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 23:41:24.077066 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 23:41:24.088029 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:41:24.097813 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 23:41:24.120657 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 23:41:24.124804 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 23:41:24.143617 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 23:41:24.153804 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 23:41:24.160466 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 23:41:24.165993 systemd-tmpfiles[255]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 9 23:41:24.175257 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:41:24.183275 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:41:24.196066 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 23:41:24.210427 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 23:41:24.221528 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 23:41:24.245070 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=fc7b279c2d918629032c01551b74c66c198cf923a976f9b3bc0d959e7c2302db
Sep 9 23:41:24.240554 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 23:41:24.288841 systemd-resolved[263]: Positive Trust Anchors:
Sep 9 23:41:24.288855 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 23:41:24.288875 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 23:41:24.290559 systemd-resolved[263]: Defaulting to hostname 'linux'.
Sep 9 23:41:24.291622 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 23:41:24.295935 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:41:24.369242 kernel: SCSI subsystem initialized
Sep 9 23:41:24.374241 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 23:41:24.382249 kernel: iscsi: registered transport (tcp)
Sep 9 23:41:24.394889 kernel: iscsi: registered transport (qla4xxx)
Sep 9 23:41:24.394921 kernel: QLogic iSCSI HBA Driver
Sep 9 23:41:24.407259 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 23:41:24.429444 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 23:41:24.435753 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 23:41:24.479516 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 23:41:24.485336 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 23:41:24.545234 kernel: raid6: neonx8 gen() 18550 MB/s
Sep 9 23:41:24.564220 kernel: raid6: neonx4 gen() 18553 MB/s
Sep 9 23:41:24.583238 kernel: raid6: neonx2 gen() 17087 MB/s
Sep 9 23:41:24.603224 kernel: raid6: neonx1 gen() 15146 MB/s
Sep 9 23:41:24.622220 kernel: raid6: int64x8 gen() 10776 MB/s
Sep 9 23:41:24.641219 kernel: raid6: int64x4 gen() 10678 MB/s
Sep 9 23:41:24.661220 kernel: raid6: int64x2 gen() 9019 MB/s
Sep 9 23:41:24.682552 kernel: raid6: int64x1 gen() 7100 MB/s
Sep 9 23:41:24.682559 kernel: raid6: using algorithm neonx4 gen() 18553 MB/s
Sep 9 23:41:24.703979 kernel: raid6: .... xor() 15150 MB/s, rmw enabled
Sep 9 23:41:24.704036 kernel: raid6: using neon recovery algorithm
Sep 9 23:41:24.709226 kernel: xor: measuring software checksum speed
Sep 9 23:41:24.714124 kernel: 8regs : 27224 MB/sec
Sep 9 23:41:24.714132 kernel: 32regs : 29114 MB/sec
Sep 9 23:41:24.717285 kernel: arm64_neon : 36893 MB/sec
Sep 9 23:41:24.720000 kernel: xor: using function: arm64_neon (36893 MB/sec)
Sep 9 23:41:24.757517 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 23:41:24.761451 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 23:41:24.770314 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 23:41:24.797031 systemd-udevd[474]: Using default interface naming scheme 'v255'.
Sep 9 23:41:24.799938 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 23:41:24.813268 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 23:41:24.834045 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation
Sep 9 23:41:24.852236 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 23:41:24.861559 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 23:41:24.903628 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:41:24.912921 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 23:41:24.975871 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 23:41:24.979569 kernel: hv_vmbus: Vmbus version:5.3
Sep 9 23:41:24.979374 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:41:24.986725 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:41:25.041815 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 9 23:41:25.041835 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 9 23:41:25.041849 kernel: hv_vmbus: registering driver hid_hyperv
Sep 9 23:41:25.041856 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 9 23:41:25.041867 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Sep 9 23:41:25.041874 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Sep 9 23:41:25.041886 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 9 23:41:25.048260 kernel: hv_vmbus: registering driver hv_netvsc
Sep 9 23:41:25.048273 kernel: hv_vmbus: registering driver hv_storvsc
Sep 9 23:41:25.048280 kernel: PTP clock support registered
Sep 9 23:41:25.048287 kernel: scsi host0: storvsc_host_t
Sep 9 23:41:25.048311 kernel: scsi host1: storvsc_host_t
Sep 9 23:41:25.021440 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:41:25.059159 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 9 23:41:25.055056 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 23:41:25.070275 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Sep 9 23:41:25.065801 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 23:41:25.065889 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:41:25.171098 kernel: hv_utils: Registering HyperV Utility Driver
Sep 9 23:41:25.171119 kernel: hv_vmbus: registering driver hv_utils
Sep 9 23:41:25.171126 kernel: hv_utils: Heartbeat IC version 3.0
Sep 9 23:41:25.171132 kernel: hv_utils: Shutdown IC version 3.2
Sep 9 23:41:25.171139 kernel: hv_utils: TimeSync IC version 4.0
Sep 9 23:41:25.171145 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 9 23:41:25.171265 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 9 23:41:25.171342 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 9 23:41:25.171404 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 9 23:41:25.171463 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 9 23:41:25.171522 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#61 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Sep 9 23:41:25.171584 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#4 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Sep 9 23:41:25.171638 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 9 23:41:25.171645 kernel: hv_netvsc 0022487a-00e0-0022-487a-00e00022487a eth0: VF slot 1 added
Sep 9 23:41:25.171700 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 9 23:41:25.171758 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 9 23:41:25.171827 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 9 23:41:25.074686 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:41:25.179889 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 9 23:41:25.180086 kernel: hv_vmbus: registering driver hv_pci
Sep 9 23:41:25.115199 systemd-resolved[263]: Clock change detected. Flushing caches.
Sep 9 23:41:25.190978 kernel: hv_pci d2fcffa1-1c3e-428a-b969-7acaa1de1b8c: PCI VMBus probing: Using version 0x10004
Sep 9 23:41:25.204224 kernel: hv_pci d2fcffa1-1c3e-428a-b969-7acaa1de1b8c: PCI host bridge to bus 1c3e:00
Sep 9 23:41:25.204362 kernel: pci_bus 1c3e:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Sep 9 23:41:25.208332 kernel: pci_bus 1c3e:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 9 23:41:25.208992 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#204 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 9 23:41:25.217489 kernel: pci 1c3e:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Sep 9 23:41:25.217735 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:41:25.226155 kernel: pci 1c3e:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 9 23:41:25.233986 kernel: pci 1c3e:00:02.0: enabling Extended Tags
Sep 9 23:41:25.248942 kernel: pci 1c3e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 1c3e:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Sep 9 23:41:25.259363 kernel: pci_bus 1c3e:00: busn_res: [bus 00-ff] end is updated to 00
Sep 9 23:41:25.259489 kernel: pci 1c3e:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Sep 9 23:41:25.266586 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#242 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 9 23:41:25.322867 kernel: mlx5_core 1c3e:00:02.0: enabling device (0000 -> 0002)
Sep 9 23:41:25.330236 kernel: mlx5_core 1c3e:00:02.0: PTM is not supported by PCIe
Sep 9 23:41:25.330371 kernel: mlx5_core 1c3e:00:02.0: firmware version: 16.30.5006
Sep 9 23:41:25.500329 kernel: hv_netvsc 0022487a-00e0-0022-487a-00e00022487a eth0: VF registering: eth1
Sep 9 23:41:25.500505 kernel: mlx5_core 1c3e:00:02.0 eth1: joined to eth0
Sep 9 23:41:25.505942 kernel: mlx5_core 1c3e:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Sep 9 23:41:25.513912 kernel: mlx5_core 1c3e:00:02.0 enP7230s1: renamed from eth1
Sep 9 23:41:25.717278 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Sep 9 23:41:25.772504 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Sep 9 23:41:25.777686 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Sep 9 23:41:25.795516 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 23:41:25.811327 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Sep 9 23:41:25.830238 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 9 23:41:25.951799 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 23:41:25.960311 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 23:41:25.964800 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:41:25.973590 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 23:41:25.986062 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 23:41:26.006470 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 23:41:26.832757 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#232 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Sep 9 23:41:26.843957 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 9 23:41:26.844718 disk-uuid[645]: The operation has completed successfully.
Sep 9 23:41:26.911621 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 23:41:26.915277 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 23:41:26.945376 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 23:41:26.963206 sh[821]: Success
Sep 9 23:41:27.022845 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 23:41:27.022902 kernel: device-mapper: uevent: version 1.0.3
Sep 9 23:41:27.041911 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 9 23:41:27.050925 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 9 23:41:27.377886 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 23:41:27.386882 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 23:41:27.399493 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 23:41:27.420386 kernel: BTRFS: device fsid 2bc16190-0dd5-44d6-b331-3d703f5a1d1f devid 1 transid 40 /dev/mapper/usr (254:0) scanned by mount (839)
Sep 9 23:41:27.420424 kernel: BTRFS info (device dm-0): first mount of filesystem 2bc16190-0dd5-44d6-b331-3d703f5a1d1f
Sep 9 23:41:27.424302 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:41:27.791352 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 23:41:27.791435 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 9 23:41:27.848847 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 23:41:27.852427 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 23:41:27.859468 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 23:41:27.860087 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 23:41:27.880603 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 23:41:27.907934 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (876)
Sep 9 23:41:27.917704 kernel: BTRFS info (device sda6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:41:27.917735 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:41:27.965669 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 23:41:27.983549 kernel: BTRFS info (device sda6): turning on async discard
Sep 9 23:41:27.983567 kernel: BTRFS info (device sda6): enabling free space tree
Sep 9 23:41:27.983575 kernel: BTRFS info (device sda6): last unmount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:41:27.983494 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 23:41:27.988921 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 23:41:28.003715 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 23:41:28.027007 systemd-networkd[1006]: lo: Link UP
Sep 9 23:41:28.027019 systemd-networkd[1006]: lo: Gained carrier
Sep 9 23:41:28.027968 systemd-networkd[1006]: Enumeration completed
Sep 9 23:41:28.029586 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 23:41:28.029764 systemd-networkd[1006]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:41:28.029766 systemd-networkd[1006]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 23:41:28.037484 systemd[1]: Reached target network.target - Network.
Sep 9 23:41:28.114703 kernel: mlx5_core 1c3e:00:02.0 enP7230s1: Link up
Sep 9 23:41:28.114993 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 9 23:41:28.146911 kernel: hv_netvsc 0022487a-00e0-0022-487a-00e00022487a eth0: Data path switched to VF: enP7230s1
Sep 9 23:41:28.147308 systemd-networkd[1006]: enP7230s1: Link UP
Sep 9 23:41:28.147368 systemd-networkd[1006]: eth0: Link UP
Sep 9 23:41:28.147434 systemd-networkd[1006]: eth0: Gained carrier
Sep 9 23:41:28.147448 systemd-networkd[1006]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:41:28.153021 systemd-networkd[1006]: enP7230s1: Gained carrier
Sep 9 23:41:28.168930 systemd-networkd[1006]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 9 23:41:29.321136 ignition[1009]: Ignition 2.21.0
Sep 9 23:41:29.321148 ignition[1009]: Stage: fetch-offline
Sep 9 23:41:29.321222 ignition[1009]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:41:29.328929 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 23:41:29.321228 ignition[1009]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 23:41:29.342032 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 9 23:41:29.321316 ignition[1009]: parsed url from cmdline: ""
Sep 9 23:41:29.321318 ignition[1009]: no config URL provided
Sep 9 23:41:29.321321 ignition[1009]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 23:41:29.321327 ignition[1009]: no config at "/usr/lib/ignition/user.ign"
Sep 9 23:41:29.321330 ignition[1009]: failed to fetch config: resource requires networking
Sep 9 23:41:29.323911 ignition[1009]: Ignition finished successfully
Sep 9 23:41:29.370036 ignition[1020]: Ignition 2.21.0
Sep 9 23:41:29.370041 ignition[1020]: Stage: fetch
Sep 9 23:41:29.370206 ignition[1020]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:41:29.370213 ignition[1020]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 23:41:29.370327 ignition[1020]: parsed url from cmdline: ""
Sep 9 23:41:29.370331 ignition[1020]: no config URL provided
Sep 9 23:41:29.370337 ignition[1020]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 23:41:29.370346 ignition[1020]: no config at "/usr/lib/ignition/user.ign"
Sep 9 23:41:29.370385 ignition[1020]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 9 23:41:29.461495 ignition[1020]: GET result: OK
Sep 9 23:41:29.461601 ignition[1020]: config has been read from IMDS userdata
Sep 9 23:41:29.461628 ignition[1020]: parsing config with SHA512: a6935f4f4ab28fb5f442c512384049c4655f707a4df06abad0c10d83e762fb9d2824baff076501e02ccdc4209e87e0d392c1153a7d0c7fa450a75264983eed2d
Sep 9 23:41:29.469016 unknown[1020]: fetched base config from "system"
Sep 9 23:41:29.469036 unknown[1020]: fetched base config from "system"
Sep 9 23:41:29.469293 ignition[1020]: fetch: fetch complete
Sep 9 23:41:29.469042 unknown[1020]: fetched user config from "azure"
Sep 9 23:41:29.469298 ignition[1020]: fetch: fetch passed
Sep 9 23:41:29.473698 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 9 23:41:29.469340 ignition[1020]: Ignition finished successfully
Sep 9 23:41:29.479722 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 23:41:29.512505 ignition[1026]: Ignition 2.21.0
Sep 9 23:41:29.512515 ignition[1026]: Stage: kargs
Sep 9 23:41:29.512644 ignition[1026]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:41:29.519931 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 23:41:29.512649 ignition[1026]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 23:41:29.528008 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 23:41:29.515269 ignition[1026]: kargs: kargs passed
Sep 9 23:41:29.515307 ignition[1026]: Ignition finished successfully
Sep 9 23:41:29.552052 ignition[1032]: Ignition 2.21.0
Sep 9 23:41:29.552067 ignition[1032]: Stage: disks
Sep 9 23:41:29.552209 ignition[1032]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:41:29.552216 ignition[1032]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 23:41:29.557534 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 23:41:29.552722 ignition[1032]: disks: disks passed
Sep 9 23:41:29.564465 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 23:41:29.552761 ignition[1032]: Ignition finished successfully
Sep 9 23:41:29.572434 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 23:41:29.580267 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 23:41:29.586198 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 23:41:29.593553 systemd[1]: Reached target basic.target - Basic System.
Sep 9 23:41:29.600450 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 23:41:29.687421 systemd-fsck[1040]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Sep 9 23:41:29.695532 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 23:41:29.701139 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 23:41:29.945094 systemd-networkd[1006]: eth0: Gained IPv6LL
Sep 9 23:41:31.824913 kernel: EXT4-fs (sda9): mounted filesystem 7cc0d7f3-e4a1-4dc4-8b58-ceece0d874c1 r/w with ordered data mode. Quota mode: none.
Sep 9 23:41:31.825376 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 23:41:31.828765 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 23:41:31.862706 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:41:31.879502 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 23:41:31.888023 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 9 23:41:31.898414 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 23:41:31.898444 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 23:41:31.904431 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 23:41:31.917302 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 23:41:31.946338 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1054)
Sep 9 23:41:31.946385 kernel: BTRFS info (device sda6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:41:31.950468 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:41:31.958986 kernel: BTRFS info (device sda6): turning on async discard
Sep 9 23:41:31.959020 kernel: BTRFS info (device sda6): enabling free space tree
Sep 9 23:41:31.960944 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:41:32.523374 coreos-metadata[1056]: Sep 09 23:41:32.523 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 9 23:41:32.530785 coreos-metadata[1056]: Sep 09 23:41:32.530 INFO Fetch successful
Sep 9 23:41:32.530785 coreos-metadata[1056]: Sep 09 23:41:32.530 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 9 23:41:32.543018 coreos-metadata[1056]: Sep 09 23:41:32.542 INFO Fetch successful
Sep 9 23:41:32.547971 coreos-metadata[1056]: Sep 09 23:41:32.543 INFO wrote hostname ci-4426.0.0-n-d9fce76d1d to /sysroot/etc/hostname
Sep 9 23:41:32.548323 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 9 23:41:32.784602 initrd-setup-root[1084]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 23:41:32.835525 initrd-setup-root[1091]: cut: /sysroot/etc/group: No such file or directory
Sep 9 23:41:32.852313 initrd-setup-root[1098]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 23:41:32.857476 initrd-setup-root[1105]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 23:41:33.871170 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 23:41:33.876823 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 23:41:33.894532 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 23:41:33.911732 kernel: BTRFS info (device sda6): last unmount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:41:33.906842 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 23:41:33.934928 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 23:41:33.942448 ignition[1173]: INFO : Ignition 2.21.0
Sep 9 23:41:33.942448 ignition[1173]: INFO : Stage: mount
Sep 9 23:41:33.942448 ignition[1173]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:41:33.942448 ignition[1173]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 23:41:33.942448 ignition[1173]: INFO : mount: mount passed
Sep 9 23:41:33.942448 ignition[1173]: INFO : Ignition finished successfully
Sep 9 23:41:33.938999 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 23:41:33.947283 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 23:41:33.980992 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:41:33.999929 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1185)
Sep 9 23:41:34.009722 kernel: BTRFS info (device sda6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:41:34.009740 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:41:34.018748 kernel: BTRFS info (device sda6): turning on async discard
Sep 9 23:41:34.018774 kernel: BTRFS info (device sda6): enabling free space tree
Sep 9 23:41:34.020260 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:41:34.047319 ignition[1203]: INFO : Ignition 2.21.0
Sep 9 23:41:34.047319 ignition[1203]: INFO : Stage: files
Sep 9 23:41:34.053731 ignition[1203]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:41:34.053731 ignition[1203]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 23:41:34.053731 ignition[1203]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 23:41:34.072841 ignition[1203]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 23:41:34.072841 ignition[1203]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 23:41:34.116448 ignition[1203]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 23:41:34.121836 ignition[1203]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 23:41:34.121836 ignition[1203]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 23:41:34.116941 unknown[1203]: wrote ssh authorized keys file for user: core
Sep 9 23:41:34.154999 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 9 23:41:34.163954 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 9 23:41:34.252781 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 23:41:35.163638 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 9 23:41:35.171639 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 23:41:35.171639 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 9 23:41:35.401271 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 9 23:41:35.608608 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 23:41:35.608608 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 23:41:35.621732 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 23:41:35.621732 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 23:41:35.621732 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 23:41:35.621732 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 23:41:35.621732 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 23:41:35.621732 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 23:41:35.621732 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 23:41:35.667616 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:41:35.667616 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:41:35.667616 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:41:35.667616 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:41:35.667616 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:41:35.667616 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 9 23:41:36.173543 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 9 23:41:37.081690 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:41:37.081690 ignition[1203]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 9 23:41:37.112126 ignition[1203]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 23:41:37.127873 ignition[1203]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 23:41:37.127873 ignition[1203]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 9 23:41:37.127873 ignition[1203]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 23:41:37.154731 ignition[1203]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 23:41:37.154731 ignition[1203]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:41:37.154731 ignition[1203]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:41:37.154731 ignition[1203]: INFO : files: files passed
Sep 9 23:41:37.154731 ignition[1203]: INFO : Ignition finished successfully
Sep 9 23:41:37.136801 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 23:41:37.145859 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 23:41:37.174384 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 23:41:37.187409 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 23:41:37.193460 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 23:41:37.216923 initrd-setup-root-after-ignition[1232]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:41:37.216923 initrd-setup-root-after-ignition[1232]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:41:37.229650 initrd-setup-root-after-ignition[1236]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:41:37.224274 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 23:41:37.234836 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 23:41:37.245333 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 23:41:37.296653 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 23:41:37.296788 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 23:41:37.305228 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 23:41:37.312949 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 23:41:37.320748 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 23:41:37.321406 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 23:41:37.351958 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 23:41:37.358025 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 23:41:37.379841 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:41:37.384733 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:41:37.393141 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 23:41:37.401345 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 23:41:37.401445 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 23:41:37.412375 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 23:41:37.420596 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 23:41:37.427495 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 23:41:37.435334 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 23:41:37.443465 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 23:41:37.451658 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 23:41:37.459551 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 23:41:37.467374 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 23:41:37.475472 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 23:41:37.484252 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 23:41:37.492032 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 23:41:37.498457 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 23:41:37.498559 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 23:41:37.508805 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:41:37.513099 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:41:37.521546 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 23:41:37.525288 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:41:37.530193 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 23:41:37.530281 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 23:41:37.542468 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 23:41:37.542556 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 23:41:37.547276 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 23:41:37.547349 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 23:41:37.556307 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 9 23:41:37.556372 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 9 23:41:37.609972 ignition[1256]: INFO : Ignition 2.21.0
Sep 9 23:41:37.609972 ignition[1256]: INFO : Stage: umount
Sep 9 23:41:37.609972 ignition[1256]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:41:37.609972 ignition[1256]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 23:41:37.609972 ignition[1256]: INFO : umount: umount passed
Sep 9 23:41:37.609972 ignition[1256]: INFO : Ignition finished successfully
Sep 9 23:41:37.565317 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 23:41:37.592754 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 23:41:37.602087 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 23:41:37.606848 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:41:37.612008 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 23:41:37.612121 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 23:41:37.622787 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 23:41:37.624997 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 23:41:37.632544 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 23:41:37.632628 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 23:41:37.641820 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 23:41:37.641862 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 23:41:37.648701 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 23:41:37.648735 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 23:41:37.656508 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 9 23:41:37.656544 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 9 23:41:37.663583 systemd[1]: Stopped target network.target - Network.
Sep 9 23:41:37.670840 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 23:41:37.670888 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 23:41:37.680557 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 23:41:37.687946 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 23:41:37.695914 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:41:37.704569 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 23:41:37.712062 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 23:41:37.721460 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 23:41:37.721510 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 23:41:37.729078 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 23:41:37.729112 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 23:41:37.736381 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 23:41:37.736431 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 23:41:37.743528 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 23:41:37.743555 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 23:41:37.751007 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 23:41:37.757821 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 23:41:37.769678 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 23:41:37.770181 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 23:41:37.770251 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 23:41:37.777672 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 23:41:37.777762 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 23:41:37.789611 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 23:41:37.789771 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 23:41:37.789852 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 23:41:37.804048 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 23:41:37.805312 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 9 23:41:37.811991 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 23:41:37.812032 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 23:41:37.821005 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 23:41:37.821055 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 23:41:37.980404 kernel: hv_netvsc 0022487a-00e0-0022-487a-00e00022487a eth0: Data path switched from VF: enP7230s1
Sep 9 23:41:37.830123 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 23:41:37.842258 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 23:41:37.842324 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 23:41:37.850511 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 23:41:37.850553 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:41:37.862045 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 23:41:37.862087 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 23:41:37.866233 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 23:41:37.866270 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:41:37.878346 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 23:41:37.886372 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 23:41:37.886426 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 23:41:37.904451 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 23:41:37.912863 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 23:41:37.921343 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 23:41:37.921378 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 23:41:37.928715 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 23:41:37.928744 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 23:41:37.936830 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 23:41:37.936878 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 23:41:37.949592 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 23:41:37.949635 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 23:41:37.957129 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 23:41:37.957176 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 23:41:37.980407 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 23:41:37.987599 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 23:41:37.987659 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 23:41:37.999430 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 23:41:37.999471 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 23:41:38.004580 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 9 23:41:38.004624 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 23:41:38.009722 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 23:41:38.009754 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:41:38.020565 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 23:41:38.020607 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:41:38.032838 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 9 23:41:38.032876 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 9 23:41:38.032936 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 9 23:41:38.032962 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 23:41:38.033205 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 23:41:38.208270 systemd-journald[224]: Received SIGTERM from PID 1 (systemd).
Sep 9 23:41:38.033285 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 23:41:38.039940 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 23:41:38.040006 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 23:41:38.048624 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 23:41:38.057626 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 23:41:38.110039 systemd[1]: Switching root.
Sep 9 23:41:38.230503 systemd-journald[224]: Journal stopped
Sep 9 23:41:47.280211 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 23:41:47.280230 kernel: SELinux: policy capability open_perms=1
Sep 9 23:41:47.280238 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 23:41:47.280243 kernel: SELinux: policy capability always_check_network=0
Sep 9 23:41:47.280250 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 23:41:47.280256 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 23:41:47.280262 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 23:41:47.280268 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 23:41:47.280273 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 23:41:47.280278 kernel: audit: type=1403 audit(1757461301.045:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 23:41:47.280285 systemd[1]: Successfully loaded SELinux policy in 239.008ms.
Sep 9 23:41:47.280293 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.315ms.
Sep 9 23:41:47.280300 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 23:41:47.280306 systemd[1]: Detected virtualization microsoft.
Sep 9 23:41:47.280313 systemd[1]: Detected architecture arm64.
Sep 9 23:41:47.280319 systemd[1]: Detected first boot.
Sep 9 23:41:47.280326 systemd[1]: Hostname set to .
Sep 9 23:41:47.280332 systemd[1]: Initializing machine ID from random generator.
Sep 9 23:41:47.280338 zram_generator::config[1299]: No configuration found.
Sep 9 23:41:47.280344 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 23:41:47.280351 systemd[1]: Populated /etc with preset unit settings.
Sep 9 23:41:47.280358 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 23:41:47.280364 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 23:41:47.280370 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 23:41:47.280376 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 23:41:47.280382 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 23:41:47.280389 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 23:41:47.280395 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 23:41:47.280401 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 23:41:47.280408 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 23:41:47.280414 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 23:41:47.280420 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 23:41:47.280426 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 23:41:47.280432 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:41:47.280439 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:41:47.280445 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 23:41:47.280451 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 23:41:47.280457 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 23:41:47.280464 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 23:41:47.280470 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 9 23:41:47.280477 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:41:47.280485 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:41:47.280492 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 23:41:47.280498 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 23:41:47.280504 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 23:41:47.280511 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 23:41:47.280517 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:41:47.280523 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 23:41:47.280529 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 23:41:47.280536 systemd[1]: Reached target swap.target - Swaps.
Sep 9 23:41:47.280542 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 23:41:47.280548 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 23:41:47.280555 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 23:41:47.280562 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 23:41:47.280568 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 23:41:47.280574 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 23:41:47.280580 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 23:41:47.280587 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 23:41:47.280594 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 23:41:47.280600 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 23:41:47.280606 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 23:41:47.280612 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 23:41:47.280619 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 23:41:47.280626 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 23:41:47.280632 systemd[1]: Reached target machines.target - Containers.
Sep 9 23:41:47.280638 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 23:41:47.280646 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 23:41:47.280652 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 23:41:47.280659 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 23:41:47.280665 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 23:41:47.280671 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 23:41:47.280677 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 23:41:47.280684 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 23:41:47.280690 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 23:41:47.280696 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 23:41:47.280704 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 23:41:47.280710 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 23:41:47.280716 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 23:41:47.280723 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 23:41:47.280728 kernel: fuse: init (API version 7.41)
Sep 9 23:41:47.280735 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 23:41:47.280741 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 23:41:47.280747 kernel: loop: module loaded
Sep 9 23:41:47.280754 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 23:41:47.280761 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 23:41:47.280767 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 23:41:47.280774 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 9 23:41:47.280780 kernel: ACPI: bus type drm_connector registered
Sep 9 23:41:47.280786 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 23:41:47.280792 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 23:41:47.280813 systemd-journald[1389]: Collecting audit messages is disabled.
Sep 9 23:41:47.280828 systemd[1]: Stopped verity-setup.service.
Sep 9 23:41:47.280835 systemd-journald[1389]: Journal started
Sep 9 23:41:47.280849 systemd-journald[1389]: Runtime Journal (/run/log/journal/8ade48dbd75844778a9983de39871c6e) is 8M, max 78.5M, 70.5M free.
Sep 9 23:41:47.299505 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 23:41:47.299549 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 23:41:46.466410 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 23:41:46.473308 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 9 23:41:46.473670 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 23:41:46.473935 systemd[1]: systemd-journald.service: Consumed 2.261s CPU time.
Sep 9 23:41:47.310968 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 23:41:47.311536 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 23:41:47.316079 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 23:41:47.320271 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 23:41:47.324827 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 23:41:47.328826 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 23:41:47.335307 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:41:47.340320 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 23:41:47.340458 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 23:41:47.345769 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 23:41:47.345911 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 23:41:47.350712 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 23:41:47.351996 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 23:41:47.356262 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 23:41:47.356406 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 23:41:47.361327 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 23:41:47.361448 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 23:41:47.365739 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 23:41:47.365849 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 23:41:47.372004 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 23:41:47.376536 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 23:41:47.382052 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 23:41:47.387428 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 9 23:41:47.400712 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 23:41:47.405788 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 23:41:47.417803 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 23:41:47.422270 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 23:41:47.422304 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 23:41:47.427296 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 9 23:41:47.432891 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 23:41:47.436721 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 23:41:47.450685 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 23:41:47.463069 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 23:41:47.467134 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 23:41:47.467920 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 23:41:47.471980 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 23:41:47.473170 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 23:41:47.478058 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 23:41:47.482946 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 23:41:47.490125 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:41:47.496526 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 23:41:47.501619 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 23:41:47.508919 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 23:41:47.517428 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 23:41:47.522710 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 23:41:47.577686 systemd-journald[1389]: Time spent on flushing to /var/log/journal/8ade48dbd75844778a9983de39871c6e is 10.921ms for 947 entries.
Sep 9 23:41:47.577686 systemd-journald[1389]: System Journal (/var/log/journal/8ade48dbd75844778a9983de39871c6e) is 8M, max 2.6G, 2.6G free.
Sep 9 23:41:47.624266 systemd-journald[1389]: Received client request to flush runtime journal.
Sep 9 23:41:47.624317 kernel: loop0: detected capacity change from 0 to 100608
Sep 9 23:41:47.625552 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 23:41:47.634667 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 23:41:47.635350 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 23:41:47.649216 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:41:47.756011 systemd-tmpfiles[1439]: ACLs are not supported, ignoring.
Sep 9 23:41:47.756024 systemd-tmpfiles[1439]: ACLs are not supported, ignoring.
Sep 9 23:41:47.758641 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 23:41:47.764995 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 23:41:48.066919 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 23:41:48.143930 kernel: loop1: detected capacity change from 0 to 29264
Sep 9 23:41:48.408970 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 23:41:48.414611 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 23:41:48.429214 systemd-tmpfiles[1457]: ACLs are not supported, ignoring.
Sep 9 23:41:48.429442 systemd-tmpfiles[1457]: ACLs are not supported, ignoring.
Sep 9 23:41:48.431544 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 23:41:48.707932 kernel: loop2: detected capacity change from 0 to 119320
Sep 9 23:41:49.131924 kernel: loop3: detected capacity change from 0 to 207008
Sep 9 23:41:49.166924 kernel: loop4: detected capacity change from 0 to 100608
Sep 9 23:41:49.190915 kernel: loop5: detected capacity change from 0 to 29264
Sep 9 23:41:49.209912 kernel: loop6: detected capacity change from 0 to 119320
Sep 9 23:41:49.210228 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 23:41:49.217034 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 23:41:49.224937 kernel: loop7: detected capacity change from 0 to 207008
Sep 9 23:41:49.234801 (sd-merge)[1463]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Sep 9 23:41:49.235436 (sd-merge)[1463]: Merged extensions into '/usr'.
Sep 9 23:41:49.239381 systemd[1]: Reload requested from client PID 1437 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 23:41:49.239391 systemd[1]: Reloading...
Sep 9 23:41:49.246464 systemd-udevd[1465]: Using default interface naming scheme 'v255'.
Sep 9 23:41:49.286957 zram_generator::config[1487]: No configuration found.
Sep 9 23:41:49.469955 systemd[1]: Reloading finished in 230 ms.
Sep 9 23:41:49.498888 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 23:41:49.513707 systemd[1]: Starting ensure-sysext.service...
Sep 9 23:41:49.519029 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 23:41:49.552334 systemd[1]: Reload requested from client PID 1546 ('systemctl') (unit ensure-sysext.service)...
Sep 9 23:41:49.552711 systemd[1]: Reloading...
Sep 9 23:41:49.562724 systemd-tmpfiles[1547]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 9 23:41:49.562747 systemd-tmpfiles[1547]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 9 23:41:49.563262 systemd-tmpfiles[1547]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 23:41:49.563493 systemd-tmpfiles[1547]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 23:41:49.564021 systemd-tmpfiles[1547]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 23:41:49.564260 systemd-tmpfiles[1547]: ACLs are not supported, ignoring.
Sep 9 23:41:49.564369 systemd-tmpfiles[1547]: ACLs are not supported, ignoring.
Sep 9 23:41:49.613931 zram_generator::config[1575]: No configuration found.
Sep 9 23:41:49.623179 systemd-tmpfiles[1547]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 23:41:49.623191 systemd-tmpfiles[1547]: Skipping /boot
Sep 9 23:41:49.628728 systemd-tmpfiles[1547]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 23:41:49.628744 systemd-tmpfiles[1547]: Skipping /boot
Sep 9 23:41:49.742657 systemd[1]: Reloading finished in 189 ms.
Sep 9 23:41:49.765445 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:41:49.778068 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 23:41:49.877049 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 23:41:49.891103 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 23:41:49.897554 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 23:41:49.905924 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 23:41:49.915019 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 23:41:49.915987 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 23:41:49.927469 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 23:41:49.935312 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 23:41:49.940508 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 23:41:49.940709 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 23:41:49.941920 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 23:41:49.942045 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 23:41:49.947539 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 23:41:49.947728 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 23:41:49.953210 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 23:41:49.953964 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 23:41:49.967526 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 23:41:49.973455 systemd[1]: Finished ensure-sysext.service.
Sep 9 23:41:49.978982 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv...
Sep 9 23:41:49.982844 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 23:41:49.983774 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 23:41:49.991033 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 23:41:49.996047 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 23:41:50.006416 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 23:41:50.012028 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 23:41:50.012171 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 23:41:50.012544 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 23:41:50.017571 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 23:41:50.022131 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 23:41:50.022287 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 23:41:50.027409 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 23:41:50.027548 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 23:41:50.031837 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 23:41:50.032088 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 23:41:50.037085 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 23:41:50.037217 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 23:41:50.043530 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 23:41:50.043598 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 23:41:50.046389 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 23:41:50.056109 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 23:41:50.081162 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 23:41:50.202969 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 9 23:41:50.243231 augenrules[1723]: No rules
Sep 9 23:41:50.244728 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 23:41:50.244928 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 23:41:50.263924 kernel: mousedev: PS/2 mouse device common for all mice
Sep 9 23:41:50.270084 systemd-resolved[1636]: Positive Trust Anchors:
Sep 9 23:41:50.270317 systemd-resolved[1636]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 23:41:50.270341 systemd-resolved[1636]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 23:41:50.284033 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#209 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 9 23:41:50.287749 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 23:41:50.302676 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
Sep 9 23:41:50.335068 kernel: hv_vmbus: registering driver hv_balloon
Sep 9 23:41:50.335138 kernel: hv_vmbus: registering driver hyperv_fb
Sep 9 23:41:50.335159 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 9 23:41:50.335167 kernel: hv_balloon: Memory hot add disabled on ARM64
Sep 9 23:41:50.340732 systemd-networkd[1671]: lo: Link UP
Sep 9 23:41:50.340737 systemd-networkd[1671]: lo: Gained carrier
Sep 9 23:41:50.342152 systemd-networkd[1671]: Enumeration completed
Sep 9 23:41:50.342250 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 23:41:50.343175 systemd-networkd[1671]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:41:50.343178 systemd-networkd[1671]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 23:41:50.357722 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 9 23:41:50.357799 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 9 23:41:50.353427 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 9 23:41:50.364549 kernel: Console: switching to colour dummy device 80x25
Sep 9 23:41:50.364462 systemd-resolved[1636]: Using system hostname 'ci-4426.0.0-n-d9fce76d1d'.
Sep 9 23:41:50.369864 kernel: Console: switching to colour frame buffer device 128x48
Sep 9 23:41:50.370125 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 23:41:50.397024 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:41:50.418082 kernel: mlx5_core 1c3e:00:02.0 enP7230s1: Link up
Sep 9 23:41:50.418315 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 9 23:41:50.430410 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 23:41:50.430584 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:41:50.437582 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 23:41:50.440472 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:41:50.453914 kernel: hv_netvsc 0022487a-00e0-0022-487a-00e00022487a eth0: Data path switched to VF: enP7230s1
Sep 9 23:41:50.453931 systemd-networkd[1671]: enP7230s1: Link UP
Sep 9 23:41:50.454051 systemd-networkd[1671]: eth0: Link UP
Sep 9 23:41:50.454053 systemd-networkd[1671]: eth0: Gained carrier
Sep 9 23:41:50.454073 systemd-networkd[1671]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:41:50.456517 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 23:41:50.461498 systemd-networkd[1671]: enP7230s1: Gained carrier
Sep 9 23:41:50.464174 systemd[1]: Reached target network.target - Network.
Sep 9 23:41:50.469608 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:41:50.481164 systemd-networkd[1671]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 9 23:41:50.485438 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 9 23:41:50.512001 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 9 23:41:50.522495 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 23:41:50.574926 kernel: MACsec IEEE 802.1AE
Sep 9 23:41:50.607455 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 23:41:51.668528 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:41:52.345061 systemd-networkd[1671]: eth0: Gained IPv6LL
Sep 9 23:41:52.347233 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 23:41:52.352048 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 23:41:52.465019 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 9 23:41:52.470476 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 23:41:56.402889 ldconfig[1432]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 23:41:56.416983 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 23:41:56.423350 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 23:41:56.461530 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 23:41:56.466315 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 23:41:56.470451 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 9 23:41:56.475015 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 9 23:41:56.479672 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 23:41:56.483868 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 23:41:56.488592 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 23:41:56.493143 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 23:41:56.493168 systemd[1]: Reached target paths.target - Path Units.
Sep 9 23:41:56.497371 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 23:41:56.526260 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 23:41:56.532373 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 23:41:56.537187 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 23:41:56.541930 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 23:41:56.547163 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 23:41:56.552364 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 23:41:56.556307 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 23:41:56.561555 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 23:41:56.565631 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 23:41:56.569124 systemd[1]: Reached target basic.target - Basic System.
Sep 9 23:41:56.572498 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 23:41:56.572522 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 23:41:56.588382 systemd[1]: Starting chronyd.service - NTP client/server...
Sep 9 23:41:56.599993 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 23:41:56.607130 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 9 23:41:56.615007 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 23:41:56.627704 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 23:41:56.635001 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 23:41:56.647944 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 23:41:56.653012 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 23:41:56.653816 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Sep 9 23:41:56.658224 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Sep 9 23:41:56.659037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:41:56.664972 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 23:41:56.670114 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 23:41:56.674778 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 23:41:56.680019 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 23:41:56.685238 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 23:41:56.691346 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 23:41:56.695667 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 23:41:56.696050 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 23:41:56.698061 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 23:41:56.702682 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 23:41:56.704824 jq[1840]: false
Sep 9 23:41:56.718970 KVP[1842]: KVP starting; pid is:1842
Sep 9 23:41:56.722951 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 23:41:56.729222 jq[1851]: true
Sep 9 23:41:56.732935 kernel: hv_utils: KVP IC version 4.0
Sep 9 23:41:56.732509 KVP[1842]: KVP LIC Version: 3.1
Sep 9 23:41:56.732844 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 23:41:56.734941 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 23:41:56.738552 chronyd[1832]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Sep 9 23:41:56.739305 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 23:41:56.740945 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 23:41:56.760924 extend-filesystems[1841]: Found /dev/sda6
Sep 9 23:41:56.776477 jq[1863]: true
Sep 9 23:41:56.780326 extend-filesystems[1841]: Found /dev/sda9
Sep 9 23:41:56.795088 extend-filesystems[1841]: Checking size of /dev/sda9
Sep 9 23:41:56.786493 chronyd[1832]: Timezone right/UTC failed leap second check, ignoring
Sep 9 23:41:56.805820 update_engine[1850]: I20250909 23:41:56.788404 1850 main.cc:92] Flatcar Update Engine starting
Sep 9 23:41:56.786750 systemd[1]: Started chronyd.service - NTP client/server.
Sep 9 23:41:56.786648 chronyd[1832]: Loaded seccomp filter (level 2)
Sep 9 23:41:56.799849 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 23:41:56.800055 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 23:41:56.800602 (ntainerd)[1876]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 23:41:56.810451 systemd-logind[1849]: New seat seat0.
Sep 9 23:41:56.811005 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 23:41:56.811505 systemd-logind[1849]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Sep 9 23:41:56.817182 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 23:41:56.825071 tar[1861]: linux-arm64/LICENSE
Sep 9 23:41:56.825284 tar[1861]: linux-arm64/helm
Sep 9 23:41:56.848995 extend-filesystems[1841]: Old size kept for /dev/sda9
Sep 9 23:41:56.851197 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 23:41:56.851421 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 23:41:56.904354 bash[1902]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 23:41:56.905292 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 23:41:56.920317 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 9 23:41:57.122734 dbus-daemon[1835]: [system] SELinux support is enabled
Sep 9 23:41:57.124316 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 23:41:57.131050 update_engine[1850]: I20250909 23:41:57.131000 1850 update_check_scheduler.cc:74] Next update check in 10m56s
Sep 9 23:41:57.133457 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 23:41:57.134507 dbus-daemon[1835]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 9 23:41:57.133482 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 23:41:57.142936 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 23:41:57.142956 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 23:41:57.151201 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 23:41:57.163231 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 23:41:57.227219 coreos-metadata[1834]: Sep 09 23:41:57.227 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 9 23:41:57.233951 coreos-metadata[1834]: Sep 09 23:41:57.233 INFO Fetch successful
Sep 9 23:41:57.233951 coreos-metadata[1834]: Sep 09 23:41:57.233 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Sep 9 23:41:57.236002 coreos-metadata[1834]: Sep 09 23:41:57.235 INFO Fetch successful
Sep 9 23:41:57.236509 coreos-metadata[1834]: Sep 09 23:41:57.236 INFO Fetching http://168.63.129.16/machine/0d06cc19-ecc9-4386-9806-4e0556206327/2b5a8df5%2Dbe49%2D4a66%2D93aa%2Df61307420ecd.%5Fci%2D4426.0.0%2Dn%2Dd9fce76d1d?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Sep 9 23:41:57.238576 coreos-metadata[1834]: Sep 09 23:41:57.238 INFO Fetch successful
Sep 9 23:41:57.239807 coreos-metadata[1834]: Sep 09 23:41:57.239 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Sep 9 23:41:57.254494 coreos-metadata[1834]: Sep 09 23:41:57.254 INFO Fetch successful
Sep 9 23:41:57.298425 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 9 23:41:57.308015 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 23:41:57.370869 tar[1861]: linux-arm64/README.md
Sep 9 23:41:57.381636 sshd_keygen[1884]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 23:41:57.382977 locksmithd[1972]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 23:41:57.388923 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 23:41:57.405744 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 23:41:57.414058 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 23:41:57.420149 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Sep 9 23:41:57.430426 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 23:41:57.430569 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 23:41:57.439108 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 23:41:57.454356 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Sep 9 23:41:57.473043 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 23:41:57.480809 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 23:41:57.490048 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 9 23:41:57.496584 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 23:41:57.632673 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:41:57.637656 (kubelet)[2019]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 23:41:57.655137 containerd[1876]: time="2025-09-09T23:41:57Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 23:41:57.657574 containerd[1876]: time="2025-09-09T23:41:57.655679876Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 9 23:41:57.661758 containerd[1876]: time="2025-09-09T23:41:57.661724636Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.352µs"
Sep 9 23:41:57.661758 containerd[1876]: time="2025-09-09T23:41:57.661753692Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 23:41:57.661834 containerd[1876]: time="2025-09-09T23:41:57.661770828Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 23:41:57.661994 containerd[1876]: time="2025-09-09T23:41:57.661958548Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 23:41:57.661994 containerd[1876]: time="2025-09-09T23:41:57.661982100Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 23:41:57.662040 containerd[1876]: time="2025-09-09T23:41:57.662000932Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 23:41:57.662454 containerd[1876]: time="2025-09-09T23:41:57.662050020Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 23:41:57.662454 containerd[1876]: time="2025-09-09T23:41:57.662060180Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 23:41:57.662454 containerd[1876]: time="2025-09-09T23:41:57.662241756Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 23:41:57.662454 containerd[1876]: time="2025-09-09T23:41:57.662252948Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 23:41:57.662454 containerd[1876]: time="2025-09-09T23:41:57.662259684Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 23:41:57.662454 containerd[1876]: time="2025-09-09T23:41:57.662264388Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 23:41:57.662454 containerd[1876]: time="2025-09-09T23:41:57.662323556Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 23:41:57.662555 containerd[1876]: time="2025-09-09T23:41:57.662458228Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 23:41:57.662555 containerd[1876]: time="2025-09-09T23:41:57.662477756Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 23:41:57.662555 containerd[1876]: time="2025-09-09T23:41:57.662483636Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 23:41:57.662555 containerd[1876]: time="2025-09-09T23:41:57.662516836Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 23:41:57.662704 containerd[1876]: time="2025-09-09T23:41:57.662668348Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 23:41:57.662920 containerd[1876]: time="2025-09-09T23:41:57.662739684Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 23:41:57.681093 containerd[1876]: time="2025-09-09T23:41:57.681019772Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 23:41:57.681154 containerd[1876]: time="2025-09-09T23:41:57.681099452Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 9 23:41:57.681154 containerd[1876]: time="2025-09-09T23:41:57.681110724Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 9 23:41:57.681154 containerd[1876]: time="2025-09-09T23:41:57.681130276Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 9 23:41:57.681154 containerd[1876]: time="2025-09-09T23:41:57.681137804Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 9 23:41:57.681154 containerd[1876]: time="2025-09-09T23:41:57.681146436Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 9 23:41:57.681154 containerd[1876]: time="2025-09-09T23:41:57.681154076Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 9 23:41:57.681239 containerd[1876]: time="2025-09-09T23:41:57.681164436Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 9 23:41:57.681239 containerd[1876]: time="2025-09-09T23:41:57.681170852Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 9 23:41:57.681239 containerd[1876]: time="2025-09-09T23:41:57.681176740Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 9 23:41:57.681239 containerd[1876]: time="2025-09-09T23:41:57.681182644Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 9 23:41:57.681239 containerd[1876]: time="2025-09-09T23:41:57.681190300Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 9 23:41:57.681569 containerd[1876]: time="2025-09-09T23:41:57.681320404Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 9 23:41:57.681569 containerd[1876]: time="2025-09-09T23:41:57.681339348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 9 23:41:57.681569 containerd[1876]: time="2025-09-09T23:41:57.681356460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 9 23:41:57.681569 containerd[1876]: time="2025-09-09T23:41:57.681363484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 9 23:41:57.681569 containerd[1876]: time="2025-09-09T23:41:57.681372156Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 9 23:41:57.681569 containerd[1876]: time="2025-09-09T23:41:57.681379476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 9 23:41:57.681569 containerd[1876]: time="2025-09-09T23:41:57.681386676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 9 23:41:57.681569 containerd[1876]: time="2025-09-09T23:41:57.681392948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 9 23:41:57.681569 containerd[1876]: time="2025-09-09T23:41:57.681400172Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 9 23:41:57.681569 containerd[1876]: time="2025-09-09T23:41:57.681406172Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 9 23:41:57.681569 containerd[1876]: time="2025-09-09T23:41:57.681412788Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 9 23:41:57.681569 containerd[1876]: time="2025-09-09T23:41:57.681465372Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 9 23:41:57.681569 containerd[1876]: time="2025-09-09T23:41:57.681476172Z" level=info msg="Start snapshots syncer"
Sep 9 23:41:57.681569 containerd[1876]: time="2025-09-09T23:41:57.681506276Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 9 23:41:57.682121 containerd[1876]: time="2025-09-09T23:41:57.681693964Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 9 23:41:57.682121 containerd[1876]: time="2025-09-09T23:41:57.681728828Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 9 23:41:57.682202 containerd[1876]: time="2025-09-09T23:41:57.681790748Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 9 23:41:57.682202 containerd[1876]: time="2025-09-09T23:41:57.681936732Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 9 23:41:57.682202 containerd[1876]: time="2025-09-09T23:41:57.681954900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 9 23:41:57.682202 containerd[1876]: time="2025-09-09T23:41:57.681961828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 9 23:41:57.682202 containerd[1876]: time="2025-09-09T23:41:57.681970788Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 9 23:41:57.682202 containerd[1876]: time="2025-09-09T23:41:57.681978540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 9 23:41:57.682202 containerd[1876]: time="2025-09-09T23:41:57.681985068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 9 23:41:57.682202 containerd[1876]: time="2025-09-09T23:41:57.681999204Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 9 23:41:57.682202 containerd[1876]: time="2025-09-09T23:41:57.682014980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 9 23:41:57.682202 containerd[1876]: time="2025-09-09T23:41:57.682021684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 9 23:41:57.682202 containerd[1876]: time="2025-09-09T23:41:57.682027764Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 9 23:41:57.682202 containerd[1876]: time="2025-09-09T23:41:57.682091444Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 23:41:57.682202 containerd[1876]: time="2025-09-09T23:41:57.682102268Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 23:41:57.682202 containerd[1876]: time="2025-09-09T23:41:57.682107612Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 23:41:57.682812 containerd[1876]: time="2025-09-09T23:41:57.682113292Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 23:41:57.682812 containerd[1876]: time="2025-09-09T23:41:57.682118036Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 9 23:41:57.682812 containerd[1876]: time="2025-09-09T23:41:57.682123532Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 9 23:41:57.682812 containerd[1876]: time="2025-09-09T23:41:57.682130364Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 9 23:41:57.682812 containerd[1876]: time="2025-09-09T23:41:57.682140724Z" level=info msg="runtime interface created"
Sep 9 23:41:57.682812 containerd[1876]: time="2025-09-09T23:41:57.682146676Z" level=info msg="created NRI interface"
Sep 9 23:41:57.682812 containerd[1876]: time="2025-09-09T23:41:57.682151588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 9 23:41:57.682812 containerd[1876]: time="2025-09-09T23:41:57.682164748Z" level=info msg="Connect containerd service"
Sep 9 23:41:57.682812 containerd[1876]: time="2025-09-09T23:41:57.682182404Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 9 23:41:57.682812 containerd[1876]: time="2025-09-09T23:41:57.682802836Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 23:41:57.961955 kubelet[2019]: E0909 23:41:57.961826 2019 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 23:41:57.964079 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 23:41:57.964343 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 23:41:57.964747 systemd[1]: kubelet.service: Consumed 528ms CPU time, 252.6M memory peak.
Sep 9 23:41:58.079206 containerd[1876]: time="2025-09-09T23:41:58.079152052Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 23:41:58.079206 containerd[1876]: time="2025-09-09T23:41:58.079216644Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 23:41:58.079345 containerd[1876]: time="2025-09-09T23:41:58.079241476Z" level=info msg="Start subscribing containerd event"
Sep 9 23:41:58.079345 containerd[1876]: time="2025-09-09T23:41:58.079273876Z" level=info msg="Start recovering state"
Sep 9 23:41:58.079345 containerd[1876]: time="2025-09-09T23:41:58.079337836Z" level=info msg="Start event monitor"
Sep 9 23:41:58.079387 containerd[1876]: time="2025-09-09T23:41:58.079349772Z" level=info msg="Start cni network conf syncer for default"
Sep 9 23:41:58.079387 containerd[1876]: time="2025-09-09T23:41:58.079354204Z" level=info msg="Start streaming server"
Sep 9 23:41:58.079387 containerd[1876]: time="2025-09-09T23:41:58.079359772Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 9 23:41:58.079387 containerd[1876]: time="2025-09-09T23:41:58.079364196Z" level=info msg="runtime interface starting up..."
Sep 9 23:41:58.079387 containerd[1876]: time="2025-09-09T23:41:58.079368100Z" level=info msg="starting plugins..."
Sep 9 23:41:58.079387 containerd[1876]: time="2025-09-09T23:41:58.079378860Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 9 23:41:58.083542 containerd[1876]: time="2025-09-09T23:41:58.079470148Z" level=info msg="containerd successfully booted in 0.424639s"
Sep 9 23:41:58.079585 systemd[1]: Started containerd.service - containerd container runtime.
Sep 9 23:41:58.084132 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 9 23:41:58.088282 systemd[1]: Startup finished in 1.532s (kernel) + 17.169s (initrd) + 17.280s (userspace) = 35.982s.
Sep 9 23:41:59.039055 login[2013]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Sep 9 23:41:59.039644 login[2012]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:41:59.048866 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 9 23:41:59.049955 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 9 23:41:59.052504 systemd-logind[1849]: New session 2 of user core.
Sep 9 23:41:59.088228 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 9 23:41:59.091106 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 9 23:41:59.129568 (systemd)[2046]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 9 23:41:59.132099 systemd-logind[1849]: New session c1 of user core.
Sep 9 23:41:59.405157 systemd[2046]: Queued start job for default target default.target.
Sep 9 23:41:59.412621 systemd[2046]: Created slice app.slice - User Application Slice.
Sep 9 23:41:59.412654 systemd[2046]: Reached target paths.target - Paths.
Sep 9 23:41:59.412683 systemd[2046]: Reached target timers.target - Timers.
Sep 9 23:41:59.413793 systemd[2046]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 9 23:41:59.423180 systemd[2046]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 9 23:41:59.423309 systemd[2046]: Reached target sockets.target - Sockets.
Sep 9 23:41:59.423430 systemd[2046]: Reached target basic.target - Basic System.
Sep 9 23:41:59.423519 systemd[2046]: Reached target default.target - Main User Target.
Sep 9 23:41:59.423572 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 9 23:41:59.423638 systemd[2046]: Startup finished in 286ms.
Sep 9 23:41:59.424785 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 9 23:41:59.811368 waagent[2009]: 2025-09-09T23:41:59.811136Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
Sep 9 23:41:59.818267 waagent[2009]: 2025-09-09T23:41:59.815374Z INFO Daemon Daemon OS: flatcar 4426.0.0
Sep 9 23:41:59.818542 waagent[2009]: 2025-09-09T23:41:59.818507Z INFO Daemon Daemon Python: 3.11.13
Sep 9 23:41:59.821741 waagent[2009]: 2025-09-09T23:41:59.821703Z INFO Daemon Daemon Run daemon
Sep 9 23:41:59.824865 waagent[2009]: 2025-09-09T23:41:59.824832Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4426.0.0'
Sep 9 23:41:59.831747 waagent[2009]: 2025-09-09T23:41:59.831708Z INFO Daemon Daemon Using waagent for provisioning
Sep 9 23:41:59.835569 waagent[2009]: 2025-09-09T23:41:59.835529Z INFO Daemon Daemon Activate resource disk
Sep 9 23:41:59.838954 waagent[2009]: 2025-09-09T23:41:59.838919Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Sep 9 23:41:59.846913 waagent[2009]: 2025-09-09T23:41:59.846855Z INFO Daemon Daemon Found device: None
Sep 9 23:41:59.850294 waagent[2009]: 2025-09-09T23:41:59.850259Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Sep 9 23:41:59.856238 waagent[2009]: 2025-09-09T23:41:59.856201Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Sep 9 23:41:59.864610 waagent[2009]: 2025-09-09T23:41:59.864567Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 9 23:41:59.868800 waagent[2009]: 2025-09-09T23:41:59.868760Z INFO Daemon Daemon Running default provisioning handler
Sep 9 23:41:59.877198 waagent[2009]: 2025-09-09T23:41:59.877161Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Sep 9 23:41:59.887237 waagent[2009]: 2025-09-09T23:41:59.887197Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Sep 9 23:41:59.894486 waagent[2009]: 2025-09-09T23:41:59.894442Z INFO Daemon Daemon cloud-init is enabled: False
Sep 9 23:41:59.898378 waagent[2009]: 2025-09-09T23:41:59.898344Z INFO Daemon Daemon Copying ovf-env.xml
Sep 9 23:42:00.000125 waagent[2009]: 2025-09-09T23:42:00.000065Z INFO Daemon Daemon Successfully mounted dvd
Sep 9 23:42:00.027880 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Sep 9 23:42:00.029832 waagent[2009]: 2025-09-09T23:42:00.029782Z INFO Daemon Daemon Detect protocol endpoint
Sep 9 23:42:00.033405 waagent[2009]: 2025-09-09T23:42:00.033372Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 9 23:42:00.038151 waagent[2009]: 2025-09-09T23:42:00.038122Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Sep 9 23:42:00.043520 waagent[2009]: 2025-09-09T23:42:00.043493Z INFO Daemon Daemon Test for route to 168.63.129.16
Sep 9 23:42:00.044567 login[2013]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:42:00.047998 waagent[2009]: 2025-09-09T23:42:00.047960Z INFO Daemon Daemon Route to 168.63.129.16 exists
Sep 9 23:42:00.054071 waagent[2009]: 2025-09-09T23:42:00.054031Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Sep 9 23:42:00.061360 systemd-logind[1849]: New session 1 of user core.
Sep 9 23:42:00.068024 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 9 23:42:00.117720 waagent[2009]: 2025-09-09T23:42:00.117674Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Sep 9 23:42:00.123918 waagent[2009]: 2025-09-09T23:42:00.123038Z INFO Daemon Daemon Wire protocol version:2012-11-30
Sep 9 23:42:00.126972 waagent[2009]: 2025-09-09T23:42:00.126930Z INFO Daemon Daemon Server preferred version:2015-04-05
Sep 9 23:42:00.287715 waagent[2009]: 2025-09-09T23:42:00.287620Z INFO Daemon Daemon Initializing goal state during protocol detection
Sep 9 23:42:00.292125 waagent[2009]: 2025-09-09T23:42:00.292089Z INFO Daemon Daemon Forcing an update of the goal state.
Sep 9 23:42:00.299460 waagent[2009]: 2025-09-09T23:42:00.299419Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Sep 9 23:42:00.316420 waagent[2009]: 2025-09-09T23:42:00.316350Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175
Sep 9 23:42:00.320655 waagent[2009]: 2025-09-09T23:42:00.320618Z INFO Daemon
Sep 9 23:42:00.323055 waagent[2009]: 2025-09-09T23:42:00.323024Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 88fc7531-803c-4b48-a2f2-3f75b632b376 eTag: 2391554178563195161 source: Fabric]
Sep 9 23:42:00.331662 waagent[2009]: 2025-09-09T23:42:00.331628Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Sep 9 23:42:00.336747 waagent[2009]: 2025-09-09T23:42:00.336716Z INFO Daemon
Sep 9 23:42:00.338623 waagent[2009]: 2025-09-09T23:42:00.338598Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Sep 9 23:42:00.346490 waagent[2009]: 2025-09-09T23:42:00.346458Z INFO Daemon Daemon Downloading artifacts profile blob
Sep 9 23:42:00.485949 waagent[2009]: 2025-09-09T23:42:00.485856Z INFO Daemon Downloaded certificate {'thumbprint': '9046E32487F7C0B3F3EA556C01E2CCF7DA5A5907', 'hasPrivateKey': True}
Sep 9 23:42:00.492832 waagent[2009]: 2025-09-09T23:42:00.492792Z INFO Daemon Fetch goal state completed
Sep 9 23:42:00.501166 waagent[2009]: 2025-09-09T23:42:00.501136Z INFO Daemon Daemon Starting provisioning
Sep 9 23:42:00.504632 waagent[2009]: 2025-09-09T23:42:00.504603Z INFO Daemon Daemon Handle ovf-env.xml.
Sep 9 23:42:00.507725 waagent[2009]: 2025-09-09T23:42:00.507702Z INFO Daemon Daemon Set hostname [ci-4426.0.0-n-d9fce76d1d]
Sep 9 23:42:00.542142 waagent[2009]: 2025-09-09T23:42:00.542090Z INFO Daemon Daemon Publish hostname [ci-4426.0.0-n-d9fce76d1d]
Sep 9 23:42:00.546879 waagent[2009]: 2025-09-09T23:42:00.546838Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Sep 9 23:42:00.551038 waagent[2009]: 2025-09-09T23:42:00.551002Z INFO Daemon Daemon Primary interface is [eth0]
Sep 9 23:42:00.559963 systemd-networkd[1671]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:42:00.560168 systemd-networkd[1671]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 23:42:00.560198 systemd-networkd[1671]: eth0: DHCP lease lost
Sep 9 23:42:00.560857 waagent[2009]: 2025-09-09T23:42:00.560807Z INFO Daemon Daemon Create user account if not exists
Sep 9 23:42:00.564721 waagent[2009]: 2025-09-09T23:42:00.564683Z INFO Daemon Daemon User core already exists, skip useradd
Sep 9 23:42:00.568436 waagent[2009]: 2025-09-09T23:42:00.568377Z INFO Daemon Daemon Configure sudoer
Sep 9 23:42:00.578781 waagent[2009]: 2025-09-09T23:42:00.578731Z INFO Daemon Daemon Configure sshd
Sep 9 23:42:00.585490 waagent[2009]: 2025-09-09T23:42:00.585448Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Sep 9 23:42:00.594932 systemd-networkd[1671]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 9 23:42:00.595193 waagent[2009]: 2025-09-09T23:42:00.595143Z INFO Daemon Daemon Deploy ssh public key.
Sep 9 23:42:01.731442 waagent[2009]: 2025-09-09T23:42:01.731372Z INFO Daemon Daemon Provisioning complete
Sep 9 23:42:01.749048 waagent[2009]: 2025-09-09T23:42:01.749014Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Sep 9 23:42:01.753964 waagent[2009]: 2025-09-09T23:42:01.753927Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Sep 9 23:42:01.760913 waagent[2009]: 2025-09-09T23:42:01.760874Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
Sep 9 23:42:01.859941 waagent[2098]: 2025-09-09T23:42:01.859233Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
Sep 9 23:42:01.859941 waagent[2098]: 2025-09-09T23:42:01.859358Z INFO ExtHandler ExtHandler OS: flatcar 4426.0.0
Sep 9 23:42:01.859941 waagent[2098]: 2025-09-09T23:42:01.859394Z INFO ExtHandler ExtHandler Python: 3.11.13
Sep 9 23:42:01.859941 waagent[2098]: 2025-09-09T23:42:01.859426Z INFO ExtHandler ExtHandler CPU Arch: aarch64
Sep 9 23:42:02.696929 waagent[2098]: 2025-09-09T23:42:02.696171Z INFO ExtHandler ExtHandler Distro: flatcar-4426.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0;
Sep 9 23:42:02.696929 waagent[2098]: 2025-09-09T23:42:02.696398Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 9 23:42:02.696929 waagent[2098]: 2025-09-09T23:42:02.696446Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 9 23:42:02.702396 waagent[2098]: 2025-09-09T23:42:02.702346Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Sep 9 23:42:02.707200 waagent[2098]: 2025-09-09T23:42:02.707166Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175
Sep 9 23:42:02.707552 waagent[2098]: 2025-09-09T23:42:02.707516Z INFO ExtHandler
Sep 9 23:42:02.707605 waagent[2098]: 2025-09-09T23:42:02.707582Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 9da4b5bf-a323-4cdd-98bf-3b0d6dc3d0a1 eTag: 2391554178563195161 source: Fabric]
Sep 9 23:42:02.707820 waagent[2098]: 2025-09-09T23:42:02.707793Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Sep 9 23:42:02.708250 waagent[2098]: 2025-09-09T23:42:02.708217Z INFO ExtHandler
Sep 9 23:42:02.708289 waagent[2098]: 2025-09-09T23:42:02.708272Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Sep 9 23:42:02.711821 waagent[2098]: 2025-09-09T23:42:02.711791Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Sep 9 23:42:02.869948 waagent[2098]: 2025-09-09T23:42:02.869010Z INFO ExtHandler Downloaded certificate {'thumbprint': '9046E32487F7C0B3F3EA556C01E2CCF7DA5A5907', 'hasPrivateKey': True}
Sep 9 23:42:02.869948 waagent[2098]: 2025-09-09T23:42:02.869481Z INFO ExtHandler Fetch goal state completed
Sep 9 23:42:02.881367 waagent[2098]: 2025-09-09T23:42:02.881317Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025)
Sep 9 23:42:02.884603 waagent[2098]: 2025-09-09T23:42:02.884554Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2098
Sep 9 23:42:02.884713 waagent[2098]: 2025-09-09T23:42:02.884678Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Sep 9 23:42:02.884990 waagent[2098]: 2025-09-09T23:42:02.884958Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
Sep 9 23:42:02.886103 waagent[2098]: 2025-09-09T23:42:02.886070Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4426.0.0', '', 'Flatcar Container Linux by Kinvolk']
Sep 9 23:42:02.886423 waagent[2098]: 2025-09-09T23:42:02.886393Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4426.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Sep 9 23:42:02.886538 waagent[2098]: 2025-09-09T23:42:02.886515Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Sep 9 23:42:02.886979 waagent[2098]: 2025-09-09T23:42:02.886947Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Sep 9 23:42:03.068111 waagent[2098]: 2025-09-09T23:42:03.068023Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Sep 9 23:42:03.068226 waagent[2098]: 2025-09-09T23:42:03.068209Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Sep 9 23:42:03.072746 waagent[2098]: 2025-09-09T23:42:03.072705Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Sep 9 23:42:03.077084 systemd[1]: Reload requested from client PID 2113 ('systemctl') (unit waagent.service)...
Sep 9 23:42:03.077102 systemd[1]: Reloading...
Sep 9 23:42:03.152925 zram_generator::config[2152]: No configuration found.
Sep 9 23:42:03.297812 systemd[1]: Reloading finished in 220 ms.
Sep 9 23:42:03.309922 waagent[2098]: 2025-09-09T23:42:03.309464Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Sep 9 23:42:03.309922 waagent[2098]: 2025-09-09T23:42:03.309593Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Sep 9 23:42:03.800095 waagent[2098]: 2025-09-09T23:42:03.799273Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Sep 9 23:42:03.800095 waagent[2098]: 2025-09-09T23:42:03.799586Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Sep 9 23:42:03.800366 waagent[2098]: 2025-09-09T23:42:03.800320Z INFO ExtHandler ExtHandler Starting env monitor service.
Sep 9 23:42:03.800442 waagent[2098]: 2025-09-09T23:42:03.800401Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 9 23:42:03.800508 waagent[2098]: 2025-09-09T23:42:03.800486Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 9 23:42:03.800682 waagent[2098]: 2025-09-09T23:42:03.800653Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Sep 9 23:42:03.801081 waagent[2098]: 2025-09-09T23:42:03.801044Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Sep 9 23:42:03.801155 waagent[2098]: 2025-09-09T23:42:03.801122Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Sep 9 23:42:03.801155 waagent[2098]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Sep 9 23:42:03.801155 waagent[2098]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Sep 9 23:42:03.801155 waagent[2098]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Sep 9 23:42:03.801155 waagent[2098]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Sep 9 23:42:03.801155 waagent[2098]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 9 23:42:03.801155 waagent[2098]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 9 23:42:03.801574 waagent[2098]: 2025-09-09T23:42:03.801535Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Sep 9 23:42:03.801655 waagent[2098]: 2025-09-09T23:42:03.801622Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 9 23:42:03.801710 waagent[2098]: 2025-09-09T23:42:03.801686Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 9 23:42:03.801810 waagent[2098]: 2025-09-09T23:42:03.801780Z INFO EnvHandler ExtHandler Configure routes
Sep 9 23:42:03.801846 waagent[2098]: 2025-09-09T23:42:03.801831Z INFO EnvHandler ExtHandler Gateway:None
Sep 9 23:42:03.801940 waagent[2098]: 2025-09-09T23:42:03.801886Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
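The routing table dump above uses the raw /proc/net/route format, where Destination, Gateway, and Mask are little-endian hex fields. A small illustrative helper (not part of waagent) for decoding them into dotted-quad form:

```python
import socket
import struct

def decode_route_addr(hexfield: str) -> str:
    """Decode a little-endian hex address field from /proc/net/route
    (e.g. the Gateway or Mask column) into dotted-quad notation."""
    return socket.inet_ntoa(struct.pack("<I", int(hexfield, 16)))

# Fields taken from the dump above:
print(decode_route_addr("0114C80A"))  # -> 10.200.20.1 (the default gateway)
print(decode_route_addr("00FFFFFF"))  # -> 255.255.255.0 (the /24 subnet mask)
```

This matches the DHCPv4 lease logged elsewhere in this boot (10.200.20.4/24 with gateway 10.200.20.1).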
Sep 9 23:42:03.802051 waagent[2098]: 2025-09-09T23:42:03.802016Z INFO EnvHandler ExtHandler Routes:None
Sep 9 23:42:03.802559 waagent[2098]: 2025-09-09T23:42:03.802526Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Sep 9 23:42:03.802665 waagent[2098]: 2025-09-09T23:42:03.802625Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Sep 9 23:42:03.802793 waagent[2098]: 2025-09-09T23:42:03.802773Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Sep 9 23:42:03.808399 waagent[2098]: 2025-09-09T23:42:03.808366Z INFO ExtHandler ExtHandler
Sep 9 23:42:03.808533 waagent[2098]: 2025-09-09T23:42:03.808509Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 6dfb5dc6-4499-4b75-8dda-366c35ae365f correlation 03bf223c-312d-440c-a7b8-071adfb4dee8 created: 2025-09-09T23:40:36.805008Z]
Sep 9 23:42:03.808889 waagent[2098]: 2025-09-09T23:42:03.808858Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Sep 9 23:42:03.809386 waagent[2098]: 2025-09-09T23:42:03.809355Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
Sep 9 23:42:03.842562 waagent[2098]: 2025-09-09T23:42:03.842501Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Sep 9 23:42:03.842562 waagent[2098]: Try `iptables -h' or 'iptables --help' for more information.)
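The WARNING above comes from combining listing (`-L ... -nxv`) and counter-zeroing (`--zero`) in a single iptables invocation, which the nf_tables backend here rejects. A hedged sketch of how the same effect could be obtained with two separate invocations (this is an assumption about a workaround, not waagent's actual fix; the commands are only assembled, not executed):

```python
# Invocation copied from the log that iptables v1.8.11 (nf_tables) rejects:
failing = ["iptables", "-w", "-t", "security",
           "-L", "OUTPUT", "--zero", "OUTPUT", "-nxv"]

# Assumed workaround: list with numeric/verbose output first, then zero the
# counters in a second call, so -n never accompanies the zero operation.
list_cmd = ["iptables", "-w", "-t", "security", "-L", "OUTPUT", "-nxv"]
zero_cmd = ["iptables", "-w", "-t", "security", "-Z", "OUTPUT"]

# On a real host these would be run with subprocess.run(list_cmd), etc.
print(list_cmd)
print(zero_cmd)
```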
Sep 9 23:42:03.842887 waagent[2098]: 2025-09-09T23:42:03.842846Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 6270AB74-0B1A-428D-B307-FA35E1E3BF6C;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Sep 9 23:42:03.899005 waagent[2098]: 2025-09-09T23:42:03.898951Z INFO MonitorHandler ExtHandler Network interfaces:
Sep 9 23:42:03.899005 waagent[2098]: Executing ['ip', '-a', '-o', 'link']:
Sep 9 23:42:03.899005 waagent[2098]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Sep 9 23:42:03.899005 waagent[2098]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:00:e0 brd ff:ff:ff:ff:ff:ff
Sep 9 23:42:03.899005 waagent[2098]: 3: enP7230s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:00:e0 brd ff:ff:ff:ff:ff:ff\ altname enP7230p0s2
Sep 9 23:42:03.899005 waagent[2098]: Executing ['ip', '-4', '-a', '-o', 'address']:
Sep 9 23:42:03.899005 waagent[2098]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Sep 9 23:42:03.899005 waagent[2098]: 2: eth0 inet 10.200.20.4/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Sep 9 23:42:03.899005 waagent[2098]: Executing ['ip', '-6', '-a', '-o', 'address']:
Sep 9 23:42:03.899005 waagent[2098]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Sep 9 23:42:03.899005 waagent[2098]: 2: eth0 inet6 fe80::222:48ff:fe7a:e0/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Sep 9 23:42:03.939907 waagent[2098]: 2025-09-09T23:42:03.939843Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Sep 9 23:42:03.939907 waagent[2098]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 9 23:42:03.939907 waagent[2098]: pkts bytes target prot opt in out source destination
Sep 9 23:42:03.939907 waagent[2098]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 9 23:42:03.939907 waagent[2098]: pkts bytes target prot opt in out source destination
Sep 9 23:42:03.939907 waagent[2098]: Chain OUTPUT (policy ACCEPT 3 packets, 534 bytes)
Sep 9 23:42:03.939907 waagent[2098]: pkts bytes target prot opt in out source destination
Sep 9 23:42:03.939907 waagent[2098]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Sep 9 23:42:03.939907 waagent[2098]: 4 416 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 9 23:42:03.939907 waagent[2098]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 9 23:42:03.942231 waagent[2098]: 2025-09-09T23:42:03.942183Z INFO EnvHandler ExtHandler Current Firewall rules:
Sep 9 23:42:03.942231 waagent[2098]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 9 23:42:03.942231 waagent[2098]: pkts bytes target prot opt in out source destination
Sep 9 23:42:03.942231 waagent[2098]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 9 23:42:03.942231 waagent[2098]: pkts bytes target prot opt in out source destination
Sep 9 23:42:03.942231 waagent[2098]: Chain OUTPUT (policy ACCEPT 3 packets, 534 bytes)
Sep 9 23:42:03.942231 waagent[2098]: pkts bytes target prot opt in out source destination
Sep 9 23:42:03.942231 waagent[2098]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Sep 9 23:42:03.942231 waagent[2098]: 6 520 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 9 23:42:03.942231 waagent[2098]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 9 23:42:03.942438 waagent[2098]: 2025-09-09T23:42:03.942400Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Sep 9 23:42:08.216119 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 23:42:08.217667 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:42:08.316703 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:42:08.319422 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 23:42:08.455814 kubelet[2247]: E0909 23:42:08.455758 2247 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 23:42:08.458131 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 23:42:08.458351 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 23:42:08.458982 systemd[1]: kubelet.service: Consumed 101ms CPU time, 107.5M memory peak.
Sep 9 23:42:18.561222 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 9 23:42:18.563051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:42:19.005854 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:42:19.010142 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 23:42:19.039665 kubelet[2261]: E0909 23:42:19.039607 2261 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 23:42:19.041582 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 23:42:19.041781 systemd[1]: kubelet.service: Failed with result 'exit-code'.
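The kubelet restarts above all fail for the same reason: /var/lib/kubelet/config.yaml does not exist yet (it is normally written during `kubeadm init`/`kubeadm join`, so on a node that has not been joined this loop is expected until provisioning completes). A minimal illustrative check mirroring what kubelet reports, not part of kubelet itself:

```python
from pathlib import Path

def kubelet_config_present(path: str = "/var/lib/kubelet/config.yaml") -> bool:
    """Return True if the kubelet config file exists; kubelet exits with
    status 1 (as logged above) when this check would fail."""
    return Path(path).is_file()

# A path that certainly does not exist fails the check:
print(kubelet_config_present("/no/such/path/config.yaml"))  # -> False
```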
Sep 9 23:42:19.042249 systemd[1]: kubelet.service: Consumed 101ms CPU time, 106M memory peak.
Sep 9 23:42:20.623138 chronyd[1832]: Selected source PHC0
Sep 9 23:42:23.609066 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 23:42:23.610686 systemd[1]: Started sshd@0-10.200.20.4:22-10.200.16.10:50124.service - OpenSSH per-connection server daemon (10.200.16.10:50124).
Sep 9 23:42:24.300661 sshd[2269]: Accepted publickey for core from 10.200.16.10 port 50124 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:42:24.301675 sshd-session[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:42:24.305506 systemd-logind[1849]: New session 3 of user core.
Sep 9 23:42:24.317006 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 23:42:24.698393 systemd[1]: Started sshd@1-10.200.20.4:22-10.200.16.10:50134.service - OpenSSH per-connection server daemon (10.200.16.10:50134).
Sep 9 23:42:25.152617 sshd[2275]: Accepted publickey for core from 10.200.16.10 port 50134 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:42:25.153705 sshd-session[2275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:42:25.157174 systemd-logind[1849]: New session 4 of user core.
Sep 9 23:42:25.164172 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 9 23:42:25.478020 sshd[2278]: Connection closed by 10.200.16.10 port 50134
Sep 9 23:42:25.478230 sshd-session[2275]: pam_unix(sshd:session): session closed for user core
Sep 9 23:42:25.480985 systemd[1]: sshd@1-10.200.20.4:22-10.200.16.10:50134.service: Deactivated successfully.
Sep 9 23:42:25.482436 systemd[1]: session-4.scope: Deactivated successfully.
Sep 9 23:42:25.483092 systemd-logind[1849]: Session 4 logged out. Waiting for processes to exit.
Sep 9 23:42:25.484263 systemd-logind[1849]: Removed session 4.
Sep 9 23:42:25.565144 systemd[1]: Started sshd@2-10.200.20.4:22-10.200.16.10:50138.service - OpenSSH per-connection server daemon (10.200.16.10:50138).
Sep 9 23:42:26.028946 sshd[2284]: Accepted publickey for core from 10.200.16.10 port 50138 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:42:26.030378 sshd-session[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:42:26.033788 systemd-logind[1849]: New session 5 of user core.
Sep 9 23:42:26.044186 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 9 23:42:26.367598 sshd[2287]: Connection closed by 10.200.16.10 port 50138
Sep 9 23:42:26.368127 sshd-session[2284]: pam_unix(sshd:session): session closed for user core
Sep 9 23:42:26.371040 systemd-logind[1849]: Session 5 logged out. Waiting for processes to exit.
Sep 9 23:42:26.371365 systemd[1]: sshd@2-10.200.20.4:22-10.200.16.10:50138.service: Deactivated successfully.
Sep 9 23:42:26.372729 systemd[1]: session-5.scope: Deactivated successfully.
Sep 9 23:42:26.376197 systemd-logind[1849]: Removed session 5.
Sep 9 23:42:26.441372 systemd[1]: Started sshd@3-10.200.20.4:22-10.200.16.10:50144.service - OpenSSH per-connection server daemon (10.200.16.10:50144).
Sep 9 23:42:26.869567 sshd[2293]: Accepted publickey for core from 10.200.16.10 port 50144 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:42:26.870658 sshd-session[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:42:26.874198 systemd-logind[1849]: New session 6 of user core.
Sep 9 23:42:26.881193 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 9 23:42:27.186826 sshd[2296]: Connection closed by 10.200.16.10 port 50144
Sep 9 23:42:27.187327 sshd-session[2293]: pam_unix(sshd:session): session closed for user core
Sep 9 23:42:27.190374 systemd[1]: sshd@3-10.200.20.4:22-10.200.16.10:50144.service: Deactivated successfully.
Sep 9 23:42:27.191757 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 23:42:27.192640 systemd-logind[1849]: Session 6 logged out. Waiting for processes to exit.
Sep 9 23:42:27.193796 systemd-logind[1849]: Removed session 6.
Sep 9 23:42:27.280546 systemd[1]: Started sshd@4-10.200.20.4:22-10.200.16.10:50148.service - OpenSSH per-connection server daemon (10.200.16.10:50148).
Sep 9 23:42:27.778347 sshd[2302]: Accepted publickey for core from 10.200.16.10 port 50148 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:42:27.779848 sshd-session[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:42:27.783253 systemd-logind[1849]: New session 7 of user core.
Sep 9 23:42:27.790191 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 9 23:42:28.276937 sudo[2306]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 9 23:42:28.277174 sudo[2306]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 23:42:28.305467 sudo[2306]: pam_unix(sudo:session): session closed for user root
Sep 9 23:42:28.377152 sshd[2305]: Connection closed by 10.200.16.10 port 50148
Sep 9 23:42:28.377815 sshd-session[2302]: pam_unix(sshd:session): session closed for user core
Sep 9 23:42:28.381101 systemd[1]: sshd@4-10.200.20.4:22-10.200.16.10:50148.service: Deactivated successfully.
Sep 9 23:42:28.382399 systemd[1]: session-7.scope: Deactivated successfully.
Sep 9 23:42:28.383028 systemd-logind[1849]: Session 7 logged out. Waiting for processes to exit.
Sep 9 23:42:28.384267 systemd-logind[1849]: Removed session 7.
Sep 9 23:42:28.457304 systemd[1]: Started sshd@5-10.200.20.4:22-10.200.16.10:50156.service - OpenSSH per-connection server daemon (10.200.16.10:50156).
Sep 9 23:42:28.912891 sshd[2312]: Accepted publickey for core from 10.200.16.10 port 50156 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:42:28.918685 sshd-session[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:42:28.922410 systemd-logind[1849]: New session 8 of user core. Sep 9 23:42:28.936030 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 23:42:29.062167 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 9 23:42:29.063490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:42:29.169855 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 23:42:29.170401 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:42:29.595617 sudo[2321]: pam_unix(sudo:session): session closed for user root Sep 9 23:42:29.599357 sudo[2319]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 23:42:29.599854 sudo[2319]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:42:29.605031 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:42:29.606886 (kubelet)[2328]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:42:29.611115 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Sep 9 23:42:29.642771 kubelet[2328]: E0909 23:42:29.642703 2328 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:42:29.643000 augenrules[2354]: No rules Sep 9 23:42:29.644106 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:42:29.644273 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:42:29.646082 sudo[2319]: pam_unix(sudo:session): session closed for user root Sep 9 23:42:29.647474 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:42:29.647571 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:42:29.647969 systemd[1]: kubelet.service: Consumed 99ms CPU time, 105.5M memory peak. Sep 9 23:42:29.723926 sshd[2315]: Connection closed by 10.200.16.10 port 50156 Sep 9 23:42:29.724464 sshd-session[2312]: pam_unix(sshd:session): session closed for user core Sep 9 23:42:29.727221 systemd-logind[1849]: Session 8 logged out. Waiting for processes to exit. Sep 9 23:42:29.728638 systemd[1]: sshd@5-10.200.20.4:22-10.200.16.10:50156.service: Deactivated successfully. Sep 9 23:42:29.730086 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 23:42:29.731536 systemd-logind[1849]: Removed session 8. Sep 9 23:42:29.795330 systemd[1]: Started sshd@6-10.200.20.4:22-10.200.16.10:50162.service - OpenSSH per-connection server daemon (10.200.16.10:50162). Sep 9 23:42:30.209669 sshd[2364]: Accepted publickey for core from 10.200.16.10 port 50162 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:42:30.210743 sshd-session[2364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:42:30.214033 systemd-logind[1849]: New session 9 of user core. 
Sep 9 23:42:30.222006 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 23:42:30.445820 sudo[2368]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 23:42:30.446049 sudo[2368]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:42:32.218648 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 23:42:32.227143 (dockerd)[2385]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 23:42:33.293233 dockerd[2385]: time="2025-09-09T23:42:33.293085113Z" level=info msg="Starting up" Sep 9 23:42:33.296481 dockerd[2385]: time="2025-09-09T23:42:33.296460655Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 23:42:33.304019 dockerd[2385]: time="2025-09-09T23:42:33.303982408Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 23:42:33.345682 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1397979425-merged.mount: Deactivated successfully. Sep 9 23:42:33.429494 dockerd[2385]: time="2025-09-09T23:42:33.429457591Z" level=info msg="Loading containers: start." Sep 9 23:42:33.512954 kernel: Initializing XFRM netlink socket Sep 9 23:42:34.024922 systemd-networkd[1671]: docker0: Link UP Sep 9 23:42:34.043615 dockerd[2385]: time="2025-09-09T23:42:34.043544520Z" level=info msg="Loading containers: done." 
Sep 9 23:42:34.068947 dockerd[2385]: time="2025-09-09T23:42:34.068824295Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 23:42:34.068947 dockerd[2385]: time="2025-09-09T23:42:34.068888345Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 9 23:42:34.069188 dockerd[2385]: time="2025-09-09T23:42:34.069170977Z" level=info msg="Initializing buildkit" Sep 9 23:42:34.148915 dockerd[2385]: time="2025-09-09T23:42:34.148805782Z" level=info msg="Completed buildkit initialization" Sep 9 23:42:34.154598 dockerd[2385]: time="2025-09-09T23:42:34.154554717Z" level=info msg="Daemon has completed initialization" Sep 9 23:42:34.154952 dockerd[2385]: time="2025-09-09T23:42:34.154710906Z" level=info msg="API listen on /run/docker.sock" Sep 9 23:42:34.156314 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 23:42:34.343403 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck942504170-merged.mount: Deactivated successfully. Sep 9 23:42:34.781984 containerd[1876]: time="2025-09-09T23:42:34.781855895Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 9 23:42:35.680002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2408287000.mount: Deactivated successfully. 
Sep 9 23:42:37.333943 containerd[1876]: time="2025-09-09T23:42:37.333389934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:37.337504 containerd[1876]: time="2025-09-09T23:42:37.337346083Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328357" Sep 9 23:42:37.340900 containerd[1876]: time="2025-09-09T23:42:37.340867224Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:37.345389 containerd[1876]: time="2025-09-09T23:42:37.345360505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:37.346079 containerd[1876]: time="2025-09-09T23:42:37.345939119Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 2.564046104s" Sep 9 23:42:37.346079 containerd[1876]: time="2025-09-09T23:42:37.345971257Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\"" Sep 9 23:42:37.346795 containerd[1876]: time="2025-09-09T23:42:37.346766375Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 9 23:42:38.473541 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Sep 9 23:42:39.390870 containerd[1876]: time="2025-09-09T23:42:39.390815639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:39.394993 containerd[1876]: time="2025-09-09T23:42:39.394966540Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528552" Sep 9 23:42:39.399658 containerd[1876]: time="2025-09-09T23:42:39.399631868Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:39.405079 containerd[1876]: time="2025-09-09T23:42:39.405031616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:39.405879 containerd[1876]: time="2025-09-09T23:42:39.405507266Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 2.058642079s" Sep 9 23:42:39.405879 containerd[1876]: time="2025-09-09T23:42:39.405535235Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\"" Sep 9 23:42:39.406003 containerd[1876]: time="2025-09-09T23:42:39.405982195Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 9 23:42:39.810837 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 9 23:42:39.813043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:42:39.905616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:42:39.913111 (kubelet)[2657]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:42:40.056774 kubelet[2657]: E0909 23:42:40.056719 2657 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:42:40.058649 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:42:40.058867 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:42:40.059419 systemd[1]: kubelet.service: Consumed 98ms CPU time, 105.6M memory peak. 
Sep 9 23:42:41.589557 containerd[1876]: time="2025-09-09T23:42:41.588967452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:41.592584 containerd[1876]: time="2025-09-09T23:42:41.592558019Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483527" Sep 9 23:42:41.596297 containerd[1876]: time="2025-09-09T23:42:41.596272824Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:41.601439 containerd[1876]: time="2025-09-09T23:42:41.601407569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:41.602088 containerd[1876]: time="2025-09-09T23:42:41.602065266Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 2.196057558s" Sep 9 23:42:41.602237 containerd[1876]: time="2025-09-09T23:42:41.602129549Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\"" Sep 9 23:42:41.602963 containerd[1876]: time="2025-09-09T23:42:41.602943171Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 9 23:42:42.114371 update_engine[1850]: I20250909 23:42:42.113924 1850 update_attempter.cc:509] Updating boot flags... 
Sep 9 23:42:42.628818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4039971969.mount: Deactivated successfully. Sep 9 23:42:43.264936 containerd[1876]: time="2025-09-09T23:42:43.264753521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:43.268922 containerd[1876]: time="2025-09-09T23:42:43.268767209Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376724" Sep 9 23:42:43.274455 containerd[1876]: time="2025-09-09T23:42:43.274425198Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:43.278939 containerd[1876]: time="2025-09-09T23:42:43.278911559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:43.279655 containerd[1876]: time="2025-09-09T23:42:43.279362904Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.676356323s" Sep 9 23:42:43.279655 containerd[1876]: time="2025-09-09T23:42:43.279386841Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\"" Sep 9 23:42:43.279762 containerd[1876]: time="2025-09-09T23:42:43.279739350Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 23:42:43.986564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2374551343.mount: Deactivated successfully.
Sep 9 23:42:45.598939 containerd[1876]: time="2025-09-09T23:42:45.598623190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:45.604278 containerd[1876]: time="2025-09-09T23:42:45.604100409Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Sep 9 23:42:45.609740 containerd[1876]: time="2025-09-09T23:42:45.609713777Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:45.614957 containerd[1876]: time="2025-09-09T23:42:45.614928354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:45.615769 containerd[1876]: time="2025-09-09T23:42:45.615606300Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.335844477s" Sep 9 23:42:45.615769 containerd[1876]: time="2025-09-09T23:42:45.615635125Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 9 23:42:45.616365 containerd[1876]: time="2025-09-09T23:42:45.616336480Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 23:42:46.211451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4125688080.mount: Deactivated successfully.
Sep 9 23:42:46.243481 containerd[1876]: time="2025-09-09T23:42:46.243424684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:42:46.247765 containerd[1876]: time="2025-09-09T23:42:46.247561212Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 9 23:42:46.251807 containerd[1876]: time="2025-09-09T23:42:46.251775222Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:42:46.258593 containerd[1876]: time="2025-09-09T23:42:46.258558812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:42:46.259159 containerd[1876]: time="2025-09-09T23:42:46.259133858Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 642.764737ms" Sep 9 23:42:46.259159 containerd[1876]: time="2025-09-09T23:42:46.259155291Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 9 23:42:46.259646 containerd[1876]: time="2025-09-09T23:42:46.259612733Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 9 23:42:47.006918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3504274701.mount: Deactivated successfully.
Sep 9 23:42:50.061215 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 9 23:42:50.063542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:42:50.366351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:42:50.371146 (kubelet)[2855]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:42:50.398412 kubelet[2855]: E0909 23:42:50.398349 2855 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:42:50.400247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:42:50.400455 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:42:50.401973 systemd[1]: kubelet.service: Consumed 99ms CPU time, 104.9M memory peak.
Sep 9 23:42:51.046946 containerd[1876]: time="2025-09-09T23:42:51.046253417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:51.051641 containerd[1876]: time="2025-09-09T23:42:51.051612455Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Sep 9 23:42:51.057506 containerd[1876]: time="2025-09-09T23:42:51.057472705Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:51.063051 containerd[1876]: time="2025-09-09T23:42:51.063025943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:51.063689 containerd[1876]: time="2025-09-09T23:42:51.063663512Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.804016314s" Sep 9 23:42:51.063776 containerd[1876]: time="2025-09-09T23:42:51.063760692Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 9 23:42:53.553338 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:42:53.553752 systemd[1]: kubelet.service: Consumed 99ms CPU time, 104.9M memory peak. Sep 9 23:42:53.556376 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:42:53.577717 systemd[1]: Reload requested from client PID 2891 ('systemctl') (unit session-9.scope)... 
Sep 9 23:42:53.577728 systemd[1]: Reloading... Sep 9 23:42:53.690939 zram_generator::config[2943]: No configuration found. Sep 9 23:42:53.825634 systemd[1]: Reloading finished in 247 ms. Sep 9 23:42:53.872260 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 23:42:53.872323 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 23:42:53.872668 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:42:53.872718 systemd[1]: kubelet.service: Consumed 66ms CPU time, 95M memory peak. Sep 9 23:42:53.874366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:42:54.080995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:42:54.087261 (kubelet)[3004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 23:42:54.204892 kubelet[3004]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:42:54.205225 kubelet[3004]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 23:42:54.205225 kubelet[3004]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 23:42:54.205225 kubelet[3004]: I0909 23:42:54.205000 3004 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 23:42:54.346054 kubelet[3004]: I0909 23:42:54.345682 3004 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 23:42:54.346054 kubelet[3004]: I0909 23:42:54.345714 3004 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 23:42:54.346054 kubelet[3004]: I0909 23:42:54.345943 3004 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 23:42:54.363978 kubelet[3004]: E0909 23:42:54.363162 3004 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:42:54.364484 kubelet[3004]: I0909 23:42:54.364467 3004 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 23:42:54.370407 kubelet[3004]: I0909 23:42:54.370395 3004 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 23:42:54.373000 kubelet[3004]: I0909 23:42:54.372979 3004 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 23:42:54.373769 kubelet[3004]: I0909 23:42:54.373737 3004 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 23:42:54.373986 kubelet[3004]: I0909 23:42:54.373835 3004 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4426.0.0-n-d9fce76d1d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 23:42:54.374118 kubelet[3004]: I0909 23:42:54.374105 3004 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 23:42:54.374170 kubelet[3004]: I0909 23:42:54.374162 3004 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 23:42:54.374319 kubelet[3004]: I0909 23:42:54.374306 3004 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:42:54.377068 kubelet[3004]: I0909 23:42:54.377050 3004 kubelet.go:446] "Attempting to sync node with API server" Sep 9 23:42:54.377156 kubelet[3004]: I0909 23:42:54.377147 3004 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 23:42:54.378077 kubelet[3004]: I0909 23:42:54.378063 3004 kubelet.go:352] "Adding apiserver pod source" Sep 9 23:42:54.378154 kubelet[3004]: I0909 23:42:54.378147 3004 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 23:42:54.380637 kubelet[3004]: W0909 23:42:54.380586 3004 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.0.0-n-d9fce76d1d&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Sep 9 23:42:54.380637 kubelet[3004]: E0909 23:42:54.380637 3004 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.0.0-n-d9fce76d1d&limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:42:54.381088 kubelet[3004]: W0909 23:42:54.380890 3004 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Sep 9 23:42:54.381136 kubelet[3004]: E0909 23:42:54.381096 3004 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:42:54.381192 kubelet[3004]: I0909 23:42:54.381174 3004 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 23:42:54.381469 kubelet[3004]: I0909 23:42:54.381450 3004 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 23:42:54.381504 kubelet[3004]: W0909 23:42:54.381495 3004 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 23:42:54.382567 kubelet[3004]: I0909 23:42:54.382547 3004 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 23:42:54.382623 kubelet[3004]: I0909 23:42:54.382577 3004 server.go:1287] "Started kubelet" Sep 9 23:42:54.385567 kubelet[3004]: E0909 23:42:54.385363 3004 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.4:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4426.0.0-n-d9fce76d1d.1863c1cefe8ae136 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4426.0.0-n-d9fce76d1d,UID:ci-4426.0.0-n-d9fce76d1d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4426.0.0-n-d9fce76d1d,},FirstTimestamp:2025-09-09 23:42:54.382563638 +0000 UTC m=+0.292726312,LastTimestamp:2025-09-09 23:42:54.382563638 +0000 UTC m=+0.292726312,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4426.0.0-n-d9fce76d1d,}" Sep 9 23:42:54.386047 kubelet[3004]: I0909 23:42:54.386004 3004 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 23:42:54.386254 kubelet[3004]: I0909 23:42:54.386223 3004 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 23:42:54.386398 kubelet[3004]: I0909 23:42:54.386385 3004 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 23:42:54.386507 kubelet[3004]: I0909 23:42:54.386491 3004 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 23:42:54.387781 kubelet[3004]: I0909 23:42:54.387148 3004 server.go:479] "Adding debug handlers to kubelet server" Sep 9 23:42:54.388728 kubelet[3004]: I0909 23:42:54.388709 3004 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 23:42:54.389240 kubelet[3004]: I0909 23:42:54.389226 3004 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 23:42:54.389602 kubelet[3004]: E0909 23:42:54.389581 3004 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-n-d9fce76d1d\" not found" Sep 9 23:42:54.390149 kubelet[3004]: I0909 23:42:54.390130 3004 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 23:42:54.390273 kubelet[3004]: I0909 23:42:54.390263 3004 reconciler.go:26] "Reconciler: start to sync state" Sep 9 23:42:54.390780 kubelet[3004]: W0909 23:42:54.390754 3004 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Sep 9 23:42:54.390887 kubelet[3004]: E0909 23:42:54.390872 3004 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:42:54.391302 kubelet[3004]: E0909 23:42:54.391279 3004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.0.0-n-d9fce76d1d?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="200ms" Sep 9 23:42:54.394828 kubelet[3004]: I0909 23:42:54.394803 3004 factory.go:221] Registration of the containerd container factory successfully Sep 9 23:42:54.394922 kubelet[3004]: I0909 23:42:54.394838 3004 factory.go:221] Registration of the systemd container factory successfully Sep 9 23:42:54.394949 kubelet[3004]: I0909 23:42:54.394926 3004 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 23:42:54.395076 kubelet[3004]: E0909 23:42:54.395059 3004 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 23:42:54.412550 kubelet[3004]: I0909 23:42:54.412528 3004 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 23:42:54.412550 kubelet[3004]: I0909 23:42:54.412543 3004 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 23:42:54.412637 kubelet[3004]: I0909 23:42:54.412561 3004 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:42:54.490758 kubelet[3004]: E0909 23:42:54.490717 3004 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-n-d9fce76d1d\" not found" Sep 9 23:42:54.591075 kubelet[3004]: E0909 23:42:54.591030 3004 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-n-d9fce76d1d\" not found" Sep 9 23:42:54.591832 kubelet[3004]: E0909 23:42:54.591801 3004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.0.0-n-d9fce76d1d?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="400ms" Sep 9 23:42:54.611253 kubelet[3004]: I0909 23:42:54.610314 3004 policy_none.go:49] "None policy: Start" Sep 9 23:42:54.611253 kubelet[3004]: I0909 23:42:54.610969 3004 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 23:42:54.611253 kubelet[3004]: I0909 23:42:54.610989 3004 state_mem.go:35] "Initializing new in-memory state store" Sep 9 23:42:54.623419 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 23:42:54.635523 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 23:42:54.638918 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 9 23:42:54.646605 kubelet[3004]: I0909 23:42:54.646142 3004 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 23:42:54.646605 kubelet[3004]: I0909 23:42:54.646317 3004 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 23:42:54.646605 kubelet[3004]: I0909 23:42:54.646329 3004 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 23:42:54.646605 kubelet[3004]: I0909 23:42:54.646534 3004 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 23:42:54.648627 kubelet[3004]: E0909 23:42:54.648602 3004 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 9 23:42:54.648859 kubelet[3004]: E0909 23:42:54.648842 3004 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4426.0.0-n-d9fce76d1d\" not found"
Sep 9 23:42:54.649851 kubelet[3004]: I0909 23:42:54.649827 3004 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 23:42:54.651253 kubelet[3004]: I0909 23:42:54.651234 3004 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 9 23:42:54.651414 kubelet[3004]: I0909 23:42:54.651320 3004 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 9 23:42:54.651414 kubelet[3004]: I0909 23:42:54.651340 3004 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 9 23:42:54.651414 kubelet[3004]: I0909 23:42:54.651345 3004 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 9 23:42:54.651505 kubelet[3004]: E0909 23:42:54.651493 3004 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Sep 9 23:42:54.652533 kubelet[3004]: W0909 23:42:54.652478 3004 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused
Sep 9 23:42:54.652622 kubelet[3004]: E0909 23:42:54.652520 3004 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:42:54.748611 kubelet[3004]: I0909 23:42:54.748573 3004 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:54.748979 kubelet[3004]: E0909 23:42:54.748950 3004 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:54.760483 systemd[1]: Created slice kubepods-burstable-podaea4d3c51b09fbe6be248a6ce3db6e69.slice - libcontainer container kubepods-burstable-podaea4d3c51b09fbe6be248a6ce3db6e69.slice.
Sep 9 23:42:54.773023 kubelet[3004]: E0909 23:42:54.772996 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-n-d9fce76d1d\" not found" node="ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:54.774141 systemd[1]: Created slice kubepods-burstable-pod49496f3c2a886a4cc1c0b3cfeb9a308a.slice - libcontainer container kubepods-burstable-pod49496f3c2a886a4cc1c0b3cfeb9a308a.slice.
Sep 9 23:42:54.785008 kubelet[3004]: E0909 23:42:54.784986 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-n-d9fce76d1d\" not found" node="ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:54.786854 systemd[1]: Created slice kubepods-burstable-pod802724bf5a92aa097de64c4f676fb945.slice - libcontainer container kubepods-burstable-pod802724bf5a92aa097de64c4f676fb945.slice.
Sep 9 23:42:54.788792 kubelet[3004]: E0909 23:42:54.788772 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-n-d9fce76d1d\" not found" node="ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:54.791956 kubelet[3004]: I0909 23:42:54.791932 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aea4d3c51b09fbe6be248a6ce3db6e69-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426.0.0-n-d9fce76d1d\" (UID: \"aea4d3c51b09fbe6be248a6ce3db6e69\") " pod="kube-system/kube-apiserver-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:54.792008 kubelet[3004]: I0909 23:42:54.791958 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49496f3c2a886a4cc1c0b3cfeb9a308a-kubeconfig\") pod \"kube-controller-manager-ci-4426.0.0-n-d9fce76d1d\" (UID: \"49496f3c2a886a4cc1c0b3cfeb9a308a\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:54.792008 kubelet[3004]: I0909 23:42:54.792001 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49496f3c2a886a4cc1c0b3cfeb9a308a-ca-certs\") pod \"kube-controller-manager-ci-4426.0.0-n-d9fce76d1d\" (UID: \"49496f3c2a886a4cc1c0b3cfeb9a308a\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:54.792051 kubelet[3004]: I0909 23:42:54.792012 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/49496f3c2a886a4cc1c0b3cfeb9a308a-flexvolume-dir\") pod \"kube-controller-manager-ci-4426.0.0-n-d9fce76d1d\" (UID: \"49496f3c2a886a4cc1c0b3cfeb9a308a\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:54.792051 kubelet[3004]: I0909 23:42:54.792021 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49496f3c2a886a4cc1c0b3cfeb9a308a-k8s-certs\") pod \"kube-controller-manager-ci-4426.0.0-n-d9fce76d1d\" (UID: \"49496f3c2a886a4cc1c0b3cfeb9a308a\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:54.792051 kubelet[3004]: I0909 23:42:54.792030 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49496f3c2a886a4cc1c0b3cfeb9a308a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4426.0.0-n-d9fce76d1d\" (UID: \"49496f3c2a886a4cc1c0b3cfeb9a308a\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:54.792051 kubelet[3004]: I0909 23:42:54.792040 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/802724bf5a92aa097de64c4f676fb945-kubeconfig\") pod \"kube-scheduler-ci-4426.0.0-n-d9fce76d1d\" (UID: \"802724bf5a92aa097de64c4f676fb945\") " pod="kube-system/kube-scheduler-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:54.792114 kubelet[3004]: I0909 23:42:54.792068 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aea4d3c51b09fbe6be248a6ce3db6e69-ca-certs\") pod \"kube-apiserver-ci-4426.0.0-n-d9fce76d1d\" (UID: \"aea4d3c51b09fbe6be248a6ce3db6e69\") " pod="kube-system/kube-apiserver-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:54.792114 kubelet[3004]: I0909 23:42:54.792079 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aea4d3c51b09fbe6be248a6ce3db6e69-k8s-certs\") pod \"kube-apiserver-ci-4426.0.0-n-d9fce76d1d\" (UID: \"aea4d3c51b09fbe6be248a6ce3db6e69\") " pod="kube-system/kube-apiserver-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:54.951759 kubelet[3004]: I0909 23:42:54.951664 3004 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:54.952055 kubelet[3004]: E0909 23:42:54.952022 3004 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:54.992771 kubelet[3004]: E0909 23:42:54.992745 3004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.0.0-n-d9fce76d1d?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="800ms"
Sep 9 23:42:55.074767 containerd[1876]: time="2025-09-09T23:42:55.074658708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4426.0.0-n-d9fce76d1d,Uid:aea4d3c51b09fbe6be248a6ce3db6e69,Namespace:kube-system,Attempt:0,}"
Sep 9 23:42:55.086671 containerd[1876]: time="2025-09-09T23:42:55.086640732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4426.0.0-n-d9fce76d1d,Uid:49496f3c2a886a4cc1c0b3cfeb9a308a,Namespace:kube-system,Attempt:0,}"
Sep 9 23:42:55.089466 containerd[1876]: time="2025-09-09T23:42:55.089417508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426.0.0-n-d9fce76d1d,Uid:802724bf5a92aa097de64c4f676fb945,Namespace:kube-system,Attempt:0,}"
Sep 9 23:42:55.212516 containerd[1876]: time="2025-09-09T23:42:55.212416262Z" level=info msg="connecting to shim 8f1b7abac98ffa2082028f30a042536f842f2c6bac5e46a2bcd01b606aa516ab" address="unix:///run/containerd/s/81d4a94a4e2db0091e6a7b59bb36397de5b0f4203cc199c22bbe4e3b1c258c0f" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:42:55.226079 containerd[1876]: time="2025-09-09T23:42:55.226051854Z" level=info msg="connecting to shim 8be3d26c3cf4f88e41e551cd253feb7f79fc9389e1c1f6776d25426b2471e467" address="unix:///run/containerd/s/3abb271636a08aeceb5d2aecbd9af89d509f92aa2742d7be5e5ef4d4e0b12e59" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:42:55.238150 systemd[1]: Started cri-containerd-8f1b7abac98ffa2082028f30a042536f842f2c6bac5e46a2bcd01b606aa516ab.scope - libcontainer container 8f1b7abac98ffa2082028f30a042536f842f2c6bac5e46a2bcd01b606aa516ab.
Sep 9 23:42:55.238538 containerd[1876]: time="2025-09-09T23:42:55.238501046Z" level=info msg="connecting to shim 50f76226148bebc0d44fce56e851ab0e644f928f7d8150329f366df3ed15519d" address="unix:///run/containerd/s/6f5e16f4e0e357ca49d44839df21201f4b568b1294da2a975e86d5d8e2d8f9ef" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:42:55.258096 systemd[1]: Started cri-containerd-8be3d26c3cf4f88e41e551cd253feb7f79fc9389e1c1f6776d25426b2471e467.scope - libcontainer container 8be3d26c3cf4f88e41e551cd253feb7f79fc9389e1c1f6776d25426b2471e467.
Sep 9 23:42:55.262269 systemd[1]: Started cri-containerd-50f76226148bebc0d44fce56e851ab0e644f928f7d8150329f366df3ed15519d.scope - libcontainer container 50f76226148bebc0d44fce56e851ab0e644f928f7d8150329f366df3ed15519d.
Sep 9 23:42:55.321355 containerd[1876]: time="2025-09-09T23:42:55.321310126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4426.0.0-n-d9fce76d1d,Uid:aea4d3c51b09fbe6be248a6ce3db6e69,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f1b7abac98ffa2082028f30a042536f842f2c6bac5e46a2bcd01b606aa516ab\""
Sep 9 23:42:55.324237 containerd[1876]: time="2025-09-09T23:42:55.324206298Z" level=info msg="CreateContainer within sandbox \"8f1b7abac98ffa2082028f30a042536f842f2c6bac5e46a2bcd01b606aa516ab\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 9 23:42:55.325921 containerd[1876]: time="2025-09-09T23:42:55.325861179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4426.0.0-n-d9fce76d1d,Uid:49496f3c2a886a4cc1c0b3cfeb9a308a,Namespace:kube-system,Attempt:0,} returns sandbox id \"50f76226148bebc0d44fce56e851ab0e644f928f7d8150329f366df3ed15519d\""
Sep 9 23:42:55.327938 containerd[1876]: time="2025-09-09T23:42:55.327891058Z" level=info msg="CreateContainer within sandbox \"50f76226148bebc0d44fce56e851ab0e644f928f7d8150329f366df3ed15519d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 9 23:42:55.330116 containerd[1876]: time="2025-09-09T23:42:55.330083718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426.0.0-n-d9fce76d1d,Uid:802724bf5a92aa097de64c4f676fb945,Namespace:kube-system,Attempt:0,} returns sandbox id \"8be3d26c3cf4f88e41e551cd253feb7f79fc9389e1c1f6776d25426b2471e467\""
Sep 9 23:42:55.335748 containerd[1876]: time="2025-09-09T23:42:55.335717105Z" level=info msg="CreateContainer within sandbox \"8be3d26c3cf4f88e41e551cd253feb7f79fc9389e1c1f6776d25426b2471e467\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 9 23:42:55.347436 kubelet[3004]: W0909 23:42:55.347391 3004 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused
Sep 9 23:42:55.347703 kubelet[3004]: E0909 23:42:55.347452 3004 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:42:55.354082 kubelet[3004]: I0909 23:42:55.354060 3004 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:55.354374 kubelet[3004]: E0909 23:42:55.354331 3004 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:55.369262 containerd[1876]: time="2025-09-09T23:42:55.368624654Z" level=info msg="Container 58d2efa4d93ca126c412142bfdd9fd714a3bd3893fe3fbe603ffde288fe48992: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:42:55.382392 containerd[1876]: time="2025-09-09T23:42:55.382361443Z" level=info msg="Container f60abf8ea65e55b2e6d07125ad373fa8640ef5a6e1dac6cab4ef9232d66a2c58: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:42:55.400473 containerd[1876]: time="2025-09-09T23:42:55.400066432Z" level=info msg="Container e16afae115b99fa7e2e200fbd5d6d56d467fa04de7fa9e2347c949e0fd8e1694: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:42:55.420884 containerd[1876]: time="2025-09-09T23:42:55.420855609Z" level=info msg="CreateContainer within sandbox \"8f1b7abac98ffa2082028f30a042536f842f2c6bac5e46a2bcd01b606aa516ab\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"58d2efa4d93ca126c412142bfdd9fd714a3bd3893fe3fbe603ffde288fe48992\""
Sep 9 23:42:55.421553 containerd[1876]: time="2025-09-09T23:42:55.421530401Z" level=info msg="StartContainer for \"58d2efa4d93ca126c412142bfdd9fd714a3bd3893fe3fbe603ffde288fe48992\""
Sep 9 23:42:55.422466 containerd[1876]: time="2025-09-09T23:42:55.422438888Z" level=info msg="connecting to shim 58d2efa4d93ca126c412142bfdd9fd714a3bd3893fe3fbe603ffde288fe48992" address="unix:///run/containerd/s/81d4a94a4e2db0091e6a7b59bb36397de5b0f4203cc199c22bbe4e3b1c258c0f" protocol=ttrpc version=3
Sep 9 23:42:55.434534 containerd[1876]: time="2025-09-09T23:42:55.434504091Z" level=info msg="CreateContainer within sandbox \"8be3d26c3cf4f88e41e551cd253feb7f79fc9389e1c1f6776d25426b2471e467\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e16afae115b99fa7e2e200fbd5d6d56d467fa04de7fa9e2347c949e0fd8e1694\""
Sep 9 23:42:55.435095 containerd[1876]: time="2025-09-09T23:42:55.435023573Z" level=info msg="StartContainer for \"e16afae115b99fa7e2e200fbd5d6d56d467fa04de7fa9e2347c949e0fd8e1694\""
Sep 9 23:42:55.437678 containerd[1876]: time="2025-09-09T23:42:55.437655872Z" level=info msg="connecting to shim e16afae115b99fa7e2e200fbd5d6d56d467fa04de7fa9e2347c949e0fd8e1694" address="unix:///run/containerd/s/3abb271636a08aeceb5d2aecbd9af89d509f92aa2742d7be5e5ef4d4e0b12e59" protocol=ttrpc version=3
Sep 9 23:42:55.439033 systemd[1]: Started cri-containerd-58d2efa4d93ca126c412142bfdd9fd714a3bd3893fe3fbe603ffde288fe48992.scope - libcontainer container 58d2efa4d93ca126c412142bfdd9fd714a3bd3893fe3fbe603ffde288fe48992.
Sep 9 23:42:55.441811 containerd[1876]: time="2025-09-09T23:42:55.441752846Z" level=info msg="CreateContainer within sandbox \"50f76226148bebc0d44fce56e851ab0e644f928f7d8150329f366df3ed15519d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f60abf8ea65e55b2e6d07125ad373fa8640ef5a6e1dac6cab4ef9232d66a2c58\""
Sep 9 23:42:55.443427 containerd[1876]: time="2025-09-09T23:42:55.443341613Z" level=info msg="StartContainer for \"f60abf8ea65e55b2e6d07125ad373fa8640ef5a6e1dac6cab4ef9232d66a2c58\""
Sep 9 23:42:55.447776 containerd[1876]: time="2025-09-09T23:42:55.447743758Z" level=info msg="connecting to shim f60abf8ea65e55b2e6d07125ad373fa8640ef5a6e1dac6cab4ef9232d66a2c58" address="unix:///run/containerd/s/6f5e16f4e0e357ca49d44839df21201f4b568b1294da2a975e86d5d8e2d8f9ef" protocol=ttrpc version=3
Sep 9 23:42:55.463127 systemd[1]: Started cri-containerd-e16afae115b99fa7e2e200fbd5d6d56d467fa04de7fa9e2347c949e0fd8e1694.scope - libcontainer container e16afae115b99fa7e2e200fbd5d6d56d467fa04de7fa9e2347c949e0fd8e1694.
Sep 9 23:42:55.471144 systemd[1]: Started cri-containerd-f60abf8ea65e55b2e6d07125ad373fa8640ef5a6e1dac6cab4ef9232d66a2c58.scope - libcontainer container f60abf8ea65e55b2e6d07125ad373fa8640ef5a6e1dac6cab4ef9232d66a2c58.
Sep 9 23:42:55.510013 containerd[1876]: time="2025-09-09T23:42:55.509831247Z" level=info msg="StartContainer for \"58d2efa4d93ca126c412142bfdd9fd714a3bd3893fe3fbe603ffde288fe48992\" returns successfully"
Sep 9 23:42:55.510013 containerd[1876]: time="2025-09-09T23:42:55.509932946Z" level=info msg="StartContainer for \"e16afae115b99fa7e2e200fbd5d6d56d467fa04de7fa9e2347c949e0fd8e1694\" returns successfully"
Sep 9 23:42:55.513431 kubelet[3004]: W0909 23:42:55.513223 3004 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.0.0-n-d9fce76d1d&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused
Sep 9 23:42:55.513431 kubelet[3004]: E0909 23:42:55.513279 3004 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.0.0-n-d9fce76d1d&limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:42:55.541215 containerd[1876]: time="2025-09-09T23:42:55.541174845Z" level=info msg="StartContainer for \"f60abf8ea65e55b2e6d07125ad373fa8640ef5a6e1dac6cab4ef9232d66a2c58\" returns successfully"
Sep 9 23:42:55.662632 kubelet[3004]: E0909 23:42:55.662296 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-n-d9fce76d1d\" not found" node="ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:55.662632 kubelet[3004]: E0909 23:42:55.662370 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-n-d9fce76d1d\" not found" node="ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:55.667375 kubelet[3004]: E0909 23:42:55.667354 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-n-d9fce76d1d\" not found" node="ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:56.158046 kubelet[3004]: I0909 23:42:56.158009 3004 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:56.669404 kubelet[3004]: E0909 23:42:56.669245 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-n-d9fce76d1d\" not found" node="ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:56.670209 kubelet[3004]: E0909 23:42:56.670188 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-n-d9fce76d1d\" not found" node="ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:56.744017 kubelet[3004]: E0909 23:42:56.743969 3004 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4426.0.0-n-d9fce76d1d\" not found" node="ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:56.799873 kubelet[3004]: I0909 23:42:56.799498 3004 kubelet_node_status.go:78] "Successfully registered node" node="ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:56.800099 kubelet[3004]: E0909 23:42:56.799930 3004 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4426.0.0-n-d9fce76d1d\": node \"ci-4426.0.0-n-d9fce76d1d\" not found"
Sep 9 23:42:56.809124 kubelet[3004]: E0909 23:42:56.809074 3004 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-n-d9fce76d1d\" not found"
Sep 9 23:42:56.910226 kubelet[3004]: E0909 23:42:56.910173 3004 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-n-d9fce76d1d\" not found"
Sep 9 23:42:57.010938 kubelet[3004]: E0909 23:42:57.010804 3004 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-n-d9fce76d1d\" not found"
Sep 9 23:42:57.111506 kubelet[3004]: E0909 23:42:57.111461 3004 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-n-d9fce76d1d\" not found"
Sep 9 23:42:57.212210 kubelet[3004]: E0909 23:42:57.212167 3004 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-n-d9fce76d1d\" not found"
Sep 9 23:42:57.312826 kubelet[3004]: E0909 23:42:57.312792 3004 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-n-d9fce76d1d\" not found"
Sep 9 23:42:57.383488 kubelet[3004]: I0909 23:42:57.383300 3004 apiserver.go:52] "Watching apiserver"
Sep 9 23:42:57.390349 kubelet[3004]: I0909 23:42:57.390163 3004 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:57.391027 kubelet[3004]: I0909 23:42:57.391005 3004 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 9 23:42:57.394964 kubelet[3004]: E0909 23:42:57.394932 3004 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4426.0.0-n-d9fce76d1d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:57.395129 kubelet[3004]: I0909 23:42:57.395052 3004 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:57.396463 kubelet[3004]: E0909 23:42:57.396422 3004 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4426.0.0-n-d9fce76d1d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:57.396637 kubelet[3004]: I0909 23:42:57.396538 3004 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:57.397871 kubelet[3004]: E0909 23:42:57.397842 3004 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4426.0.0-n-d9fce76d1d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:57.667276 kubelet[3004]: I0909 23:42:57.667159 3004 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:57.669048 kubelet[3004]: E0909 23:42:57.669021 3004 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4426.0.0-n-d9fce76d1d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:59.120102 systemd[1]: Reload requested from client PID 3274 ('systemctl') (unit session-9.scope)...
Sep 9 23:42:59.120124 systemd[1]: Reloading...
Sep 9 23:42:59.201927 zram_generator::config[3325]: No configuration found.
Sep 9 23:42:59.356395 systemd[1]: Reloading finished in 236 ms.
Sep 9 23:42:59.379101 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:42:59.393625 systemd[1]: kubelet.service: Deactivated successfully.
Sep 9 23:42:59.393849 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:42:59.393915 systemd[1]: kubelet.service: Consumed 437ms CPU time, 127.4M memory peak.
Sep 9 23:42:59.395355 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:42:59.567889 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:42:59.570688 (kubelet)[3385]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 23:42:59.602885 kubelet[3385]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 23:42:59.603179 kubelet[3385]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 23:42:59.603218 kubelet[3385]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 23:42:59.603326 kubelet[3385]: I0909 23:42:59.603296 3385 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 23:42:59.607772 kubelet[3385]: I0909 23:42:59.607739 3385 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 9 23:42:59.607772 kubelet[3385]: I0909 23:42:59.607769 3385 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 23:42:59.607971 kubelet[3385]: I0909 23:42:59.607954 3385 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 9 23:42:59.608864 kubelet[3385]: I0909 23:42:59.608841 3385 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 9 23:42:59.614079 kubelet[3385]: I0909 23:42:59.613725 3385 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 23:42:59.615818 kubelet[3385]: I0909 23:42:59.615799 3385 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 23:42:59.623701 kubelet[3385]: I0909 23:42:59.623280 3385 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 23:42:59.623701 kubelet[3385]: I0909 23:42:59.623482 3385 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 23:42:59.623701 kubelet[3385]: I0909 23:42:59.623505 3385 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4426.0.0-n-d9fce76d1d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 23:42:59.623701 kubelet[3385]: I0909 23:42:59.623621 3385 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 23:42:59.624066 kubelet[3385]: I0909 23:42:59.624052 3385 container_manager_linux.go:304] "Creating device plugin manager"
Sep 9 23:42:59.624154 kubelet[3385]: I0909 23:42:59.624144 3385 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 23:42:59.624321 kubelet[3385]: I0909 23:42:59.624309 3385 kubelet.go:446] "Attempting to sync node with API server"
Sep 9 23:42:59.624382 kubelet[3385]: I0909 23:42:59.624373 3385 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 23:42:59.624444 kubelet[3385]: I0909 23:42:59.624437 3385 kubelet.go:352] "Adding apiserver pod source"
Sep 9 23:42:59.624496 kubelet[3385]: I0909 23:42:59.624488 3385 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 23:42:59.625338 kubelet[3385]: I0909 23:42:59.625323 3385 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 9 23:42:59.625682 kubelet[3385]: I0909 23:42:59.625662 3385 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 23:42:59.626059 kubelet[3385]: I0909 23:42:59.626043 3385 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 23:42:59.626152 kubelet[3385]: I0909 23:42:59.626145 3385 server.go:1287] "Started kubelet"
Sep 9 23:42:59.628199 kubelet[3385]: I0909 23:42:59.628072 3385 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 23:42:59.630299 kubelet[3385]: I0909 23:42:59.628620 3385 server.go:479] "Adding debug handlers to kubelet server"
Sep 9 23:42:59.630299 kubelet[3385]: I0909 23:42:59.629147 3385 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 23:42:59.630299 kubelet[3385]: I0909 23:42:59.629307 3385 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 23:42:59.631417 kubelet[3385]: I0909 23:42:59.631399 3385 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 23:42:59.642274 kubelet[3385]: I0909 23:42:59.642251 3385 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 23:42:59.643567 kubelet[3385]: I0909 23:42:59.643548 3385 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 23:42:59.644999 kubelet[3385]: E0909 23:42:59.644980 3385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-n-d9fce76d1d\" not found"
Sep 9 23:42:59.645492 kubelet[3385]: I0909 23:42:59.645470 3385 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 23:42:59.646196 kubelet[3385]: I0909 23:42:59.646036 3385 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 23:42:59.648549 kubelet[3385]: I0909 23:42:59.648490 3385 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 23:42:59.649813 kubelet[3385]: I0909 23:42:59.649784 3385 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 9 23:42:59.649813 kubelet[3385]: I0909 23:42:59.649807 3385 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 9 23:42:59.649890 kubelet[3385]: I0909 23:42:59.649820 3385 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 9 23:42:59.649890 kubelet[3385]: I0909 23:42:59.649826 3385 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 9 23:42:59.649890 kubelet[3385]: E0909 23:42:59.649858 3385 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 23:42:59.650047 kubelet[3385]: I0909 23:42:59.650030 3385 factory.go:221] Registration of the systemd container factory successfully
Sep 9 23:42:59.650250 kubelet[3385]: I0909 23:42:59.650232 3385 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 23:42:59.651522 kubelet[3385]: E0909 23:42:59.651447 3385 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 23:42:59.655117 kubelet[3385]: I0909 23:42:59.655080 3385 factory.go:221] Registration of the containerd container factory successfully
Sep 9 23:42:59.697105 kubelet[3385]: I0909 23:42:59.697080 3385 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 9 23:42:59.697105 kubelet[3385]: I0909 23:42:59.697098 3385 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 9 23:42:59.697105 kubelet[3385]: I0909 23:42:59.697117 3385 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 23:42:59.697246 kubelet[3385]: I0909 23:42:59.697233 3385 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 9 23:42:59.697263 kubelet[3385]: I0909 23:42:59.697241 3385 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 9 23:42:59.697263 kubelet[3385]: I0909 23:42:59.697256 3385 policy_none.go:49] "None policy: Start"
Sep 9 23:42:59.697263 kubelet[3385]: I0909 23:42:59.697263 3385 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 9 23:42:59.697313 kubelet[3385]: I0909 23:42:59.697271 3385 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 23:42:59.697355 kubelet[3385]: I0909 23:42:59.697336 3385 state_mem.go:75] "Updated machine memory state"
Sep 9 23:42:59.700672 kubelet[3385]: I0909 23:42:59.700657 3385 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 23:42:59.701137 kubelet[3385]: I0909 23:42:59.701097 3385 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 23:42:59.701210 kubelet[3385]: I0909 23:42:59.701112 3385 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 23:42:59.701691 kubelet[3385]: I0909 23:42:59.701639 3385 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 23:42:59.703262 kubelet[3385]: E0909 23:42:59.703240 3385 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 9 23:42:59.751305 kubelet[3385]: I0909 23:42:59.751200 3385 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:59.751741 kubelet[3385]: I0909 23:42:59.751541 3385 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:59.751741 kubelet[3385]: I0909 23:42:59.751232 3385 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:59.759590 kubelet[3385]: W0909 23:42:59.759536 3385 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 9 23:42:59.764861 kubelet[3385]: W0909 23:42:59.764830 3385 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 9 23:42:59.765389 kubelet[3385]: W0909 23:42:59.765354 3385 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 9 23:42:59.804071 kubelet[3385]: I0909 23:42:59.804048 3385 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:59.814450 kubelet[3385]: I0909 23:42:59.814411 3385 kubelet_node_status.go:124] "Node was previously registered" node="ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:59.814566 kubelet[3385]: I0909 23:42:59.814555 3385 kubelet_node_status.go:78] "Successfully registered node" node="ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:59.847505 kubelet[3385]: I0909 23:42:59.847467 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aea4d3c51b09fbe6be248a6ce3db6e69-k8s-certs\") pod \"kube-apiserver-ci-4426.0.0-n-d9fce76d1d\" (UID: \"aea4d3c51b09fbe6be248a6ce3db6e69\") " pod="kube-system/kube-apiserver-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:59.847505 kubelet[3385]: I0909 23:42:59.847501 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aea4d3c51b09fbe6be248a6ce3db6e69-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426.0.0-n-d9fce76d1d\" (UID: \"aea4d3c51b09fbe6be248a6ce3db6e69\") " pod="kube-system/kube-apiserver-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:59.847680 kubelet[3385]: I0909 23:42:59.847519 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49496f3c2a886a4cc1c0b3cfeb9a308a-ca-certs\") pod \"kube-controller-manager-ci-4426.0.0-n-d9fce76d1d\" (UID: \"49496f3c2a886a4cc1c0b3cfeb9a308a\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:59.847680 kubelet[3385]: I0909 23:42:59.847530 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49496f3c2a886a4cc1c0b3cfeb9a308a-kubeconfig\") pod \"kube-controller-manager-ci-4426.0.0-n-d9fce76d1d\" (UID: \"49496f3c2a886a4cc1c0b3cfeb9a308a\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:59.847680 kubelet[3385]: I0909 23:42:59.847558 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49496f3c2a886a4cc1c0b3cfeb9a308a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4426.0.0-n-d9fce76d1d\" (UID: \"49496f3c2a886a4cc1c0b3cfeb9a308a\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:59.847680 kubelet[3385]: I0909 23:42:59.847568 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/802724bf5a92aa097de64c4f676fb945-kubeconfig\") pod \"kube-scheduler-ci-4426.0.0-n-d9fce76d1d\" (UID: \"802724bf5a92aa097de64c4f676fb945\") " pod="kube-system/kube-scheduler-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:59.847680 kubelet[3385]: I0909 23:42:59.847580 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aea4d3c51b09fbe6be248a6ce3db6e69-ca-certs\") pod \"kube-apiserver-ci-4426.0.0-n-d9fce76d1d\" (UID: \"aea4d3c51b09fbe6be248a6ce3db6e69\") " pod="kube-system/kube-apiserver-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:59.847761 kubelet[3385]: I0909 23:42:59.847590 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/49496f3c2a886a4cc1c0b3cfeb9a308a-flexvolume-dir\") pod \"kube-controller-manager-ci-4426.0.0-n-d9fce76d1d\" (UID: \"49496f3c2a886a4cc1c0b3cfeb9a308a\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:42:59.847761 kubelet[3385]: I0909 23:42:59.847602 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49496f3c2a886a4cc1c0b3cfeb9a308a-k8s-certs\") pod \"kube-controller-manager-ci-4426.0.0-n-d9fce76d1d\" (UID: \"49496f3c2a886a4cc1c0b3cfeb9a308a\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:43:00.152785 sudo[3418]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 9 23:43:00.153406 sudo[3418]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 9 23:43:00.385032 sudo[3418]: pam_unix(sudo:session): session closed for user root
Sep 9 23:43:00.625663 kubelet[3385]: I0909 23:43:00.625622 3385 apiserver.go:52] "Watching apiserver"
Sep 9 23:43:00.646157 kubelet[3385]: I0909 23:43:00.646122 3385 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 9 23:43:00.685667 kubelet[3385]: I0909 23:43:00.685634 3385 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:43:00.697545 kubelet[3385]: W0909 23:43:00.697518 3385 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 9 23:43:00.697634 kubelet[3385]: E0909 23:43:00.697576 3385 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4426.0.0-n-d9fce76d1d\" already exists" pod="kube-system/kube-apiserver-ci-4426.0.0-n-d9fce76d1d"
Sep 9 23:43:00.716662 kubelet[3385]: I0909 23:43:00.716609 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4426.0.0-n-d9fce76d1d" podStartSLOduration=1.716597538 podStartE2EDuration="1.716597538s" podCreationTimestamp="2025-09-09 23:42:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:43:00.706067451 +0000 UTC m=+1.132441717" watchObservedRunningTime="2025-09-09 23:43:00.716597538 +0000 UTC m=+1.142971804"
Sep 9 23:43:00.728120 kubelet[3385]: I0909 23:43:00.728076 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4426.0.0-n-d9fce76d1d" podStartSLOduration=1.728066225 podStartE2EDuration="1.728066225s" podCreationTimestamp="2025-09-09 23:42:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:43:00.717197062 +0000 UTC m=+1.143571344" watchObservedRunningTime="2025-09-09 23:43:00.728066225 +0000 UTC m=+1.154440491"
Sep 9 23:43:00.742586 kubelet[3385]: I0909 23:43:00.742545 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4426.0.0-n-d9fce76d1d" podStartSLOduration=1.742532835 podStartE2EDuration="1.742532835s" podCreationTimestamp="2025-09-09 23:42:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:43:00.72859197 +0000 UTC m=+1.154966236" watchObservedRunningTime="2025-09-09 23:43:00.742532835 +0000 UTC m=+1.168907101"
Sep 9 23:43:01.958707 sudo[2368]: pam_unix(sudo:session): session closed for user root
Sep 9 23:43:02.029891 sshd[2367]: Connection closed by 10.200.16.10 port 50162
Sep 9 23:43:02.030403 sshd-session[2364]: pam_unix(sshd:session): session closed for user core
Sep 9 23:43:02.034131 systemd[1]: sshd@6-10.200.20.4:22-10.200.16.10:50162.service: Deactivated successfully.
Sep 9 23:43:02.035988 systemd[1]: session-9.scope: Deactivated successfully.
Sep 9 23:43:02.036254 systemd[1]: session-9.scope: Consumed 3.747s CPU time, 258.6M memory peak.
Sep 9 23:43:02.037281 systemd-logind[1849]: Session 9 logged out. Waiting for processes to exit.
Sep 9 23:43:02.038679 systemd-logind[1849]: Removed session 9.
Sep 9 23:43:05.072685 kubelet[3385]: I0909 23:43:05.072628 3385 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 9 23:43:05.073927 containerd[1876]: time="2025-09-09T23:43:05.073214489Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 9 23:43:05.074140 kubelet[3385]: I0909 23:43:05.073401 3385 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 9 23:43:05.763583 systemd[1]: Created slice kubepods-besteffort-pod8fad0863_16a8_497c_872d_2a80e0809cd8.slice - libcontainer container kubepods-besteffort-pod8fad0863_16a8_497c_872d_2a80e0809cd8.slice.
Sep 9 23:43:05.776059 systemd[1]: Created slice kubepods-burstable-pod97ef9171_e0e1_485e_a99c_ae80f46655f4.slice - libcontainer container kubepods-burstable-pod97ef9171_e0e1_485e_a99c_ae80f46655f4.slice.
Sep 9 23:43:05.780193 kubelet[3385]: I0909 23:43:05.780142 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-xtables-lock\") pod \"cilium-gp8lp\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") " pod="kube-system/cilium-gp8lp"
Sep 9 23:43:05.780365 kubelet[3385]: I0909 23:43:05.780177 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-host-proc-sys-kernel\") pod \"cilium-gp8lp\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") " pod="kube-system/cilium-gp8lp"
Sep 9 23:43:05.780365 kubelet[3385]: I0909 23:43:05.780326 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54wj6\" (UniqueName: \"kubernetes.io/projected/97ef9171-e0e1-485e-a99c-ae80f46655f4-kube-api-access-54wj6\") pod \"cilium-gp8lp\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") " pod="kube-system/cilium-gp8lp"
Sep 9 23:43:05.780365 kubelet[3385]: I0909 23:43:05.780343 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8fad0863-16a8-497c-872d-2a80e0809cd8-lib-modules\") pod \"kube-proxy-6pvq4\" (UID: \"8fad0863-16a8-497c-872d-2a80e0809cd8\") " pod="kube-system/kube-proxy-6pvq4"
Sep 9 23:43:05.780537 kubelet[3385]: I0909 23:43:05.780496 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/97ef9171-e0e1-485e-a99c-ae80f46655f4-cilium-config-path\") pod \"cilium-gp8lp\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") " pod="kube-system/cilium-gp8lp"
Sep 9 23:43:05.780761 kubelet[3385]: I0909 23:43:05.780736 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/97ef9171-e0e1-485e-a99c-ae80f46655f4-hubble-tls\") pod \"cilium-gp8lp\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") " pod="kube-system/cilium-gp8lp"
Sep 9 23:43:05.780821 kubelet[3385]: I0909 23:43:05.780776 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dq2z\" (UniqueName: \"kubernetes.io/projected/8fad0863-16a8-497c-872d-2a80e0809cd8-kube-api-access-9dq2z\") pod \"kube-proxy-6pvq4\" (UID: \"8fad0863-16a8-497c-872d-2a80e0809cd8\") " pod="kube-system/kube-proxy-6pvq4"
Sep 9 23:43:05.780821 kubelet[3385]: I0909 23:43:05.780793 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-cni-path\") pod \"cilium-gp8lp\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") " pod="kube-system/cilium-gp8lp"
Sep 9 23:43:05.780821 kubelet[3385]: I0909 23:43:05.780805 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-lib-modules\") pod \"cilium-gp8lp\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") " pod="kube-system/cilium-gp8lp"
Sep 9 23:43:05.780869 kubelet[3385]: I0909 23:43:05.780816 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-bpf-maps\") pod \"cilium-gp8lp\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") " pod="kube-system/cilium-gp8lp"
Sep 9 23:43:05.780869 kubelet[3385]: I0909 23:43:05.780839 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-cilium-cgroup\") pod \"cilium-gp8lp\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") " pod="kube-system/cilium-gp8lp"
Sep 9 23:43:05.780869 kubelet[3385]: I0909 23:43:05.780852 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-hostproc\") pod \"cilium-gp8lp\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") " pod="kube-system/cilium-gp8lp"
Sep 9 23:43:05.780869 kubelet[3385]: I0909 23:43:05.780868 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-etc-cni-netd\") pod \"cilium-gp8lp\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") " pod="kube-system/cilium-gp8lp"
Sep 9 23:43:05.781185 kubelet[3385]: I0909 23:43:05.780916 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-host-proc-sys-net\") pod \"cilium-gp8lp\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") " pod="kube-system/cilium-gp8lp"
Sep 9 23:43:05.781185 kubelet[3385]: I0909 23:43:05.780931 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8fad0863-16a8-497c-872d-2a80e0809cd8-kube-proxy\") pod \"kube-proxy-6pvq4\" (UID: \"8fad0863-16a8-497c-872d-2a80e0809cd8\") " pod="kube-system/kube-proxy-6pvq4"
Sep 9 23:43:05.781185 kubelet[3385]: I0909 23:43:05.780941 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/97ef9171-e0e1-485e-a99c-ae80f46655f4-clustermesh-secrets\") pod \"cilium-gp8lp\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") " pod="kube-system/cilium-gp8lp"
Sep 9 23:43:05.781185 kubelet[3385]: I0909 23:43:05.780950 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-cilium-run\") pod \"cilium-gp8lp\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") " pod="kube-system/cilium-gp8lp"
Sep 9 23:43:05.781185 kubelet[3385]: I0909 23:43:05.780960 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8fad0863-16a8-497c-872d-2a80e0809cd8-xtables-lock\") pod \"kube-proxy-6pvq4\" (UID: \"8fad0863-16a8-497c-872d-2a80e0809cd8\") " pod="kube-system/kube-proxy-6pvq4"
Sep 9 23:43:06.073648 containerd[1876]: time="2025-09-09T23:43:06.073592471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6pvq4,Uid:8fad0863-16a8-497c-872d-2a80e0809cd8,Namespace:kube-system,Attempt:0,}"
Sep 9 23:43:06.083718 containerd[1876]: time="2025-09-09T23:43:06.083568060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gp8lp,Uid:97ef9171-e0e1-485e-a99c-ae80f46655f4,Namespace:kube-system,Attempt:0,}"
Sep 9 23:43:06.104423 systemd[1]: Created slice kubepods-besteffort-pode739d1b3_43c5_4ee1_8558_0ff0515ccd26.slice - libcontainer container kubepods-besteffort-pode739d1b3_43c5_4ee1_8558_0ff0515ccd26.slice.
Sep 9 23:43:06.183757 kubelet[3385]: I0909 23:43:06.183721 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccwqk\" (UniqueName: \"kubernetes.io/projected/e739d1b3-43c5-4ee1-8558-0ff0515ccd26-kube-api-access-ccwqk\") pod \"cilium-operator-6c4d7847fc-95r5b\" (UID: \"e739d1b3-43c5-4ee1-8558-0ff0515ccd26\") " pod="kube-system/cilium-operator-6c4d7847fc-95r5b"
Sep 9 23:43:06.184673 kubelet[3385]: I0909 23:43:06.184160 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e739d1b3-43c5-4ee1-8558-0ff0515ccd26-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-95r5b\" (UID: \"e739d1b3-43c5-4ee1-8558-0ff0515ccd26\") " pod="kube-system/cilium-operator-6c4d7847fc-95r5b"
Sep 9 23:43:06.192702 containerd[1876]: time="2025-09-09T23:43:06.192662796Z" level=info msg="connecting to shim 2f90034032e08d231465f760789fb9f8a7e08d086c5dd5ef435bff8b461e0196" address="unix:///run/containerd/s/9e497540fb6002d0d0e9c328ffcb03faa3f4b003ab5c07c8a2d55616570d1e2a" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:43:06.193708 containerd[1876]: time="2025-09-09T23:43:06.193037904Z" level=info msg="connecting to shim 558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8" address="unix:///run/containerd/s/67f33990154b791c33342bd7f36836f3723885fbf2895dd1cb1b66941150683a" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:43:06.209039 systemd[1]: Started cri-containerd-2f90034032e08d231465f760789fb9f8a7e08d086c5dd5ef435bff8b461e0196.scope - libcontainer container 2f90034032e08d231465f760789fb9f8a7e08d086c5dd5ef435bff8b461e0196.
Sep 9 23:43:06.211692 systemd[1]: Started cri-containerd-558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8.scope - libcontainer container 558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8.
Sep 9 23:43:06.243742 containerd[1876]: time="2025-09-09T23:43:06.243703115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gp8lp,Uid:97ef9171-e0e1-485e-a99c-ae80f46655f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8\""
Sep 9 23:43:06.245562 containerd[1876]: time="2025-09-09T23:43:06.245506391Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 9 23:43:06.247422 containerd[1876]: time="2025-09-09T23:43:06.247393878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6pvq4,Uid:8fad0863-16a8-497c-872d-2a80e0809cd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f90034032e08d231465f760789fb9f8a7e08d086c5dd5ef435bff8b461e0196\""
Sep 9 23:43:06.249946 containerd[1876]: time="2025-09-09T23:43:06.249879497Z" level=info msg="CreateContainer within sandbox \"2f90034032e08d231465f760789fb9f8a7e08d086c5dd5ef435bff8b461e0196\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 9 23:43:06.280180 containerd[1876]: time="2025-09-09T23:43:06.280142370Z" level=info msg="Container 4a9c528b211f7320b4e101d6f550b0345a19a277433d9b588eb8d13adc9f23eb: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:43:06.305931 containerd[1876]: time="2025-09-09T23:43:06.305807634Z" level=info msg="CreateContainer within sandbox \"2f90034032e08d231465f760789fb9f8a7e08d086c5dd5ef435bff8b461e0196\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4a9c528b211f7320b4e101d6f550b0345a19a277433d9b588eb8d13adc9f23eb\""
Sep 9 23:43:06.306351 containerd[1876]: time="2025-09-09T23:43:06.306331708Z" level=info msg="StartContainer for \"4a9c528b211f7320b4e101d6f550b0345a19a277433d9b588eb8d13adc9f23eb\""
Sep 9 23:43:06.307767 containerd[1876]: time="2025-09-09T23:43:06.307709922Z" level=info msg="connecting to shim 4a9c528b211f7320b4e101d6f550b0345a19a277433d9b588eb8d13adc9f23eb" address="unix:///run/containerd/s/9e497540fb6002d0d0e9c328ffcb03faa3f4b003ab5c07c8a2d55616570d1e2a" protocol=ttrpc version=3
Sep 9 23:43:06.329026 systemd[1]: Started cri-containerd-4a9c528b211f7320b4e101d6f550b0345a19a277433d9b588eb8d13adc9f23eb.scope - libcontainer container 4a9c528b211f7320b4e101d6f550b0345a19a277433d9b588eb8d13adc9f23eb.
Sep 9 23:43:06.360088 containerd[1876]: time="2025-09-09T23:43:06.360052772Z" level=info msg="StartContainer for \"4a9c528b211f7320b4e101d6f550b0345a19a277433d9b588eb8d13adc9f23eb\" returns successfully"
Sep 9 23:43:06.407869 containerd[1876]: time="2025-09-09T23:43:06.407828350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-95r5b,Uid:e739d1b3-43c5-4ee1-8558-0ff0515ccd26,Namespace:kube-system,Attempt:0,}"
Sep 9 23:43:06.456887 containerd[1876]: time="2025-09-09T23:43:06.456842265Z" level=info msg="connecting to shim c5545a2e34b802a5c77f1ab070524499a02c42a7d4eccf4e20dd7ee2789c190f" address="unix:///run/containerd/s/1233533020a397045eea34c44d69e0f6486e131738d2f6ff9d16c5bc03ce9581" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:43:06.473444 systemd[1]: Started cri-containerd-c5545a2e34b802a5c77f1ab070524499a02c42a7d4eccf4e20dd7ee2789c190f.scope - libcontainer container c5545a2e34b802a5c77f1ab070524499a02c42a7d4eccf4e20dd7ee2789c190f.
Sep 9 23:43:06.514465 containerd[1876]: time="2025-09-09T23:43:06.514227796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-95r5b,Uid:e739d1b3-43c5-4ee1-8558-0ff0515ccd26,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5545a2e34b802a5c77f1ab070524499a02c42a7d4eccf4e20dd7ee2789c190f\""
Sep 9 23:43:06.710357 kubelet[3385]: I0909 23:43:06.710213 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6pvq4" podStartSLOduration=1.709866611 podStartE2EDuration="1.709866611s" podCreationTimestamp="2025-09-09 23:43:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:43:06.709786136 +0000 UTC m=+7.136160410" watchObservedRunningTime="2025-09-09 23:43:06.709866611 +0000 UTC m=+7.136240885"
Sep 9 23:43:15.743155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount457099141.mount: Deactivated successfully.
Sep 9 23:43:17.244223 containerd[1876]: time="2025-09-09T23:43:17.244178156Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:43:17.247839 containerd[1876]: time="2025-09-09T23:43:17.247817660Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Sep 9 23:43:17.254664 containerd[1876]: time="2025-09-09T23:43:17.254638020Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:43:17.255644 containerd[1876]: time="2025-09-09T23:43:17.255463159Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 11.009925759s"
Sep 9 23:43:17.255644 containerd[1876]: time="2025-09-09T23:43:17.255492808Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 9 23:43:17.256619 containerd[1876]: time="2025-09-09T23:43:17.256601333Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 9 23:43:17.258280 containerd[1876]: time="2025-09-09T23:43:17.258151144Z" level=info msg="CreateContainer within sandbox \"558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 9 23:43:17.300594 containerd[1876]: time="2025-09-09T23:43:17.300532313Z" level=info msg="Container 298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:43:17.362351 containerd[1876]: time="2025-09-09T23:43:17.362249646Z" level=info msg="CreateContainer within sandbox \"558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3\""
Sep 9 23:43:17.362909 containerd[1876]: time="2025-09-09T23:43:17.362671100Z" level=info msg="StartContainer for \"298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3\""
Sep 9 23:43:17.363704 containerd[1876]: time="2025-09-09T23:43:17.363670093Z" level=info msg="connecting to shim 298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3" address="unix:///run/containerd/s/67f33990154b791c33342bd7f36836f3723885fbf2895dd1cb1b66941150683a" protocol=ttrpc version=3
Sep 9 23:43:17.391026 systemd[1]: Started cri-containerd-298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3.scope - libcontainer container 298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3.
Sep 9 23:43:17.421703 containerd[1876]: time="2025-09-09T23:43:17.421663800Z" level=info msg="StartContainer for \"298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3\" returns successfully"
Sep 9 23:43:17.423014 systemd[1]: cri-containerd-298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3.scope: Deactivated successfully.
Sep 9 23:43:17.425876 containerd[1876]: time="2025-09-09T23:43:17.425840897Z" level=info msg="received exit event container_id:\"298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3\" id:\"298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3\" pid:3800 exited_at:{seconds:1757461397 nanos:425308840}" Sep 9 23:43:17.426115 containerd[1876]: time="2025-09-09T23:43:17.426096874Z" level=info msg="TaskExit event in podsandbox handler container_id:\"298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3\" id:\"298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3\" pid:3800 exited_at:{seconds:1757461397 nanos:425308840}" Sep 9 23:43:17.439664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3-rootfs.mount: Deactivated successfully. Sep 9 23:43:19.721362 containerd[1876]: time="2025-09-09T23:43:19.721314239Z" level=info msg="CreateContainer within sandbox \"558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 23:43:19.772171 containerd[1876]: time="2025-09-09T23:43:19.770935959Z" level=info msg="Container 1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:43:19.789761 containerd[1876]: time="2025-09-09T23:43:19.789728352Z" level=info msg="CreateContainer within sandbox \"558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20\"" Sep 9 23:43:19.790325 containerd[1876]: time="2025-09-09T23:43:19.790299899Z" level=info msg="StartContainer for \"1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20\"" Sep 9 23:43:19.791065 containerd[1876]: time="2025-09-09T23:43:19.791039932Z" level=info msg="connecting to shim 
1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20" address="unix:///run/containerd/s/67f33990154b791c33342bd7f36836f3723885fbf2895dd1cb1b66941150683a" protocol=ttrpc version=3 Sep 9 23:43:19.811060 systemd[1]: Started cri-containerd-1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20.scope - libcontainer container 1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20. Sep 9 23:43:19.841926 containerd[1876]: time="2025-09-09T23:43:19.841355018Z" level=info msg="StartContainer for \"1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20\" returns successfully" Sep 9 23:43:19.855229 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 23:43:19.855888 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:43:19.857418 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 9 23:43:19.859195 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 23:43:19.863575 containerd[1876]: time="2025-09-09T23:43:19.859295808Z" level=info msg="received exit event container_id:\"1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20\" id:\"1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20\" pid:3845 exited_at:{seconds:1757461399 nanos:859145891}" Sep 9 23:43:19.863575 containerd[1876]: time="2025-09-09T23:43:19.860075185Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20\" id:\"1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20\" pid:3845 exited_at:{seconds:1757461399 nanos:859145891}" Sep 9 23:43:19.860358 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 23:43:19.860638 systemd[1]: cri-containerd-1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20.scope: Deactivated successfully. Sep 9 23:43:19.880166 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 9 23:43:20.725269 containerd[1876]: time="2025-09-09T23:43:20.725223846Z" level=info msg="CreateContainer within sandbox \"558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 23:43:20.755846 containerd[1876]: time="2025-09-09T23:43:20.755804739Z" level=info msg="Container 92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:43:20.770243 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20-rootfs.mount: Deactivated successfully. Sep 9 23:43:20.786440 containerd[1876]: time="2025-09-09T23:43:20.786401345Z" level=info msg="CreateContainer within sandbox \"558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f\"" Sep 9 23:43:20.786948 containerd[1876]: time="2025-09-09T23:43:20.786916546Z" level=info msg="StartContainer for \"92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f\"" Sep 9 23:43:20.787860 containerd[1876]: time="2025-09-09T23:43:20.787795871Z" level=info msg="connecting to shim 92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f" address="unix:///run/containerd/s/67f33990154b791c33342bd7f36836f3723885fbf2895dd1cb1b66941150683a" protocol=ttrpc version=3 Sep 9 23:43:20.805090 systemd[1]: Started cri-containerd-92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f.scope - libcontainer container 92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f. Sep 9 23:43:20.830992 systemd[1]: cri-containerd-92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f.scope: Deactivated successfully. 
Sep 9 23:43:20.834278 containerd[1876]: time="2025-09-09T23:43:20.832889730Z" level=info msg="TaskExit event in podsandbox handler container_id:\"92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f\" id:\"92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f\" pid:3896 exited_at:{seconds:1757461400 nanos:832569399}" Sep 9 23:43:20.836551 containerd[1876]: time="2025-09-09T23:43:20.836451343Z" level=info msg="received exit event container_id:\"92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f\" id:\"92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f\" pid:3896 exited_at:{seconds:1757461400 nanos:832569399}" Sep 9 23:43:20.838434 containerd[1876]: time="2025-09-09T23:43:20.838310508Z" level=info msg="StartContainer for \"92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f\" returns successfully" Sep 9 23:43:20.852491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f-rootfs.mount: Deactivated successfully. 
Sep 9 23:43:21.732381 containerd[1876]: time="2025-09-09T23:43:21.732031035Z" level=info msg="CreateContainer within sandbox \"558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 23:43:21.754581 containerd[1876]: time="2025-09-09T23:43:21.754362105Z" level=info msg="Container 2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:43:21.771267 containerd[1876]: time="2025-09-09T23:43:21.771235156Z" level=info msg="CreateContainer within sandbox \"558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98\"" Sep 9 23:43:21.772160 containerd[1876]: time="2025-09-09T23:43:21.772057727Z" level=info msg="StartContainer for \"2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98\"" Sep 9 23:43:21.772984 containerd[1876]: time="2025-09-09T23:43:21.772934956Z" level=info msg="connecting to shim 2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98" address="unix:///run/containerd/s/67f33990154b791c33342bd7f36836f3723885fbf2895dd1cb1b66941150683a" protocol=ttrpc version=3 Sep 9 23:43:21.795068 systemd[1]: Started cri-containerd-2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98.scope - libcontainer container 2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98. Sep 9 23:43:21.812869 systemd[1]: cri-containerd-2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98.scope: Deactivated successfully. 
Sep 9 23:43:21.815599 containerd[1876]: time="2025-09-09T23:43:21.815568974Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98\" id:\"2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98\" pid:3941 exited_at:{seconds:1757461401 nanos:815226834}" Sep 9 23:43:21.819439 containerd[1876]: time="2025-09-09T23:43:21.819404188Z" level=info msg="received exit event container_id:\"2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98\" id:\"2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98\" pid:3941 exited_at:{seconds:1757461401 nanos:815226834}" Sep 9 23:43:21.825554 containerd[1876]: time="2025-09-09T23:43:21.825521549Z" level=info msg="StartContainer for \"2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98\" returns successfully" Sep 9 23:43:21.837557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98-rootfs.mount: Deactivated successfully. Sep 9 23:43:22.735307 containerd[1876]: time="2025-09-09T23:43:22.735264979Z" level=info msg="CreateContainer within sandbox \"558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 23:43:22.764919 containerd[1876]: time="2025-09-09T23:43:22.764457787Z" level=info msg="Container 8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:43:22.766519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2707708356.mount: Deactivated successfully. 
Sep 9 23:43:22.786160 containerd[1876]: time="2025-09-09T23:43:22.786123267Z" level=info msg="CreateContainer within sandbox \"558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38\"" Sep 9 23:43:22.787567 containerd[1876]: time="2025-09-09T23:43:22.787539138Z" level=info msg="StartContainer for \"8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38\"" Sep 9 23:43:22.788285 containerd[1876]: time="2025-09-09T23:43:22.788261738Z" level=info msg="connecting to shim 8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38" address="unix:///run/containerd/s/67f33990154b791c33342bd7f36836f3723885fbf2895dd1cb1b66941150683a" protocol=ttrpc version=3 Sep 9 23:43:22.806716 systemd[1]: Started cri-containerd-8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38.scope - libcontainer container 8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38. Sep 9 23:43:22.842044 containerd[1876]: time="2025-09-09T23:43:22.841993944Z" level=info msg="StartContainer for \"8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38\" returns successfully" Sep 9 23:43:22.908737 containerd[1876]: time="2025-09-09T23:43:22.908695161Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38\" id:\"19f3057886b5ae630e5db85eb6d105dee1da63cdacdffa24d1a172bce101b5d4\" pid:4014 exited_at:{seconds:1757461402 nanos:908363974}" Sep 9 23:43:22.966395 kubelet[3385]: I0909 23:43:22.966327 3385 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 23:43:23.003628 systemd[1]: Created slice kubepods-burstable-pode828d1cc_d7c8_4b37_b678_eace632928ad.slice - libcontainer container kubepods-burstable-pode828d1cc_d7c8_4b37_b678_eace632928ad.slice. 
Sep 9 23:43:23.013150 systemd[1]: Created slice kubepods-burstable-pod141d30d6_3857_46eb_a9c9_09822c0d2c2a.slice - libcontainer container kubepods-burstable-pod141d30d6_3857_46eb_a9c9_09822c0d2c2a.slice. Sep 9 23:43:23.085069 kubelet[3385]: I0909 23:43:23.084930 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e828d1cc-d7c8-4b37-b678-eace632928ad-config-volume\") pod \"coredns-668d6bf9bc-tnpx6\" (UID: \"e828d1cc-d7c8-4b37-b678-eace632928ad\") " pod="kube-system/coredns-668d6bf9bc-tnpx6" Sep 9 23:43:23.085200 kubelet[3385]: I0909 23:43:23.085181 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdk4r\" (UniqueName: \"kubernetes.io/projected/e828d1cc-d7c8-4b37-b678-eace632928ad-kube-api-access-xdk4r\") pod \"coredns-668d6bf9bc-tnpx6\" (UID: \"e828d1cc-d7c8-4b37-b678-eace632928ad\") " pod="kube-system/coredns-668d6bf9bc-tnpx6" Sep 9 23:43:23.085230 kubelet[3385]: I0909 23:43:23.085209 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvkbk\" (UniqueName: \"kubernetes.io/projected/141d30d6-3857-46eb-a9c9-09822c0d2c2a-kube-api-access-xvkbk\") pod \"coredns-668d6bf9bc-fqxl4\" (UID: \"141d30d6-3857-46eb-a9c9-09822c0d2c2a\") " pod="kube-system/coredns-668d6bf9bc-fqxl4" Sep 9 23:43:23.085349 kubelet[3385]: I0909 23:43:23.085322 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/141d30d6-3857-46eb-a9c9-09822c0d2c2a-config-volume\") pod \"coredns-668d6bf9bc-fqxl4\" (UID: \"141d30d6-3857-46eb-a9c9-09822c0d2c2a\") " pod="kube-system/coredns-668d6bf9bc-fqxl4" Sep 9 23:43:23.310349 containerd[1876]: time="2025-09-09T23:43:23.310282989Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-tnpx6,Uid:e828d1cc-d7c8-4b37-b678-eace632928ad,Namespace:kube-system,Attempt:0,}" Sep 9 23:43:23.316971 containerd[1876]: time="2025-09-09T23:43:23.316841148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fqxl4,Uid:141d30d6-3857-46eb-a9c9-09822c0d2c2a,Namespace:kube-system,Attempt:0,}" Sep 9 23:43:23.379365 containerd[1876]: time="2025-09-09T23:43:23.379326914Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:23.383935 containerd[1876]: time="2025-09-09T23:43:23.383907737Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 9 23:43:23.387872 containerd[1876]: time="2025-09-09T23:43:23.387826242Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:23.388927 containerd[1876]: time="2025-09-09T23:43:23.388744120Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 6.131985326s" Sep 9 23:43:23.388927 containerd[1876]: time="2025-09-09T23:43:23.388777073Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 9 23:43:23.390847 
containerd[1876]: time="2025-09-09T23:43:23.390807508Z" level=info msg="CreateContainer within sandbox \"c5545a2e34b802a5c77f1ab070524499a02c42a7d4eccf4e20dd7ee2789c190f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 23:43:23.408117 containerd[1876]: time="2025-09-09T23:43:23.408090084Z" level=info msg="Container 04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:43:23.428373 containerd[1876]: time="2025-09-09T23:43:23.428340174Z" level=info msg="CreateContainer within sandbox \"c5545a2e34b802a5c77f1ab070524499a02c42a7d4eccf4e20dd7ee2789c190f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e\"" Sep 9 23:43:23.429155 containerd[1876]: time="2025-09-09T23:43:23.429077406Z" level=info msg="StartContainer for \"04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e\"" Sep 9 23:43:23.430560 containerd[1876]: time="2025-09-09T23:43:23.430480516Z" level=info msg="connecting to shim 04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e" address="unix:///run/containerd/s/1233533020a397045eea34c44d69e0f6486e131738d2f6ff9d16c5bc03ce9581" protocol=ttrpc version=3 Sep 9 23:43:23.446047 systemd[1]: Started cri-containerd-04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e.scope - libcontainer container 04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e. 
Sep 9 23:43:23.479627 containerd[1876]: time="2025-09-09T23:43:23.479417677Z" level=info msg="StartContainer for \"04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e\" returns successfully" Sep 9 23:43:23.751609 kubelet[3385]: I0909 23:43:23.751320 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-95r5b" podStartSLOduration=0.877492571 podStartE2EDuration="17.751303744s" podCreationTimestamp="2025-09-09 23:43:06 +0000 UTC" firstStartedPulling="2025-09-09 23:43:06.515590353 +0000 UTC m=+6.941964627" lastFinishedPulling="2025-09-09 23:43:23.389401534 +0000 UTC m=+23.815775800" observedRunningTime="2025-09-09 23:43:23.750504358 +0000 UTC m=+24.176878624" watchObservedRunningTime="2025-09-09 23:43:23.751303744 +0000 UTC m=+24.177678018" Sep 9 23:43:27.109369 systemd-networkd[1671]: cilium_host: Link UP Sep 9 23:43:27.109841 systemd-networkd[1671]: cilium_net: Link UP Sep 9 23:43:27.110350 systemd-networkd[1671]: cilium_host: Gained carrier Sep 9 23:43:27.110743 systemd-networkd[1671]: cilium_net: Gained carrier Sep 9 23:43:27.264277 systemd-networkd[1671]: cilium_vxlan: Link UP Sep 9 23:43:27.264286 systemd-networkd[1671]: cilium_vxlan: Gained carrier Sep 9 23:43:27.549085 kernel: NET: Registered PF_ALG protocol family Sep 9 23:43:27.833023 systemd-networkd[1671]: cilium_host: Gained IPv6LL Sep 9 23:43:27.961057 systemd-networkd[1671]: cilium_net: Gained IPv6LL Sep 9 23:43:28.096570 systemd-networkd[1671]: lxc_health: Link UP Sep 9 23:43:28.098719 systemd-networkd[1671]: lxc_health: Gained carrier Sep 9 23:43:28.115533 kubelet[3385]: I0909 23:43:28.115479 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gp8lp" podStartSLOduration=12.103966215 podStartE2EDuration="23.11546277s" podCreationTimestamp="2025-09-09 23:43:05 +0000 UTC" firstStartedPulling="2025-09-09 23:43:06.245034159 +0000 UTC m=+6.671408425" lastFinishedPulling="2025-09-09 
23:43:17.256530714 +0000 UTC m=+17.682904980" observedRunningTime="2025-09-09 23:43:23.769722109 +0000 UTC m=+24.196096375" watchObservedRunningTime="2025-09-09 23:43:28.11546277 +0000 UTC m=+28.541837036" Sep 9 23:43:28.366578 systemd-networkd[1671]: lxc023680bfae90: Link UP Sep 9 23:43:28.381933 kernel: eth0: renamed from tmpa6eac Sep 9 23:43:28.388887 systemd-networkd[1671]: lxc4aa668ed351b: Link UP Sep 9 23:43:28.389923 kernel: eth0: renamed from tmp0f882 Sep 9 23:43:28.391024 systemd-networkd[1671]: lxc023680bfae90: Gained carrier Sep 9 23:43:28.394455 systemd-networkd[1671]: lxc4aa668ed351b: Gained carrier Sep 9 23:43:28.602068 systemd-networkd[1671]: cilium_vxlan: Gained IPv6LL Sep 9 23:43:29.369102 systemd-networkd[1671]: lxc_health: Gained IPv6LL Sep 9 23:43:29.497058 systemd-networkd[1671]: lxc023680bfae90: Gained IPv6LL Sep 9 23:43:29.561100 systemd-networkd[1671]: lxc4aa668ed351b: Gained IPv6LL Sep 9 23:43:30.982213 containerd[1876]: time="2025-09-09T23:43:30.982165960Z" level=info msg="connecting to shim 0f882b4578a65e4b4cb4322a9bcf6761fd395c4587f508f9a9b67f0ad7aaf413" address="unix:///run/containerd/s/7e2fadf0328d0a086396929d72bada08896445bd7c25e1636b7d2d40f0e0b8ca" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:43:30.993762 containerd[1876]: time="2025-09-09T23:43:30.993613518Z" level=info msg="connecting to shim a6eace2d8a1ed533067c60e96652161f01f0f8a087977f3e81e514164f3e0c31" address="unix:///run/containerd/s/2aa28da88ee9c48d06ff1ff277f79b3c662dfdaf3e5e96c4bf9eac2580a3e9b3" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:43:31.020027 systemd[1]: Started cri-containerd-0f882b4578a65e4b4cb4322a9bcf6761fd395c4587f508f9a9b67f0ad7aaf413.scope - libcontainer container 0f882b4578a65e4b4cb4322a9bcf6761fd395c4587f508f9a9b67f0ad7aaf413. Sep 9 23:43:31.020848 systemd[1]: Started cri-containerd-a6eace2d8a1ed533067c60e96652161f01f0f8a087977f3e81e514164f3e0c31.scope - libcontainer container a6eace2d8a1ed533067c60e96652161f01f0f8a087977f3e81e514164f3e0c31. 
Sep 9 23:43:31.055982 containerd[1876]: time="2025-09-09T23:43:31.055943455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fqxl4,Uid:141d30d6-3857-46eb-a9c9-09822c0d2c2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f882b4578a65e4b4cb4322a9bcf6761fd395c4587f508f9a9b67f0ad7aaf413\"" Sep 9 23:43:31.060545 containerd[1876]: time="2025-09-09T23:43:31.060503556Z" level=info msg="CreateContainer within sandbox \"0f882b4578a65e4b4cb4322a9bcf6761fd395c4587f508f9a9b67f0ad7aaf413\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 23:43:31.063841 containerd[1876]: time="2025-09-09T23:43:31.063807248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tnpx6,Uid:e828d1cc-d7c8-4b37-b678-eace632928ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6eace2d8a1ed533067c60e96652161f01f0f8a087977f3e81e514164f3e0c31\"" Sep 9 23:43:31.066855 containerd[1876]: time="2025-09-09T23:43:31.066779689Z" level=info msg="CreateContainer within sandbox \"a6eace2d8a1ed533067c60e96652161f01f0f8a087977f3e81e514164f3e0c31\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 23:43:31.096972 containerd[1876]: time="2025-09-09T23:43:31.096945669Z" level=info msg="Container 7e2c6de298867d0a4f5cbc973cf7876629b29eed5a34be257607556087b8aa30: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:43:31.100478 containerd[1876]: time="2025-09-09T23:43:31.100157590Z" level=info msg="Container 87c530cdeecb946de4e49203409a3e4e4cd6decb55f67201c6fad5ce4503c1f0: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:43:31.127385 containerd[1876]: time="2025-09-09T23:43:31.127354792Z" level=info msg="CreateContainer within sandbox \"0f882b4578a65e4b4cb4322a9bcf6761fd395c4587f508f9a9b67f0ad7aaf413\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e2c6de298867d0a4f5cbc973cf7876629b29eed5a34be257607556087b8aa30\"" Sep 9 23:43:31.127857 containerd[1876]: time="2025-09-09T23:43:31.127838816Z" 
level=info msg="StartContainer for \"7e2c6de298867d0a4f5cbc973cf7876629b29eed5a34be257607556087b8aa30\"" Sep 9 23:43:31.128798 containerd[1876]: time="2025-09-09T23:43:31.128778535Z" level=info msg="connecting to shim 7e2c6de298867d0a4f5cbc973cf7876629b29eed5a34be257607556087b8aa30" address="unix:///run/containerd/s/7e2fadf0328d0a086396929d72bada08896445bd7c25e1636b7d2d40f0e0b8ca" protocol=ttrpc version=3 Sep 9 23:43:31.144015 systemd[1]: Started cri-containerd-7e2c6de298867d0a4f5cbc973cf7876629b29eed5a34be257607556087b8aa30.scope - libcontainer container 7e2c6de298867d0a4f5cbc973cf7876629b29eed5a34be257607556087b8aa30. Sep 9 23:43:31.156112 containerd[1876]: time="2025-09-09T23:43:31.156077213Z" level=info msg="CreateContainer within sandbox \"a6eace2d8a1ed533067c60e96652161f01f0f8a087977f3e81e514164f3e0c31\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"87c530cdeecb946de4e49203409a3e4e4cd6decb55f67201c6fad5ce4503c1f0\"" Sep 9 23:43:31.160210 containerd[1876]: time="2025-09-09T23:43:31.160013213Z" level=info msg="StartContainer for \"87c530cdeecb946de4e49203409a3e4e4cd6decb55f67201c6fad5ce4503c1f0\"" Sep 9 23:43:31.162663 containerd[1876]: time="2025-09-09T23:43:31.162633067Z" level=info msg="connecting to shim 87c530cdeecb946de4e49203409a3e4e4cd6decb55f67201c6fad5ce4503c1f0" address="unix:///run/containerd/s/2aa28da88ee9c48d06ff1ff277f79b3c662dfdaf3e5e96c4bf9eac2580a3e9b3" protocol=ttrpc version=3 Sep 9 23:43:31.181474 containerd[1876]: time="2025-09-09T23:43:31.181418538Z" level=info msg="StartContainer for \"7e2c6de298867d0a4f5cbc973cf7876629b29eed5a34be257607556087b8aa30\" returns successfully" Sep 9 23:43:31.183053 systemd[1]: Started cri-containerd-87c530cdeecb946de4e49203409a3e4e4cd6decb55f67201c6fad5ce4503c1f0.scope - libcontainer container 87c530cdeecb946de4e49203409a3e4e4cd6decb55f67201c6fad5ce4503c1f0. 
Sep 9 23:43:31.219181 containerd[1876]: time="2025-09-09T23:43:31.219112796Z" level=info msg="StartContainer for \"87c530cdeecb946de4e49203409a3e4e4cd6decb55f67201c6fad5ce4503c1f0\" returns successfully" Sep 9 23:43:31.776117 kubelet[3385]: I0909 23:43:31.776018 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tnpx6" podStartSLOduration=25.776001945 podStartE2EDuration="25.776001945s" podCreationTimestamp="2025-09-09 23:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:43:31.774413221 +0000 UTC m=+32.200787495" watchObservedRunningTime="2025-09-09 23:43:31.776001945 +0000 UTC m=+32.202376211" Sep 9 23:43:31.807881 kubelet[3385]: I0909 23:43:31.807828 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fqxl4" podStartSLOduration=25.80781273 podStartE2EDuration="25.80781273s" podCreationTimestamp="2025-09-09 23:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:43:31.806846627 +0000 UTC m=+32.233220901" watchObservedRunningTime="2025-09-09 23:43:31.80781273 +0000 UTC m=+32.234186996" Sep 9 23:43:31.974452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1219487555.mount: Deactivated successfully. Sep 9 23:44:34.424646 systemd[1]: Started sshd@7-10.200.20.4:22-10.200.16.10:36552.service - OpenSSH per-connection server daemon (10.200.16.10:36552). Sep 9 23:44:34.914534 sshd[4706]: Accepted publickey for core from 10.200.16.10 port 36552 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:44:34.915645 sshd-session[4706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:44:34.919190 systemd-logind[1849]: New session 10 of user core. 
Sep 9 23:44:34.923035 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 23:44:35.329279 sshd[4709]: Connection closed by 10.200.16.10 port 36552 Sep 9 23:44:35.329768 sshd-session[4706]: pam_unix(sshd:session): session closed for user core Sep 9 23:44:35.332675 systemd[1]: sshd@7-10.200.20.4:22-10.200.16.10:36552.service: Deactivated successfully. Sep 9 23:44:35.334297 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 23:44:35.335114 systemd-logind[1849]: Session 10 logged out. Waiting for processes to exit. Sep 9 23:44:35.336268 systemd-logind[1849]: Removed session 10. Sep 9 23:44:40.398345 systemd[1]: Started sshd@8-10.200.20.4:22-10.200.16.10:60932.service - OpenSSH per-connection server daemon (10.200.16.10:60932). Sep 9 23:44:40.819573 sshd[4724]: Accepted publickey for core from 10.200.16.10 port 60932 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:44:40.820694 sshd-session[4724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:44:40.824708 systemd-logind[1849]: New session 11 of user core. Sep 9 23:44:40.830015 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 23:44:41.178419 sshd[4727]: Connection closed by 10.200.16.10 port 60932 Sep 9 23:44:41.177862 sshd-session[4724]: pam_unix(sshd:session): session closed for user core Sep 9 23:44:41.180952 systemd-logind[1849]: Session 11 logged out. Waiting for processes to exit. Sep 9 23:44:41.181399 systemd[1]: sshd@8-10.200.20.4:22-10.200.16.10:60932.service: Deactivated successfully. Sep 9 23:44:41.182733 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 23:44:41.186061 systemd-logind[1849]: Removed session 11. Sep 9 23:44:46.251463 systemd[1]: Started sshd@9-10.200.20.4:22-10.200.16.10:60934.service - OpenSSH per-connection server daemon (10.200.16.10:60934). 
Sep 9 23:44:46.672282 sshd[4740]: Accepted publickey for core from 10.200.16.10 port 60934 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:44:46.673435 sshd-session[4740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:44:46.677306 systemd-logind[1849]: New session 12 of user core. Sep 9 23:44:46.685015 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 23:44:47.031975 sshd[4743]: Connection closed by 10.200.16.10 port 60934 Sep 9 23:44:47.032601 sshd-session[4740]: pam_unix(sshd:session): session closed for user core Sep 9 23:44:47.035582 systemd[1]: sshd@9-10.200.20.4:22-10.200.16.10:60934.service: Deactivated successfully. Sep 9 23:44:47.037541 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 23:44:47.038315 systemd-logind[1849]: Session 12 logged out. Waiting for processes to exit. Sep 9 23:44:47.039472 systemd-logind[1849]: Removed session 12. Sep 9 23:44:52.113492 systemd[1]: Started sshd@10-10.200.20.4:22-10.200.16.10:46794.service - OpenSSH per-connection server daemon (10.200.16.10:46794). Sep 9 23:44:52.567850 sshd[4755]: Accepted publickey for core from 10.200.16.10 port 46794 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:44:52.568957 sshd-session[4755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:44:52.572665 systemd-logind[1849]: New session 13 of user core. Sep 9 23:44:52.585028 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 23:44:52.928852 sshd[4758]: Connection closed by 10.200.16.10 port 46794 Sep 9 23:44:52.928309 sshd-session[4755]: pam_unix(sshd:session): session closed for user core Sep 9 23:44:52.932127 systemd-logind[1849]: Session 13 logged out. Waiting for processes to exit. Sep 9 23:44:52.932677 systemd[1]: sshd@10-10.200.20.4:22-10.200.16.10:46794.service: Deactivated successfully. Sep 9 23:44:52.935427 systemd[1]: session-13.scope: Deactivated successfully. 
Sep 9 23:44:52.937091 systemd-logind[1849]: Removed session 13.
Sep 9 23:44:58.005071 systemd[1]: Started sshd@11-10.200.20.4:22-10.200.16.10:46810.service - OpenSSH per-connection server daemon (10.200.16.10:46810).
Sep 9 23:44:58.432722 sshd[4771]: Accepted publickey for core from 10.200.16.10 port 46810 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:44:58.433172 sshd-session[4771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:44:58.436628 systemd-logind[1849]: New session 14 of user core.
Sep 9 23:44:58.443017 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 23:44:58.788686 sshd[4774]: Connection closed by 10.200.16.10 port 46810
Sep 9 23:44:58.788947 sshd-session[4771]: pam_unix(sshd:session): session closed for user core
Sep 9 23:44:58.792639 systemd[1]: sshd@11-10.200.20.4:22-10.200.16.10:46810.service: Deactivated successfully.
Sep 9 23:44:58.795157 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 23:44:58.795834 systemd-logind[1849]: Session 14 logged out. Waiting for processes to exit.
Sep 9 23:44:58.797022 systemd-logind[1849]: Removed session 14.
Sep 9 23:44:58.897486 systemd[1]: Started sshd@12-10.200.20.4:22-10.200.16.10:46816.service - OpenSSH per-connection server daemon (10.200.16.10:46816).
Sep 9 23:44:59.389736 sshd[4787]: Accepted publickey for core from 10.200.16.10 port 46816 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:44:59.390489 sshd-session[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:44:59.393969 systemd-logind[1849]: New session 15 of user core.
Sep 9 23:44:59.400022 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 23:44:59.826940 sshd[4790]: Connection closed by 10.200.16.10 port 46816
Sep 9 23:44:59.826990 sshd-session[4787]: pam_unix(sshd:session): session closed for user core
Sep 9 23:44:59.830491 systemd-logind[1849]: Session 15 logged out. Waiting for processes to exit.
Sep 9 23:44:59.830630 systemd[1]: sshd@12-10.200.20.4:22-10.200.16.10:46816.service: Deactivated successfully.
Sep 9 23:44:59.833720 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 23:44:59.836135 systemd-logind[1849]: Removed session 15.
Sep 9 23:44:59.900507 systemd[1]: Started sshd@13-10.200.20.4:22-10.200.16.10:36066.service - OpenSSH per-connection server daemon (10.200.16.10:36066).
Sep 9 23:45:00.319167 sshd[4801]: Accepted publickey for core from 10.200.16.10 port 36066 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:45:00.320223 sshd-session[4801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:00.323927 systemd-logind[1849]: New session 16 of user core.
Sep 9 23:45:00.330015 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 23:45:00.673415 sshd[4804]: Connection closed by 10.200.16.10 port 36066
Sep 9 23:45:00.674079 sshd-session[4801]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:00.677191 systemd[1]: sshd@13-10.200.20.4:22-10.200.16.10:36066.service: Deactivated successfully.
Sep 9 23:45:00.679973 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 23:45:00.681096 systemd-logind[1849]: Session 16 logged out. Waiting for processes to exit.
Sep 9 23:45:00.682691 systemd-logind[1849]: Removed session 16.
Sep 9 23:45:05.770121 systemd[1]: Started sshd@14-10.200.20.4:22-10.200.16.10:36076.service - OpenSSH per-connection server daemon (10.200.16.10:36076).
Sep 9 23:45:06.272175 sshd[4815]: Accepted publickey for core from 10.200.16.10 port 36076 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:45:06.273260 sshd-session[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:06.276881 systemd-logind[1849]: New session 17 of user core.
Sep 9 23:45:06.284203 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 23:45:06.663102 sshd[4818]: Connection closed by 10.200.16.10 port 36076
Sep 9 23:45:06.663654 sshd-session[4815]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:06.666890 systemd-logind[1849]: Session 17 logged out. Waiting for processes to exit.
Sep 9 23:45:06.667072 systemd[1]: sshd@14-10.200.20.4:22-10.200.16.10:36076.service: Deactivated successfully.
Sep 9 23:45:06.668400 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 23:45:06.671425 systemd-logind[1849]: Removed session 17.
Sep 9 23:45:11.742505 systemd[1]: Started sshd@15-10.200.20.4:22-10.200.16.10:55146.service - OpenSSH per-connection server daemon (10.200.16.10:55146).
Sep 9 23:45:12.198943 sshd[4831]: Accepted publickey for core from 10.200.16.10 port 55146 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:45:12.200535 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:12.204799 systemd-logind[1849]: New session 18 of user core.
Sep 9 23:45:12.209999 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 23:45:12.563333 sshd[4834]: Connection closed by 10.200.16.10 port 55146
Sep 9 23:45:12.564029 sshd-session[4831]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:12.567254 systemd-logind[1849]: Session 18 logged out. Waiting for processes to exit.
Sep 9 23:45:12.567411 systemd[1]: sshd@15-10.200.20.4:22-10.200.16.10:55146.service: Deactivated successfully.
Sep 9 23:45:12.570186 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 23:45:12.571686 systemd-logind[1849]: Removed session 18.
Sep 9 23:45:12.653117 systemd[1]: Started sshd@16-10.200.20.4:22-10.200.16.10:55156.service - OpenSSH per-connection server daemon (10.200.16.10:55156).
Sep 9 23:45:13.115179 sshd[4845]: Accepted publickey for core from 10.200.16.10 port 55156 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:45:13.116236 sshd-session[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:13.120033 systemd-logind[1849]: New session 19 of user core.
Sep 9 23:45:13.125011 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 23:45:13.551699 sshd[4848]: Connection closed by 10.200.16.10 port 55156
Sep 9 23:45:13.552224 sshd-session[4845]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:13.555523 systemd[1]: sshd@16-10.200.20.4:22-10.200.16.10:55156.service: Deactivated successfully.
Sep 9 23:45:13.555815 systemd-logind[1849]: Session 19 logged out. Waiting for processes to exit.
Sep 9 23:45:13.558497 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 23:45:13.560672 systemd-logind[1849]: Removed session 19.
Sep 9 23:45:13.626140 systemd[1]: Started sshd@17-10.200.20.4:22-10.200.16.10:55168.service - OpenSSH per-connection server daemon (10.200.16.10:55168).
Sep 9 23:45:14.046293 sshd[4857]: Accepted publickey for core from 10.200.16.10 port 55168 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:45:14.047392 sshd-session[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:14.051011 systemd-logind[1849]: New session 20 of user core.
Sep 9 23:45:14.058017 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 23:45:14.840553 sshd[4860]: Connection closed by 10.200.16.10 port 55168
Sep 9 23:45:14.841148 sshd-session[4857]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:14.844151 systemd-logind[1849]: Session 20 logged out. Waiting for processes to exit.
Sep 9 23:45:14.845444 systemd[1]: sshd@17-10.200.20.4:22-10.200.16.10:55168.service: Deactivated successfully.
Sep 9 23:45:14.848489 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 23:45:14.850447 systemd-logind[1849]: Removed session 20.
Sep 9 23:45:14.927593 systemd[1]: Started sshd@18-10.200.20.4:22-10.200.16.10:55172.service - OpenSSH per-connection server daemon (10.200.16.10:55172).
Sep 9 23:45:15.381334 sshd[4877]: Accepted publickey for core from 10.200.16.10 port 55172 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:45:15.384222 sshd-session[4877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:15.388033 systemd-logind[1849]: New session 21 of user core.
Sep 9 23:45:15.392035 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 23:45:15.854600 sshd[4880]: Connection closed by 10.200.16.10 port 55172
Sep 9 23:45:15.855188 sshd-session[4877]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:15.858692 systemd-logind[1849]: Session 21 logged out. Waiting for processes to exit.
Sep 9 23:45:15.859241 systemd[1]: sshd@18-10.200.20.4:22-10.200.16.10:55172.service: Deactivated successfully.
Sep 9 23:45:15.861790 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 23:45:15.863637 systemd-logind[1849]: Removed session 21.
Sep 9 23:45:15.932431 systemd[1]: Started sshd@19-10.200.20.4:22-10.200.16.10:55186.service - OpenSSH per-connection server daemon (10.200.16.10:55186).
Sep 9 23:45:16.390982 sshd[4890]: Accepted publickey for core from 10.200.16.10 port 55186 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:45:16.392023 sshd-session[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:16.395537 systemd-logind[1849]: New session 22 of user core.
Sep 9 23:45:16.403014 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 23:45:16.750698 sshd[4893]: Connection closed by 10.200.16.10 port 55186
Sep 9 23:45:16.750716 sshd-session[4890]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:16.753674 systemd[1]: sshd@19-10.200.20.4:22-10.200.16.10:55186.service: Deactivated successfully.
Sep 9 23:45:16.755248 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 23:45:16.756957 systemd-logind[1849]: Session 22 logged out. Waiting for processes to exit.
Sep 9 23:45:16.758381 systemd-logind[1849]: Removed session 22.
Sep 9 23:45:21.827177 systemd[1]: Started sshd@20-10.200.20.4:22-10.200.16.10:35864.service - OpenSSH per-connection server daemon (10.200.16.10:35864).
Sep 9 23:45:22.250285 sshd[4906]: Accepted publickey for core from 10.200.16.10 port 35864 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:45:22.251445 sshd-session[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:22.255314 systemd-logind[1849]: New session 23 of user core.
Sep 9 23:45:22.268011 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 23:45:22.605090 sshd[4909]: Connection closed by 10.200.16.10 port 35864
Sep 9 23:45:22.605638 sshd-session[4906]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:22.608926 systemd[1]: sshd@20-10.200.20.4:22-10.200.16.10:35864.service: Deactivated successfully.
Sep 9 23:45:22.609063 systemd-logind[1849]: Session 23 logged out. Waiting for processes to exit.
Sep 9 23:45:22.611015 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 23:45:22.614065 systemd-logind[1849]: Removed session 23.
Sep 9 23:45:27.687800 systemd[1]: Started sshd@21-10.200.20.4:22-10.200.16.10:35874.service - OpenSSH per-connection server daemon (10.200.16.10:35874).
Sep 9 23:45:28.102354 sshd[4921]: Accepted publickey for core from 10.200.16.10 port 35874 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:45:28.103444 sshd-session[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:28.107201 systemd-logind[1849]: New session 24 of user core.
Sep 9 23:45:28.114010 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 9 23:45:28.462423 sshd[4924]: Connection closed by 10.200.16.10 port 35874
Sep 9 23:45:28.463082 sshd-session[4921]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:28.466267 systemd[1]: sshd@21-10.200.20.4:22-10.200.16.10:35874.service: Deactivated successfully.
Sep 9 23:45:28.467797 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 23:45:28.468590 systemd-logind[1849]: Session 24 logged out. Waiting for processes to exit.
Sep 9 23:45:28.470220 systemd-logind[1849]: Removed session 24.
Sep 9 23:45:33.544545 systemd[1]: Started sshd@22-10.200.20.4:22-10.200.16.10:36480.service - OpenSSH per-connection server daemon (10.200.16.10:36480).
Sep 9 23:45:33.998919 sshd[4935]: Accepted publickey for core from 10.200.16.10 port 36480 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:45:34.000123 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:34.004139 systemd-logind[1849]: New session 25 of user core.
Sep 9 23:45:34.009023 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 9 23:45:34.359885 sshd[4938]: Connection closed by 10.200.16.10 port 36480
Sep 9 23:45:34.360462 sshd-session[4935]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:34.363582 systemd[1]: sshd@22-10.200.20.4:22-10.200.16.10:36480.service: Deactivated successfully.
Sep 9 23:45:34.365653 systemd[1]: session-25.scope: Deactivated successfully.
Sep 9 23:45:34.366398 systemd-logind[1849]: Session 25 logged out. Waiting for processes to exit.
Sep 9 23:45:34.367861 systemd-logind[1849]: Removed session 25.
Sep 9 23:45:34.435436 systemd[1]: Started sshd@23-10.200.20.4:22-10.200.16.10:36488.service - OpenSSH per-connection server daemon (10.200.16.10:36488).
Sep 9 23:45:34.858640 sshd[4949]: Accepted publickey for core from 10.200.16.10 port 36488 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:45:34.859723 sshd-session[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:34.863395 systemd-logind[1849]: New session 26 of user core.
Sep 9 23:45:34.879021 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 9 23:45:36.388522 containerd[1876]: time="2025-09-09T23:45:36.388397977Z" level=info msg="StopContainer for \"04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e\" with timeout 30 (s)"
Sep 9 23:45:36.389908 containerd[1876]: time="2025-09-09T23:45:36.389854063Z" level=info msg="Stop container \"04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e\" with signal terminated"
Sep 9 23:45:36.402110 systemd[1]: cri-containerd-04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e.scope: Deactivated successfully.
Sep 9 23:45:36.405553 containerd[1876]: time="2025-09-09T23:45:36.405515453Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e\" id:\"04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e\" pid:4128 exited_at:{seconds:1757461536 nanos:404606592}"
Sep 9 23:45:36.405709 containerd[1876]: time="2025-09-09T23:45:36.405652705Z" level=info msg="received exit event container_id:\"04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e\" id:\"04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e\" pid:4128 exited_at:{seconds:1757461536 nanos:404606592}"
Sep 9 23:45:36.414228 containerd[1876]: time="2025-09-09T23:45:36.414197835Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 23:45:36.419317 containerd[1876]: time="2025-09-09T23:45:36.419288470Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38\" id:\"fae8e57e850228d8e179988763f0fa46362b182f2c873a8d32637726f61a9635\" pid:4975 exited_at:{seconds:1757461536 nanos:419041078}"
Sep 9 23:45:36.421117 containerd[1876]: time="2025-09-09T23:45:36.420891857Z" level=info msg="StopContainer for \"8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38\" with timeout 2 (s)"
Sep 9 23:45:36.421392 containerd[1876]: time="2025-09-09T23:45:36.421361552Z" level=info msg="Stop container \"8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38\" with signal terminated"
Sep 9 23:45:36.428733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e-rootfs.mount: Deactivated successfully.
Sep 9 23:45:36.431665 systemd-networkd[1671]: lxc_health: Link DOWN
Sep 9 23:45:36.431669 systemd-networkd[1671]: lxc_health: Lost carrier
Sep 9 23:45:36.444642 systemd[1]: cri-containerd-8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38.scope: Deactivated successfully.
Sep 9 23:45:36.446952 systemd[1]: cri-containerd-8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38.scope: Consumed 4.381s CPU time, 123.8M memory peak, 128K read from disk, 12.9M written to disk.
Sep 9 23:45:36.447469 containerd[1876]: time="2025-09-09T23:45:36.447444619Z" level=info msg="received exit event container_id:\"8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38\" id:\"8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38\" pid:3980 exited_at:{seconds:1757461536 nanos:445418394}"
Sep 9 23:45:36.447762 containerd[1876]: time="2025-09-09T23:45:36.447618681Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38\" id:\"8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38\" pid:3980 exited_at:{seconds:1757461536 nanos:445418394}"
Sep 9 23:45:36.463761 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38-rootfs.mount: Deactivated successfully.
Sep 9 23:45:36.528607 containerd[1876]: time="2025-09-09T23:45:36.528568049Z" level=info msg="StopContainer for \"04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e\" returns successfully"
Sep 9 23:45:36.529243 containerd[1876]: time="2025-09-09T23:45:36.529149979Z" level=info msg="StopPodSandbox for \"c5545a2e34b802a5c77f1ab070524499a02c42a7d4eccf4e20dd7ee2789c190f\""
Sep 9 23:45:36.529299 containerd[1876]: time="2025-09-09T23:45:36.529284168Z" level=info msg="Container to stop \"04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:45:36.532966 containerd[1876]: time="2025-09-09T23:45:36.532937028Z" level=info msg="StopContainer for \"8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38\" returns successfully"
Sep 9 23:45:36.535320 containerd[1876]: time="2025-09-09T23:45:36.535137587Z" level=info msg="StopPodSandbox for \"558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8\""
Sep 9 23:45:36.535320 containerd[1876]: time="2025-09-09T23:45:36.535179940Z" level=info msg="Container to stop \"1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:45:36.535320 containerd[1876]: time="2025-09-09T23:45:36.535187788Z" level=info msg="Container to stop \"92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:45:36.535320 containerd[1876]: time="2025-09-09T23:45:36.535192901Z" level=info msg="Container to stop \"8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:45:36.535320 containerd[1876]: time="2025-09-09T23:45:36.535198053Z" level=info msg="Container to stop \"298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:45:36.535320 containerd[1876]: time="2025-09-09T23:45:36.535205117Z" level=info msg="Container to stop \"2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:45:36.535969 systemd[1]: cri-containerd-c5545a2e34b802a5c77f1ab070524499a02c42a7d4eccf4e20dd7ee2789c190f.scope: Deactivated successfully.
Sep 9 23:45:36.542454 containerd[1876]: time="2025-09-09T23:45:36.542321673Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c5545a2e34b802a5c77f1ab070524499a02c42a7d4eccf4e20dd7ee2789c190f\" id:\"c5545a2e34b802a5c77f1ab070524499a02c42a7d4eccf4e20dd7ee2789c190f\" pid:3652 exit_status:137 exited_at:{seconds:1757461536 nanos:542125443}"
Sep 9 23:45:36.545306 systemd[1]: cri-containerd-558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8.scope: Deactivated successfully.
Sep 9 23:45:36.564981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8-rootfs.mount: Deactivated successfully.
Sep 9 23:45:36.569794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5545a2e34b802a5c77f1ab070524499a02c42a7d4eccf4e20dd7ee2789c190f-rootfs.mount: Deactivated successfully.
Sep 9 23:45:36.588788 containerd[1876]: time="2025-09-09T23:45:36.588733687Z" level=info msg="shim disconnected" id=558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8 namespace=k8s.io
Sep 9 23:45:36.588953 containerd[1876]: time="2025-09-09T23:45:36.588758616Z" level=warning msg="cleaning up after shim disconnected" id=558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8 namespace=k8s.io
Sep 9 23:45:36.588953 containerd[1876]: time="2025-09-09T23:45:36.588883924Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 23:45:36.590046 containerd[1876]: time="2025-09-09T23:45:36.590024344Z" level=info msg="shim disconnected" id=c5545a2e34b802a5c77f1ab070524499a02c42a7d4eccf4e20dd7ee2789c190f namespace=k8s.io
Sep 9 23:45:36.590173 containerd[1876]: time="2025-09-09T23:45:36.590146844Z" level=warning msg="cleaning up after shim disconnected" id=c5545a2e34b802a5c77f1ab070524499a02c42a7d4eccf4e20dd7ee2789c190f namespace=k8s.io
Sep 9 23:45:36.590296 containerd[1876]: time="2025-09-09T23:45:36.590213182Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 23:45:36.599838 containerd[1876]: time="2025-09-09T23:45:36.599800673Z" level=info msg="received exit event sandbox_id:\"558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8\" exit_status:137 exited_at:{seconds:1757461536 nanos:549469686}"
Sep 9 23:45:36.601306 containerd[1876]: time="2025-09-09T23:45:36.601148324Z" level=info msg="received exit event sandbox_id:\"c5545a2e34b802a5c77f1ab070524499a02c42a7d4eccf4e20dd7ee2789c190f\" exit_status:137 exited_at:{seconds:1757461536 nanos:542125443}"
Sep 9 23:45:36.602083 containerd[1876]: time="2025-09-09T23:45:36.602057321Z" level=info msg="TearDown network for sandbox \"558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8\" successfully"
Sep 9 23:45:36.602344 containerd[1876]: time="2025-09-09T23:45:36.602163269Z" level=info msg="StopPodSandbox for \"558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8\" returns successfully"
Sep 9 23:45:36.602503 containerd[1876]: time="2025-09-09T23:45:36.602131276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8\" id:\"558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8\" pid:3531 exit_status:137 exited_at:{seconds:1757461536 nanos:549469686}"
Sep 9 23:45:36.603260 containerd[1876]: time="2025-09-09T23:45:36.603117907Z" level=info msg="TearDown network for sandbox \"c5545a2e34b802a5c77f1ab070524499a02c42a7d4eccf4e20dd7ee2789c190f\" successfully"
Sep 9 23:45:36.603260 containerd[1876]: time="2025-09-09T23:45:36.603140252Z" level=info msg="StopPodSandbox for \"c5545a2e34b802a5c77f1ab070524499a02c42a7d4eccf4e20dd7ee2789c190f\" returns successfully"
Sep 9 23:45:36.603374 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-558f856e4942751cc4ce48ff2526366477217adc2dacbbc5532eed97474920a8-shm.mount: Deactivated successfully.
Sep 9 23:45:36.727688 kubelet[3385]: I0909 23:45:36.727550 3385 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-xtables-lock\") pod \"97ef9171-e0e1-485e-a99c-ae80f46655f4\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") "
Sep 9 23:45:36.727688 kubelet[3385]: I0909 23:45:36.727589 3385 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-host-proc-sys-kernel\") pod \"97ef9171-e0e1-485e-a99c-ae80f46655f4\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") "
Sep 9 23:45:36.727688 kubelet[3385]: I0909 23:45:36.727648 3385 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "97ef9171-e0e1-485e-a99c-ae80f46655f4" (UID: "97ef9171-e0e1-485e-a99c-ae80f46655f4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 23:45:36.729066 kubelet[3385]: I0909 23:45:36.727763 3385 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "97ef9171-e0e1-485e-a99c-ae80f46655f4" (UID: "97ef9171-e0e1-485e-a99c-ae80f46655f4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 23:45:36.729066 kubelet[3385]: I0909 23:45:36.728398 3385 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-bpf-maps\") pod \"97ef9171-e0e1-485e-a99c-ae80f46655f4\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") "
Sep 9 23:45:36.729066 kubelet[3385]: I0909 23:45:36.728424 3385 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-cni-path\") pod \"97ef9171-e0e1-485e-a99c-ae80f46655f4\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") "
Sep 9 23:45:36.729066 kubelet[3385]: I0909 23:45:36.728454 3385 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/97ef9171-e0e1-485e-a99c-ae80f46655f4-clustermesh-secrets\") pod \"97ef9171-e0e1-485e-a99c-ae80f46655f4\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") "
Sep 9 23:45:36.729066 kubelet[3385]: I0909 23:45:36.728459 3385 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "97ef9171-e0e1-485e-a99c-ae80f46655f4" (UID: "97ef9171-e0e1-485e-a99c-ae80f46655f4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 23:45:36.729146 kubelet[3385]: I0909 23:45:36.728472 3385 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54wj6\" (UniqueName: \"kubernetes.io/projected/97ef9171-e0e1-485e-a99c-ae80f46655f4-kube-api-access-54wj6\") pod \"97ef9171-e0e1-485e-a99c-ae80f46655f4\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") "
Sep 9 23:45:36.729146 kubelet[3385]: I0909 23:45:36.728475 3385 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-cni-path" (OuterVolumeSpecName: "cni-path") pod "97ef9171-e0e1-485e-a99c-ae80f46655f4" (UID: "97ef9171-e0e1-485e-a99c-ae80f46655f4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 23:45:36.729146 kubelet[3385]: I0909 23:45:36.728488 3385 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e739d1b3-43c5-4ee1-8558-0ff0515ccd26-cilium-config-path\") pod \"e739d1b3-43c5-4ee1-8558-0ff0515ccd26\" (UID: \"e739d1b3-43c5-4ee1-8558-0ff0515ccd26\") "
Sep 9 23:45:36.729146 kubelet[3385]: I0909 23:45:36.728502 3385 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccwqk\" (UniqueName: \"kubernetes.io/projected/e739d1b3-43c5-4ee1-8558-0ff0515ccd26-kube-api-access-ccwqk\") pod \"e739d1b3-43c5-4ee1-8558-0ff0515ccd26\" (UID: \"e739d1b3-43c5-4ee1-8558-0ff0515ccd26\") "
Sep 9 23:45:36.729146 kubelet[3385]: I0909 23:45:36.728511 3385 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-lib-modules\") pod \"97ef9171-e0e1-485e-a99c-ae80f46655f4\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") "
Sep 9 23:45:36.729146 kubelet[3385]: I0909 23:45:36.728520 3385 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-cilium-cgroup\") pod \"97ef9171-e0e1-485e-a99c-ae80f46655f4\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") "
Sep 9 23:45:36.729235 kubelet[3385]: I0909 23:45:36.728538 3385 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-etc-cni-netd\") pod \"97ef9171-e0e1-485e-a99c-ae80f46655f4\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") "
Sep 9 23:45:36.729235 kubelet[3385]: I0909 23:45:36.728547 3385 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-cilium-run\") pod \"97ef9171-e0e1-485e-a99c-ae80f46655f4\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") "
Sep 9 23:45:36.729235 kubelet[3385]: I0909 23:45:36.728557 3385 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-host-proc-sys-net\") pod \"97ef9171-e0e1-485e-a99c-ae80f46655f4\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") "
Sep 9 23:45:36.729235 kubelet[3385]: I0909 23:45:36.728570 3385 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/97ef9171-e0e1-485e-a99c-ae80f46655f4-cilium-config-path\") pod \"97ef9171-e0e1-485e-a99c-ae80f46655f4\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") "
Sep 9 23:45:36.729235 kubelet[3385]: I0909 23:45:36.728580 3385 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/97ef9171-e0e1-485e-a99c-ae80f46655f4-hubble-tls\") pod \"97ef9171-e0e1-485e-a99c-ae80f46655f4\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") "
Sep 9 23:45:36.729235 kubelet[3385]: I0909 23:45:36.728589 3385 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-hostproc\") pod \"97ef9171-e0e1-485e-a99c-ae80f46655f4\" (UID: \"97ef9171-e0e1-485e-a99c-ae80f46655f4\") "
Sep 9 23:45:36.729318 kubelet[3385]: I0909 23:45:36.728813 3385 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-xtables-lock\") on node \"ci-4426.0.0-n-d9fce76d1d\" DevicePath \"\""
Sep 9 23:45:36.729667 kubelet[3385]: I0909 23:45:36.729359 3385 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-host-proc-sys-kernel\") on node \"ci-4426.0.0-n-d9fce76d1d\" DevicePath \"\""
Sep 9 23:45:36.729667 kubelet[3385]: I0909 23:45:36.729390 3385 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-bpf-maps\") on node \"ci-4426.0.0-n-d9fce76d1d\" DevicePath \"\""
Sep 9 23:45:36.729667 kubelet[3385]: I0909 23:45:36.729398 3385 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-cni-path\") on node \"ci-4426.0.0-n-d9fce76d1d\" DevicePath \"\""
Sep 9 23:45:36.729667 kubelet[3385]: I0909 23:45:36.729440 3385 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-hostproc" (OuterVolumeSpecName: "hostproc") pod "97ef9171-e0e1-485e-a99c-ae80f46655f4" (UID: "97ef9171-e0e1-485e-a99c-ae80f46655f4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 23:45:36.731719 kubelet[3385]: I0909 23:45:36.731656 3385 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "97ef9171-e0e1-485e-a99c-ae80f46655f4" (UID: "97ef9171-e0e1-485e-a99c-ae80f46655f4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 23:45:36.731719 kubelet[3385]: I0909 23:45:36.731692 3385 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "97ef9171-e0e1-485e-a99c-ae80f46655f4" (UID: "97ef9171-e0e1-485e-a99c-ae80f46655f4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 23:45:36.731719 kubelet[3385]: I0909 23:45:36.731703 3385 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "97ef9171-e0e1-485e-a99c-ae80f46655f4" (UID: "97ef9171-e0e1-485e-a99c-ae80f46655f4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 23:45:36.731719 kubelet[3385]: I0909 23:45:36.731712 3385 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "97ef9171-e0e1-485e-a99c-ae80f46655f4" (UID: "97ef9171-e0e1-485e-a99c-ae80f46655f4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 23:45:36.732131 kubelet[3385]: I0909 23:45:36.732100 3385 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "97ef9171-e0e1-485e-a99c-ae80f46655f4" (UID: "97ef9171-e0e1-485e-a99c-ae80f46655f4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 23:45:36.733981 kubelet[3385]: I0909 23:45:36.733948 3385 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97ef9171-e0e1-485e-a99c-ae80f46655f4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "97ef9171-e0e1-485e-a99c-ae80f46655f4" (UID: "97ef9171-e0e1-485e-a99c-ae80f46655f4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 9 23:45:36.734094 kubelet[3385]: I0909 23:45:36.734072 3385 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ef9171-e0e1-485e-a99c-ae80f46655f4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "97ef9171-e0e1-485e-a99c-ae80f46655f4" (UID: "97ef9171-e0e1-485e-a99c-ae80f46655f4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 9 23:45:36.734384 kubelet[3385]: I0909 23:45:36.734288 3385 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e739d1b3-43c5-4ee1-8558-0ff0515ccd26-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e739d1b3-43c5-4ee1-8558-0ff0515ccd26" (UID: "e739d1b3-43c5-4ee1-8558-0ff0515ccd26"). InnerVolumeSpecName "cilium-config-path".
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 23:45:36.734864 kubelet[3385]: I0909 23:45:36.734841 3385 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ef9171-e0e1-485e-a99c-ae80f46655f4-kube-api-access-54wj6" (OuterVolumeSpecName: "kube-api-access-54wj6") pod "97ef9171-e0e1-485e-a99c-ae80f46655f4" (UID: "97ef9171-e0e1-485e-a99c-ae80f46655f4"). InnerVolumeSpecName "kube-api-access-54wj6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 23:45:36.735311 kubelet[3385]: I0909 23:45:36.735285 3385 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e739d1b3-43c5-4ee1-8558-0ff0515ccd26-kube-api-access-ccwqk" (OuterVolumeSpecName: "kube-api-access-ccwqk") pod "e739d1b3-43c5-4ee1-8558-0ff0515ccd26" (UID: "e739d1b3-43c5-4ee1-8558-0ff0515ccd26"). InnerVolumeSpecName "kube-api-access-ccwqk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 23:45:36.735359 kubelet[3385]: I0909 23:45:36.735351 3385 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ef9171-e0e1-485e-a99c-ae80f46655f4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "97ef9171-e0e1-485e-a99c-ae80f46655f4" (UID: "97ef9171-e0e1-485e-a99c-ae80f46655f4"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 23:45:36.830603 kubelet[3385]: I0909 23:45:36.830496 3385 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/97ef9171-e0e1-485e-a99c-ae80f46655f4-cilium-config-path\") on node \"ci-4426.0.0-n-d9fce76d1d\" DevicePath \"\"" Sep 9 23:45:36.830603 kubelet[3385]: I0909 23:45:36.830535 3385 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/97ef9171-e0e1-485e-a99c-ae80f46655f4-hubble-tls\") on node \"ci-4426.0.0-n-d9fce76d1d\" DevicePath \"\"" Sep 9 23:45:36.830603 kubelet[3385]: I0909 23:45:36.830546 3385 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-hostproc\") on node \"ci-4426.0.0-n-d9fce76d1d\" DevicePath \"\"" Sep 9 23:45:36.830603 kubelet[3385]: I0909 23:45:36.830553 3385 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/97ef9171-e0e1-485e-a99c-ae80f46655f4-clustermesh-secrets\") on node \"ci-4426.0.0-n-d9fce76d1d\" DevicePath \"\"" Sep 9 23:45:36.830603 kubelet[3385]: I0909 23:45:36.830560 3385 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-54wj6\" (UniqueName: \"kubernetes.io/projected/97ef9171-e0e1-485e-a99c-ae80f46655f4-kube-api-access-54wj6\") on node \"ci-4426.0.0-n-d9fce76d1d\" DevicePath \"\"" Sep 9 23:45:36.830603 kubelet[3385]: I0909 23:45:36.830568 3385 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e739d1b3-43c5-4ee1-8558-0ff0515ccd26-cilium-config-path\") on node \"ci-4426.0.0-n-d9fce76d1d\" DevicePath \"\"" Sep 9 23:45:36.830603 kubelet[3385]: I0909 23:45:36.830573 3385 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ccwqk\" (UniqueName: 
\"kubernetes.io/projected/e739d1b3-43c5-4ee1-8558-0ff0515ccd26-kube-api-access-ccwqk\") on node \"ci-4426.0.0-n-d9fce76d1d\" DevicePath \"\"" Sep 9 23:45:36.830603 kubelet[3385]: I0909 23:45:36.830579 3385 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-lib-modules\") on node \"ci-4426.0.0-n-d9fce76d1d\" DevicePath \"\"" Sep 9 23:45:36.830854 kubelet[3385]: I0909 23:45:36.830585 3385 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-cilium-cgroup\") on node \"ci-4426.0.0-n-d9fce76d1d\" DevicePath \"\"" Sep 9 23:45:36.830959 kubelet[3385]: I0909 23:45:36.830590 3385 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-etc-cni-netd\") on node \"ci-4426.0.0-n-d9fce76d1d\" DevicePath \"\"" Sep 9 23:45:36.830959 kubelet[3385]: I0909 23:45:36.830926 3385 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-cilium-run\") on node \"ci-4426.0.0-n-d9fce76d1d\" DevicePath \"\"" Sep 9 23:45:36.830959 kubelet[3385]: I0909 23:45:36.830934 3385 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/97ef9171-e0e1-485e-a99c-ae80f46655f4-host-proc-sys-net\") on node \"ci-4426.0.0-n-d9fce76d1d\" DevicePath \"\"" Sep 9 23:45:36.972842 kubelet[3385]: I0909 23:45:36.972724 3385 scope.go:117] "RemoveContainer" containerID="04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e" Sep 9 23:45:36.975305 containerd[1876]: time="2025-09-09T23:45:36.975246078Z" level=info msg="RemoveContainer for \"04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e\"" Sep 9 23:45:36.979208 systemd[1]: Removed slice 
kubepods-besteffort-pode739d1b3_43c5_4ee1_8558_0ff0515ccd26.slice - libcontainer container kubepods-besteffort-pode739d1b3_43c5_4ee1_8558_0ff0515ccd26.slice. Sep 9 23:45:36.995015 systemd[1]: Removed slice kubepods-burstable-pod97ef9171_e0e1_485e_a99c_ae80f46655f4.slice - libcontainer container kubepods-burstable-pod97ef9171_e0e1_485e_a99c_ae80f46655f4.slice. Sep 9 23:45:36.995225 systemd[1]: kubepods-burstable-pod97ef9171_e0e1_485e_a99c_ae80f46655f4.slice: Consumed 4.438s CPU time, 124.2M memory peak, 128K read from disk, 12.9M written to disk. Sep 9 23:45:37.002318 containerd[1876]: time="2025-09-09T23:45:37.002290224Z" level=info msg="RemoveContainer for \"04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e\" returns successfully" Sep 9 23:45:37.002824 kubelet[3385]: I0909 23:45:37.002700 3385 scope.go:117] "RemoveContainer" containerID="04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e" Sep 9 23:45:37.003136 containerd[1876]: time="2025-09-09T23:45:37.003059889Z" level=error msg="ContainerStatus for \"04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e\": not found" Sep 9 23:45:37.003367 kubelet[3385]: E0909 23:45:37.003329 3385 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e\": not found" containerID="04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e" Sep 9 23:45:37.003418 kubelet[3385]: I0909 23:45:37.003362 3385 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e"} err="failed to get container status \"04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e\": rpc 
error: code = NotFound desc = an error occurred when try to find container \"04346b9a942c9a8c4d276ff0e544266aab786930c4f6e5f347d9b1cdba80e23e\": not found" Sep 9 23:45:37.003418 kubelet[3385]: I0909 23:45:37.003412 3385 scope.go:117] "RemoveContainer" containerID="8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38" Sep 9 23:45:37.004942 containerd[1876]: time="2025-09-09T23:45:37.004890179Z" level=info msg="RemoveContainer for \"8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38\"" Sep 9 23:45:37.015816 containerd[1876]: time="2025-09-09T23:45:37.015786616Z" level=info msg="RemoveContainer for \"8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38\" returns successfully" Sep 9 23:45:37.016026 kubelet[3385]: I0909 23:45:37.016008 3385 scope.go:117] "RemoveContainer" containerID="2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98" Sep 9 23:45:37.017223 containerd[1876]: time="2025-09-09T23:45:37.017200869Z" level=info msg="RemoveContainer for \"2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98\"" Sep 9 23:45:37.035627 containerd[1876]: time="2025-09-09T23:45:37.035594082Z" level=info msg="RemoveContainer for \"2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98\" returns successfully" Sep 9 23:45:37.035778 kubelet[3385]: I0909 23:45:37.035755 3385 scope.go:117] "RemoveContainer" containerID="92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f" Sep 9 23:45:37.037452 containerd[1876]: time="2025-09-09T23:45:37.037427965Z" level=info msg="RemoveContainer for \"92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f\"" Sep 9 23:45:37.047904 containerd[1876]: time="2025-09-09T23:45:37.047869187Z" level=info msg="RemoveContainer for \"92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f\" returns successfully" Sep 9 23:45:37.048209 kubelet[3385]: I0909 23:45:37.048190 3385 scope.go:117] "RemoveContainer" 
containerID="1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20" Sep 9 23:45:37.049593 containerd[1876]: time="2025-09-09T23:45:37.049563681Z" level=info msg="RemoveContainer for \"1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20\"" Sep 9 23:45:37.060854 containerd[1876]: time="2025-09-09T23:45:37.060817354Z" level=info msg="RemoveContainer for \"1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20\" returns successfully" Sep 9 23:45:37.061068 kubelet[3385]: I0909 23:45:37.061043 3385 scope.go:117] "RemoveContainer" containerID="298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3" Sep 9 23:45:37.062330 containerd[1876]: time="2025-09-09T23:45:37.062309505Z" level=info msg="RemoveContainer for \"298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3\"" Sep 9 23:45:37.073149 containerd[1876]: time="2025-09-09T23:45:37.073110299Z" level=info msg="RemoveContainer for \"298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3\" returns successfully" Sep 9 23:45:37.074807 kubelet[3385]: I0909 23:45:37.074779 3385 scope.go:117] "RemoveContainer" containerID="8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38" Sep 9 23:45:37.078205 containerd[1876]: time="2025-09-09T23:45:37.078169565Z" level=error msg="ContainerStatus for \"8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38\": not found" Sep 9 23:45:37.078336 kubelet[3385]: E0909 23:45:37.078316 3385 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38\": not found" containerID="8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38" Sep 9 23:45:37.078420 kubelet[3385]: I0909 23:45:37.078400 3385 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38"} err="failed to get container status \"8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38\": rpc error: code = NotFound desc = an error occurred when try to find container \"8cf4d00f5ea6183fbebfe9ea686e5f2fd67779ff3297e510209fde09dd0a9a38\": not found" Sep 9 23:45:37.078472 kubelet[3385]: I0909 23:45:37.078462 3385 scope.go:117] "RemoveContainer" containerID="2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98" Sep 9 23:45:37.080106 containerd[1876]: time="2025-09-09T23:45:37.080064154Z" level=error msg="ContainerStatus for \"2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98\": not found" Sep 9 23:45:37.080194 kubelet[3385]: E0909 23:45:37.080172 3385 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98\": not found" containerID="2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98" Sep 9 23:45:37.080227 kubelet[3385]: I0909 23:45:37.080194 3385 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98"} err="failed to get container status \"2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98\": rpc error: code = NotFound desc = an error occurred when try to find container \"2457ca5c842567de337c1154f782331b27359d93d869ac579de030108ebf9e98\": not found" Sep 9 23:45:37.080227 kubelet[3385]: I0909 23:45:37.080206 3385 scope.go:117] "RemoveContainer" 
containerID="92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f" Sep 9 23:45:37.080408 containerd[1876]: time="2025-09-09T23:45:37.080376612Z" level=error msg="ContainerStatus for \"92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f\": not found" Sep 9 23:45:37.080791 kubelet[3385]: E0909 23:45:37.080774 3385 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f\": not found" containerID="92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f" Sep 9 23:45:37.080913 kubelet[3385]: I0909 23:45:37.080887 3385 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f"} err="failed to get container status \"92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f\": rpc error: code = NotFound desc = an error occurred when try to find container \"92b7a9f6d37364bea5d7e21e9ba31228bb2e0903b0c49763ce00e49707799c3f\": not found" Sep 9 23:45:37.081044 kubelet[3385]: I0909 23:45:37.080969 3385 scope.go:117] "RemoveContainer" containerID="1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20" Sep 9 23:45:37.081182 containerd[1876]: time="2025-09-09T23:45:37.081132580Z" level=error msg="ContainerStatus for \"1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20\": not found" Sep 9 23:45:37.081286 kubelet[3385]: E0909 23:45:37.081270 3385 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20\": not found" containerID="1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20" Sep 9 23:45:37.081354 kubelet[3385]: I0909 23:45:37.081339 3385 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20"} err="failed to get container status \"1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20\": rpc error: code = NotFound desc = an error occurred when try to find container \"1c847f9f7c2be92124a4c820a26f5b1501c52cd8c17bbaa85d9d2b82e2db4e20\": not found" Sep 9 23:45:37.081465 kubelet[3385]: I0909 23:45:37.081397 3385 scope.go:117] "RemoveContainer" containerID="298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3" Sep 9 23:45:37.081577 containerd[1876]: time="2025-09-09T23:45:37.081532121Z" level=error msg="ContainerStatus for \"298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3\": not found" Sep 9 23:45:37.081660 kubelet[3385]: E0909 23:45:37.081637 3385 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3\": not found" containerID="298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3" Sep 9 23:45:37.081694 kubelet[3385]: I0909 23:45:37.081663 3385 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3"} err="failed to get container status \"298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"298b23303eb09d8fb9873be40085bcc71237ba2670ccfa805c15df4ad8ca88c3\": not found" Sep 9 23:45:37.428167 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c5545a2e34b802a5c77f1ab070524499a02c42a7d4eccf4e20dd7ee2789c190f-shm.mount: Deactivated successfully. Sep 9 23:45:37.428259 systemd[1]: var-lib-kubelet-pods-e739d1b3\x2d43c5\x2d4ee1\x2d8558\x2d0ff0515ccd26-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dccwqk.mount: Deactivated successfully. Sep 9 23:45:37.428303 systemd[1]: var-lib-kubelet-pods-97ef9171\x2de0e1\x2d485e\x2da99c\x2dae80f46655f4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d54wj6.mount: Deactivated successfully. Sep 9 23:45:37.428336 systemd[1]: var-lib-kubelet-pods-97ef9171\x2de0e1\x2d485e\x2da99c\x2dae80f46655f4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 23:45:37.428372 systemd[1]: var-lib-kubelet-pods-97ef9171\x2de0e1\x2d485e\x2da99c\x2dae80f46655f4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 23:45:37.652341 kubelet[3385]: I0909 23:45:37.652297 3385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97ef9171-e0e1-485e-a99c-ae80f46655f4" path="/var/lib/kubelet/pods/97ef9171-e0e1-485e-a99c-ae80f46655f4/volumes" Sep 9 23:45:37.652705 kubelet[3385]: I0909 23:45:37.652686 3385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e739d1b3-43c5-4ee1-8558-0ff0515ccd26" path="/var/lib/kubelet/pods/e739d1b3-43c5-4ee1-8558-0ff0515ccd26/volumes" Sep 9 23:45:38.406465 sshd[4952]: Connection closed by 10.200.16.10 port 36488 Sep 9 23:45:38.406372 sshd-session[4949]: pam_unix(sshd:session): session closed for user core Sep 9 23:45:38.409141 systemd-logind[1849]: Session 26 logged out. Waiting for processes to exit. Sep 9 23:45:38.410423 systemd[1]: sshd@23-10.200.20.4:22-10.200.16.10:36488.service: Deactivated successfully. 
Sep 9 23:45:38.412783 systemd[1]: session-26.scope: Deactivated successfully. Sep 9 23:45:38.414837 systemd-logind[1849]: Removed session 26. Sep 9 23:45:38.485065 systemd[1]: Started sshd@24-10.200.20.4:22-10.200.16.10:36504.service - OpenSSH per-connection server daemon (10.200.16.10:36504). Sep 9 23:45:38.900508 sshd[5106]: Accepted publickey for core from 10.200.16.10 port 36504 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:45:38.901632 sshd-session[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:45:38.905326 systemd-logind[1849]: New session 27 of user core. Sep 9 23:45:38.911021 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 9 23:45:39.736209 kubelet[3385]: E0909 23:45:39.736170 3385 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 23:45:39.917586 kubelet[3385]: I0909 23:45:39.917547 3385 memory_manager.go:355] "RemoveStaleState removing state" podUID="97ef9171-e0e1-485e-a99c-ae80f46655f4" containerName="cilium-agent" Sep 9 23:45:39.917586 kubelet[3385]: I0909 23:45:39.917577 3385 memory_manager.go:355] "RemoveStaleState removing state" podUID="e739d1b3-43c5-4ee1-8558-0ff0515ccd26" containerName="cilium-operator" Sep 9 23:45:39.929292 systemd[1]: Created slice kubepods-burstable-pod417f9e01_b3e7_40a5_9c15_54fadebf5cba.slice - libcontainer container kubepods-burstable-pod417f9e01_b3e7_40a5_9c15_54fadebf5cba.slice. Sep 9 23:45:39.970785 sshd[5109]: Connection closed by 10.200.16.10 port 36504 Sep 9 23:45:39.971299 sshd-session[5106]: pam_unix(sshd:session): session closed for user core Sep 9 23:45:39.975397 systemd-logind[1849]: Session 27 logged out. Waiting for processes to exit. Sep 9 23:45:39.975537 systemd[1]: sshd@24-10.200.20.4:22-10.200.16.10:36504.service: Deactivated successfully. 
Sep 9 23:45:39.977465 systemd[1]: session-27.scope: Deactivated successfully. Sep 9 23:45:39.979537 systemd-logind[1849]: Removed session 27. Sep 9 23:45:40.048629 kubelet[3385]: I0909 23:45:40.048356 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/417f9e01-b3e7-40a5-9c15-54fadebf5cba-cni-path\") pod \"cilium-hb6xt\" (UID: \"417f9e01-b3e7-40a5-9c15-54fadebf5cba\") " pod="kube-system/cilium-hb6xt" Sep 9 23:45:40.048629 kubelet[3385]: I0909 23:45:40.048393 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/417f9e01-b3e7-40a5-9c15-54fadebf5cba-clustermesh-secrets\") pod \"cilium-hb6xt\" (UID: \"417f9e01-b3e7-40a5-9c15-54fadebf5cba\") " pod="kube-system/cilium-hb6xt" Sep 9 23:45:40.048629 kubelet[3385]: I0909 23:45:40.048411 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/417f9e01-b3e7-40a5-9c15-54fadebf5cba-cilium-run\") pod \"cilium-hb6xt\" (UID: \"417f9e01-b3e7-40a5-9c15-54fadebf5cba\") " pod="kube-system/cilium-hb6xt" Sep 9 23:45:40.048629 kubelet[3385]: I0909 23:45:40.048421 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/417f9e01-b3e7-40a5-9c15-54fadebf5cba-cilium-config-path\") pod \"cilium-hb6xt\" (UID: \"417f9e01-b3e7-40a5-9c15-54fadebf5cba\") " pod="kube-system/cilium-hb6xt" Sep 9 23:45:40.048629 kubelet[3385]: I0909 23:45:40.048433 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/417f9e01-b3e7-40a5-9c15-54fadebf5cba-hostproc\") pod \"cilium-hb6xt\" (UID: \"417f9e01-b3e7-40a5-9c15-54fadebf5cba\") " pod="kube-system/cilium-hb6xt" Sep 
9 23:45:40.048629 kubelet[3385]: I0909 23:45:40.048442 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/417f9e01-b3e7-40a5-9c15-54fadebf5cba-xtables-lock\") pod \"cilium-hb6xt\" (UID: \"417f9e01-b3e7-40a5-9c15-54fadebf5cba\") " pod="kube-system/cilium-hb6xt" Sep 9 23:45:40.048786 kubelet[3385]: I0909 23:45:40.048452 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/417f9e01-b3e7-40a5-9c15-54fadebf5cba-hubble-tls\") pod \"cilium-hb6xt\" (UID: \"417f9e01-b3e7-40a5-9c15-54fadebf5cba\") " pod="kube-system/cilium-hb6xt" Sep 9 23:45:40.048786 kubelet[3385]: I0909 23:45:40.048462 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzs2m\" (UniqueName: \"kubernetes.io/projected/417f9e01-b3e7-40a5-9c15-54fadebf5cba-kube-api-access-mzs2m\") pod \"cilium-hb6xt\" (UID: \"417f9e01-b3e7-40a5-9c15-54fadebf5cba\") " pod="kube-system/cilium-hb6xt" Sep 9 23:45:40.048786 kubelet[3385]: I0909 23:45:40.048473 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/417f9e01-b3e7-40a5-9c15-54fadebf5cba-bpf-maps\") pod \"cilium-hb6xt\" (UID: \"417f9e01-b3e7-40a5-9c15-54fadebf5cba\") " pod="kube-system/cilium-hb6xt" Sep 9 23:45:40.048786 kubelet[3385]: I0909 23:45:40.048481 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/417f9e01-b3e7-40a5-9c15-54fadebf5cba-host-proc-sys-kernel\") pod \"cilium-hb6xt\" (UID: \"417f9e01-b3e7-40a5-9c15-54fadebf5cba\") " pod="kube-system/cilium-hb6xt" Sep 9 23:45:40.048786 kubelet[3385]: I0909 23:45:40.048493 3385 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/417f9e01-b3e7-40a5-9c15-54fadebf5cba-host-proc-sys-net\") pod \"cilium-hb6xt\" (UID: \"417f9e01-b3e7-40a5-9c15-54fadebf5cba\") " pod="kube-system/cilium-hb6xt" Sep 9 23:45:40.048786 kubelet[3385]: I0909 23:45:40.048504 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/417f9e01-b3e7-40a5-9c15-54fadebf5cba-lib-modules\") pod \"cilium-hb6xt\" (UID: \"417f9e01-b3e7-40a5-9c15-54fadebf5cba\") " pod="kube-system/cilium-hb6xt" Sep 9 23:45:40.048872 kubelet[3385]: I0909 23:45:40.048513 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/417f9e01-b3e7-40a5-9c15-54fadebf5cba-cilium-ipsec-secrets\") pod \"cilium-hb6xt\" (UID: \"417f9e01-b3e7-40a5-9c15-54fadebf5cba\") " pod="kube-system/cilium-hb6xt" Sep 9 23:45:40.048872 kubelet[3385]: I0909 23:45:40.048544 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/417f9e01-b3e7-40a5-9c15-54fadebf5cba-etc-cni-netd\") pod \"cilium-hb6xt\" (UID: \"417f9e01-b3e7-40a5-9c15-54fadebf5cba\") " pod="kube-system/cilium-hb6xt" Sep 9 23:45:40.048872 kubelet[3385]: I0909 23:45:40.048554 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/417f9e01-b3e7-40a5-9c15-54fadebf5cba-cilium-cgroup\") pod \"cilium-hb6xt\" (UID: \"417f9e01-b3e7-40a5-9c15-54fadebf5cba\") " pod="kube-system/cilium-hb6xt" Sep 9 23:45:40.053815 systemd[1]: Started sshd@25-10.200.20.4:22-10.200.16.10:35676.service - OpenSSH per-connection server daemon (10.200.16.10:35676). 
Sep 9 23:45:40.235146 containerd[1876]: time="2025-09-09T23:45:40.235096858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hb6xt,Uid:417f9e01-b3e7-40a5-9c15-54fadebf5cba,Namespace:kube-system,Attempt:0,}"
Sep 9 23:45:40.275667 containerd[1876]: time="2025-09-09T23:45:40.275624627Z" level=info msg="connecting to shim 78da118772e5c8779624e46ee6a0c12450c9975e12e84ee465a000c16d5ed5b6" address="unix:///run/containerd/s/d10359db95b92ef67f90d151b7068d1f075bf8c9ffc83183c4a25f6286a86bb0" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:45:40.296041 systemd[1]: Started cri-containerd-78da118772e5c8779624e46ee6a0c12450c9975e12e84ee465a000c16d5ed5b6.scope - libcontainer container 78da118772e5c8779624e46ee6a0c12450c9975e12e84ee465a000c16d5ed5b6.
Sep 9 23:45:40.318512 containerd[1876]: time="2025-09-09T23:45:40.318476047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hb6xt,Uid:417f9e01-b3e7-40a5-9c15-54fadebf5cba,Namespace:kube-system,Attempt:0,} returns sandbox id \"78da118772e5c8779624e46ee6a0c12450c9975e12e84ee465a000c16d5ed5b6\""
Sep 9 23:45:40.321102 containerd[1876]: time="2025-09-09T23:45:40.320980231Z" level=info msg="CreateContainer within sandbox \"78da118772e5c8779624e46ee6a0c12450c9975e12e84ee465a000c16d5ed5b6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 9 23:45:40.341538 containerd[1876]: time="2025-09-09T23:45:40.341511001Z" level=info msg="Container afc44a67ebb23ae1712bd134206031276cbd52ae8c4766b4f21906e2c525f035: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:45:40.358166 containerd[1876]: time="2025-09-09T23:45:40.358130941Z" level=info msg="CreateContainer within sandbox \"78da118772e5c8779624e46ee6a0c12450c9975e12e84ee465a000c16d5ed5b6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"afc44a67ebb23ae1712bd134206031276cbd52ae8c4766b4f21906e2c525f035\""
Sep 9 23:45:40.358670 containerd[1876]: time="2025-09-09T23:45:40.358653334Z" level=info msg="StartContainer for \"afc44a67ebb23ae1712bd134206031276cbd52ae8c4766b4f21906e2c525f035\""
Sep 9 23:45:40.360291 containerd[1876]: time="2025-09-09T23:45:40.360229184Z" level=info msg="connecting to shim afc44a67ebb23ae1712bd134206031276cbd52ae8c4766b4f21906e2c525f035" address="unix:///run/containerd/s/d10359db95b92ef67f90d151b7068d1f075bf8c9ffc83183c4a25f6286a86bb0" protocol=ttrpc version=3
Sep 9 23:45:40.378023 systemd[1]: Started cri-containerd-afc44a67ebb23ae1712bd134206031276cbd52ae8c4766b4f21906e2c525f035.scope - libcontainer container afc44a67ebb23ae1712bd134206031276cbd52ae8c4766b4f21906e2c525f035.
Sep 9 23:45:40.404394 systemd[1]: cri-containerd-afc44a67ebb23ae1712bd134206031276cbd52ae8c4766b4f21906e2c525f035.scope: Deactivated successfully.
Sep 9 23:45:40.407105 containerd[1876]: time="2025-09-09T23:45:40.407065276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"afc44a67ebb23ae1712bd134206031276cbd52ae8c4766b4f21906e2c525f035\" id:\"afc44a67ebb23ae1712bd134206031276cbd52ae8c4766b4f21906e2c525f035\" pid:5185 exited_at:{seconds:1757461540 nanos:406791331}"
Sep 9 23:45:40.407828 containerd[1876]: time="2025-09-09T23:45:40.407219553Z" level=info msg="received exit event container_id:\"afc44a67ebb23ae1712bd134206031276cbd52ae8c4766b4f21906e2c525f035\" id:\"afc44a67ebb23ae1712bd134206031276cbd52ae8c4766b4f21906e2c525f035\" pid:5185 exited_at:{seconds:1757461540 nanos:406791331}"
Sep 9 23:45:40.407828 containerd[1876]: time="2025-09-09T23:45:40.407698560Z" level=info msg="StartContainer for \"afc44a67ebb23ae1712bd134206031276cbd52ae8c4766b4f21906e2c525f035\" returns successfully"
Sep 9 23:45:40.508427 sshd[5120]: Accepted publickey for core from 10.200.16.10 port 35676 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:45:40.509580 sshd-session[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:40.513814 systemd-logind[1849]: New session 28 of user core.
Sep 9 23:45:40.520021 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 9 23:45:40.833559 sshd[5219]: Connection closed by 10.200.16.10 port 35676
Sep 9 23:45:40.834092 sshd-session[5120]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:40.837217 systemd[1]: sshd@25-10.200.20.4:22-10.200.16.10:35676.service: Deactivated successfully.
Sep 9 23:45:40.840995 systemd[1]: session-28.scope: Deactivated successfully.
Sep 9 23:45:40.842338 systemd-logind[1849]: Session 28 logged out. Waiting for processes to exit.
Sep 9 23:45:40.843378 systemd-logind[1849]: Removed session 28.
Sep 9 23:45:40.923122 systemd[1]: Started sshd@26-10.200.20.4:22-10.200.16.10:35684.service - OpenSSH per-connection server daemon (10.200.16.10:35684).
Sep 9 23:45:40.997925 containerd[1876]: time="2025-09-09T23:45:40.997250281Z" level=info msg="CreateContainer within sandbox \"78da118772e5c8779624e46ee6a0c12450c9975e12e84ee465a000c16d5ed5b6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 9 23:45:41.019370 containerd[1876]: time="2025-09-09T23:45:41.019331592Z" level=info msg="Container 9024642a4d6caf08594509a9287209541d8bee4f3a1173c3c369db9b644c6aac: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:45:41.039366 containerd[1876]: time="2025-09-09T23:45:41.039330818Z" level=info msg="CreateContainer within sandbox \"78da118772e5c8779624e46ee6a0c12450c9975e12e84ee465a000c16d5ed5b6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9024642a4d6caf08594509a9287209541d8bee4f3a1173c3c369db9b644c6aac\""
Sep 9 23:45:41.040045 containerd[1876]: time="2025-09-09T23:45:41.040018735Z" level=info msg="StartContainer for \"9024642a4d6caf08594509a9287209541d8bee4f3a1173c3c369db9b644c6aac\""
Sep 9 23:45:41.040725 containerd[1876]: time="2025-09-09T23:45:41.040701787Z" level=info msg="connecting to shim 9024642a4d6caf08594509a9287209541d8bee4f3a1173c3c369db9b644c6aac" address="unix:///run/containerd/s/d10359db95b92ef67f90d151b7068d1f075bf8c9ffc83183c4a25f6286a86bb0" protocol=ttrpc version=3
Sep 9 23:45:41.058042 systemd[1]: Started cri-containerd-9024642a4d6caf08594509a9287209541d8bee4f3a1173c3c369db9b644c6aac.scope - libcontainer container 9024642a4d6caf08594509a9287209541d8bee4f3a1173c3c369db9b644c6aac.
Sep 9 23:45:41.087233 containerd[1876]: time="2025-09-09T23:45:41.086946598Z" level=info msg="StartContainer for \"9024642a4d6caf08594509a9287209541d8bee4f3a1173c3c369db9b644c6aac\" returns successfully"
Sep 9 23:45:41.087317 systemd[1]: cri-containerd-9024642a4d6caf08594509a9287209541d8bee4f3a1173c3c369db9b644c6aac.scope: Deactivated successfully.
Sep 9 23:45:41.088462 containerd[1876]: time="2025-09-09T23:45:41.088430473Z" level=info msg="received exit event container_id:\"9024642a4d6caf08594509a9287209541d8bee4f3a1173c3c369db9b644c6aac\" id:\"9024642a4d6caf08594509a9287209541d8bee4f3a1173c3c369db9b644c6aac\" pid:5241 exited_at:{seconds:1757461541 nanos:88221243}"
Sep 9 23:45:41.088821 containerd[1876]: time="2025-09-09T23:45:41.088796308Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9024642a4d6caf08594509a9287209541d8bee4f3a1173c3c369db9b644c6aac\" id:\"9024642a4d6caf08594509a9287209541d8bee4f3a1173c3c369db9b644c6aac\" pid:5241 exited_at:{seconds:1757461541 nanos:88221243}"
Sep 9 23:45:41.382636 sshd[5226]: Accepted publickey for core from 10.200.16.10 port 35684 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:45:41.383715 sshd-session[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:41.388632 systemd-logind[1849]: New session 29 of user core.
Sep 9 23:45:41.394027 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 9 23:45:41.997347 containerd[1876]: time="2025-09-09T23:45:41.997299718Z" level=info msg="CreateContainer within sandbox \"78da118772e5c8779624e46ee6a0c12450c9975e12e84ee465a000c16d5ed5b6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 23:45:42.028252 containerd[1876]: time="2025-09-09T23:45:42.027550572Z" level=info msg="Container 05a59e761a553105074928b2ebb62199146733841c9729851a22e8ccb513b870: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:45:42.029664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4058177005.mount: Deactivated successfully.
Sep 9 23:45:42.047042 containerd[1876]: time="2025-09-09T23:45:42.046993798Z" level=info msg="CreateContainer within sandbox \"78da118772e5c8779624e46ee6a0c12450c9975e12e84ee465a000c16d5ed5b6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"05a59e761a553105074928b2ebb62199146733841c9729851a22e8ccb513b870\""
Sep 9 23:45:42.047932 containerd[1876]: time="2025-09-09T23:45:42.047800502Z" level=info msg="StartContainer for \"05a59e761a553105074928b2ebb62199146733841c9729851a22e8ccb513b870\""
Sep 9 23:45:42.050166 containerd[1876]: time="2025-09-09T23:45:42.050130026Z" level=info msg="connecting to shim 05a59e761a553105074928b2ebb62199146733841c9729851a22e8ccb513b870" address="unix:///run/containerd/s/d10359db95b92ef67f90d151b7068d1f075bf8c9ffc83183c4a25f6286a86bb0" protocol=ttrpc version=3
Sep 9 23:45:42.073028 systemd[1]: Started cri-containerd-05a59e761a553105074928b2ebb62199146733841c9729851a22e8ccb513b870.scope - libcontainer container 05a59e761a553105074928b2ebb62199146733841c9729851a22e8ccb513b870.
Sep 9 23:45:42.099618 systemd[1]: cri-containerd-05a59e761a553105074928b2ebb62199146733841c9729851a22e8ccb513b870.scope: Deactivated successfully.
Sep 9 23:45:42.101786 containerd[1876]: time="2025-09-09T23:45:42.101736882Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05a59e761a553105074928b2ebb62199146733841c9729851a22e8ccb513b870\" id:\"05a59e761a553105074928b2ebb62199146733841c9729851a22e8ccb513b870\" pid:5291 exited_at:{seconds:1757461542 nanos:101540397}"
Sep 9 23:45:42.102564 containerd[1876]: time="2025-09-09T23:45:42.102516497Z" level=info msg="received exit event container_id:\"05a59e761a553105074928b2ebb62199146733841c9729851a22e8ccb513b870\" id:\"05a59e761a553105074928b2ebb62199146733841c9729851a22e8ccb513b870\" pid:5291 exited_at:{seconds:1757461542 nanos:101540397}"
Sep 9 23:45:42.104871 containerd[1876]: time="2025-09-09T23:45:42.104805604Z" level=info msg="StartContainer for \"05a59e761a553105074928b2ebb62199146733841c9729851a22e8ccb513b870\" returns successfully"
Sep 9 23:45:42.154180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05a59e761a553105074928b2ebb62199146733841c9729851a22e8ccb513b870-rootfs.mount: Deactivated successfully.
Sep 9 23:45:42.566919 kubelet[3385]: I0909 23:45:42.566817 3385 setters.go:602] "Node became not ready" node="ci-4426.0.0-n-d9fce76d1d" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T23:45:42Z","lastTransitionTime":"2025-09-09T23:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 9 23:45:43.002194 containerd[1876]: time="2025-09-09T23:45:43.002083660Z" level=info msg="CreateContainer within sandbox \"78da118772e5c8779624e46ee6a0c12450c9975e12e84ee465a000c16d5ed5b6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 23:45:43.024855 containerd[1876]: time="2025-09-09T23:45:43.024420579Z" level=info msg="Container 4c1d7626f5183f25a7d9dddae34c4892afa9afa8f0f6749c75d9cf6bfc8ef425: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:45:43.047037 containerd[1876]: time="2025-09-09T23:45:43.046996385Z" level=info msg="CreateContainer within sandbox \"78da118772e5c8779624e46ee6a0c12450c9975e12e84ee465a000c16d5ed5b6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4c1d7626f5183f25a7d9dddae34c4892afa9afa8f0f6749c75d9cf6bfc8ef425\""
Sep 9 23:45:43.047960 containerd[1876]: time="2025-09-09T23:45:43.047933228Z" level=info msg="StartContainer for \"4c1d7626f5183f25a7d9dddae34c4892afa9afa8f0f6749c75d9cf6bfc8ef425\""
Sep 9 23:45:43.049594 containerd[1876]: time="2025-09-09T23:45:43.049567788Z" level=info msg="connecting to shim 4c1d7626f5183f25a7d9dddae34c4892afa9afa8f0f6749c75d9cf6bfc8ef425" address="unix:///run/containerd/s/d10359db95b92ef67f90d151b7068d1f075bf8c9ffc83183c4a25f6286a86bb0" protocol=ttrpc version=3
Sep 9 23:45:43.068025 systemd[1]: Started cri-containerd-4c1d7626f5183f25a7d9dddae34c4892afa9afa8f0f6749c75d9cf6bfc8ef425.scope - libcontainer container 4c1d7626f5183f25a7d9dddae34c4892afa9afa8f0f6749c75d9cf6bfc8ef425.
Sep 9 23:45:43.086449 systemd[1]: cri-containerd-4c1d7626f5183f25a7d9dddae34c4892afa9afa8f0f6749c75d9cf6bfc8ef425.scope: Deactivated successfully.
Sep 9 23:45:43.088844 containerd[1876]: time="2025-09-09T23:45:43.088814322Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c1d7626f5183f25a7d9dddae34c4892afa9afa8f0f6749c75d9cf6bfc8ef425\" id:\"4c1d7626f5183f25a7d9dddae34c4892afa9afa8f0f6749c75d9cf6bfc8ef425\" pid:5329 exited_at:{seconds:1757461543 nanos:88611692}"
Sep 9 23:45:43.093787 containerd[1876]: time="2025-09-09T23:45:43.093114128Z" level=info msg="received exit event container_id:\"4c1d7626f5183f25a7d9dddae34c4892afa9afa8f0f6749c75d9cf6bfc8ef425\" id:\"4c1d7626f5183f25a7d9dddae34c4892afa9afa8f0f6749c75d9cf6bfc8ef425\" pid:5329 exited_at:{seconds:1757461543 nanos:88611692}"
Sep 9 23:45:43.094583 containerd[1876]: time="2025-09-09T23:45:43.094556162Z" level=info msg="StartContainer for \"4c1d7626f5183f25a7d9dddae34c4892afa9afa8f0f6749c75d9cf6bfc8ef425\" returns successfully"
Sep 9 23:45:43.108096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c1d7626f5183f25a7d9dddae34c4892afa9afa8f0f6749c75d9cf6bfc8ef425-rootfs.mount: Deactivated successfully.
Sep 9 23:45:44.005859 containerd[1876]: time="2025-09-09T23:45:44.005817036Z" level=info msg="CreateContainer within sandbox \"78da118772e5c8779624e46ee6a0c12450c9975e12e84ee465a000c16d5ed5b6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 23:45:44.036009 containerd[1876]: time="2025-09-09T23:45:44.035965784Z" level=info msg="Container 3dc3f2eaf8b5712cdee9aec5be74ebab751c6ab01abd664f3d7da1c5f96686d8: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:45:44.055317 containerd[1876]: time="2025-09-09T23:45:44.055271086Z" level=info msg="CreateContainer within sandbox \"78da118772e5c8779624e46ee6a0c12450c9975e12e84ee465a000c16d5ed5b6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3dc3f2eaf8b5712cdee9aec5be74ebab751c6ab01abd664f3d7da1c5f96686d8\""
Sep 9 23:45:44.056247 containerd[1876]: time="2025-09-09T23:45:44.056223305Z" level=info msg="StartContainer for \"3dc3f2eaf8b5712cdee9aec5be74ebab751c6ab01abd664f3d7da1c5f96686d8\""
Sep 9 23:45:44.057507 containerd[1876]: time="2025-09-09T23:45:44.057478910Z" level=info msg="connecting to shim 3dc3f2eaf8b5712cdee9aec5be74ebab751c6ab01abd664f3d7da1c5f96686d8" address="unix:///run/containerd/s/d10359db95b92ef67f90d151b7068d1f075bf8c9ffc83183c4a25f6286a86bb0" protocol=ttrpc version=3
Sep 9 23:45:44.076028 systemd[1]: Started cri-containerd-3dc3f2eaf8b5712cdee9aec5be74ebab751c6ab01abd664f3d7da1c5f96686d8.scope - libcontainer container 3dc3f2eaf8b5712cdee9aec5be74ebab751c6ab01abd664f3d7da1c5f96686d8.
Sep 9 23:45:44.107995 containerd[1876]: time="2025-09-09T23:45:44.107960366Z" level=info msg="StartContainer for \"3dc3f2eaf8b5712cdee9aec5be74ebab751c6ab01abd664f3d7da1c5f96686d8\" returns successfully"
Sep 9 23:45:44.168278 containerd[1876]: time="2025-09-09T23:45:44.168238892Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3dc3f2eaf8b5712cdee9aec5be74ebab751c6ab01abd664f3d7da1c5f96686d8\" id:\"2d754fa895b515768bc13a10869cd381f14fbeeab7743f33443d865b1f6c12ff\" pid:5398 exited_at:{seconds:1757461544 nanos:167986581}"
Sep 9 23:45:44.473944 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 9 23:45:45.024705 kubelet[3385]: I0909 23:45:45.024639 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hb6xt" podStartSLOduration=6.024622766 podStartE2EDuration="6.024622766s" podCreationTimestamp="2025-09-09 23:45:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:45:45.02405419 +0000 UTC m=+165.450428464" watchObservedRunningTime="2025-09-09 23:45:45.024622766 +0000 UTC m=+165.450997040"
Sep 9 23:45:45.786162 containerd[1876]: time="2025-09-09T23:45:45.786083266Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3dc3f2eaf8b5712cdee9aec5be74ebab751c6ab01abd664f3d7da1c5f96686d8\" id:\"fc94a595357bebe670947ffb51a4729abea8609d2daf7ddb70a299fb725b208c\" pid:5470 exit_status:1 exited_at:{seconds:1757461545 nanos:785629773}"
Sep 9 23:45:46.854697 systemd-networkd[1671]: lxc_health: Link UP
Sep 9 23:45:46.855368 systemd-networkd[1671]: lxc_health: Gained carrier
Sep 9 23:45:47.889551 containerd[1876]: time="2025-09-09T23:45:47.889504876Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3dc3f2eaf8b5712cdee9aec5be74ebab751c6ab01abd664f3d7da1c5f96686d8\" id:\"557b8bbc1de8c47c5cbbf00789b9e5235653b5042e972681b5afd554ae7ae724\" pid:5913 exited_at:{seconds:1757461547 nanos:889068726}"
Sep 9 23:45:48.633122 systemd-networkd[1671]: lxc_health: Gained IPv6LL
Sep 9 23:45:49.999971 containerd[1876]: time="2025-09-09T23:45:49.998205390Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3dc3f2eaf8b5712cdee9aec5be74ebab751c6ab01abd664f3d7da1c5f96686d8\" id:\"4162af9b84bd9fa1f1a03be2169be9072e7059392b12d34ddb3d2ed4520974fe\" pid:5957 exited_at:{seconds:1757461549 nanos:997423757}"
Sep 9 23:45:52.081360 containerd[1876]: time="2025-09-09T23:45:52.081314351Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3dc3f2eaf8b5712cdee9aec5be74ebab751c6ab01abd664f3d7da1c5f96686d8\" id:\"88b38beb0dfb29ee1c4339f15fa6e01aecdc937bd1fee44ace3526e8d605b3b9\" pid:5982 exited_at:{seconds:1757461552 nanos:81041847}"
Sep 9 23:45:52.169264 sshd[5272]: Connection closed by 10.200.16.10 port 35684
Sep 9 23:45:52.169610 sshd-session[5226]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:52.173387 systemd[1]: sshd@26-10.200.20.4:22-10.200.16.10:35684.service: Deactivated successfully.
Sep 9 23:45:52.176603 systemd[1]: session-29.scope: Deactivated successfully.
Sep 9 23:45:52.177815 systemd-logind[1849]: Session 29 logged out. Waiting for processes to exit.
Sep 9 23:45:52.179508 systemd-logind[1849]: Removed session 29.