Aug 13 07:08:34.335068 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Aug 13 07:08:34.335090 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Tue Aug 12 21:42:02 -00 2025 Aug 13 07:08:34.335098 kernel: KASLR enabled Aug 13 07:08:34.335104 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Aug 13 07:08:34.335111 kernel: printk: bootconsole [pl11] enabled Aug 13 07:08:34.335117 kernel: efi: EFI v2.7 by EDK II Aug 13 07:08:34.335124 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 Aug 13 07:08:34.335130 kernel: random: crng init done Aug 13 07:08:34.335135 kernel: secureboot: Secure boot disabled Aug 13 07:08:34.335141 kernel: ACPI: Early table checksum verification disabled Aug 13 07:08:34.335147 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Aug 13 07:08:34.335153 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 07:08:34.335158 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 07:08:34.335166 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Aug 13 07:08:34.335173 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 07:08:34.335179 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 07:08:34.335186 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 07:08:34.335193 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 07:08:34.335199 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 07:08:34.335206 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 07:08:34.335212 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Aug 13 07:08:34.335218 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 07:08:34.335224 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Aug 13 07:08:34.335230 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Aug 13 07:08:34.335236 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Aug 13 07:08:34.335242 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Aug 13 07:08:34.335248 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Aug 13 07:08:34.335254 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Aug 13 07:08:34.335262 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Aug 13 07:08:34.335268 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Aug 13 07:08:34.335274 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Aug 13 07:08:34.335280 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Aug 13 07:08:34.335286 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Aug 13 07:08:34.335292 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Aug 13 07:08:34.335298 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Aug 13 07:08:34.335304 kernel: NUMA: NODE_DATA [mem 0x1bf7f1800-0x1bf7f6fff] Aug 13 07:08:34.335310 kernel: Zone ranges: Aug 13 
07:08:34.335316 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Aug 13 07:08:34.335322 kernel: DMA32 empty Aug 13 07:08:34.335328 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Aug 13 07:08:34.335338 kernel: Movable zone start for each node Aug 13 07:08:34.335344 kernel: Early memory node ranges Aug 13 07:08:34.335351 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Aug 13 07:08:34.335357 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] Aug 13 07:08:34.335364 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] Aug 13 07:08:34.335371 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] Aug 13 07:08:34.335378 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Aug 13 07:08:34.335384 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Aug 13 07:08:34.335391 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Aug 13 07:08:34.335397 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Aug 13 07:08:34.335403 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Aug 13 07:08:34.335410 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Aug 13 07:08:34.335416 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Aug 13 07:08:34.335423 kernel: psci: probing for conduit method from ACPI. Aug 13 07:08:34.337504 kernel: psci: PSCIv1.1 detected in firmware. Aug 13 07:08:34.337520 kernel: psci: Using standard PSCI v0.2 function IDs Aug 13 07:08:34.337530 kernel: psci: MIGRATE_INFO_TYPE not supported. Aug 13 07:08:34.337543 kernel: psci: SMC Calling Convention v1.4 Aug 13 07:08:34.337550 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Aug 13 07:08:34.337556 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Aug 13 07:08:34.337563 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Aug 13 07:08:34.337569 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Aug 13 07:08:34.337576 kernel: pcpu-alloc: [0] 0 [0] 1 Aug 13 07:08:34.337583 kernel: Detected PIPT I-cache on CPU0 Aug 13 07:08:34.337589 kernel: CPU features: detected: GIC system register CPU interface Aug 13 07:08:34.337596 kernel: CPU features: detected: Hardware dirty bit management Aug 13 07:08:34.337606 kernel: CPU features: detected: Spectre-BHB Aug 13 07:08:34.337612 kernel: CPU features: kernel page table isolation forced ON by KASLR Aug 13 07:08:34.337622 kernel: CPU features: detected: Kernel page table isolation (KPTI) Aug 13 07:08:34.337628 kernel: CPU features: detected: ARM erratum 1418040 Aug 13 07:08:34.337635 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Aug 13 07:08:34.337644 kernel: CPU features: detected: SSBS not fully self-synchronizing Aug 13 07:08:34.337650 kernel: alternatives: applying boot alternatives Aug 13 07:08:34.337659 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c44ba8b4c0c81c1bcadc13a1606b9de202ee4e4226c47e1c865eaa5fc436b169 Aug 13 07:08:34.337666 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Aug 13 07:08:34.337673 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 07:08:34.337680 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 07:08:34.337689 kernel: Fallback order for Node 0: 0 Aug 13 07:08:34.337696 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Aug 13 07:08:34.337704 kernel: Policy zone: Normal Aug 13 07:08:34.337710 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 07:08:34.337717 kernel: software IO TLB: area num 2. Aug 13 07:08:34.337723 kernel: software IO TLB: mapped [mem 0x0000000036530000-0x000000003a530000] (64MB) Aug 13 07:08:34.337733 kernel: Memory: 3983536K/4194160K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38400K init, 897K bss, 210624K reserved, 0K cma-reserved) Aug 13 07:08:34.337739 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 07:08:34.337746 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 07:08:34.337753 kernel: rcu: RCU event tracing is enabled. Aug 13 07:08:34.337760 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 07:08:34.337766 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 07:08:34.337775 kernel: Tracing variant of Tasks RCU enabled. Aug 13 07:08:34.337784 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 07:08:34.337791 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 07:08:34.337797 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Aug 13 07:08:34.337804 kernel: GICv3: 960 SPIs implemented Aug 13 07:08:34.337810 kernel: GICv3: 0 Extended SPIs implemented Aug 13 07:08:34.337816 kernel: Root IRQ handler: gic_handle_irq Aug 13 07:08:34.337825 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Aug 13 07:08:34.337832 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Aug 13 07:08:34.337838 kernel: ITS: No ITS available, not enabling LPIs Aug 13 07:08:34.337845 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 13 07:08:34.337852 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 13 07:08:34.337858 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Aug 13 07:08:34.337867 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Aug 13 07:08:34.337874 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Aug 13 07:08:34.337881 kernel: Console: colour dummy device 80x25 Aug 13 07:08:34.337888 kernel: printk: console [tty1] enabled Aug 13 07:08:34.337894 kernel: ACPI: Core revision 20230628 Aug 13 07:08:34.337901 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Aug 13 07:08:34.337908 kernel: pid_max: default: 32768 minimum: 301 Aug 13 07:08:34.337915 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 13 07:08:34.337922 kernel: landlock: Up and running. Aug 13 07:08:34.337930 kernel: SELinux: Initializing. Aug 13 07:08:34.337936 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 07:08:34.337943 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 07:08:34.337950 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Aug 13 07:08:34.337957 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:08:34.337964 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Aug 13 07:08:34.337971 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Aug 13 07:08:34.337984 kernel: Hyper-V: enabling crash_kexec_post_notifiers Aug 13 07:08:34.337991 kernel: rcu: Hierarchical SRCU implementation. Aug 13 07:08:34.337998 kernel: rcu: Max phase no-delay instances is 400. Aug 13 07:08:34.338005 kernel: Remapping and enabling EFI services. Aug 13 07:08:34.338012 kernel: smp: Bringing up secondary CPUs ... Aug 13 07:08:34.338021 kernel: Detected PIPT I-cache on CPU1 Aug 13 07:08:34.338028 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Aug 13 07:08:34.338035 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 13 07:08:34.338042 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Aug 13 07:08:34.338049 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 07:08:34.338057 kernel: SMP: Total of 2 processors activated. Aug 13 07:08:34.338065 kernel: CPU features: detected: 32-bit EL0 Support Aug 13 07:08:34.338072 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Aug 13 07:08:34.338079 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Aug 13 07:08:34.338087 kernel: CPU features: detected: CRC32 instructions Aug 13 07:08:34.338094 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Aug 13 07:08:34.338101 kernel: CPU features: detected: LSE atomic instructions Aug 13 07:08:34.338108 kernel: CPU features: detected: Privileged Access Never Aug 13 07:08:34.338115 kernel: CPU: All CPU(s) started at EL1 Aug 13 07:08:34.338124 kernel: alternatives: applying system-wide alternatives Aug 13 07:08:34.338131 kernel: devtmpfs: initialized Aug 13 07:08:34.338138 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 07:08:34.338145 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 07:08:34.338152 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 07:08:34.338159 kernel: SMBIOS 3.1.0 present. Aug 13 07:08:34.338166 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Aug 13 07:08:34.338173 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 07:08:34.338181 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Aug 13 07:08:34.338190 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Aug 13 07:08:34.338197 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Aug 13 07:08:34.338204 kernel: audit: initializing netlink subsys (disabled) Aug 13 07:08:34.338211 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Aug 13 07:08:34.338218 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 07:08:34.338225 kernel: cpuidle: using governor menu Aug 13 07:08:34.338232 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Aug 13 07:08:34.338239 kernel: ASID allocator initialised with 32768 entries Aug 13 07:08:34.338246 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 07:08:34.338255 kernel: Serial: AMBA PL011 UART driver Aug 13 07:08:34.338262 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Aug 13 07:08:34.338269 kernel: Modules: 0 pages in range for non-PLT usage Aug 13 07:08:34.338276 kernel: Modules: 509248 pages in range for PLT usage Aug 13 07:08:34.338283 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 07:08:34.338290 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 07:08:34.338297 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Aug 13 07:08:34.338304 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Aug 13 07:08:34.338311 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 07:08:34.338320 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 07:08:34.338327 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Aug 13 07:08:34.338338 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Aug 13 07:08:34.338347 kernel: ACPI: Added _OSI(Module Device) Aug 13 07:08:34.338356 kernel: ACPI: Added _OSI(Processor Device) Aug 13 07:08:34.338363 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 07:08:34.338370 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 07:08:34.338377 kernel: ACPI: Interpreter enabled Aug 13 07:08:34.338384 kernel: ACPI: Using GIC for interrupt routing Aug 13 07:08:34.338392 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Aug 13 07:08:34.338400 kernel: printk: console [ttyAMA0] enabled Aug 13 07:08:34.338406 kernel: printk: bootconsole [pl11] disabled Aug 13 07:08:34.338414 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Aug 13 07:08:34.338421 kernel: iommu: Default domain type: Translated Aug 13 07:08:34.338438 kernel: iommu: DMA domain TLB invalidation policy: strict mode Aug 13 07:08:34.338446 kernel: efivars: Registered efivars operations Aug 13 07:08:34.338453 kernel: vgaarb: loaded Aug 13 07:08:34.338460 kernel: clocksource: Switched to clocksource arch_sys_counter Aug 13 07:08:34.338470 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 07:08:34.338478 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 07:08:34.338485 kernel: pnp: PnP ACPI init Aug 13 07:08:34.338492 kernel: pnp: PnP ACPI: found 0 devices Aug 13 07:08:34.338499 kernel: NET: Registered PF_INET protocol family Aug 13 07:08:34.338506 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 07:08:34.338513 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 13 07:08:34.338520 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 07:08:34.338527 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 07:08:34.338536 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 13 07:08:34.338543 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 13 07:08:34.338550 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 07:08:34.338557 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 07:08:34.338564 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 
07:08:34.338571 kernel: PCI: CLS 0 bytes, default 64 Aug 13 07:08:34.338578 kernel: kvm [1]: HYP mode not available Aug 13 07:08:34.338585 kernel: Initialise system trusted keyrings Aug 13 07:08:34.338592 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 13 07:08:34.338601 kernel: Key type asymmetric registered Aug 13 07:08:34.338608 kernel: Asymmetric key parser 'x509' registered Aug 13 07:08:34.338615 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 13 07:08:34.338622 kernel: io scheduler mq-deadline registered Aug 13 07:08:34.338629 kernel: io scheduler kyber registered Aug 13 07:08:34.338636 kernel: io scheduler bfq registered Aug 13 07:08:34.338643 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 07:08:34.338650 kernel: thunder_xcv, ver 1.0 Aug 13 07:08:34.338658 kernel: thunder_bgx, ver 1.0 Aug 13 07:08:34.338666 kernel: nicpf, ver 1.0 Aug 13 07:08:34.338673 kernel: nicvf, ver 1.0 Aug 13 07:08:34.338821 kernel: rtc-efi rtc-efi.0: registered as rtc0 Aug 13 07:08:34.338893 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-13T07:08:33 UTC (1755068913) Aug 13 07:08:34.338902 kernel: efifb: probing for efifb Aug 13 07:08:34.338910 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Aug 13 07:08:34.338917 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Aug 13 07:08:34.338924 kernel: efifb: scrolling: redraw Aug 13 07:08:34.338934 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Aug 13 07:08:34.338942 kernel: Console: switching to colour frame buffer device 128x48 Aug 13 07:08:34.338949 kernel: fb0: EFI VGA frame buffer device Aug 13 07:08:34.338956 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Aug 13 07:08:34.338963 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 13 07:08:34.338970 kernel: No ACPI PMU IRQ for CPU0 Aug 13 07:08:34.338977 kernel: No ACPI PMU IRQ for CPU1 Aug 13 07:08:34.338984 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Aug 13 07:08:34.338991 kernel: watchdog: Delayed init of the lockup detector failed: -19 Aug 13 07:08:34.339000 kernel: watchdog: Hard watchdog permanently disabled Aug 13 07:08:34.339007 kernel: NET: Registered PF_INET6 protocol family Aug 13 07:08:34.339014 kernel: Segment Routing with IPv6 Aug 13 07:08:34.339021 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 07:08:34.339028 kernel: NET: Registered PF_PACKET protocol family Aug 13 07:08:34.339035 kernel: Key type dns_resolver registered Aug 13 07:08:34.339042 kernel: registered taskstats version 1 Aug 13 07:08:34.339049 kernel: Loading compiled-in X.509 certificates Aug 13 07:08:34.339056 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: b805f03ae64b71ea1aa3cf76d07ec816116f6d0c' Aug 13 07:08:34.339065 kernel: Key type .fscrypt registered Aug 13 07:08:34.339072 kernel: Key type fscrypt-provisioning registered Aug 13 07:08:34.339079 kernel: ima: No TPM chip found, activating TPM-bypass! 
Aug 13 07:08:34.339086 kernel: ima: Allocated hash algorithm: sha1 Aug 13 07:08:34.339093 kernel: ima: No architecture policies found Aug 13 07:08:34.339100 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Aug 13 07:08:34.339107 kernel: clk: Disabling unused clocks Aug 13 07:08:34.339114 kernel: Freeing unused kernel memory: 38400K Aug 13 07:08:34.339121 kernel: Run /init as init process Aug 13 07:08:34.339130 kernel: with arguments: Aug 13 07:08:34.339137 kernel: /init Aug 13 07:08:34.339144 kernel: with environment: Aug 13 07:08:34.339151 kernel: HOME=/ Aug 13 07:08:34.339158 kernel: TERM=linux Aug 13 07:08:34.339165 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 07:08:34.339173 systemd[1]: Successfully made /usr/ read-only. Aug 13 07:08:34.339183 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 07:08:34.339193 systemd[1]: Detected virtualization microsoft. Aug 13 07:08:34.339200 systemd[1]: Detected architecture arm64. Aug 13 07:08:34.339208 systemd[1]: Running in initrd. Aug 13 07:08:34.339215 systemd[1]: No hostname configured, using default hostname. Aug 13 07:08:34.339223 systemd[1]: Hostname set to . Aug 13 07:08:34.339230 systemd[1]: Initializing machine ID from random generator. Aug 13 07:08:34.339238 systemd[1]: Queued start job for default target initrd.target. Aug 13 07:08:34.339246 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:08:34.339255 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:08:34.339263 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 07:08:34.339271 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:08:34.339279 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 07:08:34.339287 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 07:08:34.339296 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 07:08:34.339305 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 07:08:34.339313 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:08:34.339321 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:08:34.339328 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:08:34.339336 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:08:34.339344 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:08:34.339351 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:08:34.339359 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:08:34.339367 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:08:34.339376 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 07:08:34.339384 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Aug 13 07:08:34.339391 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:08:34.339399 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:08:34.339406 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:08:34.339414 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:08:34.339422 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 07:08:34.344895 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:08:34.344911 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 07:08:34.344926 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 07:08:34.344934 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:08:34.344942 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:08:34.344984 systemd-journald[218]: Collecting audit messages is disabled. Aug 13 07:08:34.345006 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:08:34.345016 systemd-journald[218]: Journal started Aug 13 07:08:34.345034 systemd-journald[218]: Runtime Journal (/run/log/journal/c28fcbce77a34a77bbae681e9790d2aa) is 8M, max 78.5M, 70.5M free. Aug 13 07:08:34.345669 systemd-modules-load[220]: Inserted module 'overlay' Aug 13 07:08:34.364470 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:08:34.375448 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 07:08:34.375964 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 07:08:34.392999 kernel: Bridge firewalling registered Aug 13 07:08:34.387139 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:08:34.392204 systemd-modules-load[220]: Inserted module 'br_netfilter' Aug 13 07:08:34.400261 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 07:08:34.411563 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:08:34.422926 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:08:34.448742 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:08:34.466636 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:08:34.473624 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:08:34.498823 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:08:34.515577 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:08:34.523876 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:08:34.537778 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:08:34.551904 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:08:34.577760 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 07:08:34.586617 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:08:34.602621 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Aug 13 07:08:34.625663 dracut-cmdline[251]: dracut-dracut-053 Aug 13 07:08:34.633146 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c44ba8b4c0c81c1bcadc13a1606b9de202ee4e4226c47e1c865eaa5fc436b169 Aug 13 07:08:34.667155 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:08:34.670559 systemd-resolved[252]: Positive Trust Anchors: Aug 13 07:08:34.670569 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:08:34.670601 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:08:34.672790 systemd-resolved[252]: Defaulting to hostname 'linux'. Aug 13 07:08:34.681728 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:08:34.691617 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:08:34.829457 kernel: SCSI subsystem initialized Aug 13 07:08:34.837447 kernel: Loading iSCSI transport class v2.0-870. Aug 13 07:08:34.847451 kernel: iscsi: registered transport (tcp) Aug 13 07:08:34.865607 kernel: iscsi: registered transport (qla4xxx) Aug 13 07:08:34.865679 kernel: QLogic iSCSI HBA Driver Aug 13 07:08:34.904750 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 07:08:34.918702 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 07:08:34.952815 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 07:08:34.952885 kernel: device-mapper: uevent: version 1.0.3 Aug 13 07:08:34.959639 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 07:08:35.009467 kernel: raid6: neonx8 gen() 15777 MB/s Aug 13 07:08:35.029448 kernel: raid6: neonx4 gen() 15818 MB/s Aug 13 07:08:35.049443 kernel: raid6: neonx2 gen() 13199 MB/s Aug 13 07:08:35.070440 kernel: raid6: neonx1 gen() 10514 MB/s Aug 13 07:08:35.090439 kernel: raid6: int64x8 gen() 6796 MB/s Aug 13 07:08:35.110439 kernel: raid6: int64x4 gen() 7353 MB/s Aug 13 07:08:35.131440 kernel: raid6: int64x2 gen() 6112 MB/s Aug 13 07:08:35.156069 kernel: raid6: int64x1 gen() 5059 MB/s Aug 13 07:08:35.156085 kernel: raid6: using algorithm neonx4 gen() 15818 MB/s Aug 13 07:08:35.180216 kernel: raid6: .... 
xor() 12276 MB/s, rmw enabled Aug 13 07:08:35.180229 kernel: raid6: using neon recovery algorithm Aug 13 07:08:35.192662 kernel: xor: measuring software checksum speed Aug 13 07:08:35.192687 kernel: 8regs : 21584 MB/sec Aug 13 07:08:35.196166 kernel: 32regs : 21641 MB/sec Aug 13 07:08:35.203909 kernel: arm64_neon : 26055 MB/sec Aug 13 07:08:35.203935 kernel: xor: using function: arm64_neon (26055 MB/sec) Aug 13 07:08:35.254444 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 07:08:35.267498 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:08:35.282696 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:08:35.307302 systemd-udevd[437]: Using default interface naming scheme 'v255'. Aug 13 07:08:35.313137 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:08:35.331666 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 07:08:35.348903 dracut-pre-trigger[441]: rd.md=0: removing MD RAID activation Aug 13 07:08:35.375797 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:08:35.389710 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:08:35.428529 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:08:35.447656 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 07:08:35.477414 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 07:08:35.484759 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:08:35.492406 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:08:35.515621 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:08:35.544695 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 07:08:35.568211 kernel: hv_vmbus: Vmbus version:5.3 Aug 13 07:08:35.570512 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:08:35.593353 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:08:35.604637 kernel: hv_vmbus: registering driver hid_hyperv Aug 13 07:08:35.604659 kernel: hv_vmbus: registering driver hv_netvsc Aug 13 07:08:35.622208 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 13 07:08:35.622259 kernel: hv_vmbus: registering driver hyperv_keyboard Aug 13 07:08:35.622269 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 13 07:08:35.622279 kernel: hv_vmbus: registering driver hv_storvsc Aug 13 07:08:35.631487 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Aug 13 07:08:35.651943 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Aug 13 07:08:35.638409 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:08:35.652093 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Aug 13 07:08:35.695929 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Aug 13 07:08:35.696095 kernel: scsi host1: storvsc_host_t Aug 13 07:08:35.696192 kernel: scsi host0: storvsc_host_t Aug 13 07:08:35.696279 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Aug 13 07:08:35.674367 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:08:35.674640 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:08:35.708459 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:08:35.736447 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Aug 13 07:08:35.737479 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:08:35.753020 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:08:35.753181 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:08:35.822564 kernel: PTP clock support registered Aug 13 07:08:35.822587 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Aug 13 07:08:35.822748 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 07:08:35.822758 kernel: hv_utils: Registering HyperV Utility Driver Aug 13 07:08:35.822768 kernel: hv_vmbus: registering driver hv_utils Aug 13 07:08:35.822784 kernel: hv_utils: Heartbeat IC version 3.0 Aug 13 07:08:35.822793 kernel: hv_utils: Shutdown IC version 3.2 Aug 13 07:08:35.822802 kernel: hv_utils: TimeSync IC version 4.0 Aug 13 07:08:35.822810 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Aug 13 07:08:35.822895 kernel: hv_netvsc 000d3ac2-c27b-000d-3ac2-c27b000d3ac2 eth0: VF slot 1 added Aug 13 07:08:35.785885 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 07:08:35.797062 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:08:36.156284 kernel: hv_vmbus: registering driver hv_pci Aug 13 07:08:36.156310 kernel: hv_pci 851db671-e36a-453b-8db4-f1f9230e2d84: PCI VMBus probing: Using version 0x10004 Aug 13 07:08:36.128651 systemd-resolved[252]: Clock change detected. Flushing caches. Aug 13 07:08:36.163226 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:08:36.200277 kernel: hv_pci 851db671-e36a-453b-8db4-f1f9230e2d84: PCI host bridge to bus e36a:00 Aug 13 07:08:36.200468 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Aug 13 07:08:36.200582 kernel: pci_bus e36a:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Aug 13 07:08:36.200676 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Aug 13 07:08:36.206864 kernel: pci_bus e36a:00: No busn resource found for root bus, will use [bus 00-ff] Aug 13 07:08:36.212353 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 07:08:36.212568 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Aug 13 07:08:36.212662 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Aug 13 07:08:36.218588 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Aug 13 07:08:36.237652 kernel: pci e36a:00:02.0: [15b3:1018] type 00 class 0x020000 Aug 13 07:08:36.237699 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:08:36.258376 kernel: pci e36a:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Aug 13 07:08:36.258454 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 07:08:36.267641 kernel: pci e36a:00:02.0: enabling Extended Tags Aug 13 07:08:36.276633 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:08:36.307986 kernel: pci e36a:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e36a:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Aug 13 07:08:36.308178 kernel: pci_bus e36a:00: busn_res: [bus 00-ff] end is updated to 00 Aug 13 07:08:36.314454 kernel: pci e36a:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Aug 13 07:08:36.356148 kernel: mlx5_core e36a:00:02.0: enabling device (0000 -> 0002) Aug 13 07:08:36.363277 kernel: mlx5_core e36a:00:02.0: firmware version: 16.30.1284 Aug 13 07:08:36.560813 kernel: hv_netvsc 000d3ac2-c27b-000d-3ac2-c27b000d3ac2 eth0: VF registering: eth1 Aug 13 07:08:36.561051 kernel: mlx5_core e36a:00:02.0 eth1: joined to eth0 Aug 13 07:08:36.570298 kernel: mlx5_core e36a:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Aug 13 07:08:36.582290 kernel: mlx5_core e36a:00:02.0 enP58218s1: renamed from eth1 Aug 13 07:08:36.739389 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Aug 13 07:08:36.826292 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (492) Aug 13 07:08:36.838828 kernel: BTRFS: device fsid 66ef7c2c-768e-46b2-8baa-a2b24df44a90 devid 1 transid 42 /dev/sda3 scanned by (udev-worker) (491) Aug 13 07:08:36.848032 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Aug 13 07:08:36.875067 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Aug 13 07:08:36.883416 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Aug 13 07:08:36.910134 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Aug 13 07:08:36.930425 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 07:08:36.956542 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:08:36.966271 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:08:37.977320 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:08:37.977375 disk-uuid[608]: The operation has completed successfully. Aug 13 07:08:38.045498 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 07:08:38.045591 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 07:08:38.094391 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 07:08:38.107801 sh[694]: Success Aug 13 07:08:38.136323 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Aug 13 07:08:38.374512 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 07:08:38.395493 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 07:08:38.405301 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Aug 13 07:08:38.442110 kernel: BTRFS info (device dm-0): first mount of filesystem 66ef7c2c-768e-46b2-8baa-a2b24df44a90 Aug 13 07:08:38.442162 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Aug 13 07:08:38.449188 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 07:08:38.455156 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 07:08:38.459437 kernel: BTRFS info (device dm-0): using free space tree Aug 13 07:08:38.718006 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 07:08:38.723786 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 07:08:38.744555 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 07:08:38.758468 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 07:08:38.790732 kernel: BTRFS info (device sda6): first mount of filesystem 5832a3b0-f866-4304-b935-a4d38424b8f9 Aug 13 07:08:38.790759 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 07:08:38.790769 kernel: BTRFS info (device sda6): using free space tree Aug 13 07:08:38.799293 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 07:08:38.811327 kernel: BTRFS info (device sda6): last unmount of filesystem 5832a3b0-f866-4304-b935-a4d38424b8f9 Aug 13 07:08:38.817314 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 07:08:38.833505 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 07:08:38.882837 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:08:38.898454 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:08:38.937223 systemd-networkd[879]: lo: Link UP Aug 13 07:08:38.937236 systemd-networkd[879]: lo: Gained carrier Aug 13 07:08:38.938863 systemd-networkd[879]: Enumeration completed Aug 13 07:08:38.940855 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:08:38.941288 systemd-networkd[879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:08:38.941291 systemd-networkd[879]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:08:38.947314 systemd[1]: Reached target network.target - Network. Aug 13 07:08:39.033278 kernel: mlx5_core e36a:00:02.0 enP58218s1: Link up Aug 13 07:08:39.077023 systemd-networkd[879]: enP58218s1: Link UP Aug 13 07:08:39.081649 kernel: hv_netvsc 000d3ac2-c27b-000d-3ac2-c27b000d3ac2 eth0: Data path switched to VF: enP58218s1 Aug 13 07:08:39.077138 systemd-networkd[879]: eth0: Link UP Aug 13 07:08:39.081382 systemd-networkd[879]: eth0: Gained carrier Aug 13 07:08:39.081395 systemd-networkd[879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:08:39.086465 systemd-networkd[879]: enP58218s1: Gained carrier Aug 13 07:08:39.109841 systemd-networkd[879]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 Aug 13 07:08:39.604096 ignition[794]: Ignition 2.20.0 Aug 13 07:08:39.604108 ignition[794]: Stage: fetch-offline Aug 13 07:08:39.606252 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Aug 13 07:08:39.604149 ignition[794]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:08:39.623397 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 13 07:08:39.604159 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:08:39.604252 ignition[794]: parsed url from cmdline: "" Aug 13 07:08:39.604275 ignition[794]: no config URL provided Aug 13 07:08:39.604280 ignition[794]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:08:39.604287 ignition[794]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:08:39.604292 ignition[794]: failed to fetch config: resource requires networking Aug 13 07:08:39.604461 ignition[794]: Ignition finished successfully Aug 13 07:08:39.639432 ignition[888]: Ignition 2.20.0 Aug 13 07:08:39.639438 ignition[888]: Stage: fetch Aug 13 07:08:39.639660 ignition[888]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:08:39.639670 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:08:39.639770 ignition[888]: parsed url from cmdline: "" Aug 13 07:08:39.639773 ignition[888]: no config URL provided Aug 13 07:08:39.639778 ignition[888]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:08:39.639799 ignition[888]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:08:39.639830 ignition[888]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Aug 13 07:08:39.800288 ignition[888]: GET result: OK Aug 13 07:08:39.800396 ignition[888]: config has been read from IMDS userdata Aug 13 07:08:39.800442 ignition[888]: parsing config with SHA512: ec6b5a98eb8a85d7774ea8f223803a1a6a0996a89e1a3bdbf549576cf1584f7bba67718a9626121a22ac746d4002c86def0f5504a04298dc82de09c7fb53805f Aug 13 07:08:39.804914 unknown[888]: fetched base config from "system" Aug 13 07:08:39.805376 ignition[888]: fetch: fetch complete Aug 13 07:08:39.804921 unknown[888]: fetched base config from "system" Aug 13 07:08:39.805382 ignition[888]: fetch: fetch passed Aug 13 07:08:39.804925 unknown[888]: fetched user config from "azure" Aug 13 07:08:39.805429 ignition[888]: Ignition finished successfully Aug 13 07:08:39.809043 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 07:08:39.832977 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 07:08:39.856372 ignition[895]: Ignition 2.20.0 Aug 13 07:08:39.856387 ignition[895]: Stage: kargs Aug 13 07:08:39.861290 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 07:08:39.856579 ignition[895]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:08:39.856599 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:08:39.857683 ignition[895]: kargs: kargs passed Aug 13 07:08:39.857747 ignition[895]: Ignition finished successfully Aug 13 07:08:39.885498 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 07:08:39.906878 ignition[901]: Ignition 2.20.0 Aug 13 07:08:39.909918 ignition[901]: Stage: disks Aug 13 07:08:39.910135 ignition[901]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:08:39.915183 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 07:08:39.910147 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:08:39.925652 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Aug 13 07:08:39.911067 ignition[901]: disks: disks passed Aug 13 07:08:39.936310 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 07:08:39.911116 ignition[901]: Ignition finished successfully Aug 13 07:08:39.948653 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:08:39.959616 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:08:39.968659 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:08:39.997421 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 07:08:40.051975 systemd-fsck[909]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Aug 13 07:08:40.059581 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 07:08:40.077465 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 07:08:40.134287 kernel: EXT4-fs (sda9): mounted filesystem 4e885a6c-f4f3-43a5-b152-e0e8bd6b099d r/w with ordered data mode. Quota mode: none. Aug 13 07:08:40.134210 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 07:08:40.139052 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 07:08:40.182344 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:08:40.192142 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 07:08:40.215564 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 13 07:08:40.243152 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (920) Aug 13 07:08:40.243296 kernel: BTRFS info (device sda6): first mount of filesystem 5832a3b0-f866-4304-b935-a4d38424b8f9 Aug 13 07:08:40.222892 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 07:08:40.262109 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 07:08:40.222925 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:08:40.276695 kernel: BTRFS info (device sda6): using free space tree Aug 13 07:08:40.243809 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 07:08:40.282793 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 07:08:40.299310 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 07:08:40.300675 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 07:08:40.747366 systemd-networkd[879]: eth0: Gained IPv6LL Aug 13 07:08:40.764749 coreos-metadata[922]: Aug 13 07:08:40.764 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 13 07:08:40.772974 coreos-metadata[922]: Aug 13 07:08:40.767 INFO Fetch successful Aug 13 07:08:40.772974 coreos-metadata[922]: Aug 13 07:08:40.767 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Aug 13 07:08:40.790062 coreos-metadata[922]: Aug 13 07:08:40.789 INFO Fetch successful Aug 13 07:08:40.804163 coreos-metadata[922]: Aug 13 07:08:40.804 INFO wrote hostname ci-4230.2.2-a-6317daa899 to /sysroot/etc/hostname Aug 13 07:08:40.806240 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Aug 13 07:08:40.998639 initrd-setup-root[950]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 07:08:41.020277 initrd-setup-root[957]: cut: /sysroot/etc/group: No such file or directory Aug 13 07:08:41.029741 initrd-setup-root[964]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 07:08:41.039071 initrd-setup-root[971]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 07:08:41.882801 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 07:08:41.896711 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 07:08:41.907455 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 07:08:41.933484 kernel: BTRFS info (device sda6): last unmount of filesystem 5832a3b0-f866-4304-b935-a4d38424b8f9 Aug 13 07:08:41.924392 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 07:08:41.948386 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 07:08:41.962241 ignition[1041]: INFO : Ignition 2.20.0 Aug 13 07:08:41.962241 ignition[1041]: INFO : Stage: mount Aug 13 07:08:41.972349 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:08:41.972349 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:08:41.972349 ignition[1041]: INFO : mount: mount passed Aug 13 07:08:41.972349 ignition[1041]: INFO : Ignition finished successfully Aug 13 07:08:41.968328 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 07:08:41.994488 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 07:08:42.013525 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:08:42.038276 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1051) Aug 13 07:08:42.051274 kernel: BTRFS info (device sda6): first mount of filesystem 5832a3b0-f866-4304-b935-a4d38424b8f9 Aug 13 07:08:42.051314 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 07:08:42.055762 kernel: BTRFS info (device sda6): using free space tree Aug 13 07:08:42.063280 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 07:08:42.064862 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 07:08:42.088172 ignition[1069]: INFO : Ignition 2.20.0 Aug 13 07:08:42.088172 ignition[1069]: INFO : Stage: files Aug 13 07:08:42.096205 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:08:42.096205 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:08:42.096205 ignition[1069]: DEBUG : files: compiled without relabeling support, skipping Aug 13 07:08:42.114337 ignition[1069]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 07:08:42.114337 ignition[1069]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 07:08:42.132264 ignition[1069]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 07:08:42.140013 ignition[1069]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 07:08:42.140013 ignition[1069]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 07:08:42.140013 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Aug 13 07:08:42.140013 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Aug 13 07:08:42.132693 unknown[1069]: wrote ssh authorized keys file for user: core Aug 13 07:08:42.306669 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 07:08:42.414190 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Aug 13 07:08:42.414190 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 07:08:42.434544 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Aug 13 07:08:42.638101 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 07:08:42.710090 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 07:08:42.710090 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 07:08:42.729295 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 07:08:42.729295 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:08:42.729295 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:08:42.729295 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:08:42.729295 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:08:42.729295 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:08:42.729295 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 
07:08:42.729295 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:08:42.729295 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:08:42.729295 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Aug 13 07:08:42.729295 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Aug 13 07:08:42.729295 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Aug 13 07:08:42.729295 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Aug 13 07:08:43.184292 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 07:08:43.411138 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Aug 13 07:08:43.411138 ignition[1069]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 13 07:08:43.445038 ignition[1069]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:08:43.455703 ignition[1069]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:08:43.455703 ignition[1069]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 13 07:08:43.455703 ignition[1069]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Aug 13 07:08:43.455703 ignition[1069]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 07:08:43.455703 ignition[1069]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:08:43.455703 ignition[1069]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:08:43.455703 ignition[1069]: INFO : files: files passed Aug 13 07:08:43.455703 ignition[1069]: INFO : Ignition finished successfully Aug 13 07:08:43.466645 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 07:08:43.503518 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 07:08:43.520486 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 07:08:43.563421 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:08:43.563421 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:08:43.539803 systemd[1]: ignition-quench.service: Deactivated successfully. 
Aug 13 07:08:43.595162 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:08:43.539894 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 07:08:43.581723 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:08:43.590746 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 07:08:43.616587 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 07:08:43.656548 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 07:08:43.656666 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 07:08:43.669362 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 07:08:43.681541 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 07:08:43.692207 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 07:08:43.711173 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 07:08:43.731882 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:08:43.747485 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 07:08:43.768700 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 07:08:43.768817 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 07:08:43.782216 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:08:43.792978 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:08:43.804978 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 07:08:43.816713 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 07:08:43.816807 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:08:43.832845 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 07:08:43.845238 systemd[1]: Stopped target basic.target - Basic System. Aug 13 07:08:43.855769 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 07:08:43.866082 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:08:43.878326 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 07:08:43.890355 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 07:08:43.901504 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:08:43.913045 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 07:08:43.924836 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 07:08:43.935240 systemd[1]: Stopped target swap.target - Swaps. Aug 13 07:08:43.944659 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 07:08:43.944744 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:08:43.959306 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:08:43.970512 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:08:43.982429 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Aug 13 07:08:43.988503 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:08:43.995652 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 07:08:43.995737 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 07:08:44.013897 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 07:08:44.013951 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:08:44.026316 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 07:08:44.026384 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 07:08:44.037322 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 13 07:08:44.037383 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 13 07:08:44.071511 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 07:08:44.092295 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 07:08:44.124663 ignition[1122]: INFO : Ignition 2.20.0 Aug 13 07:08:44.124663 ignition[1122]: INFO : Stage: umount Aug 13 07:08:44.124663 ignition[1122]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:08:44.124663 ignition[1122]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:08:44.124663 ignition[1122]: INFO : umount: umount passed Aug 13 07:08:44.124663 ignition[1122]: INFO : Ignition finished successfully Aug 13 07:08:44.102167 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 07:08:44.102281 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:08:44.111773 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 07:08:44.111859 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:08:44.125005 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 07:08:44.126287 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 07:08:44.136593 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 07:08:44.137007 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 07:08:44.137055 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 07:08:44.149619 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 07:08:44.149689 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 07:08:44.160136 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 07:08:44.160181 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 07:08:44.172286 systemd[1]: Stopped target network.target - Network. Aug 13 07:08:44.181113 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 07:08:44.181187 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:08:44.193825 systemd[1]: Stopped target paths.target - Path Units. Aug 13 07:08:44.204010 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 07:08:44.209755 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:08:44.217139 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 07:08:44.227239 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 07:08:44.239023 systemd[1]: iscsid.socket: Deactivated successfully. 
Aug 13 07:08:44.239068 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:08:44.254398 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 07:08:44.254437 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:08:44.264431 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 07:08:44.264487 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 07:08:44.274672 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 07:08:44.274716 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 07:08:44.287155 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 07:08:44.298239 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 07:08:44.320706 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 07:08:44.320852 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 07:08:44.338482 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 13 07:08:44.338699 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 07:08:44.338941 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 07:08:44.356903 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 13 07:08:44.358034 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 07:08:44.558834 kernel: hv_netvsc 000d3ac2-c27b-000d-3ac2-c27b000d3ac2 eth0: Data path switched from VF: enP58218s1 Aug 13 07:08:44.358113 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:08:44.388429 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 07:08:44.397565 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 07:08:44.397649 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:08:44.409366 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 07:08:44.409422 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:08:44.424702 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 07:08:44.424762 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 07:08:44.430688 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 07:08:44.430733 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:08:44.447222 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:08:44.465355 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 07:08:44.465444 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 13 07:08:44.481873 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 07:08:44.482031 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:08:44.494087 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 07:08:44.494130 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 07:08:44.505346 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 07:08:44.505381 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Aug 13 07:08:44.517530 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 07:08:44.517593 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:08:44.541931 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 07:08:44.541994 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 07:08:44.558900 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:08:44.558959 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:08:44.595505 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 07:08:44.610351 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 07:08:44.610436 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:08:44.629372 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 07:08:44.629436 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:08:44.637015 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 07:08:44.637073 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:08:44.649446 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:08:44.649499 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:08:44.677244 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 07:08:44.677341 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 07:08:44.677718 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 07:08:44.677820 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 07:08:44.690706 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 07:08:44.886724 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Aug 13 07:08:44.690790 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 07:08:44.697737 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 07:08:44.697819 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 07:08:44.708880 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 07:08:44.720830 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 07:08:44.720932 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 07:08:44.748504 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 07:08:44.776096 systemd[1]: Switching root. 
Aug 13 07:08:44.927908 systemd-journald[218]: Journal stopped Aug 13 07:08:49.367694 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 07:08:49.367720 kernel: SELinux: policy capability open_perms=1 Aug 13 07:08:49.367732 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 07:08:49.367740 kernel: SELinux: policy capability always_check_network=0 Aug 13 07:08:49.367750 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 07:08:49.367757 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 07:08:49.367766 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 07:08:49.367774 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 07:08:49.367782 kernel: audit: type=1403 audit(1755068925.865:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 07:08:49.367791 systemd[1]: Successfully loaded SELinux policy in 132.396ms. Aug 13 07:08:49.367804 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.233ms. Aug 13 07:08:49.367814 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 07:08:49.367822 systemd[1]: Detected virtualization microsoft. Aug 13 07:08:49.367830 systemd[1]: Detected architecture arm64. Aug 13 07:08:49.367839 systemd[1]: Detected first boot. Aug 13 07:08:49.367856 systemd[1]: Hostname set to . Aug 13 07:08:49.367873 systemd[1]: Initializing machine ID from random generator. Aug 13 07:08:49.367882 zram_generator::config[1165]: No configuration found. Aug 13 07:08:49.367892 kernel: NET: Registered PF_VSOCK protocol family Aug 13 07:08:49.367901 systemd[1]: Populated /etc with preset unit settings. Aug 13 07:08:49.367910 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 13 07:08:49.367919 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 07:08:49.367935 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 07:08:49.367946 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 07:08:49.367955 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 07:08:49.367964 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 07:08:49.367973 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 07:08:49.367982 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 07:08:49.367995 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 07:08:49.368006 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 07:08:49.368015 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 07:08:49.368024 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 07:08:49.368033 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:08:49.368045 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:08:49.368054 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
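The kernel lines above list the SELinux policy capabilities, and systemd reports loading the policy in roughly 132 ms on this first boot. As a quick cross-check on the running host, the current enforcement mode can be read straight from the selinuxfs mount; a minimal sketch, assuming SELinux is enabled on the kernel:

from pathlib import Path

# /sys/fs/selinux/enforce holds "1" for enforcing and "0" for permissive;
# the file only exists while an SELinux policy is loaded.
enforce = Path("/sys/fs/selinux/enforce")
if enforce.exists():
    print("enforcing" if enforce.read_text().strip() == "1" else "permissive")
else:
    print("no SELinux policy loaded on this kernel")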
Aug 13 07:08:49.368063 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 07:08:49.368072 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 07:08:49.368086 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:08:49.368095 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Aug 13 07:08:49.368104 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:08:49.368115 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 07:08:49.368124 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 07:08:49.368134 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 07:08:49.368143 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 07:08:49.368153 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:08:49.368163 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:08:49.368173 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:08:49.368182 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:08:49.368191 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 07:08:49.368200 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 07:08:49.368209 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 13 07:08:49.368220 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:08:49.368230 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:08:49.368239 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:08:49.368248 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 07:08:49.368282 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 07:08:49.368292 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 07:08:49.368302 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 07:08:49.368317 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 07:08:49.368326 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 07:08:49.368336 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 07:08:49.368345 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 07:08:49.368355 systemd[1]: Reached target machines.target - Containers. Aug 13 07:08:49.368367 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 07:08:49.368376 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:08:49.368387 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:08:49.368397 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 07:08:49.368407 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:08:49.368419 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Aug 13 07:08:49.368429 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:08:49.368438 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 07:08:49.368447 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:08:49.368457 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 07:08:49.368469 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 07:08:49.368480 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 07:08:49.368489 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 07:08:49.368498 kernel: fuse: init (API version 7.39) Aug 13 07:08:49.368507 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 07:08:49.368520 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 07:08:49.368530 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:08:49.368539 kernel: loop: module loaded Aug 13 07:08:49.368547 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:08:49.368557 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 07:08:49.368571 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 07:08:49.368580 kernel: ACPI: bus type drm_connector registered Aug 13 07:08:49.368589 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 13 07:08:49.368599 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:08:49.368608 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 07:08:49.368645 systemd-journald[1269]: Collecting audit messages is disabled. Aug 13 07:08:49.368668 systemd[1]: Stopped verity-setup.service. Aug 13 07:08:49.368683 systemd-journald[1269]: Journal started Aug 13 07:08:49.368704 systemd-journald[1269]: Runtime Journal (/run/log/journal/afb1cec87f1743d2858326af492e1bcf) is 8M, max 78.5M, 70.5M free. Aug 13 07:08:48.361249 systemd[1]: Queued start job for default target multi-user.target. Aug 13 07:08:48.369147 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 13 07:08:48.369538 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 07:08:48.369907 systemd[1]: systemd-journald.service: Consumed 3.290s CPU time. Aug 13 07:08:49.392098 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:08:49.393017 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 07:08:49.399405 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 07:08:49.405766 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 07:08:49.411493 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 07:08:49.418201 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 07:08:49.424393 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 07:08:49.431310 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 07:08:49.439303 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Aug 13 07:08:49.446701 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 07:08:49.446877 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 07:08:49.453461 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:08:49.453618 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:08:49.460071 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:08:49.460235 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:08:49.466423 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:08:49.468312 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:08:49.475166 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 07:08:49.475375 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 07:08:49.481830 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:08:49.482028 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:08:49.488348 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:08:49.494805 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 07:08:49.502319 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 07:08:49.509848 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 13 07:08:49.517155 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:08:49.532959 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 07:08:49.548370 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 07:08:49.555527 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 07:08:49.561737 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 07:08:49.561783 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:08:49.568807 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 13 07:08:49.577094 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 07:08:49.584587 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 07:08:49.590331 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:08:49.592307 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 07:08:49.599805 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 07:08:49.606126 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:08:49.607949 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 07:08:49.614488 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:08:49.615631 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:08:49.624471 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
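The modprobe@*.service entries above are instances of a single template unit that loads the kernel module named by the instance suffix (configfs, dm_mod, drm, efi_pstore, fuse, loop). A small sketch for confirming the result, reading /proc/modules directly rather than assuming any particular tooling; note that some of these (for example configfs or efi_pstore) are often built into the kernel, so absence from the module list does not mean absence from the kernel.

from pathlib import Path

# Modules requested by the modprobe@ template instances logged above.
wanted = {"configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"}

# /proc/modules lists one loadable module per line, name first.
loaded = {line.split()[0] for line in Path("/proc/modules").read_text().splitlines()}

for name in sorted(wanted):
    status = "loaded as module" if name in loaded else "not listed (possibly built-in)"
    print(f"{name}: {status}")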
Aug 13 07:08:49.634475 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:08:49.652006 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 07:08:49.667181 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 07:08:49.674732 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 07:08:49.675950 systemd-journald[1269]: Time spent on flushing to /var/log/journal/afb1cec87f1743d2858326af492e1bcf is 17.854ms for 917 entries. Aug 13 07:08:49.675950 systemd-journald[1269]: System Journal (/var/log/journal/afb1cec87f1743d2858326af492e1bcf) is 8M, max 2.6G, 2.6G free. Aug 13 07:08:49.753441 systemd-journald[1269]: Received client request to flush runtime journal. Aug 13 07:08:49.753512 kernel: loop0: detected capacity change from 0 to 28720 Aug 13 07:08:49.687828 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 07:08:49.705732 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 07:08:49.714283 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:08:49.723184 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 07:08:49.736578 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 13 07:08:49.743689 udevadm[1308]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 07:08:49.755209 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 07:08:49.806789 systemd-tmpfiles[1307]: ACLs are not supported, ignoring. Aug 13 07:08:49.806804 systemd-tmpfiles[1307]: ACLs are not supported, ignoring. Aug 13 07:08:49.811281 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:08:49.822505 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 07:08:49.859217 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 07:08:49.861136 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 13 07:08:50.098385 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 07:08:50.157194 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 07:08:50.168476 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:08:50.184486 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. Aug 13 07:08:50.184504 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. Aug 13 07:08:50.188490 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:08:50.262398 kernel: loop1: detected capacity change from 0 to 123192 Aug 13 07:08:50.662283 kernel: loop2: detected capacity change from 0 to 113512 Aug 13 07:08:50.938903 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 07:08:50.948511 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:08:50.977241 systemd-udevd[1332]: Using default interface naming scheme 'v255'. 
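The journald messages above show the runtime journal in /run/log/journal being flushed to persistent storage under /var/log/journal once the root filesystem is writable (systemd-journal-flush.service). The same entries can be pulled back out later with journalctl; a minimal sketch, assuming journalctl is on PATH and is run on the booted host:

import subprocess

def show(unit: str) -> None:
    # -b limits output to the current boot, -u filters by unit, and
    # -o short-precise keeps the microsecond timestamps seen in this log.
    subprocess.run(
        ["journalctl", "-b", "-u", unit, "-o", "short-precise", "--no-pager"],
        check=True,
    )

for unit in ("systemd-journal-flush.service", "ignition-files.service"):
    show(unit)

# Total space the persisted journal occupies after the flush.
subprocess.run(["journalctl", "--disk-usage"], check=True)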
Aug 13 07:08:51.045287 kernel: loop3: detected capacity change from 0 to 211168 Aug 13 07:08:51.089310 kernel: loop4: detected capacity change from 0 to 28720 Aug 13 07:08:51.100284 kernel: loop5: detected capacity change from 0 to 123192 Aug 13 07:08:51.114291 kernel: loop6: detected capacity change from 0 to 113512 Aug 13 07:08:51.124290 kernel: loop7: detected capacity change from 0 to 211168 Aug 13 07:08:51.132718 (sd-merge)[1335]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Aug 13 07:08:51.133183 (sd-merge)[1335]: Merged extensions into '/usr'. Aug 13 07:08:51.137351 systemd[1]: Reload requested from client PID 1305 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 07:08:51.137504 systemd[1]: Reloading... Aug 13 07:08:51.209315 zram_generator::config[1359]: No configuration found. Aug 13 07:08:51.391797 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 07:08:51.416369 kernel: hv_vmbus: registering driver hv_balloon Aug 13 07:08:51.416467 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Aug 13 07:08:51.417080 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:08:51.426501 kernel: hv_balloon: Memory hot add disabled on ARM64 Aug 13 07:08:51.504334 kernel: hv_vmbus: registering driver hyperv_fb Aug 13 07:08:51.510289 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Aug 13 07:08:51.519340 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Aug 13 07:08:51.526372 kernel: Console: switching to colour dummy device 80x25 Aug 13 07:08:51.528337 kernel: Console: switching to colour frame buffer device 128x48 Aug 13 07:08:51.542955 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Aug 13 07:08:51.544685 systemd[1]: Reloading finished in 406 ms. Aug 13 07:08:51.554731 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:08:51.562272 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 07:08:51.597286 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1398) Aug 13 07:08:51.603494 systemd[1]: Starting ensure-sysext.service... Aug 13 07:08:51.614779 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:08:51.628472 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:08:51.643439 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:08:51.670762 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Aug 13 07:08:51.701718 systemd[1]: Reload requested from client PID 1468 ('systemctl') (unit ensure-sysext.service)... Aug 13 07:08:51.701734 systemd[1]: Reloading... Aug 13 07:08:51.723631 systemd-tmpfiles[1481]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 07:08:51.723844 systemd-tmpfiles[1481]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 07:08:51.726697 systemd-tmpfiles[1481]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 07:08:51.726917 systemd-tmpfiles[1481]: ACLs are not supported, ignoring. 
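The loop0..loop7 capacity changes and the (sd-merge) lines above are systemd-sysext attaching the raw extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-azure) and overlaying them onto /usr and /opt. A small sketch for inspecting the same state on the running host, assuming the standard sysext search directories and the systemd-sysext tool:

import subprocess
from pathlib import Path

# systemd-sysext picks up images from these directories; Ignition linked the
# kubernetes image into /etc/extensions earlier in this log.
for directory in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
    path = Path(directory)
    if path.is_dir():
        for entry in sorted(path.iterdir()):
            print(f"{directory}/{entry.name}")

# Report which hierarchies are currently extended (here /usr and /opt).
subprocess.run(["systemd-sysext", "status"], check=True)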
Aug 13 07:08:51.726962 systemd-tmpfiles[1481]: ACLs are not supported, ignoring. Aug 13 07:08:51.749957 systemd-tmpfiles[1481]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:08:51.750108 systemd-tmpfiles[1481]: Skipping /boot Aug 13 07:08:51.765642 systemd-tmpfiles[1481]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:08:51.765790 systemd-tmpfiles[1481]: Skipping /boot Aug 13 07:08:51.802333 zram_generator::config[1552]: No configuration found. Aug 13 07:08:51.915011 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:08:52.019076 systemd[1]: Reloading finished in 317 ms. Aug 13 07:08:52.039290 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:08:52.083574 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 07:08:52.090285 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 07:08:52.096935 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:08:52.098583 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:08:52.107606 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:08:52.115662 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:08:52.121456 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:08:52.124589 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 07:08:52.132026 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 07:08:52.134213 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 07:08:52.145435 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:08:52.160066 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 07:08:52.169513 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 07:08:52.175194 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:08:52.175425 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:08:52.181720 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:08:52.190160 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:08:52.203942 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 07:08:52.209167 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:08:52.209393 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:08:52.218095 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:08:52.218285 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:08:52.227224 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Aug 13 07:08:52.228036 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:08:52.237884 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 07:08:52.258616 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 07:08:52.267533 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 07:08:52.275216 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 07:08:52.292678 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:08:52.307458 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 07:08:52.321224 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:08:52.328561 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 07:08:52.344557 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:08:52.360568 augenrules[1664]: No rules Aug 13 07:08:52.363566 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:08:52.384570 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:08:52.397409 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:08:52.406102 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:08:52.406251 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 07:08:52.406427 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 07:08:52.413193 systemd-resolved[1625]: Positive Trust Anchors: Aug 13 07:08:52.413425 systemd-resolved[1625]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:08:52.413457 systemd-resolved[1625]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:08:52.416502 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 07:08:52.417924 lvm[1660]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:08:52.418326 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 07:08:52.426220 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:08:52.426417 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:08:52.433091 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:08:52.433281 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:08:52.437702 systemd-resolved[1625]: Using system hostname 'ci-4230.2.2-a-6317daa899'. 
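systemd-resolved comes up above with the root DNSSEC trust anchor and the usual negative trust anchors, then derives the system hostname. Its runtime view can be queried with resolvectl; a brief sketch, assuming resolvectl is available and resolved stays running:

import subprocess

# "status" prints per-link DNS configuration, "dns" just the server list.
for verb in ("status", "dns"):
    subprocess.run(["resolvectl", verb], check=True)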
Aug 13 07:08:52.439636 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:08:52.439790 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:08:52.447928 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:08:52.454781 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:08:52.454986 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:08:52.461807 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 07:08:52.472665 systemd[1]: Finished ensure-sysext.service. Aug 13 07:08:52.481752 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:08:52.487927 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:08:52.498882 systemd-networkd[1470]: lo: Link UP Aug 13 07:08:52.498892 systemd-networkd[1470]: lo: Gained carrier Aug 13 07:08:52.502019 systemd-networkd[1470]: Enumeration completed Aug 13 07:08:52.506446 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 07:08:52.507462 systemd-networkd[1470]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:08:52.507471 systemd-networkd[1470]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:08:52.514587 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:08:52.514669 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:08:52.514827 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:08:52.517293 lvm[1680]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:08:52.521624 systemd[1]: Reached target network.target - Network. Aug 13 07:08:52.537478 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 13 07:08:52.546126 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 07:08:52.553849 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 07:08:52.569289 kernel: mlx5_core e36a:00:02.0 enP58218s1: Link up Aug 13 07:08:52.595304 kernel: hv_netvsc 000d3ac2-c27b-000d-3ac2-c27b000d3ac2 eth0: Data path switched to VF: enP58218s1 Aug 13 07:08:52.597537 systemd-networkd[1470]: enP58218s1: Link UP Aug 13 07:08:52.597934 systemd-networkd[1470]: eth0: Link UP Aug 13 07:08:52.597943 systemd-networkd[1470]: eth0: Gained carrier Aug 13 07:08:52.597962 systemd-networkd[1470]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:08:52.604075 systemd-networkd[1470]: enP58218s1: Gained carrier Aug 13 07:08:52.611309 systemd-networkd[1470]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 Aug 13 07:08:52.648313 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 13 07:08:53.020721 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
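The networkd lines above show eth0 being matched by the stock /usr/lib/systemd/network/zz-default.network, acquiring 10.200.20.40/24 via DHCP from the Azure wireserver (168.63.129.16), with the enP58218s1 VF taking over the data path. For illustration only, a unit with the same observable effect for eth0 could look like the sketch below; this is not the zz-default.network file Flatcar ships, and the file name used is hypothetical.

# Illustrative systemd.network unit, emitted as text rather than written to disk.
NETWORK_UNIT = """\
[Match]
Name=eth0

[Network]
DHCP=yes
"""

# Dropping this into /etc/systemd/network would take precedence over the
# lower-priority unit under /usr/lib/systemd/network on the next reload.
print("hypothetical /etc/systemd/network/10-eth0.network:")
print(NETWORK_UNIT)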
Aug 13 07:08:53.028898 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 07:08:54.379452 systemd-networkd[1470]: eth0: Gained IPv6LL Aug 13 07:08:54.380999 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 07:08:54.388970 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 07:08:54.870551 ldconfig[1300]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 07:08:54.890941 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 07:08:54.901505 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 07:08:54.916148 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 07:08:54.922658 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:08:54.928437 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 07:08:54.935007 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 07:08:54.941827 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 07:08:54.947896 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 07:08:54.954887 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 07:08:54.962602 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 07:08:54.962643 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:08:54.968074 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:08:54.977237 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 07:08:54.985013 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 07:08:54.992684 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 07:08:55.000041 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 07:08:55.007275 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 13 07:08:55.015704 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 07:08:55.022443 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 07:08:55.029641 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 07:08:55.035631 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:08:55.040999 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:08:55.046242 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:08:55.046288 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:08:55.056397 systemd[1]: Starting chronyd.service - NTP client/server... Aug 13 07:08:55.066434 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 07:08:55.083466 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Aug 13 07:08:55.098827 (chronyd)[1692]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Aug 13 07:08:55.102800 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 07:08:55.109354 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 07:08:55.118502 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 07:08:55.119585 jq[1699]: false Aug 13 07:08:55.125013 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 07:08:55.124712 chronyd[1702]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Aug 13 07:08:55.125053 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Aug 13 07:08:55.127544 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Aug 13 07:08:55.133662 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Aug 13 07:08:55.135651 KVP[1703]: KVP starting; pid is:1703 Aug 13 07:08:55.138440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:08:55.141288 kernel: hv_utils: KVP IC version 4.0 Aug 13 07:08:55.140535 KVP[1703]: KVP LIC Version: 3.1 Aug 13 07:08:55.150147 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 07:08:55.157501 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 07:08:55.170610 chronyd[1702]: Timezone right/UTC failed leap second check, ignoring Aug 13 07:08:55.170826 chronyd[1702]: Loaded seccomp filter (level 2) Aug 13 07:08:55.173490 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 07:08:55.173797 extend-filesystems[1700]: Found loop4 Aug 13 07:08:55.173797 extend-filesystems[1700]: Found loop5 Aug 13 07:08:55.173797 extend-filesystems[1700]: Found loop6 Aug 13 07:08:55.173797 extend-filesystems[1700]: Found loop7 Aug 13 07:08:55.173797 extend-filesystems[1700]: Found sda Aug 13 07:08:55.173797 extend-filesystems[1700]: Found sda1 Aug 13 07:08:55.173797 extend-filesystems[1700]: Found sda2 Aug 13 07:08:55.173797 extend-filesystems[1700]: Found sda3 Aug 13 07:08:55.173797 extend-filesystems[1700]: Found usr Aug 13 07:08:55.173797 extend-filesystems[1700]: Found sda4 Aug 13 07:08:55.173797 extend-filesystems[1700]: Found sda6 Aug 13 07:08:55.173797 extend-filesystems[1700]: Found sda7 Aug 13 07:08:55.173797 extend-filesystems[1700]: Found sda9 Aug 13 07:08:55.173797 extend-filesystems[1700]: Checking size of /dev/sda9 Aug 13 07:08:55.343503 extend-filesystems[1700]: Old size kept for /dev/sda9 Aug 13 07:08:55.343503 extend-filesystems[1700]: Found sr0 Aug 13 07:08:55.184443 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
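chronyd 4.6.1 starts above with NTS support compiled in and loads its seccomp filter at level 2. Once chronyd.service is up, its synchronisation state can be checked with chronyc; a minimal sketch, assuming chronyc is installed and local command access is permitted:

import subprocess

# "tracking" reports the current reference, offset and stratum;
# "sources" lists the configured time sources.
for verb in ("tracking", "sources"):
    subprocess.run(["chronyc", verb], check=True)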
Aug 13 07:08:55.458179 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1746) Aug 13 07:08:55.305171 dbus-daemon[1695]: [system] SELinux support is enabled Aug 13 07:08:55.458539 coreos-metadata[1694]: Aug 13 07:08:55.405 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 13 07:08:55.458539 coreos-metadata[1694]: Aug 13 07:08:55.407 INFO Fetch successful Aug 13 07:08:55.458539 coreos-metadata[1694]: Aug 13 07:08:55.407 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Aug 13 07:08:55.458539 coreos-metadata[1694]: Aug 13 07:08:55.410 INFO Fetch successful Aug 13 07:08:55.458539 coreos-metadata[1694]: Aug 13 07:08:55.411 INFO Fetching http://168.63.129.16/machine/f6aa7f4f-ccfb-4307-9de4-e91decaacd56/666d437b%2D8797%2D4de1%2D8ed3%2D6f0e2418af73.%5Fci%2D4230.2.2%2Da%2D6317daa899?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Aug 13 07:08:55.458539 coreos-metadata[1694]: Aug 13 07:08:55.444 INFO Fetch successful Aug 13 07:08:55.458539 coreos-metadata[1694]: Aug 13 07:08:55.444 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Aug 13 07:08:55.203461 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 07:08:55.224472 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 07:08:55.246616 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 07:08:55.247150 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 07:08:55.256503 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 07:08:55.278503 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 07:08:55.459596 update_engine[1730]: I20250813 07:08:55.404744 1730 main.cc:92] Flatcar Update Engine starting Aug 13 07:08:55.459596 update_engine[1730]: I20250813 07:08:55.409842 1730 update_check_scheduler.cc:74] Next update check in 7m28s Aug 13 07:08:55.459849 coreos-metadata[1694]: Aug 13 07:08:55.459 INFO Fetch successful Aug 13 07:08:55.302215 systemd[1]: Started chronyd.service - NTP client/server. Aug 13 07:08:55.459942 jq[1731]: true Aug 13 07:08:55.321538 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 07:08:55.340746 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 07:08:55.340966 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 07:08:55.341248 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 07:08:55.341419 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 07:08:55.451908 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 07:08:55.452105 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 07:08:55.463088 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 07:08:55.475426 systemd-logind[1723]: New seat seat0. Aug 13 07:08:55.477939 systemd-logind[1723]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 07:08:55.481625 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 07:08:55.504822 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
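The coreos-metadata fetches above hit the Azure wireserver (168.63.129.16) for the goalstate and shared config, then the Instance Metadata Service at 169.254.169.254 for the VM size. IMDS only answers requests that carry a "Metadata: true" header; a small sketch of the same query with the standard library, using the URL exactly as it appears in the log (reachable only from inside the VM):

import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")

# The Metadata header is mandatory for IMDS; requests without it are rejected.
req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read().decode())  # plain-text VM size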
Aug 13 07:08:55.505014 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 07:08:55.530481 (ntainerd)[1801]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 07:08:55.547478 dbus-daemon[1695]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 13 07:08:55.560513 jq[1800]: true Aug 13 07:08:55.570818 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 07:08:55.598238 tar[1797]: linux-arm64/LICENSE Aug 13 07:08:55.599864 tar[1797]: linux-arm64/helm Aug 13 07:08:55.601457 systemd[1]: Started update-engine.service - Update Engine. Aug 13 07:08:55.609861 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 07:08:55.610108 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 07:08:55.610245 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 07:08:55.619748 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 07:08:55.619867 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 07:08:55.638574 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 07:08:55.702097 bash[1833]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:08:55.706356 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 07:08:55.716609 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 13 07:08:55.851420 locksmithd[1835]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 07:08:56.063378 containerd[1801]: time="2025-08-13T07:08:56.062234880Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Aug 13 07:08:56.148820 containerd[1801]: time="2025-08-13T07:08:56.148322080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:08:56.155286 containerd[1801]: time="2025-08-13T07:08:56.154551280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:08:56.155286 containerd[1801]: time="2025-08-13T07:08:56.154622440Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 07:08:56.155286 containerd[1801]: time="2025-08-13T07:08:56.154641600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 07:08:56.155629 containerd[1801]: time="2025-08-13T07:08:56.155607560Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 07:08:56.155727 containerd[1801]: time="2025-08-13T07:08:56.155712960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Aug 13 07:08:56.155978 containerd[1801]: time="2025-08-13T07:08:56.155957600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:08:56.156227 containerd[1801]: time="2025-08-13T07:08:56.156149920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:08:56.156614 containerd[1801]: time="2025-08-13T07:08:56.156557680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:08:56.156614 containerd[1801]: time="2025-08-13T07:08:56.156577600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 07:08:56.156614 containerd[1801]: time="2025-08-13T07:08:56.156593920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:08:56.156835 containerd[1801]: time="2025-08-13T07:08:56.156752840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 07:08:56.157525 containerd[1801]: time="2025-08-13T07:08:56.157330520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:08:56.158814 containerd[1801]: time="2025-08-13T07:08:56.158698400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:08:56.159375 containerd[1801]: time="2025-08-13T07:08:56.159342920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:08:56.159706 containerd[1801]: time="2025-08-13T07:08:56.159633080Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 07:08:56.159948 containerd[1801]: time="2025-08-13T07:08:56.159830760Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 07:08:56.159948 containerd[1801]: time="2025-08-13T07:08:56.159900880Z" level=info msg="metadata content store policy set" policy=shared Aug 13 07:08:56.193293 containerd[1801]: time="2025-08-13T07:08:56.192351680Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 07:08:56.193293 containerd[1801]: time="2025-08-13T07:08:56.192438760Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 07:08:56.193293 containerd[1801]: time="2025-08-13T07:08:56.192455480Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 07:08:56.193293 containerd[1801]: time="2025-08-13T07:08:56.192472720Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 07:08:56.193293 containerd[1801]: time="2025-08-13T07:08:56.192490640Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Aug 13 07:08:56.193293 containerd[1801]: time="2025-08-13T07:08:56.192661680Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 07:08:56.193293 containerd[1801]: time="2025-08-13T07:08:56.192888640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 07:08:56.193293 containerd[1801]: time="2025-08-13T07:08:56.192977800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 07:08:56.193293 containerd[1801]: time="2025-08-13T07:08:56.192994520Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 07:08:56.193293 containerd[1801]: time="2025-08-13T07:08:56.193009080Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 07:08:56.193293 containerd[1801]: time="2025-08-13T07:08:56.193024960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 07:08:56.193293 containerd[1801]: time="2025-08-13T07:08:56.193038280Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 07:08:56.193293 containerd[1801]: time="2025-08-13T07:08:56.193050280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 07:08:56.193293 containerd[1801]: time="2025-08-13T07:08:56.193063480Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 07:08:56.193618 containerd[1801]: time="2025-08-13T07:08:56.193079040Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 07:08:56.193618 containerd[1801]: time="2025-08-13T07:08:56.193092080Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 07:08:56.193618 containerd[1801]: time="2025-08-13T07:08:56.193105160Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 07:08:56.193618 containerd[1801]: time="2025-08-13T07:08:56.193116640Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 07:08:56.193618 containerd[1801]: time="2025-08-13T07:08:56.193142240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 07:08:56.193618 containerd[1801]: time="2025-08-13T07:08:56.193156160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 07:08:56.193618 containerd[1801]: time="2025-08-13T07:08:56.193169960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 07:08:56.193618 containerd[1801]: time="2025-08-13T07:08:56.193182920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 07:08:56.193618 containerd[1801]: time="2025-08-13T07:08:56.193194640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 07:08:56.193618 containerd[1801]: time="2025-08-13T07:08:56.193207200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Aug 13 07:08:56.193618 containerd[1801]: time="2025-08-13T07:08:56.193217800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 07:08:56.193618 containerd[1801]: time="2025-08-13T07:08:56.193230080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 07:08:56.193618 containerd[1801]: time="2025-08-13T07:08:56.193245040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 07:08:56.194538 containerd[1801]: time="2025-08-13T07:08:56.194372400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 07:08:56.194538 containerd[1801]: time="2025-08-13T07:08:56.194411000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 07:08:56.194538 containerd[1801]: time="2025-08-13T07:08:56.194476280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 07:08:56.194538 containerd[1801]: time="2025-08-13T07:08:56.194496080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 07:08:56.194538 containerd[1801]: time="2025-08-13T07:08:56.194511880Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 07:08:56.194538 containerd[1801]: time="2025-08-13T07:08:56.195305600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 07:08:56.194538 containerd[1801]: time="2025-08-13T07:08:56.195331040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 07:08:56.194538 containerd[1801]: time="2025-08-13T07:08:56.195343200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 07:08:56.194538 containerd[1801]: time="2025-08-13T07:08:56.195421200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 07:08:56.195859 containerd[1801]: time="2025-08-13T07:08:56.195715120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 07:08:56.195938 containerd[1801]: time="2025-08-13T07:08:56.195922640Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 07:08:56.196121 containerd[1801]: time="2025-08-13T07:08:56.196002160Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 07:08:56.196121 containerd[1801]: time="2025-08-13T07:08:56.196018080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 07:08:56.196121 containerd[1801]: time="2025-08-13T07:08:56.196032440Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 07:08:56.196121 containerd[1801]: time="2025-08-13T07:08:56.196042880Z" level=info msg="NRI interface is disabled by configuration." Aug 13 07:08:56.196878 containerd[1801]: time="2025-08-13T07:08:56.196053640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 07:08:56.198146 containerd[1801]: time="2025-08-13T07:08:56.197874160Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 07:08:56.198146 containerd[1801]: time="2025-08-13T07:08:56.197947840Z" level=info msg="Connect containerd service" Aug 13 07:08:56.198146 containerd[1801]: time="2025-08-13T07:08:56.198006120Z" level=info msg="using legacy CRI server" Aug 13 07:08:56.198146 containerd[1801]: time="2025-08-13T07:08:56.198013840Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 07:08:56.199664 containerd[1801]: time="2025-08-13T07:08:56.198411520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 07:08:56.200564 containerd[1801]: time="2025-08-13T07:08:56.200525240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:08:56.201385 
containerd[1801]: time="2025-08-13T07:08:56.201353520Z" level=info msg="Start subscribing containerd event" Aug 13 07:08:56.201867 containerd[1801]: time="2025-08-13T07:08:56.201849360Z" level=info msg="Start recovering state" Aug 13 07:08:56.202101 containerd[1801]: time="2025-08-13T07:08:56.201996040Z" level=info msg="Start event monitor" Aug 13 07:08:56.202230 containerd[1801]: time="2025-08-13T07:08:56.202214640Z" level=info msg="Start snapshots syncer" Aug 13 07:08:56.202542 containerd[1801]: time="2025-08-13T07:08:56.202524160Z" level=info msg="Start cni network conf syncer for default" Aug 13 07:08:56.202659 containerd[1801]: time="2025-08-13T07:08:56.202644320Z" level=info msg="Start streaming server" Aug 13 07:08:56.203725 containerd[1801]: time="2025-08-13T07:08:56.203446320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 07:08:56.203725 containerd[1801]: time="2025-08-13T07:08:56.203673400Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 07:08:56.203834 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 07:08:56.210914 containerd[1801]: time="2025-08-13T07:08:56.210142280Z" level=info msg="containerd successfully booted in 0.149058s" Aug 13 07:08:56.451338 tar[1797]: linux-arm64/README.md Aug 13 07:08:56.464215 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 07:08:56.513472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:08:56.538702 (kubelet)[1857]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:08:56.909839 sshd_keygen[1732]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 07:08:56.934205 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 07:08:56.947590 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 07:08:56.957664 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Aug 13 07:08:56.967314 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 07:08:56.967589 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 07:08:56.987043 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 07:08:57.003439 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Aug 13 07:08:57.009283 kubelet[1857]: E0813 07:08:57.007092 1857 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:08:57.012212 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:08:57.012390 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:08:57.013375 systemd[1]: kubelet.service: Consumed 729ms CPU time, 259.6M memory peak. Aug 13 07:08:57.014030 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 07:08:57.029298 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 07:08:57.035920 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Aug 13 07:08:57.042580 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 07:08:57.047803 systemd[1]: Reached target multi-user.target - Multi-User System. 
Aug 13 07:08:57.053405 systemd[1]: Startup finished in 699ms (kernel) + 11.678s (initrd) + 11.318s (userspace) = 23.697s. Aug 13 07:08:57.289272 login[1887]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Aug 13 07:08:57.289900 login[1886]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:08:57.312825 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 07:08:57.324578 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 07:08:57.326769 systemd-logind[1723]: New session 2 of user core. Aug 13 07:08:57.338762 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 07:08:57.346565 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 07:08:57.349550 (systemd)[1894]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 07:08:57.351919 systemd-logind[1723]: New session c1 of user core. Aug 13 07:08:57.595411 systemd[1894]: Queued start job for default target default.target. Aug 13 07:08:57.599227 systemd[1894]: Created slice app.slice - User Application Slice. Aug 13 07:08:57.599253 systemd[1894]: Reached target paths.target - Paths. Aug 13 07:08:57.599321 systemd[1894]: Reached target timers.target - Timers. Aug 13 07:08:57.602437 systemd[1894]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 07:08:57.612339 systemd[1894]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 07:08:57.612548 systemd[1894]: Reached target sockets.target - Sockets. Aug 13 07:08:57.612660 systemd[1894]: Reached target basic.target - Basic System. Aug 13 07:08:57.612772 systemd[1894]: Reached target default.target - Main User Target. Aug 13 07:08:57.612793 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 07:08:57.613052 systemd[1894]: Startup finished in 255ms. Aug 13 07:08:57.614054 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 07:08:58.290729 login[1887]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:08:58.295757 systemd-logind[1723]: New session 1 of user core. Aug 13 07:08:58.305451 systemd[1]: Started session-1.scope - Session 1 of User core. 
Aug 13 07:08:58.506611 waagent[1882]: 2025-08-13T07:08:58.506513Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Aug 13 07:08:58.512305 waagent[1882]: 2025-08-13T07:08:58.512219Z INFO Daemon Daemon OS: flatcar 4230.2.2 Aug 13 07:08:58.516837 waagent[1882]: 2025-08-13T07:08:58.516778Z INFO Daemon Daemon Python: 3.11.11 Aug 13 07:08:58.521268 waagent[1882]: 2025-08-13T07:08:58.521053Z INFO Daemon Daemon Run daemon Aug 13 07:08:58.525345 waagent[1882]: 2025-08-13T07:08:58.525292Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.2.2' Aug 13 07:08:58.533875 waagent[1882]: 2025-08-13T07:08:58.533809Z INFO Daemon Daemon Using waagent for provisioning Aug 13 07:08:58.539016 waagent[1882]: 2025-08-13T07:08:58.538964Z INFO Daemon Daemon Activate resource disk Aug 13 07:08:58.543776 waagent[1882]: 2025-08-13T07:08:58.543686Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Aug 13 07:08:58.556052 waagent[1882]: 2025-08-13T07:08:58.555985Z INFO Daemon Daemon Found device: None Aug 13 07:08:58.560810 waagent[1882]: 2025-08-13T07:08:58.560756Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Aug 13 07:08:58.568826 waagent[1882]: 2025-08-13T07:08:58.568771Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Aug 13 07:08:58.580159 waagent[1882]: 2025-08-13T07:08:58.580110Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 07:08:58.585679 waagent[1882]: 2025-08-13T07:08:58.585626Z INFO Daemon Daemon Running default provisioning handler Aug 13 07:08:58.597292 waagent[1882]: 2025-08-13T07:08:58.596937Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Aug 13 07:08:58.610408 waagent[1882]: 2025-08-13T07:08:58.610334Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Aug 13 07:08:58.619943 waagent[1882]: 2025-08-13T07:08:58.619881Z INFO Daemon Daemon cloud-init is enabled: False Aug 13 07:08:58.624800 waagent[1882]: 2025-08-13T07:08:58.624748Z INFO Daemon Daemon Copying ovf-env.xml Aug 13 07:08:58.693288 waagent[1882]: 2025-08-13T07:08:58.691606Z INFO Daemon Daemon Successfully mounted dvd Aug 13 07:08:58.721165 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Aug 13 07:08:58.726281 waagent[1882]: 2025-08-13T07:08:58.724432Z INFO Daemon Daemon Detect protocol endpoint Aug 13 07:08:58.729432 waagent[1882]: 2025-08-13T07:08:58.729366Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 07:08:58.735154 waagent[1882]: 2025-08-13T07:08:58.735096Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Aug 13 07:08:58.741510 waagent[1882]: 2025-08-13T07:08:58.741451Z INFO Daemon Daemon Test for route to 168.63.129.16 Aug 13 07:08:58.746848 waagent[1882]: 2025-08-13T07:08:58.746790Z INFO Daemon Daemon Route to 168.63.129.16 exists Aug 13 07:08:58.751791 waagent[1882]: 2025-08-13T07:08:58.751738Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Aug 13 07:08:58.782510 waagent[1882]: 2025-08-13T07:08:58.782459Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Aug 13 07:08:58.789140 waagent[1882]: 2025-08-13T07:08:58.789106Z INFO Daemon Daemon Wire protocol version:2012-11-30 Aug 13 07:08:58.794470 waagent[1882]: 2025-08-13T07:08:58.794376Z INFO Daemon Daemon Server preferred version:2015-04-05 Aug 13 07:08:59.138359 waagent[1882]: 2025-08-13T07:08:59.137991Z INFO Daemon Daemon Initializing goal state during protocol detection Aug 13 07:08:59.144676 waagent[1882]: 2025-08-13T07:08:59.144607Z INFO Daemon Daemon Forcing an update of the goal state. Aug 13 07:08:59.153554 waagent[1882]: 2025-08-13T07:08:59.153501Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 13 07:08:59.172662 waagent[1882]: 2025-08-13T07:08:59.172616Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Aug 13 07:08:59.178014 waagent[1882]: 2025-08-13T07:08:59.177967Z INFO Daemon Aug 13 07:08:59.180652 waagent[1882]: 2025-08-13T07:08:59.180599Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 6b9819ff-c456-4a8c-9db8-25fd56aebf09 eTag: 9283947303290706792 source: Fabric] Aug 13 07:08:59.191162 waagent[1882]: 2025-08-13T07:08:59.191116Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Aug 13 07:08:59.197409 waagent[1882]: 2025-08-13T07:08:59.197363Z INFO Daemon Aug 13 07:08:59.200059 waagent[1882]: 2025-08-13T07:08:59.200015Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Aug 13 07:08:59.210477 waagent[1882]: 2025-08-13T07:08:59.210437Z INFO Daemon Daemon Downloading artifacts profile blob Aug 13 07:08:59.379408 waagent[1882]: 2025-08-13T07:08:59.379314Z INFO Daemon Downloaded certificate {'thumbprint': '9D5CD3A456091BFC2CC4D35626F9D5772C302DE5', 'hasPrivateKey': False} Aug 13 07:08:59.388580 waagent[1882]: 2025-08-13T07:08:59.388496Z INFO Daemon Downloaded certificate {'thumbprint': '1DA5BF181F45A40662141BA2CE4E21F780FAFED1', 'hasPrivateKey': True} Aug 13 07:08:59.397911 waagent[1882]: 2025-08-13T07:08:59.397856Z INFO Daemon Fetch goal state completed Aug 13 07:08:59.452504 waagent[1882]: 2025-08-13T07:08:59.452440Z INFO Daemon Daemon Starting provisioning Aug 13 07:08:59.457262 waagent[1882]: 2025-08-13T07:08:59.457194Z INFO Daemon Daemon Handle ovf-env.xml. Aug 13 07:08:59.461775 waagent[1882]: 2025-08-13T07:08:59.461714Z INFO Daemon Daemon Set hostname [ci-4230.2.2-a-6317daa899] Aug 13 07:08:59.470280 waagent[1882]: 2025-08-13T07:08:59.469495Z INFO Daemon Daemon Publish hostname [ci-4230.2.2-a-6317daa899] Aug 13 07:08:59.475996 waagent[1882]: 2025-08-13T07:08:59.475930Z INFO Daemon Daemon Examine /proc/net/route for primary interface Aug 13 07:08:59.483198 waagent[1882]: 2025-08-13T07:08:59.483039Z INFO Daemon Daemon Primary interface is [eth0] Aug 13 07:08:59.496625 systemd-networkd[1470]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:08:59.496634 systemd-networkd[1470]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Aug 13 07:08:59.496662 systemd-networkd[1470]: eth0: DHCP lease lost Aug 13 07:08:59.498686 waagent[1882]: 2025-08-13T07:08:59.498600Z INFO Daemon Daemon Create user account if not exists Aug 13 07:08:59.504077 waagent[1882]: 2025-08-13T07:08:59.504012Z INFO Daemon Daemon User core already exists, skip useradd Aug 13 07:08:59.509814 waagent[1882]: 2025-08-13T07:08:59.509756Z INFO Daemon Daemon Configure sudoer Aug 13 07:08:59.514133 waagent[1882]: 2025-08-13T07:08:59.514063Z INFO Daemon Daemon Configure sshd Aug 13 07:08:59.518470 waagent[1882]: 2025-08-13T07:08:59.518401Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Aug 13 07:08:59.530136 waagent[1882]: 2025-08-13T07:08:59.530070Z INFO Daemon Daemon Deploy ssh public key. Aug 13 07:08:59.549332 systemd-networkd[1470]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 Aug 13 07:09:00.618219 waagent[1882]: 2025-08-13T07:09:00.618164Z INFO Daemon Daemon Provisioning complete Aug 13 07:09:00.637440 waagent[1882]: 2025-08-13T07:09:00.637385Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Aug 13 07:09:00.643564 waagent[1882]: 2025-08-13T07:09:00.643500Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Aug 13 07:09:00.652750 waagent[1882]: 2025-08-13T07:09:00.652691Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Aug 13 07:09:00.792956 waagent[1950]: 2025-08-13T07:09:00.792875Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Aug 13 07:09:00.793839 waagent[1950]: 2025-08-13T07:09:00.793429Z INFO ExtHandler ExtHandler OS: flatcar 4230.2.2 Aug 13 07:09:00.793839 waagent[1950]: 2025-08-13T07:09:00.793510Z INFO ExtHandler ExtHandler Python: 3.11.11 Aug 13 07:09:01.098343 waagent[1950]: 2025-08-13T07:09:01.098082Z INFO ExtHandler ExtHandler Distro: flatcar-4230.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Aug 13 07:09:01.098447 waagent[1950]: 2025-08-13T07:09:01.098359Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 07:09:01.098472 waagent[1950]: 2025-08-13T07:09:01.098432Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 07:09:01.106983 waagent[1950]: 2025-08-13T07:09:01.106908Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 13 07:09:01.113121 waagent[1950]: 2025-08-13T07:09:01.113073Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Aug 13 07:09:01.113690 waagent[1950]: 2025-08-13T07:09:01.113646Z INFO ExtHandler Aug 13 07:09:01.113767 waagent[1950]: 2025-08-13T07:09:01.113737Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d65ee715-a18b-4ca8-9b55-bb5e736c595a eTag: 9283947303290706792 source: Fabric] Aug 13 07:09:01.114063 waagent[1950]: 2025-08-13T07:09:01.114024Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Aug 13 07:09:01.123387 waagent[1950]: 2025-08-13T07:09:01.123308Z INFO ExtHandler Aug 13 07:09:01.123481 waagent[1950]: 2025-08-13T07:09:01.123449Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Aug 13 07:09:01.127822 waagent[1950]: 2025-08-13T07:09:01.127785Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Aug 13 07:09:01.475915 waagent[1950]: 2025-08-13T07:09:01.475771Z INFO ExtHandler Downloaded certificate {'thumbprint': '9D5CD3A456091BFC2CC4D35626F9D5772C302DE5', 'hasPrivateKey': False} Aug 13 07:09:01.476387 waagent[1950]: 2025-08-13T07:09:01.476339Z INFO ExtHandler Downloaded certificate {'thumbprint': '1DA5BF181F45A40662141BA2CE4E21F780FAFED1', 'hasPrivateKey': True} Aug 13 07:09:01.476811 waagent[1950]: 2025-08-13T07:09:01.476768Z INFO ExtHandler Fetch goal state completed Aug 13 07:09:01.492695 waagent[1950]: 2025-08-13T07:09:01.492636Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1950 Aug 13 07:09:01.492849 waagent[1950]: 2025-08-13T07:09:01.492814Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Aug 13 07:09:01.494519 waagent[1950]: 2025-08-13T07:09:01.494473Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.2.2', '', 'Flatcar Container Linux by Kinvolk'] Aug 13 07:09:01.494904 waagent[1950]: 2025-08-13T07:09:01.494867Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Aug 13 07:09:01.565509 waagent[1950]: 2025-08-13T07:09:01.565463Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Aug 13 07:09:01.565728 waagent[1950]: 2025-08-13T07:09:01.565687Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Aug 13 07:09:01.572438 waagent[1950]: 2025-08-13T07:09:01.572390Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Aug 13 07:09:01.579098 systemd[1]: Reload requested from client PID 1967 ('systemctl') (unit waagent.service)... Aug 13 07:09:01.579393 systemd[1]: Reloading... Aug 13 07:09:01.665321 zram_generator::config[2009]: No configuration found. Aug 13 07:09:01.773484 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:09:01.872808 systemd[1]: Reloading finished in 293 ms. Aug 13 07:09:01.894284 waagent[1950]: 2025-08-13T07:09:01.888635Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Aug 13 07:09:01.895371 systemd[1]: Reload requested from client PID 2060 ('systemctl') (unit waagent.service)... Aug 13 07:09:01.895574 systemd[1]: Reloading... Aug 13 07:09:01.991299 zram_generator::config[2102]: No configuration found. Aug 13 07:09:02.098298 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:09:02.198112 systemd[1]: Reloading finished in 302 ms. 
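The certificate entries above record only a thumbprint for each downloaded certificate; conventionally this is the uppercase SHA-1 digest of the DER-encoded certificate (the logged values are 40 hex characters, which matches SHA-1). A short sketch that reproduces such a thumbprint from a PEM file; the file path is hypothetical and not taken from this log.

```python
# Minimal sketch: compute a certificate thumbprint like the ones logged above,
# assuming thumbprint = uppercase SHA-1 of the DER-encoded certificate.
import hashlib
import ssl

pem_path = "/var/lib/waagent/example.crt"  # hypothetical path, for illustration only
der = ssl.PEM_cert_to_DER_cert(open(pem_path).read())
print(hashlib.sha1(der).hexdigest().upper())
```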
Aug 13 07:09:02.217483 waagent[1950]: 2025-08-13T07:09:02.217389Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Aug 13 07:09:02.217618 waagent[1950]: 2025-08-13T07:09:02.217578Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Aug 13 07:09:02.413526 waagent[1950]: 2025-08-13T07:09:02.413382Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Aug 13 07:09:02.414095 waagent[1950]: 2025-08-13T07:09:02.414018Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Aug 13 07:09:02.414935 waagent[1950]: 2025-08-13T07:09:02.414847Z INFO ExtHandler ExtHandler Starting env monitor service. Aug 13 07:09:02.415353 waagent[1950]: 2025-08-13T07:09:02.415233Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Aug 13 07:09:02.416322 waagent[1950]: 2025-08-13T07:09:02.415586Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 07:09:02.416322 waagent[1950]: 2025-08-13T07:09:02.415671Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 07:09:02.416322 waagent[1950]: 2025-08-13T07:09:02.415863Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Aug 13 07:09:02.416322 waagent[1950]: 2025-08-13T07:09:02.416029Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Aug 13 07:09:02.416322 waagent[1950]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Aug 13 07:09:02.416322 waagent[1950]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Aug 13 07:09:02.416322 waagent[1950]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Aug 13 07:09:02.416322 waagent[1950]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Aug 13 07:09:02.416322 waagent[1950]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 07:09:02.416322 waagent[1950]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 07:09:02.416762 waagent[1950]: 2025-08-13T07:09:02.416663Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Aug 13 07:09:02.416828 waagent[1950]: 2025-08-13T07:09:02.416749Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Aug 13 07:09:02.416919 waagent[1950]: 2025-08-13T07:09:02.416860Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 07:09:02.417337 waagent[1950]: 2025-08-13T07:09:02.417242Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 07:09:02.417536 waagent[1950]: 2025-08-13T07:09:02.417491Z INFO EnvHandler ExtHandler Configure routes Aug 13 07:09:02.417606 waagent[1950]: 2025-08-13T07:09:02.417576Z INFO EnvHandler ExtHandler Gateway:None Aug 13 07:09:02.417654 waagent[1950]: 2025-08-13T07:09:02.417629Z INFO EnvHandler ExtHandler Routes:None Aug 13 07:09:02.418230 waagent[1950]: 2025-08-13T07:09:02.418171Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Aug 13 07:09:02.418301 waagent[1950]: 2025-08-13T07:09:02.418230Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
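The MonitorHandler dump above echoes /proc/net/route verbatim, where destination, gateway, and mask are little-endian hexadecimal IPv4 addresses. A short sketch decoding the rows shown (0114C80A is 10.200.20.1, 10813FA8 is 168.63.129.16, FEA9FEA9 is 169.254.169.254):

```python
# Minimal sketch: decode the little-endian hex addresses from the /proc/net/route
# dump logged by waagent's MonitorHandler above.
import socket
import struct

def decode(hex_addr: str) -> str:
    """Convert e.g. '0114C80A' to dotted-quad '10.200.20.1'."""
    return socket.inet_ntoa(struct.pack("<I", int(hex_addr, 16)))

for dest, gw in [("00000000", "0114C80A"),   # default route via 10.200.20.1
                 ("10813FA8", "0114C80A"),   # 168.63.129.16 (wireserver)
                 ("FEA9FEA9", "0114C80A")]:  # 169.254.169.254 (IMDS)
    print(f"dest={decode(dest):<15} gateway={decode(gw)}")
```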
Aug 13 07:09:02.418584 waagent[1950]: 2025-08-13T07:09:02.418514Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Aug 13 07:09:02.424348 waagent[1950]: 2025-08-13T07:09:02.424251Z INFO ExtHandler ExtHandler Aug 13 07:09:02.424445 waagent[1950]: 2025-08-13T07:09:02.424409Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 748ac967-dbb8-44ec-bb7c-9052560c65ac correlation 3841c373-f661-4209-bcbd-c7f692729781 created: 2025-08-13T07:07:53.347382Z] Aug 13 07:09:02.425571 waagent[1950]: 2025-08-13T07:09:02.425488Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Aug 13 07:09:02.427770 waagent[1950]: 2025-08-13T07:09:02.426911Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Aug 13 07:09:02.463351 waagent[1950]: 2025-08-13T07:09:02.463282Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 0209E4A5-1C1C-4646-8436-12225F52CBB4;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Aug 13 07:09:02.472808 waagent[1950]: 2025-08-13T07:09:02.472721Z INFO MonitorHandler ExtHandler Network interfaces: Aug 13 07:09:02.472808 waagent[1950]: Executing ['ip', '-a', '-o', 'link']: Aug 13 07:09:02.472808 waagent[1950]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Aug 13 07:09:02.472808 waagent[1950]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c2:c2:7b brd ff:ff:ff:ff:ff:ff Aug 13 07:09:02.472808 waagent[1950]: 3: enP58218s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c2:c2:7b brd ff:ff:ff:ff:ff:ff\ altname enP58218p0s2 Aug 13 07:09:02.472808 waagent[1950]: Executing ['ip', '-4', '-a', '-o', 'address']: Aug 13 07:09:02.472808 waagent[1950]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Aug 13 07:09:02.472808 waagent[1950]: 2: eth0 inet 10.200.20.40/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Aug 13 07:09:02.472808 waagent[1950]: Executing ['ip', '-6', '-a', '-o', 'address']: Aug 13 07:09:02.472808 waagent[1950]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Aug 13 07:09:02.472808 waagent[1950]: 2: eth0 inet6 fe80::20d:3aff:fec2:c27b/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Aug 13 07:09:02.497332 waagent[1950]: 2025-08-13T07:09:02.497202Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Aug 13 07:09:02.497332 waagent[1950]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 07:09:02.497332 waagent[1950]: pkts bytes target prot opt in out source destination Aug 13 07:09:02.497332 waagent[1950]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Aug 13 07:09:02.497332 waagent[1950]: pkts bytes target prot opt in out source destination Aug 13 07:09:02.497332 waagent[1950]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 07:09:02.497332 waagent[1950]: pkts bytes target prot opt in out source destination Aug 13 07:09:02.497332 waagent[1950]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Aug 13 07:09:02.497332 waagent[1950]: 4 216 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Aug 13 07:09:02.497332 waagent[1950]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Aug 13 07:09:02.500914 waagent[1950]: 2025-08-13T07:09:02.500864Z INFO EnvHandler ExtHandler Current Firewall rules: Aug 13 07:09:02.500914 waagent[1950]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 07:09:02.500914 waagent[1950]: pkts bytes target prot opt in out source destination Aug 13 07:09:02.500914 waagent[1950]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Aug 13 07:09:02.500914 waagent[1950]: pkts bytes target prot opt in out source destination Aug 13 07:09:02.500914 waagent[1950]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 07:09:02.500914 waagent[1950]: pkts bytes target prot opt in out source destination Aug 13 07:09:02.500914 waagent[1950]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Aug 13 07:09:02.500914 waagent[1950]: 10 1102 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Aug 13 07:09:02.500914 waagent[1950]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Aug 13 07:09:02.501489 waagent[1950]: 2025-08-13T07:09:02.501452Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Aug 13 07:09:07.263175 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 07:09:07.272530 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:09:07.395704 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:09:07.404697 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:09:07.501941 kubelet[2192]: E0813 07:09:07.501875 2192 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:09:07.505538 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:09:07.505703 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:09:07.506191 systemd[1]: kubelet.service: Consumed 145ms CPU time, 108.5M memory peak. Aug 13 07:09:10.210354 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 07:09:10.217521 systemd[1]: Started sshd@0-10.200.20.40:22-10.200.16.10:50960.service - OpenSSH per-connection server daemon (10.200.16.10:50960). 
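The kubelet failures and scheduled restarts recorded above all trace back to the missing /var/lib/kubelet/config.yaml named in the error text; on a node provisioned like this one, that file is typically written later (for example by kubeadm), so the early exits are expected rather than fatal. That interpretation is an assumption about this provisioning flow; the trivial sketch below only restates the file check, for illustration.

```python
# Minimal sketch, not kubelet code: check for the config file whose absence
# causes the repeated "failed to load kubelet config file" errors above.
from pathlib import Path

cfg = Path("/var/lib/kubelet/config.yaml")
if cfg.is_file():
    print(f"{cfg} present ({cfg.stat().st_size} bytes); kubelet can load its config")
else:
    print(f"{cfg} missing; kubelet will keep exiting until it is written")
```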
Aug 13 07:09:10.781286 sshd[2200]: Accepted publickey for core from 10.200.16.10 port 50960 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw Aug 13 07:09:10.782607 sshd-session[2200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:10.786966 systemd-logind[1723]: New session 3 of user core. Aug 13 07:09:10.794504 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 07:09:11.209737 systemd[1]: Started sshd@1-10.200.20.40:22-10.200.16.10:50962.service - OpenSSH per-connection server daemon (10.200.16.10:50962). Aug 13 07:09:11.671240 sshd[2205]: Accepted publickey for core from 10.200.16.10 port 50962 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw Aug 13 07:09:11.673831 sshd-session[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:11.677894 systemd-logind[1723]: New session 4 of user core. Aug 13 07:09:11.683422 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 07:09:12.006988 sshd[2207]: Connection closed by 10.200.16.10 port 50962 Aug 13 07:09:12.007431 sshd-session[2205]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:12.011633 systemd[1]: sshd@1-10.200.20.40:22-10.200.16.10:50962.service: Deactivated successfully. Aug 13 07:09:12.013606 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 07:09:12.014558 systemd-logind[1723]: Session 4 logged out. Waiting for processes to exit. Aug 13 07:09:12.015708 systemd-logind[1723]: Removed session 4. Aug 13 07:09:12.088744 systemd[1]: Started sshd@2-10.200.20.40:22-10.200.16.10:50972.service - OpenSSH per-connection server daemon (10.200.16.10:50972). Aug 13 07:09:12.544413 sshd[2213]: Accepted publickey for core from 10.200.16.10 port 50972 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw Aug 13 07:09:12.545891 sshd-session[2213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:12.552311 systemd-logind[1723]: New session 5 of user core. Aug 13 07:09:12.557449 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 07:09:12.878590 sshd[2215]: Connection closed by 10.200.16.10 port 50972 Aug 13 07:09:12.878426 sshd-session[2213]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:12.881335 systemd[1]: sshd@2-10.200.20.40:22-10.200.16.10:50972.service: Deactivated successfully. Aug 13 07:09:12.883131 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 07:09:12.885435 systemd-logind[1723]: Session 5 logged out. Waiting for processes to exit. Aug 13 07:09:12.886297 systemd-logind[1723]: Removed session 5. Aug 13 07:09:12.973586 systemd[1]: Started sshd@3-10.200.20.40:22-10.200.16.10:50988.service - OpenSSH per-connection server daemon (10.200.16.10:50988). Aug 13 07:09:13.467977 sshd[2221]: Accepted publickey for core from 10.200.16.10 port 50988 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw Aug 13 07:09:13.469339 sshd-session[2221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:13.475340 systemd-logind[1723]: New session 6 of user core. Aug 13 07:09:13.482454 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 07:09:13.817936 sshd[2223]: Connection closed by 10.200.16.10 port 50988 Aug 13 07:09:13.818720 sshd-session[2221]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:13.822145 systemd[1]: sshd@3-10.200.20.40:22-10.200.16.10:50988.service: Deactivated successfully. 
Aug 13 07:09:13.823974 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 07:09:13.824710 systemd-logind[1723]: Session 6 logged out. Waiting for processes to exit. Aug 13 07:09:13.825552 systemd-logind[1723]: Removed session 6. Aug 13 07:09:13.913611 systemd[1]: Started sshd@4-10.200.20.40:22-10.200.16.10:50996.service - OpenSSH per-connection server daemon (10.200.16.10:50996). Aug 13 07:09:14.404028 sshd[2229]: Accepted publickey for core from 10.200.16.10 port 50996 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw Aug 13 07:09:14.405352 sshd-session[2229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:14.409439 systemd-logind[1723]: New session 7 of user core. Aug 13 07:09:14.416400 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 07:09:14.828702 sudo[2232]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 07:09:14.829024 sudo[2232]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:09:14.844347 sudo[2232]: pam_unix(sudo:session): session closed for user root Aug 13 07:09:14.928639 sshd[2231]: Connection closed by 10.200.16.10 port 50996 Aug 13 07:09:14.927768 sshd-session[2229]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:14.931594 systemd[1]: sshd@4-10.200.20.40:22-10.200.16.10:50996.service: Deactivated successfully. Aug 13 07:09:14.933664 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 07:09:14.934950 systemd-logind[1723]: Session 7 logged out. Waiting for processes to exit. Aug 13 07:09:14.936039 systemd-logind[1723]: Removed session 7. Aug 13 07:09:15.016347 systemd[1]: Started sshd@5-10.200.20.40:22-10.200.16.10:51006.service - OpenSSH per-connection server daemon (10.200.16.10:51006). Aug 13 07:09:15.519455 sshd[2238]: Accepted publickey for core from 10.200.16.10 port 51006 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw Aug 13 07:09:15.520880 sshd-session[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:15.525188 systemd-logind[1723]: New session 8 of user core. Aug 13 07:09:15.533397 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 07:09:15.796014 sudo[2242]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 07:09:15.796347 sudo[2242]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:09:15.799727 sudo[2242]: pam_unix(sudo:session): session closed for user root Aug 13 07:09:15.804850 sudo[2241]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 07:09:15.805126 sudo[2241]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:09:15.823778 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 07:09:15.847615 augenrules[2264]: No rules Aug 13 07:09:15.849147 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 07:09:15.849599 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 07:09:15.852472 sudo[2241]: pam_unix(sudo:session): session closed for user root Aug 13 07:09:15.929870 sshd[2240]: Connection closed by 10.200.16.10 port 51006 Aug 13 07:09:15.930249 sshd-session[2238]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:15.934654 systemd-logind[1723]: Session 8 logged out. Waiting for processes to exit. 
Aug 13 07:09:15.935700 systemd[1]: sshd@5-10.200.20.40:22-10.200.16.10:51006.service: Deactivated successfully. Aug 13 07:09:15.937830 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 07:09:15.939231 systemd-logind[1723]: Removed session 8. Aug 13 07:09:16.025512 systemd[1]: Started sshd@6-10.200.20.40:22-10.200.16.10:51012.service - OpenSSH per-connection server daemon (10.200.16.10:51012). Aug 13 07:09:16.516807 sshd[2273]: Accepted publickey for core from 10.200.16.10 port 51012 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw Aug 13 07:09:16.518133 sshd-session[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:16.523732 systemd-logind[1723]: New session 9 of user core. Aug 13 07:09:16.530524 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 07:09:16.793224 sudo[2276]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 07:09:16.793542 sudo[2276]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:09:17.756103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 07:09:17.767519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:09:17.770565 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 07:09:17.773532 (dockerd)[2293]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 07:09:18.334549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:09:18.339131 (kubelet)[2301]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:09:18.377237 kubelet[2301]: E0813 07:09:18.377192 2301 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:09:18.380555 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:09:18.380714 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:09:18.383354 systemd[1]: kubelet.service: Consumed 130ms CPU time, 104.9M memory peak. Aug 13 07:09:18.881228 dockerd[2293]: time="2025-08-13T07:09:18.881171760Z" level=info msg="Starting up" Aug 13 07:09:18.956689 chronyd[1702]: Selected source PHC0 Aug 13 07:09:19.377984 dockerd[2293]: time="2025-08-13T07:09:19.377861897Z" level=info msg="Loading containers: start." Aug 13 07:09:19.542291 kernel: Initializing XFRM netlink socket Aug 13 07:09:19.635704 systemd-networkd[1470]: docker0: Link UP Aug 13 07:09:19.677360 dockerd[2293]: time="2025-08-13T07:09:19.677310356Z" level=info msg="Loading containers: done." 
Aug 13 07:09:19.709914 dockerd[2293]: time="2025-08-13T07:09:19.709860818Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 07:09:19.710083 dockerd[2293]: time="2025-08-13T07:09:19.709980013Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Aug 13 07:09:19.710154 dockerd[2293]: time="2025-08-13T07:09:19.710108807Z" level=info msg="Daemon has completed initialization" Aug 13 07:09:19.785770 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 07:09:19.786127 dockerd[2293]: time="2025-08-13T07:09:19.785745846Z" level=info msg="API listen on /run/docker.sock" Aug 13 07:09:20.308681 containerd[1801]: time="2025-08-13T07:09:20.308350049Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\"" Aug 13 07:09:21.282887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1376668352.mount: Deactivated successfully. Aug 13 07:09:22.802283 containerd[1801]: time="2025-08-13T07:09:22.802222894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:22.807795 containerd[1801]: time="2025-08-13T07:09:22.807542527Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.3: active requests=0, bytes read=27352094" Aug 13 07:09:22.811212 containerd[1801]: time="2025-08-13T07:09:22.811177923Z" level=info msg="ImageCreate event name:\"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:22.816497 containerd[1801]: time="2025-08-13T07:09:22.816444877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:22.818114 containerd[1801]: time="2025-08-13T07:09:22.817601156Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.3\" with image id \"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\", size \"27348894\" in 2.509209187s" Aug 13 07:09:22.818114 containerd[1801]: time="2025-08-13T07:09:22.817639996Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\"" Aug 13 07:09:22.819116 containerd[1801]: time="2025-08-13T07:09:22.819065354Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\"" Aug 13 07:09:24.185301 containerd[1801]: time="2025-08-13T07:09:24.184422117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:24.191669 containerd[1801]: time="2025-08-13T07:09:24.191586388Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.3: active requests=0, bytes read=23537846" Aug 13 07:09:24.198583 containerd[1801]: time="2025-08-13T07:09:24.198529781Z" level=info msg="ImageCreate event name:\"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:24.209531 containerd[1801]: time="2025-08-13T07:09:24.209460168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:24.211386 containerd[1801]: time="2025-08-13T07:09:24.210583607Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.3\" with image id \"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\", size \"25092764\" in 1.391481453s" Aug 13 07:09:24.211386 containerd[1801]: time="2025-08-13T07:09:24.210620887Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\"" Aug 13 07:09:24.211386 containerd[1801]: time="2025-08-13T07:09:24.211127886Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\"" Aug 13 07:09:25.433439 containerd[1801]: time="2025-08-13T07:09:25.433384012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:25.438453 containerd[1801]: time="2025-08-13T07:09:25.438381966Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.3: active requests=0, bytes read=18293524" Aug 13 07:09:25.443421 containerd[1801]: time="2025-08-13T07:09:25.443352720Z" level=info msg="ImageCreate event name:\"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:25.449979 containerd[1801]: time="2025-08-13T07:09:25.449914233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:25.451408 containerd[1801]: time="2025-08-13T07:09:25.450923512Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.3\" with image id \"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\", size \"19848460\" in 1.239766906s" Aug 13 07:09:25.451408 containerd[1801]: time="2025-08-13T07:09:25.450961632Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\"" Aug 13 07:09:25.451992 containerd[1801]: time="2025-08-13T07:09:25.451711031Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\"" Aug 13 07:09:26.576285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1485640340.mount: Deactivated successfully. 
Aug 13 07:09:26.937694 containerd[1801]: time="2025-08-13T07:09:26.937575816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:26.941382 containerd[1801]: time="2025-08-13T07:09:26.941332331Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=28199472" Aug 13 07:09:26.949609 containerd[1801]: time="2025-08-13T07:09:26.949554162Z" level=info msg="ImageCreate event name:\"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:26.958350 containerd[1801]: time="2025-08-13T07:09:26.958308592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:26.958909 containerd[1801]: time="2025-08-13T07:09:26.958866271Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"28198491\" in 1.50711988s" Aug 13 07:09:26.958909 containerd[1801]: time="2025-08-13T07:09:26.958905951Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\"" Aug 13 07:09:26.959475 containerd[1801]: time="2025-08-13T07:09:26.959438151Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 07:09:27.679748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount973301631.mount: Deactivated successfully. Aug 13 07:09:28.559278 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 07:09:28.567527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:09:28.670748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:09:28.675174 (kubelet)[2589]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:09:28.792286 kubelet[2589]: E0813 07:09:28.792207 2589 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:09:28.794846 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:09:28.795225 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:09:28.795647 systemd[1]: kubelet.service: Consumed 135ms CPU time, 109M memory peak. 
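The kubelet restarts above all fail for the same reason: /var/lib/kubelet/config.yaml does not exist yet (it is normally written by kubeadm during init/join), so the process exits with status 1 and systemd schedules the next restart. A purely illustrative pre-flight check for that condition, with the path taken from the error message above:

```python
import sys
from pathlib import Path

# Illustrative check (not part of Flatcar or the kubelet): report the same
# condition the kubelet logs above, i.e. the config file normally written by
# kubeadm is not present yet, so the service will exit with status 1.
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def check_kubelet_config(path: Path = KUBELET_CONFIG) -> int:
    if path.is_file():
        print(f"{path} present ({path.stat().st_size} bytes)")
        return 0
    print(f"{path}: no such file or directory -- kubelet will fail to start",
          file=sys.stderr)
    return 1

if __name__ == "__main__":
    sys.exit(check_kubelet_config())
```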
Aug 13 07:09:29.989929 containerd[1801]: time="2025-08-13T07:09:29.989858658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:29.994447 containerd[1801]: time="2025-08-13T07:09:29.994160415Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Aug 13 07:09:29.998365 containerd[1801]: time="2025-08-13T07:09:29.998303413Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:30.003827 containerd[1801]: time="2025-08-13T07:09:30.003753330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:30.005774 containerd[1801]: time="2025-08-13T07:09:30.004865329Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 3.045387378s" Aug 13 07:09:30.005774 containerd[1801]: time="2025-08-13T07:09:30.004910689Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Aug 13 07:09:30.005774 containerd[1801]: time="2025-08-13T07:09:30.005544248Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 07:09:30.631910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1120669968.mount: Deactivated successfully. 
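For every image above, containerd logs a "bytes read" figure while the pull is still active and a separate image "size" once it completes; the two are close but not identical (19152117 versus 19148915 for coredns, for example). The sketch below does nothing more than tabulate the pairs recorded so far in this log:

```python
# Pairs taken verbatim from the containerd entries above:
# (image, "bytes read" while pulling, reported image "size").
pulls = [
    ("registry.k8s.io/kube-apiserver:v1.33.3",           27352094, 27348894),
    ("registry.k8s.io/kube-controller-manager:v1.33.3",  23537846, 25092764),
    ("registry.k8s.io/kube-scheduler:v1.33.3",            18293524, 19848460),
    ("registry.k8s.io/kube-proxy:v1.33.3",                28199472, 28198491),
    ("registry.k8s.io/coredns/coredns:v1.12.0",           19152117, 19148915),
]

for image, read, size in pulls:
    print(f"{image:50s} read={read:>10,d} size={size:>10,d} delta={size - read:>+9,d}")

total_read = sum(read for _, read, _ in pulls)
print(f"total downloaded so far: {total_read:,} bytes (~{total_read / 2**20:.0f} MiB)")
```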
Aug 13 07:09:30.671632 containerd[1801]: time="2025-08-13T07:09:30.671577365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:30.675060 containerd[1801]: time="2025-08-13T07:09:30.675001721Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Aug 13 07:09:30.679708 containerd[1801]: time="2025-08-13T07:09:30.679657317Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:30.686186 containerd[1801]: time="2025-08-13T07:09:30.686120190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:30.687047 containerd[1801]: time="2025-08-13T07:09:30.686882549Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 681.307821ms" Aug 13 07:09:30.687047 containerd[1801]: time="2025-08-13T07:09:30.686921029Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 13 07:09:30.688034 containerd[1801]: time="2025-08-13T07:09:30.687997948Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Aug 13 07:09:31.499192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount154044448.mount: Deactivated successfully. Aug 13 07:09:34.911307 containerd[1801]: time="2025-08-13T07:09:34.910455741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:34.916125 containerd[1801]: time="2025-08-13T07:09:34.915824335Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334599" Aug 13 07:09:34.921306 containerd[1801]: time="2025-08-13T07:09:34.921254170Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:34.930919 containerd[1801]: time="2025-08-13T07:09:34.930856759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:34.932215 containerd[1801]: time="2025-08-13T07:09:34.932087718Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 4.24405305s" Aug 13 07:09:34.932215 containerd[1801]: time="2025-08-13T07:09:34.932123918Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Aug 13 07:09:38.809934 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
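The journal timestamps can be used to cross-check the durations containerd reports: the etcd:3.5.21-0 pull is requested at 07:09:30.688 and the "Pulled image" entry lands at 07:09:34.932, roughly the 4.244 s containerd itself logs. A small sketch of that timestamp arithmetic, with the timestamps copied from the entries above and the year assumed from the journal:

```python
from datetime import datetime

# Sketch: measure a pull from the journal timestamps above rather than from
# containerd's own duration field. Timestamps are copied from this log; the
# year is implied by the journal and assumed to be 2025 here.
FMT = "%Y %b %d %H:%M:%S.%f"

start = datetime.strptime("2025 Aug 13 07:09:30.687997", FMT)  # PullImage etcd:3.5.21-0
done = datetime.strptime("2025 Aug 13 07:09:34.932087", FMT)   # Pulled image etcd:3.5.21-0

elapsed = (done - start).total_seconds()
print(f"etcd pull took ~{elapsed:.2f}s (containerd reports 4.24405305s)")
```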
Aug 13 07:09:38.821832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:09:39.029908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:09:39.031705 (kubelet)[2720]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:09:39.070991 kubelet[2720]: E0813 07:09:39.070862 2720 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:09:39.074031 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:09:39.074336 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:09:39.076441 systemd[1]: kubelet.service: Consumed 129ms CPU time, 104.8M memory peak. Aug 13 07:09:39.556345 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Aug 13 07:09:40.375455 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:09:40.375599 systemd[1]: kubelet.service: Consumed 129ms CPU time, 104.8M memory peak. Aug 13 07:09:40.377861 update_engine[1730]: I20250813 07:09:40.377297 1730 update_attempter.cc:509] Updating boot flags... Aug 13 07:09:40.381950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:09:40.420536 systemd[1]: Reload requested from client PID 2741 ('systemctl') (unit session-9.scope)... Aug 13 07:09:40.420551 systemd[1]: Reloading... Aug 13 07:09:40.579335 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (2774) Aug 13 07:09:40.636296 zram_generator::config[2836]: No configuration found. Aug 13 07:09:40.720621 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (2777) Aug 13 07:09:40.779958 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:09:40.887132 systemd[1]: Reloading finished in 466 ms. Aug 13 07:09:40.938429 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:09:40.946697 (kubelet)[2954]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:09:40.991437 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:09:40.999442 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:09:41.001298 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:09:41.001358 systemd[1]: kubelet.service: Consumed 122ms CPU time, 102.4M memory peak. Aug 13 07:09:41.008994 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:09:41.116950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:09:41.121574 (kubelet)[2970]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:09:41.191687 kubelet[2970]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:09:41.191687 kubelet[2970]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 07:09:41.191687 kubelet[2970]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:09:41.192041 kubelet[2970]: I0813 07:09:41.191749 2970 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:09:41.764066 kubelet[2970]: I0813 07:09:41.764019 2970 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 07:09:41.764066 kubelet[2970]: I0813 07:09:41.764056 2970 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:09:41.764368 kubelet[2970]: I0813 07:09:41.764350 2970 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 07:09:41.783073 kubelet[2970]: I0813 07:09:41.783039 2970 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:09:41.784494 kubelet[2970]: E0813 07:09:41.784387 2970 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 13 07:09:41.797195 kubelet[2970]: E0813 07:09:41.797154 2970 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:09:41.797578 kubelet[2970]: I0813 07:09:41.797379 2970 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:09:41.800579 kubelet[2970]: I0813 07:09:41.800556 2970 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:09:41.803125 kubelet[2970]: I0813 07:09:41.802693 2970 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:09:41.803125 kubelet[2970]: I0813 07:09:41.802737 2970 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.2-a-6317daa899","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:09:41.803125 kubelet[2970]: I0813 07:09:41.802917 2970 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:09:41.803125 kubelet[2970]: I0813 07:09:41.802927 2970 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 07:09:41.803125 kubelet[2970]: I0813 07:09:41.803065 2970 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:09:41.806722 kubelet[2970]: I0813 07:09:41.806696 2970 kubelet.go:480] "Attempting to sync node with API server" Aug 13 07:09:41.806772 kubelet[2970]: I0813 07:09:41.806729 2970 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:09:41.806772 kubelet[2970]: I0813 07:09:41.806758 2970 kubelet.go:386] "Adding apiserver pod source" Aug 13 07:09:41.808553 kubelet[2970]: I0813 07:09:41.808143 2970 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:09:41.810089 kubelet[2970]: E0813 07:09:41.810061 2970 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-a-6317daa899&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 07:09:41.810592 kubelet[2970]: E0813 07:09:41.810567 2970 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Aug 13 07:09:41.811003 kubelet[2970]: I0813 07:09:41.810986 2970 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 13 07:09:41.811692 kubelet[2970]: I0813 07:09:41.811675 2970 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 07:09:41.811823 kubelet[2970]: W0813 07:09:41.811811 2970 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 07:09:41.815780 kubelet[2970]: I0813 07:09:41.815167 2970 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 07:09:41.815780 kubelet[2970]: I0813 07:09:41.815216 2970 server.go:1289] "Started kubelet" Aug 13 07:09:41.817859 kubelet[2970]: I0813 07:09:41.816967 2970 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:09:41.817859 kubelet[2970]: I0813 07:09:41.817249 2970 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:09:41.817859 kubelet[2970]: I0813 07:09:41.817334 2970 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:09:41.817859 kubelet[2970]: I0813 07:09:41.817390 2970 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:09:41.819070 kubelet[2970]: I0813 07:09:41.819045 2970 server.go:317] "Adding debug handlers to kubelet server" Aug 13 07:09:41.825068 kubelet[2970]: I0813 07:09:41.825030 2970 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:09:41.826754 kubelet[2970]: I0813 07:09:41.826371 2970 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 07:09:41.826754 kubelet[2970]: E0813 07:09:41.826626 2970 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-6317daa899\" not found" Aug 13 07:09:41.827712 kubelet[2970]: I0813 07:09:41.827692 2970 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 07:09:41.827896 kubelet[2970]: I0813 07:09:41.827886 2970 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:09:41.828093 kubelet[2970]: E0813 07:09:41.826683 2970 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.40:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.2-a-6317daa899.185b41f0b1669793 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.2-a-6317daa899,UID:ci-4230.2.2-a-6317daa899,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.2-a-6317daa899,},FirstTimestamp:2025-08-13 07:09:41.815187347 +0000 UTC m=+0.689812093,LastTimestamp:2025-08-13 07:09:41.815187347 +0000 UTC m=+0.689812093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.2-a-6317daa899,}" Aug 13 07:09:41.829425 kubelet[2970]: E0813 07:09:41.829383 2970 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial 
tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 07:09:41.829521 kubelet[2970]: E0813 07:09:41.829492 2970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-a-6317daa899?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="200ms" Aug 13 07:09:41.829889 kubelet[2970]: I0813 07:09:41.829722 2970 factory.go:223] Registration of the systemd container factory successfully Aug 13 07:09:41.829889 kubelet[2970]: I0813 07:09:41.829808 2970 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:09:41.831710 kubelet[2970]: E0813 07:09:41.831671 2970 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:09:41.831884 kubelet[2970]: I0813 07:09:41.831856 2970 factory.go:223] Registration of the containerd container factory successfully Aug 13 07:09:41.852926 kubelet[2970]: I0813 07:09:41.852445 2970 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 07:09:41.854290 kubelet[2970]: I0813 07:09:41.854238 2970 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 07:09:41.854290 kubelet[2970]: I0813 07:09:41.854280 2970 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 07:09:41.854419 kubelet[2970]: I0813 07:09:41.854304 2970 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 07:09:41.854419 kubelet[2970]: I0813 07:09:41.854312 2970 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 07:09:41.854419 kubelet[2970]: E0813 07:09:41.854354 2970 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:09:41.858451 kubelet[2970]: E0813 07:09:41.858416 2970 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 07:09:41.859157 kubelet[2970]: I0813 07:09:41.859133 2970 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 07:09:41.859157 kubelet[2970]: I0813 07:09:41.859150 2970 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 07:09:41.859350 kubelet[2970]: I0813 07:09:41.859171 2970 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:09:41.865274 kubelet[2970]: I0813 07:09:41.865234 2970 policy_none.go:49] "None policy: Start" Aug 13 07:09:41.865274 kubelet[2970]: I0813 07:09:41.865280 2970 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 07:09:41.865381 kubelet[2970]: I0813 07:09:41.865293 2970 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:09:41.876206 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 07:09:41.887133 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
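The container manager configuration logged above (the nodeConfig JSON) includes the kubelet's hard eviction thresholds. The fragment below copies just that part out of the log entry (GracePeriod and MinReclaim fields omitted) so the signals and limits are easier to read:

```python
import json

# HardEvictionThresholds copied from the nodeConfig JSON logged above
# (GracePeriod/MinReclaim omitted for brevity).
thresholds = json.loads("""
[
  {"Signal": "memory.available",   "Operator": "LessThan", "Value": {"Quantity": "100Mi", "Percentage": 0}},
  {"Signal": "nodefs.available",   "Operator": "LessThan", "Value": {"Quantity": null,    "Percentage": 0.1}},
  {"Signal": "nodefs.inodesFree",  "Operator": "LessThan", "Value": {"Quantity": null,    "Percentage": 0.05}},
  {"Signal": "imagefs.available",  "Operator": "LessThan", "Value": {"Quantity": null,    "Percentage": 0.15}},
  {"Signal": "imagefs.inodesFree", "Operator": "LessThan", "Value": {"Quantity": null,    "Percentage": 0.05}}
]
""")

for t in thresholds:
    v = t["Value"]
    limit = v["Quantity"] if v["Quantity"] is not None else f'{v["Percentage"]:.0%}'
    print(f'{t["Signal"]:<18} {t["Operator"]} {limit}')
# memory.available   LessThan 100Mi
# nodefs.available   LessThan 10%
# ...
```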
Aug 13 07:09:41.890285 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 07:09:41.899238 kubelet[2970]: E0813 07:09:41.899211 2970 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 07:09:41.899996 kubelet[2970]: I0813 07:09:41.899576 2970 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:09:41.899996 kubelet[2970]: I0813 07:09:41.899591 2970 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:09:41.899996 kubelet[2970]: I0813 07:09:41.899907 2970 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:09:41.902055 kubelet[2970]: E0813 07:09:41.901921 2970 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 07:09:41.902055 kubelet[2970]: E0813 07:09:41.901967 2970 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.2-a-6317daa899\" not found" Aug 13 07:09:41.967394 systemd[1]: Created slice kubepods-burstable-pod33a011db1b40bd5371fede3be0828f63.slice - libcontainer container kubepods-burstable-pod33a011db1b40bd5371fede3be0828f63.slice. Aug 13 07:09:41.980158 kubelet[2970]: E0813 07:09:41.979679 2970 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-6317daa899\" not found" node="ci-4230.2.2-a-6317daa899" Aug 13 07:09:41.985029 systemd[1]: Created slice kubepods-burstable-pod4a2fff348b3720a795567bce6cbb98f1.slice - libcontainer container kubepods-burstable-pod4a2fff348b3720a795567bce6cbb98f1.slice. Aug 13 07:09:41.987413 kubelet[2970]: E0813 07:09:41.987384 2970 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-6317daa899\" not found" node="ci-4230.2.2-a-6317daa899" Aug 13 07:09:41.990145 systemd[1]: Created slice kubepods-burstable-pod424e5653b21f275e33fa28e1cbdc4e88.slice - libcontainer container kubepods-burstable-pod424e5653b21f275e33fa28e1cbdc4e88.slice. 
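With the systemd cgroup driver shown in the nodeConfig above, each burstable pod gets its own slice whose name embeds the pod UID, exactly as in the kubepods-burstable-pod33a011db1b40bd5371fede3be0828f63.slice unit created above. A sketch of that naming follows; the dash-to-underscore escaping is part of kubelet's systemd naming convention, although the UIDs in this log happen to contain no dashes.

```python
# Sketch of the slice naming visible above (systemd cgroup driver): the
# burstable QoS slice plus the pod UID. Kubelet's systemd naming escapes "-"
# in the UID to "_"; the UIDs in this log contain none, so names match verbatim.
def burstable_pod_slice(pod_uid: str) -> str:
    return f"kubepods-burstable-pod{pod_uid.replace('-', '_')}.slice"

# UIDs of the three static control-plane pods whose slices were created above.
for uid in ("33a011db1b40bd5371fede3be0828f63",   # kube-apiserver
            "4a2fff348b3720a795567bce6cbb98f1",   # kube-controller-manager
            "424e5653b21f275e33fa28e1cbdc4e88"):  # kube-scheduler
    print(burstable_pod_slice(uid))
```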
Aug 13 07:09:41.991882 kubelet[2970]: E0813 07:09:41.991695 2970 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-6317daa899\" not found" node="ci-4230.2.2-a-6317daa899" Aug 13 07:09:42.002094 kubelet[2970]: I0813 07:09:42.001644 2970 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-a-6317daa899" Aug 13 07:09:42.002094 kubelet[2970]: E0813 07:09:42.002000 2970 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4230.2.2-a-6317daa899" Aug 13 07:09:42.030014 kubelet[2970]: E0813 07:09:42.029896 2970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-a-6317daa899?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="400ms" Aug 13 07:09:42.129292 kubelet[2970]: I0813 07:09:42.129070 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a2fff348b3720a795567bce6cbb98f1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-a-6317daa899\" (UID: \"4a2fff348b3720a795567bce6cbb98f1\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-6317daa899" Aug 13 07:09:42.129292 kubelet[2970]: I0813 07:09:42.129110 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/424e5653b21f275e33fa28e1cbdc4e88-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-a-6317daa899\" (UID: \"424e5653b21f275e33fa28e1cbdc4e88\") " pod="kube-system/kube-scheduler-ci-4230.2.2-a-6317daa899" Aug 13 07:09:42.129292 kubelet[2970]: I0813 07:09:42.129129 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33a011db1b40bd5371fede3be0828f63-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-a-6317daa899\" (UID: \"33a011db1b40bd5371fede3be0828f63\") " pod="kube-system/kube-apiserver-ci-4230.2.2-a-6317daa899" Aug 13 07:09:42.129292 kubelet[2970]: I0813 07:09:42.129147 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33a011db1b40bd5371fede3be0828f63-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-a-6317daa899\" (UID: \"33a011db1b40bd5371fede3be0828f63\") " pod="kube-system/kube-apiserver-ci-4230.2.2-a-6317daa899" Aug 13 07:09:42.129292 kubelet[2970]: I0813 07:09:42.129163 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33a011db1b40bd5371fede3be0828f63-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-a-6317daa899\" (UID: \"33a011db1b40bd5371fede3be0828f63\") " pod="kube-system/kube-apiserver-ci-4230.2.2-a-6317daa899" Aug 13 07:09:42.129481 kubelet[2970]: I0813 07:09:42.129178 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a2fff348b3720a795567bce6cbb98f1-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-a-6317daa899\" (UID: \"4a2fff348b3720a795567bce6cbb98f1\") " 
pod="kube-system/kube-controller-manager-ci-4230.2.2-a-6317daa899" Aug 13 07:09:42.129481 kubelet[2970]: I0813 07:09:42.129194 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4a2fff348b3720a795567bce6cbb98f1-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.2-a-6317daa899\" (UID: \"4a2fff348b3720a795567bce6cbb98f1\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-6317daa899" Aug 13 07:09:42.129481 kubelet[2970]: I0813 07:09:42.129209 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a2fff348b3720a795567bce6cbb98f1-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.2-a-6317daa899\" (UID: \"4a2fff348b3720a795567bce6cbb98f1\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-6317daa899" Aug 13 07:09:42.129481 kubelet[2970]: I0813 07:09:42.129224 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a2fff348b3720a795567bce6cbb98f1-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.2-a-6317daa899\" (UID: \"4a2fff348b3720a795567bce6cbb98f1\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-6317daa899" Aug 13 07:09:42.204303 kubelet[2970]: I0813 07:09:42.203973 2970 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-a-6317daa899" Aug 13 07:09:42.204704 kubelet[2970]: E0813 07:09:42.204455 2970 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4230.2.2-a-6317daa899" Aug 13 07:09:42.281729 containerd[1801]: time="2025-08-13T07:09:42.281611923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-a-6317daa899,Uid:33a011db1b40bd5371fede3be0828f63,Namespace:kube-system,Attempt:0,}" Aug 13 07:09:42.289553 containerd[1801]: time="2025-08-13T07:09:42.289392356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-a-6317daa899,Uid:4a2fff348b3720a795567bce6cbb98f1,Namespace:kube-system,Attempt:0,}" Aug 13 07:09:42.293323 containerd[1801]: time="2025-08-13T07:09:42.293028992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-a-6317daa899,Uid:424e5653b21f275e33fa28e1cbdc4e88,Namespace:kube-system,Attempt:0,}" Aug 13 07:09:42.430674 kubelet[2970]: E0813 07:09:42.430626 2970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-a-6317daa899?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="800ms" Aug 13 07:09:42.606823 kubelet[2970]: I0813 07:09:42.606468 2970 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-a-6317daa899" Aug 13 07:09:42.606823 kubelet[2970]: E0813 07:09:42.606815 2970 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4230.2.2-a-6317daa899" Aug 13 07:09:42.776117 kubelet[2970]: E0813 07:09:42.776072 2970 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 07:09:42.881512 kubelet[2970]: E0813 07:09:42.881378 2970 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-a-6317daa899&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 07:09:42.906621 kubelet[2970]: E0813 07:09:42.906569 2970 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 07:09:42.951654 kubelet[2970]: E0813 07:09:42.951605 2970 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 07:09:42.963969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount927430397.mount: Deactivated successfully. Aug 13 07:09:43.116221 containerd[1801]: time="2025-08-13T07:09:43.115397409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:09:43.231445 kubelet[2970]: E0813 07:09:43.231326 2970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-a-6317daa899?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="1.6s" Aug 13 07:09:43.409022 kubelet[2970]: I0813 07:09:43.408990 2970 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-a-6317daa899" Aug 13 07:09:43.409387 kubelet[2970]: E0813 07:09:43.409356 2970 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4230.2.2-a-6317daa899" Aug 13 07:09:43.825133 containerd[1801]: time="2025-08-13T07:09:43.825069269Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Aug 13 07:09:43.840309 containerd[1801]: time="2025-08-13T07:09:43.839703101Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:09:43.848191 containerd[1801]: time="2025-08-13T07:09:43.847184217Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:09:43.859916 containerd[1801]: time="2025-08-13T07:09:43.859847730Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, 
bytes read=0" Aug 13 07:09:43.866629 containerd[1801]: time="2025-08-13T07:09:43.866584127Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:09:43.868160 kubelet[2970]: E0813 07:09:43.868118 2970 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 13 07:09:43.870986 containerd[1801]: time="2025-08-13T07:09:43.870927445Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:09:43.875479 containerd[1801]: time="2025-08-13T07:09:43.875374802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:09:43.878162 containerd[1801]: time="2025-08-13T07:09:43.877681841Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.584579369s" Aug 13 07:09:43.878277 containerd[1801]: time="2025-08-13T07:09:43.878230161Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.596531878s" Aug 13 07:09:43.891933 containerd[1801]: time="2025-08-13T07:09:43.891744713Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.602274077s" Aug 13 07:09:44.424712 containerd[1801]: time="2025-08-13T07:09:44.423920549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:44.424712 containerd[1801]: time="2025-08-13T07:09:44.424657988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:44.425709 containerd[1801]: time="2025-08-13T07:09:44.425309148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:44.425709 containerd[1801]: time="2025-08-13T07:09:44.425540708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:44.425709 containerd[1801]: time="2025-08-13T07:09:44.425601508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:44.425709 containerd[1801]: time="2025-08-13T07:09:44.425617388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:44.426144 containerd[1801]: time="2025-08-13T07:09:44.425988827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:44.427013 containerd[1801]: time="2025-08-13T07:09:44.426884907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:44.427013 containerd[1801]: time="2025-08-13T07:09:44.426965547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:44.427993 containerd[1801]: time="2025-08-13T07:09:44.427493987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:44.427993 containerd[1801]: time="2025-08-13T07:09:44.427673387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:44.428343 containerd[1801]: time="2025-08-13T07:09:44.428127146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:44.460509 systemd[1]: Started cri-containerd-5ae2e1151e50e5f55424fc523ff66b9220c47644503b8f94c02aaa6a176c17ea.scope - libcontainer container 5ae2e1151e50e5f55424fc523ff66b9220c47644503b8f94c02aaa6a176c17ea. Aug 13 07:09:44.461680 systemd[1]: Started cri-containerd-b8e5fefe7321e3f774140ab7116c2d5c7afb6e66cae4a478be4ee300abcccb3a.scope - libcontainer container b8e5fefe7321e3f774140ab7116c2d5c7afb6e66cae4a478be4ee300abcccb3a. Aug 13 07:09:44.468099 systemd[1]: Started cri-containerd-3e9bc99fd42723f71f2f1c904251a62493a9a29b28a73c935b7090dc6cb25b79.scope - libcontainer container 3e9bc99fd42723f71f2f1c904251a62493a9a29b28a73c935b7090dc6cb25b79. 
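While the API server at 10.200.20.40:6443 still refuses connections, the kubelet's retry interval for "Failed to ensure lease exists" doubles each time: the log above records 200ms, then 400ms, 800ms and 1.6s. The sketch below just reproduces that observed progression; where the kubelet eventually caps the interval is not visible in this log.

```python
# Doubling backoff as observed above for the node-lease retries:
# 200ms -> 400ms -> 800ms -> 1.6s. Behaviour beyond the values recorded in
# this log (e.g. a cap) is not shown here.
def doubling_backoff(initial_s: float = 0.2):
    delay = initial_s
    while True:
        yield delay
        delay *= 2

gen = doubling_backoff()
observed = [next(gen) for _ in range(4)]
print([f"{d:g}s" for d in observed])  # ['0.2s', '0.4s', '0.8s', '1.6s']
```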
Aug 13 07:09:44.520223 containerd[1801]: time="2025-08-13T07:09:44.519014258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-a-6317daa899,Uid:33a011db1b40bd5371fede3be0828f63,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8e5fefe7321e3f774140ab7116c2d5c7afb6e66cae4a478be4ee300abcccb3a\"" Aug 13 07:09:44.529394 containerd[1801]: time="2025-08-13T07:09:44.529343972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-a-6317daa899,Uid:424e5653b21f275e33fa28e1cbdc4e88,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e9bc99fd42723f71f2f1c904251a62493a9a29b28a73c935b7090dc6cb25b79\"" Aug 13 07:09:44.530400 containerd[1801]: time="2025-08-13T07:09:44.530317652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-a-6317daa899,Uid:4a2fff348b3720a795567bce6cbb98f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ae2e1151e50e5f55424fc523ff66b9220c47644503b8f94c02aaa6a176c17ea\"" Aug 13 07:09:44.531757 containerd[1801]: time="2025-08-13T07:09:44.531704011Z" level=info msg="CreateContainer within sandbox \"b8e5fefe7321e3f774140ab7116c2d5c7afb6e66cae4a478be4ee300abcccb3a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 07:09:44.540925 containerd[1801]: time="2025-08-13T07:09:44.540881886Z" level=info msg="CreateContainer within sandbox \"3e9bc99fd42723f71f2f1c904251a62493a9a29b28a73c935b7090dc6cb25b79\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 07:09:44.546667 containerd[1801]: time="2025-08-13T07:09:44.546613763Z" level=info msg="CreateContainer within sandbox \"5ae2e1151e50e5f55424fc523ff66b9220c47644503b8f94c02aaa6a176c17ea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 07:09:44.657607 containerd[1801]: time="2025-08-13T07:09:44.657520464Z" level=info msg="CreateContainer within sandbox \"b8e5fefe7321e3f774140ab7116c2d5c7afb6e66cae4a478be4ee300abcccb3a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7962e0225dd4ee01cd74c8c7a235869d926fbdb7c5731baa3ca0d257435e1c7a\"" Aug 13 07:09:44.659349 containerd[1801]: time="2025-08-13T07:09:44.658505743Z" level=info msg="StartContainer for \"7962e0225dd4ee01cd74c8c7a235869d926fbdb7c5731baa3ca0d257435e1c7a\"" Aug 13 07:09:44.663396 containerd[1801]: time="2025-08-13T07:09:44.663232660Z" level=info msg="CreateContainer within sandbox \"5ae2e1151e50e5f55424fc523ff66b9220c47644503b8f94c02aaa6a176c17ea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3c7847c5fdee035d4e46dc6b21a5ef6efa24988e86ec7da8b6b5b9c36485f80a\"" Aug 13 07:09:44.665322 containerd[1801]: time="2025-08-13T07:09:44.664094860Z" level=info msg="StartContainer for \"3c7847c5fdee035d4e46dc6b21a5ef6efa24988e86ec7da8b6b5b9c36485f80a\"" Aug 13 07:09:44.669512 containerd[1801]: time="2025-08-13T07:09:44.669462777Z" level=info msg="CreateContainer within sandbox \"3e9bc99fd42723f71f2f1c904251a62493a9a29b28a73c935b7090dc6cb25b79\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f294fef63bdcb1a4b379ccb539aa473200f3c6095040164ef07d2f7d353de0b1\"" Aug 13 07:09:44.671054 containerd[1801]: time="2025-08-13T07:09:44.671011096Z" level=info msg="StartContainer for \"f294fef63bdcb1a4b379ccb539aa473200f3c6095040164ef07d2f7d353de0b1\"" Aug 13 07:09:44.690067 systemd[1]: Started cri-containerd-7962e0225dd4ee01cd74c8c7a235869d926fbdb7c5731baa3ca0d257435e1c7a.scope - libcontainer container 
7962e0225dd4ee01cd74c8c7a235869d926fbdb7c5731baa3ca0d257435e1c7a. Aug 13 07:09:44.710891 systemd[1]: Started cri-containerd-3c7847c5fdee035d4e46dc6b21a5ef6efa24988e86ec7da8b6b5b9c36485f80a.scope - libcontainer container 3c7847c5fdee035d4e46dc6b21a5ef6efa24988e86ec7da8b6b5b9c36485f80a. Aug 13 07:09:44.718464 systemd[1]: Started cri-containerd-f294fef63bdcb1a4b379ccb539aa473200f3c6095040164ef07d2f7d353de0b1.scope - libcontainer container f294fef63bdcb1a4b379ccb539aa473200f3c6095040164ef07d2f7d353de0b1. Aug 13 07:09:44.765112 containerd[1801]: time="2025-08-13T07:09:44.765061726Z" level=info msg="StartContainer for \"7962e0225dd4ee01cd74c8c7a235869d926fbdb7c5731baa3ca0d257435e1c7a\" returns successfully" Aug 13 07:09:44.788587 containerd[1801]: time="2025-08-13T07:09:44.788411953Z" level=info msg="StartContainer for \"3c7847c5fdee035d4e46dc6b21a5ef6efa24988e86ec7da8b6b5b9c36485f80a\" returns successfully" Aug 13 07:09:44.798348 containerd[1801]: time="2025-08-13T07:09:44.798296908Z" level=info msg="StartContainer for \"f294fef63bdcb1a4b379ccb539aa473200f3c6095040164ef07d2f7d353de0b1\" returns successfully" Aug 13 07:09:44.871532 kubelet[2970]: E0813 07:09:44.871495 2970 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-6317daa899\" not found" node="ci-4230.2.2-a-6317daa899" Aug 13 07:09:44.872027 kubelet[2970]: E0813 07:09:44.872000 2970 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-6317daa899\" not found" node="ci-4230.2.2-a-6317daa899" Aug 13 07:09:44.876594 kubelet[2970]: E0813 07:09:44.876558 2970 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-6317daa899\" not found" node="ci-4230.2.2-a-6317daa899" Aug 13 07:09:45.013368 kubelet[2970]: I0813 07:09:45.011796 2970 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-a-6317daa899" Aug 13 07:09:45.878899 kubelet[2970]: E0813 07:09:45.878872 2970 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-6317daa899\" not found" node="ci-4230.2.2-a-6317daa899" Aug 13 07:09:45.881482 kubelet[2970]: E0813 07:09:45.881241 2970 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-6317daa899\" not found" node="ci-4230.2.2-a-6317daa899" Aug 13 07:09:46.800285 kubelet[2970]: E0813 07:09:46.800106 2970 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.2-a-6317daa899\" not found" node="ci-4230.2.2-a-6317daa899" Aug 13 07:09:46.814452 kubelet[2970]: I0813 07:09:46.814193 2970 apiserver.go:52] "Watching apiserver" Aug 13 07:09:46.828271 kubelet[2970]: I0813 07:09:46.828219 2970 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 07:09:46.879888 kubelet[2970]: E0813 07:09:46.879566 2970 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-6317daa899\" not found" node="ci-4230.2.2-a-6317daa899" Aug 13 07:09:46.911059 kubelet[2970]: I0813 07:09:46.910505 2970 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.2-a-6317daa899" Aug 13 07:09:46.928914 kubelet[2970]: I0813 07:09:46.928378 2970 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4230.2.2-a-6317daa899" Aug 13 07:09:46.977864 kubelet[2970]: E0813 07:09:46.977826 2970 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.2-a-6317daa899\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.2.2-a-6317daa899" Aug 13 07:09:46.978153 kubelet[2970]: I0813 07:09:46.978022 2970 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.2-a-6317daa899" Aug 13 07:09:46.984071 kubelet[2970]: E0813 07:09:46.983866 2970 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.2-a-6317daa899\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.2.2-a-6317daa899" Aug 13 07:09:46.984071 kubelet[2970]: I0813 07:09:46.983896 2970 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-a-6317daa899" Aug 13 07:09:46.986125 kubelet[2970]: E0813 07:09:46.986076 2970 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.2-a-6317daa899\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.2.2-a-6317daa899" Aug 13 07:09:48.264842 kubelet[2970]: I0813 07:09:48.264790 2970 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-a-6317daa899" Aug 13 07:09:48.277669 kubelet[2970]: I0813 07:09:48.277598 2970 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 07:09:49.118623 systemd[1]: Reload requested from client PID 3256 ('systemctl') (unit session-9.scope)... Aug 13 07:09:49.118640 systemd[1]: Reloading... Aug 13 07:09:49.280301 zram_generator::config[3306]: No configuration found. Aug 13 07:09:49.397582 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:09:49.526317 systemd[1]: Reloading finished in 407 ms. Aug 13 07:09:49.550440 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:09:49.564733 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:09:49.565288 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:09:49.565470 systemd[1]: kubelet.service: Consumed 1.032s CPU time, 128.5M memory peak. Aug 13 07:09:49.571895 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:09:49.716569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:09:49.722558 (kubelet)[3367]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:09:49.808898 kubelet[3367]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:09:49.808898 kubelet[3367]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
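The warnings.go message above fires because the node name, ci-4230.2.2-a-6317daa899, contains dots, so the static pod's metadata.name is not a valid DNS label when used as the pod's hostname. A sketch of the RFC 1123 label rule the warning refers to:

```python
import re

# The warning above is triggered because the node name contains dots, so the
# resulting pod name is not a valid RFC 1123 DNS label (lowercase
# alphanumerics and '-', at most 63 characters, no dots).
DNS1123_LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?$")

def is_dns1123_label(name: str) -> bool:
    return len(name) <= 63 and DNS1123_LABEL.match(name) is not None

print(is_dns1123_label("kube-scheduler-ci-4230.2.2-a-6317daa899"))  # False (dots)
print(is_dns1123_label("kube-scheduler-ci-4230-2-2-a-6317daa899"))  # True
```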
Aug 13 07:09:49.808898 kubelet[3367]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:09:49.808898 kubelet[3367]: I0813 07:09:49.808330 3367 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:09:50.075587 kubelet[3367]: I0813 07:09:49.814470 3367 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 07:09:50.075587 kubelet[3367]: I0813 07:09:49.814497 3367 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:09:50.075587 kubelet[3367]: I0813 07:09:49.814727 3367 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 07:09:50.075587 kubelet[3367]: I0813 07:09:50.074597 3367 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Aug 13 07:09:50.077459 kubelet[3367]: I0813 07:09:50.077421 3367 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:09:50.081991 kubelet[3367]: E0813 07:09:50.081714 3367 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:09:50.081991 kubelet[3367]: I0813 07:09:50.081848 3367 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:09:50.088148 kubelet[3367]: I0813 07:09:50.087573 3367 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:09:50.088148 kubelet[3367]: I0813 07:09:50.087809 3367 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:09:50.088148 kubelet[3367]: I0813 07:09:50.087833 3367 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.2-a-6317daa899","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:09:50.088148 kubelet[3367]: I0813 07:09:50.088071 3367 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:09:50.088433 kubelet[3367]: I0813 07:09:50.088079 3367 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 07:09:50.088433 kubelet[3367]: I0813 07:09:50.088122 3367 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:09:50.088972 kubelet[3367]: I0813 07:09:50.088944 3367 kubelet.go:480] "Attempting to sync node with API server" Aug 13 07:09:50.088972 kubelet[3367]: I0813 07:09:50.088969 3367 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:09:50.089062 kubelet[3367]: I0813 07:09:50.088995 3367 kubelet.go:386] "Adding apiserver pod source" Aug 13 07:09:50.089062 kubelet[3367]: I0813 07:09:50.089007 3367 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:09:50.091337 kubelet[3367]: I0813 07:09:50.091304 3367 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 13 07:09:50.092000 kubelet[3367]: I0813 07:09:50.091973 3367 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 07:09:50.095852 kubelet[3367]: I0813 07:09:50.095825 3367 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 07:09:50.095958 kubelet[3367]: I0813 07:09:50.095871 3367 server.go:1289] "Started kubelet" Aug 13 07:09:50.101059 kubelet[3367]: I0813 07:09:50.098060 3367 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:09:50.101059 kubelet[3367]: I0813 
07:09:50.098700 3367 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:09:50.101059 kubelet[3367]: I0813 07:09:50.099535 3367 server.go:317] "Adding debug handlers to kubelet server" Aug 13 07:09:50.105579 kubelet[3367]: I0813 07:09:50.105493 3367 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:09:50.105735 kubelet[3367]: I0813 07:09:50.105711 3367 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:09:50.106320 kubelet[3367]: I0813 07:09:50.105936 3367 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:09:50.112346 kubelet[3367]: I0813 07:09:50.111331 3367 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 07:09:50.113446 kubelet[3367]: E0813 07:09:50.113408 3367 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-6317daa899\" not found" Aug 13 07:09:50.120335 kubelet[3367]: I0813 07:09:50.118584 3367 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 07:09:50.134356 kubelet[3367]: I0813 07:09:50.120672 3367 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:09:50.137889 kubelet[3367]: I0813 07:09:50.137851 3367 factory.go:223] Registration of the systemd container factory successfully Aug 13 07:09:50.138155 kubelet[3367]: I0813 07:09:50.138132 3367 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:09:50.149026 kubelet[3367]: I0813 07:09:50.148989 3367 factory.go:223] Registration of the containerd container factory successfully Aug 13 07:09:50.152909 kubelet[3367]: I0813 07:09:50.152872 3367 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 07:09:50.153995 kubelet[3367]: I0813 07:09:50.153969 3367 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 07:09:50.154121 kubelet[3367]: I0813 07:09:50.154112 3367 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 07:09:50.154223 kubelet[3367]: I0813 07:09:50.154213 3367 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 07:09:50.154376 kubelet[3367]: I0813 07:09:50.154367 3367 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 07:09:50.154492 kubelet[3367]: E0813 07:09:50.154474 3367 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:09:50.192104 sudo[3401]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 07:09:50.192421 sudo[3401]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 13 07:09:50.203282 kubelet[3367]: I0813 07:09:50.203216 3367 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 07:09:50.203411 kubelet[3367]: I0813 07:09:50.203250 3367 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 07:09:50.203411 kubelet[3367]: I0813 07:09:50.203317 3367 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:09:50.203503 kubelet[3367]: I0813 07:09:50.203480 3367 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 07:09:50.203534 kubelet[3367]: I0813 07:09:50.203497 3367 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 07:09:50.203534 kubelet[3367]: I0813 07:09:50.203514 3367 policy_none.go:49] "None policy: Start" Aug 13 07:09:50.203534 kubelet[3367]: I0813 07:09:50.203532 3367 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 07:09:50.203592 kubelet[3367]: I0813 07:09:50.203542 3367 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:09:50.203761 kubelet[3367]: I0813 07:09:50.203737 3367 state_mem.go:75] "Updated machine memory state" Aug 13 07:09:50.211393 kubelet[3367]: E0813 07:09:50.211361 3367 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 07:09:50.211893 kubelet[3367]: I0813 07:09:50.211553 3367 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:09:50.211893 kubelet[3367]: I0813 07:09:50.211563 3367 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:09:50.211893 kubelet[3367]: I0813 07:09:50.211813 3367 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:09:50.214365 kubelet[3367]: E0813 07:09:50.214232 3367 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 07:09:50.256721 kubelet[3367]: I0813 07:09:50.255690 3367 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-a-6317daa899" Aug 13 07:09:50.256721 kubelet[3367]: I0813 07:09:50.255783 3367 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-a-6317daa899" Aug 13 07:09:50.256721 kubelet[3367]: I0813 07:09:50.255705 3367 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.2-a-6317daa899" Aug 13 07:09:50.270657 kubelet[3367]: I0813 07:09:50.270412 3367 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 07:09:50.283072 kubelet[3367]: I0813 07:09:50.282530 3367 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 07:09:50.283201 kubelet[3367]: I0813 07:09:50.283133 3367 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 07:09:50.283201 kubelet[3367]: E0813 07:09:50.283178 3367 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.2-a-6317daa899\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.2-a-6317daa899" Aug 13 07:09:50.319593 kubelet[3367]: I0813 07:09:50.318582 3367 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-a-6317daa899" Aug 13 07:09:50.335147 kubelet[3367]: I0813 07:09:50.334956 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33a011db1b40bd5371fede3be0828f63-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-a-6317daa899\" (UID: \"33a011db1b40bd5371fede3be0828f63\") " pod="kube-system/kube-apiserver-ci-4230.2.2-a-6317daa899" Aug 13 07:09:50.335438 kubelet[3367]: I0813 07:09:50.335325 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33a011db1b40bd5371fede3be0828f63-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-a-6317daa899\" (UID: \"33a011db1b40bd5371fede3be0828f63\") " pod="kube-system/kube-apiserver-ci-4230.2.2-a-6317daa899" Aug 13 07:09:50.335438 kubelet[3367]: I0813 07:09:50.335360 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33a011db1b40bd5371fede3be0828f63-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-a-6317daa899\" (UID: \"33a011db1b40bd5371fede3be0828f63\") " pod="kube-system/kube-apiserver-ci-4230.2.2-a-6317daa899" Aug 13 07:09:50.342403 kubelet[3367]: I0813 07:09:50.342281 3367 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.2-a-6317daa899" Aug 13 07:09:50.342910 kubelet[3367]: I0813 07:09:50.342605 3367 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.2-a-6317daa899" Aug 13 07:09:50.436088 kubelet[3367]: I0813 07:09:50.436007 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a2fff348b3720a795567bce6cbb98f1-kubeconfig\") pod 
\"kube-controller-manager-ci-4230.2.2-a-6317daa899\" (UID: \"4a2fff348b3720a795567bce6cbb98f1\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-6317daa899" Aug 13 07:09:50.436668 kubelet[3367]: I0813 07:09:50.436323 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/424e5653b21f275e33fa28e1cbdc4e88-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-a-6317daa899\" (UID: \"424e5653b21f275e33fa28e1cbdc4e88\") " pod="kube-system/kube-scheduler-ci-4230.2.2-a-6317daa899" Aug 13 07:09:50.436746 kubelet[3367]: I0813 07:09:50.436508 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4a2fff348b3720a795567bce6cbb98f1-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.2-a-6317daa899\" (UID: \"4a2fff348b3720a795567bce6cbb98f1\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-6317daa899" Aug 13 07:09:50.437010 kubelet[3367]: I0813 07:09:50.436803 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a2fff348b3720a795567bce6cbb98f1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-a-6317daa899\" (UID: \"4a2fff348b3720a795567bce6cbb98f1\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-6317daa899" Aug 13 07:09:50.437010 kubelet[3367]: I0813 07:09:50.436841 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a2fff348b3720a795567bce6cbb98f1-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-a-6317daa899\" (UID: \"4a2fff348b3720a795567bce6cbb98f1\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-6317daa899" Aug 13 07:09:50.437010 kubelet[3367]: I0813 07:09:50.436857 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a2fff348b3720a795567bce6cbb98f1-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.2-a-6317daa899\" (UID: \"4a2fff348b3720a795567bce6cbb98f1\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-6317daa899" Aug 13 07:09:50.665886 sudo[3401]: pam_unix(sudo:session): session closed for user root Aug 13 07:09:51.090358 kubelet[3367]: I0813 07:09:51.090319 3367 apiserver.go:52] "Watching apiserver" Aug 13 07:09:51.135288 kubelet[3367]: I0813 07:09:51.135219 3367 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 07:09:51.182398 kubelet[3367]: I0813 07:09:51.182352 3367 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-a-6317daa899" Aug 13 07:09:51.184395 kubelet[3367]: I0813 07:09:51.182538 3367 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-a-6317daa899" Aug 13 07:09:51.205603 kubelet[3367]: I0813 07:09:51.205363 3367 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 07:09:51.205603 kubelet[3367]: E0813 07:09:51.205450 3367 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.2-a-6317daa899\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.2-a-6317daa899" Aug 13 07:09:51.209027 
kubelet[3367]: I0813 07:09:51.207512 3367 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 07:09:51.209027 kubelet[3367]: E0813 07:09:51.207565 3367 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.2-a-6317daa899\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.2-a-6317daa899" Aug 13 07:09:51.239595 kubelet[3367]: I0813 07:09:51.239117 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.2-a-6317daa899" podStartSLOduration=3.239097718 podStartE2EDuration="3.239097718s" podCreationTimestamp="2025-08-13 07:09:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:09:51.225147492 +0000 UTC m=+1.498049463" watchObservedRunningTime="2025-08-13 07:09:51.239097718 +0000 UTC m=+1.511999729" Aug 13 07:09:51.262454 kubelet[3367]: I0813 07:09:51.262383 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.2-a-6317daa899" podStartSLOduration=1.2619547770000001 podStartE2EDuration="1.261954777s" podCreationTimestamp="2025-08-13 07:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:09:51.260160858 +0000 UTC m=+1.533062869" watchObservedRunningTime="2025-08-13 07:09:51.261954777 +0000 UTC m=+1.534856748" Aug 13 07:09:51.262998 kubelet[3367]: I0813 07:09:51.262727 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.2-a-6317daa899" podStartSLOduration=1.262718056 podStartE2EDuration="1.262718056s" podCreationTimestamp="2025-08-13 07:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:09:51.240633157 +0000 UTC m=+1.513535128" watchObservedRunningTime="2025-08-13 07:09:51.262718056 +0000 UTC m=+1.535620107" Aug 13 07:09:52.760187 sudo[2276]: pam_unix(sudo:session): session closed for user root Aug 13 07:09:52.837051 sshd[2275]: Connection closed by 10.200.16.10 port 51012 Aug 13 07:09:52.837677 sshd-session[2273]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:52.841530 systemd[1]: sshd@6-10.200.20.40:22-10.200.16.10:51012.service: Deactivated successfully. Aug 13 07:09:52.844743 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 07:09:52.844937 systemd[1]: session-9.scope: Consumed 7.782s CPU time, 263.1M memory peak. Aug 13 07:09:52.847158 systemd-logind[1723]: Session 9 logged out. Waiting for processes to exit. Aug 13 07:09:52.848472 systemd-logind[1723]: Removed session 9. Aug 13 07:09:56.046987 kubelet[3367]: I0813 07:09:56.046951 3367 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 07:09:56.047754 kubelet[3367]: I0813 07:09:56.047483 3367 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 07:09:56.047787 containerd[1801]: time="2025-08-13T07:09:56.047317605Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 07:09:56.914980 systemd[1]: Created slice kubepods-besteffort-pod89aac68c_4c57_4374_9a9b_07a4c4f1dd36.slice - libcontainer container kubepods-besteffort-pod89aac68c_4c57_4374_9a9b_07a4c4f1dd36.slice. Aug 13 07:09:56.930059 systemd[1]: Created slice kubepods-burstable-poded2ad4b9_d11d_40b6_804c_06bf0efe451b.slice - libcontainer container kubepods-burstable-poded2ad4b9_d11d_40b6_804c_06bf0efe451b.slice. Aug 13 07:09:56.979425 kubelet[3367]: I0813 07:09:56.979207 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-hubble-tls\") pod \"cilium-x2d2d\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " pod="kube-system/cilium-x2d2d" Aug 13 07:09:56.979425 kubelet[3367]: I0813 07:09:56.979270 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gfrf\" (UniqueName: \"kubernetes.io/projected/89aac68c-4c57-4374-9a9b-07a4c4f1dd36-kube-api-access-6gfrf\") pod \"kube-proxy-b98pt\" (UID: \"89aac68c-4c57-4374-9a9b-07a4c4f1dd36\") " pod="kube-system/kube-proxy-b98pt" Aug 13 07:09:56.979425 kubelet[3367]: I0813 07:09:56.979293 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-bpf-maps\") pod \"cilium-x2d2d\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " pod="kube-system/cilium-x2d2d" Aug 13 07:09:56.979425 kubelet[3367]: I0813 07:09:56.979310 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-cni-path\") pod \"cilium-x2d2d\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " pod="kube-system/cilium-x2d2d" Aug 13 07:09:56.979425 kubelet[3367]: I0813 07:09:56.979330 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-cilium-config-path\") pod \"cilium-x2d2d\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " pod="kube-system/cilium-x2d2d" Aug 13 07:09:56.979425 kubelet[3367]: I0813 07:09:56.979349 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89aac68c-4c57-4374-9a9b-07a4c4f1dd36-lib-modules\") pod \"kube-proxy-b98pt\" (UID: \"89aac68c-4c57-4374-9a9b-07a4c4f1dd36\") " pod="kube-system/kube-proxy-b98pt" Aug 13 07:09:56.979703 kubelet[3367]: I0813 07:09:56.979364 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-cilium-run\") pod \"cilium-x2d2d\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " pod="kube-system/cilium-x2d2d" Aug 13 07:09:56.979703 kubelet[3367]: I0813 07:09:56.979378 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-cilium-cgroup\") pod \"cilium-x2d2d\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " pod="kube-system/cilium-x2d2d" Aug 13 07:09:56.979703 kubelet[3367]: I0813 07:09:56.979393 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-xtables-lock\") pod \"cilium-x2d2d\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " pod="kube-system/cilium-x2d2d" Aug 13 07:09:56.982638 kubelet[3367]: I0813 07:09:56.982407 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-host-proc-sys-net\") pod \"cilium-x2d2d\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " pod="kube-system/cilium-x2d2d" Aug 13 07:09:56.982638 kubelet[3367]: I0813 07:09:56.982478 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-host-proc-sys-kernel\") pod \"cilium-x2d2d\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " pod="kube-system/cilium-x2d2d" Aug 13 07:09:56.982638 kubelet[3367]: I0813 07:09:56.982534 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlsnv\" (UniqueName: \"kubernetes.io/projected/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-kube-api-access-jlsnv\") pod \"cilium-x2d2d\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " pod="kube-system/cilium-x2d2d" Aug 13 07:09:56.982638 kubelet[3367]: I0813 07:09:56.982559 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-etc-cni-netd\") pod \"cilium-x2d2d\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " pod="kube-system/cilium-x2d2d" Aug 13 07:09:56.982638 kubelet[3367]: I0813 07:09:56.982576 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-clustermesh-secrets\") pod \"cilium-x2d2d\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " pod="kube-system/cilium-x2d2d" Aug 13 07:09:56.982951 kubelet[3367]: I0813 07:09:56.982596 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/89aac68c-4c57-4374-9a9b-07a4c4f1dd36-kube-proxy\") pod \"kube-proxy-b98pt\" (UID: \"89aac68c-4c57-4374-9a9b-07a4c4f1dd36\") " pod="kube-system/kube-proxy-b98pt" Aug 13 07:09:56.982951 kubelet[3367]: I0813 07:09:56.982612 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89aac68c-4c57-4374-9a9b-07a4c4f1dd36-xtables-lock\") pod \"kube-proxy-b98pt\" (UID: \"89aac68c-4c57-4374-9a9b-07a4c4f1dd36\") " pod="kube-system/kube-proxy-b98pt" Aug 13 07:09:56.982951 kubelet[3367]: I0813 07:09:56.982648 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-hostproc\") pod \"cilium-x2d2d\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " pod="kube-system/cilium-x2d2d" Aug 13 07:09:56.982951 kubelet[3367]: I0813 07:09:56.982703 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-lib-modules\") pod \"cilium-x2d2d\" (UID: 
\"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " pod="kube-system/cilium-x2d2d" Aug 13 07:09:57.000749 systemd[1]: Created slice kubepods-besteffort-pod12c1b4e6_dc1b_4943_8e08_b45c5a6beb38.slice - libcontainer container kubepods-besteffort-pod12c1b4e6_dc1b_4943_8e08_b45c5a6beb38.slice. Aug 13 07:09:57.083608 kubelet[3367]: I0813 07:09:57.083559 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp629\" (UniqueName: \"kubernetes.io/projected/12c1b4e6-dc1b-4943-8e08-b45c5a6beb38-kube-api-access-kp629\") pod \"cilium-operator-6c4d7847fc-zppqc\" (UID: \"12c1b4e6-dc1b-4943-8e08-b45c5a6beb38\") " pod="kube-system/cilium-operator-6c4d7847fc-zppqc" Aug 13 07:09:57.086313 kubelet[3367]: I0813 07:09:57.084723 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12c1b4e6-dc1b-4943-8e08-b45c5a6beb38-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-zppqc\" (UID: \"12c1b4e6-dc1b-4943-8e08-b45c5a6beb38\") " pod="kube-system/cilium-operator-6c4d7847fc-zppqc" Aug 13 07:09:57.229441 containerd[1801]: time="2025-08-13T07:09:57.229313286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b98pt,Uid:89aac68c-4c57-4374-9a9b-07a4c4f1dd36,Namespace:kube-system,Attempt:0,}" Aug 13 07:09:57.234112 containerd[1801]: time="2025-08-13T07:09:57.234064521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x2d2d,Uid:ed2ad4b9-d11d-40b6-804c-06bf0efe451b,Namespace:kube-system,Attempt:0,}" Aug 13 07:09:57.311882 containerd[1801]: time="2025-08-13T07:09:57.310902048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zppqc,Uid:12c1b4e6-dc1b-4943-8e08-b45c5a6beb38,Namespace:kube-system,Attempt:0,}" Aug 13 07:09:57.316756 containerd[1801]: time="2025-08-13T07:09:57.316244083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:57.316756 containerd[1801]: time="2025-08-13T07:09:57.316443123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:57.316756 containerd[1801]: time="2025-08-13T07:09:57.316461443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:57.316756 containerd[1801]: time="2025-08-13T07:09:57.316603403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:57.317581 containerd[1801]: time="2025-08-13T07:09:57.317246282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:57.317581 containerd[1801]: time="2025-08-13T07:09:57.317312322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:57.317581 containerd[1801]: time="2025-08-13T07:09:57.317329442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:57.317581 containerd[1801]: time="2025-08-13T07:09:57.317409202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:57.340477 systemd[1]: Started cri-containerd-7bb99db29273fc5201c18888f00e36b173e4c1e6609011c55b78f862d725f506.scope - libcontainer container 7bb99db29273fc5201c18888f00e36b173e4c1e6609011c55b78f862d725f506. Aug 13 07:09:57.342093 systemd[1]: Started cri-containerd-e0422e9327ccaa10e4ac9a1f6af9707785ad6b3a656ddae7e5836f8cc18040da.scope - libcontainer container e0422e9327ccaa10e4ac9a1f6af9707785ad6b3a656ddae7e5836f8cc18040da. Aug 13 07:09:57.378163 containerd[1801]: time="2025-08-13T07:09:57.378124465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x2d2d,Uid:ed2ad4b9-d11d-40b6-804c-06bf0efe451b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bb99db29273fc5201c18888f00e36b173e4c1e6609011c55b78f862d725f506\"" Aug 13 07:09:57.381045 containerd[1801]: time="2025-08-13T07:09:57.380994942Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 07:09:57.384848 containerd[1801]: time="2025-08-13T07:09:57.384800898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b98pt,Uid:89aac68c-4c57-4374-9a9b-07a4c4f1dd36,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0422e9327ccaa10e4ac9a1f6af9707785ad6b3a656ddae7e5836f8cc18040da\"" Aug 13 07:09:57.395702 containerd[1801]: time="2025-08-13T07:09:57.395479408Z" level=info msg="CreateContainer within sandbox \"e0422e9327ccaa10e4ac9a1f6af9707785ad6b3a656ddae7e5836f8cc18040da\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 07:09:57.403848 containerd[1801]: time="2025-08-13T07:09:57.403735240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:57.404114 containerd[1801]: time="2025-08-13T07:09:57.404000200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:57.404114 containerd[1801]: time="2025-08-13T07:09:57.404055440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:57.404418 containerd[1801]: time="2025-08-13T07:09:57.404349320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:57.426519 systemd[1]: Started cri-containerd-97cb7c055b729ff2a9368485c606a2dd5334c24d2790ac313d4bdcaae43dd70c.scope - libcontainer container 97cb7c055b729ff2a9368485c606a2dd5334c24d2790ac313d4bdcaae43dd70c. 
Aug 13 07:09:57.456351 containerd[1801]: time="2025-08-13T07:09:57.456247231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zppqc,Uid:12c1b4e6-dc1b-4943-8e08-b45c5a6beb38,Namespace:kube-system,Attempt:0,} returns sandbox id \"97cb7c055b729ff2a9368485c606a2dd5334c24d2790ac313d4bdcaae43dd70c\"" Aug 13 07:09:57.460227 containerd[1801]: time="2025-08-13T07:09:57.460154227Z" level=info msg="CreateContainer within sandbox \"e0422e9327ccaa10e4ac9a1f6af9707785ad6b3a656ddae7e5836f8cc18040da\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"204883042e8c5c6e742a5595885605114b340b14e2f399076549624458380714\"" Aug 13 07:09:57.460924 containerd[1801]: time="2025-08-13T07:09:57.460802746Z" level=info msg="StartContainer for \"204883042e8c5c6e742a5595885605114b340b14e2f399076549624458380714\"" Aug 13 07:09:57.485456 systemd[1]: Started cri-containerd-204883042e8c5c6e742a5595885605114b340b14e2f399076549624458380714.scope - libcontainer container 204883042e8c5c6e742a5595885605114b340b14e2f399076549624458380714. Aug 13 07:09:57.518684 containerd[1801]: time="2025-08-13T07:09:57.518620412Z" level=info msg="StartContainer for \"204883042e8c5c6e742a5595885605114b340b14e2f399076549624458380714\" returns successfully" Aug 13 07:10:02.342074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount666248321.mount: Deactivated successfully. Aug 13 07:10:04.096320 containerd[1801]: time="2025-08-13T07:10:04.095682919Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:04.101217 containerd[1801]: time="2025-08-13T07:10:04.101142914Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Aug 13 07:10:04.107670 containerd[1801]: time="2025-08-13T07:10:04.107600387Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:04.109444 containerd[1801]: time="2025-08-13T07:10:04.109400546Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.728358204s" Aug 13 07:10:04.109692 containerd[1801]: time="2025-08-13T07:10:04.109576905Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Aug 13 07:10:04.111497 containerd[1801]: time="2025-08-13T07:10:04.111133304Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 07:10:04.119300 containerd[1801]: time="2025-08-13T07:10:04.118910656Z" level=info msg="CreateContainer within sandbox \"7bb99db29273fc5201c18888f00e36b173e4c1e6609011c55b78f862d725f506\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 07:10:04.178169 containerd[1801]: time="2025-08-13T07:10:04.178107077Z" level=info 
msg="CreateContainer within sandbox \"7bb99db29273fc5201c18888f00e36b173e4c1e6609011c55b78f862d725f506\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4de72590f3b1ed8d5ad68b4c183aa6de2276b5c7cbe27ca5b5356adf08d62fa5\"" Aug 13 07:10:04.179053 containerd[1801]: time="2025-08-13T07:10:04.178996596Z" level=info msg="StartContainer for \"4de72590f3b1ed8d5ad68b4c183aa6de2276b5c7cbe27ca5b5356adf08d62fa5\"" Aug 13 07:10:04.207487 systemd[1]: Started cri-containerd-4de72590f3b1ed8d5ad68b4c183aa6de2276b5c7cbe27ca5b5356adf08d62fa5.scope - libcontainer container 4de72590f3b1ed8d5ad68b4c183aa6de2276b5c7cbe27ca5b5356adf08d62fa5. Aug 13 07:10:04.241888 containerd[1801]: time="2025-08-13T07:10:04.241751173Z" level=info msg="StartContainer for \"4de72590f3b1ed8d5ad68b4c183aa6de2276b5c7cbe27ca5b5356adf08d62fa5\" returns successfully" Aug 13 07:10:04.251137 systemd[1]: cri-containerd-4de72590f3b1ed8d5ad68b4c183aa6de2276b5c7cbe27ca5b5356adf08d62fa5.scope: Deactivated successfully. Aug 13 07:10:05.154665 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4de72590f3b1ed8d5ad68b4c183aa6de2276b5c7cbe27ca5b5356adf08d62fa5-rootfs.mount: Deactivated successfully. Aug 13 07:10:05.243946 kubelet[3367]: I0813 07:10:05.243878 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b98pt" podStartSLOduration=9.243863052 podStartE2EDuration="9.243863052s" podCreationTimestamp="2025-08-13 07:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:09:58.214161833 +0000 UTC m=+8.487063844" watchObservedRunningTime="2025-08-13 07:10:05.243863052 +0000 UTC m=+15.516765063" Aug 13 07:10:06.044602 containerd[1801]: time="2025-08-13T07:10:06.044395653Z" level=info msg="shim disconnected" id=4de72590f3b1ed8d5ad68b4c183aa6de2276b5c7cbe27ca5b5356adf08d62fa5 namespace=k8s.io Aug 13 07:10:06.044602 containerd[1801]: time="2025-08-13T07:10:06.044448533Z" level=warning msg="cleaning up after shim disconnected" id=4de72590f3b1ed8d5ad68b4c183aa6de2276b5c7cbe27ca5b5356adf08d62fa5 namespace=k8s.io Aug 13 07:10:06.044602 containerd[1801]: time="2025-08-13T07:10:06.044456493Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:10:06.232798 containerd[1801]: time="2025-08-13T07:10:06.231825225Z" level=info msg="CreateContainer within sandbox \"7bb99db29273fc5201c18888f00e36b173e4c1e6609011c55b78f862d725f506\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 07:10:06.276974 containerd[1801]: time="2025-08-13T07:10:06.276921980Z" level=info msg="CreateContainer within sandbox \"7bb99db29273fc5201c18888f00e36b173e4c1e6609011c55b78f862d725f506\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ab34eba6b01358c96ca8ee0e16a4a782dc4982036f1d1672a8e8113c558aca04\"" Aug 13 07:10:06.278635 containerd[1801]: time="2025-08-13T07:10:06.277650180Z" level=info msg="StartContainer for \"ab34eba6b01358c96ca8ee0e16a4a782dc4982036f1d1672a8e8113c558aca04\"" Aug 13 07:10:06.309517 systemd[1]: Started cri-containerd-ab34eba6b01358c96ca8ee0e16a4a782dc4982036f1d1672a8e8113c558aca04.scope - libcontainer container ab34eba6b01358c96ca8ee0e16a4a782dc4982036f1d1672a8e8113c558aca04. 
Aug 13 07:10:06.338622 containerd[1801]: time="2025-08-13T07:10:06.338571359Z" level=info msg="StartContainer for \"ab34eba6b01358c96ca8ee0e16a4a782dc4982036f1d1672a8e8113c558aca04\" returns successfully" Aug 13 07:10:06.348809 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 07:10:06.349031 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:10:06.349864 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:10:06.355767 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:10:06.358120 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 07:10:06.358578 systemd[1]: cri-containerd-ab34eba6b01358c96ca8ee0e16a4a782dc4982036f1d1672a8e8113c558aca04.scope: Deactivated successfully. Aug 13 07:10:06.375315 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:10:06.403022 containerd[1801]: time="2025-08-13T07:10:06.402943854Z" level=info msg="shim disconnected" id=ab34eba6b01358c96ca8ee0e16a4a782dc4982036f1d1672a8e8113c558aca04 namespace=k8s.io Aug 13 07:10:06.403022 containerd[1801]: time="2025-08-13T07:10:06.403015854Z" level=warning msg="cleaning up after shim disconnected" id=ab34eba6b01358c96ca8ee0e16a4a782dc4982036f1d1672a8e8113c558aca04 namespace=k8s.io Aug 13 07:10:06.403022 containerd[1801]: time="2025-08-13T07:10:06.403023694Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:10:07.243291 containerd[1801]: time="2025-08-13T07:10:07.241291809Z" level=info msg="CreateContainer within sandbox \"7bb99db29273fc5201c18888f00e36b173e4c1e6609011c55b78f862d725f506\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 07:10:07.261750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab34eba6b01358c96ca8ee0e16a4a782dc4982036f1d1672a8e8113c558aca04-rootfs.mount: Deactivated successfully. Aug 13 07:10:07.325400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2856887069.mount: Deactivated successfully. Aug 13 07:10:07.476658 containerd[1801]: time="2025-08-13T07:10:07.476605725Z" level=info msg="CreateContainer within sandbox \"7bb99db29273fc5201c18888f00e36b173e4c1e6609011c55b78f862d725f506\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"86de98aacf9be9f8cb19e56ec69b0d6830cad022526273fffdee1a74c622eed0\"" Aug 13 07:10:07.478661 containerd[1801]: time="2025-08-13T07:10:07.478619803Z" level=info msg="StartContainer for \"86de98aacf9be9f8cb19e56ec69b0d6830cad022526273fffdee1a74c622eed0\"" Aug 13 07:10:07.512536 systemd[1]: Started cri-containerd-86de98aacf9be9f8cb19e56ec69b0d6830cad022526273fffdee1a74c622eed0.scope - libcontainer container 86de98aacf9be9f8cb19e56ec69b0d6830cad022526273fffdee1a74c622eed0. Aug 13 07:10:07.547630 systemd[1]: cri-containerd-86de98aacf9be9f8cb19e56ec69b0d6830cad022526273fffdee1a74c622eed0.scope: Deactivated successfully. 
Aug 13 07:10:07.550784 containerd[1801]: time="2025-08-13T07:10:07.550692568Z" level=info msg="StartContainer for \"86de98aacf9be9f8cb19e56ec69b0d6830cad022526273fffdee1a74c622eed0\" returns successfully" Aug 13 07:10:07.595643 containerd[1801]: time="2025-08-13T07:10:07.595462001Z" level=info msg="shim disconnected" id=86de98aacf9be9f8cb19e56ec69b0d6830cad022526273fffdee1a74c622eed0 namespace=k8s.io Aug 13 07:10:07.595643 containerd[1801]: time="2025-08-13T07:10:07.595554721Z" level=warning msg="cleaning up after shim disconnected" id=86de98aacf9be9f8cb19e56ec69b0d6830cad022526273fffdee1a74c622eed0 namespace=k8s.io Aug 13 07:10:07.595643 containerd[1801]: time="2025-08-13T07:10:07.595565001Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:10:07.976688 containerd[1801]: time="2025-08-13T07:10:07.975872806Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:07.979650 containerd[1801]: time="2025-08-13T07:10:07.979594363Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Aug 13 07:10:07.987470 containerd[1801]: time="2025-08-13T07:10:07.987390835Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:07.988941 containerd[1801]: time="2025-08-13T07:10:07.988762353Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.877561649s" Aug 13 07:10:07.988941 containerd[1801]: time="2025-08-13T07:10:07.988818033Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Aug 13 07:10:07.999185 containerd[1801]: time="2025-08-13T07:10:07.999132622Z" level=info msg="CreateContainer within sandbox \"97cb7c055b729ff2a9368485c606a2dd5334c24d2790ac313d4bdcaae43dd70c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 07:10:08.037920 containerd[1801]: time="2025-08-13T07:10:08.037867582Z" level=info msg="CreateContainer within sandbox \"97cb7c055b729ff2a9368485c606a2dd5334c24d2790ac313d4bdcaae43dd70c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88\"" Aug 13 07:10:08.040309 containerd[1801]: time="2025-08-13T07:10:08.039350621Z" level=info msg="StartContainer for \"87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88\"" Aug 13 07:10:08.075741 systemd[1]: Started cri-containerd-87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88.scope - libcontainer container 87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88. 
Aug 13 07:10:08.107697 containerd[1801]: time="2025-08-13T07:10:08.107637510Z" level=info msg="StartContainer for \"87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88\" returns successfully" Aug 13 07:10:08.249365 containerd[1801]: time="2025-08-13T07:10:08.249214803Z" level=info msg="CreateContainer within sandbox \"7bb99db29273fc5201c18888f00e36b173e4c1e6609011c55b78f862d725f506\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 07:10:08.322706 containerd[1801]: time="2025-08-13T07:10:08.322640327Z" level=info msg="CreateContainer within sandbox \"7bb99db29273fc5201c18888f00e36b173e4c1e6609011c55b78f862d725f506\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b859c2734c5a1a9ea1cd5c6e6b1ef0378c083f64a68bcd17f210a95f5035d6fd\"" Aug 13 07:10:08.323434 containerd[1801]: time="2025-08-13T07:10:08.323397966Z" level=info msg="StartContainer for \"b859c2734c5a1a9ea1cd5c6e6b1ef0378c083f64a68bcd17f210a95f5035d6fd\"" Aug 13 07:10:08.387493 systemd[1]: Started cri-containerd-b859c2734c5a1a9ea1cd5c6e6b1ef0378c083f64a68bcd17f210a95f5035d6fd.scope - libcontainer container b859c2734c5a1a9ea1cd5c6e6b1ef0378c083f64a68bcd17f210a95f5035d6fd. Aug 13 07:10:08.427471 systemd[1]: cri-containerd-b859c2734c5a1a9ea1cd5c6e6b1ef0378c083f64a68bcd17f210a95f5035d6fd.scope: Deactivated successfully. Aug 13 07:10:08.433210 containerd[1801]: time="2025-08-13T07:10:08.432399893Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded2ad4b9_d11d_40b6_804c_06bf0efe451b.slice/cri-containerd-b859c2734c5a1a9ea1cd5c6e6b1ef0378c083f64a68bcd17f210a95f5035d6fd.scope/memory.events\": no such file or directory" Aug 13 07:10:08.436123 containerd[1801]: time="2025-08-13T07:10:08.436065649Z" level=info msg="StartContainer for \"b859c2734c5a1a9ea1cd5c6e6b1ef0378c083f64a68bcd17f210a95f5035d6fd\" returns successfully" Aug 13 07:10:08.493996 kubelet[3367]: I0813 07:10:08.493554 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-zppqc" podStartSLOduration=1.961367426 podStartE2EDuration="12.493534789s" podCreationTimestamp="2025-08-13 07:09:56 +0000 UTC" firstStartedPulling="2025-08-13 07:09:57.457686909 +0000 UTC m=+7.730588920" lastFinishedPulling="2025-08-13 07:10:07.989854272 +0000 UTC m=+18.262756283" observedRunningTime="2025-08-13 07:10:08.35748225 +0000 UTC m=+18.630384261" watchObservedRunningTime="2025-08-13 07:10:08.493534789 +0000 UTC m=+18.766436800" Aug 13 07:10:08.745032 containerd[1801]: time="2025-08-13T07:10:08.744818329Z" level=info msg="shim disconnected" id=b859c2734c5a1a9ea1cd5c6e6b1ef0378c083f64a68bcd17f210a95f5035d6fd namespace=k8s.io Aug 13 07:10:08.745032 containerd[1801]: time="2025-08-13T07:10:08.744873888Z" level=warning msg="cleaning up after shim disconnected" id=b859c2734c5a1a9ea1cd5c6e6b1ef0378c083f64a68bcd17f210a95f5035d6fd namespace=k8s.io Aug 13 07:10:08.745032 containerd[1801]: time="2025-08-13T07:10:08.744882688Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:10:09.252774 containerd[1801]: time="2025-08-13T07:10:09.252462162Z" level=info msg="CreateContainer within sandbox \"7bb99db29273fc5201c18888f00e36b173e4c1e6609011c55b78f862d725f506\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 07:10:09.261976 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-b859c2734c5a1a9ea1cd5c6e6b1ef0378c083f64a68bcd17f210a95f5035d6fd-rootfs.mount: Deactivated successfully. Aug 13 07:10:09.303452 containerd[1801]: time="2025-08-13T07:10:09.303401909Z" level=info msg="CreateContainer within sandbox \"7bb99db29273fc5201c18888f00e36b173e4c1e6609011c55b78f862d725f506\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42\"" Aug 13 07:10:09.304686 containerd[1801]: time="2025-08-13T07:10:09.304525548Z" level=info msg="StartContainer for \"022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42\"" Aug 13 07:10:09.339533 systemd[1]: Started cri-containerd-022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42.scope - libcontainer container 022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42. Aug 13 07:10:09.370519 containerd[1801]: time="2025-08-13T07:10:09.370328959Z" level=info msg="StartContainer for \"022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42\" returns successfully" Aug 13 07:10:09.432694 kubelet[3367]: I0813 07:10:09.431906 3367 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 07:10:09.500991 systemd[1]: Created slice kubepods-burstable-pod4b2f8758_34cb_4888_b1ad_1fc463b4d67f.slice - libcontainer container kubepods-burstable-pod4b2f8758_34cb_4888_b1ad_1fc463b4d67f.slice. Aug 13 07:10:09.510783 systemd[1]: Created slice kubepods-burstable-pod5ffaefa1_c536_4e72_aa56_6d6288fccc2c.slice - libcontainer container kubepods-burstable-pod5ffaefa1_c536_4e72_aa56_6d6288fccc2c.slice. Aug 13 07:10:09.568803 kubelet[3367]: I0813 07:10:09.568756 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b2f8758-34cb-4888-b1ad-1fc463b4d67f-config-volume\") pod \"coredns-674b8bbfcf-dsqpb\" (UID: \"4b2f8758-34cb-4888-b1ad-1fc463b4d67f\") " pod="kube-system/coredns-674b8bbfcf-dsqpb" Aug 13 07:10:09.569136 kubelet[3367]: I0813 07:10:09.568822 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ffaefa1-c536-4e72-aa56-6d6288fccc2c-config-volume\") pod \"coredns-674b8bbfcf-kldtx\" (UID: \"5ffaefa1-c536-4e72-aa56-6d6288fccc2c\") " pod="kube-system/coredns-674b8bbfcf-kldtx" Aug 13 07:10:09.569136 kubelet[3367]: I0813 07:10:09.568848 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx624\" (UniqueName: \"kubernetes.io/projected/5ffaefa1-c536-4e72-aa56-6d6288fccc2c-kube-api-access-sx624\") pod \"coredns-674b8bbfcf-kldtx\" (UID: \"5ffaefa1-c536-4e72-aa56-6d6288fccc2c\") " pod="kube-system/coredns-674b8bbfcf-kldtx" Aug 13 07:10:09.569136 kubelet[3367]: I0813 07:10:09.568865 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cszzk\" (UniqueName: \"kubernetes.io/projected/4b2f8758-34cb-4888-b1ad-1fc463b4d67f-kube-api-access-cszzk\") pod \"coredns-674b8bbfcf-dsqpb\" (UID: \"4b2f8758-34cb-4888-b1ad-1fc463b4d67f\") " pod="kube-system/coredns-674b8bbfcf-dsqpb" Aug 13 07:10:09.809352 containerd[1801]: time="2025-08-13T07:10:09.807731785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dsqpb,Uid:4b2f8758-34cb-4888-b1ad-1fc463b4d67f,Namespace:kube-system,Attempt:0,}" Aug 13 07:10:09.815153 
containerd[1801]: time="2025-08-13T07:10:09.815100018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kldtx,Uid:5ffaefa1-c536-4e72-aa56-6d6288fccc2c,Namespace:kube-system,Attempt:0,}" Aug 13 07:10:10.284420 kubelet[3367]: I0813 07:10:10.283859 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x2d2d" podStartSLOduration=7.55341965 podStartE2EDuration="14.283837891s" podCreationTimestamp="2025-08-13 07:09:56 +0000 UTC" firstStartedPulling="2025-08-13 07:09:57.380217063 +0000 UTC m=+7.653119074" lastFinishedPulling="2025-08-13 07:10:04.110635304 +0000 UTC m=+14.383537315" observedRunningTime="2025-08-13 07:10:10.279735376 +0000 UTC m=+20.552637347" watchObservedRunningTime="2025-08-13 07:10:10.283837891 +0000 UTC m=+20.556739982" Aug 13 07:10:12.304958 systemd-networkd[1470]: cilium_host: Link UP Aug 13 07:10:12.307316 systemd-networkd[1470]: cilium_net: Link UP Aug 13 07:10:12.308439 systemd-networkd[1470]: cilium_net: Gained carrier Aug 13 07:10:12.308739 systemd-networkd[1470]: cilium_host: Gained carrier Aug 13 07:10:12.308896 systemd-networkd[1470]: cilium_net: Gained IPv6LL Aug 13 07:10:12.309075 systemd-networkd[1470]: cilium_host: Gained IPv6LL Aug 13 07:10:12.442620 systemd-networkd[1470]: cilium_vxlan: Link UP Aug 13 07:10:12.442632 systemd-networkd[1470]: cilium_vxlan: Gained carrier Aug 13 07:10:12.712444 kernel: NET: Registered PF_ALG protocol family Aug 13 07:10:13.411650 systemd-networkd[1470]: lxc_health: Link UP Aug 13 07:10:13.415157 systemd-networkd[1470]: lxc_health: Gained carrier Aug 13 07:10:13.942475 systemd-networkd[1470]: lxc391e958aeb7c: Link UP Aug 13 07:10:13.943458 systemd-networkd[1470]: lxc8fe16d9b66a0: Link UP Aug 13 07:10:13.956297 kernel: eth0: renamed from tmp0ce75 Aug 13 07:10:13.961394 systemd-networkd[1470]: lxc391e958aeb7c: Gained carrier Aug 13 07:10:13.967376 kernel: eth0: renamed from tmpad60b Aug 13 07:10:13.972769 systemd-networkd[1470]: lxc8fe16d9b66a0: Gained carrier Aug 13 07:10:14.187490 systemd-networkd[1470]: cilium_vxlan: Gained IPv6LL Aug 13 07:10:14.763417 systemd-networkd[1470]: lxc_health: Gained IPv6LL Aug 13 07:10:15.084436 systemd-networkd[1470]: lxc8fe16d9b66a0: Gained IPv6LL Aug 13 07:10:15.659430 systemd-networkd[1470]: lxc391e958aeb7c: Gained IPv6LL Aug 13 07:10:17.861432 containerd[1801]: time="2025-08-13T07:10:17.860794420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:10:17.861432 containerd[1801]: time="2025-08-13T07:10:17.860864780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:10:17.861432 containerd[1801]: time="2025-08-13T07:10:17.860883980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:17.862540 containerd[1801]: time="2025-08-13T07:10:17.862351098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:17.891215 systemd[1]: Started cri-containerd-ad60b74f03e7a12c831ecf6a06765fd02e8fbae4a9b046c64e260bd7f474a9cb.scope - libcontainer container ad60b74f03e7a12c831ecf6a06765fd02e8fbae4a9b046c64e260bd7f474a9cb. Aug 13 07:10:17.902284 containerd[1801]: time="2025-08-13T07:10:17.902066501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:10:17.902647 containerd[1801]: time="2025-08-13T07:10:17.902468061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:10:17.902647 containerd[1801]: time="2025-08-13T07:10:17.902488301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:17.903468 containerd[1801]: time="2025-08-13T07:10:17.903168460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:17.937714 systemd[1]: Started cri-containerd-0ce75a064d8bb9050d3a1ff220f81dd398c5d267c5c394efa6edcb9253ed5ae2.scope - libcontainer container 0ce75a064d8bb9050d3a1ff220f81dd398c5d267c5c394efa6edcb9253ed5ae2. Aug 13 07:10:17.961109 containerd[1801]: time="2025-08-13T07:10:17.960755367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kldtx,Uid:5ffaefa1-c536-4e72-aa56-6d6288fccc2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad60b74f03e7a12c831ecf6a06765fd02e8fbae4a9b046c64e260bd7f474a9cb\"" Aug 13 07:10:17.981121 containerd[1801]: time="2025-08-13T07:10:17.981060668Z" level=info msg="CreateContainer within sandbox \"ad60b74f03e7a12c831ecf6a06765fd02e8fbae4a9b046c64e260bd7f474a9cb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:10:17.992570 containerd[1801]: time="2025-08-13T07:10:17.992516978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dsqpb,Uid:4b2f8758-34cb-4888-b1ad-1fc463b4d67f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ce75a064d8bb9050d3a1ff220f81dd398c5d267c5c394efa6edcb9253ed5ae2\"" Aug 13 07:10:18.005884 containerd[1801]: time="2025-08-13T07:10:18.005731645Z" level=info msg="CreateContainer within sandbox \"0ce75a064d8bb9050d3a1ff220f81dd398c5d267c5c394efa6edcb9253ed5ae2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:10:18.080412 containerd[1801]: time="2025-08-13T07:10:18.080344936Z" level=info msg="CreateContainer within sandbox \"ad60b74f03e7a12c831ecf6a06765fd02e8fbae4a9b046c64e260bd7f474a9cb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ef161b948640a3db57f3e77d0bd8425ab941a4ad5410fb9dbfddceb8a5fb241\"" Aug 13 07:10:18.081805 containerd[1801]: time="2025-08-13T07:10:18.081726175Z" level=info msg="StartContainer for \"8ef161b948640a3db57f3e77d0bd8425ab941a4ad5410fb9dbfddceb8a5fb241\"" Aug 13 07:10:18.093536 containerd[1801]: time="2025-08-13T07:10:18.093484124Z" level=info msg="CreateContainer within sandbox \"0ce75a064d8bb9050d3a1ff220f81dd398c5d267c5c394efa6edcb9253ed5ae2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3e1c64afe085a797b9ffe3c7aec6841209d513e32143ab397fa3afc0de621891\"" Aug 13 07:10:18.095972 containerd[1801]: time="2025-08-13T07:10:18.095831922Z" level=info msg="StartContainer for \"3e1c64afe085a797b9ffe3c7aec6841209d513e32143ab397fa3afc0de621891\"" Aug 13 07:10:18.125859 systemd[1]: Started cri-containerd-8ef161b948640a3db57f3e77d0bd8425ab941a4ad5410fb9dbfddceb8a5fb241.scope - libcontainer container 8ef161b948640a3db57f3e77d0bd8425ab941a4ad5410fb9dbfddceb8a5fb241. Aug 13 07:10:18.136509 systemd[1]: Started cri-containerd-3e1c64afe085a797b9ffe3c7aec6841209d513e32143ab397fa3afc0de621891.scope - libcontainer container 3e1c64afe085a797b9ffe3c7aec6841209d513e32143ab397fa3afc0de621891. 
Aug 13 07:10:18.183435 containerd[1801]: time="2025-08-13T07:10:18.181869642Z" level=info msg="StartContainer for \"8ef161b948640a3db57f3e77d0bd8425ab941a4ad5410fb9dbfddceb8a5fb241\" returns successfully" Aug 13 07:10:18.198674 containerd[1801]: time="2025-08-13T07:10:18.198610426Z" level=info msg="StartContainer for \"3e1c64afe085a797b9ffe3c7aec6841209d513e32143ab397fa3afc0de621891\" returns successfully" Aug 13 07:10:18.288858 kubelet[3367]: I0813 07:10:18.287936 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-kldtx" podStartSLOduration=22.287920424 podStartE2EDuration="22.287920424s" podCreationTimestamp="2025-08-13 07:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:10:18.285739106 +0000 UTC m=+28.558641157" watchObservedRunningTime="2025-08-13 07:10:18.287920424 +0000 UTC m=+28.560822435" Aug 13 07:10:18.359378 kubelet[3367]: I0813 07:10:18.356773 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dsqpb" podStartSLOduration=22.3567534 podStartE2EDuration="22.3567534s" podCreationTimestamp="2025-08-13 07:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:10:18.353663683 +0000 UTC m=+28.626565694" watchObservedRunningTime="2025-08-13 07:10:18.3567534 +0000 UTC m=+28.629655411" Aug 13 07:10:25.257989 kubelet[3367]: I0813 07:10:25.257900 3367 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:11:29.435580 systemd[1]: Started sshd@7-10.200.20.40:22-10.200.16.10:34076.service - OpenSSH per-connection server daemon (10.200.16.10:34076). Aug 13 07:11:29.930749 sshd[4769]: Accepted publickey for core from 10.200.16.10 port 34076 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw Aug 13 07:11:29.932960 sshd-session[4769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:29.937759 systemd-logind[1723]: New session 10 of user core. Aug 13 07:11:29.943441 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 07:11:30.363302 sshd[4771]: Connection closed by 10.200.16.10 port 34076 Aug 13 07:11:30.363898 sshd-session[4769]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:30.367767 systemd[1]: sshd@7-10.200.20.40:22-10.200.16.10:34076.service: Deactivated successfully. Aug 13 07:11:30.369759 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 07:11:30.370635 systemd-logind[1723]: Session 10 logged out. Waiting for processes to exit. Aug 13 07:11:30.371902 systemd-logind[1723]: Removed session 10. Aug 13 07:11:35.462529 systemd[1]: Started sshd@8-10.200.20.40:22-10.200.16.10:51078.service - OpenSSH per-connection server daemon (10.200.16.10:51078). Aug 13 07:11:35.949959 sshd[4783]: Accepted publickey for core from 10.200.16.10 port 51078 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw Aug 13 07:11:35.951701 sshd-session[4783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:35.957224 systemd-logind[1723]: New session 11 of user core. Aug 13 07:11:35.966453 systemd[1]: Started session-11.scope - Session 11 of User core. 
Aug 13 07:11:36.361595 sshd[4785]: Connection closed by 10.200.16.10 port 51078 Aug 13 07:11:36.362179 sshd-session[4783]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:36.365640 systemd[1]: sshd@8-10.200.20.40:22-10.200.16.10:51078.service: Deactivated successfully. Aug 13 07:11:36.367425 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 07:11:36.368085 systemd-logind[1723]: Session 11 logged out. Waiting for processes to exit. Aug 13 07:11:36.368983 systemd-logind[1723]: Removed session 11. Aug 13 07:11:41.454229 systemd[1]: Started sshd@9-10.200.20.40:22-10.200.16.10:34564.service - OpenSSH per-connection server daemon (10.200.16.10:34564). Aug 13 07:11:41.950155 sshd[4797]: Accepted publickey for core from 10.200.16.10 port 34564 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw Aug 13 07:11:41.951664 sshd-session[4797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:41.956589 systemd-logind[1723]: New session 12 of user core. Aug 13 07:11:41.967448 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 07:11:42.358786 sshd[4799]: Connection closed by 10.200.16.10 port 34564 Aug 13 07:11:42.358334 sshd-session[4797]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:42.361916 systemd[1]: sshd@9-10.200.20.40:22-10.200.16.10:34564.service: Deactivated successfully. Aug 13 07:11:42.364349 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 07:11:42.365668 systemd-logind[1723]: Session 12 logged out. Waiting for processes to exit. Aug 13 07:11:42.366836 systemd-logind[1723]: Removed session 12. Aug 13 07:11:47.452591 systemd[1]: Started sshd@10-10.200.20.40:22-10.200.16.10:34574.service - OpenSSH per-connection server daemon (10.200.16.10:34574). Aug 13 07:11:47.942043 sshd[4812]: Accepted publickey for core from 10.200.16.10 port 34574 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw Aug 13 07:11:47.943554 sshd-session[4812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:47.947840 systemd-logind[1723]: New session 13 of user core. Aug 13 07:11:47.955454 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 07:11:48.360413 sshd[4814]: Connection closed by 10.200.16.10 port 34574 Aug 13 07:11:48.360209 sshd-session[4812]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:48.364508 systemd[1]: sshd@10-10.200.20.40:22-10.200.16.10:34574.service: Deactivated successfully. Aug 13 07:11:48.366835 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 07:11:48.367819 systemd-logind[1723]: Session 13 logged out. Waiting for processes to exit. Aug 13 07:11:48.368782 systemd-logind[1723]: Removed session 13. Aug 13 07:11:48.449969 systemd[1]: Started sshd@11-10.200.20.40:22-10.200.16.10:34588.service - OpenSSH per-connection server daemon (10.200.16.10:34588). Aug 13 07:11:48.950599 sshd[4827]: Accepted publickey for core from 10.200.16.10 port 34588 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw Aug 13 07:11:48.951965 sshd-session[4827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:48.956287 systemd-logind[1723]: New session 14 of user core. Aug 13 07:11:48.967442 systemd[1]: Started session-14.scope - Session 14 of User core. 
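Each SSH connection above follows the same pattern: `Accepted publickey`, `pam_unix … session opened`, `New session N of user core`, `session closed`, `Removed session N`, service deactivated. As a hedged sketch of pairing those open/close events to measure how long each session lasted (generic log matching, with two systemd-logind lines copied verbatim from above as the sample):

```python
import re
from datetime import datetime

OPEN = re.compile(r'(?P<ts>\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: New session (?P<id>\d+) of user')
CLOSE = re.compile(r'(?P<ts>\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (?P<id>\d+)\.')

def ts(raw: str, year: int = 2025) -> datetime:
    # Journal short timestamps carry no year, so one is supplied here.
    return datetime.strptime(f"{year} {raw}", "%Y %b %d %H:%M:%S.%f")

def session_durations(lines):
    """Yield (session_id, seconds) for every opened session that was also closed."""
    opened = {}
    for line in lines:
        if m := OPEN.search(line):
            opened[m["id"]] = ts(m["ts"])
        elif (m := CLOSE.search(line)) and m["id"] in opened:
            yield m["id"], (ts(m["ts"]) - opened.pop(m["id"])).total_seconds()

sample = [
    "Aug 13 07:11:29.937759 systemd-logind[1723]: New session 10 of user core.",
    "Aug 13 07:11:30.371902 systemd-logind[1723]: Removed session 10.",
]
print(list(session_durations(sample)))  # session 10 lasted ~0.43 s
```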
Aug 13 07:11:49.413761 sshd[4829]: Connection closed by 10.200.16.10 port 34588 Aug 13 07:11:49.414523 sshd-session[4827]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:49.418246 systemd-logind[1723]: Session 14 logged out. Waiting for processes to exit. Aug 13 07:11:49.418962 systemd[1]: sshd@11-10.200.20.40:22-10.200.16.10:34588.service: Deactivated successfully. Aug 13 07:11:49.421576 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 07:11:49.422914 systemd-logind[1723]: Removed session 14. Aug 13 07:11:49.512572 systemd[1]: Started sshd@12-10.200.20.40:22-10.200.16.10:34594.service - OpenSSH per-connection server daemon (10.200.16.10:34594). Aug 13 07:11:50.002729 sshd[4839]: Accepted publickey for core from 10.200.16.10 port 34594 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw Aug 13 07:11:50.003888 sshd-session[4839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:50.008813 systemd-logind[1723]: New session 15 of user core. Aug 13 07:11:50.018460 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 07:11:50.433814 sshd[4841]: Connection closed by 10.200.16.10 port 34594 Aug 13 07:11:50.434490 sshd-session[4839]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:50.438224 systemd[1]: sshd@12-10.200.20.40:22-10.200.16.10:34594.service: Deactivated successfully. Aug 13 07:11:50.440418 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 07:11:50.442837 systemd-logind[1723]: Session 15 logged out. Waiting for processes to exit. Aug 13 07:11:50.444847 systemd-logind[1723]: Removed session 15. Aug 13 07:11:55.530554 systemd[1]: Started sshd@13-10.200.20.40:22-10.200.16.10:37694.service - OpenSSH per-connection server daemon (10.200.16.10:37694). Aug 13 07:11:56.025542 sshd[4855]: Accepted publickey for core from 10.200.16.10 port 37694 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw Aug 13 07:11:56.027181 sshd-session[4855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:56.032388 systemd-logind[1723]: New session 16 of user core. Aug 13 07:11:56.041471 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 07:11:56.434851 sshd[4857]: Connection closed by 10.200.16.10 port 37694 Aug 13 07:11:56.435456 sshd-session[4855]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:56.438303 systemd-logind[1723]: Session 16 logged out. Waiting for processes to exit. Aug 13 07:11:56.439607 systemd[1]: sshd@13-10.200.20.40:22-10.200.16.10:37694.service: Deactivated successfully. Aug 13 07:11:56.441638 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 07:11:56.443152 systemd-logind[1723]: Removed session 16. Aug 13 07:12:01.523985 systemd[1]: Started sshd@14-10.200.20.40:22-10.200.16.10:38650.service - OpenSSH per-connection server daemon (10.200.16.10:38650). Aug 13 07:12:02.022403 sshd[4871]: Accepted publickey for core from 10.200.16.10 port 38650 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw Aug 13 07:12:02.023774 sshd-session[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:12:02.027921 systemd-logind[1723]: New session 17 of user core. Aug 13 07:12:02.030408 systemd[1]: Started session-17.scope - Session 17 of User core. 
Aug 13 07:12:02.438288 sshd[4873]: Connection closed by 10.200.16.10 port 38650 Aug 13 07:12:02.438890 sshd-session[4871]: pam_unix(sshd:session): session closed for user core Aug 13 07:12:02.441848 systemd-logind[1723]: Session 17 logged out. Waiting for processes to exit. Aug 13 07:12:02.442020 systemd[1]: sshd@14-10.200.20.40:22-10.200.16.10:38650.service: Deactivated successfully. Aug 13 07:12:02.444956 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 07:12:02.447297 systemd-logind[1723]: Removed session 17. Aug 13 07:12:02.532567 systemd[1]: Started sshd@15-10.200.20.40:22-10.200.16.10:38660.service - OpenSSH per-connection server daemon (10.200.16.10:38660). Aug 13 07:12:02.986397 sshd[4885]: Accepted publickey for core from 10.200.16.10 port 38660 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw Aug 13 07:12:02.987801 sshd-session[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:12:02.992576 systemd-logind[1723]: New session 18 of user core. Aug 13 07:12:02.998491 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 07:12:03.441664 sshd[4887]: Connection closed by 10.200.16.10 port 38660 Aug 13 07:12:03.441568 sshd-session[4885]: pam_unix(sshd:session): session closed for user core Aug 13 07:12:03.444970 systemd-logind[1723]: Session 18 logged out. Waiting for processes to exit. Aug 13 07:12:03.445570 systemd[1]: sshd@15-10.200.20.40:22-10.200.16.10:38660.service: Deactivated successfully. Aug 13 07:12:03.447950 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 07:12:03.449092 systemd-logind[1723]: Removed session 18. Aug 13 07:12:03.532531 systemd[1]: Started sshd@16-10.200.20.40:22-10.200.16.10:38672.service - OpenSSH per-connection server daemon (10.200.16.10:38672). Aug 13 07:12:04.022010 sshd[4896]: Accepted publickey for core from 10.200.16.10 port 38672 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw Aug 13 07:12:04.023435 sshd-session[4896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:12:04.027938 systemd-logind[1723]: New session 19 of user core. Aug 13 07:12:04.032456 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 07:12:04.832679 sshd[4898]: Connection closed by 10.200.16.10 port 38672 Aug 13 07:12:04.832580 sshd-session[4896]: pam_unix(sshd:session): session closed for user core Aug 13 07:12:04.837629 systemd[1]: sshd@16-10.200.20.40:22-10.200.16.10:38672.service: Deactivated successfully. Aug 13 07:12:04.841328 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 07:12:04.842415 systemd-logind[1723]: Session 19 logged out. Waiting for processes to exit. Aug 13 07:12:04.843414 systemd-logind[1723]: Removed session 19. Aug 13 07:12:04.923251 systemd[1]: Started sshd@17-10.200.20.40:22-10.200.16.10:38674.service - OpenSSH per-connection server daemon (10.200.16.10:38674). Aug 13 07:12:05.419939 sshd[4916]: Accepted publickey for core from 10.200.16.10 port 38674 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw Aug 13 07:12:05.421390 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:12:05.425764 systemd-logind[1723]: New session 20 of user core. Aug 13 07:12:05.432677 systemd[1]: Started session-20.scope - Session 20 of User core. 
Aug 13 07:12:05.973401 sshd[4918]: Connection closed by 10.200.16.10 port 38674 Aug 13 07:12:05.973819 sshd-session[4916]: pam_unix(sshd:session): session closed for user core Aug 13 07:12:05.978183 systemd[1]: sshd@17-10.200.20.40:22-10.200.16.10:38674.service: Deactivated successfully. Aug 13 07:12:05.980806 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 07:12:05.982018 systemd-logind[1723]: Session 20 logged out. Waiting for processes to exit. Aug 13 07:12:05.983079 systemd-logind[1723]: Removed session 20. Aug 13 07:12:06.061368 systemd[1]: Started sshd@18-10.200.20.40:22-10.200.16.10:38690.service - OpenSSH per-connection server daemon (10.200.16.10:38690). Aug 13 07:12:06.518830 sshd[4928]: Accepted publickey for core from 10.200.16.10 port 38690 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw Aug 13 07:12:06.520305 sshd-session[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:12:06.525221 systemd-logind[1723]: New session 21 of user core. Aug 13 07:12:06.531501 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 07:12:06.919265 sshd[4930]: Connection closed by 10.200.16.10 port 38690 Aug 13 07:12:06.919875 sshd-session[4928]: pam_unix(sshd:session): session closed for user core Aug 13 07:12:06.923678 systemd[1]: sshd@18-10.200.20.40:22-10.200.16.10:38690.service: Deactivated successfully. Aug 13 07:12:06.925560 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 07:12:06.927039 systemd-logind[1723]: Session 21 logged out. Waiting for processes to exit. Aug 13 07:12:06.928518 systemd-logind[1723]: Removed session 21. Aug 13 07:12:12.017536 systemd[1]: Started sshd@19-10.200.20.40:22-10.200.16.10:43632.service - OpenSSH per-connection server daemon (10.200.16.10:43632). Aug 13 07:12:12.507780 sshd[4944]: Accepted publickey for core from 10.200.16.10 port 43632 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw Aug 13 07:12:12.509221 sshd-session[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:12:12.514207 systemd-logind[1723]: New session 22 of user core. Aug 13 07:12:12.521638 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 07:12:12.924402 sshd[4946]: Connection closed by 10.200.16.10 port 43632 Aug 13 07:12:12.924991 sshd-session[4944]: pam_unix(sshd:session): session closed for user core Aug 13 07:12:12.927972 systemd-logind[1723]: Session 22 logged out. Waiting for processes to exit. Aug 13 07:12:12.928914 systemd[1]: sshd@19-10.200.20.40:22-10.200.16.10:43632.service: Deactivated successfully. Aug 13 07:12:12.931689 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 07:12:12.933706 systemd-logind[1723]: Removed session 22. Aug 13 07:12:18.024110 systemd[1]: Started sshd@20-10.200.20.40:22-10.200.16.10:43648.service - OpenSSH per-connection server daemon (10.200.16.10:43648). Aug 13 07:12:18.518022 sshd[4959]: Accepted publickey for core from 10.200.16.10 port 43648 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw Aug 13 07:12:18.519609 sshd-session[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:12:18.524314 systemd-logind[1723]: New session 23 of user core. Aug 13 07:12:18.534450 systemd[1]: Started session-23.scope - Session 23 of User core. 
Aug 13 07:12:18.928357 sshd[4961]: Connection closed by 10.200.16.10 port 43648 Aug 13 07:12:18.927353 sshd-session[4959]: pam_unix(sshd:session): session closed for user core Aug 13 07:12:18.930245 systemd[1]: sshd@20-10.200.20.40:22-10.200.16.10:43648.service: Deactivated successfully. Aug 13 07:12:18.932097 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 07:12:18.934705 systemd-logind[1723]: Session 23 logged out. Waiting for processes to exit. Aug 13 07:12:18.936161 systemd-logind[1723]: Removed session 23. Aug 13 07:12:19.020538 systemd[1]: Started sshd@21-10.200.20.40:22-10.200.16.10:43650.service - OpenSSH per-connection server daemon (10.200.16.10:43650). Aug 13 07:12:19.513878 sshd[4973]: Accepted publickey for core from 10.200.16.10 port 43650 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw Aug 13 07:12:19.515305 sshd-session[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:12:19.520464 systemd-logind[1723]: New session 24 of user core. Aug 13 07:12:19.528442 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 07:12:21.472110 containerd[1801]: time="2025-08-13T07:12:21.471989857Z" level=info msg="StopContainer for \"87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88\" with timeout 30 (s)" Aug 13 07:12:21.474812 containerd[1801]: time="2025-08-13T07:12:21.472823737Z" level=info msg="Stop container \"87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88\" with signal terminated" Aug 13 07:12:21.489484 containerd[1801]: time="2025-08-13T07:12:21.489360484Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:12:21.494555 systemd[1]: cri-containerd-87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88.scope: Deactivated successfully. Aug 13 07:12:21.503757 containerd[1801]: time="2025-08-13T07:12:21.503720032Z" level=info msg="StopContainer for \"022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42\" with timeout 2 (s)" Aug 13 07:12:21.504396 containerd[1801]: time="2025-08-13T07:12:21.504370032Z" level=info msg="Stop container \"022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42\" with signal terminated" Aug 13 07:12:21.512220 systemd-networkd[1470]: lxc_health: Link DOWN Aug 13 07:12:21.512228 systemd-networkd[1470]: lxc_health: Lost carrier Aug 13 07:12:21.529509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88-rootfs.mount: Deactivated successfully. Aug 13 07:12:21.532536 systemd[1]: cri-containerd-022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42.scope: Deactivated successfully. Aug 13 07:12:21.533267 systemd[1]: cri-containerd-022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42.scope: Consumed 6.751s CPU time, 123.8M memory peak, 144K read from disk, 12.9M written to disk. Aug 13 07:12:21.554935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42-rootfs.mount: Deactivated successfully. 
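When the cilium container scope is deactivated below, systemd prints an accounting summary (`Consumed 6.751s CPU time, 123.8M memory peak, 144K read from disk, 12.9M written to disk`). A rough sketch of turning that sentence into numbers, assuming the base-1024 suffixes shown in this log (the field names are taken verbatim from the message; nothing systemd-specific is assumed beyond that):

```python
import re

LINE = ("systemd[1]: cri-containerd-022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42.scope: "
        "Consumed 6.751s CPU time, 123.8M memory peak, 144K read from disk, 12.9M written to disk.")

def parse_consumed(line: str) -> dict:
    """Map each 'Consumed ...' figure to a float (seconds for CPU time, bytes for sizes)."""
    scale = {"s": 1, "K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}  # assumes base-1024 suffixes
    out = {}
    for value, unit, label in re.findall(r'([\d.]+)([sKMG]) ([^,.]+)', line.split("Consumed", 1)[1]):
        out[label.strip()] = float(value) * scale[unit]
    return out

# CPU time stays in seconds; the size fields come out in bytes (144K -> 147456.0).
print(parse_consumed(LINE))
```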
Aug 13 07:12:21.616780 containerd[1801]: time="2025-08-13T07:12:21.616557424Z" level=info msg="shim disconnected" id=87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88 namespace=k8s.io Aug 13 07:12:21.616780 containerd[1801]: time="2025-08-13T07:12:21.616624904Z" level=warning msg="cleaning up after shim disconnected" id=87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88 namespace=k8s.io Aug 13 07:12:21.616780 containerd[1801]: time="2025-08-13T07:12:21.616633464Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:12:21.616780 containerd[1801]: time="2025-08-13T07:12:21.616731743Z" level=info msg="shim disconnected" id=022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42 namespace=k8s.io Aug 13 07:12:21.616780 containerd[1801]: time="2025-08-13T07:12:21.616770983Z" level=warning msg="cleaning up after shim disconnected" id=022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42 namespace=k8s.io Aug 13 07:12:21.616780 containerd[1801]: time="2025-08-13T07:12:21.616779063Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:12:21.640740 containerd[1801]: time="2025-08-13T07:12:21.640688925Z" level=info msg="StopContainer for \"022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42\" returns successfully" Aug 13 07:12:21.641721 containerd[1801]: time="2025-08-13T07:12:21.641525364Z" level=info msg="StopPodSandbox for \"7bb99db29273fc5201c18888f00e36b173e4c1e6609011c55b78f862d725f506\"" Aug 13 07:12:21.641721 containerd[1801]: time="2025-08-13T07:12:21.641573924Z" level=info msg="Container to stop \"86de98aacf9be9f8cb19e56ec69b0d6830cad022526273fffdee1a74c622eed0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:12:21.641721 containerd[1801]: time="2025-08-13T07:12:21.641588244Z" level=info msg="Container to stop \"ab34eba6b01358c96ca8ee0e16a4a782dc4982036f1d1672a8e8113c558aca04\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:12:21.641721 containerd[1801]: time="2025-08-13T07:12:21.641597124Z" level=info msg="Container to stop \"b859c2734c5a1a9ea1cd5c6e6b1ef0378c083f64a68bcd17f210a95f5035d6fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:12:21.641721 containerd[1801]: time="2025-08-13T07:12:21.641605564Z" level=info msg="Container to stop \"022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:12:21.641721 containerd[1801]: time="2025-08-13T07:12:21.641614124Z" level=info msg="Container to stop \"4de72590f3b1ed8d5ad68b4c183aa6de2276b5c7cbe27ca5b5356adf08d62fa5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:12:21.644275 containerd[1801]: time="2025-08-13T07:12:21.644205642Z" level=info msg="StopContainer for \"87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88\" returns successfully" Aug 13 07:12:21.645086 containerd[1801]: time="2025-08-13T07:12:21.644898561Z" level=info msg="StopPodSandbox for \"97cb7c055b729ff2a9368485c606a2dd5334c24d2790ac313d4bdcaae43dd70c\"" Aug 13 07:12:21.645086 containerd[1801]: time="2025-08-13T07:12:21.644955161Z" level=info msg="Container to stop \"87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:12:21.645392 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-7bb99db29273fc5201c18888f00e36b173e4c1e6609011c55b78f862d725f506-shm.mount: Deactivated successfully. Aug 13 07:12:21.651192 systemd[1]: cri-containerd-7bb99db29273fc5201c18888f00e36b173e4c1e6609011c55b78f862d725f506.scope: Deactivated successfully. Aug 13 07:12:21.656044 systemd[1]: cri-containerd-97cb7c055b729ff2a9368485c606a2dd5334c24d2790ac313d4bdcaae43dd70c.scope: Deactivated successfully. Aug 13 07:12:21.696494 containerd[1801]: time="2025-08-13T07:12:21.696429041Z" level=info msg="shim disconnected" id=7bb99db29273fc5201c18888f00e36b173e4c1e6609011c55b78f862d725f506 namespace=k8s.io Aug 13 07:12:21.697194 containerd[1801]: time="2025-08-13T07:12:21.697031440Z" level=warning msg="cleaning up after shim disconnected" id=7bb99db29273fc5201c18888f00e36b173e4c1e6609011c55b78f862d725f506 namespace=k8s.io Aug 13 07:12:21.697194 containerd[1801]: time="2025-08-13T07:12:21.697054120Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:12:21.697194 containerd[1801]: time="2025-08-13T07:12:21.696917000Z" level=info msg="shim disconnected" id=97cb7c055b729ff2a9368485c606a2dd5334c24d2790ac313d4bdcaae43dd70c namespace=k8s.io Aug 13 07:12:21.697194 containerd[1801]: time="2025-08-13T07:12:21.697154160Z" level=warning msg="cleaning up after shim disconnected" id=97cb7c055b729ff2a9368485c606a2dd5334c24d2790ac313d4bdcaae43dd70c namespace=k8s.io Aug 13 07:12:21.697194 containerd[1801]: time="2025-08-13T07:12:21.697161600Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:12:21.715136 containerd[1801]: time="2025-08-13T07:12:21.715077866Z" level=info msg="TearDown network for sandbox \"97cb7c055b729ff2a9368485c606a2dd5334c24d2790ac313d4bdcaae43dd70c\" successfully" Aug 13 07:12:21.715136 containerd[1801]: time="2025-08-13T07:12:21.715123826Z" level=info msg="StopPodSandbox for \"97cb7c055b729ff2a9368485c606a2dd5334c24d2790ac313d4bdcaae43dd70c\" returns successfully" Aug 13 07:12:21.716944 containerd[1801]: time="2025-08-13T07:12:21.716899905Z" level=info msg="TearDown network for sandbox \"7bb99db29273fc5201c18888f00e36b173e4c1e6609011c55b78f862d725f506\" successfully" Aug 13 07:12:21.716944 containerd[1801]: time="2025-08-13T07:12:21.716940145Z" level=info msg="StopPodSandbox for \"7bb99db29273fc5201c18888f00e36b173e4c1e6609011c55b78f862d725f506\" returns successfully" Aug 13 07:12:21.812275 kubelet[3367]: I0813 07:12:21.810424 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-hubble-tls\") pod \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " Aug 13 07:12:21.812275 kubelet[3367]: I0813 07:12:21.810477 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-lib-modules\") pod \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " Aug 13 07:12:21.812275 kubelet[3367]: I0813 07:12:21.810498 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-cilium-run\") pod \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " Aug 13 07:12:21.812275 kubelet[3367]: I0813 07:12:21.810513 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-hostproc\") pod \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " Aug 13 07:12:21.812275 kubelet[3367]: I0813 07:12:21.810532 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12c1b4e6-dc1b-4943-8e08-b45c5a6beb38-cilium-config-path\") pod \"12c1b4e6-dc1b-4943-8e08-b45c5a6beb38\" (UID: \"12c1b4e6-dc1b-4943-8e08-b45c5a6beb38\") " Aug 13 07:12:21.812275 kubelet[3367]: I0813 07:12:21.810548 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-cni-path\") pod \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " Aug 13 07:12:21.812770 kubelet[3367]: I0813 07:12:21.810562 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-cilium-cgroup\") pod \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " Aug 13 07:12:21.812770 kubelet[3367]: I0813 07:12:21.810578 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-clustermesh-secrets\") pod \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " Aug 13 07:12:21.812770 kubelet[3367]: I0813 07:12:21.810593 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-host-proc-sys-net\") pod \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " Aug 13 07:12:21.812770 kubelet[3367]: I0813 07:12:21.810608 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-host-proc-sys-kernel\") pod \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " Aug 13 07:12:21.812770 kubelet[3367]: I0813 07:12:21.810622 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-etc-cni-netd\") pod \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " Aug 13 07:12:21.812770 kubelet[3367]: I0813 07:12:21.810641 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-bpf-maps\") pod \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " Aug 13 07:12:21.812894 kubelet[3367]: I0813 07:12:21.810657 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-cilium-config-path\") pod \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " Aug 13 07:12:21.812894 kubelet[3367]: I0813 07:12:21.810674 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlsnv\" (UniqueName: 
\"kubernetes.io/projected/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-kube-api-access-jlsnv\") pod \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " Aug 13 07:12:21.812894 kubelet[3367]: I0813 07:12:21.810715 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp629\" (UniqueName: \"kubernetes.io/projected/12c1b4e6-dc1b-4943-8e08-b45c5a6beb38-kube-api-access-kp629\") pod \"12c1b4e6-dc1b-4943-8e08-b45c5a6beb38\" (UID: \"12c1b4e6-dc1b-4943-8e08-b45c5a6beb38\") " Aug 13 07:12:21.812894 kubelet[3367]: I0813 07:12:21.810732 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-xtables-lock\") pod \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\" (UID: \"ed2ad4b9-d11d-40b6-804c-06bf0efe451b\") " Aug 13 07:12:21.812894 kubelet[3367]: I0813 07:12:21.810790 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ed2ad4b9-d11d-40b6-804c-06bf0efe451b" (UID: "ed2ad4b9-d11d-40b6-804c-06bf0efe451b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:12:21.813000 kubelet[3367]: I0813 07:12:21.810827 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ed2ad4b9-d11d-40b6-804c-06bf0efe451b" (UID: "ed2ad4b9-d11d-40b6-804c-06bf0efe451b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:12:21.813000 kubelet[3367]: I0813 07:12:21.810840 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ed2ad4b9-d11d-40b6-804c-06bf0efe451b" (UID: "ed2ad4b9-d11d-40b6-804c-06bf0efe451b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:12:21.813000 kubelet[3367]: I0813 07:12:21.810853 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-hostproc" (OuterVolumeSpecName: "hostproc") pod "ed2ad4b9-d11d-40b6-804c-06bf0efe451b" (UID: "ed2ad4b9-d11d-40b6-804c-06bf0efe451b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:12:21.813000 kubelet[3367]: I0813 07:12:21.812433 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-cni-path" (OuterVolumeSpecName: "cni-path") pod "ed2ad4b9-d11d-40b6-804c-06bf0efe451b" (UID: "ed2ad4b9-d11d-40b6-804c-06bf0efe451b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:12:21.813000 kubelet[3367]: I0813 07:12:21.812505 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ed2ad4b9-d11d-40b6-804c-06bf0efe451b" (UID: "ed2ad4b9-d11d-40b6-804c-06bf0efe451b"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:12:21.813639 kubelet[3367]: I0813 07:12:21.813610 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12c1b4e6-dc1b-4943-8e08-b45c5a6beb38-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "12c1b4e6-dc1b-4943-8e08-b45c5a6beb38" (UID: "12c1b4e6-dc1b-4943-8e08-b45c5a6beb38"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 07:12:21.815492 kubelet[3367]: I0813 07:12:21.813778 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ed2ad4b9-d11d-40b6-804c-06bf0efe451b" (UID: "ed2ad4b9-d11d-40b6-804c-06bf0efe451b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:12:21.815608 kubelet[3367]: I0813 07:12:21.813798 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ed2ad4b9-d11d-40b6-804c-06bf0efe451b" (UID: "ed2ad4b9-d11d-40b6-804c-06bf0efe451b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:12:21.815667 kubelet[3367]: I0813 07:12:21.813811 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ed2ad4b9-d11d-40b6-804c-06bf0efe451b" (UID: "ed2ad4b9-d11d-40b6-804c-06bf0efe451b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:12:21.815719 kubelet[3367]: I0813 07:12:21.813821 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ed2ad4b9-d11d-40b6-804c-06bf0efe451b" (UID: "ed2ad4b9-d11d-40b6-804c-06bf0efe451b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:12:21.815772 kubelet[3367]: I0813 07:12:21.814025 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ed2ad4b9-d11d-40b6-804c-06bf0efe451b" (UID: "ed2ad4b9-d11d-40b6-804c-06bf0efe451b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 07:12:21.817611 kubelet[3367]: I0813 07:12:21.817571 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ed2ad4b9-d11d-40b6-804c-06bf0efe451b" (UID: "ed2ad4b9-d11d-40b6-804c-06bf0efe451b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 07:12:21.818454 kubelet[3367]: I0813 07:12:21.818407 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12c1b4e6-dc1b-4943-8e08-b45c5a6beb38-kube-api-access-kp629" (OuterVolumeSpecName: "kube-api-access-kp629") pod "12c1b4e6-dc1b-4943-8e08-b45c5a6beb38" (UID: "12c1b4e6-dc1b-4943-8e08-b45c5a6beb38"). InnerVolumeSpecName "kube-api-access-kp629". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 07:12:21.819677 kubelet[3367]: I0813 07:12:21.819608 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-kube-api-access-jlsnv" (OuterVolumeSpecName: "kube-api-access-jlsnv") pod "ed2ad4b9-d11d-40b6-804c-06bf0efe451b" (UID: "ed2ad4b9-d11d-40b6-804c-06bf0efe451b"). InnerVolumeSpecName "kube-api-access-jlsnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 07:12:21.819677 kubelet[3367]: I0813 07:12:21.819658 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ed2ad4b9-d11d-40b6-804c-06bf0efe451b" (UID: "ed2ad4b9-d11d-40b6-804c-06bf0efe451b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 07:12:21.911859 kubelet[3367]: I0813 07:12:21.911645 3367 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-cilium-run\") on node \"ci-4230.2.2-a-6317daa899\" DevicePath \"\"" Aug 13 07:12:21.911859 kubelet[3367]: I0813 07:12:21.911679 3367 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-hostproc\") on node \"ci-4230.2.2-a-6317daa899\" DevicePath \"\"" Aug 13 07:12:21.911859 kubelet[3367]: I0813 07:12:21.911688 3367 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12c1b4e6-dc1b-4943-8e08-b45c5a6beb38-cilium-config-path\") on node \"ci-4230.2.2-a-6317daa899\" DevicePath \"\"" Aug 13 07:12:21.911859 kubelet[3367]: I0813 07:12:21.911698 3367 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-cni-path\") on node \"ci-4230.2.2-a-6317daa899\" DevicePath \"\"" Aug 13 07:12:21.911859 kubelet[3367]: I0813 07:12:21.911706 3367 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-cilium-cgroup\") on node \"ci-4230.2.2-a-6317daa899\" DevicePath \"\"" Aug 13 07:12:21.911859 kubelet[3367]: I0813 07:12:21.911716 3367 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-clustermesh-secrets\") on node \"ci-4230.2.2-a-6317daa899\" DevicePath \"\"" Aug 13 07:12:21.911859 kubelet[3367]: I0813 07:12:21.911724 3367 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-host-proc-sys-net\") on node \"ci-4230.2.2-a-6317daa899\" DevicePath \"\"" Aug 13 07:12:21.911859 kubelet[3367]: I0813 07:12:21.911732 3367 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-host-proc-sys-kernel\") on node \"ci-4230.2.2-a-6317daa899\" DevicePath \"\"" Aug 13 07:12:21.912184 kubelet[3367]: I0813 07:12:21.911774 3367 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-etc-cni-netd\") on node \"ci-4230.2.2-a-6317daa899\" DevicePath \"\"" Aug 13 
07:12:21.912184 kubelet[3367]: I0813 07:12:21.911782 3367 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-bpf-maps\") on node \"ci-4230.2.2-a-6317daa899\" DevicePath \"\"" Aug 13 07:12:21.912184 kubelet[3367]: I0813 07:12:21.911792 3367 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-cilium-config-path\") on node \"ci-4230.2.2-a-6317daa899\" DevicePath \"\"" Aug 13 07:12:21.912184 kubelet[3367]: I0813 07:12:21.911800 3367 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jlsnv\" (UniqueName: \"kubernetes.io/projected/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-kube-api-access-jlsnv\") on node \"ci-4230.2.2-a-6317daa899\" DevicePath \"\"" Aug 13 07:12:21.912184 kubelet[3367]: I0813 07:12:21.911808 3367 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kp629\" (UniqueName: \"kubernetes.io/projected/12c1b4e6-dc1b-4943-8e08-b45c5a6beb38-kube-api-access-kp629\") on node \"ci-4230.2.2-a-6317daa899\" DevicePath \"\"" Aug 13 07:12:21.912184 kubelet[3367]: I0813 07:12:21.911818 3367 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-xtables-lock\") on node \"ci-4230.2.2-a-6317daa899\" DevicePath \"\"" Aug 13 07:12:21.912184 kubelet[3367]: I0813 07:12:21.911827 3367 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-hubble-tls\") on node \"ci-4230.2.2-a-6317daa899\" DevicePath \"\"" Aug 13 07:12:21.912184 kubelet[3367]: I0813 07:12:21.911834 3367 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed2ad4b9-d11d-40b6-804c-06bf0efe451b-lib-modules\") on node \"ci-4230.2.2-a-6317daa899\" DevicePath \"\"" Aug 13 07:12:22.162879 systemd[1]: Removed slice kubepods-burstable-poded2ad4b9_d11d_40b6_804c_06bf0efe451b.slice - libcontainer container kubepods-burstable-poded2ad4b9_d11d_40b6_804c_06bf0efe451b.slice. Aug 13 07:12:22.163001 systemd[1]: kubepods-burstable-poded2ad4b9_d11d_40b6_804c_06bf0efe451b.slice: Consumed 6.829s CPU time, 124.3M memory peak, 144K read from disk, 12.9M written to disk. Aug 13 07:12:22.165167 systemd[1]: Removed slice kubepods-besteffort-pod12c1b4e6_dc1b_4943_8e08_b45c5a6beb38.slice - libcontainer container kubepods-besteffort-pod12c1b4e6_dc1b_4943_8e08_b45c5a6beb38.slice. Aug 13 07:12:22.462905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97cb7c055b729ff2a9368485c606a2dd5334c24d2790ac313d4bdcaae43dd70c-rootfs.mount: Deactivated successfully. Aug 13 07:12:22.463020 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-97cb7c055b729ff2a9368485c606a2dd5334c24d2790ac313d4bdcaae43dd70c-shm.mount: Deactivated successfully. Aug 13 07:12:22.463080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7bb99db29273fc5201c18888f00e36b173e4c1e6609011c55b78f862d725f506-rootfs.mount: Deactivated successfully. Aug 13 07:12:22.463143 systemd[1]: var-lib-kubelet-pods-12c1b4e6\x2ddc1b\x2d4943\x2d8e08\x2db45c5a6beb38-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkp629.mount: Deactivated successfully. 
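The mount unit names in the lines above encode paths using systemd's escaping rules: `-` stands for `/`, the leading `/` is dropped, and literal characters are hex-escaped (`\x2d` is `-`, `\x7e` is `~`). On a host, `systemd-escape --unescape --path` decodes them; as a small offline sketch of the same decoding applied to one unit name from this log (it yields the kube-api-access path that kubelet later reports cleaning up):

```python
import re

def unescape_mount_unit(unit: str) -> str:
    """Decode a systemd mount unit name back into the mounted path."""
    body = unit.removesuffix(".mount")
    # One pass: '\xNN' becomes the literal character, a bare '-' becomes '/'.
    decoded = re.sub(
        r'\\x([0-9a-fA-F]{2})|-',
        lambda m: chr(int(m.group(1), 16)) if m.group(1) else "/",
        body,
    )
    return "/" + decoded  # path escaping drops the leading slash

unit = (r"var-lib-kubelet-pods-12c1b4e6\x2ddc1b\x2d4943\x2d8e08\x2db45c5a6beb38"
        r"-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkp629.mount")
print(unescape_mount_unit(unit))
# /var/lib/kubelet/pods/12c1b4e6-dc1b-4943-8e08-b45c5a6beb38/volumes/kubernetes.io~projected/kube-api-access-kp629
```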
Aug 13 07:12:22.463197 systemd[1]: var-lib-kubelet-pods-ed2ad4b9\x2dd11d\x2d40b6\x2d804c\x2d06bf0efe451b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djlsnv.mount: Deactivated successfully. Aug 13 07:12:22.463249 systemd[1]: var-lib-kubelet-pods-ed2ad4b9\x2dd11d\x2d40b6\x2d804c\x2d06bf0efe451b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 07:12:22.463320 systemd[1]: var-lib-kubelet-pods-ed2ad4b9\x2dd11d\x2d40b6\x2d804c\x2d06bf0efe451b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 07:12:22.497346 kubelet[3367]: I0813 07:12:22.497148 3367 scope.go:117] "RemoveContainer" containerID="87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88" Aug 13 07:12:22.501902 containerd[1801]: time="2025-08-13T07:12:22.500842968Z" level=info msg="RemoveContainer for \"87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88\"" Aug 13 07:12:22.521030 containerd[1801]: time="2025-08-13T07:12:22.520981873Z" level=info msg="RemoveContainer for \"87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88\" returns successfully" Aug 13 07:12:22.521657 kubelet[3367]: I0813 07:12:22.521312 3367 scope.go:117] "RemoveContainer" containerID="87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88" Aug 13 07:12:22.521780 containerd[1801]: time="2025-08-13T07:12:22.521694272Z" level=error msg="ContainerStatus for \"87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88\": not found" Aug 13 07:12:22.522286 kubelet[3367]: E0813 07:12:22.521897 3367 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88\": not found" containerID="87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88" Aug 13 07:12:22.522286 kubelet[3367]: I0813 07:12:22.521934 3367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88"} err="failed to get container status \"87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88\": rpc error: code = NotFound desc = an error occurred when try to find container \"87867d42a04a980c652c4b88bebe7ca49f32f31e0c7ff0b4ab2b8a518503aa88\": not found" Aug 13 07:12:22.522286 kubelet[3367]: I0813 07:12:22.521970 3367 scope.go:117] "RemoveContainer" containerID="022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42" Aug 13 07:12:22.523551 containerd[1801]: time="2025-08-13T07:12:22.523517271Z" level=info msg="RemoveContainer for \"022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42\"" Aug 13 07:12:22.538139 containerd[1801]: time="2025-08-13T07:12:22.538091659Z" level=info msg="RemoveContainer for \"022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42\" returns successfully" Aug 13 07:12:22.538367 kubelet[3367]: I0813 07:12:22.538339 3367 scope.go:117] "RemoveContainer" containerID="b859c2734c5a1a9ea1cd5c6e6b1ef0378c083f64a68bcd17f210a95f5035d6fd" Aug 13 07:12:22.539705 containerd[1801]: time="2025-08-13T07:12:22.539664658Z" level=info msg="RemoveContainer for \"b859c2734c5a1a9ea1cd5c6e6b1ef0378c083f64a68bcd17f210a95f5035d6fd\"" Aug 13 07:12:22.554115 containerd[1801]: 
time="2025-08-13T07:12:22.554072007Z" level=info msg="RemoveContainer for \"b859c2734c5a1a9ea1cd5c6e6b1ef0378c083f64a68bcd17f210a95f5035d6fd\" returns successfully" Aug 13 07:12:22.554584 kubelet[3367]: I0813 07:12:22.554479 3367 scope.go:117] "RemoveContainer" containerID="86de98aacf9be9f8cb19e56ec69b0d6830cad022526273fffdee1a74c622eed0" Aug 13 07:12:22.556009 containerd[1801]: time="2025-08-13T07:12:22.555968565Z" level=info msg="RemoveContainer for \"86de98aacf9be9f8cb19e56ec69b0d6830cad022526273fffdee1a74c622eed0\"" Aug 13 07:12:22.569167 containerd[1801]: time="2025-08-13T07:12:22.569122675Z" level=info msg="RemoveContainer for \"86de98aacf9be9f8cb19e56ec69b0d6830cad022526273fffdee1a74c622eed0\" returns successfully" Aug 13 07:12:22.569432 kubelet[3367]: I0813 07:12:22.569403 3367 scope.go:117] "RemoveContainer" containerID="ab34eba6b01358c96ca8ee0e16a4a782dc4982036f1d1672a8e8113c558aca04" Aug 13 07:12:22.570721 containerd[1801]: time="2025-08-13T07:12:22.570681994Z" level=info msg="RemoveContainer for \"ab34eba6b01358c96ca8ee0e16a4a782dc4982036f1d1672a8e8113c558aca04\"" Aug 13 07:12:22.583047 containerd[1801]: time="2025-08-13T07:12:22.583002224Z" level=info msg="RemoveContainer for \"ab34eba6b01358c96ca8ee0e16a4a782dc4982036f1d1672a8e8113c558aca04\" returns successfully" Aug 13 07:12:22.583525 kubelet[3367]: I0813 07:12:22.583392 3367 scope.go:117] "RemoveContainer" containerID="4de72590f3b1ed8d5ad68b4c183aa6de2276b5c7cbe27ca5b5356adf08d62fa5" Aug 13 07:12:22.584951 containerd[1801]: time="2025-08-13T07:12:22.584913142Z" level=info msg="RemoveContainer for \"4de72590f3b1ed8d5ad68b4c183aa6de2276b5c7cbe27ca5b5356adf08d62fa5\"" Aug 13 07:12:22.603207 containerd[1801]: time="2025-08-13T07:12:22.603046488Z" level=info msg="RemoveContainer for \"4de72590f3b1ed8d5ad68b4c183aa6de2276b5c7cbe27ca5b5356adf08d62fa5\" returns successfully" Aug 13 07:12:22.603361 kubelet[3367]: I0813 07:12:22.603312 3367 scope.go:117] "RemoveContainer" containerID="022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42" Aug 13 07:12:22.603920 containerd[1801]: time="2025-08-13T07:12:22.603620168Z" level=error msg="ContainerStatus for \"022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42\": not found" Aug 13 07:12:22.603996 kubelet[3367]: E0813 07:12:22.603810 3367 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42\": not found" containerID="022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42" Aug 13 07:12:22.603996 kubelet[3367]: I0813 07:12:22.603843 3367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42"} err="failed to get container status \"022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42\": rpc error: code = NotFound desc = an error occurred when try to find container \"022637afe6f926484e99e15c520214af6dcd2dadc71db0cd6ed6fc3ea05cab42\": not found" Aug 13 07:12:22.603996 kubelet[3367]: I0813 07:12:22.603862 3367 scope.go:117] "RemoveContainer" containerID="b859c2734c5a1a9ea1cd5c6e6b1ef0378c083f64a68bcd17f210a95f5035d6fd" Aug 13 07:12:22.604442 containerd[1801]: time="2025-08-13T07:12:22.604384967Z" level=error msg="ContainerStatus 
for \"b859c2734c5a1a9ea1cd5c6e6b1ef0378c083f64a68bcd17f210a95f5035d6fd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b859c2734c5a1a9ea1cd5c6e6b1ef0378c083f64a68bcd17f210a95f5035d6fd\": not found" Aug 13 07:12:22.604568 kubelet[3367]: E0813 07:12:22.604549 3367 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b859c2734c5a1a9ea1cd5c6e6b1ef0378c083f64a68bcd17f210a95f5035d6fd\": not found" containerID="b859c2734c5a1a9ea1cd5c6e6b1ef0378c083f64a68bcd17f210a95f5035d6fd" Aug 13 07:12:22.604605 kubelet[3367]: I0813 07:12:22.604576 3367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b859c2734c5a1a9ea1cd5c6e6b1ef0378c083f64a68bcd17f210a95f5035d6fd"} err="failed to get container status \"b859c2734c5a1a9ea1cd5c6e6b1ef0378c083f64a68bcd17f210a95f5035d6fd\": rpc error: code = NotFound desc = an error occurred when try to find container \"b859c2734c5a1a9ea1cd5c6e6b1ef0378c083f64a68bcd17f210a95f5035d6fd\": not found" Aug 13 07:12:22.604605 kubelet[3367]: I0813 07:12:22.604598 3367 scope.go:117] "RemoveContainer" containerID="86de98aacf9be9f8cb19e56ec69b0d6830cad022526273fffdee1a74c622eed0" Aug 13 07:12:22.604812 containerd[1801]: time="2025-08-13T07:12:22.604782927Z" level=error msg="ContainerStatus for \"86de98aacf9be9f8cb19e56ec69b0d6830cad022526273fffdee1a74c622eed0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"86de98aacf9be9f8cb19e56ec69b0d6830cad022526273fffdee1a74c622eed0\": not found" Aug 13 07:12:22.605107 kubelet[3367]: E0813 07:12:22.605081 3367 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"86de98aacf9be9f8cb19e56ec69b0d6830cad022526273fffdee1a74c622eed0\": not found" containerID="86de98aacf9be9f8cb19e56ec69b0d6830cad022526273fffdee1a74c622eed0" Aug 13 07:12:22.605163 kubelet[3367]: I0813 07:12:22.605108 3367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"86de98aacf9be9f8cb19e56ec69b0d6830cad022526273fffdee1a74c622eed0"} err="failed to get container status \"86de98aacf9be9f8cb19e56ec69b0d6830cad022526273fffdee1a74c622eed0\": rpc error: code = NotFound desc = an error occurred when try to find container \"86de98aacf9be9f8cb19e56ec69b0d6830cad022526273fffdee1a74c622eed0\": not found" Aug 13 07:12:22.605163 kubelet[3367]: I0813 07:12:22.605121 3367 scope.go:117] "RemoveContainer" containerID="ab34eba6b01358c96ca8ee0e16a4a782dc4982036f1d1672a8e8113c558aca04" Aug 13 07:12:22.605608 containerd[1801]: time="2025-08-13T07:12:22.605345246Z" level=error msg="ContainerStatus for \"ab34eba6b01358c96ca8ee0e16a4a782dc4982036f1d1672a8e8113c558aca04\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ab34eba6b01358c96ca8ee0e16a4a782dc4982036f1d1672a8e8113c558aca04\": not found" Aug 13 07:12:22.605676 kubelet[3367]: E0813 07:12:22.605471 3367 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ab34eba6b01358c96ca8ee0e16a4a782dc4982036f1d1672a8e8113c558aca04\": not found" containerID="ab34eba6b01358c96ca8ee0e16a4a782dc4982036f1d1672a8e8113c558aca04" Aug 13 07:12:22.605676 kubelet[3367]: I0813 07:12:22.605497 3367 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"ab34eba6b01358c96ca8ee0e16a4a782dc4982036f1d1672a8e8113c558aca04"} err="failed to get container status \"ab34eba6b01358c96ca8ee0e16a4a782dc4982036f1d1672a8e8113c558aca04\": rpc error: code = NotFound desc = an error occurred when try to find container \"ab34eba6b01358c96ca8ee0e16a4a782dc4982036f1d1672a8e8113c558aca04\": not found" Aug 13 07:12:22.605676 kubelet[3367]: I0813 07:12:22.605528 3367 scope.go:117] "RemoveContainer" containerID="4de72590f3b1ed8d5ad68b4c183aa6de2276b5c7cbe27ca5b5356adf08d62fa5" Aug 13 07:12:22.605748 containerd[1801]: time="2025-08-13T07:12:22.605714766Z" level=error msg="ContainerStatus for \"4de72590f3b1ed8d5ad68b4c183aa6de2276b5c7cbe27ca5b5356adf08d62fa5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4de72590f3b1ed8d5ad68b4c183aa6de2276b5c7cbe27ca5b5356adf08d62fa5\": not found" Aug 13 07:12:22.605854 kubelet[3367]: E0813 07:12:22.605819 3367 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4de72590f3b1ed8d5ad68b4c183aa6de2276b5c7cbe27ca5b5356adf08d62fa5\": not found" containerID="4de72590f3b1ed8d5ad68b4c183aa6de2276b5c7cbe27ca5b5356adf08d62fa5" Aug 13 07:12:22.605898 kubelet[3367]: I0813 07:12:22.605850 3367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4de72590f3b1ed8d5ad68b4c183aa6de2276b5c7cbe27ca5b5356adf08d62fa5"} err="failed to get container status \"4de72590f3b1ed8d5ad68b4c183aa6de2276b5c7cbe27ca5b5356adf08d62fa5\": rpc error: code = NotFound desc = an error occurred when try to find container \"4de72590f3b1ed8d5ad68b4c183aa6de2276b5c7cbe27ca5b5356adf08d62fa5\": not found" Aug 13 07:12:23.462705 sshd[4975]: Connection closed by 10.200.16.10 port 43650 Aug 13 07:12:23.463388 sshd-session[4973]: pam_unix(sshd:session): session closed for user core Aug 13 07:12:23.467352 systemd[1]: sshd@21-10.200.20.40:22-10.200.16.10:43650.service: Deactivated successfully. Aug 13 07:12:23.469342 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 07:12:23.469647 systemd[1]: session-24.scope: Consumed 1.010s CPU time, 25.3M memory peak. Aug 13 07:12:23.470180 systemd-logind[1723]: Session 24 logged out. Waiting for processes to exit. Aug 13 07:12:23.472066 systemd-logind[1723]: Removed session 24. Aug 13 07:12:23.571557 systemd[1]: Started sshd@22-10.200.20.40:22-10.200.16.10:36168.service - OpenSSH per-connection server daemon (10.200.16.10:36168). Aug 13 07:12:24.066228 sshd[5134]: Accepted publickey for core from 10.200.16.10 port 36168 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw Aug 13 07:12:24.068685 sshd-session[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:12:24.073669 systemd-logind[1723]: New session 25 of user core. Aug 13 07:12:24.077545 systemd[1]: Started session-25.scope - Session 25 of User core. 
Aug 13 07:12:24.157446 kubelet[3367]: I0813 07:12:24.157381 3367 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12c1b4e6-dc1b-4943-8e08-b45c5a6beb38" path="/var/lib/kubelet/pods/12c1b4e6-dc1b-4943-8e08-b45c5a6beb38/volumes"
Aug 13 07:12:24.157807 kubelet[3367]: I0813 07:12:24.157797 3367 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed2ad4b9-d11d-40b6-804c-06bf0efe451b" path="/var/lib/kubelet/pods/ed2ad4b9-d11d-40b6-804c-06bf0efe451b/volumes"
Aug 13 07:12:25.174245 systemd[1]: Created slice kubepods-burstable-pod31e506f0_6c9e_4e87_8092_bdb9dadf328f.slice - libcontainer container kubepods-burstable-pod31e506f0_6c9e_4e87_8092_bdb9dadf328f.slice.
Aug 13 07:12:25.186128 sshd[5136]: Connection closed by 10.200.16.10 port 36168
Aug 13 07:12:25.183727 sshd-session[5134]: pam_unix(sshd:session): session closed for user core
Aug 13 07:12:25.192635 systemd[1]: sshd@22-10.200.20.40:22-10.200.16.10:36168.service: Deactivated successfully.
Aug 13 07:12:25.197345 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 07:12:25.198354 systemd-logind[1723]: Session 25 logged out. Waiting for processes to exit.
Aug 13 07:12:25.200840 systemd-logind[1723]: Removed session 25.
Aug 13 07:12:25.230057 kubelet[3367]: I0813 07:12:25.229981 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31e506f0-6c9e-4e87-8092-bdb9dadf328f-etc-cni-netd\") pod \"cilium-dqhgh\" (UID: \"31e506f0-6c9e-4e87-8092-bdb9dadf328f\") " pod="kube-system/cilium-dqhgh"
Aug 13 07:12:25.230057 kubelet[3367]: I0813 07:12:25.230028 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31e506f0-6c9e-4e87-8092-bdb9dadf328f-cilium-cgroup\") pod \"cilium-dqhgh\" (UID: \"31e506f0-6c9e-4e87-8092-bdb9dadf328f\") " pod="kube-system/cilium-dqhgh"
Aug 13 07:12:25.230483 kubelet[3367]: I0813 07:12:25.230068 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31e506f0-6c9e-4e87-8092-bdb9dadf328f-lib-modules\") pod \"cilium-dqhgh\" (UID: \"31e506f0-6c9e-4e87-8092-bdb9dadf328f\") " pod="kube-system/cilium-dqhgh"
Aug 13 07:12:25.230483 kubelet[3367]: I0813 07:12:25.230110 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31e506f0-6c9e-4e87-8092-bdb9dadf328f-xtables-lock\") pod \"cilium-dqhgh\" (UID: \"31e506f0-6c9e-4e87-8092-bdb9dadf328f\") " pod="kube-system/cilium-dqhgh"
Aug 13 07:12:25.230483 kubelet[3367]: I0813 07:12:25.230126 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/31e506f0-6c9e-4e87-8092-bdb9dadf328f-cilium-ipsec-secrets\") pod \"cilium-dqhgh\" (UID: \"31e506f0-6c9e-4e87-8092-bdb9dadf328f\") " pod="kube-system/cilium-dqhgh"
Aug 13 07:12:25.230483 kubelet[3367]: I0813 07:12:25.230149 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31e506f0-6c9e-4e87-8092-bdb9dadf328f-host-proc-sys-net\") pod \"cilium-dqhgh\" (UID: \"31e506f0-6c9e-4e87-8092-bdb9dadf328f\") " pod="kube-system/cilium-dqhgh"
Aug 13 07:12:25.230483 kubelet[3367]: I0813 07:12:25.230166 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qhzq\" (UniqueName: \"kubernetes.io/projected/31e506f0-6c9e-4e87-8092-bdb9dadf328f-kube-api-access-6qhzq\") pod \"cilium-dqhgh\" (UID: \"31e506f0-6c9e-4e87-8092-bdb9dadf328f\") " pod="kube-system/cilium-dqhgh"
Aug 13 07:12:25.230483 kubelet[3367]: I0813 07:12:25.230187 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31e506f0-6c9e-4e87-8092-bdb9dadf328f-cilium-run\") pod \"cilium-dqhgh\" (UID: \"31e506f0-6c9e-4e87-8092-bdb9dadf328f\") " pod="kube-system/cilium-dqhgh"
Aug 13 07:12:25.230608 kubelet[3367]: I0813 07:12:25.230201 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31e506f0-6c9e-4e87-8092-bdb9dadf328f-cilium-config-path\") pod \"cilium-dqhgh\" (UID: \"31e506f0-6c9e-4e87-8092-bdb9dadf328f\") " pod="kube-system/cilium-dqhgh"
Aug 13 07:12:25.230608 kubelet[3367]: I0813 07:12:25.230234 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31e506f0-6c9e-4e87-8092-bdb9dadf328f-clustermesh-secrets\") pod \"cilium-dqhgh\" (UID: \"31e506f0-6c9e-4e87-8092-bdb9dadf328f\") " pod="kube-system/cilium-dqhgh"
Aug 13 07:12:25.230608 kubelet[3367]: I0813 07:12:25.230252 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31e506f0-6c9e-4e87-8092-bdb9dadf328f-hubble-tls\") pod \"cilium-dqhgh\" (UID: \"31e506f0-6c9e-4e87-8092-bdb9dadf328f\") " pod="kube-system/cilium-dqhgh"
Aug 13 07:12:25.230608 kubelet[3367]: I0813 07:12:25.230289 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31e506f0-6c9e-4e87-8092-bdb9dadf328f-host-proc-sys-kernel\") pod \"cilium-dqhgh\" (UID: \"31e506f0-6c9e-4e87-8092-bdb9dadf328f\") " pod="kube-system/cilium-dqhgh"
Aug 13 07:12:25.230608 kubelet[3367]: I0813 07:12:25.230322 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31e506f0-6c9e-4e87-8092-bdb9dadf328f-hostproc\") pod \"cilium-dqhgh\" (UID: \"31e506f0-6c9e-4e87-8092-bdb9dadf328f\") " pod="kube-system/cilium-dqhgh"
Aug 13 07:12:25.230608 kubelet[3367]: I0813 07:12:25.230338 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31e506f0-6c9e-4e87-8092-bdb9dadf328f-cni-path\") pod \"cilium-dqhgh\" (UID: \"31e506f0-6c9e-4e87-8092-bdb9dadf328f\") " pod="kube-system/cilium-dqhgh"
Aug 13 07:12:25.230724 kubelet[3367]: I0813 07:12:25.230355 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31e506f0-6c9e-4e87-8092-bdb9dadf328f-bpf-maps\") pod \"cilium-dqhgh\" (UID: \"31e506f0-6c9e-4e87-8092-bdb9dadf328f\") " pod="kube-system/cilium-dqhgh"
Aug 13 07:12:25.248662 kubelet[3367]: E0813 07:12:25.248582 3367 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 07:12:25.273094 systemd[1]: Started sshd@23-10.200.20.40:22-10.200.16.10:36172.service - OpenSSH per-connection server daemon (10.200.16.10:36172).
Aug 13 07:12:25.489377 containerd[1801]: time="2025-08-13T07:12:25.489162528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dqhgh,Uid:31e506f0-6c9e-4e87-8092-bdb9dadf328f,Namespace:kube-system,Attempt:0,}"
Aug 13 07:12:25.557621 containerd[1801]: time="2025-08-13T07:12:25.557176950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:12:25.557621 containerd[1801]: time="2025-08-13T07:12:25.557249430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:12:25.557621 containerd[1801]: time="2025-08-13T07:12:25.557291670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:12:25.557621 containerd[1801]: time="2025-08-13T07:12:25.557390869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:12:25.581491 systemd[1]: Started cri-containerd-674840fec6577e17187458b87829beb8381bd7e21cd516c65e5f3cc49ca3c7f5.scope - libcontainer container 674840fec6577e17187458b87829beb8381bd7e21cd516c65e5f3cc49ca3c7f5.
Aug 13 07:12:25.607015 containerd[1801]: time="2025-08-13T07:12:25.606837867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dqhgh,Uid:31e506f0-6c9e-4e87-8092-bdb9dadf328f,Namespace:kube-system,Attempt:0,} returns sandbox id \"674840fec6577e17187458b87829beb8381bd7e21cd516c65e5f3cc49ca3c7f5\""
Aug 13 07:12:25.625079 containerd[1801]: time="2025-08-13T07:12:25.625039531Z" level=info msg="CreateContainer within sandbox \"674840fec6577e17187458b87829beb8381bd7e21cd516c65e5f3cc49ca3c7f5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 07:12:25.691526 containerd[1801]: time="2025-08-13T07:12:25.691475354Z" level=info msg="CreateContainer within sandbox \"674840fec6577e17187458b87829beb8381bd7e21cd516c65e5f3cc49ca3c7f5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6ddb5106e190c960a32860ee4adb29a1555f9b46b86328a2ccc62edc594a0965\""
Aug 13 07:12:25.692788 containerd[1801]: time="2025-08-13T07:12:25.692698833Z" level=info msg="StartContainer for \"6ddb5106e190c960a32860ee4adb29a1555f9b46b86328a2ccc62edc594a0965\""
Aug 13 07:12:25.719028 systemd[1]: Started cri-containerd-6ddb5106e190c960a32860ee4adb29a1555f9b46b86328a2ccc62edc594a0965.scope - libcontainer container 6ddb5106e190c960a32860ee4adb29a1555f9b46b86328a2ccc62edc594a0965.
Aug 13 07:12:25.749748 containerd[1801]: time="2025-08-13T07:12:25.749628424Z" level=info msg="StartContainer for \"6ddb5106e190c960a32860ee4adb29a1555f9b46b86328a2ccc62edc594a0965\" returns successfully"
Aug 13 07:12:25.753155 systemd[1]: cri-containerd-6ddb5106e190c960a32860ee4adb29a1555f9b46b86328a2ccc62edc594a0965.scope: Deactivated successfully.
Aug 13 07:12:25.774733 sshd[5146]: Accepted publickey for core from 10.200.16.10 port 36172 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw
Aug 13 07:12:25.776909 sshd-session[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:12:25.782524 systemd-logind[1723]: New session 26 of user core.
Aug 13 07:12:25.791631 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 13 07:12:25.859700 containerd[1801]: time="2025-08-13T07:12:25.859642529Z" level=info msg="shim disconnected" id=6ddb5106e190c960a32860ee4adb29a1555f9b46b86328a2ccc62edc594a0965 namespace=k8s.io
Aug 13 07:12:25.860218 containerd[1801]: time="2025-08-13T07:12:25.860014089Z" level=warning msg="cleaning up after shim disconnected" id=6ddb5106e190c960a32860ee4adb29a1555f9b46b86328a2ccc62edc594a0965 namespace=k8s.io
Aug 13 07:12:25.860218 containerd[1801]: time="2025-08-13T07:12:25.860033809Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:12:26.119027 sshd[5243]: Connection closed by 10.200.16.10 port 36172
Aug 13 07:12:26.119789 sshd-session[5146]: pam_unix(sshd:session): session closed for user core
Aug 13 07:12:26.123377 systemd[1]: sshd@23-10.200.20.40:22-10.200.16.10:36172.service: Deactivated successfully.
Aug 13 07:12:26.125381 systemd[1]: session-26.scope: Deactivated successfully.
Aug 13 07:12:26.126138 systemd-logind[1723]: Session 26 logged out. Waiting for processes to exit.
Aug 13 07:12:26.127400 systemd-logind[1723]: Removed session 26.
Aug 13 07:12:26.211163 systemd[1]: Started sshd@24-10.200.20.40:22-10.200.16.10:36178.service - OpenSSH per-connection server daemon (10.200.16.10:36178).
Aug 13 07:12:26.544774 containerd[1801]: time="2025-08-13T07:12:26.543205580Z" level=info msg="CreateContainer within sandbox \"674840fec6577e17187458b87829beb8381bd7e21cd516c65e5f3cc49ca3c7f5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 07:12:26.598601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2052579830.mount: Deactivated successfully.
Aug 13 07:12:26.620299 containerd[1801]: time="2025-08-13T07:12:26.619983234Z" level=info msg="CreateContainer within sandbox \"674840fec6577e17187458b87829beb8381bd7e21cd516c65e5f3cc49ca3c7f5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"06c6c9b40d6bd880c3f869dd2056797dae1e51c1b7f073db28f8f66cd82c1767\""
Aug 13 07:12:26.621006 containerd[1801]: time="2025-08-13T07:12:26.620963913Z" level=info msg="StartContainer for \"06c6c9b40d6bd880c3f869dd2056797dae1e51c1b7f073db28f8f66cd82c1767\""
Aug 13 07:12:26.653472 systemd[1]: Started cri-containerd-06c6c9b40d6bd880c3f869dd2056797dae1e51c1b7f073db28f8f66cd82c1767.scope - libcontainer container 06c6c9b40d6bd880c3f869dd2056797dae1e51c1b7f073db28f8f66cd82c1767.
Aug 13 07:12:26.682786 containerd[1801]: time="2025-08-13T07:12:26.682726340Z" level=info msg="StartContainer for \"06c6c9b40d6bd880c3f869dd2056797dae1e51c1b7f073db28f8f66cd82c1767\" returns successfully"
Aug 13 07:12:26.684291 systemd[1]: cri-containerd-06c6c9b40d6bd880c3f869dd2056797dae1e51c1b7f073db28f8f66cd82c1767.scope: Deactivated successfully.
Aug 13 07:12:26.707010 sshd[5262]: Accepted publickey for core from 10.200.16.10 port 36178 ssh2: RSA SHA256:mUTVkvCTqAM/q6yF06VEIEfaT11Wyv/ewAABhIXzqTw
Aug 13 07:12:26.712750 sshd-session[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:12:26.716713 systemd-logind[1723]: New session 27 of user core.
Aug 13 07:12:26.723440 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 13 07:12:26.729313 containerd[1801]: time="2025-08-13T07:12:26.729106340Z" level=info msg="shim disconnected" id=06c6c9b40d6bd880c3f869dd2056797dae1e51c1b7f073db28f8f66cd82c1767 namespace=k8s.io
Aug 13 07:12:26.729313 containerd[1801]: time="2025-08-13T07:12:26.729159740Z" level=warning msg="cleaning up after shim disconnected" id=06c6c9b40d6bd880c3f869dd2056797dae1e51c1b7f073db28f8f66cd82c1767 namespace=k8s.io
Aug 13 07:12:26.729313 containerd[1801]: time="2025-08-13T07:12:26.729168100Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:12:26.740105 containerd[1801]: time="2025-08-13T07:12:26.740045851Z" level=warning msg="cleanup warnings time=\"2025-08-13T07:12:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Aug 13 07:12:27.338501 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06c6c9b40d6bd880c3f869dd2056797dae1e51c1b7f073db28f8f66cd82c1767-rootfs.mount: Deactivated successfully.
Aug 13 07:12:27.535687 containerd[1801]: time="2025-08-13T07:12:27.535530645Z" level=info msg="CreateContainer within sandbox \"674840fec6577e17187458b87829beb8381bd7e21cd516c65e5f3cc49ca3c7f5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 07:12:27.597386 containerd[1801]: time="2025-08-13T07:12:27.597235392Z" level=info msg="CreateContainer within sandbox \"674840fec6577e17187458b87829beb8381bd7e21cd516c65e5f3cc49ca3c7f5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8a6e4f89921447233768f30db473d9bec9e617e263197f651f5bbeacc05d1533\""
Aug 13 07:12:27.598767 containerd[1801]: time="2025-08-13T07:12:27.598711671Z" level=info msg="StartContainer for \"8a6e4f89921447233768f30db473d9bec9e617e263197f651f5bbeacc05d1533\""
Aug 13 07:12:27.639621 systemd[1]: Started cri-containerd-8a6e4f89921447233768f30db473d9bec9e617e263197f651f5bbeacc05d1533.scope - libcontainer container 8a6e4f89921447233768f30db473d9bec9e617e263197f651f5bbeacc05d1533.
Aug 13 07:12:27.670565 systemd[1]: cri-containerd-8a6e4f89921447233768f30db473d9bec9e617e263197f651f5bbeacc05d1533.scope: Deactivated successfully.
Aug 13 07:12:27.674510 containerd[1801]: time="2025-08-13T07:12:27.674244246Z" level=info msg="StartContainer for \"8a6e4f89921447233768f30db473d9bec9e617e263197f651f5bbeacc05d1533\" returns successfully"
Aug 13 07:12:27.714249 containerd[1801]: time="2025-08-13T07:12:27.714170811Z" level=info msg="shim disconnected" id=8a6e4f89921447233768f30db473d9bec9e617e263197f651f5bbeacc05d1533 namespace=k8s.io
Aug 13 07:12:27.714249 containerd[1801]: time="2025-08-13T07:12:27.714244651Z" level=warning msg="cleaning up after shim disconnected" id=8a6e4f89921447233768f30db473d9bec9e617e263197f651f5bbeacc05d1533 namespace=k8s.io
Aug 13 07:12:27.714486 containerd[1801]: time="2025-08-13T07:12:27.714252971Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:12:28.338783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a6e4f89921447233768f30db473d9bec9e617e263197f651f5bbeacc05d1533-rootfs.mount: Deactivated successfully.
Aug 13 07:12:28.545589 containerd[1801]: time="2025-08-13T07:12:28.545544375Z" level=info msg="CreateContainer within sandbox \"674840fec6577e17187458b87829beb8381bd7e21cd516c65e5f3cc49ca3c7f5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 07:12:28.596045 containerd[1801]: time="2025-08-13T07:12:28.595893412Z" level=info msg="CreateContainer within sandbox \"674840fec6577e17187458b87829beb8381bd7e21cd516c65e5f3cc49ca3c7f5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bb390a719345d2111b90fdcc95c23dcc319d1513b45f6ac09df2b385d43abfe1\""
Aug 13 07:12:28.597450 containerd[1801]: time="2025-08-13T07:12:28.597405211Z" level=info msg="StartContainer for \"bb390a719345d2111b90fdcc95c23dcc319d1513b45f6ac09df2b385d43abfe1\""
Aug 13 07:12:28.627450 systemd[1]: Started cri-containerd-bb390a719345d2111b90fdcc95c23dcc319d1513b45f6ac09df2b385d43abfe1.scope - libcontainer container bb390a719345d2111b90fdcc95c23dcc319d1513b45f6ac09df2b385d43abfe1.
Aug 13 07:12:28.652179 systemd[1]: cri-containerd-bb390a719345d2111b90fdcc95c23dcc319d1513b45f6ac09df2b385d43abfe1.scope: Deactivated successfully.
Aug 13 07:12:28.659421 containerd[1801]: time="2025-08-13T07:12:28.659370837Z" level=info msg="StartContainer for \"bb390a719345d2111b90fdcc95c23dcc319d1513b45f6ac09df2b385d43abfe1\" returns successfully"
Aug 13 07:12:28.693851 containerd[1801]: time="2025-08-13T07:12:28.693744288Z" level=info msg="shim disconnected" id=bb390a719345d2111b90fdcc95c23dcc319d1513b45f6ac09df2b385d43abfe1 namespace=k8s.io
Aug 13 07:12:28.693851 containerd[1801]: time="2025-08-13T07:12:28.693834407Z" level=warning msg="cleaning up after shim disconnected" id=bb390a719345d2111b90fdcc95c23dcc319d1513b45f6ac09df2b385d43abfe1 namespace=k8s.io
Aug 13 07:12:28.693851 containerd[1801]: time="2025-08-13T07:12:28.693844167Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:12:29.338944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb390a719345d2111b90fdcc95c23dcc319d1513b45f6ac09df2b385d43abfe1-rootfs.mount: Deactivated successfully.
Aug 13 07:12:29.545660 containerd[1801]: time="2025-08-13T07:12:29.545601394Z" level=info msg="CreateContainer within sandbox \"674840fec6577e17187458b87829beb8381bd7e21cd516c65e5f3cc49ca3c7f5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 07:12:29.613405 containerd[1801]: time="2025-08-13T07:12:29.612551776Z" level=info msg="CreateContainer within sandbox \"674840fec6577e17187458b87829beb8381bd7e21cd516c65e5f3cc49ca3c7f5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"39b1037c6b8ab1993570d757685ab5334225ad194ab577671496c5774f16cb60\""
Aug 13 07:12:29.613728 containerd[1801]: time="2025-08-13T07:12:29.613449415Z" level=info msg="StartContainer for \"39b1037c6b8ab1993570d757685ab5334225ad194ab577671496c5774f16cb60\""
Aug 13 07:12:29.649453 systemd[1]: Started cri-containerd-39b1037c6b8ab1993570d757685ab5334225ad194ab577671496c5774f16cb60.scope - libcontainer container 39b1037c6b8ab1993570d757685ab5334225ad194ab577671496c5774f16cb60.
Aug 13 07:12:29.679789 containerd[1801]: time="2025-08-13T07:12:29.679745838Z" level=info msg="StartContainer for \"39b1037c6b8ab1993570d757685ab5334225ad194ab577671496c5774f16cb60\" returns successfully"
Aug 13 07:12:30.060295 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Aug 13 07:12:30.560573 kubelet[3367]: I0813 07:12:30.560493 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dqhgh" podStartSLOduration=5.560476719 podStartE2EDuration="5.560476719s" podCreationTimestamp="2025-08-13 07:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:12:30.559143801 +0000 UTC m=+160.832045812" watchObservedRunningTime="2025-08-13 07:12:30.560476719 +0000 UTC m=+160.833378730"
Aug 13 07:12:32.810215 systemd-networkd[1470]: lxc_health: Link UP
Aug 13 07:12:32.818830 systemd-networkd[1470]: lxc_health: Gained carrier
Aug 13 07:12:33.325872 systemd[1]: run-containerd-runc-k8s.io-39b1037c6b8ab1993570d757685ab5334225ad194ab577671496c5774f16cb60-runc.eOpMb5.mount: Deactivated successfully.
Aug 13 07:12:34.027442 systemd-networkd[1470]: lxc_health: Gained IPv6LL
Aug 13 07:12:39.923300 sshd[5312]: Connection closed by 10.200.16.10 port 36178
Aug 13 07:12:39.923935 sshd-session[5262]: pam_unix(sshd:session): session closed for user core
Aug 13 07:12:39.927316 systemd-logind[1723]: Session 27 logged out. Waiting for processes to exit.
Aug 13 07:12:39.929302 systemd[1]: sshd@24-10.200.20.40:22-10.200.16.10:36178.service: Deactivated successfully.
Aug 13 07:12:39.931886 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 07:12:39.934367 systemd-logind[1723]: Removed session 27.