Sep 12 17:06:57.388428 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 12 17:06:57.388459 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Fri Sep 12 15:34:33 -00 2025 Sep 12 17:06:57.388468 kernel: KASLR enabled Sep 12 17:06:57.388485 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Sep 12 17:06:57.388512 kernel: printk: bootconsole [pl11] enabled Sep 12 17:06:57.388519 kernel: efi: EFI v2.7 by EDK II Sep 12 17:06:57.388527 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead5018 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 Sep 12 17:06:57.388534 kernel: random: crng init done Sep 12 17:06:57.388540 kernel: secureboot: Secure boot disabled Sep 12 17:06:57.388547 kernel: ACPI: Early table checksum verification disabled Sep 12 17:06:57.388553 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Sep 12 17:06:57.388562 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:06:57.388568 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:06:57.388576 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Sep 12 17:06:57.388584 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:06:57.388590 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:06:57.388598 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:06:57.388607 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:06:57.388614 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:06:57.388621 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:06:57.388628 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Sep 12 17:06:57.388634 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:06:57.388640 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Sep 12 17:06:57.388647 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Sep 12 17:06:57.388654 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Sep 12 17:06:57.388661 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Sep 12 17:06:57.388668 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Sep 12 17:06:57.388675 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Sep 12 17:06:57.388683 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Sep 12 17:06:57.388689 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Sep 12 17:06:57.388696 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Sep 12 17:06:57.388703 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Sep 12 17:06:57.388710 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Sep 12 17:06:57.388717 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Sep 12 17:06:57.388724 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Sep 12 17:06:57.388731 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Sep 12 17:06:57.388738 kernel: Zone ranges: Sep 12 
17:06:57.388744 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Sep 12 17:06:57.388751 kernel: DMA32 empty Sep 12 17:06:57.388758 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Sep 12 17:06:57.388769 kernel: Movable zone start for each node Sep 12 17:06:57.388777 kernel: Early memory node ranges Sep 12 17:06:57.388784 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Sep 12 17:06:57.388792 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] Sep 12 17:06:57.388800 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] Sep 12 17:06:57.388808 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] Sep 12 17:06:57.388815 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Sep 12 17:06:57.388823 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Sep 12 17:06:57.388830 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Sep 12 17:06:57.388838 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Sep 12 17:06:57.388845 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Sep 12 17:06:57.388853 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Sep 12 17:06:57.388859 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Sep 12 17:06:57.388866 kernel: psci: probing for conduit method from ACPI. Sep 12 17:06:57.388874 kernel: psci: PSCIv1.1 detected in firmware. Sep 12 17:06:57.388882 kernel: psci: Using standard PSCI v0.2 function IDs Sep 12 17:06:57.388889 kernel: psci: MIGRATE_INFO_TYPE not supported. Sep 12 17:06:57.388899 kernel: psci: SMC Calling Convention v1.4 Sep 12 17:06:57.388906 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Sep 12 17:06:57.388912 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Sep 12 17:06:57.388920 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 12 17:06:57.388927 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 12 17:06:57.388935 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 12 17:06:57.388943 kernel: Detected PIPT I-cache on CPU0 Sep 12 17:06:57.388950 kernel: CPU features: detected: GIC system register CPU interface Sep 12 17:06:57.388957 kernel: CPU features: detected: Hardware dirty bit management Sep 12 17:06:57.388964 kernel: CPU features: detected: Spectre-BHB Sep 12 17:06:57.388970 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 12 17:06:57.388980 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 12 17:06:57.388987 kernel: CPU features: detected: ARM erratum 1418040 Sep 12 17:06:57.388995 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Sep 12 17:06:57.389003 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 12 17:06:57.389009 kernel: alternatives: applying boot alternatives Sep 12 17:06:57.389018 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=82b413d7549dba6b35b1edf421a17f61aa80704059d10fedd611b1eff5298abd Sep 12 17:06:57.389026 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Sep 12 17:06:57.389033 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 17:06:57.389040 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 17:06:57.389048 kernel: Fallback order for Node 0: 0 Sep 12 17:06:57.389055 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Sep 12 17:06:57.389063 kernel: Policy zone: Normal Sep 12 17:06:57.389070 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 17:06:57.389077 kernel: software IO TLB: area num 2. Sep 12 17:06:57.389084 kernel: software IO TLB: mapped [mem 0x0000000036530000-0x000000003a530000] (64MB) Sep 12 17:06:57.389091 kernel: Memory: 3983528K/4194160K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38400K init, 897K bss, 210632K reserved, 0K cma-reserved) Sep 12 17:06:57.389099 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 12 17:06:57.389106 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 17:06:57.389114 kernel: rcu: RCU event tracing is enabled. Sep 12 17:06:57.389121 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 12 17:06:57.389128 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 17:06:57.389135 kernel: Tracing variant of Tasks RCU enabled. Sep 12 17:06:57.390182 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 12 17:06:57.390216 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 12 17:06:57.390224 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 12 17:06:57.390232 kernel: GICv3: 960 SPIs implemented Sep 12 17:06:57.390239 kernel: GICv3: 0 Extended SPIs implemented Sep 12 17:06:57.390247 kernel: Root IRQ handler: gic_handle_irq Sep 12 17:06:57.390255 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 12 17:06:57.390263 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Sep 12 17:06:57.390270 kernel: ITS: No ITS available, not enabling LPIs Sep 12 17:06:57.390279 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 17:06:57.390286 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 17:06:57.390294 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 12 17:06:57.390309 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 12 17:06:57.390316 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 12 17:06:57.390325 kernel: Console: colour dummy device 80x25 Sep 12 17:06:57.390333 kernel: printk: console [tty1] enabled Sep 12 17:06:57.390340 kernel: ACPI: Core revision 20230628 Sep 12 17:06:57.390348 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 12 17:06:57.390355 kernel: pid_max: default: 32768 minimum: 301 Sep 12 17:06:57.390364 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 12 17:06:57.390371 kernel: landlock: Up and running. Sep 12 17:06:57.390381 kernel: SELinux: Initializing. Sep 12 17:06:57.390389 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:06:57.390397 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:06:57.390404 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Sep 12 17:06:57.390412 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:06:57.390420 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Sep 12 17:06:57.390429 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Sep 12 17:06:57.390445 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 12 17:06:57.390453 kernel: rcu: Hierarchical SRCU implementation. Sep 12 17:06:57.390461 kernel: rcu: Max phase no-delay instances is 400. Sep 12 17:06:57.390470 kernel: Remapping and enabling EFI services. Sep 12 17:06:57.390479 kernel: smp: Bringing up secondary CPUs ... Sep 12 17:06:57.390489 kernel: Detected PIPT I-cache on CPU1 Sep 12 17:06:57.390497 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Sep 12 17:06:57.390505 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 17:06:57.390513 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 12 17:06:57.390521 kernel: smp: Brought up 1 node, 2 CPUs Sep 12 17:06:57.390532 kernel: SMP: Total of 2 processors activated. Sep 12 17:06:57.390540 kernel: CPU features: detected: 32-bit EL0 Support Sep 12 17:06:57.390548 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Sep 12 17:06:57.390557 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 12 17:06:57.390565 kernel: CPU features: detected: CRC32 instructions Sep 12 17:06:57.390573 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 12 17:06:57.390581 kernel: CPU features: detected: LSE atomic instructions Sep 12 17:06:57.390589 kernel: CPU features: detected: Privileged Access Never Sep 12 17:06:57.390597 kernel: CPU: All CPU(s) started at EL1 Sep 12 17:06:57.390606 kernel: alternatives: applying system-wide alternatives Sep 12 17:06:57.390614 kernel: devtmpfs: initialized Sep 12 17:06:57.390623 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 17:06:57.390632 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 12 17:06:57.390641 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 17:06:57.390648 kernel: SMBIOS 3.1.0 present. Sep 12 17:06:57.390656 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Sep 12 17:06:57.390665 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 17:06:57.390674 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 12 17:06:57.390686 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 12 17:06:57.390694 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 12 17:06:57.390701 kernel: audit: initializing netlink subsys (disabled) Sep 12 17:06:57.390710 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Sep 12 17:06:57.390719 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 17:06:57.390727 kernel: cpuidle: using governor menu Sep 12 17:06:57.390735 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 12 17:06:57.390743 kernel: ASID allocator initialised with 32768 entries Sep 12 17:06:57.390751 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 17:06:57.390761 kernel: Serial: AMBA PL011 UART driver Sep 12 17:06:57.390771 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 12 17:06:57.390779 kernel: Modules: 0 pages in range for non-PLT usage Sep 12 17:06:57.390787 kernel: Modules: 509248 pages in range for PLT usage Sep 12 17:06:57.390796 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 17:06:57.390805 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 17:06:57.390814 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 12 17:06:57.390822 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 12 17:06:57.390830 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 17:06:57.390841 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 17:06:57.390849 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 12 17:06:57.390858 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 12 17:06:57.390865 kernel: ACPI: Added _OSI(Module Device) Sep 12 17:06:57.390874 kernel: ACPI: Added _OSI(Processor Device) Sep 12 17:06:57.390882 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 17:06:57.390890 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 17:06:57.390899 kernel: ACPI: Interpreter enabled Sep 12 17:06:57.390907 kernel: ACPI: Using GIC for interrupt routing Sep 12 17:06:57.390917 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Sep 12 17:06:57.390926 kernel: printk: console [ttyAMA0] enabled Sep 12 17:06:57.390935 kernel: printk: bootconsole [pl11] disabled Sep 12 17:06:57.390943 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Sep 12 17:06:57.390952 kernel: iommu: Default domain type: Translated Sep 12 17:06:57.390960 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 12 17:06:57.390969 kernel: efivars: Registered efivars operations Sep 12 17:06:57.390978 kernel: vgaarb: loaded Sep 12 17:06:57.390985 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 12 17:06:57.390996 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 17:06:57.391005 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 17:06:57.391013 kernel: pnp: PnP ACPI init Sep 12 17:06:57.391021 kernel: pnp: PnP ACPI: found 0 devices Sep 12 17:06:57.391028 kernel: NET: Registered PF_INET protocol family Sep 12 17:06:57.391036 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 17:06:57.391045 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 12 17:06:57.391054 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 17:06:57.391064 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 17:06:57.391074 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 12 17:06:57.391082 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 12 17:06:57.391091 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:06:57.391099 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:06:57.391107 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 
17:06:57.391114 kernel: PCI: CLS 0 bytes, default 64 Sep 12 17:06:57.391122 kernel: kvm [1]: HYP mode not available Sep 12 17:06:57.391131 kernel: Initialise system trusted keyrings Sep 12 17:06:57.391140 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 12 17:06:57.392219 kernel: Key type asymmetric registered Sep 12 17:06:57.392232 kernel: Asymmetric key parser 'x509' registered Sep 12 17:06:57.392242 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 12 17:06:57.392251 kernel: io scheduler mq-deadline registered Sep 12 17:06:57.392260 kernel: io scheduler kyber registered Sep 12 17:06:57.392268 kernel: io scheduler bfq registered Sep 12 17:06:57.392276 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 17:06:57.392284 kernel: thunder_xcv, ver 1.0 Sep 12 17:06:57.392293 kernel: thunder_bgx, ver 1.0 Sep 12 17:06:57.392309 kernel: nicpf, ver 1.0 Sep 12 17:06:57.392316 kernel: nicvf, ver 1.0 Sep 12 17:06:57.392511 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 12 17:06:57.392601 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:06:56 UTC (1757696816) Sep 12 17:06:57.392613 kernel: efifb: probing for efifb Sep 12 17:06:57.392621 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 12 17:06:57.392630 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 12 17:06:57.392638 kernel: efifb: scrolling: redraw Sep 12 17:06:57.392650 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 12 17:06:57.392658 kernel: Console: switching to colour frame buffer device 128x48 Sep 12 17:06:57.392665 kernel: fb0: EFI VGA frame buffer device Sep 12 17:06:57.392673 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Sep 12 17:06:57.392682 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 12 17:06:57.392691 kernel: No ACPI PMU IRQ for CPU0 Sep 12 17:06:57.392699 kernel: No ACPI PMU IRQ for CPU1 Sep 12 17:06:57.392707 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Sep 12 17:06:57.392714 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 12 17:06:57.392725 kernel: watchdog: Hard watchdog permanently disabled Sep 12 17:06:57.392733 kernel: NET: Registered PF_INET6 protocol family Sep 12 17:06:57.392741 kernel: Segment Routing with IPv6 Sep 12 17:06:57.392749 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 17:06:57.392758 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:06:57.392765 kernel: Key type dns_resolver registered Sep 12 17:06:57.392773 kernel: registered taskstats version 1 Sep 12 17:06:57.392782 kernel: Loading compiled-in X.509 certificates Sep 12 17:06:57.392792 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: d6f11852774cea54e4c26b4ad4f8effa8d89e628' Sep 12 17:06:57.392802 kernel: Key type .fscrypt registered Sep 12 17:06:57.392809 kernel: Key type fscrypt-provisioning registered Sep 12 17:06:57.392817 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 12 17:06:57.392825 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:06:57.392834 kernel: ima: No architecture policies found Sep 12 17:06:57.392843 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 12 17:06:57.392851 kernel: clk: Disabling unused clocks Sep 12 17:06:57.392858 kernel: Freeing unused kernel memory: 38400K Sep 12 17:06:57.392865 kernel: Run /init as init process Sep 12 17:06:57.392876 kernel: with arguments: Sep 12 17:06:57.392885 kernel: /init Sep 12 17:06:57.392893 kernel: with environment: Sep 12 17:06:57.392901 kernel: HOME=/ Sep 12 17:06:57.392909 kernel: TERM=linux Sep 12 17:06:57.392916 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:06:57.392926 systemd[1]: Successfully made /usr/ read-only. Sep 12 17:06:57.392939 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 17:06:57.392951 systemd[1]: Detected virtualization microsoft. Sep 12 17:06:57.392958 systemd[1]: Detected architecture arm64. Sep 12 17:06:57.392966 systemd[1]: Running in initrd. Sep 12 17:06:57.392975 systemd[1]: No hostname configured, using default hostname. Sep 12 17:06:57.392985 systemd[1]: Hostname set to <localhost>. Sep 12 17:06:57.392993 systemd[1]: Initializing machine ID from random generator. Sep 12 17:06:57.393002 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:06:57.393010 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:06:57.393022 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:06:57.393031 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 17:06:57.393041 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:06:57.393050 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:06:57.393059 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:06:57.393069 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:06:57.393080 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:06:57.393090 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:06:57.393100 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:06:57.393107 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:06:57.393116 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:06:57.393125 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:06:57.393134 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:06:57.393143 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:06:57.393172 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:06:57.393184 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:06:57.393193 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 17:06:57.393202 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:06:57.393210 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:06:57.393219 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:06:57.393229 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:06:57.393237 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:06:57.393248 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:06:57.393258 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:06:57.393266 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:06:57.393275 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:06:57.393285 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:06:57.393322 systemd-journald[218]: Collecting audit messages is disabled. Sep 12 17:06:57.393348 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:06:57.393358 systemd-journald[218]: Journal started Sep 12 17:06:57.393379 systemd-journald[218]: Runtime Journal (/run/log/journal/df8d65770ff14b94af0182999c2f6e54) is 8M, max 78.5M, 70.5M free. Sep 12 17:06:57.386585 systemd-modules-load[220]: Inserted module 'overlay' Sep 12 17:06:57.415166 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:06:57.415193 kernel: Bridge firewalling registered Sep 12 17:06:57.419353 systemd-modules-load[220]: Inserted module 'br_netfilter' Sep 12 17:06:57.437582 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:06:57.439346 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:06:57.445859 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:06:57.458874 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:06:57.469997 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:06:57.480797 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:06:57.505479 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:06:57.521688 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:06:57.542661 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:06:57.565400 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:06:57.578104 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:06:57.589614 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:06:57.603250 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:06:57.616554 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:06:57.641629 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:06:57.657384 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 12 17:06:57.672062 dracut-cmdline[253]: dracut-dracut-053 Sep 12 17:06:57.689508 dracut-cmdline[253]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=82b413d7549dba6b35b1edf421a17f61aa80704059d10fedd611b1eff5298abd Sep 12 17:06:57.681659 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:06:57.692929 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:06:57.775293 systemd-resolved[258]: Positive Trust Anchors: Sep 12 17:06:57.775310 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:06:57.775340 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:06:57.778940 systemd-resolved[258]: Defaulting to hostname 'linux'. Sep 12 17:06:57.779917 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:06:57.788572 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:06:57.898172 kernel: SCSI subsystem initialized Sep 12 17:06:57.906185 kernel: Loading iSCSI transport class v2.0-870. Sep 12 17:06:57.918182 kernel: iscsi: registered transport (tcp) Sep 12 17:06:57.936349 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:06:57.936423 kernel: QLogic iSCSI HBA Driver Sep 12 17:06:57.982206 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:06:58.000452 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:06:58.036679 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:06:58.036757 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:06:58.044992 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 17:06:58.103186 kernel: raid6: neonx8 gen() 15525 MB/s Sep 12 17:06:58.124158 kernel: raid6: neonx4 gen() 15779 MB/s Sep 12 17:06:58.145174 kernel: raid6: neonx2 gen() 13079 MB/s Sep 12 17:06:58.167213 kernel: raid6: neonx1 gen() 10437 MB/s Sep 12 17:06:58.188170 kernel: raid6: int64x8 gen() 6751 MB/s Sep 12 17:06:58.209167 kernel: raid6: int64x4 gen() 7287 MB/s Sep 12 17:06:58.231168 kernel: raid6: int64x2 gen() 6058 MB/s Sep 12 17:06:58.257431 kernel: raid6: int64x1 gen() 5031 MB/s Sep 12 17:06:58.257494 kernel: raid6: using algorithm neonx4 gen() 15779 MB/s Sep 12 17:06:58.284484 kernel: raid6: .... 
xor() 12290 MB/s, rmw enabled Sep 12 17:06:58.284577 kernel: raid6: using neon recovery algorithm Sep 12 17:06:58.297883 kernel: xor: measuring software checksum speed Sep 12 17:06:58.297960 kernel: 8regs : 21596 MB/sec Sep 12 17:06:58.305161 kernel: 32regs : 20029 MB/sec Sep 12 17:06:58.305176 kernel: arm64_neon : 27663 MB/sec Sep 12 17:06:58.309576 kernel: xor: using function: arm64_neon (27663 MB/sec) Sep 12 17:06:58.361173 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:06:58.373104 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:06:58.399456 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:06:58.423750 systemd-udevd[439]: Using default interface naming scheme 'v255'. Sep 12 17:06:58.429199 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:06:58.448384 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:06:58.481759 dracut-pre-trigger[452]: rd.md=0: removing MD RAID activation Sep 12 17:06:58.515355 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:06:58.534444 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:06:58.580427 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:06:58.610355 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:06:58.638983 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:06:58.651734 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:06:58.666283 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:06:58.676747 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:06:58.703446 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:06:58.724667 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:06:58.749610 kernel: hv_vmbus: Vmbus version:5.3 Sep 12 17:06:58.749638 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 12 17:06:58.752760 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:06:58.783046 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Sep 12 17:06:58.783085 kernel: hv_vmbus: registering driver hv_storvsc Sep 12 17:06:58.783095 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 12 17:06:58.758297 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:06:58.848699 kernel: scsi host0: storvsc_host_t Sep 12 17:06:58.848900 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 12 17:06:58.849006 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Sep 12 17:06:58.849159 kernel: hv_vmbus: registering driver hv_netvsc Sep 12 17:06:58.849172 kernel: scsi host1: storvsc_host_t Sep 12 17:06:58.849281 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 12 17:06:58.849292 kernel: hv_vmbus: registering driver hid_hyperv Sep 12 17:06:58.823202 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 12 17:06:58.872979 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Sep 12 17:06:58.873005 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 12 17:06:58.837505 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:06:58.918816 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 12 17:06:58.918980 kernel: PTP clock support registered Sep 12 17:06:58.918991 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 17:06:58.919011 kernel: hv_utils: Registering HyperV Utility Driver Sep 12 17:06:58.919020 kernel: hv_vmbus: registering driver hv_utils Sep 12 17:06:58.919029 kernel: hv_utils: Heartbeat IC version 3.0 Sep 12 17:06:58.919037 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 12 17:06:58.919161 kernel: hv_utils: Shutdown IC version 3.2 Sep 12 17:06:58.919172 kernel: hv_utils: TimeSync IC version 4.0 Sep 12 17:06:58.837833 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:06:58.858976 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:06:59.116621 systemd-resolved[258]: Clock change detected. Flushing caches. Sep 12 17:06:59.118193 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:06:59.175934 kernel: hv_netvsc 002248b8-561c-0022-48b8-561c002248b8 eth0: VF slot 1 added Sep 12 17:06:59.176182 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 12 17:06:59.153304 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:06:59.201176 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 12 17:06:59.201368 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 12 17:06:59.208340 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 12 17:06:59.208563 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 12 17:06:59.202125 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:06:59.232514 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:06:59.232541 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 12 17:06:59.248748 kernel: hv_vmbus: registering driver hv_pci Sep 12 17:06:59.257756 kernel: hv_pci 1f631667-9359-48ba-8f89-a7d635a832da: PCI VMBus probing: Using version 0x10004 Sep 12 17:06:59.267872 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 12 17:06:59.496404 kernel: hv_pci 1f631667-9359-48ba-8f89-a7d635a832da: PCI host bridge to bus 9359:00 Sep 12 17:06:59.496620 kernel: pci_bus 9359:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Sep 12 17:06:59.496837 kernel: pci_bus 9359:00: No busn resource found for root bus, will use [bus 00-ff] Sep 12 17:06:59.503114 kernel: pci 9359:00:02.0: [15b3:1018] type 00 class 0x020000 Sep 12 17:06:59.510811 kernel: pci 9359:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 12 17:06:59.518489 kernel: pci 9359:00:02.0: enabling Extended Tags Sep 12 17:06:59.537788 kernel: pci 9359:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 9359:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Sep 12 17:06:59.550105 kernel: pci_bus 9359:00: busn_res: [bus 00-ff] end is updated to 00 Sep 12 17:06:59.550440 kernel: pci 9359:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 12 17:06:59.592294 kernel: mlx5_core 9359:00:02.0: enabling device (0000 -> 0002) Sep 12 17:06:59.599732 kernel: mlx5_core 9359:00:02.0: firmware version: 16.31.2424 Sep 12 17:06:59.897615 kernel: hv_netvsc 002248b8-561c-0022-48b8-561c002248b8 eth0: VF registering: eth1 Sep 12 17:06:59.897891 kernel: mlx5_core 9359:00:02.0 eth1: joined to eth0 Sep 12 17:06:59.907877 kernel: mlx5_core 9359:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Sep 12 17:06:59.919727 kernel: mlx5_core 9359:00:02.0 enP37721s1: renamed from eth1 Sep 12 17:07:00.041715 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 12 17:07:00.080733 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (491) Sep 12 17:07:00.103737 kernel: BTRFS: device fsid 402ea12e-53e0-48e3-8f03-9fb2de6b0089 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (490) Sep 12 17:07:00.114680 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 12 17:07:00.139034 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 12 17:07:00.155095 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 12 17:07:00.162513 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 12 17:07:00.194933 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:07:00.225763 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:07:00.237732 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:07:01.247769 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:07:01.247831 disk-uuid[600]: The operation has completed successfully. Sep 12 17:07:01.333233 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:07:01.334971 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:07:01.400947 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:07:01.414548 sh[686]: Success Sep 12 17:07:01.444740 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 12 17:07:01.766029 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:07:01.775903 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:07:01.786006 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 12 17:07:01.826266 kernel: BTRFS info (device dm-0): first mount of filesystem 402ea12e-53e0-48e3-8f03-9fb2de6b0089 Sep 12 17:07:01.826330 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:07:01.833373 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 17:07:01.838605 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:07:01.842984 kernel: BTRFS info (device dm-0): using free space tree Sep 12 17:07:02.368266 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:07:02.373762 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:07:02.391010 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:07:02.404943 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 17:07:02.449425 kernel: BTRFS info (device sda6): first mount of filesystem 903d50e4-a739-43b7-a8ad-24da5524f9bc Sep 12 17:07:02.449509 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:07:02.454009 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:07:02.501724 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:07:02.513742 kernel: BTRFS info (device sda6): last unmount of filesystem 903d50e4-a739-43b7-a8ad-24da5524f9bc Sep 12 17:07:02.519207 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:07:02.536850 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 17:07:02.567431 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:07:02.585889 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:07:02.618751 systemd-networkd[867]: lo: Link UP Sep 12 17:07:02.618760 systemd-networkd[867]: lo: Gained carrier Sep 12 17:07:02.624841 systemd-networkd[867]: Enumeration completed Sep 12 17:07:02.625406 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:07:02.637220 systemd-networkd[867]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:07:02.637224 systemd-networkd[867]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:07:02.638070 systemd[1]: Reached target network.target - Network. Sep 12 17:07:03.089291 kernel: mlx5_core 9359:00:02.0 enP37721s1: Link up Sep 12 17:07:03.089627 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 12 17:07:03.178581 kernel: hv_netvsc 002248b8-561c-0022-48b8-561c002248b8 eth0: Data path switched to VF: enP37721s1 Sep 12 17:07:03.178915 systemd-networkd[867]: enP37721s1: Link UP Sep 12 17:07:03.179144 systemd-networkd[867]: eth0: Link UP Sep 12 17:07:03.179561 systemd-networkd[867]: eth0: Gained carrier Sep 12 17:07:03.179573 systemd-networkd[867]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 12 17:07:03.205286 systemd-networkd[867]: enP37721s1: Gained carrier Sep 12 17:07:03.221125 systemd-networkd[867]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 12 17:07:03.944847 ignition[839]: Ignition 2.20.0 Sep 12 17:07:03.944861 ignition[839]: Stage: fetch-offline Sep 12 17:07:03.950662 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:07:03.944906 ignition[839]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:07:03.944915 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:07:03.945029 ignition[839]: parsed url from cmdline: "" Sep 12 17:07:03.974882 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 12 17:07:03.945032 ignition[839]: no config URL provided Sep 12 17:07:03.945037 ignition[839]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:07:03.945044 ignition[839]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:07:03.945049 ignition[839]: failed to fetch config: resource requires networking Sep 12 17:07:03.945352 ignition[839]: Ignition finished successfully Sep 12 17:07:03.999886 ignition[878]: Ignition 2.20.0 Sep 12 17:07:03.999893 ignition[878]: Stage: fetch Sep 12 17:07:04.000103 ignition[878]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:07:04.000113 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:07:04.000389 ignition[878]: parsed url from cmdline: "" Sep 12 17:07:04.000393 ignition[878]: no config URL provided Sep 12 17:07:04.000399 ignition[878]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:07:04.000410 ignition[878]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:07:04.000444 ignition[878]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 12 17:07:04.173620 ignition[878]: GET result: OK Sep 12 17:07:04.173694 ignition[878]: config has been read from IMDS userdata Sep 12 17:07:04.173757 ignition[878]: parsing config with SHA512: d51736be9a26eb91a9a37d821949ba12365f3220b6fb4015e9bbb82757e4cf6bdb5bfa6c2e058cbafb3439b2921cfda02966e48dee020d6b0e50f2fc8fd5c691 Sep 12 17:07:04.178277 unknown[878]: fetched base config from "system" Sep 12 17:07:04.178692 ignition[878]: fetch: fetch complete Sep 12 17:07:04.178286 unknown[878]: fetched base config from "system" Sep 12 17:07:04.178696 ignition[878]: fetch: fetch passed Sep 12 17:07:04.178291 unknown[878]: fetched user config from "azure" Sep 12 17:07:04.178782 ignition[878]: Ignition finished successfully Sep 12 17:07:04.187724 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 17:07:04.209050 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 17:07:04.237435 ignition[885]: Ignition 2.20.0 Sep 12 17:07:04.237443 ignition[885]: Stage: kargs Sep 12 17:07:04.242270 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:07:04.237625 ignition[885]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:07:04.237634 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:07:04.238608 ignition[885]: kargs: kargs passed Sep 12 17:07:04.262025 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 17:07:04.238661 ignition[885]: Ignition finished successfully Sep 12 17:07:04.288073 ignition[891]: Ignition 2.20.0 Sep 12 17:07:04.291947 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Sep 12 17:07:04.288081 ignition[891]: Stage: disks Sep 12 17:07:04.301650 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:07:04.288330 ignition[891]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:07:04.312609 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:07:04.288341 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:07:04.322828 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:07:04.289435 ignition[891]: disks: disks passed Sep 12 17:07:04.334384 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:07:04.289495 ignition[891]: Ignition finished successfully Sep 12 17:07:04.344523 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:07:04.377999 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:07:04.448069 systemd-fsck[901]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Sep 12 17:07:04.458154 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:07:04.476943 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:07:04.535732 kernel: EXT4-fs (sda9): mounted filesystem 397cbf4d-cf5b-4786-906a-df7c3e18edd9 r/w with ordered data mode. Quota mode: none. Sep 12 17:07:04.536980 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:07:04.542764 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:07:04.582862 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:07:04.618008 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (912) Sep 12 17:07:04.618095 kernel: BTRFS info (device sda6): first mount of filesystem 903d50e4-a739-43b7-a8ad-24da5524f9bc Sep 12 17:07:04.625365 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:07:04.625582 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:07:04.635972 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:07:04.652215 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:07:04.651979 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 12 17:07:04.666646 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:07:04.666693 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:07:04.681891 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:07:04.699340 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:07:04.705623 systemd-networkd[867]: eth0: Gained IPv6LL Sep 12 17:07:04.725050 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 12 17:07:05.363609 coreos-metadata[927]: Sep 12 17:07:05.363 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 12 17:07:05.374342 coreos-metadata[927]: Sep 12 17:07:05.374 INFO Fetch successful Sep 12 17:07:05.380212 coreos-metadata[927]: Sep 12 17:07:05.379 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 12 17:07:05.405012 coreos-metadata[927]: Sep 12 17:07:05.404 INFO Fetch successful Sep 12 17:07:05.419295 coreos-metadata[927]: Sep 12 17:07:05.419 INFO wrote hostname ci-4230.2.3-a-bc327f6988 to /sysroot/etc/hostname Sep 12 17:07:05.429270 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 17:07:05.717538 initrd-setup-root[943]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:07:05.761267 initrd-setup-root[950]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:07:05.804432 initrd-setup-root[957]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:07:05.814675 initrd-setup-root[964]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:07:06.799953 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:07:06.816966 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:07:06.834417 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:07:06.846192 kernel: BTRFS info (device sda6): last unmount of filesystem 903d50e4-a739-43b7-a8ad-24da5524f9bc Sep 12 17:07:06.852434 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:07:06.881754 ignition[1031]: INFO : Ignition 2.20.0 Sep 12 17:07:06.890613 ignition[1031]: INFO : Stage: mount Sep 12 17:07:06.890613 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:07:06.890613 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:07:06.890613 ignition[1031]: INFO : mount: mount passed Sep 12 17:07:06.890613 ignition[1031]: INFO : Ignition finished successfully Sep 12 17:07:06.885237 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:07:06.896683 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:07:06.925061 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:07:06.944016 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:07:06.983731 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1044) Sep 12 17:07:06.983798 kernel: BTRFS info (device sda6): first mount of filesystem 903d50e4-a739-43b7-a8ad-24da5524f9bc Sep 12 17:07:06.996716 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:07:07.001190 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:07:07.009736 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:07:07.011643 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 17:07:07.044439 ignition[1061]: INFO : Ignition 2.20.0 Sep 12 17:07:07.044439 ignition[1061]: INFO : Stage: files Sep 12 17:07:07.054272 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:07:07.054272 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:07:07.054272 ignition[1061]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:07:07.078389 ignition[1061]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:07:07.078389 ignition[1061]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:07:07.196251 ignition[1061]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:07:07.204631 ignition[1061]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:07:07.204631 ignition[1061]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:07:07.197318 unknown[1061]: wrote ssh authorized keys file for user: core Sep 12 17:07:07.228616 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 12 17:07:07.228616 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Sep 12 17:07:07.507315 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:07:07.848002 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 12 17:07:07.848002 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 17:07:07.875063 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 12 17:07:08.032332 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 17:07:08.108246 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 17:07:08.108246 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:07:08.133123 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:07:08.133123 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:07:08.133123 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:07:08.133123 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:07:08.133123 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:07:08.133123 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:07:08.133123 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 
17:07:08.133123 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:07:08.214447 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:07:08.214447 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 17:07:08.214447 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 17:07:08.214447 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 17:07:08.214447 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Sep 12 17:07:08.643830 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 17:07:08.913540 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 17:07:08.925947 ignition[1061]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 17:07:08.977515 ignition[1061]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:07:08.993812 ignition[1061]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:07:08.993812 ignition[1061]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 17:07:08.993812 ignition[1061]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:07:08.993812 ignition[1061]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:07:08.993812 ignition[1061]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:07:08.993812 ignition[1061]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:07:08.993812 ignition[1061]: INFO : files: files passed Sep 12 17:07:08.993812 ignition[1061]: INFO : Ignition finished successfully Sep 12 17:07:08.991557 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:07:09.025976 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:07:09.043960 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:07:09.067133 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:07:09.134264 initrd-setup-root-after-ignition[1089]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:07:09.134264 initrd-setup-root-after-ignition[1089]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:07:09.067232 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Sep 12 17:07:09.160624 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:07:09.077087 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:07:09.088945 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:07:09.113010 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:07:09.163074 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:07:09.165288 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:07:09.176502 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:07:09.188718 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:07:09.199991 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:07:09.219995 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:07:09.257650 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:07:09.274998 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:07:09.297497 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:07:09.305964 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:07:09.318099 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:07:09.329938 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:07:09.330155 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:07:09.347879 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:07:09.359788 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:07:09.370000 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:07:09.380509 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:07:09.392359 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:07:09.406237 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:07:09.419486 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:07:09.432167 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:07:09.450336 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:07:09.461417 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:07:09.470171 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:07:09.470377 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:07:09.487204 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:07:09.493733 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:07:09.511771 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:07:09.511889 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:07:09.519486 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:07:09.519690 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Sep 12 17:07:09.537645 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:07:09.537884 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:07:09.565861 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:07:09.566043 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:07:09.579757 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 12 17:07:09.579947 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 17:07:09.621407 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:07:09.630155 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:07:09.645326 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:07:09.661378 ignition[1114]: INFO : Ignition 2.20.0 Sep 12 17:07:09.661378 ignition[1114]: INFO : Stage: umount Sep 12 17:07:09.661378 ignition[1114]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:07:09.661378 ignition[1114]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:07:09.661378 ignition[1114]: INFO : umount: umount passed Sep 12 17:07:09.661378 ignition[1114]: INFO : Ignition finished successfully Sep 12 17:07:09.645527 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:07:09.654762 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:07:09.654899 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:07:09.678251 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:07:09.678366 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:07:09.698529 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:07:09.699419 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:07:09.707099 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:07:09.707175 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:07:09.718015 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 17:07:09.718082 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 17:07:09.729910 systemd[1]: Stopped target network.target - Network. Sep 12 17:07:09.739833 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:07:09.739913 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:07:09.752996 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:07:09.763465 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:07:09.775258 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:07:09.782568 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:07:09.793223 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:07:09.803884 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:07:09.803954 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:07:09.815478 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:07:09.815532 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:07:09.826545 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:07:09.826620 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
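The umount stage above completes the Ignition run ("umount passed", "Ignition finished successfully"). The same run can be reviewed later from the booted system; a small sketch, assuming a normal Flatcar userspace and the result file written by the files stage earlier:

journalctl -t ignition --no-pager   # the ignition[...] entries shown above, selected by syslog identifier
cat /etc/.ignition-result.json      # summary file created at the end of the files stage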
Sep 12 17:07:09.837593 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:07:09.837651 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:07:09.848287 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:07:09.859575 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:07:09.872491 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:07:09.872682 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:07:09.892055 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:07:09.892213 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:07:09.908942 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 17:07:09.909221 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:07:09.909356 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:07:09.920607 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 17:07:09.924352 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:07:09.924432 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:07:09.946938 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:07:09.958187 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:07:09.958316 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:07:09.972567 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:07:09.972657 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:07:09.989688 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:07:09.989763 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:07:09.996446 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:07:09.996503 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:07:10.009095 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:07:10.019764 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:07:10.019882 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 17:07:10.019933 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:07:10.020548 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:07:10.021875 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:07:10.046383 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:07:10.047821 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:07:10.065421 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:07:10.065502 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:07:10.076324 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:07:10.076377 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:07:10.088290 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:07:10.088363 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Sep 12 17:07:10.108087 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:07:10.108173 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:07:10.127341 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:07:10.127429 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:07:10.145944 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:07:10.146008 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:07:10.178042 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:07:10.192561 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:07:10.192663 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:07:10.211895 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 17:07:10.211971 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:07:10.220026 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:07:10.220100 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:07:10.232903 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:07:10.232979 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:07:10.254930 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 12 17:07:10.255012 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:07:10.255436 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:07:10.255530 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:07:10.584721 kernel: hv_netvsc 002248b8-561c-0022-48b8-561c002248b8 eth0: Data path switched from VF: enP37721s1 Sep 12 17:07:10.605615 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:07:10.605796 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:07:10.617766 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:07:10.635252 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:07:10.656947 systemd[1]: Switching root. Sep 12 17:07:10.737750 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Sep 12 17:07:10.737826 systemd-journald[218]: Journal stopped Sep 12 17:07:18.334056 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:07:18.334082 kernel: SELinux: policy capability open_perms=1 Sep 12 17:07:18.334092 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:07:18.334100 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:07:18.334110 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:07:18.334118 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:07:18.334127 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:07:18.334135 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:07:18.334146 kernel: audit: type=1403 audit(1757696831.952:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:07:18.334156 systemd[1]: Successfully loaded SELinux policy in 318.162ms. Sep 12 17:07:18.334167 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.987ms. 
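After the switch root, the journal resumes with the SELinux policy load (318.162ms) and the policy capabilities in effect. The log does not say whether the policy is enforcing or permissive; a quick check from the running system, assuming the SELinux filesystem is mounted at its usual /sys/fs/selinux location:

cat /sys/fs/selinux/enforce   # 1 = enforcing, 0 = permissive; present whenever a policy is loaded
getenforce                    # same information via policycoreutils, if the tool is shipped in the image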
Sep 12 17:07:18.334177 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 17:07:18.334186 systemd[1]: Detected virtualization microsoft. Sep 12 17:07:18.334194 systemd[1]: Detected architecture arm64. Sep 12 17:07:18.334203 systemd[1]: Detected first boot. Sep 12 17:07:18.334214 systemd[1]: Hostname set to . Sep 12 17:07:18.334222 systemd[1]: Initializing machine ID from random generator. Sep 12 17:07:18.334231 zram_generator::config[1159]: No configuration found. Sep 12 17:07:18.334240 kernel: NET: Registered PF_VSOCK protocol family Sep 12 17:07:18.334248 systemd[1]: Populated /etc with preset unit settings. Sep 12 17:07:18.334258 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 17:07:18.334267 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 17:07:18.334277 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 17:07:18.334286 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 17:07:18.334295 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:07:18.334305 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 17:07:18.334314 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:07:18.334322 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:07:18.334331 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:07:18.334343 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:07:18.334352 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:07:18.334361 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:07:18.334369 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:07:18.334378 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:07:18.334387 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:07:18.334396 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:07:18.334405 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:07:18.334416 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:07:18.334424 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 12 17:07:18.334433 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:07:18.334444 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 17:07:18.334454 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 17:07:18.334463 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 17:07:18.334472 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
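The lines above also record systemd 256.8 detecting Microsoft virtualization on arm64 and treating this as a first boot, with the machine ID initialized from the random generator. Those facts can be confirmed afterwards with standard tools:

systemd-detect-virt    # reports "microsoft" for Hyper-V/Azure guests
uname -m               # aarch64
cat /etc/machine-id    # the ID generated on this first boot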
Sep 12 17:07:18.334481 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:07:18.334492 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:07:18.334501 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:07:18.334510 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:07:18.334519 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:07:18.334528 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:07:18.334539 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 17:07:18.334550 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:07:18.334559 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:07:18.334569 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:07:18.334578 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:07:18.334587 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 17:07:18.334596 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:07:18.334606 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:07:18.334616 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:07:18.334626 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:07:18.334635 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:07:18.334645 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:07:18.334654 systemd[1]: Reached target machines.target - Containers. Sep 12 17:07:18.334664 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:07:18.334673 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:07:18.334683 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:07:18.334693 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:07:18.334715 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:07:18.334726 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:07:18.334735 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:07:18.334744 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:07:18.334754 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:07:18.334764 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:07:18.334773 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 17:07:18.334784 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 17:07:18.334793 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 17:07:18.334802 systemd[1]: Stopped systemd-fsck-usr.service. 
Sep 12 17:07:18.334812 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:07:18.334821 kernel: loop: module loaded Sep 12 17:07:18.334830 kernel: fuse: init (API version 7.39) Sep 12 17:07:18.334839 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:07:18.334848 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:07:18.334857 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:07:18.334868 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:07:18.334877 kernel: ACPI: bus type drm_connector registered Sep 12 17:07:18.334886 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 17:07:18.334896 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:07:18.334905 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 17:07:18.334915 systemd[1]: Stopped verity-setup.service. Sep 12 17:07:18.334924 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:07:18.334955 systemd-journald[1249]: Collecting audit messages is disabled. Sep 12 17:07:18.334977 systemd-journald[1249]: Journal started Sep 12 17:07:18.334999 systemd-journald[1249]: Runtime Journal (/run/log/journal/37e8957d076542e2bda00a25eda36956) is 8M, max 78.5M, 70.5M free. Sep 12 17:07:17.215918 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:07:17.221878 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 12 17:07:17.222262 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 17:07:17.222605 systemd[1]: systemd-journald.service: Consumed 3.515s CPU time. Sep 12 17:07:18.351993 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:07:18.353011 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:07:18.361016 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:07:18.367125 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:07:18.374195 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:07:18.381039 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:07:18.386652 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:07:18.394343 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:07:18.402167 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:07:18.402343 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:07:18.411109 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:07:18.411284 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:07:18.418606 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:07:18.418796 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:07:18.425913 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:07:18.426087 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:07:18.434063 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Sep 12 17:07:18.434225 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:07:18.441426 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:07:18.441609 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:07:18.448911 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:07:18.456680 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:07:18.465419 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:07:18.473536 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 17:07:18.481577 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:07:18.501058 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:07:18.517840 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:07:18.525497 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:07:18.532246 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:07:18.532291 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:07:18.539062 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 17:07:18.548040 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:07:18.556393 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:07:18.563385 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:07:18.564635 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:07:18.573156 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:07:18.580254 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:07:18.581827 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:07:18.588576 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:07:18.589967 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:07:18.597928 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:07:18.610604 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:07:18.619605 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 17:07:18.631143 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:07:18.640320 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:07:18.650229 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:07:18.653312 systemd-journald[1249]: Time spent on flushing to /var/log/journal/37e8957d076542e2bda00a25eda36956 is 25.304ms for 915 entries. 
Sep 12 17:07:18.653312 systemd-journald[1249]: System Journal (/var/log/journal/37e8957d076542e2bda00a25eda36956) is 8M, max 2.6G, 2.6G free. Sep 12 17:07:18.729619 systemd-journald[1249]: Received client request to flush runtime journal. Sep 12 17:07:18.729657 kernel: loop0: detected capacity change from 0 to 207008 Sep 12 17:07:18.682309 udevadm[1302]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 12 17:07:18.682529 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:07:18.692094 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:07:18.712936 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 17:07:18.731375 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:07:18.750238 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:07:18.751136 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:07:18.793479 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:07:18.794512 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 17:07:18.817046 kernel: loop1: detected capacity change from 0 to 123192 Sep 12 17:07:18.904689 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Sep 12 17:07:18.904729 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Sep 12 17:07:18.909665 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:07:18.925042 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:07:19.315741 kernel: loop2: detected capacity change from 0 to 28720 Sep 12 17:07:19.461289 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:07:19.475035 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:07:19.492562 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Sep 12 17:07:19.492586 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Sep 12 17:07:19.497132 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:07:19.766730 kernel: loop3: detected capacity change from 0 to 113512 Sep 12 17:07:20.261982 kernel: loop4: detected capacity change from 0 to 207008 Sep 12 17:07:20.283773 kernel: loop5: detected capacity change from 0 to 123192 Sep 12 17:07:20.300776 kernel: loop6: detected capacity change from 0 to 28720 Sep 12 17:07:20.317887 kernel: loop7: detected capacity change from 0 to 113512 Sep 12 17:07:20.342477 (sd-merge)[1327]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Sep 12 17:07:20.343023 (sd-merge)[1327]: Merged extensions into '/usr'. Sep 12 17:07:20.347767 systemd[1]: Reload requested from client PID 1299 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:07:20.347789 systemd[1]: Reloading... Sep 12 17:07:20.434086 zram_generator::config[1354]: No configuration found. Sep 12 17:07:20.573444 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:07:20.646802 systemd[1]: Reloading finished in 298 ms. 
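The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-azure extension images onto /usr, followed by a service manager reload. The merged state can be inspected with systemd's own tooling, for example:

systemd-sysext status    # which hierarchies currently have extensions merged
systemd-sysext list      # the extension images that were picked up (e.g. the kubernetes sysext linked by Ignition earlier)
systemd-sysext refresh   # re-merge after adding or removing an image under /etc/extensions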
Sep 12 17:07:20.661729 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:07:20.670804 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:07:20.690076 systemd[1]: Starting ensure-sysext.service... Sep 12 17:07:20.696082 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:07:20.705993 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:07:20.741484 systemd-udevd[1413]: Using default interface naming scheme 'v255'. Sep 12 17:07:20.753353 systemd-tmpfiles[1412]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:07:20.753574 systemd-tmpfiles[1412]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:07:20.754265 systemd-tmpfiles[1412]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:07:20.754484 systemd-tmpfiles[1412]: ACLs are not supported, ignoring. Sep 12 17:07:20.754537 systemd-tmpfiles[1412]: ACLs are not supported, ignoring. Sep 12 17:07:20.768473 systemd[1]: Reload requested from client PID 1411 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:07:20.768490 systemd[1]: Reloading... Sep 12 17:07:20.825404 systemd-tmpfiles[1412]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:07:20.825422 systemd-tmpfiles[1412]: Skipping /boot Sep 12 17:07:20.835858 systemd-tmpfiles[1412]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:07:20.835873 systemd-tmpfiles[1412]: Skipping /boot Sep 12 17:07:20.849747 zram_generator::config[1440]: No configuration found. Sep 12 17:07:20.978517 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:07:21.052612 systemd[1]: Reloading finished in 283 ms. Sep 12 17:07:21.061642 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:07:21.090978 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:07:21.149982 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:07:21.158885 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:07:21.170050 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:07:21.179309 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:07:21.199557 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv... Sep 12 17:07:21.205861 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:07:21.212910 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:07:21.221417 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:07:21.229942 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:07:21.239250 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:07:21.246929 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
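Both reloads above repeat the warning that docker.socket's ListenStream= points below the legacy /var/run/ directory, with systemd rewriting it to /run/docker.sock at runtime. Until the vendor unit is updated as the message asks, the path can be set explicitly with a drop-in; a sketch, assuming the stock docker.socket shipped by the docker-flatcar extension:

mkdir -p /etc/systemd/system/docker.socket.d
cat > /etc/systemd/system/docker.socket.d/10-runpath.conf <<'EOF'
[Socket]
# Clear the inherited list, then listen on the modern path instead of /var/run/docker.sock.
ListenStream=
ListenStream=/run/docker.sock
EOF
systemctl daemon-reload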
Sep 12 17:07:21.246978 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:07:21.247042 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:07:21.254012 systemd[1]: Finished ensure-sysext.service. Sep 12 17:07:21.259471 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:07:21.259724 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:07:21.268429 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:07:21.269105 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:07:21.276175 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:07:21.276361 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:07:21.284402 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:07:21.284606 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:07:21.300579 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:07:21.309304 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:07:21.309375 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:07:21.316044 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:07:21.370260 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:07:21.432560 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:07:21.532513 systemd-resolved[1503]: Positive Trust Anchors: Sep 12 17:07:21.532962 systemd-resolved[1503]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:07:21.533040 systemd-resolved[1503]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:07:21.535039 augenrules[1541]: No rules Sep 12 17:07:21.536495 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:07:21.537763 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:07:21.546542 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:07:21.563894 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:07:21.569786 systemd-resolved[1503]: Using system hostname 'ci-4230.2.3-a-bc327f6988'. Sep 12 17:07:21.573646 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:07:21.587869 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:07:21.656673 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
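systemd-resolved comes up with the DNSSEC root trust anchor and the usual negative trust anchors, and picks up the hostname 'ci-4230.2.3-a-bc327f6988'. Its runtime state can be checked with the standard clients:

resolvectl status   # per-link DNS servers and DNSSEC state
hostnamectl         # the static/transient hostname in use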
Sep 12 17:07:21.751742 kernel: hv_vmbus: registering driver hv_balloon Sep 12 17:07:21.751849 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 17:07:21.760389 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 12 17:07:21.766931 kernel: hv_balloon: Memory hot add disabled on ARM64 Sep 12 17:07:21.778887 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped. Sep 12 17:07:21.819746 kernel: hv_vmbus: registering driver hyperv_fb Sep 12 17:07:21.819147 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:07:21.839156 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 12 17:07:21.840890 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 12 17:07:21.842539 systemd-networkd[1553]: lo: Link UP Sep 12 17:07:21.843050 systemd-networkd[1553]: lo: Gained carrier Sep 12 17:07:21.845567 systemd-networkd[1553]: Enumeration completed Sep 12 17:07:21.849646 kernel: Console: switching to colour dummy device 80x25 Sep 12 17:07:21.850332 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:07:21.850564 systemd-networkd[1553]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:07:21.850568 systemd-networkd[1553]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:07:21.862758 kernel: Console: switching to colour frame buffer device 128x48 Sep 12 17:07:21.870139 systemd[1]: Reached target network.target - Network. Sep 12 17:07:21.888082 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 17:07:21.904105 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:07:21.930722 kernel: mlx5_core 9359:00:02.0 enP37721s1: Link up Sep 12 17:07:21.935550 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1550) Sep 12 17:07:21.935576 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 12 17:07:21.938262 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:07:21.941500 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:07:21.955061 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:07:21.966910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:07:21.992036 kernel: hv_netvsc 002248b8-561c-0022-48b8-561c002248b8 eth0: Data path switched to VF: enP37721s1 Sep 12 17:07:21.994913 systemd-networkd[1553]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:07:21.994952 systemd-networkd[1553]: enP37721s1: Link UP Sep 12 17:07:21.995287 systemd-networkd[1553]: eth0: Link UP Sep 12 17:07:21.995290 systemd-networkd[1553]: eth0: Gained carrier Sep 12 17:07:21.995304 systemd-networkd[1553]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 12 17:07:22.000388 systemd-networkd[1553]: enP37721s1: Gained carrier Sep 12 17:07:22.012992 systemd-networkd[1553]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 12 17:07:22.017686 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 17:07:22.065756 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 12 17:07:22.082904 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:07:22.152153 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 17:07:22.168066 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 17:07:22.175987 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:07:22.274788 lvm[1661]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:07:22.308437 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 17:07:22.317651 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:07:22.329034 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 17:07:22.341567 lvm[1664]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:07:22.371568 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 17:07:22.946411 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:07:23.475027 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:07:23.483049 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:07:23.638836 systemd-networkd[1553]: eth0: Gained IPv6LL Sep 12 17:07:23.641200 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:07:23.649288 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:07:26.991046 ldconfig[1294]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:07:27.003238 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:07:27.015875 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:07:27.035909 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:07:27.044294 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:07:27.050786 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:07:27.057854 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:07:27.067567 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:07:27.074550 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:07:27.083020 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
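During the link bring-up above, networkd matched eth0 and enP37721s1 with the catch-all zz-default.network and warned that the match is based on a potentially unpredictable interface name, before eth0 acquired 10.200.20.12/24 from 168.63.129.16. Pinning the match to the NIC's MAC address avoids relying on the kernel name; a sketch of such a unit, where the MAC is read off the hv_netvsc line above and everything else is illustrative:

cat > /etc/systemd/network/10-azure-synthetic.network <<'EOF'
[Match]
# Match the synthetic Hyper-V NIC by MAC and driver rather than by its kernel name.
MACAddress=00:22:48:b8:56:1c
Driver=hv_netvsc
[Network]
DHCP=yes
EOF
networkctl reload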
Sep 12 17:07:27.090885 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:07:27.090967 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:07:27.096565 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:07:27.114768 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:07:27.123026 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:07:27.131564 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 17:07:27.139265 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 17:07:27.152810 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 17:07:27.164319 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:07:27.184907 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 17:07:27.193096 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:07:27.200271 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:07:27.206898 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:07:27.213110 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:07:27.213142 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:07:27.232837 systemd[1]: Starting chronyd.service - NTP client/server... Sep 12 17:07:27.240432 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:07:27.250947 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 17:07:27.269906 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:07:27.276862 (chronyd)[1676]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Sep 12 17:07:27.278924 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:07:27.286782 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:07:27.292895 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:07:27.293085 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Sep 12 17:07:27.294548 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Sep 12 17:07:27.302805 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Sep 12 17:07:27.304686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:07:27.315943 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Sep 12 17:07:27.318666 KVP[1685]: KVP starting; pid is:1685 Sep 12 17:07:27.322677 jq[1683]: false Sep 12 17:07:27.328325 KVP[1685]: KVP LIC Version: 3.1 Sep 12 17:07:27.328752 kernel: hv_utils: KVP IC version 4.0 Sep 12 17:07:27.329908 chronyd[1690]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Sep 12 17:07:27.335175 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:07:27.342786 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:07:27.354531 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:07:27.365374 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:07:27.383341 chronyd[1690]: Timezone right/UTC failed leap second check, ignoring Sep 12 17:07:27.381009 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:07:27.390017 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:07:27.391557 chronyd[1690]: Loaded seccomp filter (level 2) Sep 12 17:07:27.390605 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:07:27.400733 extend-filesystems[1684]: Found loop4 Sep 12 17:07:27.400733 extend-filesystems[1684]: Found loop5 Sep 12 17:07:27.400733 extend-filesystems[1684]: Found loop6 Sep 12 17:07:27.400733 extend-filesystems[1684]: Found loop7 Sep 12 17:07:27.400733 extend-filesystems[1684]: Found sda Sep 12 17:07:27.400733 extend-filesystems[1684]: Found sda1 Sep 12 17:07:27.400733 extend-filesystems[1684]: Found sda2 Sep 12 17:07:27.400733 extend-filesystems[1684]: Found sda3 Sep 12 17:07:27.400733 extend-filesystems[1684]: Found usr Sep 12 17:07:27.400733 extend-filesystems[1684]: Found sda4 Sep 12 17:07:27.400733 extend-filesystems[1684]: Found sda6 Sep 12 17:07:27.400733 extend-filesystems[1684]: Found sda7 Sep 12 17:07:27.400733 extend-filesystems[1684]: Found sda9 Sep 12 17:07:27.400733 extend-filesystems[1684]: Checking size of /dev/sda9 Sep 12 17:07:27.403943 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:07:27.534556 update_engine[1700]: I20250912 17:07:27.497134 1700 main.cc:92] Flatcar Update Engine starting Sep 12 17:07:27.413637 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:07:27.536362 jq[1705]: true Sep 12 17:07:27.446983 systemd[1]: Started chronyd.service - NTP client/server. Sep 12 17:07:27.462206 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:07:27.462445 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:07:27.470136 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:07:27.470527 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:07:27.490012 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:07:27.490258 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:07:27.504639 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
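chronyd 4.6.1 starts with a seccomp filter loaded, alongside the Hyper-V KVP daemon for host-guest communication. Once the system is up, time synchronisation can be verified with chrony's client, assuming chronyc is shipped next to chronyd:

chronyc tracking     # offset, stratum and the source currently steering the clock
chronyc sources -v   # configured sources (on Azure typically the /dev/ptp_hyperv PHC expected earlier in the boot)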
Sep 12 17:07:27.540818 jq[1718]: true Sep 12 17:07:27.557051 (ntainerd)[1719]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:07:27.560836 systemd-logind[1697]: New seat seat0. Sep 12 17:07:27.563656 systemd-logind[1697]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 17:07:27.564099 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:07:27.573045 extend-filesystems[1684]: Old size kept for /dev/sda9 Sep 12 17:07:27.573045 extend-filesystems[1684]: Found sr0 Sep 12 17:07:27.574584 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:07:27.574984 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:07:27.657384 tar[1712]: linux-arm64/LICENSE Sep 12 17:07:27.657384 tar[1712]: linux-arm64/helm Sep 12 17:07:27.709948 dbus-daemon[1679]: [system] SELinux support is enabled Sep 12 17:07:27.710161 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:07:27.724073 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:07:27.724756 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:07:27.734918 update_engine[1700]: I20250912 17:07:27.733314 1700 update_check_scheduler.cc:74] Next update check in 6m57s Sep 12 17:07:27.726073 dbus-daemon[1679]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 17:07:27.738936 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:07:27.738963 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:07:27.753565 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:07:27.763130 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1752) Sep 12 17:07:27.765758 bash[1749]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:07:27.778238 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:07:27.788542 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:07:27.801878 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
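update_engine comes up and schedules its next check in 6m57s, locksmithd (the cluster reboot manager) starts, and the core user's authorized_keys file is refreshed. On Flatcar both updaters ship query clients; a brief sketch, assuming the standard tooling is present:

update_engine_client -status   # last check time, current operation and any new version staged
locksmithctl status            # the reboot-lock semaphore and its current holders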
Sep 12 17:07:27.870793 coreos-metadata[1678]: Sep 12 17:07:27.870 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 12 17:07:27.876948 coreos-metadata[1678]: Sep 12 17:07:27.876 INFO Fetch successful Sep 12 17:07:27.877211 coreos-metadata[1678]: Sep 12 17:07:27.877 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 12 17:07:27.882682 coreos-metadata[1678]: Sep 12 17:07:27.882 INFO Fetch successful Sep 12 17:07:27.882682 coreos-metadata[1678]: Sep 12 17:07:27.882 INFO Fetching http://168.63.129.16/machine/eeefd176-94d1-45af-84d1-90de2ba0dd9e/4790a9fc%2D2e0a%2D4496%2D8eaf%2D5ddf03f00900.%5Fci%2D4230.2.3%2Da%2Dbc327f6988?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 12 17:07:27.886621 coreos-metadata[1678]: Sep 12 17:07:27.885 INFO Fetch successful Sep 12 17:07:27.886621 coreos-metadata[1678]: Sep 12 17:07:27.885 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 12 17:07:27.899243 coreos-metadata[1678]: Sep 12 17:07:27.898 INFO Fetch successful Sep 12 17:07:27.982753 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 17:07:27.990244 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:07:28.169720 locksmithd[1778]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:07:28.235568 sshd_keygen[1717]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:07:28.241245 containerd[1719]: time="2025-09-12T17:07:28.241134960Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 12 17:07:28.287831 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:07:28.311088 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:07:28.330943 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 12 17:07:28.331399 containerd[1719]: time="2025-09-12T17:07:28.331363200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:07:28.332959 containerd[1719]: time="2025-09-12T17:07:28.332922120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:07:28.333041 containerd[1719]: time="2025-09-12T17:07:28.333028160Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:07:28.333094 containerd[1719]: time="2025-09-12T17:07:28.333081960Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 17:07:28.333320 containerd[1719]: time="2025-09-12T17:07:28.333302720Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 17:07:28.333401 containerd[1719]: time="2025-09-12T17:07:28.333387840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:07:28.333533 containerd[1719]: time="2025-09-12T17:07:28.333516320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:07:28.333609 containerd[1719]: time="2025-09-12T17:07:28.333595120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:07:28.333901 containerd[1719]: time="2025-09-12T17:07:28.333880280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:07:28.333963 containerd[1719]: time="2025-09-12T17:07:28.333950320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 17:07:28.334037 containerd[1719]: time="2025-09-12T17:07:28.334023440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:07:28.334094 containerd[1719]: time="2025-09-12T17:07:28.334082800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:07:28.334225 containerd[1719]: time="2025-09-12T17:07:28.334210600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:07:28.334525 containerd[1719]: time="2025-09-12T17:07:28.334506000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:07:28.334739 containerd[1719]: time="2025-09-12T17:07:28.334720600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:07:28.334809 containerd[1719]: time="2025-09-12T17:07:28.334795480Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:07:28.335118 containerd[1719]: time="2025-09-12T17:07:28.334947600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 17:07:28.335118 containerd[1719]: time="2025-09-12T17:07:28.335009440Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:07:28.351304 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:07:28.352140 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:07:28.371831 containerd[1719]: time="2025-09-12T17:07:28.371788600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:07:28.372045 containerd[1719]: time="2025-09-12T17:07:28.371999720Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 17:07:28.372374 containerd[1719]: time="2025-09-12T17:07:28.372355840Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:07:28.372460 containerd[1719]: time="2025-09-12T17:07:28.372447320Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 17:07:28.372660 containerd[1719]: time="2025-09-12T17:07:28.372526240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Sep 12 17:07:28.372998 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:07:28.374849 containerd[1719]: time="2025-09-12T17:07:28.373156360Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 17:07:28.374849 containerd[1719]: time="2025-09-12T17:07:28.373389920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 17:07:28.374849 containerd[1719]: time="2025-09-12T17:07:28.373508520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:07:28.374849 containerd[1719]: time="2025-09-12T17:07:28.373524120Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:07:28.374849 containerd[1719]: time="2025-09-12T17:07:28.373538320Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:07:28.374849 containerd[1719]: time="2025-09-12T17:07:28.373552720Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:07:28.374849 containerd[1719]: time="2025-09-12T17:07:28.373565360Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:07:28.374849 containerd[1719]: time="2025-09-12T17:07:28.373578320Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:07:28.374849 containerd[1719]: time="2025-09-12T17:07:28.373591440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:07:28.374849 containerd[1719]: time="2025-09-12T17:07:28.373605520Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:07:28.374849 containerd[1719]: time="2025-09-12T17:07:28.373618040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:07:28.374849 containerd[1719]: time="2025-09-12T17:07:28.373631920Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:07:28.374849 containerd[1719]: time="2025-09-12T17:07:28.373644160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:07:28.374849 containerd[1719]: time="2025-09-12T17:07:28.373665040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:07:28.375190 containerd[1719]: time="2025-09-12T17:07:28.373679120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:07:28.375190 containerd[1719]: time="2025-09-12T17:07:28.373690600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:07:28.375190 containerd[1719]: time="2025-09-12T17:07:28.373736680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:07:28.375190 containerd[1719]: time="2025-09-12T17:07:28.373768120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Sep 12 17:07:28.375190 containerd[1719]: time="2025-09-12T17:07:28.373785320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:07:28.375190 containerd[1719]: time="2025-09-12T17:07:28.373797240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:07:28.375190 containerd[1719]: time="2025-09-12T17:07:28.373810080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 17:07:28.375190 containerd[1719]: time="2025-09-12T17:07:28.373822680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:07:28.375190 containerd[1719]: time="2025-09-12T17:07:28.373837960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:07:28.375190 containerd[1719]: time="2025-09-12T17:07:28.373849280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:07:28.375190 containerd[1719]: time="2025-09-12T17:07:28.373861440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:07:28.375190 containerd[1719]: time="2025-09-12T17:07:28.373873880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:07:28.375190 containerd[1719]: time="2025-09-12T17:07:28.373887800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:07:28.375190 containerd[1719]: time="2025-09-12T17:07:28.373907840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:07:28.375190 containerd[1719]: time="2025-09-12T17:07:28.373921800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:07:28.375479 containerd[1719]: time="2025-09-12T17:07:28.373932920Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:07:28.375479 containerd[1719]: time="2025-09-12T17:07:28.373980760Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:07:28.375479 containerd[1719]: time="2025-09-12T17:07:28.373999960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:07:28.375479 containerd[1719]: time="2025-09-12T17:07:28.374011280Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:07:28.375479 containerd[1719]: time="2025-09-12T17:07:28.374023880Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:07:28.375479 containerd[1719]: time="2025-09-12T17:07:28.374032800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:07:28.375479 containerd[1719]: time="2025-09-12T17:07:28.374046440Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:07:28.375479 containerd[1719]: time="2025-09-12T17:07:28.374056400Z" level=info msg="NRI interface is disabled by configuration." 
Sep 12 17:07:28.375479 containerd[1719]: time="2025-09-12T17:07:28.374067640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 17:07:28.375668 containerd[1719]: time="2025-09-12T17:07:28.374356160Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:07:28.375668 containerd[1719]: time="2025-09-12T17:07:28.374402760Z" level=info msg="Connect containerd service" Sep 12 17:07:28.375668 containerd[1719]: time="2025-09-12T17:07:28.374443760Z" level=info msg="using legacy CRI server" Sep 12 17:07:28.375668 containerd[1719]: time="2025-09-12T17:07:28.374450840Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:07:28.375668 containerd[1719]: time="2025-09-12T17:07:28.374577520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:07:28.376481 containerd[1719]: time="2025-09-12T17:07:28.376455600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:07:28.376917 containerd[1719]: time="2025-09-12T17:07:28.376899360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:07:28.377033 containerd[1719]: time="2025-09-12T17:07:28.377020160Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:07:28.377226 containerd[1719]: time="2025-09-12T17:07:28.377200160Z" level=info msg="Start subscribing containerd event" Sep 12 17:07:28.377310 containerd[1719]: time="2025-09-12T17:07:28.377297560Z" level=info msg="Start recovering state" Sep 12 17:07:28.377423 containerd[1719]: time="2025-09-12T17:07:28.377410440Z" level=info msg="Start event monitor" Sep 12 17:07:28.377486 containerd[1719]: time="2025-09-12T17:07:28.377475360Z" level=info msg="Start snapshots syncer" Sep 12 17:07:28.377563 containerd[1719]: time="2025-09-12T17:07:28.377552160Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:07:28.377633 containerd[1719]: time="2025-09-12T17:07:28.377611960Z" level=info msg="Start streaming server" Sep 12 17:07:28.377781 containerd[1719]: time="2025-09-12T17:07:28.377769240Z" level=info msg="containerd successfully booted in 0.142489s" Sep 12 17:07:28.385541 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:07:28.404583 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 12 17:07:28.436925 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:07:28.452101 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:07:28.459032 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 12 17:07:28.470296 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:07:28.521613 tar[1712]: linux-arm64/README.md Sep 12 17:07:28.532035 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:07:28.669154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:07:28.676992 (kubelet)[1866]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:07:28.679973 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:07:28.690761 systemd[1]: Startup finished in 747ms (kernel) + 14.672s (initrd) + 17.055s (userspace) = 32.475s. Sep 12 17:07:29.168240 kubelet[1866]: E0912 17:07:29.168183 1866 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:07:29.171380 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:07:29.171677 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:07:29.173785 systemd[1]: kubelet.service: Consumed 735ms CPU time, 254.3M memory peak. Sep 12 17:07:29.505970 login[1856]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:07:29.506449 login[1857]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:07:29.513479 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Sep 12 17:07:29.518967 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:07:29.524360 systemd-logind[1697]: New session 2 of user core. Sep 12 17:07:29.527444 systemd-logind[1697]: New session 1 of user core. Sep 12 17:07:29.548445 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:07:29.556111 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:07:29.569811 (systemd)[1879]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:07:29.572327 systemd-logind[1697]: New session c1 of user core. Sep 12 17:07:29.961025 systemd[1879]: Queued start job for default target default.target. Sep 12 17:07:29.969613 systemd[1879]: Created slice app.slice - User Application Slice. Sep 12 17:07:29.969648 systemd[1879]: Reached target paths.target - Paths. Sep 12 17:07:29.969690 systemd[1879]: Reached target timers.target - Timers. Sep 12 17:07:29.970997 systemd[1879]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:07:29.983450 systemd[1879]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:07:29.983560 systemd[1879]: Reached target sockets.target - Sockets. Sep 12 17:07:29.983600 systemd[1879]: Reached target basic.target - Basic System. Sep 12 17:07:29.983629 systemd[1879]: Reached target default.target - Main User Target. Sep 12 17:07:29.983654 systemd[1879]: Startup finished in 403ms. Sep 12 17:07:29.983814 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:07:29.992870 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:07:29.993771 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:07:30.507147 waagent[1854]: 2025-09-12T17:07:30.507041Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Sep 12 17:07:30.513037 waagent[1854]: 2025-09-12T17:07:30.512962Z INFO Daemon Daemon OS: flatcar 4230.2.3 Sep 12 17:07:30.517766 waagent[1854]: 2025-09-12T17:07:30.517707Z INFO Daemon Daemon Python: 3.11.11 Sep 12 17:07:30.522435 waagent[1854]: 2025-09-12T17:07:30.522366Z INFO Daemon Daemon Run daemon Sep 12 17:07:30.526807 waagent[1854]: 2025-09-12T17:07:30.526745Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.2.3' Sep 12 17:07:30.535910 waagent[1854]: 2025-09-12T17:07:30.535841Z INFO Daemon Daemon Using waagent for provisioning Sep 12 17:07:30.541670 waagent[1854]: 2025-09-12T17:07:30.541618Z INFO Daemon Daemon Activate resource disk Sep 12 17:07:30.546501 waagent[1854]: 2025-09-12T17:07:30.546441Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 12 17:07:30.560182 waagent[1854]: 2025-09-12T17:07:30.560093Z INFO Daemon Daemon Found device: None Sep 12 17:07:30.565080 waagent[1854]: 2025-09-12T17:07:30.565011Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 12 17:07:30.575119 waagent[1854]: 2025-09-12T17:07:30.575058Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 12 17:07:30.587589 waagent[1854]: 2025-09-12T17:07:30.587537Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 12 17:07:30.593370 waagent[1854]: 2025-09-12T17:07:30.593317Z INFO Daemon Daemon Running default provisioning handler Sep 12 17:07:30.604616 waagent[1854]: 2025-09-12T17:07:30.604542Z INFO Daemon Daemon 
Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Sep 12 17:07:30.618759 waagent[1854]: 2025-09-12T17:07:30.618667Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 12 17:07:30.630050 waagent[1854]: 2025-09-12T17:07:30.629986Z INFO Daemon Daemon cloud-init is enabled: False Sep 12 17:07:30.635556 waagent[1854]: 2025-09-12T17:07:30.635498Z INFO Daemon Daemon Copying ovf-env.xml Sep 12 17:07:30.757587 waagent[1854]: 2025-09-12T17:07:30.757417Z INFO Daemon Daemon Successfully mounted dvd Sep 12 17:07:30.815955 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 12 17:07:30.817597 waagent[1854]: 2025-09-12T17:07:30.817498Z INFO Daemon Daemon Detect protocol endpoint Sep 12 17:07:30.822732 waagent[1854]: 2025-09-12T17:07:30.822649Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 12 17:07:30.828730 waagent[1854]: 2025-09-12T17:07:30.828658Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Sep 12 17:07:30.835964 waagent[1854]: 2025-09-12T17:07:30.835903Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 12 17:07:30.841953 waagent[1854]: 2025-09-12T17:07:30.841898Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 12 17:07:30.847396 waagent[1854]: 2025-09-12T17:07:30.847343Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 12 17:07:30.910327 waagent[1854]: 2025-09-12T17:07:30.910276Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 12 17:07:30.918108 waagent[1854]: 2025-09-12T17:07:30.918077Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 12 17:07:30.925016 waagent[1854]: 2025-09-12T17:07:30.924954Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 12 17:07:31.354742 waagent[1854]: 2025-09-12T17:07:31.354526Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 12 17:07:31.362049 waagent[1854]: 2025-09-12T17:07:31.361965Z INFO Daemon Daemon Forcing an update of the goal state. Sep 12 17:07:31.372118 waagent[1854]: 2025-09-12T17:07:31.372061Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 12 17:07:31.397095 waagent[1854]: 2025-09-12T17:07:31.397046Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 12 17:07:31.403386 waagent[1854]: 2025-09-12T17:07:31.403331Z INFO Daemon Sep 12 17:07:31.406300 waagent[1854]: 2025-09-12T17:07:31.406251Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 5651f1fa-7c61-4866-a622-bc766934d573 eTag: 4148465225420366375 source: Fabric] Sep 12 17:07:31.419069 waagent[1854]: 2025-09-12T17:07:31.419019Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Sep 12 17:07:31.427163 waagent[1854]: 2025-09-12T17:07:31.427111Z INFO Daemon Sep 12 17:07:31.430435 waagent[1854]: 2025-09-12T17:07:31.430361Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 12 17:07:31.442751 waagent[1854]: 2025-09-12T17:07:31.442688Z INFO Daemon Daemon Downloading artifacts profile blob Sep 12 17:07:31.614384 waagent[1854]: 2025-09-12T17:07:31.614228Z INFO Daemon Downloaded certificate {'thumbprint': '87A2832D1440D14312213F32869FCAB104B24144', 'hasPrivateKey': True} Sep 12 17:07:31.624861 waagent[1854]: 2025-09-12T17:07:31.624798Z INFO Daemon Fetch goal state completed Sep 12 17:07:31.676968 waagent[1854]: 2025-09-12T17:07:31.676916Z INFO Daemon Daemon Starting provisioning Sep 12 17:07:31.682724 waagent[1854]: 2025-09-12T17:07:31.682648Z INFO Daemon Daemon Handle ovf-env.xml. Sep 12 17:07:31.687849 waagent[1854]: 2025-09-12T17:07:31.687790Z INFO Daemon Daemon Set hostname [ci-4230.2.3-a-bc327f6988] Sep 12 17:07:31.720720 waagent[1854]: 2025-09-12T17:07:31.718486Z INFO Daemon Daemon Publish hostname [ci-4230.2.3-a-bc327f6988] Sep 12 17:07:31.725093 waagent[1854]: 2025-09-12T17:07:31.725030Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 12 17:07:31.732147 waagent[1854]: 2025-09-12T17:07:31.732090Z INFO Daemon Daemon Primary interface is [eth0] Sep 12 17:07:31.745016 systemd-networkd[1553]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:07:31.745025 systemd-networkd[1553]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:07:31.745054 systemd-networkd[1553]: eth0: DHCP lease lost Sep 12 17:07:31.746215 waagent[1854]: 2025-09-12T17:07:31.746130Z INFO Daemon Daemon Create user account if not exists Sep 12 17:07:31.752376 waagent[1854]: 2025-09-12T17:07:31.752309Z INFO Daemon Daemon User core already exists, skip useradd Sep 12 17:07:31.758581 waagent[1854]: 2025-09-12T17:07:31.758508Z INFO Daemon Daemon Configure sudoer Sep 12 17:07:31.763596 waagent[1854]: 2025-09-12T17:07:31.763530Z INFO Daemon Daemon Configure sshd Sep 12 17:07:31.768465 waagent[1854]: 2025-09-12T17:07:31.768404Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 12 17:07:31.782399 waagent[1854]: 2025-09-12T17:07:31.782027Z INFO Daemon Daemon Deploy ssh public key. Sep 12 17:07:31.796774 systemd-networkd[1553]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 12 17:07:32.981893 waagent[1854]: 2025-09-12T17:07:32.981836Z INFO Daemon Daemon Provisioning complete Sep 12 17:07:33.000763 waagent[1854]: 2025-09-12T17:07:33.000690Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 12 17:07:33.008115 waagent[1854]: 2025-09-12T17:07:33.008050Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Sep 12 17:07:33.018132 waagent[1854]: 2025-09-12T17:07:33.018073Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Sep 12 17:07:33.158043 waagent[1930]: 2025-09-12T17:07:33.157956Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Sep 12 17:07:33.158367 waagent[1930]: 2025-09-12T17:07:33.158114Z INFO ExtHandler ExtHandler OS: flatcar 4230.2.3 Sep 12 17:07:33.158367 waagent[1930]: 2025-09-12T17:07:33.158167Z INFO ExtHandler ExtHandler Python: 3.11.11 Sep 12 17:07:33.421477 waagent[1930]: 2025-09-12T17:07:33.421375Z INFO ExtHandler ExtHandler Distro: flatcar-4230.2.3; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 12 17:07:33.421675 waagent[1930]: 2025-09-12T17:07:33.421634Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 17:07:33.421761 waagent[1930]: 2025-09-12T17:07:33.421726Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 17:07:33.430516 waagent[1930]: 2025-09-12T17:07:33.430439Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 12 17:07:33.437011 waagent[1930]: 2025-09-12T17:07:33.436966Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 12 17:07:33.437570 waagent[1930]: 2025-09-12T17:07:33.437527Z INFO ExtHandler Sep 12 17:07:33.437642 waagent[1930]: 2025-09-12T17:07:33.437613Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 380f082f-5efa-4370-8a85-a50e397e69db eTag: 4148465225420366375 source: Fabric] Sep 12 17:07:33.437962 waagent[1930]: 2025-09-12T17:07:33.437921Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Sep 12 17:07:33.438540 waagent[1930]: 2025-09-12T17:07:33.438494Z INFO ExtHandler Sep 12 17:07:33.438601 waagent[1930]: 2025-09-12T17:07:33.438573Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 12 17:07:33.443162 waagent[1930]: 2025-09-12T17:07:33.443125Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 12 17:07:33.513217 waagent[1930]: 2025-09-12T17:07:33.513116Z INFO ExtHandler Downloaded certificate {'thumbprint': '87A2832D1440D14312213F32869FCAB104B24144', 'hasPrivateKey': True} Sep 12 17:07:33.513778 waagent[1930]: 2025-09-12T17:07:33.513721Z INFO ExtHandler Fetch goal state completed Sep 12 17:07:33.534243 waagent[1930]: 2025-09-12T17:07:33.534181Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1930 Sep 12 17:07:33.534404 waagent[1930]: 2025-09-12T17:07:33.534370Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 12 17:07:33.536051 waagent[1930]: 2025-09-12T17:07:33.536002Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.2.3', '', 'Flatcar Container Linux by Kinvolk'] Sep 12 17:07:33.536414 waagent[1930]: 2025-09-12T17:07:33.536378Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 12 17:07:33.605503 waagent[1930]: 2025-09-12T17:07:33.605454Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 12 17:07:33.605754 waagent[1930]: 2025-09-12T17:07:33.605686Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 12 17:07:33.611737 waagent[1930]: 2025-09-12T17:07:33.611624Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Sep 12 17:07:33.618224 systemd[1]: Reload requested from client PID 1943 ('systemctl') (unit waagent.service)... Sep 12 17:07:33.618476 systemd[1]: Reloading... Sep 12 17:07:33.713726 zram_generator::config[1982]: No configuration found. Sep 12 17:07:33.829828 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:07:33.931950 systemd[1]: Reloading finished in 313 ms. Sep 12 17:07:33.944643 waagent[1930]: 2025-09-12T17:07:33.944270Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Sep 12 17:07:33.950780 systemd[1]: Reload requested from client PID 2036 ('systemctl') (unit waagent.service)... Sep 12 17:07:33.950795 systemd[1]: Reloading... Sep 12 17:07:34.048781 zram_generator::config[2079]: No configuration found. Sep 12 17:07:34.154721 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:07:34.254603 systemd[1]: Reloading finished in 303 ms. Sep 12 17:07:34.273144 waagent[1930]: 2025-09-12T17:07:34.272261Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 12 17:07:34.273144 waagent[1930]: 2025-09-12T17:07:34.272444Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 12 17:07:34.729741 waagent[1930]: 2025-09-12T17:07:34.729212Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 12 17:07:34.729941 waagent[1930]: 2025-09-12T17:07:34.729868Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Sep 12 17:07:34.730826 waagent[1930]: 2025-09-12T17:07:34.730720Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 12 17:07:34.730908 waagent[1930]: 2025-09-12T17:07:34.730869Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 17:07:34.731300 waagent[1930]: 2025-09-12T17:07:34.731069Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 17:07:34.731510 waagent[1930]: 2025-09-12T17:07:34.731451Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 12 17:07:34.731800 waagent[1930]: 2025-09-12T17:07:34.731692Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Sep 12 17:07:34.732262 waagent[1930]: 2025-09-12T17:07:34.732207Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 12 17:07:34.732564 waagent[1930]: 2025-09-12T17:07:34.732518Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 12 17:07:34.732564 waagent[1930]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 12 17:07:34.732564 waagent[1930]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 12 17:07:34.732564 waagent[1930]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 12 17:07:34.732564 waagent[1930]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 12 17:07:34.732564 waagent[1930]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 12 17:07:34.732564 waagent[1930]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 12 17:07:34.733159 waagent[1930]: 2025-09-12T17:07:34.733077Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 12 17:07:34.733233 waagent[1930]: 2025-09-12T17:07:34.733183Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 17:07:34.733333 waagent[1930]: 2025-09-12T17:07:34.733259Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 17:07:34.733505 waagent[1930]: 2025-09-12T17:07:34.733437Z INFO EnvHandler ExtHandler Configure routes Sep 12 17:07:34.733550 waagent[1930]: 2025-09-12T17:07:34.733530Z INFO EnvHandler ExtHandler Gateway:None Sep 12 17:07:34.733733 waagent[1930]: 2025-09-12T17:07:34.733576Z INFO EnvHandler ExtHandler Routes:None Sep 12 17:07:34.734075 waagent[1930]: 2025-09-12T17:07:34.734001Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 12 17:07:34.734226 waagent[1930]: 2025-09-12T17:07:34.734159Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 12 17:07:34.735235 waagent[1930]: 2025-09-12T17:07:34.735180Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 12 17:07:34.741484 waagent[1930]: 2025-09-12T17:07:34.741413Z INFO ExtHandler ExtHandler Sep 12 17:07:34.742096 waagent[1930]: 2025-09-12T17:07:34.742034Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 01ece3c1-f446-45a5-a78f-997666b328d6 correlation e209f5e7-5449-4318-990e-c9dab8d1ed38 created: 2025-09-12T17:06:16.689827Z] Sep 12 17:07:34.744202 waagent[1930]: 2025-09-12T17:07:34.742759Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Sep 12 17:07:34.744202 waagent[1930]: 2025-09-12T17:07:34.743341Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Sep 12 17:07:34.784430 waagent[1930]: 2025-09-12T17:07:34.784349Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F344E1AB-08B0-4D6A-AB50-BE89838E7DA7;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Sep 12 17:07:34.824886 waagent[1930]: 2025-09-12T17:07:34.824800Z INFO MonitorHandler ExtHandler Network interfaces: Sep 12 17:07:34.824886 waagent[1930]: Executing ['ip', '-a', '-o', 'link']: Sep 12 17:07:34.824886 waagent[1930]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 12 17:07:34.824886 waagent[1930]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b8:56:1c brd ff:ff:ff:ff:ff:ff Sep 12 17:07:34.824886 waagent[1930]: 3: enP37721s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b8:56:1c brd ff:ff:ff:ff:ff:ff\ altname enP37721p0s2 Sep 12 17:07:34.824886 waagent[1930]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 12 17:07:34.824886 waagent[1930]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 12 17:07:34.824886 waagent[1930]: 2: eth0 inet 10.200.20.12/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 12 17:07:34.824886 waagent[1930]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 12 17:07:34.824886 waagent[1930]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 12 17:07:34.824886 waagent[1930]: 2: eth0 inet6 fe80::222:48ff:feb8:561c/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 12 17:07:34.873740 waagent[1930]: 2025-09-12T17:07:34.872838Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Sep 12 17:07:34.873740 waagent[1930]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:07:34.873740 waagent[1930]: pkts bytes target prot opt in out source destination Sep 12 17:07:34.873740 waagent[1930]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:07:34.873740 waagent[1930]: pkts bytes target prot opt in out source destination Sep 12 17:07:34.873740 waagent[1930]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:07:34.873740 waagent[1930]: pkts bytes target prot opt in out source destination Sep 12 17:07:34.873740 waagent[1930]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 12 17:07:34.873740 waagent[1930]: 3 164 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 12 17:07:34.873740 waagent[1930]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 12 17:07:34.876260 waagent[1930]: 2025-09-12T17:07:34.876177Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 12 17:07:34.876260 waagent[1930]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:07:34.876260 waagent[1930]: pkts bytes target prot opt in out source destination Sep 12 17:07:34.876260 waagent[1930]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:07:34.876260 waagent[1930]: pkts bytes target prot opt in out source destination Sep 12 17:07:34.876260 waagent[1930]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:07:34.876260 waagent[1930]: pkts bytes target prot opt in out source destination Sep 12 17:07:34.876260 waagent[1930]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 12 17:07:34.876260 waagent[1930]: 3 164 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 12 17:07:34.876260 waagent[1930]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 12 17:07:34.876513 waagent[1930]: 2025-09-12T17:07:34.876485Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 12 17:07:39.281508 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:07:39.289897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:07:39.395346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:07:39.399505 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:07:39.539099 kubelet[2168]: E0912 17:07:39.538965 2168 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:07:39.542424 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:07:39.542573 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:07:39.543098 systemd[1]: kubelet.service: Consumed 129ms CPU time, 107.7M memory peak. Sep 12 17:07:49.781695 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:07:49.789925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:07:49.888797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:07:49.893158 (kubelet)[2183]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:07:49.985115 kubelet[2183]: E0912 17:07:49.985050 2183 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:07:49.987943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:07:49.988224 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:07:49.988813 systemd[1]: kubelet.service: Consumed 138ms CPU time, 107M memory peak. Sep 12 17:07:51.183530 chronyd[1690]: Selected source PHC0 Sep 12 17:08:00.031636 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 12 17:08:00.040156 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:08:00.152508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:08:00.157134 (kubelet)[2198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:08:00.284801 kubelet[2198]: E0912 17:08:00.284430 2198 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:08:00.286888 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:08:00.287046 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:08:00.287814 systemd[1]: kubelet.service: Consumed 140ms CPU time, 109.3M memory peak. Sep 12 17:08:03.791927 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:08:03.803056 systemd[1]: Started sshd@0-10.200.20.12:22-10.200.16.10:47654.service - OpenSSH per-connection server daemon (10.200.16.10:47654). Sep 12 17:08:04.418868 sshd[2205]: Accepted publickey for core from 10.200.16.10 port 47654 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:08:04.420293 sshd-session[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:04.425199 systemd-logind[1697]: New session 3 of user core. Sep 12 17:08:04.433956 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:08:04.810514 systemd[1]: Started sshd@1-10.200.20.12:22-10.200.16.10:47668.service - OpenSSH per-connection server daemon (10.200.16.10:47668). Sep 12 17:08:05.267001 sshd[2210]: Accepted publickey for core from 10.200.16.10 port 47668 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:08:05.268879 sshd-session[2210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:05.273407 systemd-logind[1697]: New session 4 of user core. Sep 12 17:08:05.283959 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:08:05.680396 systemd[1]: Started sshd@2-10.200.20.12:22-10.200.16.10:47670.service - OpenSSH per-connection server daemon (10.200.16.10:47670). 
Sep 12 17:08:05.964394 sshd[2212]: Connection closed by 10.200.16.10 port 47668 Sep 12 17:08:05.964229 sshd-session[2210]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:05.968156 systemd[1]: sshd@1-10.200.20.12:22-10.200.16.10:47668.service: Deactivated successfully. Sep 12 17:08:05.969897 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:08:05.971289 systemd-logind[1697]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:08:05.972762 systemd-logind[1697]: Removed session 4. Sep 12 17:08:06.102373 sshd[2215]: Accepted publickey for core from 10.200.16.10 port 47670 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:08:06.103853 sshd-session[2215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:06.108723 systemd-logind[1697]: New session 5 of user core. Sep 12 17:08:06.114043 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:08:06.420374 sshd[2220]: Connection closed by 10.200.16.10 port 47670 Sep 12 17:08:06.421305 sshd-session[2215]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:06.425652 systemd-logind[1697]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:08:06.426103 systemd[1]: sshd@2-10.200.20.12:22-10.200.16.10:47670.service: Deactivated successfully. Sep 12 17:08:06.428355 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:08:06.431363 systemd-logind[1697]: Removed session 5. Sep 12 17:08:06.513220 systemd[1]: Started sshd@3-10.200.20.12:22-10.200.16.10:47682.service - OpenSSH per-connection server daemon (10.200.16.10:47682). Sep 12 17:08:06.970025 sshd[2226]: Accepted publickey for core from 10.200.16.10 port 47682 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:08:06.971302 sshd-session[2226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:06.975638 systemd-logind[1697]: New session 6 of user core. Sep 12 17:08:06.983860 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:08:07.299127 sshd[2228]: Connection closed by 10.200.16.10 port 47682 Sep 12 17:08:07.298210 sshd-session[2226]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:07.301228 systemd[1]: sshd@3-10.200.20.12:22-10.200.16.10:47682.service: Deactivated successfully. Sep 12 17:08:07.302914 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:08:07.304796 systemd-logind[1697]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:08:07.306214 systemd-logind[1697]: Removed session 6. Sep 12 17:08:07.374803 systemd[1]: Started sshd@4-10.200.20.12:22-10.200.16.10:47698.service - OpenSSH per-connection server daemon (10.200.16.10:47698). Sep 12 17:08:07.793068 sshd[2234]: Accepted publickey for core from 10.200.16.10 port 47698 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:08:07.794355 sshd-session[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:07.799272 systemd-logind[1697]: New session 7 of user core. Sep 12 17:08:07.804896 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 12 17:08:08.208936 sudo[2237]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:08:08.209224 sudo[2237]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:08:08.244740 sudo[2237]: pam_unix(sudo:session): session closed for user root Sep 12 17:08:08.317203 sshd[2236]: Connection closed by 10.200.16.10 port 47698 Sep 12 17:08:08.316293 sshd-session[2234]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:08.320264 systemd[1]: sshd@4-10.200.20.12:22-10.200.16.10:47698.service: Deactivated successfully. Sep 12 17:08:08.323147 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:08:08.323916 systemd-logind[1697]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:08:08.324865 systemd-logind[1697]: Removed session 7. Sep 12 17:08:08.392308 systemd[1]: Started sshd@5-10.200.20.12:22-10.200.16.10:47706.service - OpenSSH per-connection server daemon (10.200.16.10:47706). Sep 12 17:08:08.814608 sshd[2243]: Accepted publickey for core from 10.200.16.10 port 47706 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:08:08.816001 sshd-session[2243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:08.821693 systemd-logind[1697]: New session 8 of user core. Sep 12 17:08:08.828971 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:08:09.052418 sudo[2247]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:08:09.053068 sudo[2247]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:08:09.056682 sudo[2247]: pam_unix(sudo:session): session closed for user root Sep 12 17:08:09.061594 sudo[2246]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 17:08:09.062264 sudo[2246]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:08:09.081017 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:08:09.104624 augenrules[2269]: No rules Sep 12 17:08:09.106170 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:08:09.106378 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:08:09.107901 sudo[2246]: pam_unix(sudo:session): session closed for user root Sep 12 17:08:09.173877 sshd[2245]: Connection closed by 10.200.16.10 port 47706 Sep 12 17:08:09.174650 sshd-session[2243]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:09.177913 systemd[1]: sshd@5-10.200.20.12:22-10.200.16.10:47706.service: Deactivated successfully. Sep 12 17:08:09.180461 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:08:09.182674 systemd-logind[1697]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:08:09.184041 systemd-logind[1697]: Removed session 8. Sep 12 17:08:09.250604 systemd[1]: Started sshd@6-10.200.20.12:22-10.200.16.10:47718.service - OpenSSH per-connection server daemon (10.200.16.10:47718). Sep 12 17:08:09.668720 sshd[2278]: Accepted publickey for core from 10.200.16.10 port 47718 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:08:09.670025 sshd-session[2278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:09.675762 systemd-logind[1697]: New session 9 of user core. Sep 12 17:08:09.680902 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 12 17:08:09.894107 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Sep 12 17:08:09.906158 sudo[2281]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:08:09.906481 sudo[2281]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:08:10.531406 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 12 17:08:10.537900 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:08:10.681729 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:08:10.686298 (kubelet)[2301]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:08:10.724736 kubelet[2301]: E0912 17:08:10.724655 2301 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:08:10.726893 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:08:10.727049 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:08:10.727852 systemd[1]: kubelet.service: Consumed 134ms CPU time, 105.1M memory peak. Sep 12 17:08:12.025005 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:08:12.025929 (dockerd)[2313]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:08:13.146227 dockerd[2313]: time="2025-09-12T17:08:13.146157751Z" level=info msg="Starting up" Sep 12 17:08:13.220668 update_engine[1700]: I20250912 17:08:13.220016 1700 update_attempter.cc:509] Updating boot flags... Sep 12 17:08:13.301817 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2337) Sep 12 17:08:13.514448 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2333291965-merged.mount: Deactivated successfully. Sep 12 17:08:13.547952 dockerd[2313]: time="2025-09-12T17:08:13.547693898Z" level=info msg="Loading containers: start." Sep 12 17:08:13.905761 kernel: Initializing XFRM netlink socket Sep 12 17:08:14.095334 systemd-networkd[1553]: docker0: Link UP Sep 12 17:08:14.132102 dockerd[2313]: time="2025-09-12T17:08:14.132051085Z" level=info msg="Loading containers: done." Sep 12 17:08:14.144523 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck988413894-merged.mount: Deactivated successfully. Sep 12 17:08:14.155626 dockerd[2313]: time="2025-09-12T17:08:14.155572761Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:08:14.156107 dockerd[2313]: time="2025-09-12T17:08:14.155735042Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 12 17:08:14.156107 dockerd[2313]: time="2025-09-12T17:08:14.155881842Z" level=info msg="Daemon has completed initialization" Sep 12 17:08:14.219718 dockerd[2313]: time="2025-09-12T17:08:14.219246180Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:08:14.219477 systemd[1]: Started docker.service - Docker Application Container Engine. 
Sep 12 17:08:15.338065 containerd[1719]: time="2025-09-12T17:08:15.338009513Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 12 17:08:16.255391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2141953138.mount: Deactivated successfully. Sep 12 17:08:17.685027 containerd[1719]: time="2025-09-12T17:08:17.684963030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:17.688406 containerd[1719]: time="2025-09-12T17:08:17.688339836Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363685" Sep 12 17:08:17.692876 containerd[1719]: time="2025-09-12T17:08:17.692791284Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:17.699056 containerd[1719]: time="2025-09-12T17:08:17.698973495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:17.700775 containerd[1719]: time="2025-09-12T17:08:17.700193937Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 2.362130384s" Sep 12 17:08:17.700775 containerd[1719]: time="2025-09-12T17:08:17.700242217Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Sep 12 17:08:17.702490 containerd[1719]: time="2025-09-12T17:08:17.701941780Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 12 17:08:19.138766 containerd[1719]: time="2025-09-12T17:08:19.137967677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:19.143734 containerd[1719]: time="2025-09-12T17:08:19.143633048Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531200" Sep 12 17:08:19.151028 containerd[1719]: time="2025-09-12T17:08:19.150940221Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:19.157976 containerd[1719]: time="2025-09-12T17:08:19.157890314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:19.159362 containerd[1719]: time="2025-09-12T17:08:19.159300356Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.457312375s" Sep 12 
17:08:19.159852 containerd[1719]: time="2025-09-12T17:08:19.159556877Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Sep 12 17:08:19.160550 containerd[1719]: time="2025-09-12T17:08:19.160281478Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 12 17:08:20.542498 containerd[1719]: time="2025-09-12T17:08:20.542427836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:20.546388 containerd[1719]: time="2025-09-12T17:08:20.546324963Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484324" Sep 12 17:08:20.551377 containerd[1719]: time="2025-09-12T17:08:20.551299813Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:20.560732 containerd[1719]: time="2025-09-12T17:08:20.559524588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:20.561365 containerd[1719]: time="2025-09-12T17:08:20.561323991Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.400814912s" Sep 12 17:08:20.561365 containerd[1719]: time="2025-09-12T17:08:20.561367151Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Sep 12 17:08:20.562654 containerd[1719]: time="2025-09-12T17:08:20.562624193Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 12 17:08:20.781493 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 12 17:08:20.789991 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:08:20.902267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:08:20.907644 (kubelet)[2631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:08:20.948284 kubelet[2631]: E0912 17:08:20.948199 2631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:08:20.950909 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:08:20.951073 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:08:20.951957 systemd[1]: kubelet.service: Consumed 145ms CPU time, 105M memory peak. Sep 12 17:08:22.444167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3957224316.mount: Deactivated successfully. 
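The containerd pull messages above report both the image size and the wall-clock duration, so the effective pull throughput can be read straight off the log. A small sketch of that arithmetic for the three pulls completed so far; the sizes and durations are copied verbatim from the entries above.

    # Effective throughput of the image pulls logged above (size in bytes and
    # duration in seconds copied from the containerd "Pulled image" messages).
    pulls = {
        "kube-apiserver:v1.32.9":          (26_360_284, 2.362130384),
        "kube-controller-manager:v1.32.9": (24_099_975, 1.457312375),
        "kube-scheduler:v1.32.9":          (19_053_117, 1.400814912),
    }

    for image, (size_bytes, seconds) in pulls.items():
        mib_per_s = size_bytes / seconds / (1024 * 1024)
        print(f"{image}: {mib_per_s:.1f} MiB/s")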
Sep 12 17:08:22.798868 containerd[1719]: time="2025-09-12T17:08:22.797935669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:22.800915 containerd[1719]: time="2025-09-12T17:08:22.800846034Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417817" Sep 12 17:08:22.804629 containerd[1719]: time="2025-09-12T17:08:22.804550200Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:22.809639 containerd[1719]: time="2025-09-12T17:08:22.809514368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:22.810498 containerd[1719]: time="2025-09-12T17:08:22.810302369Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 2.247376855s" Sep 12 17:08:22.810498 containerd[1719]: time="2025-09-12T17:08:22.810353089Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Sep 12 17:08:22.811207 containerd[1719]: time="2025-09-12T17:08:22.811169530Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:08:23.546489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1640569950.mount: Deactivated successfully. 
Sep 12 17:08:25.318167 containerd[1719]: time="2025-09-12T17:08:25.318105262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:25.321174 containerd[1719]: time="2025-09-12T17:08:25.321106587Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Sep 12 17:08:25.325051 containerd[1719]: time="2025-09-12T17:08:25.324966912Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:25.331485 containerd[1719]: time="2025-09-12T17:08:25.331410601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:25.333019 containerd[1719]: time="2025-09-12T17:08:25.332821644Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.521600873s" Sep 12 17:08:25.333019 containerd[1719]: time="2025-09-12T17:08:25.332872604Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 12 17:08:25.334039 containerd[1719]: time="2025-09-12T17:08:25.333637205Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:08:26.000540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4057967515.mount: Deactivated successfully. 
Sep 12 17:08:26.029293 containerd[1719]: time="2025-09-12T17:08:26.029242291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:26.032764 containerd[1719]: time="2025-09-12T17:08:26.032671976Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 12 17:08:26.037745 containerd[1719]: time="2025-09-12T17:08:26.036819222Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:26.043681 containerd[1719]: time="2025-09-12T17:08:26.043626232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:26.045696 containerd[1719]: time="2025-09-12T17:08:26.045646555Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 711.96303ms" Sep 12 17:08:26.045945 containerd[1719]: time="2025-09-12T17:08:26.045926316Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 17:08:26.046566 containerd[1719]: time="2025-09-12T17:08:26.046531876Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 12 17:08:26.768739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2864616641.mount: Deactivated successfully. Sep 12 17:08:30.273949 containerd[1719]: time="2025-09-12T17:08:30.273846114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:30.278769 containerd[1719]: time="2025-09-12T17:08:30.278287681Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Sep 12 17:08:30.283744 containerd[1719]: time="2025-09-12T17:08:30.282460967Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:30.289055 containerd[1719]: time="2025-09-12T17:08:30.288994216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:30.290472 containerd[1719]: time="2025-09-12T17:08:30.290421818Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.243637021s" Sep 12 17:08:30.290642 containerd[1719]: time="2025-09-12T17:08:30.290623299Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 12 17:08:30.969112 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
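The three "Scheduled restart job" entries so far (restart counters 4, 5 and 6, at 17:08:10.531, 17:08:20.781 and 17:08:30.969) are spaced just over ten seconds apart, which is consistent with a fixed restart delay of roughly 10 s in the kubelet unit; the exact RestartSec value is an assumption, while the timestamps below are copied from the log.

    # Spacing of the kubelet restart attempts seen above (timestamps copied
    # from the "Scheduled restart job" entries; the date is Sep 12 2025).
    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    restarts = [
        datetime.strptime("2025-09-12 17:08:10.531406", fmt),  # counter 4
        datetime.strptime("2025-09-12 17:08:20.781493", fmt),  # counter 5
        datetime.strptime("2025-09-12 17:08:30.969112", fmt),  # counter 6
    ]

    for earlier, later in zip(restarts, restarts[1:]):
        print(f"{(later - earlier).total_seconds():.1f} s between attempts")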
Sep 12 17:08:30.978085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:08:31.111151 (kubelet)[2784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:08:31.111897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:08:31.154816 kubelet[2784]: E0912 17:08:31.154765 2784 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:08:31.158267 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:08:31.158549 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:08:31.160829 systemd[1]: kubelet.service: Consumed 144ms CPU time, 110.2M memory peak. Sep 12 17:08:35.387970 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:08:35.388156 systemd[1]: kubelet.service: Consumed 144ms CPU time, 110.2M memory peak. Sep 12 17:08:35.395054 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:08:35.429415 systemd[1]: Reload requested from client PID 2798 ('systemctl') (unit session-9.scope)... Sep 12 17:08:35.429617 systemd[1]: Reloading... Sep 12 17:08:35.587004 zram_generator::config[2851]: No configuration found. Sep 12 17:08:35.704597 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:08:35.813733 systemd[1]: Reloading finished in 383 ms. Sep 12 17:08:35.862638 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:08:35.872160 (kubelet)[2902]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:08:35.874748 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:08:35.876274 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:08:35.876605 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:08:35.876674 systemd[1]: kubelet.service: Consumed 112ms CPU time, 94.9M memory peak. Sep 12 17:08:35.882106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:08:36.010627 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:08:36.021280 (kubelet)[2915]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:08:36.138753 kubelet[2915]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:08:36.138753 kubelet[2915]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:08:36.138753 kubelet[2915]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:08:36.138753 kubelet[2915]: I0912 17:08:36.137966 2915 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:08:36.940570 kubelet[2915]: I0912 17:08:36.940519 2915 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 17:08:36.940782 kubelet[2915]: I0912 17:08:36.940770 2915 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:08:36.941168 kubelet[2915]: I0912 17:08:36.941149 2915 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 17:08:36.967998 kubelet[2915]: E0912 17:08:36.967940 2915 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:08:36.970735 kubelet[2915]: I0912 17:08:36.970144 2915 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:08:36.979223 kubelet[2915]: E0912 17:08:36.979174 2915 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:08:36.979223 kubelet[2915]: I0912 17:08:36.979218 2915 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:08:36.982778 kubelet[2915]: I0912 17:08:36.982744 2915 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:08:36.983907 kubelet[2915]: I0912 17:08:36.983857 2915 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:08:36.984169 kubelet[2915]: I0912 17:08:36.983912 2915 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.3-a-bc327f6988","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:08:36.984280 kubelet[2915]: I0912 17:08:36.984184 2915 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:08:36.984280 kubelet[2915]: I0912 17:08:36.984198 2915 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 17:08:36.984391 kubelet[2915]: I0912 17:08:36.984367 2915 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:08:36.987877 kubelet[2915]: I0912 17:08:36.987845 2915 kubelet.go:446] "Attempting to sync node with API server" Sep 12 17:08:36.988018 kubelet[2915]: I0912 17:08:36.987997 2915 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:08:36.988058 kubelet[2915]: I0912 17:08:36.988036 2915 kubelet.go:352] "Adding apiserver pod source" Sep 12 17:08:36.988058 kubelet[2915]: I0912 17:08:36.988057 2915 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:08:36.989302 kubelet[2915]: W0912 17:08:36.989239 2915 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.3-a-bc327f6988&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 12 17:08:36.989346 kubelet[2915]: E0912 17:08:36.989321 2915 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.3-a-bc327f6988&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:08:36.993201 
kubelet[2915]: W0912 17:08:36.993110 2915 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 12 17:08:36.993201 kubelet[2915]: E0912 17:08:36.993174 2915 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:08:36.993760 kubelet[2915]: I0912 17:08:36.993732 2915 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 17:08:36.994270 kubelet[2915]: I0912 17:08:36.994245 2915 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:08:36.994342 kubelet[2915]: W0912 17:08:36.994325 2915 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:08:36.996691 kubelet[2915]: I0912 17:08:36.996435 2915 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:08:36.996691 kubelet[2915]: I0912 17:08:36.996490 2915 server.go:1287] "Started kubelet" Sep 12 17:08:37.001460 kubelet[2915]: I0912 17:08:37.001427 2915 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:08:37.003812 kubelet[2915]: I0912 17:08:37.003550 2915 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:08:37.005803 kubelet[2915]: I0912 17:08:37.004980 2915 server.go:479] "Adding debug handlers to kubelet server" Sep 12 17:08:37.007983 kubelet[2915]: I0912 17:08:37.007872 2915 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:08:37.008246 kubelet[2915]: I0912 17:08:37.008218 2915 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:08:37.008538 kubelet[2915]: I0912 17:08:37.008506 2915 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:08:37.009548 kubelet[2915]: E0912 17:08:37.001400 2915 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.3-a-bc327f6988.18649808885fa4d4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.3-a-bc327f6988,UID:ci-4230.2.3-a-bc327f6988,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.3-a-bc327f6988,},FirstTimestamp:2025-09-12 17:08:36.996465876 +0000 UTC m=+0.970815703,LastTimestamp:2025-09-12 17:08:36.996465876 +0000 UTC m=+0.970815703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.3-a-bc327f6988,}" Sep 12 17:08:37.014921 kubelet[2915]: I0912 17:08:37.012913 2915 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:08:37.017938 kubelet[2915]: I0912 17:08:37.012943 
2915 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:08:37.017938 kubelet[2915]: E0912 17:08:37.013232 2915 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-bc327f6988\" not found" Sep 12 17:08:37.017938 kubelet[2915]: I0912 17:08:37.017205 2915 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:08:37.017938 kubelet[2915]: W0912 17:08:37.017369 2915 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 12 17:08:37.017938 kubelet[2915]: E0912 17:08:37.017438 2915 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:08:37.017938 kubelet[2915]: E0912 17:08:37.017519 2915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.3-a-bc327f6988?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="200ms" Sep 12 17:08:37.020166 kubelet[2915]: I0912 17:08:37.020124 2915 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:08:37.024077 kubelet[2915]: I0912 17:08:37.024018 2915 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:08:37.024077 kubelet[2915]: I0912 17:08:37.024055 2915 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:08:37.024984 kubelet[2915]: E0912 17:08:37.024954 2915 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:08:37.046263 kubelet[2915]: I0912 17:08:37.046222 2915 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:08:37.046263 kubelet[2915]: I0912 17:08:37.046246 2915 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:08:37.046263 kubelet[2915]: I0912 17:08:37.046275 2915 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:08:37.054007 kubelet[2915]: I0912 17:08:37.053967 2915 policy_none.go:49] "None policy: Start" Sep 12 17:08:37.054007 kubelet[2915]: I0912 17:08:37.054001 2915 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:08:37.054007 kubelet[2915]: I0912 17:08:37.054020 2915 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:08:37.067178 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:08:37.085171 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:08:37.085515 kubelet[2915]: I0912 17:08:37.085386 2915 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:08:37.087888 kubelet[2915]: I0912 17:08:37.087229 2915 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:08:37.087888 kubelet[2915]: I0912 17:08:37.087270 2915 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 17:08:37.087888 kubelet[2915]: I0912 17:08:37.087304 2915 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:08:37.087888 kubelet[2915]: I0912 17:08:37.087313 2915 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 17:08:37.087888 kubelet[2915]: E0912 17:08:37.087368 2915 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:08:37.091469 kubelet[2915]: W0912 17:08:37.091142 2915 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 12 17:08:37.091469 kubelet[2915]: E0912 17:08:37.091222 2915 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:08:37.095919 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:08:37.105552 kubelet[2915]: I0912 17:08:37.104808 2915 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:08:37.105552 kubelet[2915]: I0912 17:08:37.105081 2915 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:08:37.105552 kubelet[2915]: I0912 17:08:37.105097 2915 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:08:37.105552 kubelet[2915]: I0912 17:08:37.105441 2915 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:08:37.108143 kubelet[2915]: E0912 17:08:37.107946 2915 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:08:37.108143 kubelet[2915]: E0912 17:08:37.108011 2915 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.3-a-bc327f6988\" not found" Sep 12 17:08:37.203002 systemd[1]: Created slice kubepods-burstable-pod59d1bbb9ca622cae3eceeb36f06d8a0a.slice - libcontainer container kubepods-burstable-pod59d1bbb9ca622cae3eceeb36f06d8a0a.slice. 
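The long container-manager dump above ("Creating Container Manager object based on Node Config") carries the hard eviction thresholds this kubelet will enforce. Restated as a plain mapping for readability; the values are taken directly from that log entry.

    # Hard eviction thresholds from the NodeConfig dump above
    # (signal -> threshold), as the kubelet will enforce them.
    eviction_hard = {
        "memory.available":   "100Mi",
        "nodefs.available":   "10%",
        "nodefs.inodesFree":  "5%",
        "imagefs.available":  "15%",
        "imagefs.inodesFree": "5%",
    }

    for signal, threshold in eviction_hard.items():
        print(f"evict pods when {signal} < {threshold}")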
Sep 12 17:08:37.208545 kubelet[2915]: I0912 17:08:37.208477 2915 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.3-a-bc327f6988" Sep 12 17:08:37.209253 kubelet[2915]: E0912 17:08:37.209207 2915 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4230.2.3-a-bc327f6988" Sep 12 17:08:37.212155 kubelet[2915]: E0912 17:08:37.211889 2915 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.3-a-bc327f6988\" not found" node="ci-4230.2.3-a-bc327f6988" Sep 12 17:08:37.214212 systemd[1]: Created slice kubepods-burstable-pod3c15a1b78ed24b0f0721a9971683b9aa.slice - libcontainer container kubepods-burstable-pod3c15a1b78ed24b0f0721a9971683b9aa.slice. Sep 12 17:08:37.218143 kubelet[2915]: E0912 17:08:37.218098 2915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.3-a-bc327f6988?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="400ms" Sep 12 17:08:37.220564 kubelet[2915]: E0912 17:08:37.220523 2915 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.3-a-bc327f6988\" not found" node="ci-4230.2.3-a-bc327f6988" Sep 12 17:08:37.224842 systemd[1]: Created slice kubepods-burstable-podad00abafd8ca1cace5d0c5a35fd7c75f.slice - libcontainer container kubepods-burstable-podad00abafd8ca1cace5d0c5a35fd7c75f.slice. Sep 12 17:08:37.227532 kubelet[2915]: E0912 17:08:37.227278 2915 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.3-a-bc327f6988\" not found" node="ci-4230.2.3-a-bc327f6988" Sep 12 17:08:37.318684 kubelet[2915]: I0912 17:08:37.318626 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/59d1bbb9ca622cae3eceeb36f06d8a0a-k8s-certs\") pod \"kube-apiserver-ci-4230.2.3-a-bc327f6988\" (UID: \"59d1bbb9ca622cae3eceeb36f06d8a0a\") " pod="kube-system/kube-apiserver-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:37.318684 kubelet[2915]: I0912 17:08:37.318680 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c15a1b78ed24b0f0721a9971683b9aa-ca-certs\") pod \"kube-controller-manager-ci-4230.2.3-a-bc327f6988\" (UID: \"3c15a1b78ed24b0f0721a9971683b9aa\") " pod="kube-system/kube-controller-manager-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:37.318684 kubelet[2915]: I0912 17:08:37.318721 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3c15a1b78ed24b0f0721a9971683b9aa-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.3-a-bc327f6988\" (UID: \"3c15a1b78ed24b0f0721a9971683b9aa\") " pod="kube-system/kube-controller-manager-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:37.318946 kubelet[2915]: I0912 17:08:37.318741 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3c15a1b78ed24b0f0721a9971683b9aa-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.3-a-bc327f6988\" (UID: \"3c15a1b78ed24b0f0721a9971683b9aa\") " 
pod="kube-system/kube-controller-manager-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:37.318946 kubelet[2915]: I0912 17:08:37.318763 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/59d1bbb9ca622cae3eceeb36f06d8a0a-ca-certs\") pod \"kube-apiserver-ci-4230.2.3-a-bc327f6988\" (UID: \"59d1bbb9ca622cae3eceeb36f06d8a0a\") " pod="kube-system/kube-apiserver-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:37.318946 kubelet[2915]: I0912 17:08:37.318782 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/59d1bbb9ca622cae3eceeb36f06d8a0a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.3-a-bc327f6988\" (UID: \"59d1bbb9ca622cae3eceeb36f06d8a0a\") " pod="kube-system/kube-apiserver-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:37.318946 kubelet[2915]: I0912 17:08:37.318798 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c15a1b78ed24b0f0721a9971683b9aa-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.3-a-bc327f6988\" (UID: \"3c15a1b78ed24b0f0721a9971683b9aa\") " pod="kube-system/kube-controller-manager-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:37.318946 kubelet[2915]: I0912 17:08:37.318815 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c15a1b78ed24b0f0721a9971683b9aa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.3-a-bc327f6988\" (UID: \"3c15a1b78ed24b0f0721a9971683b9aa\") " pod="kube-system/kube-controller-manager-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:37.319056 kubelet[2915]: I0912 17:08:37.318834 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad00abafd8ca1cace5d0c5a35fd7c75f-kubeconfig\") pod \"kube-scheduler-ci-4230.2.3-a-bc327f6988\" (UID: \"ad00abafd8ca1cace5d0c5a35fd7c75f\") " pod="kube-system/kube-scheduler-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:37.411845 kubelet[2915]: I0912 17:08:37.411808 2915 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.3-a-bc327f6988" Sep 12 17:08:37.412294 kubelet[2915]: E0912 17:08:37.412259 2915 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4230.2.3-a-bc327f6988" Sep 12 17:08:37.514286 containerd[1719]: time="2025-09-12T17:08:37.513778188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.3-a-bc327f6988,Uid:59d1bbb9ca622cae3eceeb36f06d8a0a,Namespace:kube-system,Attempt:0,}" Sep 12 17:08:37.522281 containerd[1719]: time="2025-09-12T17:08:37.522224919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.3-a-bc327f6988,Uid:3c15a1b78ed24b0f0721a9971683b9aa,Namespace:kube-system,Attempt:0,}" Sep 12 17:08:37.528643 containerd[1719]: time="2025-09-12T17:08:37.528593128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.3-a-bc327f6988,Uid:ad00abafd8ca1cace5d0c5a35fd7c75f,Namespace:kube-system,Attempt:0,}" Sep 12 17:08:37.619185 kubelet[2915]: E0912 17:08:37.619108 2915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.3-a-bc327f6988?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="800ms" Sep 12 17:08:37.815239 kubelet[2915]: I0912 17:08:37.814745 2915 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.3-a-bc327f6988" Sep 12 17:08:37.815239 kubelet[2915]: E0912 17:08:37.815184 2915 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4230.2.3-a-bc327f6988" Sep 12 17:08:37.985816 kubelet[2915]: W0912 17:08:37.985775 2915 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 12 17:08:37.985977 kubelet[2915]: E0912 17:08:37.985828 2915 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:08:38.171212 kubelet[2915]: W0912 17:08:38.171065 2915 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 12 17:08:38.171595 kubelet[2915]: E0912 17:08:38.171548 2915 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:08:38.198882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1844124230.mount: Deactivated successfully. 
Sep 12 17:08:38.238739 containerd[1719]: time="2025-09-12T17:08:38.237749689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:08:38.240552 containerd[1719]: time="2025-09-12T17:08:38.240473693Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 12 17:08:38.251414 containerd[1719]: time="2025-09-12T17:08:38.251334387Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:08:38.259347 containerd[1719]: time="2025-09-12T17:08:38.259276437Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:08:38.262821 containerd[1719]: time="2025-09-12T17:08:38.262763882Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:08:38.275206 containerd[1719]: time="2025-09-12T17:08:38.275132578Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:08:38.279882 containerd[1719]: time="2025-09-12T17:08:38.279801144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:08:38.281386 containerd[1719]: time="2025-09-12T17:08:38.280776305Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 766.901916ms" Sep 12 17:08:38.283391 containerd[1719]: time="2025-09-12T17:08:38.283333269Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:08:38.293566 containerd[1719]: time="2025-09-12T17:08:38.293503762Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 764.818834ms" Sep 12 17:08:38.326688 kubelet[2915]: W0912 17:08:38.326559 2915 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.3-a-bc327f6988&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 12 17:08:38.327126 kubelet[2915]: E0912 17:08:38.326695 2915 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.3-a-bc327f6988&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 
12 17:08:38.337917 containerd[1719]: time="2025-09-12T17:08:38.337853099Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 815.528539ms" Sep 12 17:08:38.346260 kubelet[2915]: W0912 17:08:38.345272 2915 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 12 17:08:38.346260 kubelet[2915]: E0912 17:08:38.345360 2915 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:08:38.420686 kubelet[2915]: E0912 17:08:38.420611 2915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.3-a-bc327f6988?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="1.6s" Sep 12 17:08:38.618612 kubelet[2915]: I0912 17:08:38.617926 2915 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.3-a-bc327f6988" Sep 12 17:08:38.618612 kubelet[2915]: E0912 17:08:38.618399 2915 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4230.2.3-a-bc327f6988" Sep 12 17:08:39.095073 kubelet[2915]: E0912 17:08:39.094975 2915 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:08:39.584885 containerd[1719]: time="2025-09-12T17:08:39.584746680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:08:39.584885 containerd[1719]: time="2025-09-12T17:08:39.584819840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:08:39.585355 containerd[1719]: time="2025-09-12T17:08:39.584837280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:08:39.585644 containerd[1719]: time="2025-09-12T17:08:39.584726880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:08:39.585844 containerd[1719]: time="2025-09-12T17:08:39.585788442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:08:39.586065 containerd[1719]: time="2025-09-12T17:08:39.586024322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:08:39.587176 containerd[1719]: time="2025-09-12T17:08:39.586490283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:08:39.587176 containerd[1719]: time="2025-09-12T17:08:39.586560083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:08:39.587176 containerd[1719]: time="2025-09-12T17:08:39.586573763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:08:39.587176 containerd[1719]: time="2025-09-12T17:08:39.586675123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:08:39.587694 containerd[1719]: time="2025-09-12T17:08:39.587615284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:08:39.588023 containerd[1719]: time="2025-09-12T17:08:39.587504004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:08:39.633809 systemd[1]: run-containerd-runc-k8s.io-9faf6a111b8f43c009d5edb6f0a0b1c6798a51f1f89903019ebd561dd705315b-runc.TIilaN.mount: Deactivated successfully. Sep 12 17:08:39.646996 systemd[1]: Started cri-containerd-9faf6a111b8f43c009d5edb6f0a0b1c6798a51f1f89903019ebd561dd705315b.scope - libcontainer container 9faf6a111b8f43c009d5edb6f0a0b1c6798a51f1f89903019ebd561dd705315b. Sep 12 17:08:39.654672 systemd[1]: Started cri-containerd-13ebafb9883c7808ae36ec37dcc3fbd4eed220ee0d280a52c690a9c848ea56de.scope - libcontainer container 13ebafb9883c7808ae36ec37dcc3fbd4eed220ee0d280a52c690a9c848ea56de. Sep 12 17:08:39.656044 systemd[1]: Started cri-containerd-e8dba5eab5fee5bce09cecd87869b3a92eb979cd2bf1e8f061b7b21759733f2f.scope - libcontainer container e8dba5eab5fee5bce09cecd87869b3a92eb979cd2bf1e8f061b7b21759733f2f. 
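Every "connection refused" in this stretch points at the same endpoint, https://10.200.20.12:6443, which stays unreachable until the static kube-apiserver pod being assembled here is actually running. A minimal reachability probe of the kind one might run while watching this, purely illustrative: the address and port come from the log, /healthz is the API server's standard health endpoint (it may answer 401 rather than 200 if anonymous access is disabled), and certificate verification is skipped only because this sketch has no cluster CA bundle to hand.

    # Illustrative probe of the API server endpoint the kubelet keeps failing
    # to reach (address and port taken from the log above).
    import ssl
    import time
    import urllib.error
    import urllib.request

    ENDPOINT = "https://10.200.20.12:6443/healthz"

    ctx = ssl.create_default_context()
    ctx.check_hostname = False      # sketch only: no cluster CA bundle available
    ctx.verify_mode = ssl.CERT_NONE

    for attempt in range(5):
        try:
            with urllib.request.urlopen(ENDPOINT, context=ctx, timeout=2) as resp:
                print(f"attempt {attempt}: HTTP {resp.status}")
                break
        except (urllib.error.URLError, OSError) as exc:
            print(f"attempt {attempt}: {exc}")   # e.g. "connection refused"
            time.sleep(2)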
Sep 12 17:08:39.728665 containerd[1719]: time="2025-09-12T17:08:39.728416374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.3-a-bc327f6988,Uid:3c15a1b78ed24b0f0721a9971683b9aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"13ebafb9883c7808ae36ec37dcc3fbd4eed220ee0d280a52c690a9c848ea56de\"" Sep 12 17:08:39.730259 containerd[1719]: time="2025-09-12T17:08:39.730122657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.3-a-bc327f6988,Uid:59d1bbb9ca622cae3eceeb36f06d8a0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9faf6a111b8f43c009d5edb6f0a0b1c6798a51f1f89903019ebd561dd705315b\"" Sep 12 17:08:39.740137 containerd[1719]: time="2025-09-12T17:08:39.740078152Z" level=info msg="CreateContainer within sandbox \"9faf6a111b8f43c009d5edb6f0a0b1c6798a51f1f89903019ebd561dd705315b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:08:39.740460 containerd[1719]: time="2025-09-12T17:08:39.740296992Z" level=info msg="CreateContainer within sandbox \"13ebafb9883c7808ae36ec37dcc3fbd4eed220ee0d280a52c690a9c848ea56de\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:08:39.742760 containerd[1719]: time="2025-09-12T17:08:39.742307035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.3-a-bc327f6988,Uid:ad00abafd8ca1cace5d0c5a35fd7c75f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8dba5eab5fee5bce09cecd87869b3a92eb979cd2bf1e8f061b7b21759733f2f\"" Sep 12 17:08:39.747067 containerd[1719]: time="2025-09-12T17:08:39.746999642Z" level=info msg="CreateContainer within sandbox \"e8dba5eab5fee5bce09cecd87869b3a92eb979cd2bf1e8f061b7b21759733f2f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:08:39.846717 containerd[1719]: time="2025-09-12T17:08:39.845887549Z" level=info msg="CreateContainer within sandbox \"13ebafb9883c7808ae36ec37dcc3fbd4eed220ee0d280a52c690a9c848ea56de\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a12d88b0d36cb987464b56086ea39c91d504aa3daeeef80911798a4069d8468f\"" Sep 12 17:08:39.847494 containerd[1719]: time="2025-09-12T17:08:39.847415032Z" level=info msg="StartContainer for \"a12d88b0d36cb987464b56086ea39c91d504aa3daeeef80911798a4069d8468f\"" Sep 12 17:08:39.869833 containerd[1719]: time="2025-09-12T17:08:39.869781465Z" level=info msg="CreateContainer within sandbox \"e8dba5eab5fee5bce09cecd87869b3a92eb979cd2bf1e8f061b7b21759733f2f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f9f28d6998fd2684bd84c437f94f9bf381e5c94c84fecbff2384123220a6393a\"" Sep 12 17:08:39.871432 containerd[1719]: time="2025-09-12T17:08:39.870765226Z" level=info msg="StartContainer for \"f9f28d6998fd2684bd84c437f94f9bf381e5c94c84fecbff2384123220a6393a\"" Sep 12 17:08:39.878863 containerd[1719]: time="2025-09-12T17:08:39.878374318Z" level=info msg="CreateContainer within sandbox \"9faf6a111b8f43c009d5edb6f0a0b1c6798a51f1f89903019ebd561dd705315b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"47802c014b5371432ec25dc3cd75b2c2a86c8072c86e163aa4b146b98a1c85eb\"" Sep 12 17:08:39.879062 systemd[1]: Started cri-containerd-a12d88b0d36cb987464b56086ea39c91d504aa3daeeef80911798a4069d8468f.scope - libcontainer container a12d88b0d36cb987464b56086ea39c91d504aa3daeeef80911798a4069d8468f. 
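The RunPodSandbox and CreateContainer messages above tie each control-plane static pod to a sandbox ID and a container ID. Collected into one mapping as a reading aid, with the IDs copied from the log and truncated to twelve characters; on the node itself the same association could be listed with the CRI tooling, e.g. crictl pods and crictl ps.

    # Sandbox and container IDs for the static control-plane pods, copied from
    # the containerd messages above and truncated for brevity.
    static_pods = {
        "kube-apiserver":          {"sandbox": "9faf6a111b8f", "container": "47802c014b53"},
        "kube-controller-manager": {"sandbox": "13ebafb9883c", "container": "a12d88b0d36c"},
        "kube-scheduler":          {"sandbox": "e8dba5eab5fe", "container": "f9f28d6998fd"},
    }

    for pod, ids in static_pods.items():
        print(f"{pod}: sandbox {ids['sandbox']}..., container {ids['container']}...")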
Sep 12 17:08:39.879680 containerd[1719]: time="2025-09-12T17:08:39.879639920Z" level=info msg="StartContainer for \"47802c014b5371432ec25dc3cd75b2c2a86c8072c86e163aa4b146b98a1c85eb\"" Sep 12 17:08:39.928972 systemd[1]: Started cri-containerd-f9f28d6998fd2684bd84c437f94f9bf381e5c94c84fecbff2384123220a6393a.scope - libcontainer container f9f28d6998fd2684bd84c437f94f9bf381e5c94c84fecbff2384123220a6393a. Sep 12 17:08:39.949091 containerd[1719]: time="2025-09-12T17:08:39.949023583Z" level=info msg="StartContainer for \"a12d88b0d36cb987464b56086ea39c91d504aa3daeeef80911798a4069d8468f\" returns successfully" Sep 12 17:08:39.951154 systemd[1]: Started cri-containerd-47802c014b5371432ec25dc3cd75b2c2a86c8072c86e163aa4b146b98a1c85eb.scope - libcontainer container 47802c014b5371432ec25dc3cd75b2c2a86c8072c86e163aa4b146b98a1c85eb. Sep 12 17:08:40.001480 kubelet[2915]: W0912 17:08:40.001187 2915 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 12 17:08:40.001480 kubelet[2915]: E0912 17:08:40.001276 2915 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:08:40.022109 kubelet[2915]: E0912 17:08:40.021559 2915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.3-a-bc327f6988?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="3.2s" Sep 12 17:08:40.025724 containerd[1719]: time="2025-09-12T17:08:40.025288097Z" level=info msg="StartContainer for \"47802c014b5371432ec25dc3cd75b2c2a86c8072c86e163aa4b146b98a1c85eb\" returns successfully" Sep 12 17:08:40.047911 containerd[1719]: time="2025-09-12T17:08:40.047852851Z" level=info msg="StartContainer for \"f9f28d6998fd2684bd84c437f94f9bf381e5c94c84fecbff2384123220a6393a\" returns successfully" Sep 12 17:08:40.107331 kubelet[2915]: E0912 17:08:40.107167 2915 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.3-a-bc327f6988\" not found" node="ci-4230.2.3-a-bc327f6988" Sep 12 17:08:40.114281 kubelet[2915]: E0912 17:08:40.113980 2915 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.3-a-bc327f6988\" not found" node="ci-4230.2.3-a-bc327f6988" Sep 12 17:08:40.124191 kubelet[2915]: E0912 17:08:40.124139 2915 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.3-a-bc327f6988\" not found" node="ci-4230.2.3-a-bc327f6988" Sep 12 17:08:40.222383 kubelet[2915]: I0912 17:08:40.222327 2915 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.3-a-bc327f6988" Sep 12 17:08:41.124760 kubelet[2915]: E0912 17:08:41.124686 2915 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.3-a-bc327f6988\" not found" node="ci-4230.2.3-a-bc327f6988" Sep 12 17:08:41.125270 kubelet[2915]: E0912 17:08:41.125154 2915 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info 
from the cluster" err="node \"ci-4230.2.3-a-bc327f6988\" not found" node="ci-4230.2.3-a-bc327f6988" Sep 12 17:08:43.375605 kubelet[2915]: E0912 17:08:43.375546 2915 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.3-a-bc327f6988\" not found" node="ci-4230.2.3-a-bc327f6988" Sep 12 17:08:43.421305 kubelet[2915]: I0912 17:08:43.420875 2915 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.3-a-bc327f6988" Sep 12 17:08:43.421305 kubelet[2915]: I0912 17:08:43.421083 2915 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:43.471333 kubelet[2915]: E0912 17:08:43.470997 2915 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230.2.3-a-bc327f6988.18649808885fa4d4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.3-a-bc327f6988,UID:ci-4230.2.3-a-bc327f6988,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.3-a-bc327f6988,},FirstTimestamp:2025-09-12 17:08:36.996465876 +0000 UTC m=+0.970815703,LastTimestamp:2025-09-12 17:08:36.996465876 +0000 UTC m=+0.970815703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.3-a-bc327f6988,}" Sep 12 17:08:43.578548 kubelet[2915]: E0912 17:08:43.578243 2915 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230.2.3-a-bc327f6988.186498088a11f4b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.3-a-bc327f6988,UID:ci-4230.2.3-a-bc327f6988,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4230.2.3-a-bc327f6988,},FirstTimestamp:2025-09-12 17:08:37.024928953 +0000 UTC m=+0.999278820,LastTimestamp:2025-09-12 17:08:37.024928953 +0000 UTC m=+0.999278820,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.3-a-bc327f6988,}" Sep 12 17:08:43.586101 kubelet[2915]: E0912 17:08:43.585823 2915 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.3-a-bc327f6988\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:43.586101 kubelet[2915]: I0912 17:08:43.585876 2915 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:43.589153 kubelet[2915]: E0912 17:08:43.588852 2915 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.3-a-bc327f6988\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:43.589153 kubelet[2915]: I0912 17:08:43.588890 2915 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:43.592034 kubelet[2915]: E0912 17:08:43.591983 2915 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.3-a-bc327f6988\" 
is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:43.993210 kubelet[2915]: I0912 17:08:43.993165 2915 apiserver.go:52] "Watching apiserver" Sep 12 17:08:44.017423 kubelet[2915]: I0912 17:08:44.017358 2915 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:08:46.003415 kubelet[2915]: I0912 17:08:46.003025 2915 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:46.022761 kubelet[2915]: W0912 17:08:46.021171 2915 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:08:46.776253 kubelet[2915]: I0912 17:08:46.775887 2915 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:46.787358 kubelet[2915]: W0912 17:08:46.787192 2915 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:08:47.062624 systemd[1]: Reload requested from client PID 3193 ('systemctl') (unit session-9.scope)... Sep 12 17:08:47.062646 systemd[1]: Reloading... Sep 12 17:08:47.168991 kubelet[2915]: I0912 17:08:47.168892 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.3-a-bc327f6988" podStartSLOduration=1.168869787 podStartE2EDuration="1.168869787s" podCreationTimestamp="2025-09-12 17:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:08:47.142075468 +0000 UTC m=+11.116425335" watchObservedRunningTime="2025-09-12 17:08:47.168869787 +0000 UTC m=+11.143219654" Sep 12 17:08:47.243752 zram_generator::config[3255]: No configuration found. Sep 12 17:08:47.356037 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:08:47.481781 systemd[1]: Reloading finished in 418 ms. Sep 12 17:08:47.513561 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:08:47.514308 kubelet[2915]: I0912 17:08:47.513898 2915 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:08:47.525639 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:08:47.526064 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:08:47.526136 systemd[1]: kubelet.service: Consumed 1.433s CPU time, 128M memory peak. Sep 12 17:08:47.532172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:08:47.682082 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:08:47.687378 (kubelet)[3304]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:08:47.748738 kubelet[3304]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
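Annotation: the "Failed creating a mirror pod … no PriorityClass with name system-node-critical was found" errors above appear to be a bootstrap race: the kubelet tries to publish mirror pods for its static control-plane pods before the API server has finished creating the built-in priority classes (the same mirror pods are created successfully a few seconds later in this journal). A minimal sketch, assuming the official `kubernetes` Python client, admin credentials, and the conventional value 2000001000 for `system-node-critical`, of how one might confirm the class exists; creating it by hand is only an illustration, since kube-apiserver normally recreates it itself.

    # Hypothetical check for the PriorityClass the kubelet complained about.
    # Assumes the official `kubernetes` Python client and an admin kubeconfig.
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()          # or config.load_incluster_config()
    sched = client.SchedulingV1Api()

    try:
        pc = sched.read_priority_class("system-node-critical")
        print(f"present: {pc.metadata.name} value={pc.value}")
    except ApiException as e:
        if e.status != 404:
            raise
        # 2000001000 is the value Kubernetes documents for this built-in class;
        # treat it as an assumption here, not something taken from this journal.
        body = client.V1PriorityClass(
            metadata=client.V1ObjectMeta(name="system-node-critical"),
            value=2000001000,
            global_default=False,
            description="Used for system critical pods that must not be moved from their current node.",
        )
        sched.create_priority_class(body)
        print("created system-node-critical")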
Sep 12 17:08:47.748738 kubelet[3304]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:08:47.748738 kubelet[3304]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:08:47.748738 kubelet[3304]: I0912 17:08:47.748015 3304 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:08:47.755037 kubelet[3304]: I0912 17:08:47.754990 3304 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 17:08:47.755037 kubelet[3304]: I0912 17:08:47.755028 3304 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:08:47.755377 kubelet[3304]: I0912 17:08:47.755353 3304 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 17:08:47.758748 kubelet[3304]: I0912 17:08:47.757926 3304 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:08:47.761580 kubelet[3304]: I0912 17:08:47.761540 3304 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:08:47.767033 kubelet[3304]: E0912 17:08:47.766984 3304 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:08:47.767232 kubelet[3304]: I0912 17:08:47.767217 3304 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:08:47.771633 kubelet[3304]: I0912 17:08:47.771597 3304 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:08:47.772107 kubelet[3304]: I0912 17:08:47.772069 3304 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:08:47.772401 kubelet[3304]: I0912 17:08:47.772191 3304 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.3-a-bc327f6988","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:08:47.772551 kubelet[3304]: I0912 17:08:47.772536 3304 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:08:47.772601 kubelet[3304]: I0912 17:08:47.772594 3304 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 17:08:47.772736 kubelet[3304]: I0912 17:08:47.772725 3304 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:08:47.773008 kubelet[3304]: I0912 17:08:47.772993 3304 kubelet.go:446] "Attempting to sync node with API server" Sep 12 17:08:47.773130 kubelet[3304]: I0912 17:08:47.773118 3304 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:08:47.773236 kubelet[3304]: I0912 17:08:47.773228 3304 kubelet.go:352] "Adding apiserver pod source" Sep 12 17:08:47.773314 kubelet[3304]: I0912 17:08:47.773290 3304 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:08:47.778647 kubelet[3304]: I0912 17:08:47.777940 3304 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 17:08:47.778647 kubelet[3304]: I0912 17:08:47.778597 3304 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:08:47.782539 kubelet[3304]: I0912 17:08:47.781141 3304 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:08:47.782539 kubelet[3304]: I0912 17:08:47.781196 3304 server.go:1287] "Started kubelet" Sep 12 17:08:47.785846 kubelet[3304]: I0912 17:08:47.785764 3304 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:08:47.789205 kubelet[3304]: I0912 17:08:47.789169 3304 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:08:47.803891 kubelet[3304]: I0912 17:08:47.803847 3304 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:08:47.804974 kubelet[3304]: I0912 17:08:47.804935 3304 server.go:479] "Adding debug handlers to kubelet server" Sep 12 17:08:47.806746 kubelet[3304]: I0912 17:08:47.805481 3304 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:08:47.806746 kubelet[3304]: E0912 17:08:47.805850 3304 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-bc327f6988\" not found" Sep 12 17:08:47.809523 kubelet[3304]: I0912 17:08:47.809489 3304 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:08:47.809959 kubelet[3304]: I0912 17:08:47.809941 3304 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:08:47.816241 kubelet[3304]: I0912 17:08:47.816168 3304 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:08:47.824322 kubelet[3304]: I0912 17:08:47.823926 3304 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:08:47.824322 kubelet[3304]: I0912 17:08:47.824272 3304 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:08:47.831974 kubelet[3304]: I0912 17:08:47.831935 3304 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:08:47.837043 kubelet[3304]: I0912 17:08:47.836129 3304 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 17:08:47.837043 kubelet[3304]: I0912 17:08:47.836177 3304 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:08:47.837043 kubelet[3304]: I0912 17:08:47.836186 3304 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 17:08:47.837043 kubelet[3304]: E0912 17:08:47.836264 3304 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:08:47.837806 kubelet[3304]: I0912 17:08:47.833037 3304 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:08:47.839237 kubelet[3304]: I0912 17:08:47.838088 3304 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:08:47.849244 kubelet[3304]: E0912 17:08:47.849203 3304 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:08:47.850183 kubelet[3304]: I0912 17:08:47.850149 3304 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:08:47.914727 kubelet[3304]: I0912 17:08:47.914666 3304 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:08:47.914727 kubelet[3304]: I0912 17:08:47.914693 3304 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:08:47.914916 kubelet[3304]: I0912 17:08:47.914753 3304 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:08:47.915006 kubelet[3304]: I0912 17:08:47.914977 3304 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:08:47.915038 kubelet[3304]: I0912 17:08:47.915000 3304 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:08:47.915038 kubelet[3304]: I0912 17:08:47.915024 3304 policy_none.go:49] "None policy: Start" Sep 12 17:08:47.915038 kubelet[3304]: I0912 17:08:47.915034 3304 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:08:47.915106 kubelet[3304]: I0912 17:08:47.915044 3304 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:08:47.915176 kubelet[3304]: I0912 17:08:47.915157 3304 state_mem.go:75] "Updated machine memory state" Sep 12 17:08:47.921287 kubelet[3304]: I0912 17:08:47.921231 3304 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:08:47.921498 kubelet[3304]: I0912 17:08:47.921476 3304 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:08:47.921546 kubelet[3304]: I0912 17:08:47.921497 3304 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:08:47.923146 kubelet[3304]: I0912 17:08:47.923119 3304 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:08:47.924767 kubelet[3304]: E0912 17:08:47.924741 3304 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 17:08:47.940409 kubelet[3304]: I0912 17:08:47.939204 3304 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:47.941017 kubelet[3304]: I0912 17:08:47.940924 3304 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:47.942771 kubelet[3304]: I0912 17:08:47.942695 3304 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:47.960608 kubelet[3304]: W0912 17:08:47.960401 3304 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:08:47.960608 kubelet[3304]: E0912 17:08:47.960518 3304 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.3-a-bc327f6988\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:47.960944 kubelet[3304]: W0912 17:08:47.960784 3304 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:08:47.962605 kubelet[3304]: W0912 17:08:47.962565 3304 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:08:47.962695 kubelet[3304]: E0912 17:08:47.962662 3304 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.3-a-bc327f6988\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:48.036791 kubelet[3304]: I0912 17:08:48.036756 3304 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.3-a-bc327f6988" Sep 12 17:08:48.053904 kubelet[3304]: I0912 17:08:48.053167 3304 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.3-a-bc327f6988" Sep 12 17:08:48.053904 kubelet[3304]: I0912 17:08:48.053278 3304 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.3-a-bc327f6988" Sep 12 17:08:48.111690 kubelet[3304]: I0912 17:08:48.111633 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/59d1bbb9ca622cae3eceeb36f06d8a0a-ca-certs\") pod \"kube-apiserver-ci-4230.2.3-a-bc327f6988\" (UID: \"59d1bbb9ca622cae3eceeb36f06d8a0a\") " pod="kube-system/kube-apiserver-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:48.111900 kubelet[3304]: I0912 17:08:48.111689 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/59d1bbb9ca622cae3eceeb36f06d8a0a-k8s-certs\") pod \"kube-apiserver-ci-4230.2.3-a-bc327f6988\" (UID: \"59d1bbb9ca622cae3eceeb36f06d8a0a\") " pod="kube-system/kube-apiserver-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:48.111900 kubelet[3304]: I0912 17:08:48.111774 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/59d1bbb9ca622cae3eceeb36f06d8a0a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.3-a-bc327f6988\" (UID: \"59d1bbb9ca622cae3eceeb36f06d8a0a\") " pod="kube-system/kube-apiserver-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:48.111900 kubelet[3304]: I0912 17:08:48.111799 3304 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3c15a1b78ed24b0f0721a9971683b9aa-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.3-a-bc327f6988\" (UID: \"3c15a1b78ed24b0f0721a9971683b9aa\") " pod="kube-system/kube-controller-manager-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:48.111900 kubelet[3304]: I0912 17:08:48.111820 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c15a1b78ed24b0f0721a9971683b9aa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.3-a-bc327f6988\" (UID: \"3c15a1b78ed24b0f0721a9971683b9aa\") " pod="kube-system/kube-controller-manager-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:48.111900 kubelet[3304]: I0912 17:08:48.111842 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad00abafd8ca1cace5d0c5a35fd7c75f-kubeconfig\") pod \"kube-scheduler-ci-4230.2.3-a-bc327f6988\" (UID: \"ad00abafd8ca1cace5d0c5a35fd7c75f\") " pod="kube-system/kube-scheduler-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:48.112087 kubelet[3304]: I0912 17:08:48.111858 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c15a1b78ed24b0f0721a9971683b9aa-ca-certs\") pod \"kube-controller-manager-ci-4230.2.3-a-bc327f6988\" (UID: \"3c15a1b78ed24b0f0721a9971683b9aa\") " pod="kube-system/kube-controller-manager-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:48.112087 kubelet[3304]: I0912 17:08:48.111878 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c15a1b78ed24b0f0721a9971683b9aa-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.3-a-bc327f6988\" (UID: \"3c15a1b78ed24b0f0721a9971683b9aa\") " pod="kube-system/kube-controller-manager-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:48.112087 kubelet[3304]: I0912 17:08:48.111894 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3c15a1b78ed24b0f0721a9971683b9aa-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.3-a-bc327f6988\" (UID: \"3c15a1b78ed24b0f0721a9971683b9aa\") " pod="kube-system/kube-controller-manager-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:48.182116 sudo[3336]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 17:08:48.182435 sudo[3336]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 17:08:48.678833 sudo[3336]: pam_unix(sudo:session): session closed for user root Sep 12 17:08:48.775041 kubelet[3304]: I0912 17:08:48.774681 3304 apiserver.go:52] "Watching apiserver" Sep 12 17:08:48.810250 kubelet[3304]: I0912 17:08:48.809826 3304 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:08:48.884646 kubelet[3304]: I0912 17:08:48.883545 3304 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:48.912848 kubelet[3304]: W0912 17:08:48.911742 3304 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:08:48.912848 
kubelet[3304]: E0912 17:08:48.911931 3304 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.3-a-bc327f6988\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.3-a-bc327f6988" Sep 12 17:08:48.966739 kubelet[3304]: I0912 17:08:48.965978 3304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.3-a-bc327f6988" podStartSLOduration=1.965953853 podStartE2EDuration="1.965953853s" podCreationTimestamp="2025-09-12 17:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:08:48.946404622 +0000 UTC m=+1.251183235" watchObservedRunningTime="2025-09-12 17:08:48.965953853 +0000 UTC m=+1.270732426" Sep 12 17:08:50.424316 sudo[2281]: pam_unix(sudo:session): session closed for user root Sep 12 17:08:50.497359 sshd[2280]: Connection closed by 10.200.16.10 port 47718 Sep 12 17:08:50.498097 sshd-session[2278]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:50.502037 systemd[1]: sshd@6-10.200.20.12:22-10.200.16.10:47718.service: Deactivated successfully. Sep 12 17:08:50.507740 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:08:50.508816 systemd[1]: session-9.scope: Consumed 6.818s CPU time, 260.8M memory peak. Sep 12 17:08:50.512693 systemd-logind[1697]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:08:50.513885 systemd-logind[1697]: Removed session 9. Sep 12 17:08:53.169618 kubelet[3304]: I0912 17:08:53.169570 3304 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:08:53.170857 containerd[1719]: time="2025-09-12T17:08:53.170691566Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:08:53.171196 kubelet[3304]: I0912 17:08:53.171007 3304 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:08:53.897924 systemd[1]: Created slice kubepods-besteffort-pod99c3bc8c_845e_4187_9877_65e126b79fa7.slice - libcontainer container kubepods-besteffort-pod99c3bc8c_845e_4187_9877_65e126b79fa7.slice. Sep 12 17:08:53.922801 systemd[1]: Created slice kubepods-burstable-pod3d8069e0_24d9_439b_8e7d_2007a826be4d.slice - libcontainer container kubepods-burstable-pod3d8069e0_24d9_439b_8e7d_2007a826be4d.slice. 
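Annotation: the "Created slice kubepods-besteffort-pod99c3bc8c_845e_4187_9877_65e126b79fa7.slice" and "kubepods-burstable-pod3d8069e0_24d9_439b_8e7d_2007a826be4d.slice" lines above show the systemd cgroup driver (CgroupDriver "systemd" in the node config earlier in this journal) deriving each pod's slice unit from its QoS class and UID, with the UID's dashes escaped to underscores. A small sketch of that naming rule as observed here; the rule is inferred from these lines, not quoted from kubelet source.

    # Derive the systemd slice unit name seen in the "Created slice kubepods-..." lines.
    # Inferred convention: kubepods-<qos>-pod<uid with "-" -> "_">.slice.
    def pod_slice_name(qos_class: str, pod_uid: str) -> str:
        qos = qos_class.lower()                  # "besteffort" or "burstable" in this log
        escaped_uid = pod_uid.replace("-", "_")  # "-" separates slice hierarchy levels, so UIDs are escaped
        return f"kubepods-{qos}-pod{escaped_uid}.slice"

    # Matches the kube-proxy and cilium pod UIDs that appear below:
    assert pod_slice_name("BestEffort", "99c3bc8c-845e-4187-9877-65e126b79fa7") == \
        "kubepods-besteffort-pod99c3bc8c_845e_4187_9877_65e126b79fa7.slice"
    assert pod_slice_name("Burstable", "3d8069e0-24d9-439b-8e7d-2007a826be4d") == \
        "kubepods-burstable-pod3d8069e0_24d9_439b_8e7d_2007a826be4d.slice"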
Sep 12 17:08:53.947309 kubelet[3304]: I0912 17:08:53.946577 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n96pm\" (UniqueName: \"kubernetes.io/projected/99c3bc8c-845e-4187-9877-65e126b79fa7-kube-api-access-n96pm\") pod \"kube-proxy-d9x47\" (UID: \"99c3bc8c-845e-4187-9877-65e126b79fa7\") " pod="kube-system/kube-proxy-d9x47" Sep 12 17:08:53.947309 kubelet[3304]: I0912 17:08:53.946622 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-host-proc-sys-net\") pod \"cilium-l2ctq\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " pod="kube-system/cilium-l2ctq" Sep 12 17:08:53.947309 kubelet[3304]: I0912 17:08:53.946641 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-cilium-run\") pod \"cilium-l2ctq\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " pod="kube-system/cilium-l2ctq" Sep 12 17:08:53.947309 kubelet[3304]: I0912 17:08:53.946658 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d8069e0-24d9-439b-8e7d-2007a826be4d-cilium-config-path\") pod \"cilium-l2ctq\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " pod="kube-system/cilium-l2ctq" Sep 12 17:08:53.947309 kubelet[3304]: I0912 17:08:53.946675 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-cilium-cgroup\") pod \"cilium-l2ctq\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " pod="kube-system/cilium-l2ctq" Sep 12 17:08:53.947564 kubelet[3304]: I0912 17:08:53.946690 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg65x\" (UniqueName: \"kubernetes.io/projected/3d8069e0-24d9-439b-8e7d-2007a826be4d-kube-api-access-pg65x\") pod \"cilium-l2ctq\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " pod="kube-system/cilium-l2ctq" Sep 12 17:08:53.947564 kubelet[3304]: I0912 17:08:53.946742 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-hostproc\") pod \"cilium-l2ctq\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " pod="kube-system/cilium-l2ctq" Sep 12 17:08:53.947564 kubelet[3304]: I0912 17:08:53.946756 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-lib-modules\") pod \"cilium-l2ctq\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " pod="kube-system/cilium-l2ctq" Sep 12 17:08:53.947564 kubelet[3304]: I0912 17:08:53.946774 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/99c3bc8c-845e-4187-9877-65e126b79fa7-kube-proxy\") pod \"kube-proxy-d9x47\" (UID: \"99c3bc8c-845e-4187-9877-65e126b79fa7\") " pod="kube-system/kube-proxy-d9x47" Sep 12 17:08:53.947564 kubelet[3304]: I0912 17:08:53.946790 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-xtables-lock\") pod \"cilium-l2ctq\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " pod="kube-system/cilium-l2ctq" Sep 12 17:08:53.947564 kubelet[3304]: I0912 17:08:53.946805 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-host-proc-sys-kernel\") pod \"cilium-l2ctq\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " pod="kube-system/cilium-l2ctq" Sep 12 17:08:53.947689 kubelet[3304]: I0912 17:08:53.946819 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d8069e0-24d9-439b-8e7d-2007a826be4d-hubble-tls\") pod \"cilium-l2ctq\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " pod="kube-system/cilium-l2ctq" Sep 12 17:08:53.947689 kubelet[3304]: I0912 17:08:53.946836 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-cni-path\") pod \"cilium-l2ctq\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " pod="kube-system/cilium-l2ctq" Sep 12 17:08:53.947689 kubelet[3304]: I0912 17:08:53.946852 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-etc-cni-netd\") pod \"cilium-l2ctq\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " pod="kube-system/cilium-l2ctq" Sep 12 17:08:53.947689 kubelet[3304]: I0912 17:08:53.946867 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99c3bc8c-845e-4187-9877-65e126b79fa7-lib-modules\") pod \"kube-proxy-d9x47\" (UID: \"99c3bc8c-845e-4187-9877-65e126b79fa7\") " pod="kube-system/kube-proxy-d9x47" Sep 12 17:08:53.947689 kubelet[3304]: I0912 17:08:53.946885 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d8069e0-24d9-439b-8e7d-2007a826be4d-clustermesh-secrets\") pod \"cilium-l2ctq\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " pod="kube-system/cilium-l2ctq" Sep 12 17:08:53.947689 kubelet[3304]: I0912 17:08:53.946904 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99c3bc8c-845e-4187-9877-65e126b79fa7-xtables-lock\") pod \"kube-proxy-d9x47\" (UID: \"99c3bc8c-845e-4187-9877-65e126b79fa7\") " pod="kube-system/kube-proxy-d9x47" Sep 12 17:08:53.947856 kubelet[3304]: I0912 17:08:53.946924 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-bpf-maps\") pod \"cilium-l2ctq\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " pod="kube-system/cilium-l2ctq" Sep 12 17:08:54.214122 containerd[1719]: time="2025-09-12T17:08:54.213965662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d9x47,Uid:99c3bc8c-845e-4187-9877-65e126b79fa7,Namespace:kube-system,Attempt:0,}" Sep 12 17:08:54.227978 containerd[1719]: time="2025-09-12T17:08:54.227891604Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-l2ctq,Uid:3d8069e0-24d9-439b-8e7d-2007a826be4d,Namespace:kube-system,Attempt:0,}" Sep 12 17:08:54.291867 systemd[1]: Created slice kubepods-besteffort-pod92031893_e486_48ff_a4d1_859e81208606.slice - libcontainer container kubepods-besteffort-pod92031893_e486_48ff_a4d1_859e81208606.slice. Sep 12 17:08:54.315434 containerd[1719]: time="2025-09-12T17:08:54.314477305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:08:54.315434 containerd[1719]: time="2025-09-12T17:08:54.314550545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:08:54.315434 containerd[1719]: time="2025-09-12T17:08:54.314567185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:08:54.315434 containerd[1719]: time="2025-09-12T17:08:54.314670385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:08:54.335823 containerd[1719]: time="2025-09-12T17:08:54.335687379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:08:54.335997 containerd[1719]: time="2025-09-12T17:08:54.335862020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:08:54.335997 containerd[1719]: time="2025-09-12T17:08:54.335893180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:08:54.336121 containerd[1719]: time="2025-09-12T17:08:54.336085100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:08:54.348862 systemd[1]: Started cri-containerd-088ab9c49266cf598fbbc9ff571227fba57cc6109bd73b54d7fcb8c2a0a53804.scope - libcontainer container 088ab9c49266cf598fbbc9ff571227fba57cc6109bd73b54d7fcb8c2a0a53804. Sep 12 17:08:54.350076 kubelet[3304]: I0912 17:08:54.349962 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92031893-e486-48ff-a4d1-859e81208606-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-7v48k\" (UID: \"92031893-e486-48ff-a4d1-859e81208606\") " pod="kube-system/cilium-operator-6c4d7847fc-7v48k" Sep 12 17:08:54.350076 kubelet[3304]: I0912 17:08:54.350012 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9glpn\" (UniqueName: \"kubernetes.io/projected/92031893-e486-48ff-a4d1-859e81208606-kube-api-access-9glpn\") pod \"cilium-operator-6c4d7847fc-7v48k\" (UID: \"92031893-e486-48ff-a4d1-859e81208606\") " pod="kube-system/cilium-operator-6c4d7847fc-7v48k" Sep 12 17:08:54.365921 systemd[1]: Started cri-containerd-709473a99082188aaec2a4e76fae5d41a1b35954cf4e07fbe6948b0bca52bc1c.scope - libcontainer container 709473a99082188aaec2a4e76fae5d41a1b35954cf4e07fbe6948b0bca52bc1c. 
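Annotation: the block of reconciler_common.go:251 records above enumerates every volume the kubelet attaches for the kube-proxy, cilium and cilium-operator pods (hostPath, configmap, secret and projected token volumes). A small parsing sketch that tabulates volume name, plugin "UniqueName" and owning pod from journal text in exactly the format shown here; the regex is written against these lines only, and it is an assumption that other kubelet versions format the fields identically.

    import re
    from collections import defaultdict

    # Pattern written against the reconciler_common.go lines reproduced above
    # (note the backslash-escaped quotes inside the klog message).
    PATTERN = re.compile(
        r'VerifyControllerAttachedVolume started for volume \\"(?P<volume>[^"\\]+)\\" '
        r'\(UniqueName: \\"(?P<unique>[^"\\]+)\\"\) '
        r'pod \\"(?P<pod>[^"\\]+)\\" \(UID: \\"(?P<uid>[^"\\]+)\\"\)'
    )

    def volumes_by_pod(journal_text: str) -> dict[str, list[tuple[str, str]]]:
        """Group (volume name, volume plugin UniqueName) pairs by pod name."""
        result = defaultdict(list)
        for m in PATTERN.finditer(journal_text):
            result[m.group("pod")].append((m.group("volume"), m.group("unique")))
        return result

    # volumes_by_pod(open("kubelet.journal").read()) would list, for example,
    # cilium-l2ctq's bpf-maps, cni-path and clustermesh-secrets volumes seen above.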
Sep 12 17:08:54.396684 containerd[1719]: time="2025-09-12T17:08:54.396385278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d9x47,Uid:99c3bc8c-845e-4187-9877-65e126b79fa7,Namespace:kube-system,Attempt:0,} returns sandbox id \"088ab9c49266cf598fbbc9ff571227fba57cc6109bd73b54d7fcb8c2a0a53804\"" Sep 12 17:08:54.404849 containerd[1719]: time="2025-09-12T17:08:54.404796732Z" level=info msg="CreateContainer within sandbox \"088ab9c49266cf598fbbc9ff571227fba57cc6109bd73b54d7fcb8c2a0a53804\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:08:54.406915 containerd[1719]: time="2025-09-12T17:08:54.406875335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l2ctq,Uid:3d8069e0-24d9-439b-8e7d-2007a826be4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"709473a99082188aaec2a4e76fae5d41a1b35954cf4e07fbe6948b0bca52bc1c\"" Sep 12 17:08:54.410821 containerd[1719]: time="2025-09-12T17:08:54.410364941Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 17:08:54.466920 containerd[1719]: time="2025-09-12T17:08:54.466766512Z" level=info msg="CreateContainer within sandbox \"088ab9c49266cf598fbbc9ff571227fba57cc6109bd73b54d7fcb8c2a0a53804\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2bd3a89549143dad231027a115136d2b50ffc9c1c9d4fc1a7be5e5aa3785e168\"" Sep 12 17:08:54.469748 containerd[1719]: time="2025-09-12T17:08:54.468317155Z" level=info msg="StartContainer for \"2bd3a89549143dad231027a115136d2b50ffc9c1c9d4fc1a7be5e5aa3785e168\"" Sep 12 17:08:54.496961 systemd[1]: Started cri-containerd-2bd3a89549143dad231027a115136d2b50ffc9c1c9d4fc1a7be5e5aa3785e168.scope - libcontainer container 2bd3a89549143dad231027a115136d2b50ffc9c1c9d4fc1a7be5e5aa3785e168. Sep 12 17:08:54.543938 containerd[1719]: time="2025-09-12T17:08:54.543869438Z" level=info msg="StartContainer for \"2bd3a89549143dad231027a115136d2b50ffc9c1c9d4fc1a7be5e5aa3785e168\" returns successfully" Sep 12 17:08:54.597570 containerd[1719]: time="2025-09-12T17:08:54.597520125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7v48k,Uid:92031893-e486-48ff-a4d1-859e81208606,Namespace:kube-system,Attempt:0,}" Sep 12 17:08:54.650976 containerd[1719]: time="2025-09-12T17:08:54.650576531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:08:54.651195 containerd[1719]: time="2025-09-12T17:08:54.650931292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:08:54.651195 containerd[1719]: time="2025-09-12T17:08:54.650973132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:08:54.651934 containerd[1719]: time="2025-09-12T17:08:54.651867333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:08:54.675988 systemd[1]: Started cri-containerd-63811f814d19f0304efa11bcf910a62ec4c0c7cd2a4281d82c53257d14445d9e.scope - libcontainer container 63811f814d19f0304efa11bcf910a62ec4c0c7cd2a4281d82c53257d14445d9e. 
Sep 12 17:08:54.724266 containerd[1719]: time="2025-09-12T17:08:54.723803450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7v48k,Uid:92031893-e486-48ff-a4d1-859e81208606,Namespace:kube-system,Attempt:0,} returns sandbox id \"63811f814d19f0304efa11bcf910a62ec4c0c7cd2a4281d82c53257d14445d9e\"" Sep 12 17:08:54.920574 kubelet[3304]: I0912 17:08:54.919156 3304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d9x47" podStartSLOduration=1.919136527 podStartE2EDuration="1.919136527s" podCreationTimestamp="2025-09-12 17:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:08:54.919121487 +0000 UTC m=+7.223900060" watchObservedRunningTime="2025-09-12 17:08:54.919136527 +0000 UTC m=+7.223915060" Sep 12 17:08:58.629910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2685935613.mount: Deactivated successfully. Sep 12 17:09:00.274365 containerd[1719]: time="2025-09-12T17:09:00.274303141Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:09:00.277853 containerd[1719]: time="2025-09-12T17:09:00.277794786Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 12 17:09:00.287477 containerd[1719]: time="2025-09-12T17:09:00.285394598Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:09:00.287477 containerd[1719]: time="2025-09-12T17:09:00.286899240Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.876482779s" Sep 12 17:09:00.287477 containerd[1719]: time="2025-09-12T17:09:00.286935680Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 12 17:09:00.291097 containerd[1719]: time="2025-09-12T17:09:00.291034487Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 17:09:00.293192 containerd[1719]: time="2025-09-12T17:09:00.293142610Z" level=info msg="CreateContainer within sandbox \"709473a99082188aaec2a4e76fae5d41a1b35954cf4e07fbe6948b0bca52bc1c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:09:00.336435 containerd[1719]: time="2025-09-12T17:09:00.336319796Z" level=info msg="CreateContainer within sandbox \"709473a99082188aaec2a4e76fae5d41a1b35954cf4e07fbe6948b0bca52bc1c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"49847f0324dbfbac99cd14b3616d21e318c9e27b7f8a0815d2d9e57d38808bb1\"" Sep 12 17:09:00.339337 containerd[1719]: time="2025-09-12T17:09:00.339177440Z" level=info msg="StartContainer for 
\"49847f0324dbfbac99cd14b3616d21e318c9e27b7f8a0815d2d9e57d38808bb1\"" Sep 12 17:09:00.384995 systemd[1]: Started cri-containerd-49847f0324dbfbac99cd14b3616d21e318c9e27b7f8a0815d2d9e57d38808bb1.scope - libcontainer container 49847f0324dbfbac99cd14b3616d21e318c9e27b7f8a0815d2d9e57d38808bb1. Sep 12 17:09:00.423996 systemd[1]: cri-containerd-49847f0324dbfbac99cd14b3616d21e318c9e27b7f8a0815d2d9e57d38808bb1.scope: Deactivated successfully. Sep 12 17:09:01.069358 containerd[1719]: time="2025-09-12T17:09:01.069139281Z" level=info msg="StartContainer for \"49847f0324dbfbac99cd14b3616d21e318c9e27b7f8a0815d2d9e57d38808bb1\" returns successfully" Sep 12 17:09:01.315821 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49847f0324dbfbac99cd14b3616d21e318c9e27b7f8a0815d2d9e57d38808bb1-rootfs.mount: Deactivated successfully. Sep 12 17:09:02.179656 containerd[1719]: time="2025-09-12T17:09:02.179414105Z" level=info msg="shim disconnected" id=49847f0324dbfbac99cd14b3616d21e318c9e27b7f8a0815d2d9e57d38808bb1 namespace=k8s.io Sep 12 17:09:02.179656 containerd[1719]: time="2025-09-12T17:09:02.179494305Z" level=warning msg="cleaning up after shim disconnected" id=49847f0324dbfbac99cd14b3616d21e318c9e27b7f8a0815d2d9e57d38808bb1 namespace=k8s.io Sep 12 17:09:02.179656 containerd[1719]: time="2025-09-12T17:09:02.179503505Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:09:02.863335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1634812896.mount: Deactivated successfully. Sep 12 17:09:03.088521 containerd[1719]: time="2025-09-12T17:09:03.088469581Z" level=info msg="CreateContainer within sandbox \"709473a99082188aaec2a4e76fae5d41a1b35954cf4e07fbe6948b0bca52bc1c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:09:03.124264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3927774850.mount: Deactivated successfully. Sep 12 17:09:03.179275 containerd[1719]: time="2025-09-12T17:09:03.179212360Z" level=info msg="CreateContainer within sandbox \"709473a99082188aaec2a4e76fae5d41a1b35954cf4e07fbe6948b0bca52bc1c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"783f33ab14339da43cb5a310909f4730a04b38d5e5bfbc3739c4c17f0bb95346\"" Sep 12 17:09:03.180019 containerd[1719]: time="2025-09-12T17:09:03.179977281Z" level=info msg="StartContainer for \"783f33ab14339da43cb5a310909f4730a04b38d5e5bfbc3739c4c17f0bb95346\"" Sep 12 17:09:03.210950 systemd[1]: Started cri-containerd-783f33ab14339da43cb5a310909f4730a04b38d5e5bfbc3739c4c17f0bb95346.scope - libcontainer container 783f33ab14339da43cb5a310909f4730a04b38d5e5bfbc3739c4c17f0bb95346. Sep 12 17:09:03.257974 containerd[1719]: time="2025-09-12T17:09:03.257785801Z" level=info msg="StartContainer for \"783f33ab14339da43cb5a310909f4730a04b38d5e5bfbc3739c4c17f0bb95346\" returns successfully" Sep 12 17:09:03.277687 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:09:03.278340 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:09:03.279285 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:09:03.286104 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:09:03.288932 systemd[1]: cri-containerd-783f33ab14339da43cb5a310909f4730a04b38d5e5bfbc3739c4c17f0bb95346.scope: Deactivated successfully. Sep 12 17:09:03.312108 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 12 17:09:03.374320 containerd[1719]: time="2025-09-12T17:09:03.373940751Z" level=info msg="shim disconnected" id=783f33ab14339da43cb5a310909f4730a04b38d5e5bfbc3739c4c17f0bb95346 namespace=k8s.io Sep 12 17:09:03.374320 containerd[1719]: time="2025-09-12T17:09:03.374155871Z" level=warning msg="cleaning up after shim disconnected" id=783f33ab14339da43cb5a310909f4730a04b38d5e5bfbc3739c4c17f0bb95346 namespace=k8s.io Sep 12 17:09:03.374320 containerd[1719]: time="2025-09-12T17:09:03.374164631Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:09:03.654337 containerd[1719]: time="2025-09-12T17:09:03.654248105Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:09:03.657934 containerd[1719]: time="2025-09-12T17:09:03.657629351Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 12 17:09:03.661529 containerd[1719]: time="2025-09-12T17:09:03.661463197Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:09:03.663201 containerd[1719]: time="2025-09-12T17:09:03.663046320Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.371951513s" Sep 12 17:09:03.663201 containerd[1719]: time="2025-09-12T17:09:03.663089240Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 12 17:09:03.666836 containerd[1719]: time="2025-09-12T17:09:03.666636246Z" level=info msg="CreateContainer within sandbox \"63811f814d19f0304efa11bcf910a62ec4c0c7cd2a4281d82c53257d14445d9e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 17:09:03.703396 containerd[1719]: time="2025-09-12T17:09:03.703340508Z" level=info msg="CreateContainer within sandbox \"63811f814d19f0304efa11bcf910a62ec4c0c7cd2a4281d82c53257d14445d9e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9\"" Sep 12 17:09:03.704364 containerd[1719]: time="2025-09-12T17:09:03.704186389Z" level=info msg="StartContainer for \"89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9\"" Sep 12 17:09:03.731970 systemd[1]: Started cri-containerd-89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9.scope - libcontainer container 89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9. 
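Annotation: the two image pulls above report both a byte count ("bytes read=157646710" for the cilium image, "bytes read=17135306" for operator-generic) and a wall-clock duration ("in 5.876482779s" and "in 3.371951513s"), which gives a rough effective pull rate. A worked check using only the figures containerd printed; treat the rate as approximate, since exactly what "bytes read" covers is an assumption here.

    # Effective pull rate from the "bytes read" / "in <duration>" figures above.
    pulls = {
        "quay.io/cilium/cilium:v1.12.5": (157_646_710, 5.876482779),
        "quay.io/cilium/operator-generic:v1.12.5": (17_135_306, 3.371951513),
    }

    for image, (bytes_read, seconds) in pulls.items():
        mib_per_s = bytes_read / seconds / (1024 * 1024)
        print(f"{image}: {bytes_read / 1e6:.1f} MB in {seconds:.2f}s ≈ {mib_per_s:.1f} MiB/s")
    # cilium: ~25.6 MiB/s; operator-generic: ~4.8 MiB/s on this node.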
Sep 12 17:09:03.760633 containerd[1719]: time="2025-09-12T17:09:03.760584285Z" level=info msg="StartContainer for \"89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9\" returns successfully" Sep 12 17:09:04.096358 containerd[1719]: time="2025-09-12T17:09:04.096305292Z" level=info msg="CreateContainer within sandbox \"709473a99082188aaec2a4e76fae5d41a1b35954cf4e07fbe6948b0bca52bc1c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:09:04.158909 containerd[1719]: time="2025-09-12T17:09:04.158847478Z" level=info msg="CreateContainer within sandbox \"709473a99082188aaec2a4e76fae5d41a1b35954cf4e07fbe6948b0bca52bc1c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d7826fe4b6a2c8ea2d0ff7d5ab5ff07d824be4812aeb2d5b445d31053d8fbfe6\"" Sep 12 17:09:04.161954 containerd[1719]: time="2025-09-12T17:09:04.159727520Z" level=info msg="StartContainer for \"d7826fe4b6a2c8ea2d0ff7d5ab5ff07d824be4812aeb2d5b445d31053d8fbfe6\"" Sep 12 17:09:04.213230 kubelet[3304]: I0912 17:09:04.213038 3304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-7v48k" podStartSLOduration=1.275177742 podStartE2EDuration="10.21301777s" podCreationTimestamp="2025-09-12 17:08:54 +0000 UTC" firstStartedPulling="2025-09-12 17:08:54.726193974 +0000 UTC m=+7.030972547" lastFinishedPulling="2025-09-12 17:09:03.664034002 +0000 UTC m=+15.968812575" observedRunningTime="2025-09-12 17:09:04.128518867 +0000 UTC m=+16.433297440" watchObservedRunningTime="2025-09-12 17:09:04.21301777 +0000 UTC m=+16.517796343" Sep 12 17:09:04.214471 systemd[1]: Started cri-containerd-d7826fe4b6a2c8ea2d0ff7d5ab5ff07d824be4812aeb2d5b445d31053d8fbfe6.scope - libcontainer container d7826fe4b6a2c8ea2d0ff7d5ab5ff07d824be4812aeb2d5b445d31053d8fbfe6. Sep 12 17:09:04.268507 containerd[1719]: time="2025-09-12T17:09:04.268452104Z" level=info msg="StartContainer for \"d7826fe4b6a2c8ea2d0ff7d5ab5ff07d824be4812aeb2d5b445d31053d8fbfe6\" returns successfully" Sep 12 17:09:04.281493 systemd[1]: cri-containerd-d7826fe4b6a2c8ea2d0ff7d5ab5ff07d824be4812aeb2d5b445d31053d8fbfe6.scope: Deactivated successfully. Sep 12 17:09:04.564808 containerd[1719]: time="2025-09-12T17:09:04.564568524Z" level=info msg="shim disconnected" id=d7826fe4b6a2c8ea2d0ff7d5ab5ff07d824be4812aeb2d5b445d31053d8fbfe6 namespace=k8s.io Sep 12 17:09:04.564808 containerd[1719]: time="2025-09-12T17:09:04.564626284Z" level=warning msg="cleaning up after shim disconnected" id=d7826fe4b6a2c8ea2d0ff7d5ab5ff07d824be4812aeb2d5b445d31053d8fbfe6 namespace=k8s.io Sep 12 17:09:04.564808 containerd[1719]: time="2025-09-12T17:09:04.564637004Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:09:04.856628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7826fe4b6a2c8ea2d0ff7d5ab5ff07d824be4812aeb2d5b445d31053d8fbfe6-rootfs.mount: Deactivated successfully. Sep 12 17:09:05.097814 containerd[1719]: time="2025-09-12T17:09:05.097665186Z" level=info msg="CreateContainer within sandbox \"709473a99082188aaec2a4e76fae5d41a1b35954cf4e07fbe6948b0bca52bc1c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:09:05.128587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1314730325.mount: Deactivated successfully. 
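Annotation: the pod_startup_latency_tracker entry for cilium-operator above reports podStartE2EDuration=10.21301777s but podStartSLOduration=1.275177742s. The numbers in that entry are consistent with the SLO figure being the end-to-end time minus the image-pull window (lastFinishedPulling − firstStartedPulling); that relationship is inferred from these timestamps, not quoted from kubelet documentation. A quick check:

    # Reproduce the cilium-operator startup-latency figures from the tracker entry above.
    from datetime import datetime, timezone

    def ts(s: str) -> datetime:
        # Trim to microseconds; datetime's %f accepts at most six fractional digits.
        head, frac = s.split(".")
        return datetime.strptime(f"{head}.{frac[:6]}", "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

    created   = datetime(2025, 9, 12, 17, 8, 54, tzinfo=timezone.utc)   # podCreationTimestamp
    running   = ts("2025-09-12 17:09:04.21301777")                      # watchObservedRunningTime
    pull_from = ts("2025-09-12 17:08:54.726193974")                     # firstStartedPulling
    pull_to   = ts("2025-09-12 17:09:03.664034002")                     # lastFinishedPulling

    e2e = (running - created).total_seconds()
    slo = e2e - (pull_to - pull_from).total_seconds()
    print(f"E2E={e2e:.6f}s  SLO={slo:.6f}s")
    # Matches podStartE2EDuration=10.21301777s and podStartSLOduration=1.275177742s
    # up to the microsecond truncation applied above.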
Sep 12 17:09:05.144357 containerd[1719]: time="2025-09-12T17:09:05.144305544Z" level=info msg="CreateContainer within sandbox \"709473a99082188aaec2a4e76fae5d41a1b35954cf4e07fbe6948b0bca52bc1c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ffc4fb16011f2a269b5a066d43116cf3c5f4e2df400284a1f310813912f8ba5f\"" Sep 12 17:09:05.145943 containerd[1719]: time="2025-09-12T17:09:05.145893587Z" level=info msg="StartContainer for \"ffc4fb16011f2a269b5a066d43116cf3c5f4e2df400284a1f310813912f8ba5f\"" Sep 12 17:09:05.181948 systemd[1]: Started cri-containerd-ffc4fb16011f2a269b5a066d43116cf3c5f4e2df400284a1f310813912f8ba5f.scope - libcontainer container ffc4fb16011f2a269b5a066d43116cf3c5f4e2df400284a1f310813912f8ba5f. Sep 12 17:09:05.205017 systemd[1]: cri-containerd-ffc4fb16011f2a269b5a066d43116cf3c5f4e2df400284a1f310813912f8ba5f.scope: Deactivated successfully. Sep 12 17:09:05.212353 containerd[1719]: time="2025-09-12T17:09:05.212307299Z" level=info msg="StartContainer for \"ffc4fb16011f2a269b5a066d43116cf3c5f4e2df400284a1f310813912f8ba5f\" returns successfully" Sep 12 17:09:05.255728 containerd[1719]: time="2025-09-12T17:09:05.255515453Z" level=info msg="shim disconnected" id=ffc4fb16011f2a269b5a066d43116cf3c5f4e2df400284a1f310813912f8ba5f namespace=k8s.io Sep 12 17:09:05.255728 containerd[1719]: time="2025-09-12T17:09:05.255606333Z" level=warning msg="cleaning up after shim disconnected" id=ffc4fb16011f2a269b5a066d43116cf3c5f4e2df400284a1f310813912f8ba5f namespace=k8s.io Sep 12 17:09:05.255728 containerd[1719]: time="2025-09-12T17:09:05.255616853Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:09:05.856628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffc4fb16011f2a269b5a066d43116cf3c5f4e2df400284a1f310813912f8ba5f-rootfs.mount: Deactivated successfully. Sep 12 17:09:06.102719 containerd[1719]: time="2025-09-12T17:09:06.102634005Z" level=info msg="CreateContainer within sandbox \"709473a99082188aaec2a4e76fae5d41a1b35954cf4e07fbe6948b0bca52bc1c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:09:06.153686 containerd[1719]: time="2025-09-12T17:09:06.153213410Z" level=info msg="CreateContainer within sandbox \"709473a99082188aaec2a4e76fae5d41a1b35954cf4e07fbe6948b0bca52bc1c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167\"" Sep 12 17:09:06.153839 containerd[1719]: time="2025-09-12T17:09:06.153806731Z" level=info msg="StartContainer for \"fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167\"" Sep 12 17:09:06.189245 systemd[1]: Started cri-containerd-fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167.scope - libcontainer container fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167. 
Sep 12 17:09:06.224291 containerd[1719]: time="2025-09-12T17:09:06.224055770Z" level=info msg="StartContainer for \"fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167\" returns successfully" Sep 12 17:09:06.399030 kubelet[3304]: I0912 17:09:06.398313 3304 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 17:09:06.438931 kubelet[3304]: W0912 17:09:06.438502 3304 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4230.2.3-a-bc327f6988" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.3-a-bc327f6988' and this object Sep 12 17:09:06.438931 kubelet[3304]: E0912 17:09:06.438563 3304 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4230.2.3-a-bc327f6988\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.3-a-bc327f6988' and this object" logger="UnhandledError" Sep 12 17:09:06.439083 kubelet[3304]: I0912 17:09:06.439041 3304 status_manager.go:890] "Failed to get status for pod" podUID="5e1e5527-9307-48d0-944d-f7b38e39effe" pod="kube-system/coredns-668d6bf9bc-ttklc" err="pods \"coredns-668d6bf9bc-ttklc\" is forbidden: User \"system:node:ci-4230.2.3-a-bc327f6988\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.3-a-bc327f6988' and this object" Sep 12 17:09:06.448184 systemd[1]: Created slice kubepods-burstable-pod5e1e5527_9307_48d0_944d_f7b38e39effe.slice - libcontainer container kubepods-burstable-pod5e1e5527_9307_48d0_944d_f7b38e39effe.slice. Sep 12 17:09:06.463609 systemd[1]: Created slice kubepods-burstable-podc77554aa_6f82_46ec_82a5_9edfdfd04a77.slice - libcontainer container kubepods-burstable-podc77554aa_6f82_46ec_82a5_9edfdfd04a77.slice. 
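Annotation: the reflector.go errors above ("configmaps \"coredns\" is forbidden … no relationship found between node 'ci-4230.2.3-a-bc327f6988' and this object") come from the node authorizer: a kubelet may only read a ConfigMap once a pod that mounts it is bound to its node, and the coredns pods had only just been scheduled. A hedged sketch of reproducing the same authorization decision with a SubjectAccessReview, assuming the official `kubernetes` Python client and credentials allowed to create SubjectAccessReviews; the user string and resource attributes are copied from the log line.

    # Ask the API server whether the node's kubelet identity may list the "coredns"
    # ConfigMap, mirroring the denial logged above.
    from kubernetes import client, config

    config.load_kube_config()
    authz = client.AuthorizationV1Api()

    review = client.V1SubjectAccessReview(
        spec=client.V1SubjectAccessReviewSpec(
            user="system:node:ci-4230.2.3-a-bc327f6988",
            groups=["system:nodes", "system:authenticated"],
            resource_attributes=client.V1ResourceAttributes(
                namespace="kube-system",
                verb="list",
                resource="configmaps",
                name="coredns",
            ),
        )
    )
    result = authz.create_subject_access_review(review)
    print(result.status.allowed, result.status.reason)
    # Until a pod mounting the ConfigMap is bound to the node, the node authorizer
    # is expected to answer False with a "no relationship found" style reason.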
Sep 12 17:09:06.544455 kubelet[3304]: I0912 17:09:06.544284 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfmxz\" (UniqueName: \"kubernetes.io/projected/c77554aa-6f82-46ec-82a5-9edfdfd04a77-kube-api-access-wfmxz\") pod \"coredns-668d6bf9bc-wnmzh\" (UID: \"c77554aa-6f82-46ec-82a5-9edfdfd04a77\") " pod="kube-system/coredns-668d6bf9bc-wnmzh" Sep 12 17:09:06.544455 kubelet[3304]: I0912 17:09:06.544337 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkztg\" (UniqueName: \"kubernetes.io/projected/5e1e5527-9307-48d0-944d-f7b38e39effe-kube-api-access-bkztg\") pod \"coredns-668d6bf9bc-ttklc\" (UID: \"5e1e5527-9307-48d0-944d-f7b38e39effe\") " pod="kube-system/coredns-668d6bf9bc-ttklc" Sep 12 17:09:06.544455 kubelet[3304]: I0912 17:09:06.544356 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c77554aa-6f82-46ec-82a5-9edfdfd04a77-config-volume\") pod \"coredns-668d6bf9bc-wnmzh\" (UID: \"c77554aa-6f82-46ec-82a5-9edfdfd04a77\") " pod="kube-system/coredns-668d6bf9bc-wnmzh" Sep 12 17:09:06.544455 kubelet[3304]: I0912 17:09:06.544378 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e1e5527-9307-48d0-944d-f7b38e39effe-config-volume\") pod \"coredns-668d6bf9bc-ttklc\" (UID: \"5e1e5527-9307-48d0-944d-f7b38e39effe\") " pod="kube-system/coredns-668d6bf9bc-ttklc" Sep 12 17:09:07.646111 kubelet[3304]: E0912 17:09:07.646045 3304 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Sep 12 17:09:07.646582 kubelet[3304]: E0912 17:09:07.646152 3304 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c77554aa-6f82-46ec-82a5-9edfdfd04a77-config-volume podName:c77554aa-6f82-46ec-82a5-9edfdfd04a77 nodeName:}" failed. No retries permitted until 2025-09-12 17:09:08.146128895 +0000 UTC m=+20.450907468 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c77554aa-6f82-46ec-82a5-9edfdfd04a77-config-volume") pod "coredns-668d6bf9bc-wnmzh" (UID: "c77554aa-6f82-46ec-82a5-9edfdfd04a77") : failed to sync configmap cache: timed out waiting for the condition Sep 12 17:09:07.646582 kubelet[3304]: E0912 17:09:07.646045 3304 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Sep 12 17:09:07.646582 kubelet[3304]: E0912 17:09:07.646415 3304 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5e1e5527-9307-48d0-944d-f7b38e39effe-config-volume podName:5e1e5527-9307-48d0-944d-f7b38e39effe nodeName:}" failed. No retries permitted until 2025-09-12 17:09:08.146403495 +0000 UTC m=+20.451182068 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5e1e5527-9307-48d0-944d-f7b38e39effe-config-volume") pod "coredns-668d6bf9bc-ttklc" (UID: "5e1e5527-9307-48d0-944d-f7b38e39effe") : failed to sync configmap cache: timed out waiting for the condition Sep 12 17:09:08.257084 containerd[1719]: time="2025-09-12T17:09:08.256748447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ttklc,Uid:5e1e5527-9307-48d0-944d-f7b38e39effe,Namespace:kube-system,Attempt:0,}" Sep 12 17:09:08.272315 containerd[1719]: time="2025-09-12T17:09:08.271969593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wnmzh,Uid:c77554aa-6f82-46ec-82a5-9edfdfd04a77,Namespace:kube-system,Attempt:0,}" Sep 12 17:09:08.597583 systemd-networkd[1553]: cilium_host: Link UP Sep 12 17:09:08.597697 systemd-networkd[1553]: cilium_net: Link UP Sep 12 17:09:08.599922 systemd-networkd[1553]: cilium_net: Gained carrier Sep 12 17:09:08.600649 systemd-networkd[1553]: cilium_host: Gained carrier Sep 12 17:09:08.600838 systemd-networkd[1553]: cilium_net: Gained IPv6LL Sep 12 17:09:08.835068 systemd-networkd[1553]: cilium_vxlan: Link UP Sep 12 17:09:08.835075 systemd-networkd[1553]: cilium_vxlan: Gained carrier Sep 12 17:09:08.950930 systemd-networkd[1553]: cilium_host: Gained IPv6LL Sep 12 17:09:09.205810 kernel: NET: Registered PF_ALG protocol family Sep 12 17:09:10.150894 systemd-networkd[1553]: lxc_health: Link UP Sep 12 17:09:10.168676 systemd-networkd[1553]: lxc_health: Gained carrier Sep 12 17:09:10.263266 kubelet[3304]: I0912 17:09:10.263166 3304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-l2ctq" podStartSLOduration=11.383925256 podStartE2EDuration="17.26314s" podCreationTimestamp="2025-09-12 17:08:53 +0000 UTC" firstStartedPulling="2025-09-12 17:08:54.409375859 +0000 UTC m=+6.714154432" lastFinishedPulling="2025-09-12 17:09:00.288590603 +0000 UTC m=+12.593369176" observedRunningTime="2025-09-12 17:09:07.125374014 +0000 UTC m=+19.430152587" watchObservedRunningTime="2025-09-12 17:09:10.26314 +0000 UTC m=+22.567918573" Sep 12 17:09:10.366351 kernel: eth0: renamed from tmp87ee6 Sep 12 17:09:10.371334 systemd-networkd[1553]: lxcf88895f4cff2: Link UP Sep 12 17:09:10.373853 systemd-networkd[1553]: lxcf88895f4cff2: Gained carrier Sep 12 17:09:10.381130 systemd-networkd[1553]: lxc6d5f19198d61: Link UP Sep 12 17:09:10.396733 kernel: eth0: renamed from tmp887b5 Sep 12 17:09:10.400317 systemd-networkd[1553]: cilium_vxlan: Gained IPv6LL Sep 12 17:09:10.400618 systemd-networkd[1553]: lxc6d5f19198d61: Gained carrier Sep 12 17:09:11.222967 systemd-networkd[1553]: lxc_health: Gained IPv6LL Sep 12 17:09:11.990889 systemd-networkd[1553]: lxcf88895f4cff2: Gained IPv6LL Sep 12 17:09:12.439919 systemd-networkd[1553]: lxc6d5f19198d61: Gained IPv6LL Sep 12 17:09:14.573652 containerd[1719]: time="2025-09-12T17:09:14.573287527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:09:14.573652 containerd[1719]: time="2025-09-12T17:09:14.573404487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:09:14.573652 containerd[1719]: time="2025-09-12T17:09:14.573422607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:09:14.575739 containerd[1719]: time="2025-09-12T17:09:14.574222849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:09:14.607663 containerd[1719]: time="2025-09-12T17:09:14.604228108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:09:14.607663 containerd[1719]: time="2025-09-12T17:09:14.604290028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:09:14.607663 containerd[1719]: time="2025-09-12T17:09:14.604304788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:09:14.607663 containerd[1719]: time="2025-09-12T17:09:14.605234350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:09:14.615967 systemd[1]: Started cri-containerd-887b5df8e7a382dcef8afb6b43481d4f3c495c14dc843cbc2c25704f11a3b44c.scope - libcontainer container 887b5df8e7a382dcef8afb6b43481d4f3c495c14dc843cbc2c25704f11a3b44c. Sep 12 17:09:14.644180 systemd[1]: Started cri-containerd-87ee68eca4927c3eaa29182ff0c4ee1f5014046010ab0d5f2f08fa6b31ae7c60.scope - libcontainer container 87ee68eca4927c3eaa29182ff0c4ee1f5014046010ab0d5f2f08fa6b31ae7c60. Sep 12 17:09:14.700028 containerd[1719]: time="2025-09-12T17:09:14.699936577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wnmzh,Uid:c77554aa-6f82-46ec-82a5-9edfdfd04a77,Namespace:kube-system,Attempt:0,} returns sandbox id \"887b5df8e7a382dcef8afb6b43481d4f3c495c14dc843cbc2c25704f11a3b44c\"" Sep 12 17:09:14.706879 containerd[1719]: time="2025-09-12T17:09:14.706823871Z" level=info msg="CreateContainer within sandbox \"887b5df8e7a382dcef8afb6b43481d4f3c495c14dc843cbc2c25704f11a3b44c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:09:14.710399 containerd[1719]: time="2025-09-12T17:09:14.710335997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ttklc,Uid:5e1e5527-9307-48d0-944d-f7b38e39effe,Namespace:kube-system,Attempt:0,} returns sandbox id \"87ee68eca4927c3eaa29182ff0c4ee1f5014046010ab0d5f2f08fa6b31ae7c60\"" Sep 12 17:09:14.715568 containerd[1719]: time="2025-09-12T17:09:14.715509248Z" level=info msg="CreateContainer within sandbox \"87ee68eca4927c3eaa29182ff0c4ee1f5014046010ab0d5f2f08fa6b31ae7c60\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:09:14.780206 containerd[1719]: time="2025-09-12T17:09:14.780001615Z" level=info msg="CreateContainer within sandbox \"887b5df8e7a382dcef8afb6b43481d4f3c495c14dc843cbc2c25704f11a3b44c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e9c39ec9e098c745e9a9c67b2772dfc51b75fedc5abfac5258dd4979a549e05\"" Sep 12 17:09:14.781499 containerd[1719]: time="2025-09-12T17:09:14.780639816Z" level=info msg="StartContainer for \"9e9c39ec9e098c745e9a9c67b2772dfc51b75fedc5abfac5258dd4979a549e05\"" Sep 12 17:09:14.793201 containerd[1719]: time="2025-09-12T17:09:14.792879560Z" level=info msg="CreateContainer within sandbox \"87ee68eca4927c3eaa29182ff0c4ee1f5014046010ab0d5f2f08fa6b31ae7c60\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9a8b4e90adfb18dbd9b31d150f7b0bf3d45577148b251d32148c3eef678c5a1e\"" Sep 
12 17:09:14.795083 containerd[1719]: time="2025-09-12T17:09:14.794595444Z" level=info msg="StartContainer for \"9a8b4e90adfb18dbd9b31d150f7b0bf3d45577148b251d32148c3eef678c5a1e\"" Sep 12 17:09:14.820529 systemd[1]: Started cri-containerd-9e9c39ec9e098c745e9a9c67b2772dfc51b75fedc5abfac5258dd4979a549e05.scope - libcontainer container 9e9c39ec9e098c745e9a9c67b2772dfc51b75fedc5abfac5258dd4979a549e05. Sep 12 17:09:14.833934 systemd[1]: Started cri-containerd-9a8b4e90adfb18dbd9b31d150f7b0bf3d45577148b251d32148c3eef678c5a1e.scope - libcontainer container 9a8b4e90adfb18dbd9b31d150f7b0bf3d45577148b251d32148c3eef678c5a1e. Sep 12 17:09:14.869150 containerd[1719]: time="2025-09-12T17:09:14.869013550Z" level=info msg="StartContainer for \"9e9c39ec9e098c745e9a9c67b2772dfc51b75fedc5abfac5258dd4979a549e05\" returns successfully" Sep 12 17:09:14.882054 containerd[1719]: time="2025-09-12T17:09:14.881898096Z" level=info msg="StartContainer for \"9a8b4e90adfb18dbd9b31d150f7b0bf3d45577148b251d32148c3eef678c5a1e\" returns successfully" Sep 12 17:09:15.143632 kubelet[3304]: I0912 17:09:15.143462 3304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ttklc" podStartSLOduration=21.143444132 podStartE2EDuration="21.143444132s" podCreationTimestamp="2025-09-12 17:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:09:15.141068007 +0000 UTC m=+27.445846580" watchObservedRunningTime="2025-09-12 17:09:15.143444132 +0000 UTC m=+27.448222705" Sep 12 17:09:15.196521 kubelet[3304]: I0912 17:09:15.195876 3304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wnmzh" podStartSLOduration=21.195816835 podStartE2EDuration="21.195816835s" podCreationTimestamp="2025-09-12 17:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:09:15.195640035 +0000 UTC m=+27.500418608" watchObservedRunningTime="2025-09-12 17:09:15.195816835 +0000 UTC m=+27.500595408" Sep 12 17:09:15.579258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4206814938.mount: Deactivated successfully. Sep 12 17:10:25.487218 systemd[1]: Started sshd@7-10.200.20.12:22-10.200.16.10:53986.service - OpenSSH per-connection server daemon (10.200.16.10:53986). Sep 12 17:10:25.900722 sshd[4692]: Accepted publickey for core from 10.200.16.10 port 53986 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:10:25.902160 sshd-session[4692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:25.906541 systemd-logind[1697]: New session 10 of user core. Sep 12 17:10:25.912915 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:10:26.289623 sshd[4694]: Connection closed by 10.200.16.10 port 53986 Sep 12 17:10:26.289525 sshd-session[4692]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:26.292944 systemd[1]: sshd@7-10.200.20.12:22-10.200.16.10:53986.service: Deactivated successfully. Sep 12 17:10:26.294840 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:10:26.295589 systemd-logind[1697]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:10:26.296838 systemd-logind[1697]: Removed session 10. 
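The pod_startup_latency_tracker entries above report podStartE2EDuration as simply the gap between the pod's creation timestamp and the time the kubelet observed it running; the zero-valued pulling timestamps indicate no image pull was recorded for these pods. Below is a standalone sketch of that arithmetic using the timestamps printed for coredns-668d6bf9bc-ttklc; the layout string is the standard Go time format these log fields use, and the program itself is purely illustrative.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching timestamps such as "2025-09-12 17:09:15.143444132 +0000 UTC".
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-09-12 17:08:54 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observedRunning, err := time.Parse(layout, "2025-09-12 17:09:15.143444132 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Prints 21.143444132s, matching the podStartE2EDuration reported above.
	fmt.Println(observedRunning.Sub(created))
}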
Sep 12 17:10:31.377915 systemd[1]: Started sshd@8-10.200.20.12:22-10.200.16.10:53348.service - OpenSSH per-connection server daemon (10.200.16.10:53348). Sep 12 17:10:31.842961 sshd[4707]: Accepted publickey for core from 10.200.16.10 port 53348 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:10:31.844408 sshd-session[4707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:31.848880 systemd-logind[1697]: New session 11 of user core. Sep 12 17:10:31.857953 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:10:32.241981 sshd[4709]: Connection closed by 10.200.16.10 port 53348 Sep 12 17:10:32.242656 sshd-session[4707]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:32.246509 systemd[1]: sshd@8-10.200.20.12:22-10.200.16.10:53348.service: Deactivated successfully. Sep 12 17:10:32.249410 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:10:32.250594 systemd-logind[1697]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:10:32.251671 systemd-logind[1697]: Removed session 11. Sep 12 17:10:37.329002 systemd[1]: Started sshd@9-10.200.20.12:22-10.200.16.10:53356.service - OpenSSH per-connection server daemon (10.200.16.10:53356). Sep 12 17:10:37.781935 sshd[4722]: Accepted publickey for core from 10.200.16.10 port 53356 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:10:37.783282 sshd-session[4722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:37.788774 systemd-logind[1697]: New session 12 of user core. Sep 12 17:10:37.796894 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:10:38.189759 sshd[4724]: Connection closed by 10.200.16.10 port 53356 Sep 12 17:10:38.189475 sshd-session[4722]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:38.193731 systemd[1]: sshd@9-10.200.20.12:22-10.200.16.10:53356.service: Deactivated successfully. Sep 12 17:10:38.196737 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:10:38.197823 systemd-logind[1697]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:10:38.198726 systemd-logind[1697]: Removed session 12. Sep 12 17:10:43.271969 systemd[1]: Started sshd@10-10.200.20.12:22-10.200.16.10:54096.service - OpenSSH per-connection server daemon (10.200.16.10:54096). Sep 12 17:10:43.688245 sshd[4736]: Accepted publickey for core from 10.200.16.10 port 54096 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:10:43.689170 sshd-session[4736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:43.694781 systemd-logind[1697]: New session 13 of user core. Sep 12 17:10:43.704890 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:10:44.056514 sshd[4738]: Connection closed by 10.200.16.10 port 54096 Sep 12 17:10:44.057104 sshd-session[4736]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:44.060965 systemd[1]: sshd@10-10.200.20.12:22-10.200.16.10:54096.service: Deactivated successfully. Sep 12 17:10:44.063473 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:10:44.066122 systemd-logind[1697]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:10:44.067151 systemd-logind[1697]: Removed session 13. Sep 12 17:10:44.153150 systemd[1]: Started sshd@11-10.200.20.12:22-10.200.16.10:54100.service - OpenSSH per-connection server daemon (10.200.16.10:54100). 
Sep 12 17:10:44.608307 sshd[4751]: Accepted publickey for core from 10.200.16.10 port 54100 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:10:44.609207 sshd-session[4751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:44.617597 systemd-logind[1697]: New session 14 of user core. Sep 12 17:10:44.627898 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:10:45.055069 sshd[4753]: Connection closed by 10.200.16.10 port 54100 Sep 12 17:10:45.055161 sshd-session[4751]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:45.058980 systemd-logind[1697]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:10:45.059803 systemd[1]: sshd@11-10.200.20.12:22-10.200.16.10:54100.service: Deactivated successfully. Sep 12 17:10:45.061911 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:10:45.064643 systemd-logind[1697]: Removed session 14. Sep 12 17:10:45.133068 systemd[1]: Started sshd@12-10.200.20.12:22-10.200.16.10:54102.service - OpenSSH per-connection server daemon (10.200.16.10:54102). Sep 12 17:10:45.589667 sshd[4763]: Accepted publickey for core from 10.200.16.10 port 54102 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:10:45.591125 sshd-session[4763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:45.596097 systemd-logind[1697]: New session 15 of user core. Sep 12 17:10:45.600865 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:10:45.987723 sshd[4765]: Connection closed by 10.200.16.10 port 54102 Sep 12 17:10:45.988403 sshd-session[4763]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:45.991612 systemd[1]: sshd@12-10.200.20.12:22-10.200.16.10:54102.service: Deactivated successfully. Sep 12 17:10:45.993569 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:10:45.995408 systemd-logind[1697]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:10:45.996534 systemd-logind[1697]: Removed session 15. Sep 12 17:10:51.078145 systemd[1]: Started sshd@13-10.200.20.12:22-10.200.16.10:40364.service - OpenSSH per-connection server daemon (10.200.16.10:40364). Sep 12 17:10:51.532417 sshd[4780]: Accepted publickey for core from 10.200.16.10 port 40364 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:10:51.533942 sshd-session[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:51.539010 systemd-logind[1697]: New session 16 of user core. Sep 12 17:10:51.544903 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:10:51.930522 sshd[4782]: Connection closed by 10.200.16.10 port 40364 Sep 12 17:10:51.931235 sshd-session[4780]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:51.935019 systemd[1]: sshd@13-10.200.20.12:22-10.200.16.10:40364.service: Deactivated successfully. Sep 12 17:10:51.937215 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:10:51.940804 systemd-logind[1697]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:10:51.943994 systemd-logind[1697]: Removed session 16. Sep 12 17:10:57.023000 systemd[1]: Started sshd@14-10.200.20.12:22-10.200.16.10:40374.service - OpenSSH per-connection server daemon (10.200.16.10:40374). 
Sep 12 17:10:57.437139 sshd[4797]: Accepted publickey for core from 10.200.16.10 port 40374 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:10:57.438530 sshd-session[4797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:57.443345 systemd-logind[1697]: New session 17 of user core. Sep 12 17:10:57.453091 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:10:57.800110 sshd[4799]: Connection closed by 10.200.16.10 port 40374 Sep 12 17:10:57.799626 sshd-session[4797]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:57.803314 systemd[1]: sshd@14-10.200.20.12:22-10.200.16.10:40374.service: Deactivated successfully. Sep 12 17:10:57.805657 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:10:57.806584 systemd-logind[1697]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:10:57.807643 systemd-logind[1697]: Removed session 17. Sep 12 17:10:57.882154 systemd[1]: Started sshd@15-10.200.20.12:22-10.200.16.10:40378.service - OpenSSH per-connection server daemon (10.200.16.10:40378). Sep 12 17:10:58.297504 sshd[4810]: Accepted publickey for core from 10.200.16.10 port 40378 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:10:58.299367 sshd-session[4810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:58.304838 systemd-logind[1697]: New session 18 of user core. Sep 12 17:10:58.309921 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:10:58.732900 sshd[4812]: Connection closed by 10.200.16.10 port 40378 Sep 12 17:10:58.733561 sshd-session[4810]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:58.737625 systemd[1]: sshd@15-10.200.20.12:22-10.200.16.10:40378.service: Deactivated successfully. Sep 12 17:10:58.740486 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:10:58.743275 systemd-logind[1697]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:10:58.744276 systemd-logind[1697]: Removed session 18. Sep 12 17:10:58.821981 systemd[1]: Started sshd@16-10.200.20.12:22-10.200.16.10:40384.service - OpenSSH per-connection server daemon (10.200.16.10:40384). Sep 12 17:10:59.278224 sshd[4821]: Accepted publickey for core from 10.200.16.10 port 40384 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:10:59.279575 sshd-session[4821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:59.286133 systemd-logind[1697]: New session 19 of user core. Sep 12 17:10:59.292906 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:11:00.062946 sshd[4823]: Connection closed by 10.200.16.10 port 40384 Sep 12 17:11:00.062849 sshd-session[4821]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:00.065848 systemd-logind[1697]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:11:00.067656 systemd[1]: sshd@16-10.200.20.12:22-10.200.16.10:40384.service: Deactivated successfully. Sep 12 17:11:00.070203 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:11:00.072267 systemd-logind[1697]: Removed session 19. Sep 12 17:11:00.155006 systemd[1]: Started sshd@17-10.200.20.12:22-10.200.16.10:42610.service - OpenSSH per-connection server daemon (10.200.16.10:42610). 
Sep 12 17:11:00.609317 sshd[4840]: Accepted publickey for core from 10.200.16.10 port 42610 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:11:00.610689 sshd-session[4840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:00.614926 systemd-logind[1697]: New session 20 of user core. Sep 12 17:11:00.624904 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:11:01.112305 sshd[4842]: Connection closed by 10.200.16.10 port 42610 Sep 12 17:11:01.112942 sshd-session[4840]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:01.116716 systemd[1]: sshd@17-10.200.20.12:22-10.200.16.10:42610.service: Deactivated successfully. Sep 12 17:11:01.119077 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:11:01.120194 systemd-logind[1697]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:11:01.121354 systemd-logind[1697]: Removed session 20. Sep 12 17:11:01.206257 systemd[1]: Started sshd@18-10.200.20.12:22-10.200.16.10:42612.service - OpenSSH per-connection server daemon (10.200.16.10:42612). Sep 12 17:11:01.661219 sshd[4851]: Accepted publickey for core from 10.200.16.10 port 42612 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:11:01.662640 sshd-session[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:01.668553 systemd-logind[1697]: New session 21 of user core. Sep 12 17:11:01.677941 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:11:02.064347 sshd[4853]: Connection closed by 10.200.16.10 port 42612 Sep 12 17:11:02.065151 sshd-session[4851]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:02.070254 systemd[1]: sshd@18-10.200.20.12:22-10.200.16.10:42612.service: Deactivated successfully. Sep 12 17:11:02.074086 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:11:02.075418 systemd-logind[1697]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:11:02.076768 systemd-logind[1697]: Removed session 21. Sep 12 17:11:07.147898 systemd[1]: Started sshd@19-10.200.20.12:22-10.200.16.10:42622.service - OpenSSH per-connection server daemon (10.200.16.10:42622). Sep 12 17:11:07.605149 sshd[4867]: Accepted publickey for core from 10.200.16.10 port 42622 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:11:07.606453 sshd-session[4867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:07.610689 systemd-logind[1697]: New session 22 of user core. Sep 12 17:11:07.615873 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 17:11:07.988642 sshd[4869]: Connection closed by 10.200.16.10 port 42622 Sep 12 17:11:07.989199 sshd-session[4867]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:07.992966 systemd-logind[1697]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:11:07.993520 systemd[1]: sshd@19-10.200.20.12:22-10.200.16.10:42622.service: Deactivated successfully. Sep 12 17:11:07.995483 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:11:07.996570 systemd-logind[1697]: Removed session 22. Sep 12 17:11:13.081132 systemd[1]: Started sshd@20-10.200.20.12:22-10.200.16.10:34150.service - OpenSSH per-connection server daemon (10.200.16.10:34150). 
Sep 12 17:11:13.535865 sshd[4880]: Accepted publickey for core from 10.200.16.10 port 34150 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:11:13.537203 sshd-session[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:13.541900 systemd-logind[1697]: New session 23 of user core. Sep 12 17:11:13.548892 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 17:11:13.932608 sshd[4882]: Connection closed by 10.200.16.10 port 34150 Sep 12 17:11:13.933858 sshd-session[4880]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:13.939925 systemd[1]: sshd@20-10.200.20.12:22-10.200.16.10:34150.service: Deactivated successfully. Sep 12 17:11:13.940125 systemd-logind[1697]: Session 23 logged out. Waiting for processes to exit. Sep 12 17:11:13.944695 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 17:11:13.946177 systemd-logind[1697]: Removed session 23. Sep 12 17:11:19.017660 systemd[1]: Started sshd@21-10.200.20.12:22-10.200.16.10:34164.service - OpenSSH per-connection server daemon (10.200.16.10:34164). Sep 12 17:11:19.433393 sshd[4893]: Accepted publickey for core from 10.200.16.10 port 34164 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:11:19.434726 sshd-session[4893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:19.438787 systemd-logind[1697]: New session 24 of user core. Sep 12 17:11:19.443882 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 17:11:19.813881 sshd[4895]: Connection closed by 10.200.16.10 port 34164 Sep 12 17:11:19.813258 sshd-session[4893]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:19.816856 systemd[1]: sshd@21-10.200.20.12:22-10.200.16.10:34164.service: Deactivated successfully. Sep 12 17:11:19.818602 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 17:11:19.819771 systemd-logind[1697]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:11:19.820871 systemd-logind[1697]: Removed session 24. Sep 12 17:11:19.896317 systemd[1]: Started sshd@22-10.200.20.12:22-10.200.16.10:59740.service - OpenSSH per-connection server daemon (10.200.16.10:59740). Sep 12 17:11:20.309226 sshd[4907]: Accepted publickey for core from 10.200.16.10 port 59740 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:11:20.310549 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:20.315833 systemd-logind[1697]: New session 25 of user core. Sep 12 17:11:20.320908 systemd[1]: Started session-25.scope - Session 25 of User core. 
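The stretch of entries above is routine SSH churn: systemd starts a per-connection sshd@… unit, pam_unix opens a session for core, systemd-logind assigns a session number, and the pair is torn down moments later. As a hedged illustration only (the regular expression and the assumption that journal text arrives one entry per line on stdin are mine, not anything shipped on this host), the sketch below pairs the "New session N" and "Removed session N" lines and prints how long each session lasted.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

func main() {
	// Assumes lines like:
	//   Sep 12 17:10:25.906541 ... systemd-logind[1697]: New session 10 of user core.
	//   Sep 12 17:10:26.296838 ... systemd-logind[1697]: Removed session 10.
	re := regexp.MustCompile(`^(\w+ +\d+ \d+:\d+:\d+\.\d+).*systemd-logind\[\d+\]: (New|Removed) session (\d+)`)
	opened := map[string]time.Time{}

	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		m := re.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		ts, err := time.Parse("Jan 2 15:04:05.999999", m[1])
		if err != nil {
			continue
		}
		switch m[2] {
		case "New":
			opened[m[3]] = ts
		case "Removed":
			if start, ok := opened[m[3]]; ok {
				fmt.Printf("session %s lasted %s\n", m[3], ts.Sub(start))
				delete(opened, m[3])
			}
		}
	}
}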
Sep 12 17:11:23.094186 containerd[1719]: time="2025-09-12T17:11:23.093686378Z" level=info msg="StopContainer for \"89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9\" with timeout 30 (s)" Sep 12 17:11:23.095815 containerd[1719]: time="2025-09-12T17:11:23.094832380Z" level=info msg="Stop container \"89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9\" with signal terminated" Sep 12 17:11:23.103284 containerd[1719]: time="2025-09-12T17:11:23.103229473Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:11:23.111482 systemd[1]: cri-containerd-89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9.scope: Deactivated successfully. Sep 12 17:11:23.119057 containerd[1719]: time="2025-09-12T17:11:23.118374616Z" level=info msg="StopContainer for \"fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167\" with timeout 2 (s)" Sep 12 17:11:23.120145 containerd[1719]: time="2025-09-12T17:11:23.120023019Z" level=info msg="Stop container \"fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167\" with signal terminated" Sep 12 17:11:23.126035 systemd-networkd[1553]: lxc_health: Link DOWN Sep 12 17:11:23.126044 systemd-networkd[1553]: lxc_health: Lost carrier Sep 12 17:11:23.144572 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9-rootfs.mount: Deactivated successfully. Sep 12 17:11:23.147542 systemd[1]: cri-containerd-fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167.scope: Deactivated successfully. Sep 12 17:11:23.150541 systemd[1]: cri-containerd-fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167.scope: Consumed 7.115s CPU time, 127.4M memory peak, 128K read from disk, 12.9M written to disk. Sep 12 17:11:23.179263 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167-rootfs.mount: Deactivated successfully. 
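The "StopContainer ... with timeout 30 (s)" and "with signal terminated" messages above describe the usual two-stage stop: deliver SIGTERM, allow the grace period to elapse, and only then force-kill. The following is a generic sketch of that pattern against an ordinary child process, not containerd's or the kubelet's actual code path; the 30-second figure is taken from the entry above, and the sleep command merely stands in for a container's main process.

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "300") // stand-in for the container's main process
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	// Stage 1: ask politely, as "Stop container ... with signal terminated" does.
	_ = cmd.Process.Signal(syscall.SIGTERM)

	// Stage 2: if the grace period (30 s in the log entry above) runs out, force-kill.
	select {
	case err := <-done:
		fmt.Println("exited within grace period:", err)
	case <-time.After(30 * time.Second):
		_ = cmd.Process.Kill()
		fmt.Println("grace period elapsed, killed:", <-done)
	}
}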
Sep 12 17:11:23.214586 containerd[1719]: time="2025-09-12T17:11:23.214114006Z" level=info msg="shim disconnected" id=fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167 namespace=k8s.io Sep 12 17:11:23.214586 containerd[1719]: time="2025-09-12T17:11:23.214354007Z" level=warning msg="cleaning up after shim disconnected" id=fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167 namespace=k8s.io Sep 12 17:11:23.214586 containerd[1719]: time="2025-09-12T17:11:23.214270727Z" level=info msg="shim disconnected" id=89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9 namespace=k8s.io Sep 12 17:11:23.214586 containerd[1719]: time="2025-09-12T17:11:23.214414407Z" level=warning msg="cleaning up after shim disconnected" id=89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9 namespace=k8s.io Sep 12 17:11:23.214586 containerd[1719]: time="2025-09-12T17:11:23.214421247Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:11:23.215095 containerd[1719]: time="2025-09-12T17:11:23.214783287Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:11:23.230045 containerd[1719]: time="2025-09-12T17:11:23.229770551Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:11:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 17:11:23.238350 containerd[1719]: time="2025-09-12T17:11:23.238296444Z" level=info msg="StopContainer for \"89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9\" returns successfully" Sep 12 17:11:23.239607 containerd[1719]: time="2025-09-12T17:11:23.239569926Z" level=info msg="StopPodSandbox for \"63811f814d19f0304efa11bcf910a62ec4c0c7cd2a4281d82c53257d14445d9e\"" Sep 12 17:11:23.241724 containerd[1719]: time="2025-09-12T17:11:23.239615006Z" level=info msg="Container to stop \"89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:11:23.241591 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-63811f814d19f0304efa11bcf910a62ec4c0c7cd2a4281d82c53257d14445d9e-shm.mount: Deactivated successfully. 
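The "shim disconnected" / "cleaning up after shim disconnected" sequence and the warning about runc exiting with status 255 above appear to be the benign tail of a stop: by the time the cleanup path runs, the processes it wants to remove are often already gone, so the failure is downgraded to a warning and the StopContainer calls still return successfully. Below is a sketch of that tolerant-cleanup idea in plain Go, assuming nothing about containerd's internals; os.ErrProcessDone is the standard library's marker for "the process has already finished".

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

// reap signals a child and treats "already gone" as success, much as the shim
// cleanup above downgrades its failures to warnings.
func reap(p *os.Process) error {
	err := p.Signal(syscall.SIGKILL)
	if err == nil || errors.Is(err, os.ErrProcessDone) {
		return nil // nothing left to clean up
	}
	return err
}

func main() {
	cmd := exec.Command("true")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = cmd.Wait() // child exits immediately, mimicking a container that is already down

	fmt.Println("cleanup error:", reap(cmd.Process))
}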
Sep 12 17:11:23.242692 containerd[1719]: time="2025-09-12T17:11:23.242564371Z" level=info msg="StopContainer for \"fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167\" returns successfully" Sep 12 17:11:23.244060 containerd[1719]: time="2025-09-12T17:11:23.243856653Z" level=info msg="StopPodSandbox for \"709473a99082188aaec2a4e76fae5d41a1b35954cf4e07fbe6948b0bca52bc1c\"" Sep 12 17:11:23.244060 containerd[1719]: time="2025-09-12T17:11:23.244024773Z" level=info msg="Container to stop \"fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:11:23.244060 containerd[1719]: time="2025-09-12T17:11:23.244045093Z" level=info msg="Container to stop \"ffc4fb16011f2a269b5a066d43116cf3c5f4e2df400284a1f310813912f8ba5f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:11:23.244170 containerd[1719]: time="2025-09-12T17:11:23.244053973Z" level=info msg="Container to stop \"49847f0324dbfbac99cd14b3616d21e318c9e27b7f8a0815d2d9e57d38808bb1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:11:23.244199 containerd[1719]: time="2025-09-12T17:11:23.244169653Z" level=info msg="Container to stop \"783f33ab14339da43cb5a310909f4730a04b38d5e5bfbc3739c4c17f0bb95346\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:11:23.244199 containerd[1719]: time="2025-09-12T17:11:23.244181933Z" level=info msg="Container to stop \"d7826fe4b6a2c8ea2d0ff7d5ab5ff07d824be4812aeb2d5b445d31053d8fbfe6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:11:23.246053 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-709473a99082188aaec2a4e76fae5d41a1b35954cf4e07fbe6948b0bca52bc1c-shm.mount: Deactivated successfully. Sep 12 17:11:23.252852 systemd[1]: cri-containerd-63811f814d19f0304efa11bcf910a62ec4c0c7cd2a4281d82c53257d14445d9e.scope: Deactivated successfully. Sep 12 17:11:23.262409 systemd[1]: cri-containerd-709473a99082188aaec2a4e76fae5d41a1b35954cf4e07fbe6948b0bca52bc1c.scope: Deactivated successfully. 
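The repeated 'Container to stop "…" must be in running or unknown state, current state "CONTAINER_EXITED"' lines above read as informational: when the sandbox is torn down, containers that have already exited are skipped rather than signalled again. The small Go sketch below is a purely illustrative version of that guard; the State type and its names are mine and are not the CRI definitions.

package main

import "fmt"

// State is an illustrative stand-in for a container runtime state, not the CRI enum.
type State int

const (
	Created State = iota
	Running
	Exited
	Unknown
)

// needsStop mirrors the guard behind the log lines above: only containers that
// are (or might still be) running get a stop signal; exited ones are a no-op.
func needsStop(s State) bool {
	return s == Running || s == Unknown
}

func main() {
	for _, s := range []State{Running, Exited, Unknown} {
		fmt.Printf("state=%d needsStop=%v\n", s, needsStop(s))
	}
}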
Sep 12 17:11:23.306396 containerd[1719]: time="2025-09-12T17:11:23.306272311Z" level=info msg="shim disconnected" id=709473a99082188aaec2a4e76fae5d41a1b35954cf4e07fbe6948b0bca52bc1c namespace=k8s.io Sep 12 17:11:23.306396 containerd[1719]: time="2025-09-12T17:11:23.306360871Z" level=warning msg="cleaning up after shim disconnected" id=709473a99082188aaec2a4e76fae5d41a1b35954cf4e07fbe6948b0bca52bc1c namespace=k8s.io Sep 12 17:11:23.306396 containerd[1719]: time="2025-09-12T17:11:23.306369391Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:11:23.310912 containerd[1719]: time="2025-09-12T17:11:23.310604877Z" level=info msg="shim disconnected" id=63811f814d19f0304efa11bcf910a62ec4c0c7cd2a4281d82c53257d14445d9e namespace=k8s.io Sep 12 17:11:23.310912 containerd[1719]: time="2025-09-12T17:11:23.310673678Z" level=warning msg="cleaning up after shim disconnected" id=63811f814d19f0304efa11bcf910a62ec4c0c7cd2a4281d82c53257d14445d9e namespace=k8s.io Sep 12 17:11:23.310912 containerd[1719]: time="2025-09-12T17:11:23.310681518Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:11:23.329186 containerd[1719]: time="2025-09-12T17:11:23.328976226Z" level=info msg="TearDown network for sandbox \"709473a99082188aaec2a4e76fae5d41a1b35954cf4e07fbe6948b0bca52bc1c\" successfully" Sep 12 17:11:23.329186 containerd[1719]: time="2025-09-12T17:11:23.329118826Z" level=info msg="StopPodSandbox for \"709473a99082188aaec2a4e76fae5d41a1b35954cf4e07fbe6948b0bca52bc1c\" returns successfully" Sep 12 17:11:23.330275 containerd[1719]: time="2025-09-12T17:11:23.330063228Z" level=info msg="TearDown network for sandbox \"63811f814d19f0304efa11bcf910a62ec4c0c7cd2a4281d82c53257d14445d9e\" successfully" Sep 12 17:11:23.330275 containerd[1719]: time="2025-09-12T17:11:23.330089668Z" level=info msg="StopPodSandbox for \"63811f814d19f0304efa11bcf910a62ec4c0c7cd2a4281d82c53257d14445d9e\" returns successfully" Sep 12 17:11:23.370668 kubelet[3304]: I0912 17:11:23.369751 3304 scope.go:117] "RemoveContainer" containerID="89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9" Sep 12 17:11:23.373552 containerd[1719]: time="2025-09-12T17:11:23.373420536Z" level=info msg="RemoveContainer for \"89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9\"" Sep 12 17:11:23.388141 containerd[1719]: time="2025-09-12T17:11:23.388093999Z" level=info msg="RemoveContainer for \"89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9\" returns successfully" Sep 12 17:11:23.388498 kubelet[3304]: I0912 17:11:23.388395 3304 scope.go:117] "RemoveContainer" containerID="89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9" Sep 12 17:11:23.388827 containerd[1719]: time="2025-09-12T17:11:23.388788000Z" level=error msg="ContainerStatus for \"89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9\": not found" Sep 12 17:11:23.388974 kubelet[3304]: E0912 17:11:23.388934 3304 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9\": not found" containerID="89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9" Sep 12 17:11:23.389058 kubelet[3304]: I0912 17:11:23.388973 3304 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9"} err="failed to get container status \"89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"89b7fead2f1bd8c8e1d94595f7d5610ed6742c84624f633d92048cff2ceb85a9\": not found" Sep 12 17:11:23.389058 kubelet[3304]: I0912 17:11:23.389055 3304 scope.go:117] "RemoveContainer" containerID="fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167" Sep 12 17:11:23.390218 containerd[1719]: time="2025-09-12T17:11:23.390185682Z" level=info msg="RemoveContainer for \"fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167\"" Sep 12 17:11:23.399046 containerd[1719]: time="2025-09-12T17:11:23.399011856Z" level=info msg="RemoveContainer for \"fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167\" returns successfully" Sep 12 17:11:23.399365 kubelet[3304]: I0912 17:11:23.399305 3304 scope.go:117] "RemoveContainer" containerID="ffc4fb16011f2a269b5a066d43116cf3c5f4e2df400284a1f310813912f8ba5f" Sep 12 17:11:23.400662 containerd[1719]: time="2025-09-12T17:11:23.400636178Z" level=info msg="RemoveContainer for \"ffc4fb16011f2a269b5a066d43116cf3c5f4e2df400284a1f310813912f8ba5f\"" Sep 12 17:11:23.410624 containerd[1719]: time="2025-09-12T17:11:23.410555994Z" level=info msg="RemoveContainer for \"ffc4fb16011f2a269b5a066d43116cf3c5f4e2df400284a1f310813912f8ba5f\" returns successfully" Sep 12 17:11:23.411023 kubelet[3304]: I0912 17:11:23.410987 3304 scope.go:117] "RemoveContainer" containerID="d7826fe4b6a2c8ea2d0ff7d5ab5ff07d824be4812aeb2d5b445d31053d8fbfe6" Sep 12 17:11:23.412729 containerd[1719]: time="2025-09-12T17:11:23.412371357Z" level=info msg="RemoveContainer for \"d7826fe4b6a2c8ea2d0ff7d5ab5ff07d824be4812aeb2d5b445d31053d8fbfe6\"" Sep 12 17:11:23.424113 containerd[1719]: time="2025-09-12T17:11:23.424028335Z" level=info msg="RemoveContainer for \"d7826fe4b6a2c8ea2d0ff7d5ab5ff07d824be4812aeb2d5b445d31053d8fbfe6\" returns successfully" Sep 12 17:11:23.424407 kubelet[3304]: I0912 17:11:23.424378 3304 scope.go:117] "RemoveContainer" containerID="783f33ab14339da43cb5a310909f4730a04b38d5e5bfbc3739c4c17f0bb95346" Sep 12 17:11:23.425776 containerd[1719]: time="2025-09-12T17:11:23.425735498Z" level=info msg="RemoveContainer for \"783f33ab14339da43cb5a310909f4730a04b38d5e5bfbc3739c4c17f0bb95346\"" Sep 12 17:11:23.434197 containerd[1719]: time="2025-09-12T17:11:23.434156431Z" level=info msg="RemoveContainer for \"783f33ab14339da43cb5a310909f4730a04b38d5e5bfbc3739c4c17f0bb95346\" returns successfully" Sep 12 17:11:23.434461 kubelet[3304]: I0912 17:11:23.434432 3304 scope.go:117] "RemoveContainer" containerID="49847f0324dbfbac99cd14b3616d21e318c9e27b7f8a0815d2d9e57d38808bb1" Sep 12 17:11:23.435861 containerd[1719]: time="2025-09-12T17:11:23.435566873Z" level=info msg="RemoveContainer for \"49847f0324dbfbac99cd14b3616d21e318c9e27b7f8a0815d2d9e57d38808bb1\"" Sep 12 17:11:23.440302 kubelet[3304]: I0912 17:11:23.440267 3304 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-cilium-run\") pod \"3d8069e0-24d9-439b-8e7d-2007a826be4d\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " Sep 12 17:11:23.440302 kubelet[3304]: I0912 17:11:23.440305 3304 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-cilium-cgroup\") pod \"3d8069e0-24d9-439b-8e7d-2007a826be4d\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " Sep 12 17:11:23.440437 kubelet[3304]: I0912 17:11:23.440328 3304 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-cni-path\") pod \"3d8069e0-24d9-439b-8e7d-2007a826be4d\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " Sep 12 17:11:23.440437 kubelet[3304]: I0912 17:11:23.440344 3304 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-lib-modules\") pod \"3d8069e0-24d9-439b-8e7d-2007a826be4d\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " Sep 12 17:11:23.440437 kubelet[3304]: I0912 17:11:23.440366 3304 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9glpn\" (UniqueName: \"kubernetes.io/projected/92031893-e486-48ff-a4d1-859e81208606-kube-api-access-9glpn\") pod \"92031893-e486-48ff-a4d1-859e81208606\" (UID: \"92031893-e486-48ff-a4d1-859e81208606\") " Sep 12 17:11:23.440437 kubelet[3304]: I0912 17:11:23.440387 3304 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg65x\" (UniqueName: \"kubernetes.io/projected/3d8069e0-24d9-439b-8e7d-2007a826be4d-kube-api-access-pg65x\") pod \"3d8069e0-24d9-439b-8e7d-2007a826be4d\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " Sep 12 17:11:23.440437 kubelet[3304]: I0912 17:11:23.440405 3304 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92031893-e486-48ff-a4d1-859e81208606-cilium-config-path\") pod \"92031893-e486-48ff-a4d1-859e81208606\" (UID: \"92031893-e486-48ff-a4d1-859e81208606\") " Sep 12 17:11:23.440437 kubelet[3304]: I0912 17:11:23.440425 3304 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-xtables-lock\") pod \"3d8069e0-24d9-439b-8e7d-2007a826be4d\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " Sep 12 17:11:23.440574 kubelet[3304]: I0912 17:11:23.440447 3304 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d8069e0-24d9-439b-8e7d-2007a826be4d-clustermesh-secrets\") pod \"3d8069e0-24d9-439b-8e7d-2007a826be4d\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " Sep 12 17:11:23.440574 kubelet[3304]: I0912 17:11:23.440465 3304 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d8069e0-24d9-439b-8e7d-2007a826be4d-cilium-config-path\") pod \"3d8069e0-24d9-439b-8e7d-2007a826be4d\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " Sep 12 17:11:23.440574 kubelet[3304]: I0912 17:11:23.440479 3304 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-hostproc\") pod \"3d8069e0-24d9-439b-8e7d-2007a826be4d\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " Sep 12 17:11:23.440574 kubelet[3304]: I0912 17:11:23.440494 3304 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-etc-cni-netd\") pod \"3d8069e0-24d9-439b-8e7d-2007a826be4d\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " Sep 12 17:11:23.440574 kubelet[3304]: I0912 17:11:23.440511 3304 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-bpf-maps\") pod \"3d8069e0-24d9-439b-8e7d-2007a826be4d\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " Sep 12 17:11:23.440574 kubelet[3304]: I0912 17:11:23.440525 3304 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-host-proc-sys-net\") pod \"3d8069e0-24d9-439b-8e7d-2007a826be4d\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " Sep 12 17:11:23.440851 kubelet[3304]: I0912 17:11:23.440541 3304 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-host-proc-sys-kernel\") pod \"3d8069e0-24d9-439b-8e7d-2007a826be4d\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " Sep 12 17:11:23.440851 kubelet[3304]: I0912 17:11:23.440557 3304 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d8069e0-24d9-439b-8e7d-2007a826be4d-hubble-tls\") pod \"3d8069e0-24d9-439b-8e7d-2007a826be4d\" (UID: \"3d8069e0-24d9-439b-8e7d-2007a826be4d\") " Sep 12 17:11:23.442641 kubelet[3304]: I0912 17:11:23.440997 3304 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3d8069e0-24d9-439b-8e7d-2007a826be4d" (UID: "3d8069e0-24d9-439b-8e7d-2007a826be4d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:11:23.442641 kubelet[3304]: I0912 17:11:23.441048 3304 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3d8069e0-24d9-439b-8e7d-2007a826be4d" (UID: "3d8069e0-24d9-439b-8e7d-2007a826be4d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:11:23.442641 kubelet[3304]: I0912 17:11:23.441081 3304 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3d8069e0-24d9-439b-8e7d-2007a826be4d" (UID: "3d8069e0-24d9-439b-8e7d-2007a826be4d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:11:23.442641 kubelet[3304]: I0912 17:11:23.441095 3304 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-cni-path" (OuterVolumeSpecName: "cni-path") pod "3d8069e0-24d9-439b-8e7d-2007a826be4d" (UID: "3d8069e0-24d9-439b-8e7d-2007a826be4d"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:11:23.442641 kubelet[3304]: I0912 17:11:23.441123 3304 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3d8069e0-24d9-439b-8e7d-2007a826be4d" (UID: "3d8069e0-24d9-439b-8e7d-2007a826be4d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:11:23.443901 containerd[1719]: time="2025-09-12T17:11:23.443866006Z" level=info msg="RemoveContainer for \"49847f0324dbfbac99cd14b3616d21e318c9e27b7f8a0815d2d9e57d38808bb1\" returns successfully" Sep 12 17:11:23.447928 kubelet[3304]: I0912 17:11:23.447888 3304 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d8069e0-24d9-439b-8e7d-2007a826be4d-kube-api-access-pg65x" (OuterVolumeSpecName: "kube-api-access-pg65x") pod "3d8069e0-24d9-439b-8e7d-2007a826be4d" (UID: "3d8069e0-24d9-439b-8e7d-2007a826be4d"). InnerVolumeSpecName "kube-api-access-pg65x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:11:23.448177 kubelet[3304]: I0912 17:11:23.448156 3304 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92031893-e486-48ff-a4d1-859e81208606-kube-api-access-9glpn" (OuterVolumeSpecName: "kube-api-access-9glpn") pod "92031893-e486-48ff-a4d1-859e81208606" (UID: "92031893-e486-48ff-a4d1-859e81208606"). InnerVolumeSpecName "kube-api-access-9glpn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:11:23.448288 kubelet[3304]: I0912 17:11:23.448257 3304 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3d8069e0-24d9-439b-8e7d-2007a826be4d" (UID: "3d8069e0-24d9-439b-8e7d-2007a826be4d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:11:23.448336 kubelet[3304]: I0912 17:11:23.448223 3304 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-hostproc" (OuterVolumeSpecName: "hostproc") pod "3d8069e0-24d9-439b-8e7d-2007a826be4d" (UID: "3d8069e0-24d9-439b-8e7d-2007a826be4d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:11:23.448336 kubelet[3304]: I0912 17:11:23.448245 3304 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3d8069e0-24d9-439b-8e7d-2007a826be4d" (UID: "3d8069e0-24d9-439b-8e7d-2007a826be4d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:11:23.448385 kubelet[3304]: I0912 17:11:23.448342 3304 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3d8069e0-24d9-439b-8e7d-2007a826be4d" (UID: "3d8069e0-24d9-439b-8e7d-2007a826be4d"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:11:23.448385 kubelet[3304]: I0912 17:11:23.448358 3304 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3d8069e0-24d9-439b-8e7d-2007a826be4d" (UID: "3d8069e0-24d9-439b-8e7d-2007a826be4d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:11:23.448588 kubelet[3304]: I0912 17:11:23.448568 3304 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d8069e0-24d9-439b-8e7d-2007a826be4d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3d8069e0-24d9-439b-8e7d-2007a826be4d" (UID: "3d8069e0-24d9-439b-8e7d-2007a826be4d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:11:23.448850 kubelet[3304]: I0912 17:11:23.448815 3304 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92031893-e486-48ff-a4d1-859e81208606-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "92031893-e486-48ff-a4d1-859e81208606" (UID: "92031893-e486-48ff-a4d1-859e81208606"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:11:23.449144 kubelet[3304]: I0912 17:11:23.448827 3304 scope.go:117] "RemoveContainer" containerID="fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167" Sep 12 17:11:23.449534 kubelet[3304]: I0912 17:11:23.449340 3304 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d8069e0-24d9-439b-8e7d-2007a826be4d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3d8069e0-24d9-439b-8e7d-2007a826be4d" (UID: "3d8069e0-24d9-439b-8e7d-2007a826be4d"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:11:23.449998 containerd[1719]: time="2025-09-12T17:11:23.449939016Z" level=error msg="ContainerStatus for \"fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167\": not found" Sep 12 17:11:23.450343 kubelet[3304]: E0912 17:11:23.450123 3304 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167\": not found" containerID="fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167" Sep 12 17:11:23.450343 kubelet[3304]: I0912 17:11:23.450152 3304 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167"} err="failed to get container status \"fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa8496016a152cb6a4fdb92c710f5e689de5bd53d92f825710edb30b9fb27167\": not found" Sep 12 17:11:23.450343 kubelet[3304]: I0912 17:11:23.450173 3304 scope.go:117] "RemoveContainer" containerID="ffc4fb16011f2a269b5a066d43116cf3c5f4e2df400284a1f310813912f8ba5f" Sep 12 17:11:23.450422 containerd[1719]: time="2025-09-12T17:11:23.450362496Z" level=error msg="ContainerStatus for \"ffc4fb16011f2a269b5a066d43116cf3c5f4e2df400284a1f310813912f8ba5f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ffc4fb16011f2a269b5a066d43116cf3c5f4e2df400284a1f310813912f8ba5f\": not found" Sep 12 17:11:23.450627 kubelet[3304]: E0912 17:11:23.450526 3304 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ffc4fb16011f2a269b5a066d43116cf3c5f4e2df400284a1f310813912f8ba5f\": not found" containerID="ffc4fb16011f2a269b5a066d43116cf3c5f4e2df400284a1f310813912f8ba5f" Sep 12 17:11:23.450627 kubelet[3304]: I0912 17:11:23.450565 3304 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ffc4fb16011f2a269b5a066d43116cf3c5f4e2df400284a1f310813912f8ba5f"} err="failed to get container status \"ffc4fb16011f2a269b5a066d43116cf3c5f4e2df400284a1f310813912f8ba5f\": rpc error: code = NotFound desc = an error occurred when try to find container \"ffc4fb16011f2a269b5a066d43116cf3c5f4e2df400284a1f310813912f8ba5f\": not found" Sep 12 17:11:23.450627 kubelet[3304]: I0912 17:11:23.450582 3304 scope.go:117] "RemoveContainer" containerID="d7826fe4b6a2c8ea2d0ff7d5ab5ff07d824be4812aeb2d5b445d31053d8fbfe6" Sep 12 17:11:23.451191 containerd[1719]: time="2025-09-12T17:11:23.450967257Z" level=error msg="ContainerStatus for \"d7826fe4b6a2c8ea2d0ff7d5ab5ff07d824be4812aeb2d5b445d31053d8fbfe6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7826fe4b6a2c8ea2d0ff7d5ab5ff07d824be4812aeb2d5b445d31053d8fbfe6\": not found" Sep 12 17:11:23.451248 kubelet[3304]: E0912 17:11:23.451105 3304 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7826fe4b6a2c8ea2d0ff7d5ab5ff07d824be4812aeb2d5b445d31053d8fbfe6\": not found" 
containerID="d7826fe4b6a2c8ea2d0ff7d5ab5ff07d824be4812aeb2d5b445d31053d8fbfe6" Sep 12 17:11:23.451397 kubelet[3304]: I0912 17:11:23.451320 3304 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d7826fe4b6a2c8ea2d0ff7d5ab5ff07d824be4812aeb2d5b445d31053d8fbfe6"} err="failed to get container status \"d7826fe4b6a2c8ea2d0ff7d5ab5ff07d824be4812aeb2d5b445d31053d8fbfe6\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7826fe4b6a2c8ea2d0ff7d5ab5ff07d824be4812aeb2d5b445d31053d8fbfe6\": not found" Sep 12 17:11:23.451397 kubelet[3304]: I0912 17:11:23.451347 3304 scope.go:117] "RemoveContainer" containerID="783f33ab14339da43cb5a310909f4730a04b38d5e5bfbc3739c4c17f0bb95346" Sep 12 17:11:23.451758 containerd[1719]: time="2025-09-12T17:11:23.451725778Z" level=error msg="ContainerStatus for \"783f33ab14339da43cb5a310909f4730a04b38d5e5bfbc3739c4c17f0bb95346\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"783f33ab14339da43cb5a310909f4730a04b38d5e5bfbc3739c4c17f0bb95346\": not found" Sep 12 17:11:23.452012 kubelet[3304]: E0912 17:11:23.451911 3304 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"783f33ab14339da43cb5a310909f4730a04b38d5e5bfbc3739c4c17f0bb95346\": not found" containerID="783f33ab14339da43cb5a310909f4730a04b38d5e5bfbc3739c4c17f0bb95346" Sep 12 17:11:23.452012 kubelet[3304]: I0912 17:11:23.451936 3304 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"783f33ab14339da43cb5a310909f4730a04b38d5e5bfbc3739c4c17f0bb95346"} err="failed to get container status \"783f33ab14339da43cb5a310909f4730a04b38d5e5bfbc3739c4c17f0bb95346\": rpc error: code = NotFound desc = an error occurred when try to find container \"783f33ab14339da43cb5a310909f4730a04b38d5e5bfbc3739c4c17f0bb95346\": not found" Sep 12 17:11:23.452012 kubelet[3304]: I0912 17:11:23.451952 3304 scope.go:117] "RemoveContainer" containerID="49847f0324dbfbac99cd14b3616d21e318c9e27b7f8a0815d2d9e57d38808bb1" Sep 12 17:11:23.452228 containerd[1719]: time="2025-09-12T17:11:23.452159339Z" level=error msg="ContainerStatus for \"49847f0324dbfbac99cd14b3616d21e318c9e27b7f8a0815d2d9e57d38808bb1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"49847f0324dbfbac99cd14b3616d21e318c9e27b7f8a0815d2d9e57d38808bb1\": not found" Sep 12 17:11:23.452365 kubelet[3304]: E0912 17:11:23.452347 3304 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"49847f0324dbfbac99cd14b3616d21e318c9e27b7f8a0815d2d9e57d38808bb1\": not found" containerID="49847f0324dbfbac99cd14b3616d21e318c9e27b7f8a0815d2d9e57d38808bb1" Sep 12 17:11:23.452365 kubelet[3304]: I0912 17:11:23.452389 3304 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"49847f0324dbfbac99cd14b3616d21e318c9e27b7f8a0815d2d9e57d38808bb1"} err="failed to get container status \"49847f0324dbfbac99cd14b3616d21e318c9e27b7f8a0815d2d9e57d38808bb1\": rpc error: code = NotFound desc = an error occurred when try to find container \"49847f0324dbfbac99cd14b3616d21e318c9e27b7f8a0815d2d9e57d38808bb1\": not found" Sep 12 17:11:23.452477 kubelet[3304]: I0912 17:11:23.452396 3304 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3d8069e0-24d9-439b-8e7d-2007a826be4d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3d8069e0-24d9-439b-8e7d-2007a826be4d" (UID: "3d8069e0-24d9-439b-8e7d-2007a826be4d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:11:23.540869 kubelet[3304]: I0912 17:11:23.540822 3304 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d8069e0-24d9-439b-8e7d-2007a826be4d-clustermesh-secrets\") on node \"ci-4230.2.3-a-bc327f6988\" DevicePath \"\"" Sep 12 17:11:23.540869 kubelet[3304]: I0912 17:11:23.540864 3304 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-etc-cni-netd\") on node \"ci-4230.2.3-a-bc327f6988\" DevicePath \"\"" Sep 12 17:11:23.540869 kubelet[3304]: I0912 17:11:23.540875 3304 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d8069e0-24d9-439b-8e7d-2007a826be4d-cilium-config-path\") on node \"ci-4230.2.3-a-bc327f6988\" DevicePath \"\"" Sep 12 17:11:23.541056 kubelet[3304]: I0912 17:11:23.540885 3304 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-hostproc\") on node \"ci-4230.2.3-a-bc327f6988\" DevicePath \"\"" Sep 12 17:11:23.541056 kubelet[3304]: I0912 17:11:23.540896 3304 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-host-proc-sys-net\") on node \"ci-4230.2.3-a-bc327f6988\" DevicePath \"\"" Sep 12 17:11:23.541056 kubelet[3304]: I0912 17:11:23.540907 3304 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-host-proc-sys-kernel\") on node \"ci-4230.2.3-a-bc327f6988\" DevicePath \"\"" Sep 12 17:11:23.541056 kubelet[3304]: I0912 17:11:23.540916 3304 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d8069e0-24d9-439b-8e7d-2007a826be4d-hubble-tls\") on node \"ci-4230.2.3-a-bc327f6988\" DevicePath \"\"" Sep 12 17:11:23.541056 kubelet[3304]: I0912 17:11:23.540924 3304 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-bpf-maps\") on node \"ci-4230.2.3-a-bc327f6988\" DevicePath \"\"" Sep 12 17:11:23.541056 kubelet[3304]: I0912 17:11:23.540931 3304 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-cilium-run\") on node \"ci-4230.2.3-a-bc327f6988\" DevicePath \"\"" Sep 12 17:11:23.541056 kubelet[3304]: I0912 17:11:23.540939 3304 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-cilium-cgroup\") on node \"ci-4230.2.3-a-bc327f6988\" DevicePath \"\"" Sep 12 17:11:23.541056 kubelet[3304]: I0912 17:11:23.540946 3304 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-cni-path\") on node \"ci-4230.2.3-a-bc327f6988\" DevicePath \"\"" Sep 12 17:11:23.541225 kubelet[3304]: I0912 17:11:23.540954 3304 reconciler_common.go:299] "Volume detached for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-lib-modules\") on node \"ci-4230.2.3-a-bc327f6988\" DevicePath \"\"" Sep 12 17:11:23.541225 kubelet[3304]: I0912 17:11:23.540962 3304 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9glpn\" (UniqueName: \"kubernetes.io/projected/92031893-e486-48ff-a4d1-859e81208606-kube-api-access-9glpn\") on node \"ci-4230.2.3-a-bc327f6988\" DevicePath \"\"" Sep 12 17:11:23.541225 kubelet[3304]: I0912 17:11:23.540978 3304 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pg65x\" (UniqueName: \"kubernetes.io/projected/3d8069e0-24d9-439b-8e7d-2007a826be4d-kube-api-access-pg65x\") on node \"ci-4230.2.3-a-bc327f6988\" DevicePath \"\"" Sep 12 17:11:23.541225 kubelet[3304]: I0912 17:11:23.540988 3304 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92031893-e486-48ff-a4d1-859e81208606-cilium-config-path\") on node \"ci-4230.2.3-a-bc327f6988\" DevicePath \"\"" Sep 12 17:11:23.541225 kubelet[3304]: I0912 17:11:23.540997 3304 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d8069e0-24d9-439b-8e7d-2007a826be4d-xtables-lock\") on node \"ci-4230.2.3-a-bc327f6988\" DevicePath \"\"" Sep 12 17:11:23.674433 systemd[1]: Removed slice kubepods-besteffort-pod92031893_e486_48ff_a4d1_859e81208606.slice - libcontainer container kubepods-besteffort-pod92031893_e486_48ff_a4d1_859e81208606.slice. Sep 12 17:11:23.684962 systemd[1]: Removed slice kubepods-burstable-pod3d8069e0_24d9_439b_8e7d_2007a826be4d.slice - libcontainer container kubepods-burstable-pod3d8069e0_24d9_439b_8e7d_2007a826be4d.slice. Sep 12 17:11:23.685170 systemd[1]: kubepods-burstable-pod3d8069e0_24d9_439b_8e7d_2007a826be4d.slice: Consumed 7.195s CPU time, 127.9M memory peak, 128K read from disk, 12.9M written to disk. Sep 12 17:11:23.839205 kubelet[3304]: I0912 17:11:23.839163 3304 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d8069e0-24d9-439b-8e7d-2007a826be4d" path="/var/lib/kubelet/pods/3d8069e0-24d9-439b-8e7d-2007a826be4d/volumes" Sep 12 17:11:23.839738 kubelet[3304]: I0912 17:11:23.839715 3304 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92031893-e486-48ff-a4d1-859e81208606" path="/var/lib/kubelet/pods/92031893-e486-48ff-a4d1-859e81208606/volumes" Sep 12 17:11:24.085689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63811f814d19f0304efa11bcf910a62ec4c0c7cd2a4281d82c53257d14445d9e-rootfs.mount: Deactivated successfully. Sep 12 17:11:24.085992 systemd[1]: var-lib-kubelet-pods-92031893\x2de486\x2d48ff\x2da4d1\x2d859e81208606-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9glpn.mount: Deactivated successfully. Sep 12 17:11:24.086122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-709473a99082188aaec2a4e76fae5d41a1b35954cf4e07fbe6948b0bca52bc1c-rootfs.mount: Deactivated successfully. Sep 12 17:11:24.086260 systemd[1]: var-lib-kubelet-pods-3d8069e0\x2d24d9\x2d439b\x2d8e7d\x2d2007a826be4d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpg65x.mount: Deactivated successfully. Sep 12 17:11:24.086388 systemd[1]: var-lib-kubelet-pods-3d8069e0\x2d24d9\x2d439b\x2d8e7d\x2d2007a826be4d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 12 17:11:24.086507 systemd[1]: var-lib-kubelet-pods-3d8069e0\x2d24d9\x2d439b\x2d8e7d\x2d2007a826be4d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 17:11:25.057433 sshd[4909]: Connection closed by 10.200.16.10 port 59740 Sep 12 17:11:25.057929 sshd-session[4907]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:25.061640 systemd[1]: sshd@22-10.200.20.12:22-10.200.16.10:59740.service: Deactivated successfully. Sep 12 17:11:25.065120 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 17:11:25.065541 systemd[1]: session-25.scope: Consumed 1.833s CPU time, 23.6M memory peak. Sep 12 17:11:25.066156 systemd-logind[1697]: Session 25 logged out. Waiting for processes to exit. Sep 12 17:11:25.067348 systemd-logind[1697]: Removed session 25. Sep 12 17:11:25.139991 systemd[1]: Started sshd@23-10.200.20.12:22-10.200.16.10:59752.service - OpenSSH per-connection server daemon (10.200.16.10:59752). Sep 12 17:11:25.553837 sshd[5069]: Accepted publickey for core from 10.200.16.10 port 59752 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:11:25.555236 sshd-session[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:25.559661 systemd-logind[1697]: New session 26 of user core. Sep 12 17:11:25.566900 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 17:11:26.950551 kubelet[3304]: I0912 17:11:26.950486 3304 memory_manager.go:355] "RemoveStaleState removing state" podUID="3d8069e0-24d9-439b-8e7d-2007a826be4d" containerName="cilium-agent" Sep 12 17:11:26.950551 kubelet[3304]: I0912 17:11:26.950523 3304 memory_manager.go:355] "RemoveStaleState removing state" podUID="92031893-e486-48ff-a4d1-859e81208606" containerName="cilium-operator" Sep 12 17:11:26.962349 systemd[1]: Created slice kubepods-burstable-pode5da6e3c_565a_46a7_b965_5f1e88941694.slice - libcontainer container kubepods-burstable-pode5da6e3c_565a_46a7_b965_5f1e88941694.slice. 
Sep 12 17:11:26.964179 kubelet[3304]: W0912 17:11:26.964137 3304 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4230.2.3-a-bc327f6988" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.3-a-bc327f6988' and this object Sep 12 17:11:26.964323 kubelet[3304]: E0912 17:11:26.964187 3304 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4230.2.3-a-bc327f6988\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.3-a-bc327f6988' and this object" logger="UnhandledError" Sep 12 17:11:26.964323 kubelet[3304]: I0912 17:11:26.964235 3304 status_manager.go:890] "Failed to get status for pod" podUID="e5da6e3c-565a-46a7-b965-5f1e88941694" pod="kube-system/cilium-57q8w" err="pods \"cilium-57q8w\" is forbidden: User \"system:node:ci-4230.2.3-a-bc327f6988\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.3-a-bc327f6988' and this object" Sep 12 17:11:26.964383 kubelet[3304]: W0912 17:11:26.964331 3304 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4230.2.3-a-bc327f6988" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.3-a-bc327f6988' and this object Sep 12 17:11:26.964383 kubelet[3304]: E0912 17:11:26.964347 3304 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4230.2.3-a-bc327f6988\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.3-a-bc327f6988' and this object" logger="UnhandledError" Sep 12 17:11:26.964431 kubelet[3304]: W0912 17:11:26.964386 3304 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230.2.3-a-bc327f6988" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.3-a-bc327f6988' and this object Sep 12 17:11:26.964431 kubelet[3304]: E0912 17:11:26.964396 3304 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4230.2.3-a-bc327f6988\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.3-a-bc327f6988' and this object" logger="UnhandledError" Sep 12 17:11:26.964476 kubelet[3304]: W0912 17:11:26.964434 3304 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4230.2.3-a-bc327f6988" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.3-a-bc327f6988' and this object Sep 12 17:11:26.964476 kubelet[3304]: E0912 17:11:26.964444 
3304 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4230.2.3-a-bc327f6988\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.3-a-bc327f6988' and this object" logger="UnhandledError" Sep 12 17:11:26.974278 sshd[5072]: Connection closed by 10.200.16.10 port 59752 Sep 12 17:11:26.974878 sshd-session[5069]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:26.980907 systemd[1]: sshd@23-10.200.20.12:22-10.200.16.10:59752.service: Deactivated successfully. Sep 12 17:11:26.982633 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 17:11:26.984751 systemd[1]: session-26.scope: Consumed 1.034s CPU time, 23.6M memory peak. Sep 12 17:11:26.989488 systemd-logind[1697]: Session 26 logged out. Waiting for processes to exit. Sep 12 17:11:26.992693 systemd-logind[1697]: Removed session 26. Sep 12 17:11:27.060876 kubelet[3304]: I0912 17:11:27.060829 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5da6e3c-565a-46a7-b965-5f1e88941694-etc-cni-netd\") pod \"cilium-57q8w\" (UID: \"e5da6e3c-565a-46a7-b965-5f1e88941694\") " pod="kube-system/cilium-57q8w" Sep 12 17:11:27.060876 kubelet[3304]: I0912 17:11:27.060879 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5da6e3c-565a-46a7-b965-5f1e88941694-xtables-lock\") pod \"cilium-57q8w\" (UID: \"e5da6e3c-565a-46a7-b965-5f1e88941694\") " pod="kube-system/cilium-57q8w" Sep 12 17:11:27.061086 kubelet[3304]: I0912 17:11:27.060896 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e5da6e3c-565a-46a7-b965-5f1e88941694-clustermesh-secrets\") pod \"cilium-57q8w\" (UID: \"e5da6e3c-565a-46a7-b965-5f1e88941694\") " pod="kube-system/cilium-57q8w" Sep 12 17:11:27.061086 kubelet[3304]: I0912 17:11:27.060917 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5da6e3c-565a-46a7-b965-5f1e88941694-cilium-config-path\") pod \"cilium-57q8w\" (UID: \"e5da6e3c-565a-46a7-b965-5f1e88941694\") " pod="kube-system/cilium-57q8w" Sep 12 17:11:27.061086 kubelet[3304]: I0912 17:11:27.060933 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e5da6e3c-565a-46a7-b965-5f1e88941694-cilium-ipsec-secrets\") pod \"cilium-57q8w\" (UID: \"e5da6e3c-565a-46a7-b965-5f1e88941694\") " pod="kube-system/cilium-57q8w" Sep 12 17:11:27.061086 kubelet[3304]: I0912 17:11:27.060951 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e5da6e3c-565a-46a7-b965-5f1e88941694-host-proc-sys-net\") pod \"cilium-57q8w\" (UID: \"e5da6e3c-565a-46a7-b965-5f1e88941694\") " pod="kube-system/cilium-57q8w" Sep 12 17:11:27.061086 kubelet[3304]: I0912 17:11:27.060968 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/e5da6e3c-565a-46a7-b965-5f1e88941694-bpf-maps\") pod \"cilium-57q8w\" (UID: \"e5da6e3c-565a-46a7-b965-5f1e88941694\") " pod="kube-system/cilium-57q8w" Sep 12 17:11:27.061195 kubelet[3304]: I0912 17:11:27.060982 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e5da6e3c-565a-46a7-b965-5f1e88941694-cilium-cgroup\") pod \"cilium-57q8w\" (UID: \"e5da6e3c-565a-46a7-b965-5f1e88941694\") " pod="kube-system/cilium-57q8w" Sep 12 17:11:27.061195 kubelet[3304]: I0912 17:11:27.060998 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e5da6e3c-565a-46a7-b965-5f1e88941694-cni-path\") pod \"cilium-57q8w\" (UID: \"e5da6e3c-565a-46a7-b965-5f1e88941694\") " pod="kube-system/cilium-57q8w" Sep 12 17:11:27.061195 kubelet[3304]: I0912 17:11:27.061011 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5da6e3c-565a-46a7-b965-5f1e88941694-lib-modules\") pod \"cilium-57q8w\" (UID: \"e5da6e3c-565a-46a7-b965-5f1e88941694\") " pod="kube-system/cilium-57q8w" Sep 12 17:11:27.061195 kubelet[3304]: I0912 17:11:27.061028 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e5da6e3c-565a-46a7-b965-5f1e88941694-host-proc-sys-kernel\") pod \"cilium-57q8w\" (UID: \"e5da6e3c-565a-46a7-b965-5f1e88941694\") " pod="kube-system/cilium-57q8w" Sep 12 17:11:27.061195 kubelet[3304]: I0912 17:11:27.061044 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e5da6e3c-565a-46a7-b965-5f1e88941694-hubble-tls\") pod \"cilium-57q8w\" (UID: \"e5da6e3c-565a-46a7-b965-5f1e88941694\") " pod="kube-system/cilium-57q8w" Sep 12 17:11:27.061195 kubelet[3304]: I0912 17:11:27.061060 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv7kk\" (UniqueName: \"kubernetes.io/projected/e5da6e3c-565a-46a7-b965-5f1e88941694-kube-api-access-cv7kk\") pod \"cilium-57q8w\" (UID: \"e5da6e3c-565a-46a7-b965-5f1e88941694\") " pod="kube-system/cilium-57q8w" Sep 12 17:11:27.061334 kubelet[3304]: I0912 17:11:27.061080 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e5da6e3c-565a-46a7-b965-5f1e88941694-cilium-run\") pod \"cilium-57q8w\" (UID: \"e5da6e3c-565a-46a7-b965-5f1e88941694\") " pod="kube-system/cilium-57q8w" Sep 12 17:11:27.061334 kubelet[3304]: I0912 17:11:27.061105 3304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e5da6e3c-565a-46a7-b965-5f1e88941694-hostproc\") pod \"cilium-57q8w\" (UID: \"e5da6e3c-565a-46a7-b965-5f1e88941694\") " pod="kube-system/cilium-57q8w" Sep 12 17:11:27.062013 systemd[1]: Started sshd@24-10.200.20.12:22-10.200.16.10:59762.service - OpenSSH per-connection server daemon (10.200.16.10:59762). 
Sep 12 17:11:27.518815 sshd[5082]: Accepted publickey for core from 10.200.16.10 port 59762 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:11:27.520347 sshd-session[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:27.526568 systemd-logind[1697]: New session 27 of user core. Sep 12 17:11:27.532924 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 12 17:11:27.854136 sshd[5085]: Connection closed by 10.200.16.10 port 59762 Sep 12 17:11:27.853201 sshd-session[5082]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:27.859335 systemd[1]: sshd@24-10.200.20.12:22-10.200.16.10:59762.service: Deactivated successfully. Sep 12 17:11:27.861399 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 17:11:27.863498 systemd-logind[1697]: Session 27 logged out. Waiting for processes to exit. Sep 12 17:11:27.865237 systemd-logind[1697]: Removed session 27. Sep 12 17:11:27.942966 systemd[1]: Started sshd@25-10.200.20.12:22-10.200.16.10:59772.service - OpenSSH per-connection server daemon (10.200.16.10:59772). Sep 12 17:11:27.969819 kubelet[3304]: E0912 17:11:27.969775 3304 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 17:11:28.162191 kubelet[3304]: E0912 17:11:28.162058 3304 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Sep 12 17:11:28.162191 kubelet[3304]: E0912 17:11:28.162151 3304 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5da6e3c-565a-46a7-b965-5f1e88941694-cilium-config-path podName:e5da6e3c-565a-46a7-b965-5f1e88941694 nodeName:}" failed. No retries permitted until 2025-09-12 17:11:28.662127031 +0000 UTC m=+160.966905604 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/e5da6e3c-565a-46a7-b965-5f1e88941694-cilium-config-path") pod "cilium-57q8w" (UID: "e5da6e3c-565a-46a7-b965-5f1e88941694") : failed to sync configmap cache: timed out waiting for the condition Sep 12 17:11:28.162191 kubelet[3304]: E0912 17:11:28.162069 3304 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Sep 12 17:11:28.162394 kubelet[3304]: E0912 17:11:28.162201 3304 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5da6e3c-565a-46a7-b965-5f1e88941694-clustermesh-secrets podName:e5da6e3c-565a-46a7-b965-5f1e88941694 nodeName:}" failed. No retries permitted until 2025-09-12 17:11:28.662194271 +0000 UTC m=+160.966972844 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/e5da6e3c-565a-46a7-b965-5f1e88941694-clustermesh-secrets") pod "cilium-57q8w" (UID: "e5da6e3c-565a-46a7-b965-5f1e88941694") : failed to sync secret cache: timed out waiting for the condition Sep 12 17:11:28.162795 kubelet[3304]: E0912 17:11:28.162080 3304 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Sep 12 17:11:28.162795 kubelet[3304]: E0912 17:11:28.162498 3304 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5da6e3c-565a-46a7-b965-5f1e88941694-cilium-ipsec-secrets podName:e5da6e3c-565a-46a7-b965-5f1e88941694 nodeName:}" failed. No retries permitted until 2025-09-12 17:11:28.662484591 +0000 UTC m=+160.967263164 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/e5da6e3c-565a-46a7-b965-5f1e88941694-cilium-ipsec-secrets") pod "cilium-57q8w" (UID: "e5da6e3c-565a-46a7-b965-5f1e88941694") : failed to sync secret cache: timed out waiting for the condition Sep 12 17:11:28.394767 sshd[5093]: Accepted publickey for core from 10.200.16.10 port 59772 ssh2: RSA SHA256:6rucgeEO15Mnn7f5+BPVsAo8J/LgcJWRTUfPFf/Pl0s Sep 12 17:11:28.401042 sshd-session[5093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:28.409091 systemd-logind[1697]: New session 28 of user core. Sep 12 17:11:28.411471 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 12 17:11:28.769502 containerd[1719]: time="2025-09-12T17:11:28.769058334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-57q8w,Uid:e5da6e3c-565a-46a7-b965-5f1e88941694,Namespace:kube-system,Attempt:0,}" Sep 12 17:11:28.811094 containerd[1719]: time="2025-09-12T17:11:28.811001202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:28.811399 containerd[1719]: time="2025-09-12T17:11:28.811150202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:28.811399 containerd[1719]: time="2025-09-12T17:11:28.811170962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:28.811399 containerd[1719]: time="2025-09-12T17:11:28.811255602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:28.828490 systemd[1]: run-containerd-runc-k8s.io-06b5df57726f07ce32fb99557b3f7c959cbca5c731335aff08d9b8db2f8cff95-runc.iAKkq5.mount: Deactivated successfully. Sep 12 17:11:28.834882 systemd[1]: Started cri-containerd-06b5df57726f07ce32fb99557b3f7c959cbca5c731335aff08d9b8db2f8cff95.scope - libcontainer container 06b5df57726f07ce32fb99557b3f7c959cbca5c731335aff08d9b8db2f8cff95. 
Sep 12 17:11:28.858077 containerd[1719]: time="2025-09-12T17:11:28.858021958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-57q8w,Uid:e5da6e3c-565a-46a7-b965-5f1e88941694,Namespace:kube-system,Attempt:0,} returns sandbox id \"06b5df57726f07ce32fb99557b3f7c959cbca5c731335aff08d9b8db2f8cff95\"" Sep 12 17:11:28.862660 containerd[1719]: time="2025-09-12T17:11:28.862611485Z" level=info msg="CreateContainer within sandbox \"06b5df57726f07ce32fb99557b3f7c959cbca5c731335aff08d9b8db2f8cff95\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:11:28.896073 containerd[1719]: time="2025-09-12T17:11:28.895964379Z" level=info msg="CreateContainer within sandbox \"06b5df57726f07ce32fb99557b3f7c959cbca5c731335aff08d9b8db2f8cff95\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"67198935bfe27c60240648190581a669dc5f7a395f348aa38d05d2dab44ac76a\"" Sep 12 17:11:28.897790 containerd[1719]: time="2025-09-12T17:11:28.896686140Z" level=info msg="StartContainer for \"67198935bfe27c60240648190581a669dc5f7a395f348aa38d05d2dab44ac76a\"" Sep 12 17:11:28.922907 systemd[1]: Started cri-containerd-67198935bfe27c60240648190581a669dc5f7a395f348aa38d05d2dab44ac76a.scope - libcontainer container 67198935bfe27c60240648190581a669dc5f7a395f348aa38d05d2dab44ac76a. Sep 12 17:11:28.964345 containerd[1719]: time="2025-09-12T17:11:28.963635849Z" level=info msg="StartContainer for \"67198935bfe27c60240648190581a669dc5f7a395f348aa38d05d2dab44ac76a\" returns successfully" Sep 12 17:11:28.969650 systemd[1]: cri-containerd-67198935bfe27c60240648190581a669dc5f7a395f348aa38d05d2dab44ac76a.scope: Deactivated successfully. Sep 12 17:11:29.029561 containerd[1719]: time="2025-09-12T17:11:29.029284075Z" level=info msg="shim disconnected" id=67198935bfe27c60240648190581a669dc5f7a395f348aa38d05d2dab44ac76a namespace=k8s.io Sep 12 17:11:29.029561 containerd[1719]: time="2025-09-12T17:11:29.029339115Z" level=warning msg="cleaning up after shim disconnected" id=67198935bfe27c60240648190581a669dc5f7a395f348aa38d05d2dab44ac76a namespace=k8s.io Sep 12 17:11:29.029561 containerd[1719]: time="2025-09-12T17:11:29.029347035Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:11:29.395494 containerd[1719]: time="2025-09-12T17:11:29.395339468Z" level=info msg="CreateContainer within sandbox \"06b5df57726f07ce32fb99557b3f7c959cbca5c731335aff08d9b8db2f8cff95\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:11:29.427864 containerd[1719]: time="2025-09-12T17:11:29.427817881Z" level=info msg="CreateContainer within sandbox \"06b5df57726f07ce32fb99557b3f7c959cbca5c731335aff08d9b8db2f8cff95\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7dd75b9a1df05a3384eca7646b95d408e04a326391b161fc46941d3f953212e0\"" Sep 12 17:11:29.429391 containerd[1719]: time="2025-09-12T17:11:29.428507882Z" level=info msg="StartContainer for \"7dd75b9a1df05a3384eca7646b95d408e04a326391b161fc46941d3f953212e0\"" Sep 12 17:11:29.453918 systemd[1]: Started cri-containerd-7dd75b9a1df05a3384eca7646b95d408e04a326391b161fc46941d3f953212e0.scope - libcontainer container 7dd75b9a1df05a3384eca7646b95d408e04a326391b161fc46941d3f953212e0. 
Sep 12 17:11:29.484999 containerd[1719]: time="2025-09-12T17:11:29.484455652Z" level=info msg="StartContainer for \"7dd75b9a1df05a3384eca7646b95d408e04a326391b161fc46941d3f953212e0\" returns successfully" Sep 12 17:11:29.488949 systemd[1]: cri-containerd-7dd75b9a1df05a3384eca7646b95d408e04a326391b161fc46941d3f953212e0.scope: Deactivated successfully. Sep 12 17:11:29.520907 containerd[1719]: time="2025-09-12T17:11:29.520834751Z" level=info msg="shim disconnected" id=7dd75b9a1df05a3384eca7646b95d408e04a326391b161fc46941d3f953212e0 namespace=k8s.io Sep 12 17:11:29.521293 containerd[1719]: time="2025-09-12T17:11:29.521115712Z" level=warning msg="cleaning up after shim disconnected" id=7dd75b9a1df05a3384eca7646b95d408e04a326391b161fc46941d3f953212e0 namespace=k8s.io Sep 12 17:11:29.521293 containerd[1719]: time="2025-09-12T17:11:29.521131672Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:11:30.399737 containerd[1719]: time="2025-09-12T17:11:30.398937253Z" level=info msg="CreateContainer within sandbox \"06b5df57726f07ce32fb99557b3f7c959cbca5c731335aff08d9b8db2f8cff95\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:11:30.440044 containerd[1719]: time="2025-09-12T17:11:30.439996480Z" level=info msg="CreateContainer within sandbox \"06b5df57726f07ce32fb99557b3f7c959cbca5c731335aff08d9b8db2f8cff95\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"52c3717e147cdcd570b3a33bd65ad3978cf9af63ad40523bc90464b944546952\"" Sep 12 17:11:30.441938 containerd[1719]: time="2025-09-12T17:11:30.441004042Z" level=info msg="StartContainer for \"52c3717e147cdcd570b3a33bd65ad3978cf9af63ad40523bc90464b944546952\"" Sep 12 17:11:30.475937 systemd[1]: Started cri-containerd-52c3717e147cdcd570b3a33bd65ad3978cf9af63ad40523bc90464b944546952.scope - libcontainer container 52c3717e147cdcd570b3a33bd65ad3978cf9af63ad40523bc90464b944546952. Sep 12 17:11:30.513049 systemd[1]: cri-containerd-52c3717e147cdcd570b3a33bd65ad3978cf9af63ad40523bc90464b944546952.scope: Deactivated successfully. Sep 12 17:11:30.523740 containerd[1719]: time="2025-09-12T17:11:30.523377375Z" level=info msg="StartContainer for \"52c3717e147cdcd570b3a33bd65ad3978cf9af63ad40523bc90464b944546952\" returns successfully" Sep 12 17:11:30.557371 containerd[1719]: time="2025-09-12T17:11:30.557311830Z" level=info msg="shim disconnected" id=52c3717e147cdcd570b3a33bd65ad3978cf9af63ad40523bc90464b944546952 namespace=k8s.io Sep 12 17:11:30.557850 containerd[1719]: time="2025-09-12T17:11:30.557639950Z" level=warning msg="cleaning up after shim disconnected" id=52c3717e147cdcd570b3a33bd65ad3978cf9af63ad40523bc90464b944546952 namespace=k8s.io Sep 12 17:11:30.557850 containerd[1719]: time="2025-09-12T17:11:30.557657430Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:11:30.680452 systemd[1]: run-containerd-runc-k8s.io-52c3717e147cdcd570b3a33bd65ad3978cf9af63ad40523bc90464b944546952-runc.qWpaEe.mount: Deactivated successfully. Sep 12 17:11:30.680561 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52c3717e147cdcd570b3a33bd65ad3978cf9af63ad40523bc90464b944546952-rootfs.mount: Deactivated successfully. 
Sep 12 17:11:31.386321 kubelet[3304]: I0912 17:11:31.384750 3304 setters.go:602] "Node became not ready" node="ci-4230.2.3-a-bc327f6988" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T17:11:31Z","lastTransitionTime":"2025-09-12T17:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 12 17:11:31.405327 containerd[1719]: time="2025-09-12T17:11:31.405096763Z" level=info msg="CreateContainer within sandbox \"06b5df57726f07ce32fb99557b3f7c959cbca5c731335aff08d9b8db2f8cff95\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:11:31.433680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2303636592.mount: Deactivated successfully. Sep 12 17:11:31.447003 containerd[1719]: time="2025-09-12T17:11:31.446961951Z" level=info msg="CreateContainer within sandbox \"06b5df57726f07ce32fb99557b3f7c959cbca5c731335aff08d9b8db2f8cff95\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e5d2ff61f325b3574274ca4dbd5167ffd504729dd44a90b5d7ddd62082c9dd88\"" Sep 12 17:11:31.448975 containerd[1719]: time="2025-09-12T17:11:31.447917912Z" level=info msg="StartContainer for \"e5d2ff61f325b3574274ca4dbd5167ffd504729dd44a90b5d7ddd62082c9dd88\"" Sep 12 17:11:31.478902 systemd[1]: Started cri-containerd-e5d2ff61f325b3574274ca4dbd5167ffd504729dd44a90b5d7ddd62082c9dd88.scope - libcontainer container e5d2ff61f325b3574274ca4dbd5167ffd504729dd44a90b5d7ddd62082c9dd88. Sep 12 17:11:31.502865 systemd[1]: cri-containerd-e5d2ff61f325b3574274ca4dbd5167ffd504729dd44a90b5d7ddd62082c9dd88.scope: Deactivated successfully. Sep 12 17:11:31.512403 containerd[1719]: time="2025-09-12T17:11:31.512325457Z" level=info msg="StartContainer for \"e5d2ff61f325b3574274ca4dbd5167ffd504729dd44a90b5d7ddd62082c9dd88\" returns successfully" Sep 12 17:11:31.545186 containerd[1719]: time="2025-09-12T17:11:31.545121150Z" level=info msg="shim disconnected" id=e5d2ff61f325b3574274ca4dbd5167ffd504729dd44a90b5d7ddd62082c9dd88 namespace=k8s.io Sep 12 17:11:31.545186 containerd[1719]: time="2025-09-12T17:11:31.545178550Z" level=warning msg="cleaning up after shim disconnected" id=e5d2ff61f325b3574274ca4dbd5167ffd504729dd44a90b5d7ddd62082c9dd88 namespace=k8s.io Sep 12 17:11:31.545186 containerd[1719]: time="2025-09-12T17:11:31.545187390Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:11:31.680661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5d2ff61f325b3574274ca4dbd5167ffd504729dd44a90b5d7ddd62082c9dd88-rootfs.mount: Deactivated successfully. 
Sep 12 17:11:32.409565 containerd[1719]: time="2025-09-12T17:11:32.409519310Z" level=info msg="CreateContainer within sandbox \"06b5df57726f07ce32fb99557b3f7c959cbca5c731335aff08d9b8db2f8cff95\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:11:32.448265 containerd[1719]: time="2025-09-12T17:11:32.448216852Z" level=info msg="CreateContainer within sandbox \"06b5df57726f07ce32fb99557b3f7c959cbca5c731335aff08d9b8db2f8cff95\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a706a7a3be6fb57688facb0829412786b3efe8dc773642008d0c8a1540db4880\"" Sep 12 17:11:32.450070 containerd[1719]: time="2025-09-12T17:11:32.448891733Z" level=info msg="StartContainer for \"a706a7a3be6fb57688facb0829412786b3efe8dc773642008d0c8a1540db4880\"" Sep 12 17:11:32.475921 systemd[1]: Started cri-containerd-a706a7a3be6fb57688facb0829412786b3efe8dc773642008d0c8a1540db4880.scope - libcontainer container a706a7a3be6fb57688facb0829412786b3efe8dc773642008d0c8a1540db4880. Sep 12 17:11:32.511104 containerd[1719]: time="2025-09-12T17:11:32.510967834Z" level=info msg="StartContainer for \"a706a7a3be6fb57688facb0829412786b3efe8dc773642008d0c8a1540db4880\" returns successfully" Sep 12 17:11:33.274825 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 12 17:11:36.026009 systemd-networkd[1553]: lxc_health: Link UP Sep 12 17:11:36.039240 systemd-networkd[1553]: lxc_health: Gained carrier Sep 12 17:11:36.792642 kubelet[3304]: I0912 17:11:36.792352 3304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-57q8w" podStartSLOduration=10.792334042 podStartE2EDuration="10.792334042s" podCreationTimestamp="2025-09-12 17:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:11:33.430152203 +0000 UTC m=+165.734930776" watchObservedRunningTime="2025-09-12 17:11:36.792334042 +0000 UTC m=+169.097112615" Sep 12 17:11:37.655869 systemd-networkd[1553]: lxc_health: Gained IPv6LL Sep 12 17:11:41.451560 sshd[5097]: Connection closed by 10.200.16.10 port 59772 Sep 12 17:11:41.451518 sshd-session[5093]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:41.456778 systemd[1]: sshd@25-10.200.20.12:22-10.200.16.10:59772.service: Deactivated successfully. Sep 12 17:11:41.459464 systemd[1]: session-28.scope: Deactivated successfully. Sep 12 17:11:41.460290 systemd-logind[1697]: Session 28 logged out. Waiting for processes to exit. Sep 12 17:11:41.462173 systemd-logind[1697]: Removed session 28.