Sep 4 23:44:20.373635 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 4 23:44:20.373658 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Sep 4 22:21:25 -00 2025
Sep 4 23:44:20.373667 kernel: KASLR enabled
Sep 4 23:44:20.373673 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Sep 4 23:44:20.373680 kernel: printk: bootconsole [pl11] enabled
Sep 4 23:44:20.373685 kernel: efi: EFI v2.7 by EDK II
Sep 4 23:44:20.373693 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3e9dc698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Sep 4 23:44:20.373699 kernel: random: crng init done
Sep 4 23:44:20.373705 kernel: secureboot: Secure boot disabled
Sep 4 23:44:20.373711 kernel: ACPI: Early table checksum verification disabled
Sep 4 23:44:20.373716 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Sep 4 23:44:20.373722 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:20.373728 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:20.373736 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 4 23:44:20.373744 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:20.373750 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:20.373757 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:20.373764 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:20.373771 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:20.373777 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:20.373783 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Sep 4 23:44:20.373790 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:20.373796 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Sep 4 23:44:20.373802 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Sep 4 23:44:20.373808 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Sep 4 23:44:20.373815 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Sep 4 23:44:20.373821 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Sep 4 23:44:20.373827 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Sep 4 23:44:20.373835 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Sep 4 23:44:20.373841 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Sep 4 23:44:20.373848 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Sep 4 23:44:20.373854 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Sep 4 23:44:20.373860 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Sep 4 23:44:20.380698 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Sep 4 23:44:20.380713 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Sep 4 23:44:20.380720 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Sep 4 23:44:20.380727 kernel: Zone ranges:
Sep 4 23:44:20.380733 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Sep 4 23:44:20.380740 kernel: DMA32 empty
Sep 4 23:44:20.380746 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Sep 4 23:44:20.380767 kernel: Movable zone start for each node
Sep 4 23:44:20.380774 kernel: Early memory node ranges
Sep 4 23:44:20.380782 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Sep 4 23:44:20.380788 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Sep 4 23:44:20.380795 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Sep 4 23:44:20.380809 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Sep 4 23:44:20.380817 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Sep 4 23:44:20.380823 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Sep 4 23:44:20.380830 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Sep 4 23:44:20.380837 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Sep 4 23:44:20.380846 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Sep 4 23:44:20.380853 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Sep 4 23:44:20.380860 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Sep 4 23:44:20.380876 kernel: psci: probing for conduit method from ACPI.
Sep 4 23:44:20.380883 kernel: psci: PSCIv1.1 detected in firmware.
Sep 4 23:44:20.380893 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 4 23:44:20.380900 kernel: psci: MIGRATE_INFO_TYPE not supported.
Sep 4 23:44:20.380908 kernel: psci: SMC Calling Convention v1.4
Sep 4 23:44:20.380915 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Sep 4 23:44:20.380922 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Sep 4 23:44:20.380929 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 4 23:44:20.380935 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 4 23:44:20.380945 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 4 23:44:20.380953 kernel: Detected PIPT I-cache on CPU0
Sep 4 23:44:20.380960 kernel: CPU features: detected: GIC system register CPU interface
Sep 4 23:44:20.380967 kernel: CPU features: detected: Hardware dirty bit management
Sep 4 23:44:20.380973 kernel: CPU features: detected: Spectre-BHB
Sep 4 23:44:20.380980 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 4 23:44:20.380989 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 4 23:44:20.380996 kernel: CPU features: detected: ARM erratum 1418040
Sep 4 23:44:20.381005 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Sep 4 23:44:20.381012 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 4 23:44:20.381019 kernel: alternatives: applying boot alternatives
Sep 4 23:44:20.381027 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=0304960b24e314f6095f7d8ad705a9bc0a9a4a34f7817da10ea634466a73d86e
Sep 4 23:44:20.381035 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 23:44:20.381042 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 23:44:20.381051 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 23:44:20.381058 kernel: Fallback order for Node 0: 0
Sep 4 23:44:20.381064 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Sep 4 23:44:20.381073 kernel: Policy zone: Normal
Sep 4 23:44:20.381080 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 23:44:20.381086 kernel: software IO TLB: area num 2.
Sep 4 23:44:20.381093 kernel: software IO TLB: mapped [mem 0x0000000036530000-0x000000003a530000] (64MB)
Sep 4 23:44:20.381103 kernel: Memory: 3983528K/4194160K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38400K init, 897K bss, 210632K reserved, 0K cma-reserved)
Sep 4 23:44:20.381110 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 23:44:20.381117 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 23:44:20.381124 kernel: rcu: RCU event tracing is enabled.
Sep 4 23:44:20.381131 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 23:44:20.381138 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 23:44:20.381145 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 23:44:20.381156 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 23:44:20.381163 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 23:44:20.381170 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 4 23:44:20.381176 kernel: GICv3: 960 SPIs implemented
Sep 4 23:44:20.381183 kernel: GICv3: 0 Extended SPIs implemented
Sep 4 23:44:20.381190 kernel: Root IRQ handler: gic_handle_irq
Sep 4 23:44:20.381199 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 4 23:44:20.381206 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Sep 4 23:44:20.381213 kernel: ITS: No ITS available, not enabling LPIs
Sep 4 23:44:20.381220 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 23:44:20.381227 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 23:44:20.381233 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 4 23:44:20.381244 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 4 23:44:20.381252 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 4 23:44:20.381259 kernel: Console: colour dummy device 80x25
Sep 4 23:44:20.381266 kernel: printk: console [tty1] enabled
Sep 4 23:44:20.381273 kernel: ACPI: Core revision 20230628
Sep 4 23:44:20.381280 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 4 23:44:20.381290 kernel: pid_max: default: 32768 minimum: 301
Sep 4 23:44:20.381297 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 4 23:44:20.381304 kernel: landlock: Up and running.
Sep 4 23:44:20.381312 kernel: SELinux: Initializing.
Sep 4 23:44:20.381320 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 23:44:20.381327 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 23:44:20.381334 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:44:20.381343 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:44:20.381351 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Sep 4 23:44:20.381358 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Sep 4 23:44:20.381372 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 4 23:44:20.381380 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 23:44:20.381390 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 23:44:20.381397 kernel: Remapping and enabling EFI services.
Sep 4 23:44:20.381404 kernel: smp: Bringing up secondary CPUs ...
Sep 4 23:44:20.381414 kernel: Detected PIPT I-cache on CPU1
Sep 4 23:44:20.381421 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Sep 4 23:44:20.381431 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 23:44:20.381439 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 4 23:44:20.381446 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 23:44:20.381455 kernel: SMP: Total of 2 processors activated.
Sep 4 23:44:20.381462 kernel: CPU features: detected: 32-bit EL0 Support
Sep 4 23:44:20.381472 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Sep 4 23:44:20.381481 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 4 23:44:20.381489 kernel: CPU features: detected: CRC32 instructions
Sep 4 23:44:20.381496 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 4 23:44:20.381503 kernel: CPU features: detected: LSE atomic instructions
Sep 4 23:44:20.381511 kernel: CPU features: detected: Privileged Access Never
Sep 4 23:44:20.381521 kernel: CPU: All CPU(s) started at EL1
Sep 4 23:44:20.381531 kernel: alternatives: applying system-wide alternatives
Sep 4 23:44:20.381538 kernel: devtmpfs: initialized
Sep 4 23:44:20.381545 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 23:44:20.381553 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 23:44:20.381560 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 23:44:20.381567 kernel: SMBIOS 3.1.0 present.
Sep 4 23:44:20.381575 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 4 23:44:20.381582 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 23:44:20.381593 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 4 23:44:20.381602 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 4 23:44:20.381609 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 4 23:44:20.381617 kernel: audit: initializing netlink subsys (disabled)
Sep 4 23:44:20.381624 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Sep 4 23:44:20.381632 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 23:44:20.381639 kernel: cpuidle: using governor menu
Sep 4 23:44:20.381646 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 4 23:44:20.381654 kernel: ASID allocator initialised with 32768 entries
Sep 4 23:44:20.381663 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 23:44:20.381672 kernel: Serial: AMBA PL011 UART driver
Sep 4 23:44:20.381680 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 4 23:44:20.381687 kernel: Modules: 0 pages in range for non-PLT usage
Sep 4 23:44:20.381694 kernel: Modules: 509248 pages in range for PLT usage
Sep 4 23:44:20.381702 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 23:44:20.381712 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 23:44:20.381719 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 4 23:44:20.381726 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 4 23:44:20.381734 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 23:44:20.381743 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 23:44:20.381750 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 4 23:44:20.381757 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 4 23:44:20.381765 kernel: ACPI: Added _OSI(Module Device)
Sep 4 23:44:20.381774 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 23:44:20.381782 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 23:44:20.381789 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 23:44:20.381796 kernel: ACPI: Interpreter enabled
Sep 4 23:44:20.381803 kernel: ACPI: Using GIC for interrupt routing
Sep 4 23:44:20.381812 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Sep 4 23:44:20.381820 kernel: printk: console [ttyAMA0] enabled
Sep 4 23:44:20.381827 kernel: printk: bootconsole [pl11] disabled
Sep 4 23:44:20.381837 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Sep 4 23:44:20.381845 kernel: iommu: Default domain type: Translated
Sep 4 23:44:20.381852 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 4 23:44:20.381859 kernel: efivars: Registered efivars operations
Sep 4 23:44:20.381874 kernel: vgaarb: loaded
Sep 4 23:44:20.381885 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 4 23:44:20.381894 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 23:44:20.381901 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 23:44:20.381909 kernel: pnp: PnP ACPI init
Sep 4 23:44:20.381916 kernel: pnp: PnP ACPI: found 0 devices
Sep 4 23:44:20.381923 kernel: NET: Registered PF_INET protocol family
Sep 4 23:44:20.381930 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 23:44:20.381938 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 23:44:20.381948 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 23:44:20.381956 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 23:44:20.381965 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 23:44:20.381972 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 23:44:20.381980 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 23:44:20.381987 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 23:44:20.381994 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 23:44:20.382002 kernel: PCI: CLS 0 bytes, default 64
Sep 4 23:44:20.382009 kernel: kvm [1]: HYP mode not available
Sep 4 23:44:20.382016 kernel: Initialise system trusted keyrings
Sep 4 23:44:20.382023 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 23:44:20.382032 kernel: Key type asymmetric registered
Sep 4 23:44:20.382040 kernel: Asymmetric key parser 'x509' registered
Sep 4 23:44:20.382047 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 23:44:20.382054 kernel: io scheduler mq-deadline registered
Sep 4 23:44:20.382061 kernel: io scheduler kyber registered
Sep 4 23:44:20.382069 kernel: io scheduler bfq registered
Sep 4 23:44:20.382076 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 23:44:20.382083 kernel: thunder_xcv, ver 1.0
Sep 4 23:44:20.382090 kernel: thunder_bgx, ver 1.0
Sep 4 23:44:20.382099 kernel: nicpf, ver 1.0
Sep 4 23:44:20.382106 kernel: nicvf, ver 1.0
Sep 4 23:44:20.382264 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 4 23:44:20.382338 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-04T23:44:19 UTC (1757029459)
Sep 4 23:44:20.382349 kernel: efifb: probing for efifb
Sep 4 23:44:20.382356 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 4 23:44:20.382364 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 4 23:44:20.382371 kernel: efifb: scrolling: redraw
Sep 4 23:44:20.382381 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 4 23:44:20.382389 kernel: Console: switching to colour frame buffer device 128x48
Sep 4 23:44:20.382396 kernel: fb0: EFI VGA frame buffer device
Sep 4 23:44:20.382403 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Sep 4 23:44:20.382411 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 4 23:44:20.382418 kernel: No ACPI PMU IRQ for CPU0
Sep 4 23:44:20.382425 kernel: No ACPI PMU IRQ for CPU1
Sep 4 23:44:20.382432 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Sep 4 23:44:20.382440 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 4 23:44:20.382449 kernel: watchdog: Hard watchdog permanently disabled
Sep 4 23:44:20.382456 kernel: NET: Registered PF_INET6 protocol family
Sep 4 23:44:20.382463 kernel: Segment Routing with IPv6
Sep 4 23:44:20.382471 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 23:44:20.382478 kernel: NET: Registered PF_PACKET protocol family
Sep 4 23:44:20.382485 kernel: Key type dns_resolver registered
Sep 4 23:44:20.382493 kernel: registered taskstats version 1
Sep 4 23:44:20.382500 kernel: Loading compiled-in X.509 certificates
Sep 4 23:44:20.382507 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 83306acb9da7bc81cc6aa49a1c622f78672939c0'
Sep 4 23:44:20.382517 kernel: Key type .fscrypt registered
Sep 4 23:44:20.382524 kernel: Key type fscrypt-provisioning registered
Sep 4 23:44:20.382531 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 23:44:20.382539 kernel: ima: Allocated hash algorithm: sha1
Sep 4 23:44:20.382547 kernel: ima: No architecture policies found
Sep 4 23:44:20.382554 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 4 23:44:20.382562 kernel: clk: Disabling unused clocks
Sep 4 23:44:20.382569 kernel: Freeing unused kernel memory: 38400K
Sep 4 23:44:20.382577 kernel: Run /init as init process
Sep 4 23:44:20.382586 kernel: with arguments:
Sep 4 23:44:20.382608 kernel: /init
Sep 4 23:44:20.382615 kernel: with environment:
Sep 4 23:44:20.382622 kernel: HOME=/
Sep 4 23:44:20.382631 kernel: TERM=linux
Sep 4 23:44:20.382638 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 23:44:20.382647 systemd[1]: Successfully made /usr/ read-only.
Sep 4 23:44:20.382657 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 23:44:20.382667 systemd[1]: Detected virtualization microsoft.
Sep 4 23:44:20.382676 systemd[1]: Detected architecture arm64.
Sep 4 23:44:20.382683 systemd[1]: Running in initrd.
Sep 4 23:44:20.382691 systemd[1]: No hostname configured, using default hostname.
Sep 4 23:44:20.382699 systemd[1]: Hostname set to .
Sep 4 23:44:20.382707 systemd[1]: Initializing machine ID from random generator.
Sep 4 23:44:20.382714 systemd[1]: Queued start job for default target initrd.target.
Sep 4 23:44:20.382722 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:44:20.382732 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:44:20.382741 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 23:44:20.382749 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 23:44:20.382757 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 23:44:20.382766 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 23:44:20.382776 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 23:44:20.382785 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 23:44:20.382794 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:44:20.382802 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:44:20.382809 systemd[1]: Reached target paths.target - Path Units.
Sep 4 23:44:20.382817 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 23:44:20.382825 systemd[1]: Reached target swap.target - Swaps.
Sep 4 23:44:20.382833 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 23:44:20.382841 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:44:20.382849 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:44:20.382859 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 23:44:20.382884 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 23:44:20.382892 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:44:20.382900 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:44:20.382908 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:44:20.382916 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 23:44:20.382924 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 23:44:20.382932 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 23:44:20.382942 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 23:44:20.382950 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 23:44:20.382959 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:44:20.382966 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:44:20.382995 systemd-journald[218]: Collecting audit messages is disabled.
Sep 4 23:44:20.383017 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:20.383026 systemd-journald[218]: Journal started
Sep 4 23:44:20.383045 systemd-journald[218]: Runtime Journal (/run/log/journal/631ed69d97c845738ff104f6c5bd928e) is 8M, max 78.5M, 70.5M free.
Sep 4 23:44:20.383639 systemd-modules-load[220]: Inserted module 'overlay'
Sep 4 23:44:20.416074 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 23:44:20.416132 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 23:44:20.416171 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 23:44:20.438990 kernel: Bridge firewalling registered
Sep 4 23:44:20.438252 systemd-modules-load[220]: Inserted module 'br_netfilter'
Sep 4 23:44:20.447890 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:44:20.459980 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 23:44:20.473294 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:44:20.484561 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:20.509124 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:44:20.520035 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:44:20.548002 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 23:44:20.565054 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 23:44:20.583448 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:44:20.593982 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:44:20.607732 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 23:44:20.622387 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:44:20.649128 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 23:44:20.667072 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 23:44:20.689054 dracut-cmdline[252]: dracut-dracut-053
Sep 4 23:44:20.689054 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=0304960b24e314f6095f7d8ad705a9bc0a9a4a34f7817da10ea634466a73d86e
Sep 4 23:44:20.682181 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 23:44:20.696419 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:44:20.768519 systemd-resolved[257]: Positive Trust Anchors:
Sep 4 23:44:20.768541 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 23:44:20.768572 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 23:44:20.770941 systemd-resolved[257]: Defaulting to hostname 'linux'.
Sep 4 23:44:20.771781 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 23:44:20.780925 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:44:20.887894 kernel: SCSI subsystem initialized
Sep 4 23:44:20.895885 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 23:44:20.906897 kernel: iscsi: registered transport (tcp)
Sep 4 23:44:20.924785 kernel: iscsi: registered transport (qla4xxx)
Sep 4 23:44:20.924821 kernel: QLogic iSCSI HBA Driver
Sep 4 23:44:20.963722 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 23:44:20.979141 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 23:44:21.012992 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 23:44:21.013040 kernel: device-mapper: uevent: version 1.0.3
Sep 4 23:44:21.020774 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 23:44:21.071895 kernel: raid6: neonx8 gen() 15744 MB/s
Sep 4 23:44:21.091878 kernel: raid6: neonx4 gen() 15811 MB/s
Sep 4 23:44:21.111875 kernel: raid6: neonx2 gen() 13221 MB/s
Sep 4 23:44:21.132875 kernel: raid6: neonx1 gen() 10521 MB/s
Sep 4 23:44:21.152875 kernel: raid6: int64x8 gen() 6798 MB/s
Sep 4 23:44:21.172874 kernel: raid6: int64x4 gen() 7353 MB/s
Sep 4 23:44:21.193888 kernel: raid6: int64x2 gen() 6114 MB/s
Sep 4 23:44:21.219059 kernel: raid6: int64x1 gen() 5058 MB/s
Sep 4 23:44:21.219082 kernel: raid6: using algorithm neonx4 gen() 15811 MB/s
Sep 4 23:44:21.244170 kernel: raid6: .... xor() 12487 MB/s, rmw enabled
Sep 4 23:44:21.244181 kernel: raid6: using neon recovery algorithm
Sep 4 23:44:21.256781 kernel: xor: measuring software checksum speed
Sep 4 23:44:21.256808 kernel: 8regs : 21516 MB/sec
Sep 4 23:44:21.260419 kernel: 32regs : 21670 MB/sec
Sep 4 23:44:21.265327 kernel: arm64_neon : 27851 MB/sec
Sep 4 23:44:21.269721 kernel: xor: using function: arm64_neon (27851 MB/sec)
Sep 4 23:44:21.320899 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 23:44:21.331323 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 23:44:21.350014 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:44:21.378006 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Sep 4 23:44:21.381704 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:44:21.410150 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 23:44:21.434467 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation
Sep 4 23:44:21.463973 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 23:44:21.486347 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 23:44:21.526064 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:44:21.546055 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 23:44:21.577188 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 23:44:21.590348 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 23:44:21.605778 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:44:21.622364 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 23:44:21.645282 kernel: hv_vmbus: Vmbus version:5.3
Sep 4 23:44:21.645458 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 23:44:21.663462 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 23:44:21.669535 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:44:21.690351 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:44:21.696534 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:44:21.745513 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 4 23:44:21.745538 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 4 23:44:21.745548 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 4 23:44:21.745561 kernel: hv_vmbus: registering driver hv_storvsc
Sep 4 23:44:21.745581 kernel: hv_vmbus: registering driver hid_hyperv
Sep 4 23:44:21.696814 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:21.819175 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 4 23:44:21.819199 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 4 23:44:21.819209 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 4 23:44:21.819369 kernel: scsi host0: storvsc_host_t
Sep 4 23:44:21.819481 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 4 23:44:21.819600 kernel: hv_vmbus: registering driver hv_netvsc
Sep 4 23:44:21.819612 kernel: scsi host1: storvsc_host_t
Sep 4 23:44:21.819966 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Sep 4 23:44:21.830703 kernel: PTP clock support registered
Sep 4 23:44:21.735473 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:21.804881 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:21.834227 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:44:21.848711 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 23:44:21.867422 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:44:21.867511 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:21.937991 kernel: hv_utils: Registering HyperV Utility Driver
Sep 4 23:44:21.938024 kernel: hv_vmbus: registering driver hv_utils
Sep 4 23:44:21.938035 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 4 23:44:21.938207 kernel: hv_utils: Heartbeat IC version 3.0
Sep 4 23:44:21.938218 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 4 23:44:21.938227 kernel: hv_utils: Shutdown IC version 3.2
Sep 4 23:44:21.938236 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 4 23:44:21.938324 kernel: hv_netvsc 002248b6-07bc-0022-48b6-07bc002248b6 eth0: VF slot 1 added
Sep 4 23:44:21.938426 kernel: hv_utils: TimeSync IC version 4.0
Sep 4 23:44:21.882276 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:44:21.685330 kernel: hv_vmbus: registering driver hv_pci
Sep 4 23:44:21.692249 systemd-journald[218]: Time jumped backwards, rotating.
Sep 4 23:44:21.692293 kernel: hv_pci 768aa5ad-0bf9-4d88-9474-8c75cd335f1b: PCI VMBus probing: Using version 0x10004
Sep 4 23:44:21.905893 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:21.714282 kernel: hv_pci 768aa5ad-0bf9-4d88-9474-8c75cd335f1b: PCI host bridge to bus 0bf9:00
Sep 4 23:44:21.677703 systemd-resolved[257]: Clock change detected. Flushing caches.
Sep 4 23:44:21.739801 kernel: pci_bus 0bf9:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Sep 4 23:44:21.739991 kernel: pci_bus 0bf9:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 4 23:44:21.700601 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:21.755030 kernel: pci 0bf9:00:02.0: [15b3:1018] type 00 class 0x020000
Sep 4 23:44:21.953725 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:44:21.973715 kernel: pci 0bf9:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 4 23:44:21.973757 kernel: pci 0bf9:00:02.0: enabling Extended Tags
Sep 4 23:44:22.008571 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 4 23:44:22.008813 kernel: pci 0bf9:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 0bf9:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Sep 4 23:44:22.008840 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 4 23:44:22.013227 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 4 23:44:22.021052 kernel: pci_bus 0bf9:00: busn_res: [bus 00-ff] end is updated to 00
Sep 4 23:44:22.021248 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 4 23:44:22.029430 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 4 23:44:22.029609 kernel: pci 0bf9:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 4 23:44:22.041049 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:44:22.061895 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 23:44:22.061916 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 4 23:44:22.106953 kernel: mlx5_core 0bf9:00:02.0: enabling device (0000 -> 0002)
Sep 4 23:44:22.114672 kernel: mlx5_core 0bf9:00:02.0: firmware version: 16.31.2424
Sep 4 23:44:22.397668 kernel: hv_netvsc 002248b6-07bc-0022-48b6-07bc002248b6 eth0: VF registering: eth1
Sep 4 23:44:22.397890 kernel: mlx5_core 0bf9:00:02.0 eth1: joined to eth0
Sep 4 23:44:22.413752 kernel: mlx5_core 0bf9:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Sep 4 23:44:22.425673 kernel: mlx5_core 0bf9:00:02.0 enP3065s1: renamed from eth1
Sep 4 23:44:22.807137 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Sep 4 23:44:22.838762 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (502)
Sep 4 23:44:22.859019 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 4 23:44:22.925690 kernel: BTRFS: device fsid 74a5374f-334b-4c07-8952-82f9f0ad22ae devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (505)
Sep 4 23:44:22.941558 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Sep 4 23:44:22.949582 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Sep 4 23:44:22.979870 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 23:44:23.093384 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Sep 4 23:44:24.012072 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 23:44:24.012123 disk-uuid[606]: The operation has completed successfully.
Sep 4 23:44:24.101614 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 23:44:24.101742 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 23:44:24.143875 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 23:44:24.157741 sh[695]: Success
Sep 4 23:44:24.188681 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 4 23:44:24.640404 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 23:44:24.661796 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 23:44:24.671782 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 23:44:24.715146 kernel: BTRFS info (device dm-0): first mount of filesystem 74a5374f-334b-4c07-8952-82f9f0ad22ae
Sep 4 23:44:24.715199 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:44:24.722436 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 23:44:24.727679 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 23:44:24.732100 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 23:44:25.255197 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 23:44:25.261065 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 23:44:25.278920 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 23:44:25.288874 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 23:44:25.341984 kernel: BTRFS info (device sda6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:44:25.342045 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:44:25.347764 kernel: BTRFS info (device sda6): using free space tree
Sep 4 23:44:25.407524 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 23:44:25.428673 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 23:44:25.440720 kernel: BTRFS info (device sda6): last unmount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:44:25.440915 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 23:44:25.455852 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 23:44:25.464862 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 23:44:25.498858 systemd-networkd[872]: lo: Link UP
Sep 4 23:44:25.498866 systemd-networkd[872]: lo: Gained carrier
Sep 4 23:44:25.504108 systemd-networkd[872]: Enumeration completed
Sep 4 23:44:25.504248 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 23:44:25.505218 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:44:25.505222 systemd-networkd[872]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 23:44:25.511427 systemd[1]: Reached target network.target - Network.
Sep 4 23:44:25.608669 kernel: mlx5_core 0bf9:00:02.0 enP3065s1: Link up
Sep 4 23:44:25.691707 kernel: hv_netvsc 002248b6-07bc-0022-48b6-07bc002248b6 eth0: Data path switched to VF: enP3065s1
Sep 4 23:44:25.692377 systemd-networkd[872]: enP3065s1: Link UP
Sep 4 23:44:25.692623 systemd-networkd[872]: eth0: Link UP
Sep 4 23:44:25.693065 systemd-networkd[872]: eth0: Gained carrier
Sep 4 23:44:25.693076 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:44:25.702163 systemd-networkd[872]: enP3065s1: Gained carrier
Sep 4 23:44:25.729694 systemd-networkd[872]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 4 23:44:26.457531 ignition[877]: Ignition 2.20.0
Sep 4 23:44:26.457544 ignition[877]: Stage: fetch-offline
Sep 4 23:44:26.462711 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 23:44:26.457580 ignition[877]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:26.478912 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 23:44:26.457589 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 23:44:26.457711 ignition[877]: parsed url from cmdline: ""
Sep 4 23:44:26.457715 ignition[877]: no config URL provided
Sep 4 23:44:26.457719 ignition[877]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 23:44:26.457727 ignition[877]: no config at "/usr/lib/ignition/user.ign"
Sep 4 23:44:26.457732 ignition[877]: failed to fetch config: resource requires networking
Sep 4 23:44:26.458134 ignition[877]: Ignition finished successfully
Sep 4 23:44:26.503989 ignition[885]: Ignition 2.20.0
Sep 4 23:44:26.503996 ignition[885]: Stage: fetch
Sep 4 23:44:26.504193 ignition[885]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:26.504203 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 23:44:26.504327 ignition[885]: parsed url from cmdline: ""
Sep 4 23:44:26.504331 ignition[885]: no config URL provided
Sep 4 23:44:26.504336 ignition[885]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 23:44:26.504344 ignition[885]: no config at "/usr/lib/ignition/user.ign"
Sep 4 23:44:26.504371 ignition[885]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 4 23:44:26.648726 ignition[885]: GET result: OK
Sep 4 23:44:26.648822 ignition[885]: config has been read from IMDS userdata
Sep 4 23:44:26.648869 ignition[885]: parsing config with SHA512: 0f58d93cb320572058fb5b5e15112835b974451a9aa6f6bd18e6b667688689cfa6f3685886785d3433d4e35934895fddad4d143185ea42aca4e31283b6cbc426
Sep 4 23:44:26.653900 unknown[885]: fetched base config from "system"
Sep 4 23:44:26.654362 ignition[885]: fetch: fetch complete
Sep 4 23:44:26.653909 unknown[885]: fetched base config from "system"
Sep 4 23:44:26.654367 ignition[885]: fetch: fetch passed
Sep 4 23:44:26.653914 unknown[885]: fetched user config from "azure"
Sep 4 23:44:26.654414 ignition[885]: Ignition finished successfully
Sep 4 23:44:26.659423 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 23:44:26.685879 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 23:44:26.713002 ignition[891]: Ignition 2.20.0
Sep 4 23:44:26.713017 ignition[891]: Stage: kargs
Sep 4 23:44:26.713197 ignition[891]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:26.720826 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 23:44:26.713206 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 23:44:26.714161 ignition[891]: kargs: kargs passed
Sep 4 23:44:26.714207 ignition[891]: Ignition finished successfully
Sep 4 23:44:26.744891 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 23:44:26.769312 ignition[898]: Ignition 2.20.0
Sep 4 23:44:26.772491 ignition[898]: Stage: disks
Sep 4 23:44:26.776516 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 23:44:26.772716 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:26.783606 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 23:44:26.772727 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 23:44:26.795515 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 23:44:26.773695 ignition[898]: disks: disks passed
Sep 4 23:44:26.808632 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 23:44:26.773742 ignition[898]: Ignition finished successfully
Sep 4 23:44:26.821340 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 23:44:26.833082 systemd[1]: Reached target basic.target - Basic System.
Sep 4 23:44:26.865913 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 23:44:26.959452 systemd-fsck[907]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Sep 4 23:44:26.967166 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 23:44:26.986841 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 23:44:27.047457 kernel: EXT4-fs (sda9): mounted filesystem 22b06923-f972-4753-b92e-d6b25ef15ca3 r/w with ordered data mode. Quota mode: none.
Sep 4 23:44:27.047891 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 23:44:27.056805 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 23:44:27.143734 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 23:44:27.170666 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (918)
Sep 4 23:44:27.185661 kernel: BTRFS info (device sda6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:44:27.185714 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:44:27.191161 kernel: BTRFS info (device sda6): using free space tree
Sep 4 23:44:27.195782 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 23:44:27.218305 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 23:44:27.206829 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 4 23:44:27.220366 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 23:44:27.220413 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 23:44:27.235780 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 23:44:27.252890 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 23:44:27.276914 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 23:44:27.577879 systemd-networkd[872]: eth0: Gained IPv6LL
Sep 4 23:44:28.012011 coreos-metadata[935]: Sep 04 23:44:28.011 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 4 23:44:28.020885 coreos-metadata[935]: Sep 04 23:44:28.020 INFO Fetch successful
Sep 4 23:44:28.020885 coreos-metadata[935]: Sep 04 23:44:28.020 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 4 23:44:28.038187 coreos-metadata[935]: Sep 04 23:44:28.038 INFO Fetch successful
Sep 4 23:44:28.059756 coreos-metadata[935]: Sep 04 23:44:28.059 INFO wrote hostname ci-4230.2.2-n-1143fb47ea to /sysroot/etc/hostname
Sep 4 23:44:28.069880 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 4 23:44:28.509062 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 23:44:28.586478 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory
Sep 4 23:44:28.613254 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 23:44:28.622459 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 23:44:30.029599 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 23:44:30.049841 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 23:44:30.067765 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 23:44:30.087671 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 23:44:30.093486 kernel: BTRFS info (device sda6): last unmount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:44:30.115331 ignition[1037]: INFO : Ignition 2.20.0
Sep 4 23:44:30.115331 ignition[1037]: INFO : Stage: mount
Sep 4 23:44:30.125405 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:30.125405 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 23:44:30.125405 ignition[1037]: INFO : mount: mount passed
Sep 4 23:44:30.125405 ignition[1037]: INFO : Ignition finished successfully
Sep 4 23:44:30.121722 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 23:44:30.160698 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 23:44:30.168852 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 23:44:30.189263 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 23:44:30.224705 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1048)
Sep 4 23:44:30.224767 kernel: BTRFS info (device sda6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:44:30.231074 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:44:30.235555 kernel: BTRFS info (device sda6): using free space tree
Sep 4 23:44:30.243662 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 23:44:30.245899 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 23:44:30.272250 ignition[1065]: INFO : Ignition 2.20.0
Sep 4 23:44:30.272250 ignition[1065]: INFO : Stage: files
Sep 4 23:44:30.280497 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:30.280497 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 23:44:30.280497 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 23:44:30.306349 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 23:44:30.306349 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 23:44:30.376545 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 23:44:30.385310 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 23:44:30.385310 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 23:44:30.377007 unknown[1065]: wrote ssh authorized keys file for user: core
Sep 4 23:44:30.430910 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 23:44:30.443604 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 4 23:44:30.554520 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 23:44:31.234167 kernel: mlx5_core 0bf9:00:02.0: poll_health:835:(pid 0): device's health compromised - reached miss count
Sep 4 23:44:31.947956 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 23:44:31.960844 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 23:44:31.960844 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 4 23:44:32.124382 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 4 23:44:32.205549 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 23:44:32.205549 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 23:44:32.226465 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 23:44:32.226465 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:44:32.226465 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:44:32.226465 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:44:32.226465 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:44:32.226465 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:44:32.226465 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:44:32.226465 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:44:32.226465 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:44:32.226465 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 4 23:44:32.226465 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 4 23:44:32.226465 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 4 23:44:32.226465 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 4 23:44:32.545392 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 4 23:44:32.765998 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 4 23:44:32.765998 ignition[1065]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 4 23:44:32.823818 ignition[1065]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:44:32.835989 ignition[1065]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:44:32.835989 ignition[1065]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 4 23:44:32.835989 ignition[1065]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 23:44:32.835989 ignition[1065]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 23:44:32.835989 ignition[1065]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:44:32.835989 ignition[1065]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:44:32.835989 ignition[1065]: INFO : files: files passed
Sep 4 23:44:32.835989 ignition[1065]: INFO : Ignition finished successfully
Sep 4 23:44:32.836960 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 23:44:32.872406 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 23:44:32.886820 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 23:44:32.970456 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:44:32.925918 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 23:44:32.984138 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:44:32.926009 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 23:44:33.004750 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:44:32.938247 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:44:32.951089 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 23:44:32.994969 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 23:44:33.044883 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 23:44:33.045014 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 23:44:33.058258 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 23:44:33.072564 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 23:44:33.084104 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 23:44:33.103897 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 23:44:33.127016 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:44:33.146898 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 23:44:33.168535 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 23:44:33.169675 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 23:44:33.180934 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:44:33.194818 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:44:33.208806 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 23:44:33.219942 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 23:44:33.220019 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:44:33.236403 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 23:44:33.249152 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 23:44:33.260466 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 23:44:33.271461 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 23:44:33.283656 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 23:44:33.296726 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 23:44:33.308765 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 23:44:33.320899 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 23:44:33.333859 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 23:44:33.345729 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 23:44:33.355374 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 23:44:33.355463 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 23:44:33.370072 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:44:33.376146 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:44:33.388096 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 23:44:33.393430 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:44:33.401916 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 23:44:33.402000 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 23:44:33.420873 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 23:44:33.420932 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:44:33.435562 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 23:44:33.435612 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 23:44:33.447781 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 4 23:44:33.447830 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 4 23:44:33.482838 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 23:44:33.498808 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 23:44:33.533069 ignition[1119]: INFO : Ignition 2.20.0
Sep 4 23:44:33.533069 ignition[1119]: INFO : Stage: umount
Sep 4 23:44:33.533069 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:33.533069 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 23:44:33.498899 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:44:33.590398 ignition[1119]: INFO : umount: umount passed
Sep 4 23:44:33.590398 ignition[1119]: INFO : Ignition finished successfully
Sep 4 23:44:33.526838 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 23:44:33.538038 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 23:44:33.538117 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:44:33.554064 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 23:44:33.554146 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 23:44:33.570288 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 23:44:33.570381 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 23:44:33.583250 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 23:44:33.584062 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 23:44:33.584165 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 23:44:33.596821 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 23:44:33.596878 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 23:44:33.606608 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 23:44:33.606685 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 23:44:33.619105 systemd[1]: Stopped target network.target - Network.
Sep 4 23:44:33.627351 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 23:44:33.627430 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 23:44:33.640937 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 23:44:33.652921 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 23:44:33.657960 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:44:33.664981 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 23:44:33.675601 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 23:44:33.686142 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 23:44:33.686211 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:44:33.698134 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 23:44:33.698182 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:44:33.709396 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 23:44:33.709453 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 23:44:33.719730 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 23:44:33.719778 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 23:44:33.730720 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 23:44:33.741564 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 23:44:33.762151 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 23:44:33.762240 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 23:44:33.772271 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 23:44:33.772376 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 23:44:33.792082 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 4 23:44:33.793031 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 23:44:33.793099 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 23:44:33.804349 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 23:44:33.804437 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:44:33.824328 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:44:34.064467 kernel: hv_netvsc 002248b6-07bc-0022-48b6-07bc002248b6 eth0: Data path switched from VF: enP3065s1
Sep 4 23:44:33.824595 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 23:44:33.824777 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 23:44:33.844075 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 4 23:44:33.844860 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 23:44:33.844966 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:44:33.875843 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 23:44:33.886623 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 23:44:33.886733 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 23:44:33.900261 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 23:44:33.900321 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:44:33.916473 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 23:44:33.916525 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:44:33.922938 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:44:33.942595 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 23:44:33.955865 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 23:44:33.956060 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:44:33.966552 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 23:44:33.966597 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:44:33.977002 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 23:44:33.977042 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:44:33.988140 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 23:44:33.988197 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 23:44:34.008172 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 23:44:34.008227 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 23:44:34.019328 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 23:44:34.019386 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:44:34.071911 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 23:44:34.086813 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 23:44:34.086894 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:44:34.107359 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:44:34.107418 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:34.120285 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 23:44:34.120370 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 23:44:34.207474 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 23:44:34.207663 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 23:44:34.218448 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 23:44:34.242826 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 23:44:34.271959 systemd[1]: Switching root.
Sep 4 23:44:34.387548 systemd-journald[218]: Journal stopped
Sep 4 23:44:44.048869 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Sep 4 23:44:44.048895 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 23:44:44.048906 kernel: SELinux: policy capability open_perms=1
Sep 4 23:44:44.048916 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 23:44:44.048923 kernel: SELinux: policy capability always_check_network=0
Sep 4 23:44:44.048931 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 23:44:44.048939 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 23:44:44.048947 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 23:44:44.048955 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 23:44:44.048963 kernel: audit: type=1403 audit(1757029476.133:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 23:44:44.048973 systemd[1]: Successfully loaded SELinux policy in 221.949ms.
Sep 4 23:44:44.048982 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.776ms.
Sep 4 23:44:44.048992 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 23:44:44.049001 systemd[1]: Detected virtualization microsoft.
Sep 4 23:44:44.049010 systemd[1]: Detected architecture arm64.
Sep 4 23:44:44.049020 systemd[1]: Detected first boot.
Sep 4 23:44:44.049029 systemd[1]: Hostname set to .
Sep 4 23:44:44.049038 systemd[1]: Initializing machine ID from random generator.
Sep 4 23:44:44.049046 zram_generator::config[1162]: No configuration found.
Sep 4 23:44:44.049058 kernel: NET: Registered PF_VSOCK protocol family
Sep 4 23:44:44.049066 systemd[1]: Populated /etc with preset unit settings.
Sep 4 23:44:44.049077 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 4 23:44:44.049085 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 23:44:44.049094 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 23:44:44.049103 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 23:44:44.049111 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 23:44:44.049121 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 23:44:44.049129 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 23:44:44.049138 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 23:44:44.049149 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 23:44:44.049158 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 23:44:44.049167 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 23:44:44.049176 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 23:44:44.049185 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:44:44.049194 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:44:44.049202 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 23:44:44.049211 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 23:44:44.049222 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 23:44:44.049231 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 23:44:44.049241 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 4 23:44:44.049253 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:44:44.049263 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 23:44:44.049272 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 23:44:44.049281 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 23:44:44.049291 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 23:44:44.049301 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:44:44.049310 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 23:44:44.049319 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 23:44:44.049328 systemd[1]: Reached target swap.target - Swaps.
Sep 4 23:44:44.049337 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 23:44:44.049346 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 23:44:44.049355 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 4 23:44:44.049366 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:44:44.049376 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:44:44.049385 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:44:44.049394 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 23:44:44.049403 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 23:44:44.049414 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 23:44:44.049423 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 23:44:44.049433 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 23:44:44.049442 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 23:44:44.049452 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 23:44:44.049462 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 23:44:44.049471 systemd[1]: Reached target machines.target - Containers.
Sep 4 23:44:44.049480 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 23:44:44.049490 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:44:44.049501 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 23:44:44.049510 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 23:44:44.049519 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:44:44.049528 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 23:44:44.049538 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:44:44.049547 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 23:44:44.049557 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:44:44.049566 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 23:44:44.049577 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 23:44:44.049586 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 23:44:44.049596 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 23:44:44.049605 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 23:44:44.049615 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:44:44.049624 kernel: loop: module loaded
Sep 4 23:44:44.049632 kernel: fuse: init (API version 7.39)
Sep 4 23:44:44.049651 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:44:44.049665 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:44:44.049675 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 23:44:44.049685 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 23:44:44.049694 kernel: ACPI: bus type drm_connector registered
Sep 4 23:44:44.049703 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 23:44:44.049731 systemd-journald[1266]: Collecting audit messages is disabled.
Sep 4 23:44:44.049753 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 23:44:44.049763 systemd-journald[1266]: Journal started
Sep 4 23:44:44.049782 systemd-journald[1266]: Runtime Journal (/run/log/journal/d0d598128cca4dd29984ba222653577e) is 8M, max 78.5M, 70.5M free.
Sep 4 23:44:42.876735 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 23:44:42.880525 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 4 23:44:42.880937 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 23:44:42.881302 systemd[1]: systemd-journald.service: Consumed 3.475s CPU time.
Sep 4 23:44:44.077661 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 23:44:44.077726 systemd[1]: Stopped verity-setup.service.
Sep 4 23:44:44.099682 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 23:44:44.099018 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 23:44:44.105969 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 23:44:44.112926 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 23:44:44.118954 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 23:44:44.125817 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 23:44:44.132368 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 23:44:44.138590 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 23:44:44.147480 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:44:44.157508 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 23:44:44.157833 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 23:44:44.166562 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:44:44.166848 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:44:44.174808 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 23:44:44.174978 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 23:44:44.182185 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:44:44.182356 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:44:44.190848 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 23:44:44.191011 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 23:44:44.199803 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:44:44.199967 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:44:44.208296 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:44:44.215385 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 23:44:44.223843 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 23:44:44.231849 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 4 23:44:44.241089 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:44:44.261820 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 23:44:44.272738 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 23:44:44.282849 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 23:44:44.290565 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 23:44:44.290610 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 23:44:44.298934 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 4 23:44:44.315816 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 23:44:44.323553 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 23:44:44.329628 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:44:44.330845 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 23:44:44.339887 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 23:44:44.347097 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:44:44.348398 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 23:44:44.355732 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:44:44.357143 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:44:44.367880 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 23:44:44.378227 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 23:44:44.386897 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 23:44:44.397138 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 23:44:44.405753 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 23:44:44.414247 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 23:44:44.424275 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 23:44:44.439332 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 23:44:44.452904 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 4 23:44:44.464202 systemd-journald[1266]: Time spent on flushing to /var/log/journal/d0d598128cca4dd29984ba222653577e is 74.988ms for 918 entries.
Sep 4 23:44:44.464202 systemd-journald[1266]: System Journal (/var/log/journal/d0d598128cca4dd29984ba222653577e) is 11.8M, max 2.6G, 2.6G free.
Sep 4 23:44:44.649014 kernel: loop0: detected capacity change from 0 to 28720
Sep 4 23:44:44.649068 systemd-journald[1266]: Received client request to flush runtime journal.
Sep 4 23:44:44.649106 systemd-journald[1266]: /var/log/journal/d0d598128cca4dd29984ba222653577e/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Sep 4 23:44:44.649129 systemd-journald[1266]: Rotating system journal.
Sep 4 23:44:44.476084 udevadm[1305]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 4 23:44:44.559264 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:44:44.651236 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 23:44:44.667953 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 23:44:44.668701 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 4 23:44:45.108671 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 23:44:45.281692 kernel: loop1: detected capacity change from 0 to 113512
Sep 4 23:44:45.301033 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 23:44:45.315834 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 23:44:45.538531 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
Sep 4 23:44:45.538971 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
Sep 4 23:44:45.543685 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:44:45.960676 kernel: loop2: detected capacity change from 0 to 123192
Sep 4 23:44:46.575675 kernel: loop3: detected capacity change from 0 to 203944
Sep 4 23:44:46.617683 kernel: loop4: detected capacity change from 0 to 28720
Sep 4 23:44:46.634672 kernel: loop5: detected capacity change from 0 to 113512
Sep 4 23:44:46.654685 kernel: loop6: detected capacity change from 0 to 123192
Sep 4 23:44:46.678694 kernel: loop7: detected capacity change from 0 to 203944
Sep 4 23:44:46.694528 (sd-merge)[1328]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Sep 4 23:44:46.695032 (sd-merge)[1328]: Merged extensions into '/usr'.
Sep 4 23:44:46.698495 systemd[1]: Reload requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 23:44:46.698509 systemd[1]: Reloading...
Sep 4 23:44:46.777676 zram_generator::config[1353]: No configuration found.
Sep 4 23:44:46.929492 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:44:47.000616 systemd[1]: Reloading finished in 301 ms.
Sep 4 23:44:47.016365 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 23:44:47.024327 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 23:44:47.039852 systemd[1]: Starting ensure-sysext.service...
Sep 4 23:44:47.046404 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 23:44:47.057940 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:44:47.100060 systemd-udevd[1414]: Using default interface naming scheme 'v255'.
Sep 4 23:44:47.105871 systemd[1]: Reload requested from client PID 1412 ('systemctl') (unit ensure-sysext.service)...
Sep 4 23:44:47.105888 systemd[1]: Reloading...
Sep 4 23:44:47.158175 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 23:44:47.158417 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 23:44:47.159121 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 23:44:47.159882 systemd-tmpfiles[1413]: ACLs are not supported, ignoring.
Sep 4 23:44:47.160467 systemd-tmpfiles[1413]: ACLs are not supported, ignoring.
Sep 4 23:44:47.181792 zram_generator::config[1447]: No configuration found.
Sep 4 23:44:47.267817 systemd-tmpfiles[1413]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 23:44:47.267827 systemd-tmpfiles[1413]: Skipping /boot
Sep 4 23:44:47.277755 systemd-tmpfiles[1413]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 23:44:47.277906 systemd-tmpfiles[1413]: Skipping /boot
Sep 4 23:44:47.289183 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:44:47.359788 systemd[1]: Reloading finished in 253 ms.
Sep 4 23:44:47.391266 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:44:47.410913 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 23:44:47.458913 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 23:44:47.467441 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 23:44:47.481941 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 23:44:47.496749 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 23:44:47.512165 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv...
Sep 4 23:44:47.519704 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:44:47.525966 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:44:47.535969 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 23:44:47.545171 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:44:47.558763 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:44:47.565537 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:44:47.565812 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:44:47.565985 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 23:44:47.573742 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:44:47.573962 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:44:47.583084 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 23:44:47.583262 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 23:44:47.590289 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:44:47.590443 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:44:47.599513 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:44:47.599706 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:44:47.615722 systemd[1]: Finished ensure-sysext.service.
Sep 4 23:44:47.626143 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:44:47.626295 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:44:47.630806 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 23:44:47.638362 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 23:44:47.688987 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 23:44:47.771120 augenrules[1541]: No rules
Sep 4 23:44:47.773279 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 23:44:47.773496 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 23:44:47.881064 systemd-resolved[1508]: Positive Trust Anchors:
Sep 4 23:44:47.881081 systemd-resolved[1508]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 23:44:47.881112 systemd-resolved[1508]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 23:44:48.024368 systemd-resolved[1508]: Using system hostname 'ci-4230.2.2-n-1143fb47ea'.
Sep 4 23:44:48.026151 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 23:44:48.033181 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:44:48.055898 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 23:44:48.081984 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:44:48.099890 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 23:44:48.173352 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 4 23:44:48.269047 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
Sep 4 23:44:48.303715 systemd-networkd[1558]: lo: Link UP
Sep 4 23:44:48.304060 systemd-networkd[1558]: lo: Gained carrier
Sep 4 23:44:48.306748 systemd-networkd[1558]: Enumeration completed
Sep 4 23:44:48.315757 kernel: hv_vmbus: registering driver hv_balloon
Sep 4 23:44:48.315856 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 23:44:48.315873 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 4 23:44:48.315892 kernel: hv_balloon: Memory hot add disabled on ARM64
Sep 4 23:44:48.314980 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:48.339187 systemd-networkd[1558]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:44:48.339197 systemd-networkd[1558]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 23:44:48.349887 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 23:44:48.368730 systemd[1]: Reached target network.target - Network.
Sep 4 23:44:48.379852 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 4 23:44:48.400889 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 23:44:48.401665 kernel: mlx5_core 0bf9:00:02.0 enP3065s1: Link up
Sep 4 23:44:48.416847 kernel: hv_vmbus: registering driver hyperv_fb
Sep 4 23:44:48.416909 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 4 23:44:48.429417 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 4 23:44:48.430605 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:44:48.442184 kernel: Console: switching to colour dummy device 80x25
Sep 4 23:44:48.430931 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:48.444049 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:44:48.446727 kernel: hv_netvsc 002248b6-07bc-0022-48b6-07bc002248b6 eth0: Data path switched to VF: enP3065s1
Sep 4 23:44:48.446048 systemd-networkd[1558]: enP3065s1: Link UP
Sep 4 23:44:48.446142 systemd-networkd[1558]: eth0: Link UP
Sep 4 23:44:48.446145 systemd-networkd[1558]: eth0: Gained carrier
Sep 4 23:44:48.446168 systemd-networkd[1558]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:44:48.448897 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:48.457679 systemd-networkd[1558]: enP3065s1: Gained carrier
Sep 4 23:44:48.473031 kernel: Console: switching to colour frame buffer device 128x48
Sep 4 23:44:48.477758 systemd-networkd[1558]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 4 23:44:48.488108 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:44:48.489683 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:48.506878 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:48.527736 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1564)
Sep 4 23:44:48.537694 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 4 23:44:48.618478 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 4 23:44:48.631819 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 23:44:48.718720 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 23:44:48.732864 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 23:44:48.745719 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 23:44:48.845689 lvm[1666]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 23:44:48.903632 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 23:44:48.911743 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:44:48.923850 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 23:44:48.931438 lvm[1669]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 23:44:48.954240 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 23:44:49.593764 systemd-networkd[1558]: eth0: Gained IPv6LL
Sep 4 23:44:49.596031 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 23:44:49.604944 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 23:44:50.205830 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:50.968733 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 23:44:50.977218 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 23:44:56.157767 ldconfig[1297]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 23:44:56.174483 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 23:44:56.186893 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 23:44:56.238627 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 23:44:56.246500 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 23:44:56.253483 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 23:44:56.260988 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 23:44:56.269273 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 23:44:56.276236 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 23:44:56.284180 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 23:44:56.291559 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 23:44:56.291600 systemd[1]: Reached target paths.target - Path Units.
Sep 4 23:44:56.297585 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 23:44:56.321635 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 23:44:56.330034 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 23:44:56.338040 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 4 23:44:56.347014 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 4 23:44:56.355478 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 4 23:44:56.370546 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 23:44:56.377826 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 4 23:44:56.385595 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 23:44:56.392357 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 23:44:56.398799 systemd[1]: Reached target basic.target - Basic System.
Sep 4 23:44:56.404829 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 23:44:56.404858 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 23:44:56.426748 systemd[1]: Starting chronyd.service - NTP client/server...
Sep 4 23:44:56.434666 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 23:44:56.447889 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 4 23:44:56.463343 (chronyd)[1681]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Sep 4 23:44:56.466872 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 23:44:56.474679 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 23:44:56.482782 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 23:44:56.489771 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 23:44:56.489929 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Sep 4 23:44:56.492864 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Sep 4 23:44:56.504090 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Sep 4 23:44:56.506403 KVP[1690]: KVP starting; pid is:1690
Sep 4 23:44:56.516304 kernel: hv_utils: KVP IC version 4.0
Sep 4 23:44:56.516335 jq[1688]: false
Sep 4 23:44:56.510819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:44:56.509924 chronyd[1694]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Sep 4 23:44:56.512680 KVP[1690]: KVP LIC Version: 3.1
Sep 4 23:44:56.523025 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 23:44:56.530840 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 23:44:56.538967 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 23:44:56.547897 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 23:44:56.561881 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 23:44:56.568571 extend-filesystems[1689]: Found loop4
Sep 4 23:44:56.575191 extend-filesystems[1689]: Found loop5
Sep 4 23:44:56.575191 extend-filesystems[1689]: Found loop6
Sep 4 23:44:56.575191 extend-filesystems[1689]: Found loop7
Sep 4 23:44:56.575191 extend-filesystems[1689]: Found sda
Sep 4 23:44:56.575191 extend-filesystems[1689]: Found sda1
Sep 4 23:44:56.575191 extend-filesystems[1689]: Found sda2
Sep 4 23:44:56.575191 extend-filesystems[1689]: Found sda3
Sep 4 23:44:56.575191 extend-filesystems[1689]: Found usr
Sep 4 23:44:56.575191 extend-filesystems[1689]: Found sda4
Sep 4 23:44:56.575191 extend-filesystems[1689]: Found sda6
Sep 4 23:44:56.575191 extend-filesystems[1689]: Found sda7
Sep 4 23:44:56.575191 extend-filesystems[1689]: Found sda9
Sep 4 23:44:56.575191 extend-filesystems[1689]: Checking size of /dev/sda9
Sep 4 23:44:56.579128 chronyd[1694]: Timezone right/UTC failed leap second check, ignoring
Sep 4 23:44:56.592400 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 23:44:56.579322 chronyd[1694]: Loaded seccomp filter (level 2)
Sep 4 23:44:56.605773 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 23:44:56.606317 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 23:44:56.608963 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 23:44:56.671254 jq[1710]: true
Sep 4 23:44:56.628988 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 23:44:56.647762 systemd[1]: Started chronyd.service - NTP client/server.
Sep 4 23:44:56.667682 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 23:44:56.667924 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 23:44:56.669225 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 23:44:56.671670 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 23:44:56.683628 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 23:44:56.683897 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 23:44:56.694568 extend-filesystems[1689]: Old size kept for /dev/sda9
Sep 4 23:44:56.694568 extend-filesystems[1689]: Found sr0
Sep 4 23:44:56.703354 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 23:44:56.719309 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 23:44:56.729100 update_engine[1708]: I20250904 23:44:56.722199 1708 main.cc:92] Flatcar Update Engine starting
Sep 4 23:44:56.729285 jq[1727]: true
Sep 4 23:44:56.719493 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 23:44:56.744257 (ntainerd)[1735]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 23:44:56.779575 systemd-logind[1704]: New seat seat0.
Sep 4 23:44:56.785026 systemd-logind[1704]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 4 23:44:56.785257 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 23:44:56.801674 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1738)
Sep 4 23:44:56.834978 tar[1720]: linux-arm64/helm
Sep 4 23:44:56.937449 bash[1780]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 23:44:56.940695 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 23:44:56.952251 dbus-daemon[1684]: [system] SELinux support is enabled
Sep 4 23:44:56.955915 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 23:44:56.969110 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 4 23:44:56.969210 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 23:44:56.969231 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 23:44:56.978921 dbus-daemon[1684]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 4 23:44:56.979839 update_engine[1708]: I20250904 23:44:56.979780 1708 update_check_scheduler.cc:74] Next update check in 4m35s
Sep 4 23:44:56.983096 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 23:44:56.983126 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 23:44:56.995266 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 23:44:57.014925 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 23:44:57.090260 coreos-metadata[1683]: Sep 04 23:44:57.090 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 4 23:44:57.095416 coreos-metadata[1683]: Sep 04 23:44:57.095 INFO Fetch successful
Sep 4 23:44:57.095416 coreos-metadata[1683]: Sep 04 23:44:57.095 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Sep 4 23:44:57.100148 coreos-metadata[1683]: Sep 04 23:44:57.100 INFO Fetch successful
Sep 4 23:44:57.100546 coreos-metadata[1683]: Sep 04 23:44:57.100 INFO Fetching http://168.63.129.16/machine/00f42c32-5909-4b10-a5fa-53c8c9b9eb7c/7851f5b4%2Def99%2D4b7a%2D8215%2D36f0f72a4a30.%5Fci%2D4230.2.2%2Dn%2D1143fb47ea?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Sep 4 23:44:57.103584 coreos-metadata[1683]: Sep 04 23:44:57.103 INFO Fetch successful
Sep 4 23:44:57.103584 coreos-metadata[1683]: Sep 04 23:44:57.103 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Sep 4 23:44:57.118942 coreos-metadata[1683]: Sep 04 23:44:57.115 INFO Fetch successful
Sep 4 23:44:57.162693 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 4 23:44:57.174943 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 23:44:57.366871 locksmithd[1823]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 23:44:57.395899 containerd[1735]: time="2025-09-04T23:44:57.395627240Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Sep 4 23:44:57.424413 containerd[1735]: time="2025-09-04T23:44:57.423768880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:44:57.425230 containerd[1735]: time="2025-09-04T23:44:57.425184600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:44:57.425230 containerd[1735]: time="2025-09-04T23:44:57.425224480Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 4 23:44:57.425336 containerd[1735]: time="2025-09-04T23:44:57.425242640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 4 23:44:57.425424 containerd[1735]: time="2025-09-04T23:44:57.425399160Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 4 23:44:57.425465 containerd[1735]: time="2025-09-04T23:44:57.425423760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 4 23:44:57.425509 containerd[1735]: time="2025-09-04T23:44:57.425486520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:44:57.425546 containerd[1735]: time="2025-09-04T23:44:57.425505400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:44:57.425761 containerd[1735]: time="2025-09-04T23:44:57.425735560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:44:57.425761 containerd[1735]: time="2025-09-04T23:44:57.425757200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 4 23:44:57.425831 containerd[1735]: time="2025-09-04T23:44:57.425770520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:44:57.425831 containerd[1735]: time="2025-09-04T23:44:57.425779960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 4 23:44:57.425877 containerd[1735]: time="2025-09-04T23:44:57.425861320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:44:57.426137 containerd[1735]: time="2025-09-04T23:44:57.426053600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:44:57.426219 containerd[1735]: time="2025-09-04T23:44:57.426187600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:44:57.426219 containerd[1735]: time="2025-09-04T23:44:57.426206800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 4 23:44:57.426384 containerd[1735]: time="2025-09-04T23:44:57.426279160Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 4 23:44:57.426447 containerd[1735]: time="2025-09-04T23:44:57.426425200Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 23:44:57.446314 containerd[1735]: time="2025-09-04T23:44:57.445938120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 4 23:44:57.446314 containerd[1735]: time="2025-09-04T23:44:57.446015160Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 4 23:44:57.446314 containerd[1735]: time="2025-09-04T23:44:57.446032960Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 4 23:44:57.446314 containerd[1735]: time="2025-09-04T23:44:57.446048680Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 4 23:44:57.446314 containerd[1735]: time="2025-09-04T23:44:57.446061800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 4 23:44:57.446314 containerd[1735]: time="2025-09-04T23:44:57.446244520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 4 23:44:57.446522 containerd[1735]: time="2025-09-04T23:44:57.446483360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 4 23:44:57.447108 containerd[1735]: time="2025-09-04T23:44:57.446577320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 4 23:44:57.447108 containerd[1735]: time="2025-09-04T23:44:57.446606800Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 4 23:44:57.447108 containerd[1735]: time="2025-09-04T23:44:57.446623200Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 4 23:44:57.447108 containerd[1735]: time="2025-09-04T23:44:57.446637560Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 4 23:44:57.447108 containerd[1735]: time="2025-09-04T23:44:57.446861840Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 4 23:44:57.447108 containerd[1735]: time="2025-09-04T23:44:57.446884400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 4 23:44:57.447108 containerd[1735]: time="2025-09-04T23:44:57.446899240Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 4 23:44:57.447108 containerd[1735]: time="2025-09-04T23:44:57.446914480Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 4 23:44:57.447108 containerd[1735]: time="2025-09-04T23:44:57.446928560Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 4 23:44:57.447108 containerd[1735]: time="2025-09-04T23:44:57.446940520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 4 23:44:57.447108 containerd[1735]: time="2025-09-04T23:44:57.446952040Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 4 23:44:57.447108 containerd[1735]: time="2025-09-04T23:44:57.446974880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 4 23:44:57.447108 containerd[1735]: time="2025-09-04T23:44:57.446988760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 4 23:44:57.447108 containerd[1735]: time="2025-09-04T23:44:57.447001280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 4 23:44:57.447379 containerd[1735]: time="2025-09-04T23:44:57.447015000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 4 23:44:57.447379 containerd[1735]: time="2025-09-04T23:44:57.447027280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 4 23:44:57.447379 containerd[1735]: time="2025-09-04T23:44:57.447040080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 4 23:44:57.447379 containerd[1735]: time="2025-09-04T23:44:57.447052200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 4 23:44:57.447379 containerd[1735]: time="2025-09-04T23:44:57.447065120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 4 23:44:57.447379 containerd[1735]: time="2025-09-04T23:44:57.447078520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 4 23:44:57.447379 containerd[1735]: time="2025-09-04T23:44:57.447095160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 4 23:44:57.447379 containerd[1735]: time="2025-09-04T23:44:57.447107440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 4 23:44:57.447379 containerd[1735]: time="2025-09-04T23:44:57.447119400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 4 23:44:57.447379 containerd[1735]: time="2025-09-04T23:44:57.447132320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 4 23:44:57.447379 containerd[1735]: time="2025-09-04T23:44:57.447147080Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 4 23:44:57.447379 containerd[1735]: time="2025-09-04T23:44:57.447171000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 4 23:44:57.447379 containerd[1735]: time="2025-09-04T23:44:57.447184120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 4 23:44:57.447379 containerd[1735]: time="2025-09-04T23:44:57.447194520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 4 23:44:57.448424 containerd[1735]: time="2025-09-04T23:44:57.447242080Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 4 23:44:57.448424 containerd[1735]: time="2025-09-04T23:44:57.447260200Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 4 23:44:57.448424 containerd[1735]: time="2025-09-04T23:44:57.447270400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 4 23:44:57.448424 containerd[1735]: time="2025-09-04T23:44:57.447283880Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 4 23:44:57.448424 containerd[1735]: time="2025-09-04T23:44:57.447293360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 4 23:44:57.448424 containerd[1735]: time="2025-09-04T23:44:57.447305640Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 4 23:44:57.448424 containerd[1735]: time="2025-09-04T23:44:57.447316280Z" level=info msg="NRI interface is disabled by configuration."
Sep 4 23:44:57.448424 containerd[1735]: time="2025-09-04T23:44:57.447326640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 4 23:44:57.448567 containerd[1735]: time="2025-09-04T23:44:57.447613400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 4 23:44:57.448567 containerd[1735]: time="2025-09-04T23:44:57.447683800Z" level=info msg="Connect containerd service"
Sep 4 23:44:57.448567 containerd[1735]: time="2025-09-04T23:44:57.447721360Z" level=info msg="using legacy CRI server"
Sep 4 23:44:57.448567 containerd[1735]: time="2025-09-04T23:44:57.447728040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 23:44:57.448567 containerd[1735]: time="2025-09-04T23:44:57.447848000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 4 23:44:57.448567 containerd[1735]: time="2025-09-04T23:44:57.448447880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 23:44:57.448791 containerd[1735]: time="2025-09-04T23:44:57.448751600Z" level=info msg="Start subscribing containerd event"
Sep 4 23:44:57.448811 containerd[1735]: time="2025-09-04T23:44:57.448790280Z" level=info msg="Start recovering state"
Sep 4 23:44:57.459671 containerd[1735]: time="2025-09-04T23:44:57.448847280Z" level=info msg="Start event monitor"
Sep 4 23:44:57.459671 containerd[1735]: time="2025-09-04T23:44:57.448867480Z" level=info msg="Start snapshots syncer"
Sep 4 23:44:57.459671 containerd[1735]: time="2025-09-04T23:44:57.448878240Z" level=info msg="Start cni network conf syncer for default"
Sep 4 23:44:57.459671 containerd[1735]: time="2025-09-04T23:44:57.448885440Z" level=info msg="Start streaming server"
Sep 4 23:44:57.459671 containerd[1735]: time="2025-09-04T23:44:57.449124800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 23:44:57.459671 containerd[1735]: time="2025-09-04T23:44:57.449179400Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 23:44:57.459671 containerd[1735]: time="2025-09-04T23:44:57.449233880Z" level=info msg="containerd successfully booted in 0.054405s"
Sep 4 23:44:57.449328 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 23:44:57.528096 tar[1720]: linux-arm64/LICENSE
Sep 4 23:44:57.528096 tar[1720]: linux-arm64/README.md
Sep 4 23:44:57.540060 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 23:44:57.881818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:44:57.893155 (kubelet)[1861]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 23:44:58.338445 kubelet[1861]: E0904 23:44:58.338382 1861 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 23:44:58.341113 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 23:44:58.341260 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 23:44:58.343743 systemd[1]: kubelet.service: Consumed 727ms CPU time, 258.6M memory peak.
Sep 4 23:44:58.394149 sshd_keygen[1726]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 23:44:58.413178 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 23:44:58.426888 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 23:44:58.433872 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 4 23:44:58.441600 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 23:44:58.441836 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 23:44:58.451903 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 23:44:58.470561 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 4 23:44:58.484751 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 23:44:58.499988 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 23:44:58.507402 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 4 23:44:58.514354 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 23:44:58.521256 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 23:44:58.529080 systemd[1]: Startup finished in 711ms (kernel) + 16.422s (initrd) + 22.616s (userspace) = 39.750s. Sep 4 23:44:59.174622 login[1890]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Sep 4 23:44:59.212140 login[1889]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:44:59.224260 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 23:44:59.236946 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 23:44:59.239118 systemd-logind[1704]: New session 2 of user core. Sep 4 23:44:59.265463 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 23:44:59.274367 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 4 23:44:59.294641 (systemd)[1897]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 23:44:59.297474 systemd-logind[1704]: New session c1 of user core. Sep 4 23:44:59.699745 systemd[1897]: Queued start job for default target default.target. Sep 4 23:44:59.707731 systemd[1897]: Created slice app.slice - User Application Slice. Sep 4 23:44:59.707770 systemd[1897]: Reached target paths.target - Paths. Sep 4 23:44:59.707814 systemd[1897]: Reached target timers.target - Timers. Sep 4 23:44:59.709101 systemd[1897]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 23:44:59.718961 systemd[1897]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 23:44:59.719026 systemd[1897]: Reached target sockets.target - Sockets. Sep 4 23:44:59.719070 systemd[1897]: Reached target basic.target - Basic System. Sep 4 23:44:59.719100 systemd[1897]: Reached target default.target - Main User Target. Sep 4 23:44:59.719126 systemd[1897]: Startup finished in 414ms. Sep 4 23:44:59.719458 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 23:44:59.730839 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 23:45:00.176168 login[1890]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:00.183050 systemd-logind[1704]: New session 1 of user core. Sep 4 23:45:00.192833 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 4 23:45:01.274671 waagent[1887]: 2025-09-04T23:45:01.271863Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Sep 4 23:45:01.278537 waagent[1887]: 2025-09-04T23:45:01.278450Z INFO Daemon Daemon OS: flatcar 4230.2.2 Sep 4 23:45:01.283840 waagent[1887]: 2025-09-04T23:45:01.283766Z INFO Daemon Daemon Python: 3.11.11 Sep 4 23:45:01.289165 waagent[1887]: 2025-09-04T23:45:01.289091Z INFO Daemon Daemon Run daemon Sep 4 23:45:01.293897 waagent[1887]: 2025-09-04T23:45:01.293828Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.2.2' Sep 4 23:45:01.303668 waagent[1887]: 2025-09-04T23:45:01.303579Z INFO Daemon Daemon Using waagent for provisioning Sep 4 23:45:01.309650 waagent[1887]: 2025-09-04T23:45:01.309585Z INFO Daemon Daemon Activate resource disk Sep 4 23:45:01.315002 waagent[1887]: 2025-09-04T23:45:01.314932Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 4 23:45:01.329159 waagent[1887]: 2025-09-04T23:45:01.329078Z INFO Daemon Daemon Found device: None Sep 4 23:45:01.334400 waagent[1887]: 2025-09-04T23:45:01.334330Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 4 23:45:01.343514 waagent[1887]: 2025-09-04T23:45:01.343444Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 4 23:45:01.356188 waagent[1887]: 2025-09-04T23:45:01.356128Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 4 23:45:01.362262 waagent[1887]: 2025-09-04T23:45:01.362196Z INFO Daemon Daemon Running default provisioning handler Sep 4 23:45:01.383679 waagent[1887]: 2025-09-04T23:45:01.377764Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Sep 4 23:45:01.392918 waagent[1887]: 2025-09-04T23:45:01.392840Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 4 23:45:01.403839 waagent[1887]: 2025-09-04T23:45:01.403765Z INFO Daemon Daemon cloud-init is enabled: False Sep 4 23:45:01.409741 waagent[1887]: 2025-09-04T23:45:01.409668Z INFO Daemon Daemon Copying ovf-env.xml Sep 4 23:45:01.525908 waagent[1887]: 2025-09-04T23:45:01.525744Z INFO Daemon Daemon Successfully mounted dvd Sep 4 23:45:01.559191 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 4 23:45:01.566705 waagent[1887]: 2025-09-04T23:45:01.561955Z INFO Daemon Daemon Detect protocol endpoint Sep 4 23:45:01.567958 waagent[1887]: 2025-09-04T23:45:01.567875Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 4 23:45:01.574673 waagent[1887]: 2025-09-04T23:45:01.574576Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Sep 4 23:45:01.581619 waagent[1887]: 2025-09-04T23:45:01.581551Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 4 23:45:01.588095 waagent[1887]: 2025-09-04T23:45:01.588014Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 4 23:45:01.594047 waagent[1887]: 2025-09-04T23:45:01.593964Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 4 23:45:01.653406 waagent[1887]: 2025-09-04T23:45:01.653340Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 4 23:45:01.660994 waagent[1887]: 2025-09-04T23:45:01.660941Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 4 23:45:01.668208 waagent[1887]: 2025-09-04T23:45:01.668102Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 4 23:45:01.968901 waagent[1887]: 2025-09-04T23:45:01.968796Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 4 23:45:01.976212 waagent[1887]: 2025-09-04T23:45:01.976130Z INFO Daemon Daemon Forcing an update of the goal state. 
Sep 4 23:45:01.986279 waagent[1887]: 2025-09-04T23:45:01.986212Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 4 23:45:02.031991 waagent[1887]: 2025-09-04T23:45:02.031921Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 4 23:45:02.038434 waagent[1887]: 2025-09-04T23:45:02.038373Z INFO Daemon Sep 4 23:45:02.042015 waagent[1887]: 2025-09-04T23:45:02.041949Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 6e96fba4-8f8a-49f2-b272-19a5f9b15817 eTag: 2833667352297199522 source: Fabric] Sep 4 23:45:02.055542 waagent[1887]: 2025-09-04T23:45:02.055468Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 4 23:45:02.064225 waagent[1887]: 2025-09-04T23:45:02.064158Z INFO Daemon Sep 4 23:45:02.067522 waagent[1887]: 2025-09-04T23:45:02.067444Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 4 23:45:02.079548 waagent[1887]: 2025-09-04T23:45:02.079505Z INFO Daemon Daemon Downloading artifacts profile blob Sep 4 23:45:02.168302 waagent[1887]: 2025-09-04T23:45:02.168198Z INFO Daemon Downloaded certificate {'thumbprint': '6D0D8150B6B9758E0FF32B6C032C2BB2E3272782', 'hasPrivateKey': True} Sep 4 23:45:02.179764 waagent[1887]: 2025-09-04T23:45:02.179678Z INFO Daemon Fetch goal state completed Sep 4 23:45:02.192652 waagent[1887]: 2025-09-04T23:45:02.192583Z INFO Daemon Daemon Starting provisioning Sep 4 23:45:02.199059 waagent[1887]: 2025-09-04T23:45:02.198976Z INFO Daemon Daemon Handle ovf-env.xml. 
Sep 4 23:45:02.204523 waagent[1887]: 2025-09-04T23:45:02.204458Z INFO Daemon Daemon Set hostname [ci-4230.2.2-n-1143fb47ea] Sep 4 23:45:02.231673 waagent[1887]: 2025-09-04T23:45:02.229431Z INFO Daemon Daemon Publish hostname [ci-4230.2.2-n-1143fb47ea] Sep 4 23:45:02.236905 waagent[1887]: 2025-09-04T23:45:02.236827Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 4 23:45:02.244575 waagent[1887]: 2025-09-04T23:45:02.244508Z INFO Daemon Daemon Primary interface is [eth0] Sep 4 23:45:02.257705 systemd-networkd[1558]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:45:02.257715 systemd-networkd[1558]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 23:45:02.257745 systemd-networkd[1558]: eth0: DHCP lease lost Sep 4 23:45:02.258899 waagent[1887]: 2025-09-04T23:45:02.258797Z INFO Daemon Daemon Create user account if not exists Sep 4 23:45:02.267669 waagent[1887]: 2025-09-04T23:45:02.265598Z INFO Daemon Daemon User core already exists, skip useradd Sep 4 23:45:02.273015 waagent[1887]: 2025-09-04T23:45:02.272903Z INFO Daemon Daemon Configure sudoer Sep 4 23:45:02.278724 waagent[1887]: 2025-09-04T23:45:02.278597Z INFO Daemon Daemon Configure sshd Sep 4 23:45:02.284468 waagent[1887]: 2025-09-04T23:45:02.284383Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 4 23:45:02.300734 waagent[1887]: 2025-09-04T23:45:02.300370Z INFO Daemon Daemon Deploy ssh public key. 
Sep 4 23:45:02.309748 systemd-networkd[1558]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 4 23:45:03.597314 waagent[1887]: 2025-09-04T23:45:03.597238Z INFO Daemon Daemon Provisioning complete Sep 4 23:45:03.615693 waagent[1887]: 2025-09-04T23:45:03.615605Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 4 23:45:03.622649 waagent[1887]: 2025-09-04T23:45:03.622546Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 4 23:45:03.632752 waagent[1887]: 2025-09-04T23:45:03.632670Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Sep 4 23:45:03.770666 waagent[1947]: 2025-09-04T23:45:03.770103Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Sep 4 23:45:03.770666 waagent[1947]: 2025-09-04T23:45:03.770264Z INFO ExtHandler ExtHandler OS: flatcar 4230.2.2 Sep 4 23:45:03.770666 waagent[1947]: 2025-09-04T23:45:03.770317Z INFO ExtHandler ExtHandler Python: 3.11.11 Sep 4 23:45:03.852629 waagent[1947]: 2025-09-04T23:45:03.852479Z INFO ExtHandler ExtHandler Distro: flatcar-4230.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 4 23:45:03.853017 waagent[1947]: 2025-09-04T23:45:03.852973Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 23:45:03.853155 waagent[1947]: 2025-09-04T23:45:03.853122Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 23:45:03.861686 waagent[1947]: 2025-09-04T23:45:03.861584Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 4 23:45:03.867782 waagent[1947]: 2025-09-04T23:45:03.867732Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 4 23:45:03.868691 waagent[1947]: 2025-09-04T23:45:03.868396Z INFO ExtHandler Sep 4 23:45:03.868691 waagent[1947]: 2025-09-04T23:45:03.868477Z 
INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 0c813695-0b6d-4756-970b-6925a516cfe4 eTag: 2833667352297199522 source: Fabric] Sep 4 23:45:03.868835 waagent[1947]: 2025-09-04T23:45:03.868788Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Sep 4 23:45:03.869438 waagent[1947]: 2025-09-04T23:45:03.869385Z INFO ExtHandler Sep 4 23:45:03.869502 waagent[1947]: 2025-09-04T23:45:03.869471Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 4 23:45:03.873894 waagent[1947]: 2025-09-04T23:45:03.873853Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 4 23:45:03.945977 waagent[1947]: 2025-09-04T23:45:03.945857Z INFO ExtHandler Downloaded certificate {'thumbprint': '6D0D8150B6B9758E0FF32B6C032C2BB2E3272782', 'hasPrivateKey': True} Sep 4 23:45:03.946620 waagent[1947]: 2025-09-04T23:45:03.946562Z INFO ExtHandler Fetch goal state completed Sep 4 23:45:03.961969 waagent[1947]: 2025-09-04T23:45:03.961905Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1947 Sep 4 23:45:03.962151 waagent[1947]: 2025-09-04T23:45:03.962109Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 4 23:45:03.963850 waagent[1947]: 2025-09-04T23:45:03.963804Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.2.2', '', 'Flatcar Container Linux by Kinvolk'] Sep 4 23:45:03.964226 waagent[1947]: 2025-09-04T23:45:03.964180Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 4 23:45:04.053452 waagent[1947]: 2025-09-04T23:45:04.053397Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 4 23:45:04.053710 waagent[1947]: 2025-09-04T23:45:04.053639Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 4 23:45:04.060459 waagent[1947]: 
2025-09-04T23:45:04.059854Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 4 23:45:04.066748 systemd[1]: Reload requested from client PID 1960 ('systemctl') (unit waagent.service)... Sep 4 23:45:04.066765 systemd[1]: Reloading... Sep 4 23:45:04.173863 zram_generator::config[1999]: No configuration found. Sep 4 23:45:04.268928 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:45:04.371865 systemd[1]: Reloading finished in 304 ms. Sep 4 23:45:04.388452 waagent[1947]: 2025-09-04T23:45:04.388071Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Sep 4 23:45:04.395470 systemd[1]: Reload requested from client PID 2054 ('systemctl') (unit waagent.service)... Sep 4 23:45:04.395486 systemd[1]: Reloading... Sep 4 23:45:04.502904 zram_generator::config[2096]: No configuration found. Sep 4 23:45:04.610593 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:45:04.709684 systemd[1]: Reloading finished in 313 ms. Sep 4 23:45:04.730672 waagent[1947]: 2025-09-04T23:45:04.727949Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 4 23:45:04.730672 waagent[1947]: 2025-09-04T23:45:04.728124Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 4 23:45:05.276054 waagent[1947]: 2025-09-04T23:45:05.275951Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 4 23:45:05.276810 waagent[1947]: 2025-09-04T23:45:05.276716Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Sep 4 23:45:05.277704 waagent[1947]: 2025-09-04T23:45:05.277603Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 4 23:45:05.278158 waagent[1947]: 2025-09-04T23:45:05.278025Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 4 23:45:05.279171 waagent[1947]: 2025-09-04T23:45:05.278364Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 23:45:05.279171 waagent[1947]: 2025-09-04T23:45:05.278460Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 23:45:05.279171 waagent[1947]: 2025-09-04T23:45:05.278685Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 4 23:45:05.279171 waagent[1947]: 2025-09-04T23:45:05.278887Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 4 23:45:05.279171 waagent[1947]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 4 23:45:05.279171 waagent[1947]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 4 23:45:05.279171 waagent[1947]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 4 23:45:05.279171 waagent[1947]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 4 23:45:05.279171 waagent[1947]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 4 23:45:05.279171 waagent[1947]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 4 23:45:05.279513 waagent[1947]: 2025-09-04T23:45:05.279468Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 23:45:05.279714 waagent[1947]: 2025-09-04T23:45:05.279629Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 4 23:45:05.279791 waagent[1947]: 2025-09-04T23:45:05.279728Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Sep 4 23:45:05.280250 waagent[1947]: 2025-09-04T23:45:05.280198Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 4 23:45:05.280432 waagent[1947]: 2025-09-04T23:45:05.280377Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 4 23:45:05.280717 waagent[1947]: 2025-09-04T23:45:05.280631Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 4 23:45:05.281336 waagent[1947]: 2025-09-04T23:45:05.281303Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 23:45:05.281605 waagent[1947]: 2025-09-04T23:45:05.281555Z INFO EnvHandler ExtHandler Configure routes Sep 4 23:45:05.282126 waagent[1947]: 2025-09-04T23:45:05.282078Z INFO EnvHandler ExtHandler Gateway:None Sep 4 23:45:05.284227 waagent[1947]: 2025-09-04T23:45:05.284151Z INFO EnvHandler ExtHandler Routes:None Sep 4 23:45:05.287987 waagent[1947]: 2025-09-04T23:45:05.287929Z INFO ExtHandler ExtHandler Sep 4 23:45:05.288477 waagent[1947]: 2025-09-04T23:45:05.288419Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: f8fb6a28-3455-4b34-b18c-0bfc288ed060 correlation 106c9366-6d50-426f-bb83-ce1e2a473762 created: 2025-09-04T23:43:25.826698Z] Sep 4 23:45:05.289959 waagent[1947]: 2025-09-04T23:45:05.289679Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Sep 4 23:45:05.291580 waagent[1947]: 2025-09-04T23:45:05.291516Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Sep 4 23:45:05.326892 waagent[1947]: 2025-09-04T23:45:05.326828Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: EDD96BB3-F6BB-41D4-A063-36E90A846A9C;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Sep 4 23:45:05.396843 waagent[1947]: 2025-09-04T23:45:05.396761Z INFO MonitorHandler ExtHandler Network interfaces: Sep 4 23:45:05.396843 waagent[1947]: Executing ['ip', '-a', '-o', 'link']: Sep 4 23:45:05.396843 waagent[1947]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 4 23:45:05.396843 waagent[1947]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b6:07:bc brd ff:ff:ff:ff:ff:ff Sep 4 23:45:05.396843 waagent[1947]: 3: enP3065s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b6:07:bc brd ff:ff:ff:ff:ff:ff\ altname enP3065p0s2 Sep 4 23:45:05.396843 waagent[1947]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 4 23:45:05.396843 waagent[1947]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 4 23:45:05.396843 waagent[1947]: 2: eth0 inet 10.200.20.36/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 4 23:45:05.396843 waagent[1947]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 4 23:45:05.396843 waagent[1947]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 4 23:45:05.396843 waagent[1947]: 2: eth0 inet6 fe80::222:48ff:feb6:7bc/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 4 23:45:05.497735 waagent[1947]: 2025-09-04T23:45:05.497618Z INFO EnvHandler ExtHandler Successfully added Azure 
fabric firewall rules. Current Firewall rules: Sep 4 23:45:05.497735 waagent[1947]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 23:45:05.497735 waagent[1947]: pkts bytes target prot opt in out source destination Sep 4 23:45:05.497735 waagent[1947]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 4 23:45:05.497735 waagent[1947]: pkts bytes target prot opt in out source destination Sep 4 23:45:05.497735 waagent[1947]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 23:45:05.497735 waagent[1947]: pkts bytes target prot opt in out source destination Sep 4 23:45:05.497735 waagent[1947]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 4 23:45:05.497735 waagent[1947]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 4 23:45:05.497735 waagent[1947]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 4 23:45:05.500987 waagent[1947]: 2025-09-04T23:45:05.500905Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 4 23:45:05.500987 waagent[1947]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 23:45:05.500987 waagent[1947]: pkts bytes target prot opt in out source destination Sep 4 23:45:05.500987 waagent[1947]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 4 23:45:05.500987 waagent[1947]: pkts bytes target prot opt in out source destination Sep 4 23:45:05.500987 waagent[1947]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 23:45:05.500987 waagent[1947]: pkts bytes target prot opt in out source destination Sep 4 23:45:05.500987 waagent[1947]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 4 23:45:05.500987 waagent[1947]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 4 23:45:05.500987 waagent[1947]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 4 23:45:05.501261 waagent[1947]: 2025-09-04T23:45:05.501221Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 4 23:45:08.431849 systemd[1]: kubelet.service: 
Scheduled restart job, restart counter is at 1. Sep 4 23:45:08.440839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:08.549675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:08.554119 (kubelet)[2185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:45:08.661116 kubelet[2185]: E0904 23:45:08.661070 2185 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:45:08.664199 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:45:08.664349 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:45:08.664849 systemd[1]: kubelet.service: Consumed 198ms CPU time, 105.4M memory peak. Sep 4 23:45:13.047772 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 23:45:13.049435 systemd[1]: Started sshd@0-10.200.20.36:22-10.200.16.10:59934.service - OpenSSH per-connection server daemon (10.200.16.10:59934). Sep 4 23:45:13.614445 sshd[2194]: Accepted publickey for core from 10.200.16.10 port 59934 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:45:13.615915 sshd-session[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:13.620867 systemd-logind[1704]: New session 3 of user core. Sep 4 23:45:13.627874 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 23:45:14.041927 systemd[1]: Started sshd@1-10.200.20.36:22-10.200.16.10:59938.service - OpenSSH per-connection server daemon (10.200.16.10:59938). 
Sep 4 23:45:14.493868 sshd[2199]: Accepted publickey for core from 10.200.16.10 port 59938 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:45:14.495146 sshd-session[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:14.499858 systemd-logind[1704]: New session 4 of user core. Sep 4 23:45:14.508842 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 23:45:14.837147 sshd[2201]: Connection closed by 10.200.16.10 port 59938 Sep 4 23:45:14.837734 sshd-session[2199]: pam_unix(sshd:session): session closed for user core Sep 4 23:45:14.840476 systemd[1]: sshd@1-10.200.20.36:22-10.200.16.10:59938.service: Deactivated successfully. Sep 4 23:45:14.842045 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 23:45:14.843750 systemd-logind[1704]: Session 4 logged out. Waiting for processes to exit. Sep 4 23:45:14.845217 systemd-logind[1704]: Removed session 4. Sep 4 23:45:14.929947 systemd[1]: Started sshd@2-10.200.20.36:22-10.200.16.10:59954.service - OpenSSH per-connection server daemon (10.200.16.10:59954). Sep 4 23:45:15.383259 sshd[2207]: Accepted publickey for core from 10.200.16.10 port 59954 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:45:15.384632 sshd-session[2207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:15.390545 systemd-logind[1704]: New session 5 of user core. Sep 4 23:45:15.401860 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 23:45:15.732109 sshd[2209]: Connection closed by 10.200.16.10 port 59954 Sep 4 23:45:15.731630 sshd-session[2207]: pam_unix(sshd:session): session closed for user core Sep 4 23:45:15.734990 systemd[1]: sshd@2-10.200.20.36:22-10.200.16.10:59954.service: Deactivated successfully. Sep 4 23:45:15.736844 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 23:45:15.737788 systemd-logind[1704]: Session 5 logged out. Waiting for processes to exit. 
Sep 4 23:45:15.738873 systemd-logind[1704]: Removed session 5. Sep 4 23:45:15.818936 systemd[1]: Started sshd@3-10.200.20.36:22-10.200.16.10:59962.service - OpenSSH per-connection server daemon (10.200.16.10:59962). Sep 4 23:45:16.273504 sshd[2215]: Accepted publickey for core from 10.200.16.10 port 59962 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:45:16.275011 sshd-session[2215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:16.279368 systemd-logind[1704]: New session 6 of user core. Sep 4 23:45:16.286821 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 23:45:16.602769 sshd[2217]: Connection closed by 10.200.16.10 port 59962 Sep 4 23:45:16.603258 sshd-session[2215]: pam_unix(sshd:session): session closed for user core Sep 4 23:45:16.606639 systemd[1]: sshd@3-10.200.20.36:22-10.200.16.10:59962.service: Deactivated successfully. Sep 4 23:45:16.608570 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 23:45:16.609280 systemd-logind[1704]: Session 6 logged out. Waiting for processes to exit. Sep 4 23:45:16.610341 systemd-logind[1704]: Removed session 6. Sep 4 23:45:16.689935 systemd[1]: Started sshd@4-10.200.20.36:22-10.200.16.10:59976.service - OpenSSH per-connection server daemon (10.200.16.10:59976). Sep 4 23:45:17.143129 sshd[2223]: Accepted publickey for core from 10.200.16.10 port 59976 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:45:17.144409 sshd-session[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:17.148838 systemd-logind[1704]: New session 7 of user core. Sep 4 23:45:17.169841 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 4 23:45:17.624588 sudo[2226]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 23:45:17.624910 sudo[2226]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:45:17.655745 sudo[2226]: pam_unix(sudo:session): session closed for user root Sep 4 23:45:17.743691 sshd[2225]: Connection closed by 10.200.16.10 port 59976 Sep 4 23:45:17.744564 sshd-session[2223]: pam_unix(sshd:session): session closed for user core Sep 4 23:45:17.748088 systemd-logind[1704]: Session 7 logged out. Waiting for processes to exit. Sep 4 23:45:17.748123 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 23:45:17.749264 systemd[1]: sshd@4-10.200.20.36:22-10.200.16.10:59976.service: Deactivated successfully. Sep 4 23:45:17.752260 systemd-logind[1704]: Removed session 7. Sep 4 23:45:17.833919 systemd[1]: Started sshd@5-10.200.20.36:22-10.200.16.10:59978.service - OpenSSH per-connection server daemon (10.200.16.10:59978). Sep 4 23:45:18.289613 sshd[2232]: Accepted publickey for core from 10.200.16.10 port 59978 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:45:18.290993 sshd-session[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:18.295842 systemd-logind[1704]: New session 8 of user core. Sep 4 23:45:18.304816 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 4 23:45:18.548186 sudo[2236]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 23:45:18.548489 sudo[2236]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:45:18.552169 sudo[2236]: pam_unix(sudo:session): session closed for user root Sep 4 23:45:18.557284 sudo[2235]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 4 23:45:18.557558 sudo[2235]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:45:18.575999 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 23:45:18.599184 augenrules[2258]: No rules Sep 4 23:45:18.600391 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 23:45:18.600593 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 23:45:18.602083 sudo[2235]: pam_unix(sudo:session): session closed for user root Sep 4 23:45:18.681814 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 23:45:18.689895 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:18.699139 sshd[2234]: Connection closed by 10.200.16.10 port 59978 Sep 4 23:45:18.699624 sshd-session[2232]: pam_unix(sshd:session): session closed for user core Sep 4 23:45:18.702520 systemd[1]: sshd@5-10.200.20.36:22-10.200.16.10:59978.service: Deactivated successfully. Sep 4 23:45:18.704341 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 23:45:18.709799 systemd-logind[1704]: Session 8 logged out. Waiting for processes to exit. Sep 4 23:45:18.711017 systemd-logind[1704]: Removed session 8. Sep 4 23:45:18.793842 systemd[1]: Started sshd@6-10.200.20.36:22-10.200.16.10:59990.service - OpenSSH per-connection server daemon (10.200.16.10:59990). Sep 4 23:45:18.796857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 23:45:18.801587 (kubelet)[2275]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 23:45:18.903835 kubelet[2275]: E0904 23:45:18.903778 2275 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 23:45:18.906404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 23:45:18.906561 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 23:45:18.906926 systemd[1]: kubelet.service: Consumed 192ms CPU time, 107.3M memory peak.
Sep 4 23:45:19.255285 sshd[2274]: Accepted publickey for core from 10.200.16.10 port 59990 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:45:19.256980 sshd-session[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:45:19.261886 systemd-logind[1704]: New session 9 of user core.
Sep 4 23:45:19.271823 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 4 23:45:19.513233 sudo[2285]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 23:45:19.513507 sudo[2285]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 23:45:20.371836 chronyd[1694]: Selected source PHC0
Sep 4 23:45:21.337954 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 23:45:21.338089 (dockerd)[2301]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 23:45:22.584089 dockerd[2301]: time="2025-09-04T23:45:22.584026717Z" level=info msg="Starting up"
Sep 4 23:45:22.887122 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport896564258-merged.mount: Deactivated successfully.
Sep 4 23:45:22.920139 dockerd[2301]: time="2025-09-04T23:45:22.920090205Z" level=info msg="Loading containers: start."
Sep 4 23:45:23.206671 kernel: Initializing XFRM netlink socket
Sep 4 23:45:23.473912 systemd-networkd[1558]: docker0: Link UP
Sep 4 23:45:23.515946 dockerd[2301]: time="2025-09-04T23:45:23.515896203Z" level=info msg="Loading containers: done."
Sep 4 23:45:23.528758 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck769837330-merged.mount: Deactivated successfully.
Sep 4 23:45:23.548534 dockerd[2301]: time="2025-09-04T23:45:23.548484656Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 23:45:23.548624 dockerd[2301]: time="2025-09-04T23:45:23.548600656Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Sep 4 23:45:23.548802 dockerd[2301]: time="2025-09-04T23:45:23.548769256Z" level=info msg="Daemon has completed initialization"
Sep 4 23:45:23.608422 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 23:45:23.609226 dockerd[2301]: time="2025-09-04T23:45:23.608175728Z" level=info msg="API listen on /run/docker.sock"
Sep 4 23:45:24.717362 containerd[1735]: time="2025-09-04T23:45:24.717118150Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 4 23:45:25.577920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2218289263.mount: Deactivated successfully.
Sep 4 23:45:26.777062 containerd[1735]: time="2025-09-04T23:45:26.777012563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:26.780117 containerd[1735]: time="2025-09-04T23:45:26.780059360Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652441"
Sep 4 23:45:26.783858 containerd[1735]: time="2025-09-04T23:45:26.783808437Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:26.790689 containerd[1735]: time="2025-09-04T23:45:26.789803593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:26.790796 containerd[1735]: time="2025-09-04T23:45:26.790717432Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 2.073558322s"
Sep 4 23:45:26.790796 containerd[1735]: time="2025-09-04T23:45:26.790760472Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\""
Sep 4 23:45:26.792215 containerd[1735]: time="2025-09-04T23:45:26.792184511Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 4 23:45:28.107765 containerd[1735]: time="2025-09-04T23:45:28.107713806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:28.111098 containerd[1735]: time="2025-09-04T23:45:28.111049243Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460309"
Sep 4 23:45:28.115105 containerd[1735]: time="2025-09-04T23:45:28.115054080Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:28.120785 containerd[1735]: time="2025-09-04T23:45:28.120738915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:28.122091 containerd[1735]: time="2025-09-04T23:45:28.121960994Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.329741203s"
Sep 4 23:45:28.122091 containerd[1735]: time="2025-09-04T23:45:28.121995234Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\""
Sep 4 23:45:28.122662 containerd[1735]: time="2025-09-04T23:45:28.122469194Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 4 23:45:28.932327 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 4 23:45:28.938884 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:45:29.054956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:29.064126 (kubelet)[2555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 23:45:29.195756 kubelet[2555]: E0904 23:45:29.195180 2555 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 23:45:29.198208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 23:45:29.198355 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 23:45:29.199993 systemd[1]: kubelet.service: Consumed 139ms CPU time, 105.5M memory peak.
Sep 4 23:45:29.519026 containerd[1735]: time="2025-09-04T23:45:29.518367949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:29.521538 containerd[1735]: time="2025-09-04T23:45:29.521303588Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125903"
Sep 4 23:45:29.526269 containerd[1735]: time="2025-09-04T23:45:29.526204546Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:29.531966 containerd[1735]: time="2025-09-04T23:45:29.531909504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:29.533053 containerd[1735]: time="2025-09-04T23:45:29.533020384Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.41052175s"
Sep 4 23:45:29.533594 containerd[1735]: time="2025-09-04T23:45:29.533160343Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\""
Sep 4 23:45:29.534103 containerd[1735]: time="2025-09-04T23:45:29.533837503Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 4 23:45:31.361423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount44681649.mount: Deactivated successfully.
Sep 4 23:45:31.704882 containerd[1735]: time="2025-09-04T23:45:31.704839913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:31.707862 containerd[1735]: time="2025-09-04T23:45:31.707816792Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916095"
Sep 4 23:45:31.711991 containerd[1735]: time="2025-09-04T23:45:31.711939991Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:31.716235 containerd[1735]: time="2025-09-04T23:45:31.716183709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:31.716867 containerd[1735]: time="2025-09-04T23:45:31.716823349Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 2.182960806s"
Sep 4 23:45:31.716867 containerd[1735]: time="2025-09-04T23:45:31.716863829Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\""
Sep 4 23:45:31.717574 containerd[1735]: time="2025-09-04T23:45:31.717375869Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 4 23:45:32.372925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount770762939.mount: Deactivated successfully.
Sep 4 23:45:33.465875 containerd[1735]: time="2025-09-04T23:45:33.465827320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:33.469840 containerd[1735]: time="2025-09-04T23:45:33.469795159Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Sep 4 23:45:33.473634 containerd[1735]: time="2025-09-04T23:45:33.473610078Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:33.479953 containerd[1735]: time="2025-09-04T23:45:33.479884835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:33.481364 containerd[1735]: time="2025-09-04T23:45:33.481235395Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.763824806s"
Sep 4 23:45:33.481364 containerd[1735]: time="2025-09-04T23:45:33.481270635Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 4 23:45:33.481801 containerd[1735]: time="2025-09-04T23:45:33.481772994Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 4 23:45:34.073592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount454675594.mount: Deactivated successfully.
Sep 4 23:45:34.098337 containerd[1735]: time="2025-09-04T23:45:34.098280439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:34.101804 containerd[1735]: time="2025-09-04T23:45:34.101756157Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Sep 4 23:45:34.106807 containerd[1735]: time="2025-09-04T23:45:34.106745356Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:34.111844 containerd[1735]: time="2025-09-04T23:45:34.111789234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:34.112871 containerd[1735]: time="2025-09-04T23:45:34.112492793Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 630.606439ms"
Sep 4 23:45:34.112871 containerd[1735]: time="2025-09-04T23:45:34.112530953Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 4 23:45:34.113439 containerd[1735]: time="2025-09-04T23:45:34.113282953Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 4 23:45:34.774569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2911803902.mount: Deactivated successfully.
Sep 4 23:45:36.415676 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Sep 4 23:45:36.995989 containerd[1735]: time="2025-09-04T23:45:36.995943922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:36.999550 containerd[1735]: time="2025-09-04T23:45:36.999499879Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537161"
Sep 4 23:45:37.003418 containerd[1735]: time="2025-09-04T23:45:37.003373836Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:37.008756 containerd[1735]: time="2025-09-04T23:45:37.008691632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:37.010061 containerd[1735]: time="2025-09-04T23:45:37.009939791Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.896627918s"
Sep 4 23:45:37.010061 containerd[1735]: time="2025-09-04T23:45:37.009972191Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 4 23:45:39.431812 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 4 23:45:39.436913 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:45:39.538965 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:39.541373 (kubelet)[2708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 23:45:39.674293 kubelet[2708]: E0904 23:45:39.674234 2708 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 23:45:39.678844 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 23:45:39.678985 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 23:45:39.679391 systemd[1]: kubelet.service: Consumed 124ms CPU time, 105.1M memory peak.
Sep 4 23:45:41.797748 update_engine[1708]: I20250904 23:45:41.797673 1708 update_attempter.cc:509] Updating boot flags...
Sep 4 23:45:41.919445 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2731)
Sep 4 23:45:42.094677 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2730)
Sep 4 23:45:42.391451 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:42.392284 systemd[1]: kubelet.service: Consumed 124ms CPU time, 105.1M memory peak.
Sep 4 23:45:42.398977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:45:42.427369 systemd[1]: Reload requested from client PID 2837 ('systemctl') (unit session-9.scope)...
Sep 4 23:45:42.427384 systemd[1]: Reloading...
Sep 4 23:45:42.557688 zram_generator::config[2884]: No configuration found.
Sep 4 23:45:42.663177 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:45:42.766400 systemd[1]: Reloading finished in 338 ms.
Sep 4 23:45:42.817357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:42.821427 (kubelet)[2941]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 23:45:42.826303 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:45:42.828094 systemd[1]: kubelet.service: Deactivated successfully.
Sep 4 23:45:42.828335 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:42.828392 systemd[1]: kubelet.service: Consumed 96ms CPU time, 98.7M memory peak.
Sep 4 23:45:42.834384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:45:42.947375 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:42.957928 (kubelet)[2958]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 23:45:42.990366 kubelet[2958]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 23:45:42.990366 kubelet[2958]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 4 23:45:42.990366 kubelet[2958]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 23:45:42.990732 kubelet[2958]: I0904 23:45:42.990442 2958 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 23:45:43.703558 kubelet[2958]: I0904 23:45:43.703520 2958 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 4 23:45:43.704034 kubelet[2958]: I0904 23:45:43.703703 2958 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 23:45:43.705210 kubelet[2958]: I0904 23:45:43.705191 2958 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 4 23:45:43.725222 kubelet[2958]: E0904 23:45:43.725161 2958 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:45:43.726462 kubelet[2958]: I0904 23:45:43.726428 2958 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 23:45:43.735263 kubelet[2958]: E0904 23:45:43.735217 2958 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 4 23:45:43.735263 kubelet[2958]: I0904 23:45:43.735262 2958 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 4 23:45:43.739093 kubelet[2958]: I0904 23:45:43.739070 2958 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 23:45:43.739791 kubelet[2958]: I0904 23:45:43.739768 2958 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 4 23:45:43.739936 kubelet[2958]: I0904 23:45:43.739910 2958 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 23:45:43.740106 kubelet[2958]: I0904 23:45:43.739937 2958 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.2-n-1143fb47ea","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 4 23:45:43.740191 kubelet[2958]: I0904 23:45:43.740115 2958 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 23:45:43.740191 kubelet[2958]: I0904 23:45:43.740125 2958 container_manager_linux.go:300] "Creating device plugin manager"
Sep 4 23:45:43.740259 kubelet[2958]: I0904 23:45:43.740240 2958 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 23:45:43.742621 kubelet[2958]: I0904 23:45:43.742595 2958 kubelet.go:408] "Attempting to sync node with API server"
Sep 4 23:45:43.742621 kubelet[2958]: I0904 23:45:43.742624 2958 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 23:45:43.742711 kubelet[2958]: I0904 23:45:43.742656 2958 kubelet.go:314] "Adding apiserver pod source"
Sep 4 23:45:43.742711 kubelet[2958]: I0904 23:45:43.742673 2958 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 23:45:43.748671 kubelet[2958]: W0904 23:45:43.747582 2958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-1143fb47ea&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Sep 4 23:45:43.748671 kubelet[2958]: E0904 23:45:43.747708 2958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-1143fb47ea&limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:45:43.748671 kubelet[2958]: I0904 23:45:43.747807 2958 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Sep 4 23:45:43.748671 kubelet[2958]: I0904 23:45:43.748258 2958 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 23:45:43.748671 kubelet[2958]: W0904 23:45:43.748301 2958 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 23:45:43.750056 kubelet[2958]: I0904 23:45:43.750032 2958 server.go:1274] "Started kubelet"
Sep 4 23:45:43.752798 kubelet[2958]: W0904 23:45:43.752755 2958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Sep 4 23:45:43.752875 kubelet[2958]: E0904 23:45:43.752804 2958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:45:43.752953 kubelet[2958]: I0904 23:45:43.752919 2958 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 23:45:43.753778 kubelet[2958]: I0904 23:45:43.753758 2958 server.go:449] "Adding debug handlers to kubelet server"
Sep 4 23:45:43.754267 kubelet[2958]: I0904 23:45:43.754219 2958 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 23:45:43.754594 kubelet[2958]: I0904 23:45:43.754562 2958 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 23:45:43.755966 kubelet[2958]: E0904 23:45:43.754852 2958 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.36:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.2-n-1143fb47ea.1862390f9715fe50 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.2-n-1143fb47ea,UID:ci-4230.2.2-n-1143fb47ea,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.2-n-1143fb47ea,},FirstTimestamp:2025-09-04 23:45:43.7500084 +0000 UTC m=+0.789143038,LastTimestamp:2025-09-04 23:45:43.7500084 +0000 UTC m=+0.789143038,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.2-n-1143fb47ea,}"
Sep 4 23:45:43.756280 kubelet[2958]: I0904 23:45:43.756245 2958 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 23:45:43.757266 kubelet[2958]: I0904 23:45:43.757079 2958 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 23:45:43.759606 kubelet[2958]: E0904 23:45:43.759578 2958 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-1143fb47ea\" not found"
Sep 4 23:45:43.759746 kubelet[2958]: I0904 23:45:43.759733 2958 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 4 23:45:43.760023 kubelet[2958]: I0904 23:45:43.760004 2958 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 4 23:45:43.760141 kubelet[2958]: I0904 23:45:43.760129 2958 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 23:45:43.760700 kubelet[2958]: W0904 23:45:43.760639 2958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Sep 4 23:45:43.760841 kubelet[2958]: E0904 23:45:43.760821 2958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:45:43.760997 kubelet[2958]: E0904 23:45:43.760982 2958 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 23:45:43.763265 kubelet[2958]: E0904 23:45:43.763223 2958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-1143fb47ea?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="200ms"
Sep 4 23:45:43.763407 kubelet[2958]: I0904 23:45:43.763383 2958 factory.go:221] Registration of the containerd container factory successfully
Sep 4 23:45:43.763407 kubelet[2958]: I0904 23:45:43.763403 2958 factory.go:221] Registration of the systemd container factory successfully
Sep 4 23:45:43.763486 kubelet[2958]: I0904 23:45:43.763464 2958 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 23:45:43.788889 kubelet[2958]: I0904 23:45:43.788866 2958 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 4 23:45:43.789262 kubelet[2958]: I0904 23:45:43.789076 2958 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 4 23:45:43.789262 kubelet[2958]: I0904 23:45:43.789119 2958 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 23:45:43.797568 kubelet[2958]: I0904 23:45:43.797539 2958 policy_none.go:49] "None policy: Start"
Sep 4 23:45:43.798685 kubelet[2958]: I0904 23:45:43.798439 2958 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 4 23:45:43.798685 kubelet[2958]: I0904 23:45:43.798465 2958 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 23:45:43.808508 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 4 23:45:43.816689 kubelet[2958]: I0904 23:45:43.816629 2958 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 23:45:43.818262 kubelet[2958]: I0904 23:45:43.817621 2958 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 23:45:43.818262 kubelet[2958]: I0904 23:45:43.817668 2958 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 4 23:45:43.818262 kubelet[2958]: I0904 23:45:43.817686 2958 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 4 23:45:43.818262 kubelet[2958]: E0904 23:45:43.817730 2958 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 23:45:43.821693 kubelet[2958]: W0904 23:45:43.821610 2958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Sep 4 23:45:43.821838 kubelet[2958]: E0904 23:45:43.821818 2958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:45:43.824208 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 4 23:45:43.827517 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 4 23:45:43.835559 kubelet[2958]: I0904 23:45:43.835539 2958 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 23:45:43.835920 kubelet[2958]: I0904 23:45:43.835907 2958 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 4 23:45:43.836044 kubelet[2958]: I0904 23:45:43.836005 2958 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 4 23:45:43.836496 kubelet[2958]: I0904 23:45:43.836482 2958 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 23:45:43.837988 kubelet[2958]: E0904 23:45:43.837968 2958 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.2-n-1143fb47ea\" not found"
Sep 4 23:45:43.928737 systemd[1]: Created slice kubepods-burstable-podd5a6827666c18bf9862cb381d0028023.slice - libcontainer container kubepods-burstable-podd5a6827666c18bf9862cb381d0028023.slice.
Sep 4 23:45:43.939867 kubelet[2958]: I0904 23:45:43.939818 2958 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.2-n-1143fb47ea"
Sep 4 23:45:43.940487 kubelet[2958]: E0904 23:45:43.940462 2958 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4230.2.2-n-1143fb47ea"
Sep 4 23:45:43.942214 systemd[1]: Created slice kubepods-burstable-pod1b73aa9a21bf164ff73ea3c5a76d6f65.slice - libcontainer container kubepods-burstable-pod1b73aa9a21bf164ff73ea3c5a76d6f65.slice.
Sep 4 23:45:43.947057 systemd[1]: Created slice kubepods-burstable-pod0e38661662825dc232497f6f6a846100.slice - libcontainer container kubepods-burstable-pod0e38661662825dc232497f6f6a846100.slice.
Sep 4 23:45:43.964522 kubelet[2958]: E0904 23:45:43.964409 2958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-1143fb47ea?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="400ms"
Sep 4 23:45:44.061771 kubelet[2958]: I0904 23:45:44.061728 2958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b73aa9a21bf164ff73ea3c5a76d6f65-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-n-1143fb47ea\" (UID: \"1b73aa9a21bf164ff73ea3c5a76d6f65\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-1143fb47ea"
Sep 4 23:45:44.061771 kubelet[2958]: I0904 23:45:44.061771 2958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b73aa9a21bf164ff73ea3c5a76d6f65-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-n-1143fb47ea\" (UID: \"1b73aa9a21bf164ff73ea3c5a76d6f65\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-1143fb47ea"
Sep 4 23:45:44.062126 kubelet[2958]: I0904 23:45:44.061789 2958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0e38661662825dc232497f6f6a846100-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.2-n-1143fb47ea\" (UID: \"0e38661662825dc232497f6f6a846100\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-1143fb47ea"
Sep 4 23:45:44.062126 kubelet[2958]: I0904 23:45:44.061811 2958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e38661662825dc232497f6f6a846100-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-1143fb47ea\" (UID: \"0e38661662825dc232497f6f6a846100\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-1143fb47ea"
Sep 4 23:45:44.062126 kubelet[2958]: I0904 23:45:44.061827 2958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b73aa9a21bf164ff73ea3c5a76d6f65-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-n-1143fb47ea\" (UID: \"1b73aa9a21bf164ff73ea3c5a76d6f65\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-1143fb47ea"
Sep 4 23:45:44.062126 kubelet[2958]: I0904 23:45:44.061845 2958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e38661662825dc232497f6f6a846100-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-1143fb47ea\" (UID: \"0e38661662825dc232497f6f6a846100\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-1143fb47ea"
Sep 4 23:45:44.062126 kubelet[2958]: I0904 23:45:44.061860 2958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e38661662825dc232497f6f6a846100-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.2-n-1143fb47ea\" (UID: \"0e38661662825dc232497f6f6a846100\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-1143fb47ea"
Sep 4 23:45:44.062264 kubelet[2958]: I0904 23:45:44.061876 2958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e38661662825dc232497f6f6a846100-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-n-1143fb47ea\" (UID: \"0e38661662825dc232497f6f6a846100\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-1143fb47ea"
Sep 4 23:45:44.062264 kubelet[2958]: I0904 23:45:44.061891 2958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5a6827666c18bf9862cb381d0028023-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-n-1143fb47ea\" (UID: \"d5a6827666c18bf9862cb381d0028023\") " pod="kube-system/kube-scheduler-ci-4230.2.2-n-1143fb47ea"
Sep 4 23:45:44.143123 kubelet[2958]: I0904 23:45:44.142759 2958 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.2-n-1143fb47ea"
Sep 4 23:45:44.143123 kubelet[2958]: E0904 23:45:44.143069 2958 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4230.2.2-n-1143fb47ea"
Sep 4 23:45:44.238898 containerd[1735]: time="2025-09-04T23:45:44.238781306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-n-1143fb47ea,Uid:d5a6827666c18bf9862cb381d0028023,Namespace:kube-system,Attempt:0,}"
Sep 4 23:45:44.245221 containerd[1735]: time="2025-09-04T23:45:44.244965421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-n-1143fb47ea,Uid:1b73aa9a21bf164ff73ea3c5a76d6f65,Namespace:kube-system,Attempt:0,}"
Sep 4 23:45:44.251266 containerd[1735]: time="2025-09-04T23:45:44.251054657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-n-1143fb47ea,Uid:0e38661662825dc232497f6f6a846100,Namespace:kube-system,Attempt:0,}"
Sep 4 23:45:44.365384 kubelet[2958]: E0904 23:45:44.365339 2958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-1143fb47ea?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="800ms"
Sep 4 23:45:44.545010 kubelet[2958]: I0904 23:45:44.544568 2958 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.2-n-1143fb47ea"
Sep 4 23:45:44.545010 kubelet[2958]: E0904 23:45:44.544919 2958 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4230.2.2-n-1143fb47ea"
Sep 4 23:45:44.655975 kubelet[2958]: W0904 23:45:44.655887 2958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Sep 4 23:45:44.655975 kubelet[2958]: E0904 23:45:44.655937 2958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:45:44.890994 kubelet[2958]: W0904 23:45:44.890830 2958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Sep 4 23:45:44.890994 kubelet[2958]: E0904 23:45:44.890893 2958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:45:44.902111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2114694540.mount: Deactivated successfully.
Sep 4 23:45:44.911614 kubelet[2958]: W0904 23:45:44.911556 2958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-1143fb47ea&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Sep 4 23:45:44.911757 kubelet[2958]: E0904 23:45:44.911623 2958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-1143fb47ea&limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:45:44.922738 containerd[1735]: time="2025-09-04T23:45:44.922693478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 23:45:44.934492 containerd[1735]: time="2025-09-04T23:45:44.934442149Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Sep 4 23:45:44.942138 containerd[1735]: time="2025-09-04T23:45:44.942100623Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 23:45:44.949527 containerd[1735]: time="2025-09-04T23:45:44.949467018Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 23:45:44.957402 containerd[1735]: time="2025-09-04T23:45:44.956307533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 23:45:44.957402 containerd[1735]: time="2025-09-04T23:45:44.957179852Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 718.315906ms"
Sep 4 23:45:44.963373 containerd[1735]: time="2025-09-04T23:45:44.963328368Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 4 23:45:44.966616 containerd[1735]: time="2025-09-04T23:45:44.966583525Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 4 23:45:44.967087 containerd[1735]: time="2025-09-04T23:45:44.967031925Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 23:45:44.974042 containerd[1735]: time="2025-09-04T23:45:44.973977960Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 722.852223ms"
Sep 4 23:45:44.974661 containerd[1735]: time="2025-09-04T23:45:44.974619359Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 729.577178ms"
Sep 4 23:45:45.110092 kubelet[2958]: W0904 23:45:45.110004 2958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Sep 4 23:45:45.110092 kubelet[2958]: E0904 23:45:45.110067 2958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:45:45.166962 kubelet[2958]: E0904 23:45:45.166838 2958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-1143fb47ea?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="1.6s"
Sep 4 23:45:45.347444 kubelet[2958]: I0904 23:45:45.347146 2958 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.2-n-1143fb47ea"
Sep 4 23:45:45.347728 kubelet[2958]: E0904 23:45:45.347701 2958 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4230.2.2-n-1143fb47ea"
Sep 4 23:45:45.771873 kubelet[2958]: E0904 23:45:45.771828 2958 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:45:45.845414 containerd[1735]: time="2025-09-04T23:45:45.845301638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:45:45.846100 containerd[1735]: time="2025-09-04T23:45:45.846007277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:45:45.846320 containerd[1735]: time="2025-09-04T23:45:45.846274197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:45:45.846935 containerd[1735]: time="2025-09-04T23:45:45.846785277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:45:45.848499 containerd[1735]: time="2025-09-04T23:45:45.848422876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:45:45.848499 containerd[1735]: time="2025-09-04T23:45:45.848473755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:45:45.848627 containerd[1735]: time="2025-09-04T23:45:45.848578275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:45:45.848906 containerd[1735]: time="2025-09-04T23:45:45.848807715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:45:45.856308 containerd[1735]: time="2025-09-04T23:45:45.856216870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:45:45.856308 containerd[1735]: time="2025-09-04T23:45:45.856269070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:45:45.856308 containerd[1735]: time="2025-09-04T23:45:45.856285470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:45:45.856726 containerd[1735]: time="2025-09-04T23:45:45.856364150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:45:45.897811 systemd[1]: Started cri-containerd-8e4aa50574de4271ee36e542fb6147249006ba3420756385da226353bd88585f.scope - libcontainer container 8e4aa50574de4271ee36e542fb6147249006ba3420756385da226353bd88585f.
Sep 4 23:45:45.898889 systemd[1]: Started cri-containerd-d83409bc9a0b69b690bdaf0c133fc19858bf51ca977ab909a5cb56f52b39d1cb.scope - libcontainer container d83409bc9a0b69b690bdaf0c133fc19858bf51ca977ab909a5cb56f52b39d1cb.
Sep 4 23:45:45.914826 systemd[1]: Started cri-containerd-05f0e6e3db817b6b38952531715ca1c6f0c30420fd497937a93ecf7526bc14c9.scope - libcontainer container 05f0e6e3db817b6b38952531715ca1c6f0c30420fd497937a93ecf7526bc14c9.
Sep 4 23:45:45.954072 containerd[1735]: time="2025-09-04T23:45:45.954031038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-n-1143fb47ea,Uid:0e38661662825dc232497f6f6a846100,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e4aa50574de4271ee36e542fb6147249006ba3420756385da226353bd88585f\""
Sep 4 23:45:45.960517 containerd[1735]: time="2025-09-04T23:45:45.960405433Z" level=info msg="CreateContainer within sandbox \"8e4aa50574de4271ee36e542fb6147249006ba3420756385da226353bd88585f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 4 23:45:45.968922 containerd[1735]: time="2025-09-04T23:45:45.968874547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-n-1143fb47ea,Uid:1b73aa9a21bf164ff73ea3c5a76d6f65,Namespace:kube-system,Attempt:0,} returns sandbox id \"d83409bc9a0b69b690bdaf0c133fc19858bf51ca977ab909a5cb56f52b39d1cb\""
Sep 4 23:45:45.973767 containerd[1735]: time="2025-09-04T23:45:45.973673223Z" level=info msg="CreateContainer within sandbox \"d83409bc9a0b69b690bdaf0c133fc19858bf51ca977ab909a5cb56f52b39d1cb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 4 23:45:45.973883 containerd[1735]: time="2025-09-04T23:45:45.973753303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-n-1143fb47ea,Uid:d5a6827666c18bf9862cb381d0028023,Namespace:kube-system,Attempt:0,} returns sandbox id \"05f0e6e3db817b6b38952531715ca1c6f0c30420fd497937a93ecf7526bc14c9\""
Sep 4 23:45:45.977249 containerd[1735]: time="2025-09-04T23:45:45.977206421Z" level=info msg="CreateContainer within sandbox \"05f0e6e3db817b6b38952531715ca1c6f0c30420fd497937a93ecf7526bc14c9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 4 23:45:46.032665 containerd[1735]: time="2025-09-04T23:45:46.032183220Z" level=info msg="CreateContainer within sandbox \"8e4aa50574de4271ee36e542fb6147249006ba3420756385da226353bd88585f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c87494d06fc60cd19ecd93f89fc984d265fb6c1d73f12d81e769922f239828ab\""
Sep 4 23:45:46.033452 containerd[1735]: time="2025-09-04T23:45:46.033414979Z" level=info msg="StartContainer for \"c87494d06fc60cd19ecd93f89fc984d265fb6c1d73f12d81e769922f239828ab\""
Sep 4 23:45:46.051397 containerd[1735]: time="2025-09-04T23:45:46.051205686Z" level=info msg="CreateContainer within sandbox \"d83409bc9a0b69b690bdaf0c133fc19858bf51ca977ab909a5cb56f52b39d1cb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"013b4b81f817657ecf7dda06ce6c38ec1058871ef447ff063da3db4676884f37\""
Sep 4 23:45:46.052132 containerd[1735]: time="2025-09-04T23:45:46.052099605Z" level=info msg="StartContainer for \"013b4b81f817657ecf7dda06ce6c38ec1058871ef447ff063da3db4676884f37\""
Sep 4 23:45:46.056055 systemd[1]: Started cri-containerd-c87494d06fc60cd19ecd93f89fc984d265fb6c1d73f12d81e769922f239828ab.scope - libcontainer container c87494d06fc60cd19ecd93f89fc984d265fb6c1d73f12d81e769922f239828ab.
Sep 4 23:45:46.057589 containerd[1735]: time="2025-09-04T23:45:46.057545561Z" level=info msg="CreateContainer within sandbox \"05f0e6e3db817b6b38952531715ca1c6f0c30420fd497937a93ecf7526bc14c9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3d54d7bda2b71b75eb3cee673a963dfd3385d82b12180eccea062b1b30358eac\""
Sep 4 23:45:46.058190 containerd[1735]: time="2025-09-04T23:45:46.058088361Z" level=info msg="StartContainer for \"3d54d7bda2b71b75eb3cee673a963dfd3385d82b12180eccea062b1b30358eac\""
Sep 4 23:45:46.104836 systemd[1]: Started cri-containerd-013b4b81f817657ecf7dda06ce6c38ec1058871ef447ff063da3db4676884f37.scope - libcontainer container 013b4b81f817657ecf7dda06ce6c38ec1058871ef447ff063da3db4676884f37.
Sep 4 23:45:46.124591 containerd[1735]: time="2025-09-04T23:45:46.124235872Z" level=info msg="StartContainer for \"c87494d06fc60cd19ecd93f89fc984d265fb6c1d73f12d81e769922f239828ab\" returns successfully"
Sep 4 23:45:46.127966 systemd[1]: Started cri-containerd-3d54d7bda2b71b75eb3cee673a963dfd3385d82b12180eccea062b1b30358eac.scope - libcontainer container 3d54d7bda2b71b75eb3cee673a963dfd3385d82b12180eccea062b1b30358eac.
Sep 4 23:45:46.171911 containerd[1735]: time="2025-09-04T23:45:46.171863157Z" level=info msg="StartContainer for \"013b4b81f817657ecf7dda06ce6c38ec1058871ef447ff063da3db4676884f37\" returns successfully"
Sep 4 23:45:46.223697 containerd[1735]: time="2025-09-04T23:45:46.223549079Z" level=info msg="StartContainer for \"3d54d7bda2b71b75eb3cee673a963dfd3385d82b12180eccea062b1b30358eac\" returns successfully"
Sep 4 23:45:46.950902 kubelet[2958]: I0904 23:45:46.950866 2958 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.2-n-1143fb47ea"
Sep 4 23:45:48.380485 kubelet[2958]: E0904 23:45:48.380439 2958 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.2-n-1143fb47ea\" not found" node="ci-4230.2.2-n-1143fb47ea"
Sep 4 23:45:48.650296 kubelet[2958]: I0904 23:45:48.649970 2958 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.2.2-n-1143fb47ea"
Sep 4 23:45:48.650296 kubelet[2958]: E0904 23:45:48.650029 2958 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4230.2.2-n-1143fb47ea\": node \"ci-4230.2.2-n-1143fb47ea\" not found"
Sep 4 23:45:48.754596 kubelet[2958]: I0904 23:45:48.754553 2958 apiserver.go:52] "Watching apiserver"
Sep 4 23:45:48.760997 kubelet[2958]: I0904 23:45:48.760951 2958 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 4 23:45:51.038931 systemd[1]: Reload requested from client PID 3233 ('systemctl') (unit session-9.scope)...
Sep 4 23:45:51.038949 systemd[1]: Reloading...
Sep 4 23:45:51.132685 zram_generator::config[3278]: No configuration found.
Sep 4 23:45:51.157133 kubelet[2958]: W0904 23:45:51.157093 2958 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 4 23:45:51.261357 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:45:51.376986 systemd[1]: Reloading finished in 337 ms.
Sep 4 23:45:51.398463 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:45:51.417857 systemd[1]: kubelet.service: Deactivated successfully.
Sep 4 23:45:51.418096 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:51.418156 systemd[1]: kubelet.service: Consumed 1.140s CPU time, 131.5M memory peak.
Sep 4 23:45:51.423984 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:45:51.533755 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:51.544225 (kubelet)[3344]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 23:45:51.580153 kubelet[3344]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 23:45:51.580477 kubelet[3344]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 4 23:45:51.580516 kubelet[3344]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 23:45:51.580712 kubelet[3344]: I0904 23:45:51.580680 3344 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 23:45:51.586915 kubelet[3344]: I0904 23:45:51.586873 3344 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 4 23:45:51.586915 kubelet[3344]: I0904 23:45:51.586907 3344 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 23:45:51.587183 kubelet[3344]: I0904 23:45:51.587161 3344 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 4 23:45:51.588635 kubelet[3344]: I0904 23:45:51.588610 3344 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 4 23:45:51.590848 kubelet[3344]: I0904 23:45:51.590809 3344 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 23:45:51.599230 kubelet[3344]: E0904 23:45:51.599048 3344 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 4 23:45:51.599230 kubelet[3344]: I0904 23:45:51.599091 3344 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 4 23:45:51.606092 kubelet[3344]: I0904 23:45:51.605216 3344 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 23:45:51.606092 kubelet[3344]: I0904 23:45:51.605364 3344 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 4 23:45:51.606092 kubelet[3344]: I0904 23:45:51.605463 3344 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 23:45:51.606092 kubelet[3344]: I0904 23:45:51.605492 3344 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.2-n-1143fb47ea","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 4 23:45:51.606329 kubelet[3344]: I0904 23:45:51.605781 3344 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 23:45:51.606329 kubelet[3344]: I0904 23:45:51.605791 3344 container_manager_linux.go:300] "Creating device plugin manager"
Sep 4 23:45:51.606329 kubelet[3344]: I0904 23:45:51.605831 3344 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 23:45:51.606329 kubelet[3344]: I0904 23:45:51.605932 3344 kubelet.go:408] "Attempting to sync node with API server"
Sep 4 23:45:51.606329 kubelet[3344]: I0904 23:45:51.605944 3344 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 23:45:51.606329 kubelet[3344]: I0904 23:45:51.605970 3344 kubelet.go:314] "Adding apiserver pod source"
Sep 4 23:45:51.606329 kubelet[3344]: I0904 23:45:51.605984 3344 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 23:45:51.608558 kubelet[3344]: I0904 23:45:51.608536 3344 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Sep 4 23:45:51.609245 kubelet[3344]: I0904 23:45:51.609226 3344 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 23:45:51.609955 kubelet[3344]: I0904 23:45:51.609938 3344 server.go:1274] "Started kubelet"
Sep 4 23:45:51.612212 kubelet[3344]: I0904 23:45:51.612172 3344 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 23:45:51.613118 kubelet[3344]: I0904 23:45:51.613098 3344 server.go:449] "Adding debug handlers to kubelet server"
Sep 4 23:45:51.616969 kubelet[3344]: I0904 23:45:51.616922 3344 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 23:45:51.619652 kubelet[3344]: I0904 23:45:51.618244 3344 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 23:45:51.620580 kubelet[3344]: I0904 23:45:51.618277 3344 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 23:45:51.620685 kubelet[3344]: I0904 23:45:51.618434 3344 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 23:45:51.628538 kubelet[3344]: E0904 23:45:51.628433 3344 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 23:45:51.629999 kubelet[3344]: I0904 23:45:51.629872 3344 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 4 23:45:51.629999 kubelet[3344]: I0904 23:45:51.629987 3344 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 4 23:45:51.630502 kubelet[3344]: I0904 23:45:51.630473 3344 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 23:45:51.631321 kubelet[3344]: I0904 23:45:51.630901 3344 factory.go:221] Registration of the systemd container factory successfully
Sep 4 23:45:51.631321 kubelet[3344]: I0904 23:45:51.631011 3344 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 23:45:51.632687 kubelet[3344]: E0904 23:45:51.632630 3344 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-1143fb47ea\" not found"
Sep 4 23:45:51.640279 kubelet[3344]: I0904 23:45:51.640072 3344 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 23:45:51.647504 kubelet[3344]: I0904 23:45:51.642638 3344 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 23:45:51.647504 kubelet[3344]: I0904 23:45:51.642691 3344 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 4 23:45:51.647504 kubelet[3344]: I0904 23:45:51.645720 3344 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 4 23:45:51.647504 kubelet[3344]: E0904 23:45:51.645797 3344 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 23:45:51.649720 kubelet[3344]: I0904 23:45:51.648927 3344 factory.go:221] Registration of the containerd container factory successfully
Sep 4 23:45:51.710686 kubelet[3344]: I0904 23:45:51.709979 3344 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 4 23:45:51.710686 kubelet[3344]: I0904 23:45:51.710016 3344 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 4 23:45:51.710686 kubelet[3344]: I0904 23:45:51.710052 3344 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 23:45:51.710686 kubelet[3344]: I0904 23:45:51.710226 3344 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 4 23:45:51.710686 kubelet[3344]: I0904 23:45:51.710237 3344 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 4 23:45:51.710686 kubelet[3344]: I0904 23:45:51.710256 3344 policy_none.go:49] "None policy: Start"
Sep 4 23:45:51.711332 kubelet[3344]: I0904 23:45:51.711306 3344 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 4 23:45:51.711332 kubelet[3344]: I0904 23:45:51.711335 3344 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 23:45:51.711519 kubelet[3344]: I0904 23:45:51.711499 3344 state_mem.go:75] "Updated machine memory state"
Sep 4 23:45:51.719476 kubelet[3344]: I0904 23:45:51.717474 3344 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 23:45:51.719476 kubelet[3344]: I0904 23:45:51.717673 3344 eviction_manager.go:189] "Eviction manager: 
starting control loop" Sep 4 23:45:51.719476 kubelet[3344]: I0904 23:45:51.717686 3344 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:45:51.719476 kubelet[3344]: I0904 23:45:51.717956 3344 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:45:51.758871 kubelet[3344]: W0904 23:45:51.758835 3344 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 23:45:51.763615 kubelet[3344]: W0904 23:45:51.763518 3344 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 23:45:51.764686 kubelet[3344]: W0904 23:45:51.764608 3344 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 23:45:51.764782 kubelet[3344]: E0904 23:45:51.764686 3344 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.2.2-n-1143fb47ea\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.2-n-1143fb47ea" Sep 4 23:45:51.821885 kubelet[3344]: I0904 23:45:51.821851 3344 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.2-n-1143fb47ea" Sep 4 23:45:51.832012 kubelet[3344]: I0904 23:45:51.831952 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b73aa9a21bf164ff73ea3c5a76d6f65-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-n-1143fb47ea\" (UID: \"1b73aa9a21bf164ff73ea3c5a76d6f65\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-1143fb47ea" Sep 4 23:45:51.832012 kubelet[3344]: I0904 23:45:51.832014 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/0e38661662825dc232497f6f6a846100-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-1143fb47ea\" (UID: \"0e38661662825dc232497f6f6a846100\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-1143fb47ea" Sep 4 23:45:51.832911 kubelet[3344]: I0904 23:45:51.832035 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e38661662825dc232497f6f6a846100-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-1143fb47ea\" (UID: \"0e38661662825dc232497f6f6a846100\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-1143fb47ea" Sep 4 23:45:51.832911 kubelet[3344]: I0904 23:45:51.832736 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e38661662825dc232497f6f6a846100-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-n-1143fb47ea\" (UID: \"0e38661662825dc232497f6f6a846100\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-1143fb47ea" Sep 4 23:45:51.832911 kubelet[3344]: I0904 23:45:51.832763 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5a6827666c18bf9862cb381d0028023-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-n-1143fb47ea\" (UID: \"d5a6827666c18bf9862cb381d0028023\") " pod="kube-system/kube-scheduler-ci-4230.2.2-n-1143fb47ea" Sep 4 23:45:51.832911 kubelet[3344]: I0904 23:45:51.832779 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b73aa9a21bf164ff73ea3c5a76d6f65-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-n-1143fb47ea\" (UID: \"1b73aa9a21bf164ff73ea3c5a76d6f65\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-1143fb47ea" Sep 4 23:45:51.832911 kubelet[3344]: I0904 
23:45:51.832794 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b73aa9a21bf164ff73ea3c5a76d6f65-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-n-1143fb47ea\" (UID: \"1b73aa9a21bf164ff73ea3c5a76d6f65\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-1143fb47ea" Sep 4 23:45:51.833176 kubelet[3344]: I0904 23:45:51.832812 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0e38661662825dc232497f6f6a846100-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.2-n-1143fb47ea\" (UID: \"0e38661662825dc232497f6f6a846100\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-1143fb47ea" Sep 4 23:45:51.833176 kubelet[3344]: I0904 23:45:51.832830 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e38661662825dc232497f6f6a846100-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.2-n-1143fb47ea\" (UID: \"0e38661662825dc232497f6f6a846100\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-1143fb47ea" Sep 4 23:45:51.835182 kubelet[3344]: I0904 23:45:51.835154 3344 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230.2.2-n-1143fb47ea" Sep 4 23:45:51.835273 kubelet[3344]: I0904 23:45:51.835254 3344 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.2.2-n-1143fb47ea" Sep 4 23:45:52.097612 sudo[3376]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 23:45:52.097947 sudo[3376]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 4 23:45:52.531026 sudo[3376]: pam_unix(sudo:session): session closed for user root Sep 4 23:45:52.613775 kubelet[3344]: I0904 23:45:52.613726 3344 apiserver.go:52] "Watching 
apiserver" Sep 4 23:45:52.631311 kubelet[3344]: I0904 23:45:52.631267 3344 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 4 23:45:52.700502 kubelet[3344]: W0904 23:45:52.700461 3344 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 23:45:52.700639 kubelet[3344]: E0904 23:45:52.700531 3344 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.2.2-n-1143fb47ea\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.2-n-1143fb47ea" Sep 4 23:45:52.723602 kubelet[3344]: I0904 23:45:52.723539 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.2-n-1143fb47ea" podStartSLOduration=1.72352164 podStartE2EDuration="1.72352164s" podCreationTimestamp="2025-09-04 23:45:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:45:52.722321281 +0000 UTC m=+1.174339606" watchObservedRunningTime="2025-09-04 23:45:52.72352164 +0000 UTC m=+1.175539925" Sep 4 23:45:52.743599 kubelet[3344]: I0904 23:45:52.741990 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-1143fb47ea" podStartSLOduration=1.741974948 podStartE2EDuration="1.741974948s" podCreationTimestamp="2025-09-04 23:45:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:45:52.741553588 +0000 UTC m=+1.193571913" watchObservedRunningTime="2025-09-04 23:45:52.741974948 +0000 UTC m=+1.193993233" Sep 4 23:45:54.390840 sudo[2285]: pam_unix(sudo:session): session closed for user root Sep 4 23:45:54.470549 sshd[2284]: Connection closed by 10.200.16.10 port 59990 Sep 4 23:45:54.471165 sshd-session[2274]: 
pam_unix(sshd:session): session closed for user core Sep 4 23:45:54.475488 systemd[1]: sshd@6-10.200.20.36:22-10.200.16.10:59990.service: Deactivated successfully. Sep 4 23:45:54.479522 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 23:45:54.480506 systemd[1]: session-9.scope: Consumed 7.117s CPU time, 258.1M memory peak. Sep 4 23:45:54.483901 systemd-logind[1704]: Session 9 logged out. Waiting for processes to exit. Sep 4 23:45:54.485259 systemd-logind[1704]: Removed session 9. Sep 4 23:45:57.416751 kubelet[3344]: I0904 23:45:57.416711 3344 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 23:45:57.417663 containerd[1735]: time="2025-09-04T23:45:57.417371647Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 23:45:57.418183 kubelet[3344]: I0904 23:45:57.417966 3344 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 23:45:57.452163 kubelet[3344]: I0904 23:45:57.451915 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.2-n-1143fb47ea" podStartSLOduration=6.451894825 podStartE2EDuration="6.451894825s" podCreationTimestamp="2025-09-04 23:45:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:45:52.7547245 +0000 UTC m=+1.206742825" watchObservedRunningTime="2025-09-04 23:45:57.451894825 +0000 UTC m=+5.903913150" Sep 4 23:45:57.465200 systemd[1]: Created slice kubepods-besteffort-podf1e7d305_baad_47df_bcb5_b16bc49595c0.slice - libcontainer container kubepods-besteffort-podf1e7d305_baad_47df_bcb5_b16bc49595c0.slice. 
Sep 4 23:45:57.475126 kubelet[3344]: I0904 23:45:57.474590 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f1e7d305-baad-47df-bcb5-b16bc49595c0-kube-proxy\") pod \"kube-proxy-zk56r\" (UID: \"f1e7d305-baad-47df-bcb5-b16bc49595c0\") " pod="kube-system/kube-proxy-zk56r" Sep 4 23:45:57.475126 kubelet[3344]: I0904 23:45:57.474670 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1e7d305-baad-47df-bcb5-b16bc49595c0-xtables-lock\") pod \"kube-proxy-zk56r\" (UID: \"f1e7d305-baad-47df-bcb5-b16bc49595c0\") " pod="kube-system/kube-proxy-zk56r" Sep 4 23:45:57.475126 kubelet[3344]: I0904 23:45:57.474694 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1e7d305-baad-47df-bcb5-b16bc49595c0-lib-modules\") pod \"kube-proxy-zk56r\" (UID: \"f1e7d305-baad-47df-bcb5-b16bc49595c0\") " pod="kube-system/kube-proxy-zk56r" Sep 4 23:45:57.475126 kubelet[3344]: I0904 23:45:57.474715 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2z87\" (UniqueName: \"kubernetes.io/projected/f1e7d305-baad-47df-bcb5-b16bc49595c0-kube-api-access-t2z87\") pod \"kube-proxy-zk56r\" (UID: \"f1e7d305-baad-47df-bcb5-b16bc49595c0\") " pod="kube-system/kube-proxy-zk56r" Sep 4 23:45:57.492273 systemd[1]: Created slice kubepods-burstable-podb233144a_53c3_46ce_8519_5ba3943f2e3b.slice - libcontainer container kubepods-burstable-podb233144a_53c3_46ce_8519_5ba3943f2e3b.slice. 
Sep 4 23:45:57.575720 kubelet[3344]: I0904 23:45:57.575657 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-cilium-run\") pod \"cilium-7bbxn\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " pod="kube-system/cilium-7bbxn" Sep 4 23:45:57.575720 kubelet[3344]: I0904 23:45:57.575700 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-cni-path\") pod \"cilium-7bbxn\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " pod="kube-system/cilium-7bbxn" Sep 4 23:45:57.575893 kubelet[3344]: I0904 23:45:57.575743 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-cilium-cgroup\") pod \"cilium-7bbxn\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " pod="kube-system/cilium-7bbxn" Sep 4 23:45:57.575893 kubelet[3344]: I0904 23:45:57.575760 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-host-proc-sys-net\") pod \"cilium-7bbxn\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " pod="kube-system/cilium-7bbxn" Sep 4 23:45:57.575893 kubelet[3344]: I0904 23:45:57.575776 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-host-proc-sys-kernel\") pod \"cilium-7bbxn\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " pod="kube-system/cilium-7bbxn" Sep 4 23:45:57.575893 kubelet[3344]: I0904 23:45:57.575792 3344 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-hostproc\") pod \"cilium-7bbxn\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " pod="kube-system/cilium-7bbxn" Sep 4 23:45:57.575893 kubelet[3344]: I0904 23:45:57.575826 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b233144a-53c3-46ce-8519-5ba3943f2e3b-cilium-config-path\") pod \"cilium-7bbxn\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " pod="kube-system/cilium-7bbxn" Sep 4 23:45:57.575893 kubelet[3344]: I0904 23:45:57.575843 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-bpf-maps\") pod \"cilium-7bbxn\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " pod="kube-system/cilium-7bbxn" Sep 4 23:45:57.576020 kubelet[3344]: I0904 23:45:57.575861 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-xtables-lock\") pod \"cilium-7bbxn\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " pod="kube-system/cilium-7bbxn" Sep 4 23:45:57.576020 kubelet[3344]: I0904 23:45:57.575875 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-etc-cni-netd\") pod \"cilium-7bbxn\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " pod="kube-system/cilium-7bbxn" Sep 4 23:45:57.576020 kubelet[3344]: I0904 23:45:57.575898 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-lib-modules\") pod \"cilium-7bbxn\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " pod="kube-system/cilium-7bbxn" Sep 4 23:45:57.576020 kubelet[3344]: I0904 23:45:57.575938 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b233144a-53c3-46ce-8519-5ba3943f2e3b-clustermesh-secrets\") pod \"cilium-7bbxn\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " pod="kube-system/cilium-7bbxn" Sep 4 23:45:57.576020 kubelet[3344]: I0904 23:45:57.575953 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b233144a-53c3-46ce-8519-5ba3943f2e3b-hubble-tls\") pod \"cilium-7bbxn\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " pod="kube-system/cilium-7bbxn" Sep 4 23:45:57.576020 kubelet[3344]: I0904 23:45:57.575972 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xrdp\" (UniqueName: \"kubernetes.io/projected/b233144a-53c3-46ce-8519-5ba3943f2e3b-kube-api-access-7xrdp\") pod \"cilium-7bbxn\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " pod="kube-system/cilium-7bbxn" Sep 4 23:45:57.585997 kubelet[3344]: E0904 23:45:57.585689 3344 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 4 23:45:57.585997 kubelet[3344]: E0904 23:45:57.585724 3344 projected.go:194] Error preparing data for projected volume kube-api-access-t2z87 for pod kube-system/kube-proxy-zk56r: configmap "kube-root-ca.crt" not found Sep 4 23:45:57.585997 kubelet[3344]: E0904 23:45:57.585797 3344 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1e7d305-baad-47df-bcb5-b16bc49595c0-kube-api-access-t2z87 podName:f1e7d305-baad-47df-bcb5-b16bc49595c0 nodeName:}" failed. 
No retries permitted until 2025-09-04 23:45:58.085774899 +0000 UTC m=+6.537793184 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t2z87" (UniqueName: "kubernetes.io/projected/f1e7d305-baad-47df-bcb5-b16bc49595c0-kube-api-access-t2z87") pod "kube-proxy-zk56r" (UID: "f1e7d305-baad-47df-bcb5-b16bc49595c0") : configmap "kube-root-ca.crt" not found Sep 4 23:45:57.694913 kubelet[3344]: E0904 23:45:57.694860 3344 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 4 23:45:57.695107 kubelet[3344]: E0904 23:45:57.695067 3344 projected.go:194] Error preparing data for projected volume kube-api-access-7xrdp for pod kube-system/cilium-7bbxn: configmap "kube-root-ca.crt" not found Sep 4 23:45:57.695241 kubelet[3344]: E0904 23:45:57.695193 3344 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233144a-53c3-46ce-8519-5ba3943f2e3b-kube-api-access-7xrdp podName:b233144a-53c3-46ce-8519-5ba3943f2e3b nodeName:}" failed. No retries permitted until 2025-09-04 23:45:58.195171269 +0000 UTC m=+6.647189594 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7xrdp" (UniqueName: "kubernetes.io/projected/b233144a-53c3-46ce-8519-5ba3943f2e3b-kube-api-access-7xrdp") pod "cilium-7bbxn" (UID: "b233144a-53c3-46ce-8519-5ba3943f2e3b") : configmap "kube-root-ca.crt" not found Sep 4 23:45:58.180459 kubelet[3344]: E0904 23:45:58.179870 3344 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 4 23:45:58.180459 kubelet[3344]: E0904 23:45:58.179910 3344 projected.go:194] Error preparing data for projected volume kube-api-access-t2z87 for pod kube-system/kube-proxy-zk56r: configmap "kube-root-ca.crt" not found Sep 4 23:45:58.180459 kubelet[3344]: E0904 23:45:58.179950 3344 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1e7d305-baad-47df-bcb5-b16bc49595c0-kube-api-access-t2z87 podName:f1e7d305-baad-47df-bcb5-b16bc49595c0 nodeName:}" failed. No retries permitted until 2025-09-04 23:45:59.17993468 +0000 UTC m=+7.631952965 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-t2z87" (UniqueName: "kubernetes.io/projected/f1e7d305-baad-47df-bcb5-b16bc49595c0-kube-api-access-t2z87") pod "kube-proxy-zk56r" (UID: "f1e7d305-baad-47df-bcb5-b16bc49595c0") : configmap "kube-root-ca.crt" not found Sep 4 23:45:58.398374 containerd[1735]: time="2025-09-04T23:45:58.398317821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7bbxn,Uid:b233144a-53c3-46ce-8519-5ba3943f2e3b,Namespace:kube-system,Attempt:0,}" Sep 4 23:45:58.451978 containerd[1735]: time="2025-09-04T23:45:58.451894427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:45:58.452572 containerd[1735]: time="2025-09-04T23:45:58.452341027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:45:58.452572 containerd[1735]: time="2025-09-04T23:45:58.452380266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:58.452846 containerd[1735]: time="2025-09-04T23:45:58.452703986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:58.472816 systemd[1]: Started cri-containerd-2cee6119f15123a4eaa29b4f774815a9f568373873fa689d0475764a04d330cf.scope - libcontainer container 2cee6119f15123a4eaa29b4f774815a9f568373873fa689d0475764a04d330cf. Sep 4 23:45:58.514713 containerd[1735]: time="2025-09-04T23:45:58.514575987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7bbxn,Uid:b233144a-53c3-46ce-8519-5ba3943f2e3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cee6119f15123a4eaa29b4f774815a9f568373873fa689d0475764a04d330cf\"" Sep 4 23:45:58.518281 systemd[1]: Created slice kubepods-besteffort-pod5340cd6d_1279_42f3_9174_d0003074c03e.slice - libcontainer container kubepods-besteffort-pod5340cd6d_1279_42f3_9174_d0003074c03e.slice. 
Sep 4 23:45:58.524188 containerd[1735]: time="2025-09-04T23:45:58.524119621Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 23:45:58.583044 kubelet[3344]: I0904 23:45:58.582940 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5340cd6d-1279-42f3-9174-d0003074c03e-cilium-config-path\") pod \"cilium-operator-5d85765b45-twj6x\" (UID: \"5340cd6d-1279-42f3-9174-d0003074c03e\") " pod="kube-system/cilium-operator-5d85765b45-twj6x" Sep 4 23:45:58.583044 kubelet[3344]: I0904 23:45:58.583001 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfjmt\" (UniqueName: \"kubernetes.io/projected/5340cd6d-1279-42f3-9174-d0003074c03e-kube-api-access-kfjmt\") pod \"cilium-operator-5d85765b45-twj6x\" (UID: \"5340cd6d-1279-42f3-9174-d0003074c03e\") " pod="kube-system/cilium-operator-5d85765b45-twj6x" Sep 4 23:45:58.825278 containerd[1735]: time="2025-09-04T23:45:58.824830669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-twj6x,Uid:5340cd6d-1279-42f3-9174-d0003074c03e,Namespace:kube-system,Attempt:0,}" Sep 4 23:45:58.868601 containerd[1735]: time="2025-09-04T23:45:58.868336281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:45:58.868601 containerd[1735]: time="2025-09-04T23:45:58.868404561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:45:58.868601 containerd[1735]: time="2025-09-04T23:45:58.868415841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:58.868901 containerd[1735]: time="2025-09-04T23:45:58.868580761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:58.891830 systemd[1]: Started cri-containerd-73aa472956f2155a23b8f547ad0871156a2d75d8cd2f02e109110f279fbd2978.scope - libcontainer container 73aa472956f2155a23b8f547ad0871156a2d75d8cd2f02e109110f279fbd2978. Sep 4 23:45:58.919811 containerd[1735]: time="2025-09-04T23:45:58.919549129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-twj6x,Uid:5340cd6d-1279-42f3-9174-d0003074c03e,Namespace:kube-system,Attempt:0,} returns sandbox id \"73aa472956f2155a23b8f547ad0871156a2d75d8cd2f02e109110f279fbd2978\"" Sep 4 23:45:59.279138 containerd[1735]: time="2025-09-04T23:45:59.278759579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zk56r,Uid:f1e7d305-baad-47df-bcb5-b16bc49595c0,Namespace:kube-system,Attempt:0,}" Sep 4 23:45:59.319285 containerd[1735]: time="2025-09-04T23:45:59.319192954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:45:59.319454 containerd[1735]: time="2025-09-04T23:45:59.319260434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:45:59.319454 containerd[1735]: time="2025-09-04T23:45:59.319276834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:59.319454 containerd[1735]: time="2025-09-04T23:45:59.319358314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:59.338828 systemd[1]: Started cri-containerd-6e60612635a9520746d1d08b93ecc99c6284f0fb8fc859f8952755539dd98626.scope - libcontainer container 6e60612635a9520746d1d08b93ecc99c6284f0fb8fc859f8952755539dd98626. Sep 4 23:45:59.360100 containerd[1735]: time="2025-09-04T23:45:59.360040448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zk56r,Uid:f1e7d305-baad-47df-bcb5-b16bc49595c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e60612635a9520746d1d08b93ecc99c6284f0fb8fc859f8952755539dd98626\"" Sep 4 23:45:59.363853 containerd[1735]: time="2025-09-04T23:45:59.363467605Z" level=info msg="CreateContainer within sandbox \"6e60612635a9520746d1d08b93ecc99c6284f0fb8fc859f8952755539dd98626\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 23:45:59.404091 containerd[1735]: time="2025-09-04T23:45:59.404039700Z" level=info msg="CreateContainer within sandbox \"6e60612635a9520746d1d08b93ecc99c6284f0fb8fc859f8952755539dd98626\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"568935bf84c003456aa18820040eba635b2f41a015d52764b88cc42d9fb9a7f8\"" Sep 4 23:45:59.405730 containerd[1735]: time="2025-09-04T23:45:59.404671019Z" level=info msg="StartContainer for \"568935bf84c003456aa18820040eba635b2f41a015d52764b88cc42d9fb9a7f8\"" Sep 4 23:45:59.430810 systemd[1]: Started cri-containerd-568935bf84c003456aa18820040eba635b2f41a015d52764b88cc42d9fb9a7f8.scope - libcontainer container 568935bf84c003456aa18820040eba635b2f41a015d52764b88cc42d9fb9a7f8. 
Sep 4 23:45:59.468930 containerd[1735]: time="2025-09-04T23:45:59.468878538Z" level=info msg="StartContainer for \"568935bf84c003456aa18820040eba635b2f41a015d52764b88cc42d9fb9a7f8\" returns successfully"
Sep 4 23:45:59.722244 kubelet[3344]: I0904 23:45:59.722067 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zk56r" podStartSLOduration=2.722048977 podStartE2EDuration="2.722048977s" podCreationTimestamp="2025-09-04 23:45:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:45:59.720183658 +0000 UTC m=+8.172201983" watchObservedRunningTime="2025-09-04 23:45:59.722048977 +0000 UTC m=+8.174067302"
Sep 4 23:46:04.104728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1086396706.mount: Deactivated successfully.
Sep 4 23:46:05.751156 containerd[1735]: time="2025-09-04T23:46:05.751085967Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:05.754790 containerd[1735]: time="2025-09-04T23:46:05.754553763Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Sep 4 23:46:05.758229 containerd[1735]: time="2025-09-04T23:46:05.758149759Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:05.760128 containerd[1735]: time="2025-09-04T23:46:05.759928197Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.235755456s"
Sep 4 23:46:05.760128 containerd[1735]: time="2025-09-04T23:46:05.759967117Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 4 23:46:05.762204 containerd[1735]: time="2025-09-04T23:46:05.761958955Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 4 23:46:05.765487 containerd[1735]: time="2025-09-04T23:46:05.765445071Z" level=info msg="CreateContainer within sandbox \"2cee6119f15123a4eaa29b4f774815a9f568373873fa689d0475764a04d330cf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 23:46:05.810208 containerd[1735]: time="2025-09-04T23:46:05.810154781Z" level=info msg="CreateContainer within sandbox \"2cee6119f15123a4eaa29b4f774815a9f568373873fa689d0475764a04d330cf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3747ef66415438edb7fafffecb4f8651dc6bd8c22a525c708c7047550efee46f\""
Sep 4 23:46:05.812162 containerd[1735]: time="2025-09-04T23:46:05.810782820Z" level=info msg="StartContainer for \"3747ef66415438edb7fafffecb4f8651dc6bd8c22a525c708c7047550efee46f\""
Sep 4 23:46:05.840873 systemd[1]: Started cri-containerd-3747ef66415438edb7fafffecb4f8651dc6bd8c22a525c708c7047550efee46f.scope - libcontainer container 3747ef66415438edb7fafffecb4f8651dc6bd8c22a525c708c7047550efee46f.
Sep 4 23:46:05.874160 containerd[1735]: time="2025-09-04T23:46:05.873486510Z" level=info msg="StartContainer for \"3747ef66415438edb7fafffecb4f8651dc6bd8c22a525c708c7047550efee46f\" returns successfully"
Sep 4 23:46:05.884795 systemd[1]: cri-containerd-3747ef66415438edb7fafffecb4f8651dc6bd8c22a525c708c7047550efee46f.scope: Deactivated successfully.
Sep 4 23:46:06.797214 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3747ef66415438edb7fafffecb4f8651dc6bd8c22a525c708c7047550efee46f-rootfs.mount: Deactivated successfully.
Sep 4 23:46:07.648166 containerd[1735]: time="2025-09-04T23:46:07.648043172Z" level=info msg="shim disconnected" id=3747ef66415438edb7fafffecb4f8651dc6bd8c22a525c708c7047550efee46f namespace=k8s.io
Sep 4 23:46:07.648166 containerd[1735]: time="2025-09-04T23:46:07.648097932Z" level=warning msg="cleaning up after shim disconnected" id=3747ef66415438edb7fafffecb4f8651dc6bd8c22a525c708c7047550efee46f namespace=k8s.io
Sep 4 23:46:07.648166 containerd[1735]: time="2025-09-04T23:46:07.648106332Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:46:07.726440 containerd[1735]: time="2025-09-04T23:46:07.726391153Z" level=info msg="CreateContainer within sandbox \"2cee6119f15123a4eaa29b4f774815a9f568373873fa689d0475764a04d330cf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 23:46:07.759180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2618177417.mount: Deactivated successfully.
Sep 4 23:46:07.768607 containerd[1735]: time="2025-09-04T23:46:07.768568561Z" level=info msg="CreateContainer within sandbox \"2cee6119f15123a4eaa29b4f774815a9f568373873fa689d0475764a04d330cf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b5ef25ab1bfa53f9f97616f7a4a860eb79650e5ea8f0e344ef75115f66300706\""
Sep 4 23:46:07.769555 containerd[1735]: time="2025-09-04T23:46:07.769366601Z" level=info msg="StartContainer for \"b5ef25ab1bfa53f9f97616f7a4a860eb79650e5ea8f0e344ef75115f66300706\""
Sep 4 23:46:07.799835 systemd[1]: Started cri-containerd-b5ef25ab1bfa53f9f97616f7a4a860eb79650e5ea8f0e344ef75115f66300706.scope - libcontainer container b5ef25ab1bfa53f9f97616f7a4a860eb79650e5ea8f0e344ef75115f66300706.
Sep 4 23:46:07.802316 systemd[1]: run-containerd-runc-k8s.io-b5ef25ab1bfa53f9f97616f7a4a860eb79650e5ea8f0e344ef75115f66300706-runc.JbJB5c.mount: Deactivated successfully.
Sep 4 23:46:07.833744 containerd[1735]: time="2025-09-04T23:46:07.833543073Z" level=info msg="StartContainer for \"b5ef25ab1bfa53f9f97616f7a4a860eb79650e5ea8f0e344ef75115f66300706\" returns successfully"
Sep 4 23:46:07.843136 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 23:46:07.843889 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:46:07.844629 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:46:07.850402 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:46:07.852678 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 23:46:07.853287 systemd[1]: cri-containerd-b5ef25ab1bfa53f9f97616f7a4a860eb79650e5ea8f0e344ef75115f66300706.scope: Deactivated successfully.
Sep 4 23:46:07.877540 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:46:07.881332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5ef25ab1bfa53f9f97616f7a4a860eb79650e5ea8f0e344ef75115f66300706-rootfs.mount: Deactivated successfully.
Sep 4 23:46:07.894724 containerd[1735]: time="2025-09-04T23:46:07.894622907Z" level=info msg="shim disconnected" id=b5ef25ab1bfa53f9f97616f7a4a860eb79650e5ea8f0e344ef75115f66300706 namespace=k8s.io
Sep 4 23:46:07.894724 containerd[1735]: time="2025-09-04T23:46:07.894717747Z" level=warning msg="cleaning up after shim disconnected" id=b5ef25ab1bfa53f9f97616f7a4a860eb79650e5ea8f0e344ef75115f66300706 namespace=k8s.io
Sep 4 23:46:07.895037 containerd[1735]: time="2025-09-04T23:46:07.894727187Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:46:08.735495 containerd[1735]: time="2025-09-04T23:46:08.735394548Z" level=info msg="CreateContainer within sandbox \"2cee6119f15123a4eaa29b4f774815a9f568373873fa689d0475764a04d330cf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 23:46:08.801856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount911248862.mount: Deactivated successfully.
Sep 4 23:46:08.831956 containerd[1735]: time="2025-09-04T23:46:08.831911793Z" level=info msg="CreateContainer within sandbox \"2cee6119f15123a4eaa29b4f774815a9f568373873fa689d0475764a04d330cf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e9b8b7988de25f62d754fc7b87925140e5e121eca5a7ff0da1f0a99938583dcf\""
Sep 4 23:46:08.834318 containerd[1735]: time="2025-09-04T23:46:08.834272111Z" level=info msg="StartContainer for \"e9b8b7988de25f62d754fc7b87925140e5e121eca5a7ff0da1f0a99938583dcf\""
Sep 4 23:46:08.864202 systemd[1]: Started cri-containerd-e9b8b7988de25f62d754fc7b87925140e5e121eca5a7ff0da1f0a99938583dcf.scope - libcontainer container e9b8b7988de25f62d754fc7b87925140e5e121eca5a7ff0da1f0a99938583dcf.
Sep 4 23:46:08.913790 systemd[1]: cri-containerd-e9b8b7988de25f62d754fc7b87925140e5e121eca5a7ff0da1f0a99938583dcf.scope: Deactivated successfully.
Sep 4 23:46:08.918301 containerd[1735]: time="2025-09-04T23:46:08.918134926Z" level=info msg="StartContainer for \"e9b8b7988de25f62d754fc7b87925140e5e121eca5a7ff0da1f0a99938583dcf\" returns successfully"
Sep 4 23:46:08.961918 containerd[1735]: time="2025-09-04T23:46:08.961619412Z" level=info msg="shim disconnected" id=e9b8b7988de25f62d754fc7b87925140e5e121eca5a7ff0da1f0a99938583dcf namespace=k8s.io
Sep 4 23:46:08.961918 containerd[1735]: time="2025-09-04T23:46:08.961737412Z" level=warning msg="cleaning up after shim disconnected" id=e9b8b7988de25f62d754fc7b87925140e5e121eca5a7ff0da1f0a99938583dcf namespace=k8s.io
Sep 4 23:46:08.961918 containerd[1735]: time="2025-09-04T23:46:08.961746612Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:46:09.298512 containerd[1735]: time="2025-09-04T23:46:09.298191429Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:09.301681 containerd[1735]: time="2025-09-04T23:46:09.301317427Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Sep 4 23:46:09.305118 containerd[1735]: time="2025-09-04T23:46:09.305048064Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:09.307109 containerd[1735]: time="2025-09-04T23:46:09.306584542Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.544589507s"
Sep 4 23:46:09.307109 containerd[1735]: time="2025-09-04T23:46:09.306629582Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 4 23:46:09.309885 containerd[1735]: time="2025-09-04T23:46:09.309852260Z" level=info msg="CreateContainer within sandbox \"73aa472956f2155a23b8f547ad0871156a2d75d8cd2f02e109110f279fbd2978\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 4 23:46:09.345960 containerd[1735]: time="2025-09-04T23:46:09.345909432Z" level=info msg="CreateContainer within sandbox \"73aa472956f2155a23b8f547ad0871156a2d75d8cd2f02e109110f279fbd2978\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a\""
Sep 4 23:46:09.346859 containerd[1735]: time="2025-09-04T23:46:09.346771151Z" level=info msg="StartContainer for \"ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a\""
Sep 4 23:46:09.373860 systemd[1]: Started cri-containerd-ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a.scope - libcontainer container ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a.
Sep 4 23:46:09.407232 containerd[1735]: time="2025-09-04T23:46:09.407179384Z" level=info msg="StartContainer for \"ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a\" returns successfully"
Sep 4 23:46:09.736739 containerd[1735]: time="2025-09-04T23:46:09.736679607Z" level=info msg="CreateContainer within sandbox \"2cee6119f15123a4eaa29b4f774815a9f568373873fa689d0475764a04d330cf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 23:46:09.780215 containerd[1735]: time="2025-09-04T23:46:09.780149813Z" level=info msg="CreateContainer within sandbox \"2cee6119f15123a4eaa29b4f774815a9f568373873fa689d0475764a04d330cf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"78376c8e86c89b7e28565cb93c8077c18531dd70260f10bd93f96e9d5abaf749\""
Sep 4 23:46:09.783718 containerd[1735]: time="2025-09-04T23:46:09.783671330Z" level=info msg="StartContainer for \"78376c8e86c89b7e28565cb93c8077c18531dd70260f10bd93f96e9d5abaf749\""
Sep 4 23:46:09.807442 kubelet[3344]: I0904 23:46:09.807372 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-twj6x" podStartSLOduration=1.4209580179999999 podStartE2EDuration="11.807354432s" podCreationTimestamp="2025-09-04 23:45:58 +0000 UTC" firstStartedPulling="2025-09-04 23:45:58.921235008 +0000 UTC m=+7.373253293" lastFinishedPulling="2025-09-04 23:46:09.307631422 +0000 UTC m=+17.759649707" observedRunningTime="2025-09-04 23:46:09.807023032 +0000 UTC m=+18.259041357" watchObservedRunningTime="2025-09-04 23:46:09.807354432 +0000 UTC m=+18.259372757"
Sep 4 23:46:09.809258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9b8b7988de25f62d754fc7b87925140e5e121eca5a7ff0da1f0a99938583dcf-rootfs.mount: Deactivated successfully.
Sep 4 23:46:09.840763 systemd[1]: Started cri-containerd-78376c8e86c89b7e28565cb93c8077c18531dd70260f10bd93f96e9d5abaf749.scope - libcontainer container 78376c8e86c89b7e28565cb93c8077c18531dd70260f10bd93f96e9d5abaf749.
Sep 4 23:46:09.900865 systemd[1]: cri-containerd-78376c8e86c89b7e28565cb93c8077c18531dd70260f10bd93f96e9d5abaf749.scope: Deactivated successfully.
Sep 4 23:46:09.902547 containerd[1735]: time="2025-09-04T23:46:09.902427157Z" level=info msg="StartContainer for \"78376c8e86c89b7e28565cb93c8077c18531dd70260f10bd93f96e9d5abaf749\" returns successfully"
Sep 4 23:46:09.938522 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78376c8e86c89b7e28565cb93c8077c18531dd70260f10bd93f96e9d5abaf749-rootfs.mount: Deactivated successfully.
Sep 4 23:46:10.253893 containerd[1735]: time="2025-09-04T23:46:10.253689323Z" level=info msg="shim disconnected" id=78376c8e86c89b7e28565cb93c8077c18531dd70260f10bd93f96e9d5abaf749 namespace=k8s.io
Sep 4 23:46:10.253893 containerd[1735]: time="2025-09-04T23:46:10.253741003Z" level=warning msg="cleaning up after shim disconnected" id=78376c8e86c89b7e28565cb93c8077c18531dd70260f10bd93f96e9d5abaf749 namespace=k8s.io
Sep 4 23:46:10.253893 containerd[1735]: time="2025-09-04T23:46:10.253748803Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:46:10.748405 containerd[1735]: time="2025-09-04T23:46:10.748362097Z" level=info msg="CreateContainer within sandbox \"2cee6119f15123a4eaa29b4f774815a9f568373873fa689d0475764a04d330cf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 23:46:10.799370 containerd[1735]: time="2025-09-04T23:46:10.799318657Z" level=info msg="CreateContainer within sandbox \"2cee6119f15123a4eaa29b4f774815a9f568373873fa689d0475764a04d330cf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9\""
Sep 4 23:46:10.801180 containerd[1735]: time="2025-09-04T23:46:10.799943297Z" level=info msg="StartContainer for \"c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9\""
Sep 4 23:46:10.837857 systemd[1]: Started cri-containerd-c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9.scope - libcontainer container c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9.
Sep 4 23:46:10.876792 containerd[1735]: time="2025-09-04T23:46:10.874241119Z" level=info msg="StartContainer for \"c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9\" returns successfully"
Sep 4 23:46:10.968052 kubelet[3344]: I0904 23:46:10.968017 3344 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 4 23:46:11.011545 systemd[1]: Created slice kubepods-burstable-podfd48aeb4_2ff1_415a_a6d6_d9421ed6dee1.slice - libcontainer container kubepods-burstable-podfd48aeb4_2ff1_415a_a6d6_d9421ed6dee1.slice.
Sep 4 23:46:11.033937 systemd[1]: Created slice kubepods-burstable-pod3513f54b_128c_41e8_8539_a87cf013f339.slice - libcontainer container kubepods-burstable-pod3513f54b_128c_41e8_8539_a87cf013f339.slice.
Sep 4 23:46:11.067794 kubelet[3344]: I0904 23:46:11.067610 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mrq4\" (UniqueName: \"kubernetes.io/projected/3513f54b-128c-41e8-8539-a87cf013f339-kube-api-access-8mrq4\") pod \"coredns-7c65d6cfc9-hnwlm\" (UID: \"3513f54b-128c-41e8-8539-a87cf013f339\") " pod="kube-system/coredns-7c65d6cfc9-hnwlm"
Sep 4 23:46:11.067794 kubelet[3344]: I0904 23:46:11.067674 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3513f54b-128c-41e8-8539-a87cf013f339-config-volume\") pod \"coredns-7c65d6cfc9-hnwlm\" (UID: \"3513f54b-128c-41e8-8539-a87cf013f339\") " pod="kube-system/coredns-7c65d6cfc9-hnwlm"
Sep 4 23:46:11.067794 kubelet[3344]: I0904 23:46:11.067698 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbb5b\" (UniqueName: \"kubernetes.io/projected/fd48aeb4-2ff1-415a-a6d6-d9421ed6dee1-kube-api-access-nbb5b\") pod \"coredns-7c65d6cfc9-g6rzl\" (UID: \"fd48aeb4-2ff1-415a-a6d6-d9421ed6dee1\") " pod="kube-system/coredns-7c65d6cfc9-g6rzl"
Sep 4 23:46:11.067794 kubelet[3344]: I0904 23:46:11.067718 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd48aeb4-2ff1-415a-a6d6-d9421ed6dee1-config-volume\") pod \"coredns-7c65d6cfc9-g6rzl\" (UID: \"fd48aeb4-2ff1-415a-a6d6-d9421ed6dee1\") " pod="kube-system/coredns-7c65d6cfc9-g6rzl"
Sep 4 23:46:11.319190 containerd[1735]: time="2025-09-04T23:46:11.318476732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g6rzl,Uid:fd48aeb4-2ff1-415a-a6d6-d9421ed6dee1,Namespace:kube-system,Attempt:0,}"
Sep 4 23:46:11.337303 containerd[1735]: time="2025-09-04T23:46:11.337262877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hnwlm,Uid:3513f54b-128c-41e8-8539-a87cf013f339,Namespace:kube-system,Attempt:0,}"
Sep 4 23:46:13.309924 systemd-networkd[1558]: cilium_host: Link UP
Sep 4 23:46:13.311329 systemd-networkd[1558]: cilium_net: Link UP
Sep 4 23:46:13.311875 systemd-networkd[1558]: cilium_net: Gained carrier
Sep 4 23:46:13.312052 systemd-networkd[1558]: cilium_host: Gained carrier
Sep 4 23:46:13.517161 systemd-networkd[1558]: cilium_vxlan: Link UP
Sep 4 23:46:13.517168 systemd-networkd[1558]: cilium_vxlan: Gained carrier
Sep 4 23:46:13.641850 systemd-networkd[1558]: cilium_host: Gained IPv6LL
Sep 4 23:46:13.881915 systemd-networkd[1558]: cilium_net: Gained IPv6LL
Sep 4 23:46:13.898683 kernel: NET: Registered PF_ALG protocol family
Sep 4 23:46:14.734495 systemd-networkd[1558]: lxc_health: Link UP
Sep 4 23:46:14.736989 systemd-networkd[1558]: lxc_health: Gained carrier
Sep 4 23:46:14.902769 systemd-networkd[1558]: lxcb5d92dd5e2d4: Link UP
Sep 4 23:46:14.916672 kernel: eth0: renamed from tmp4ffe4
Sep 4 23:46:14.923073 systemd-networkd[1558]: lxcb5d92dd5e2d4: Gained carrier
Sep 4 23:46:14.933515 systemd-networkd[1558]: lxc2b0c584d1de1: Link UP
Sep 4 23:46:14.937753 kernel: eth0: renamed from tmp70f42
Sep 4 23:46:14.942287 systemd-networkd[1558]: lxc2b0c584d1de1: Gained carrier
Sep 4 23:46:15.097823 systemd-networkd[1558]: cilium_vxlan: Gained IPv6LL
Sep 4 23:46:16.313861 systemd-networkd[1558]: lxcb5d92dd5e2d4: Gained IPv6LL
Sep 4 23:46:16.433773 kubelet[3344]: I0904 23:46:16.433689 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7bbxn" podStartSLOduration=12.194375765 podStartE2EDuration="19.433671379s" podCreationTimestamp="2025-09-04 23:45:57 +0000 UTC" firstStartedPulling="2025-09-04 23:45:58.522180662 +0000 UTC m=+6.974198987" lastFinishedPulling="2025-09-04 23:46:05.761476276 +0000 UTC m=+14.213494601" observedRunningTime="2025-09-04 23:46:11.779588532 +0000 UTC m=+20.231606857" watchObservedRunningTime="2025-09-04 23:46:16.433671379 +0000 UTC m=+24.885689704"
Sep 4 23:46:16.442773 systemd-networkd[1558]: lxc_health: Gained IPv6LL
Sep 4 23:46:16.954868 systemd-networkd[1558]: lxc2b0c584d1de1: Gained IPv6LL
Sep 4 23:46:18.634988 containerd[1735]: time="2025-09-04T23:46:18.634237324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:46:18.634988 containerd[1735]: time="2025-09-04T23:46:18.634347084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:46:18.634988 containerd[1735]: time="2025-09-04T23:46:18.634384964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:18.634988 containerd[1735]: time="2025-09-04T23:46:18.634479364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:18.655844 containerd[1735]: time="2025-09-04T23:46:18.654475787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:46:18.655844 containerd[1735]: time="2025-09-04T23:46:18.654533707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:46:18.655844 containerd[1735]: time="2025-09-04T23:46:18.654548907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:18.655844 containerd[1735]: time="2025-09-04T23:46:18.654615427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:18.681898 systemd[1]: Started cri-containerd-4ffe460909a5b94cd4a1053eb0fbc42d2fc6712a256052397721decaa56a7e41.scope - libcontainer container 4ffe460909a5b94cd4a1053eb0fbc42d2fc6712a256052397721decaa56a7e41.
Sep 4 23:46:18.700039 systemd[1]: Started cri-containerd-70f42123b88e6036231761e4afc83b4b27e2b07a7aefc96712387233b4a72c07.scope - libcontainer container 70f42123b88e6036231761e4afc83b4b27e2b07a7aefc96712387233b4a72c07.
Sep 4 23:46:18.765966 containerd[1735]: time="2025-09-04T23:46:18.765908213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hnwlm,Uid:3513f54b-128c-41e8-8539-a87cf013f339,Namespace:kube-system,Attempt:0,} returns sandbox id \"70f42123b88e6036231761e4afc83b4b27e2b07a7aefc96712387233b4a72c07\""
Sep 4 23:46:18.766230 containerd[1735]: time="2025-09-04T23:46:18.766128573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g6rzl,Uid:fd48aeb4-2ff1-415a-a6d6-d9421ed6dee1,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ffe460909a5b94cd4a1053eb0fbc42d2fc6712a256052397721decaa56a7e41\""
Sep 4 23:46:18.771924 containerd[1735]: time="2025-09-04T23:46:18.771369208Z" level=info msg="CreateContainer within sandbox \"70f42123b88e6036231761e4afc83b4b27e2b07a7aefc96712387233b4a72c07\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 23:46:18.771924 containerd[1735]: time="2025-09-04T23:46:18.771380888Z" level=info msg="CreateContainer within sandbox \"4ffe460909a5b94cd4a1053eb0fbc42d2fc6712a256052397721decaa56a7e41\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 23:46:18.815195 containerd[1735]: time="2025-09-04T23:46:18.815141891Z" level=info msg="CreateContainer within sandbox \"70f42123b88e6036231761e4afc83b4b27e2b07a7aefc96712387233b4a72c07\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e71a72c54200d316684f52ea53a6be732d9fd5ab82f0fdb6a0195d69f96825e2\""
Sep 4 23:46:18.816980 containerd[1735]: time="2025-09-04T23:46:18.816029450Z" level=info msg="StartContainer for \"e71a72c54200d316684f52ea53a6be732d9fd5ab82f0fdb6a0195d69f96825e2\""
Sep 4 23:46:18.840868 containerd[1735]: time="2025-09-04T23:46:18.840832990Z" level=info msg="CreateContainer within sandbox \"4ffe460909a5b94cd4a1053eb0fbc42d2fc6712a256052397721decaa56a7e41\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"787c76e85887da3b0cecf104bb9873ef7e2e768d0fc31c4c40dd2797e0de0871\""
Sep 4 23:46:18.841553 containerd[1735]: time="2025-09-04T23:46:18.841521229Z" level=info msg="StartContainer for \"787c76e85887da3b0cecf104bb9873ef7e2e768d0fc31c4c40dd2797e0de0871\""
Sep 4 23:46:18.843722 systemd[1]: Started cri-containerd-e71a72c54200d316684f52ea53a6be732d9fd5ab82f0fdb6a0195d69f96825e2.scope - libcontainer container e71a72c54200d316684f52ea53a6be732d9fd5ab82f0fdb6a0195d69f96825e2.
Sep 4 23:46:18.871866 systemd[1]: Started cri-containerd-787c76e85887da3b0cecf104bb9873ef7e2e768d0fc31c4c40dd2797e0de0871.scope - libcontainer container 787c76e85887da3b0cecf104bb9873ef7e2e768d0fc31c4c40dd2797e0de0871.
Sep 4 23:46:18.897755 containerd[1735]: time="2025-09-04T23:46:18.895639343Z" level=info msg="StartContainer for \"e71a72c54200d316684f52ea53a6be732d9fd5ab82f0fdb6a0195d69f96825e2\" returns successfully"
Sep 4 23:46:18.913443 containerd[1735]: time="2025-09-04T23:46:18.913391008Z" level=info msg="StartContainer for \"787c76e85887da3b0cecf104bb9873ef7e2e768d0fc31c4c40dd2797e0de0871\" returns successfully"
Sep 4 23:46:19.789577 kubelet[3344]: I0904 23:46:19.789496 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hnwlm" podStartSLOduration=21.789463109 podStartE2EDuration="21.789463109s" podCreationTimestamp="2025-09-04 23:45:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:19.78926579 +0000 UTC m=+28.241284115" watchObservedRunningTime="2025-09-04 23:46:19.789463109 +0000 UTC m=+28.241481434"
Sep 4 23:46:19.806439 kubelet[3344]: I0904 23:46:19.806361 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-g6rzl" podStartSLOduration=21.806341935 podStartE2EDuration="21.806341935s" podCreationTimestamp="2025-09-04 23:45:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:19.805508696 +0000 UTC m=+28.257527021" watchObservedRunningTime="2025-09-04 23:46:19.806341935 +0000 UTC m=+28.258360260"
Sep 4 23:47:58.230209 systemd[1]: Started sshd@7-10.200.20.36:22-10.200.16.10:36716.service - OpenSSH per-connection server daemon (10.200.16.10:36716).
Sep 4 23:47:58.725413 sshd[4739]: Accepted publickey for core from 10.200.16.10 port 36716 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:47:58.727025 sshd-session[4739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:47:58.732746 systemd-logind[1704]: New session 10 of user core.
Sep 4 23:47:58.741849 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 4 23:47:59.155857 sshd[4741]: Connection closed by 10.200.16.10 port 36716
Sep 4 23:47:59.156539 sshd-session[4739]: pam_unix(sshd:session): session closed for user core
Sep 4 23:47:59.160289 systemd[1]: sshd@7-10.200.20.36:22-10.200.16.10:36716.service: Deactivated successfully.
Sep 4 23:47:59.162769 systemd[1]: session-10.scope: Deactivated successfully.
Sep 4 23:47:59.164042 systemd-logind[1704]: Session 10 logged out. Waiting for processes to exit.
Sep 4 23:47:59.165289 systemd-logind[1704]: Removed session 10.
Sep 4 23:48:04.246620 systemd[1]: Started sshd@8-10.200.20.36:22-10.200.16.10:59546.service - OpenSSH per-connection server daemon (10.200.16.10:59546).
Sep 4 23:48:04.707220 sshd[4756]: Accepted publickey for core from 10.200.16.10 port 59546 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:48:04.708796 sshd-session[4756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:04.714442 systemd-logind[1704]: New session 11 of user core.
Sep 4 23:48:04.719810 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 4 23:48:05.115065 sshd[4758]: Connection closed by 10.200.16.10 port 59546
Sep 4 23:48:05.115797 sshd-session[4756]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:05.119607 systemd-logind[1704]: Session 11 logged out. Waiting for processes to exit.
Sep 4 23:48:05.120283 systemd[1]: sshd@8-10.200.20.36:22-10.200.16.10:59546.service: Deactivated successfully.
Sep 4 23:48:05.124413 systemd[1]: session-11.scope: Deactivated successfully.
Sep 4 23:48:05.125885 systemd-logind[1704]: Removed session 11.
Sep 4 23:48:10.202286 systemd[1]: Started sshd@9-10.200.20.36:22-10.200.16.10:43522.service - OpenSSH per-connection server daemon (10.200.16.10:43522).
Sep 4 23:48:10.658278 sshd[4771]: Accepted publickey for core from 10.200.16.10 port 43522 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:48:10.659292 sshd-session[4771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:10.669007 systemd-logind[1704]: New session 12 of user core.
Sep 4 23:48:10.673855 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 4 23:48:11.069152 sshd[4773]: Connection closed by 10.200.16.10 port 43522
Sep 4 23:48:11.070609 sshd-session[4771]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:11.073940 systemd[1]: sshd@9-10.200.20.36:22-10.200.16.10:43522.service: Deactivated successfully.
Sep 4 23:48:11.077572 systemd[1]: session-12.scope: Deactivated successfully.
Sep 4 23:48:11.080967 systemd-logind[1704]: Session 12 logged out. Waiting for processes to exit.
Sep 4 23:48:11.083849 systemd-logind[1704]: Removed session 12.
Sep 4 23:48:16.159964 systemd[1]: Started sshd@10-10.200.20.36:22-10.200.16.10:43534.service - OpenSSH per-connection server daemon (10.200.16.10:43534).
Sep 4 23:48:16.614408 sshd[4786]: Accepted publickey for core from 10.200.16.10 port 43534 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:48:16.615966 sshd-session[4786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:16.620831 systemd-logind[1704]: New session 13 of user core.
Sep 4 23:48:16.627819 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 4 23:48:17.010999 sshd[4788]: Connection closed by 10.200.16.10 port 43534
Sep 4 23:48:17.010495 sshd-session[4786]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:17.013607 systemd-logind[1704]: Session 13 logged out. Waiting for processes to exit.
Sep 4 23:48:17.013970 systemd[1]: sshd@10-10.200.20.36:22-10.200.16.10:43534.service: Deactivated successfully.
Sep 4 23:48:17.017127 systemd[1]: session-13.scope: Deactivated successfully.
Sep 4 23:48:17.019727 systemd-logind[1704]: Removed session 13.
Sep 4 23:48:17.097230 systemd[1]: Started sshd@11-10.200.20.36:22-10.200.16.10:43538.service - OpenSSH per-connection server daemon (10.200.16.10:43538).
Sep 4 23:48:17.552709 sshd[4800]: Accepted publickey for core from 10.200.16.10 port 43538 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:48:17.555187 sshd-session[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:17.560409 systemd-logind[1704]: New session 14 of user core.
Sep 4 23:48:17.563807 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 4 23:48:18.007688 sshd[4802]: Connection closed by 10.200.16.10 port 43538
Sep 4 23:48:18.008080 sshd-session[4800]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:18.011774 systemd[1]: sshd@11-10.200.20.36:22-10.200.16.10:43538.service: Deactivated successfully.
Sep 4 23:48:18.015858 systemd[1]: session-14.scope: Deactivated successfully.
Sep 4 23:48:18.016925 systemd-logind[1704]: Session 14 logged out. Waiting for processes to exit.
Sep 4 23:48:18.017893 systemd-logind[1704]: Removed session 14.
Sep 4 23:48:18.101892 systemd[1]: Started sshd@12-10.200.20.36:22-10.200.16.10:43554.service - OpenSSH per-connection server daemon (10.200.16.10:43554).
Sep 4 23:48:18.596222 sshd[4811]: Accepted publickey for core from 10.200.16.10 port 43554 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:48:18.598051 sshd-session[4811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:18.603369 systemd-logind[1704]: New session 15 of user core.
Sep 4 23:48:18.606824 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 4 23:48:19.025687 sshd[4813]: Connection closed by 10.200.16.10 port 43554
Sep 4 23:48:19.026259 sshd-session[4811]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:19.029587 systemd[1]: sshd@12-10.200.20.36:22-10.200.16.10:43554.service: Deactivated successfully.
Sep 4 23:48:19.032098 systemd[1]: session-15.scope: Deactivated successfully.
Sep 4 23:48:19.033268 systemd-logind[1704]: Session 15 logged out. Waiting for processes to exit.
Sep 4 23:48:19.034241 systemd-logind[1704]: Removed session 15.
Sep 4 23:48:24.113624 systemd[1]: Started sshd@13-10.200.20.36:22-10.200.16.10:35288.service - OpenSSH per-connection server daemon (10.200.16.10:35288).
Sep 4 23:48:24.571705 sshd[4825]: Accepted publickey for core from 10.200.16.10 port 35288 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:48:24.573081 sshd-session[4825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:24.577235 systemd-logind[1704]: New session 16 of user core.
Sep 4 23:48:24.586840 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 4 23:48:24.987353 sshd[4827]: Connection closed by 10.200.16.10 port 35288
Sep 4 23:48:24.987993 sshd-session[4825]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:24.992029 systemd[1]: sshd@13-10.200.20.36:22-10.200.16.10:35288.service: Deactivated successfully.
Sep 4 23:48:24.994213 systemd[1]: session-16.scope: Deactivated successfully.
Sep 4 23:48:24.995101 systemd-logind[1704]: Session 16 logged out. Waiting for processes to exit.
Sep 4 23:48:24.996042 systemd-logind[1704]: Removed session 16.
Sep 4 23:48:30.077867 systemd[1]: Started sshd@14-10.200.20.36:22-10.200.16.10:48774.service - OpenSSH per-connection server daemon (10.200.16.10:48774).
Sep 4 23:48:30.575136 sshd[4842]: Accepted publickey for core from 10.200.16.10 port 48774 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:30.576423 sshd-session[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:30.581713 systemd-logind[1704]: New session 17 of user core. Sep 4 23:48:30.584997 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 23:48:30.994782 sshd[4844]: Connection closed by 10.200.16.10 port 48774 Sep 4 23:48:30.995361 sshd-session[4842]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:30.999192 systemd[1]: sshd@14-10.200.20.36:22-10.200.16.10:48774.service: Deactivated successfully. Sep 4 23:48:30.999257 systemd-logind[1704]: Session 17 logged out. Waiting for processes to exit. Sep 4 23:48:31.002592 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 23:48:31.004726 systemd-logind[1704]: Removed session 17. Sep 4 23:48:31.090615 systemd[1]: Started sshd@15-10.200.20.36:22-10.200.16.10:48780.service - OpenSSH per-connection server daemon (10.200.16.10:48780). Sep 4 23:48:31.585165 sshd[4856]: Accepted publickey for core from 10.200.16.10 port 48780 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:31.587382 sshd-session[4856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:31.592232 systemd-logind[1704]: New session 18 of user core. Sep 4 23:48:31.598827 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 23:48:32.035278 sshd[4858]: Connection closed by 10.200.16.10 port 48780 Sep 4 23:48:32.035865 sshd-session[4856]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:32.039495 systemd[1]: sshd@15-10.200.20.36:22-10.200.16.10:48780.service: Deactivated successfully. Sep 4 23:48:32.041761 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 23:48:32.042487 systemd-logind[1704]: Session 18 logged out. 
Waiting for processes to exit. Sep 4 23:48:32.043724 systemd-logind[1704]: Removed session 18. Sep 4 23:48:32.119933 systemd[1]: Started sshd@16-10.200.20.36:22-10.200.16.10:48792.service - OpenSSH per-connection server daemon (10.200.16.10:48792). Sep 4 23:48:32.581724 sshd[4868]: Accepted publickey for core from 10.200.16.10 port 48792 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:32.583105 sshd-session[4868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:32.587412 systemd-logind[1704]: New session 19 of user core. Sep 4 23:48:32.596070 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 23:48:34.262559 sshd[4870]: Connection closed by 10.200.16.10 port 48792 Sep 4 23:48:34.261917 sshd-session[4868]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:34.265705 systemd[1]: sshd@16-10.200.20.36:22-10.200.16.10:48792.service: Deactivated successfully. Sep 4 23:48:34.268287 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 23:48:34.269413 systemd-logind[1704]: Session 19 logged out. Waiting for processes to exit. Sep 4 23:48:34.271376 systemd-logind[1704]: Removed session 19. Sep 4 23:48:34.354951 systemd[1]: Started sshd@17-10.200.20.36:22-10.200.16.10:48802.service - OpenSSH per-connection server daemon (10.200.16.10:48802). Sep 4 23:48:34.811146 sshd[4887]: Accepted publickey for core from 10.200.16.10 port 48802 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:34.812582 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:34.816711 systemd-logind[1704]: New session 20 of user core. Sep 4 23:48:34.824840 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 4 23:48:35.322750 sshd[4889]: Connection closed by 10.200.16.10 port 48802 Sep 4 23:48:35.323133 sshd-session[4887]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:35.326899 systemd-logind[1704]: Session 20 logged out. Waiting for processes to exit. Sep 4 23:48:35.327144 systemd[1]: sshd@17-10.200.20.36:22-10.200.16.10:48802.service: Deactivated successfully. Sep 4 23:48:35.329912 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 23:48:35.332184 systemd-logind[1704]: Removed session 20. Sep 4 23:48:35.410909 systemd[1]: Started sshd@18-10.200.20.36:22-10.200.16.10:48816.service - OpenSSH per-connection server daemon (10.200.16.10:48816). Sep 4 23:48:35.866488 sshd[4899]: Accepted publickey for core from 10.200.16.10 port 48816 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:35.868132 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:35.872801 systemd-logind[1704]: New session 21 of user core. Sep 4 23:48:35.878840 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 23:48:36.270260 sshd[4901]: Connection closed by 10.200.16.10 port 48816 Sep 4 23:48:36.270872 sshd-session[4899]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:36.274538 systemd-logind[1704]: Session 21 logged out. Waiting for processes to exit. Sep 4 23:48:36.275533 systemd[1]: sshd@18-10.200.20.36:22-10.200.16.10:48816.service: Deactivated successfully. Sep 4 23:48:36.278261 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 23:48:36.280742 systemd-logind[1704]: Removed session 21. Sep 4 23:48:41.357951 systemd[1]: Started sshd@19-10.200.20.36:22-10.200.16.10:57418.service - OpenSSH per-connection server daemon (10.200.16.10:57418). 
Sep 4 23:48:41.814046 sshd[4916]: Accepted publickey for core from 10.200.16.10 port 57418 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:41.815500 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:41.819750 systemd-logind[1704]: New session 22 of user core. Sep 4 23:48:41.827802 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 23:48:42.217907 sshd[4918]: Connection closed by 10.200.16.10 port 57418 Sep 4 23:48:42.217812 sshd-session[4916]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:42.220740 systemd-logind[1704]: Session 22 logged out. Waiting for processes to exit. Sep 4 23:48:42.221358 systemd[1]: sshd@19-10.200.20.36:22-10.200.16.10:57418.service: Deactivated successfully. Sep 4 23:48:42.223720 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 23:48:42.227199 systemd-logind[1704]: Removed session 22. Sep 4 23:48:47.315955 systemd[1]: Started sshd@20-10.200.20.36:22-10.200.16.10:57428.service - OpenSSH per-connection server daemon (10.200.16.10:57428). Sep 4 23:48:47.810564 sshd[4930]: Accepted publickey for core from 10.200.16.10 port 57428 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:47.812016 sshd-session[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:47.817006 systemd-logind[1704]: New session 23 of user core. Sep 4 23:48:47.822824 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 23:48:48.227494 sshd[4932]: Connection closed by 10.200.16.10 port 57428 Sep 4 23:48:48.228324 sshd-session[4930]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:48.232512 systemd-logind[1704]: Session 23 logged out. Waiting for processes to exit. Sep 4 23:48:48.232578 systemd[1]: sshd@20-10.200.20.36:22-10.200.16.10:57428.service: Deactivated successfully. 
Sep 4 23:48:48.235490 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 23:48:48.237448 systemd-logind[1704]: Removed session 23. Sep 4 23:48:53.321659 systemd[1]: Started sshd@21-10.200.20.36:22-10.200.16.10:43148.service - OpenSSH per-connection server daemon (10.200.16.10:43148). Sep 4 23:48:53.820854 sshd[4945]: Accepted publickey for core from 10.200.16.10 port 43148 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:53.822243 sshd-session[4945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:53.826582 systemd-logind[1704]: New session 24 of user core. Sep 4 23:48:53.836834 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 23:48:54.239583 sshd[4947]: Connection closed by 10.200.16.10 port 43148 Sep 4 23:48:54.239486 sshd-session[4945]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:54.243194 systemd-logind[1704]: Session 24 logged out. Waiting for processes to exit. Sep 4 23:48:54.244292 systemd[1]: sshd@21-10.200.20.36:22-10.200.16.10:43148.service: Deactivated successfully. Sep 4 23:48:54.247201 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 23:48:54.248960 systemd-logind[1704]: Removed session 24. Sep 4 23:48:54.326579 systemd[1]: Started sshd@22-10.200.20.36:22-10.200.16.10:43164.service - OpenSSH per-connection server daemon (10.200.16.10:43164). Sep 4 23:48:54.793001 sshd[4959]: Accepted publickey for core from 10.200.16.10 port 43164 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:54.794305 sshd-session[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:54.799197 systemd-logind[1704]: New session 25 of user core. Sep 4 23:48:54.803812 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 4 23:48:56.628947 systemd[1]: run-containerd-runc-k8s.io-c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9-runc.wAqzya.mount: Deactivated successfully. Sep 4 23:48:56.630891 containerd[1735]: time="2025-09-04T23:48:56.629837547Z" level=info msg="StopContainer for \"ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a\" with timeout 30 (s)" Sep 4 23:48:56.634155 containerd[1735]: time="2025-09-04T23:48:56.633531926Z" level=info msg="Stop container \"ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a\" with signal terminated" Sep 4 23:48:56.647005 containerd[1735]: time="2025-09-04T23:48:56.646956930Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 23:48:56.651298 systemd[1]: cri-containerd-ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a.scope: Deactivated successfully. Sep 4 23:48:56.659890 containerd[1735]: time="2025-09-04T23:48:56.659772898Z" level=info msg="StopContainer for \"c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9\" with timeout 2 (s)" Sep 4 23:48:56.660265 containerd[1735]: time="2025-09-04T23:48:56.660200255Z" level=info msg="Stop container \"c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9\" with signal terminated" Sep 4 23:48:56.670390 systemd-networkd[1558]: lxc_health: Link DOWN Sep 4 23:48:56.670401 systemd-networkd[1558]: lxc_health: Lost carrier Sep 4 23:48:56.679820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a-rootfs.mount: Deactivated successfully. Sep 4 23:48:56.691854 systemd[1]: cri-containerd-c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9.scope: Deactivated successfully. 
Sep 4 23:48:56.692335 systemd[1]: cri-containerd-c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9.scope: Consumed 6.556s CPU time, 124.7M memory peak, 128K read from disk, 12.9M written to disk. Sep 4 23:48:56.713351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9-rootfs.mount: Deactivated successfully. Sep 4 23:48:56.736890 containerd[1735]: time="2025-09-04T23:48:56.736754347Z" level=info msg="shim disconnected" id=c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9 namespace=k8s.io Sep 4 23:48:56.736890 containerd[1735]: time="2025-09-04T23:48:56.736833907Z" level=warning msg="cleaning up after shim disconnected" id=c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9 namespace=k8s.io Sep 4 23:48:56.737208 containerd[1735]: time="2025-09-04T23:48:56.736845587Z" level=info msg="shim disconnected" id=ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a namespace=k8s.io Sep 4 23:48:56.737208 containerd[1735]: time="2025-09-04T23:48:56.737000266Z" level=warning msg="cleaning up after shim disconnected" id=ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a namespace=k8s.io Sep 4 23:48:56.737208 containerd[1735]: time="2025-09-04T23:48:56.737014146Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:48:56.737208 containerd[1735]: time="2025-09-04T23:48:56.736844627Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:48:56.753690 containerd[1735]: time="2025-09-04T23:48:56.753186666Z" level=warning msg="cleanup warnings time=\"2025-09-04T23:48:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 4 23:48:56.755478 containerd[1735]: time="2025-09-04T23:48:56.755143056Z" level=warning msg="cleanup warnings time=\"2025-09-04T23:48:56Z\" level=warning msg=\"failed to remove 
runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 4 23:48:56.762833 containerd[1735]: time="2025-09-04T23:48:56.762785819Z" level=info msg="StopContainer for \"c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9\" returns successfully" Sep 4 23:48:56.763406 kubelet[3344]: E0904 23:48:56.763371 3344 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 23:48:56.765382 containerd[1735]: time="2025-09-04T23:48:56.763592575Z" level=info msg="StopPodSandbox for \"2cee6119f15123a4eaa29b4f774815a9f568373873fa689d0475764a04d330cf\"" Sep 4 23:48:56.765382 containerd[1735]: time="2025-09-04T23:48:56.763626775Z" level=info msg="Container to stop \"78376c8e86c89b7e28565cb93c8077c18531dd70260f10bd93f96e9d5abaf749\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:48:56.765382 containerd[1735]: time="2025-09-04T23:48:56.763638214Z" level=info msg="Container to stop \"3747ef66415438edb7fafffecb4f8651dc6bd8c22a525c708c7047550efee46f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:48:56.765382 containerd[1735]: time="2025-09-04T23:48:56.763669054Z" level=info msg="Container to stop \"b5ef25ab1bfa53f9f97616f7a4a860eb79650e5ea8f0e344ef75115f66300706\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:48:56.765382 containerd[1735]: time="2025-09-04T23:48:56.763677614Z" level=info msg="Container to stop \"e9b8b7988de25f62d754fc7b87925140e5e121eca5a7ff0da1f0a99938583dcf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:48:56.765382 containerd[1735]: time="2025-09-04T23:48:56.763709334Z" level=info msg="Container to stop \"c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9\" must be in running or unknown state, 
current state \"CONTAINER_EXITED\"" Sep 4 23:48:56.765870 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2cee6119f15123a4eaa29b4f774815a9f568373873fa689d0475764a04d330cf-shm.mount: Deactivated successfully. Sep 4 23:48:56.766435 containerd[1735]: time="2025-09-04T23:48:56.766392841Z" level=info msg="StopContainer for \"ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a\" returns successfully" Sep 4 23:48:56.768189 containerd[1735]: time="2025-09-04T23:48:56.768158952Z" level=info msg="StopPodSandbox for \"73aa472956f2155a23b8f547ad0871156a2d75d8cd2f02e109110f279fbd2978\"" Sep 4 23:48:56.768473 containerd[1735]: time="2025-09-04T23:48:56.768352391Z" level=info msg="Container to stop \"ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:48:56.773380 systemd[1]: cri-containerd-2cee6119f15123a4eaa29b4f774815a9f568373873fa689d0475764a04d330cf.scope: Deactivated successfully. Sep 4 23:48:56.785678 systemd[1]: cri-containerd-73aa472956f2155a23b8f547ad0871156a2d75d8cd2f02e109110f279fbd2978.scope: Deactivated successfully. 
Sep 4 23:48:56.825582 containerd[1735]: time="2025-09-04T23:48:56.825375749Z" level=info msg="shim disconnected" id=2cee6119f15123a4eaa29b4f774815a9f568373873fa689d0475764a04d330cf namespace=k8s.io Sep 4 23:48:56.825582 containerd[1735]: time="2025-09-04T23:48:56.825453349Z" level=warning msg="cleaning up after shim disconnected" id=2cee6119f15123a4eaa29b4f774815a9f568373873fa689d0475764a04d330cf namespace=k8s.io Sep 4 23:48:56.825582 containerd[1735]: time="2025-09-04T23:48:56.825462269Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:48:56.829841 containerd[1735]: time="2025-09-04T23:48:56.829563288Z" level=info msg="shim disconnected" id=73aa472956f2155a23b8f547ad0871156a2d75d8cd2f02e109110f279fbd2978 namespace=k8s.io Sep 4 23:48:56.829841 containerd[1735]: time="2025-09-04T23:48:56.829626288Z" level=warning msg="cleaning up after shim disconnected" id=73aa472956f2155a23b8f547ad0871156a2d75d8cd2f02e109110f279fbd2978 namespace=k8s.io Sep 4 23:48:56.829841 containerd[1735]: time="2025-09-04T23:48:56.829634408Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:48:56.842675 containerd[1735]: time="2025-09-04T23:48:56.841919107Z" level=info msg="TearDown network for sandbox \"2cee6119f15123a4eaa29b4f774815a9f568373873fa689d0475764a04d330cf\" successfully" Sep 4 23:48:56.842675 containerd[1735]: time="2025-09-04T23:48:56.841958227Z" level=info msg="StopPodSandbox for \"2cee6119f15123a4eaa29b4f774815a9f568373873fa689d0475764a04d330cf\" returns successfully" Sep 4 23:48:56.849482 containerd[1735]: time="2025-09-04T23:48:56.849433990Z" level=info msg="TearDown network for sandbox \"73aa472956f2155a23b8f547ad0871156a2d75d8cd2f02e109110f279fbd2978\" successfully" Sep 4 23:48:56.849482 containerd[1735]: time="2025-09-04T23:48:56.849465550Z" level=info msg="StopPodSandbox for \"73aa472956f2155a23b8f547ad0871156a2d75d8cd2f02e109110f279fbd2978\" returns successfully" Sep 4 23:48:56.874354 kubelet[3344]: I0904 23:48:56.874301 3344 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b233144a-53c3-46ce-8519-5ba3943f2e3b-hubble-tls\") pod \"b233144a-53c3-46ce-8519-5ba3943f2e3b\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " Sep 4 23:48:56.874621 kubelet[3344]: I0904 23:48:56.874606 3344 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfjmt\" (UniqueName: \"kubernetes.io/projected/5340cd6d-1279-42f3-9174-d0003074c03e-kube-api-access-kfjmt\") pod \"5340cd6d-1279-42f3-9174-d0003074c03e\" (UID: \"5340cd6d-1279-42f3-9174-d0003074c03e\") " Sep 4 23:48:56.874747 kubelet[3344]: I0904 23:48:56.874731 3344 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-cni-path\") pod \"b233144a-53c3-46ce-8519-5ba3943f2e3b\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " Sep 4 23:48:56.874817 kubelet[3344]: I0904 23:48:56.874805 3344 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-cilium-run\") pod \"b233144a-53c3-46ce-8519-5ba3943f2e3b\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " Sep 4 23:48:56.875054 kubelet[3344]: I0904 23:48:56.874883 3344 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-xtables-lock\") pod \"b233144a-53c3-46ce-8519-5ba3943f2e3b\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " Sep 4 23:48:56.875054 kubelet[3344]: I0904 23:48:56.874904 3344 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-etc-cni-netd\") pod \"b233144a-53c3-46ce-8519-5ba3943f2e3b\" (UID: 
\"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " Sep 4 23:48:56.875054 kubelet[3344]: I0904 23:48:56.874920 3344 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-lib-modules\") pod \"b233144a-53c3-46ce-8519-5ba3943f2e3b\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " Sep 4 23:48:56.875054 kubelet[3344]: I0904 23:48:56.874936 3344 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-hostproc\") pod \"b233144a-53c3-46ce-8519-5ba3943f2e3b\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " Sep 4 23:48:56.875054 kubelet[3344]: I0904 23:48:56.874952 3344 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-cilium-cgroup\") pod \"b233144a-53c3-46ce-8519-5ba3943f2e3b\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " Sep 4 23:48:56.875054 kubelet[3344]: I0904 23:48:56.874971 3344 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b233144a-53c3-46ce-8519-5ba3943f2e3b-clustermesh-secrets\") pod \"b233144a-53c3-46ce-8519-5ba3943f2e3b\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " Sep 4 23:48:56.875202 kubelet[3344]: I0904 23:48:56.874990 3344 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-host-proc-sys-net\") pod \"b233144a-53c3-46ce-8519-5ba3943f2e3b\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " Sep 4 23:48:56.875202 kubelet[3344]: I0904 23:48:56.875007 3344 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/b233144a-53c3-46ce-8519-5ba3943f2e3b-cilium-config-path\") pod \"b233144a-53c3-46ce-8519-5ba3943f2e3b\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " Sep 4 23:48:56.875419 kubelet[3344]: I0904 23:48:56.875021 3344 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-bpf-maps\") pod \"b233144a-53c3-46ce-8519-5ba3943f2e3b\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " Sep 4 23:48:56.875419 kubelet[3344]: I0904 23:48:56.875283 3344 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xrdp\" (UniqueName: \"kubernetes.io/projected/b233144a-53c3-46ce-8519-5ba3943f2e3b-kube-api-access-7xrdp\") pod \"b233144a-53c3-46ce-8519-5ba3943f2e3b\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " Sep 4 23:48:56.875419 kubelet[3344]: I0904 23:48:56.875303 3344 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5340cd6d-1279-42f3-9174-d0003074c03e-cilium-config-path\") pod \"5340cd6d-1279-42f3-9174-d0003074c03e\" (UID: \"5340cd6d-1279-42f3-9174-d0003074c03e\") " Sep 4 23:48:56.875419 kubelet[3344]: I0904 23:48:56.875320 3344 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-host-proc-sys-kernel\") pod \"b233144a-53c3-46ce-8519-5ba3943f2e3b\" (UID: \"b233144a-53c3-46ce-8519-5ba3943f2e3b\") " Sep 4 23:48:56.875419 kubelet[3344]: I0904 23:48:56.875330 3344 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b233144a-53c3-46ce-8519-5ba3943f2e3b" (UID: "b233144a-53c3-46ce-8519-5ba3943f2e3b"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 23:48:56.875551 kubelet[3344]: I0904 23:48:56.875370 3344 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b233144a-53c3-46ce-8519-5ba3943f2e3b" (UID: "b233144a-53c3-46ce-8519-5ba3943f2e3b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 23:48:56.875705 kubelet[3344]: I0904 23:48:56.875583 3344 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b233144a-53c3-46ce-8519-5ba3943f2e3b" (UID: "b233144a-53c3-46ce-8519-5ba3943f2e3b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 23:48:56.875705 kubelet[3344]: I0904 23:48:56.875616 3344 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-hostproc" (OuterVolumeSpecName: "hostproc") pod "b233144a-53c3-46ce-8519-5ba3943f2e3b" (UID: "b233144a-53c3-46ce-8519-5ba3943f2e3b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 23:48:56.875705 kubelet[3344]: I0904 23:48:56.875631 3344 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b233144a-53c3-46ce-8519-5ba3943f2e3b" (UID: "b233144a-53c3-46ce-8519-5ba3943f2e3b"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 23:48:56.876545 kubelet[3344]: I0904 23:48:56.876168 3344 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b233144a-53c3-46ce-8519-5ba3943f2e3b" (UID: "b233144a-53c3-46ce-8519-5ba3943f2e3b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 23:48:56.879763 kubelet[3344]: I0904 23:48:56.879728 3344 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-cni-path" (OuterVolumeSpecName: "cni-path") pod "b233144a-53c3-46ce-8519-5ba3943f2e3b" (UID: "b233144a-53c3-46ce-8519-5ba3943f2e3b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 23:48:56.879914 kubelet[3344]: I0904 23:48:56.879792 3344 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b233144a-53c3-46ce-8519-5ba3943f2e3b" (UID: "b233144a-53c3-46ce-8519-5ba3943f2e3b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 23:48:56.880021 kubelet[3344]: I0904 23:48:56.879809 3344 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b233144a-53c3-46ce-8519-5ba3943f2e3b" (UID: "b233144a-53c3-46ce-8519-5ba3943f2e3b"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 23:48:56.880406 kubelet[3344]: I0904 23:48:56.880387 3344 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b233144a-53c3-46ce-8519-5ba3943f2e3b" (UID: "b233144a-53c3-46ce-8519-5ba3943f2e3b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 23:48:56.880883 kubelet[3344]: I0904 23:48:56.880380 3344 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5340cd6d-1279-42f3-9174-d0003074c03e-kube-api-access-kfjmt" (OuterVolumeSpecName: "kube-api-access-kfjmt") pod "5340cd6d-1279-42f3-9174-d0003074c03e" (UID: "5340cd6d-1279-42f3-9174-d0003074c03e"). InnerVolumeSpecName "kube-api-access-kfjmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 23:48:56.880883 kubelet[3344]: I0904 23:48:56.880455 3344 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b233144a-53c3-46ce-8519-5ba3943f2e3b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b233144a-53c3-46ce-8519-5ba3943f2e3b" (UID: "b233144a-53c3-46ce-8519-5ba3943f2e3b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 23:48:56.884359 kubelet[3344]: I0904 23:48:56.884327 3344 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b233144a-53c3-46ce-8519-5ba3943f2e3b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b233144a-53c3-46ce-8519-5ba3943f2e3b" (UID: "b233144a-53c3-46ce-8519-5ba3943f2e3b"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 4 23:48:56.885362 kubelet[3344]: I0904 23:48:56.885214 3344 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5340cd6d-1279-42f3-9174-d0003074c03e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5340cd6d-1279-42f3-9174-d0003074c03e" (UID: "5340cd6d-1279-42f3-9174-d0003074c03e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 23:48:56.886416 kubelet[3344]: I0904 23:48:56.886381 3344 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b233144a-53c3-46ce-8519-5ba3943f2e3b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b233144a-53c3-46ce-8519-5ba3943f2e3b" (UID: "b233144a-53c3-46ce-8519-5ba3943f2e3b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 23:48:56.886856 kubelet[3344]: I0904 23:48:56.886823 3344 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b233144a-53c3-46ce-8519-5ba3943f2e3b-kube-api-access-7xrdp" (OuterVolumeSpecName: "kube-api-access-7xrdp") pod "b233144a-53c3-46ce-8519-5ba3943f2e3b" (UID: "b233144a-53c3-46ce-8519-5ba3943f2e3b"). InnerVolumeSpecName "kube-api-access-7xrdp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 23:48:56.976056 kubelet[3344]: I0904 23:48:56.975880 3344 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-cilium-cgroup\") on node \"ci-4230.2.2-n-1143fb47ea\" DevicePath \"\"" Sep 4 23:48:56.976056 kubelet[3344]: I0904 23:48:56.975914 3344 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b233144a-53c3-46ce-8519-5ba3943f2e3b-clustermesh-secrets\") on node \"ci-4230.2.2-n-1143fb47ea\" DevicePath \"\"" Sep 4 23:48:56.976056 kubelet[3344]: I0904 23:48:56.975924 3344 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-host-proc-sys-net\") on node \"ci-4230.2.2-n-1143fb47ea\" DevicePath \"\"" Sep 4 23:48:56.976056 kubelet[3344]: I0904 23:48:56.975933 3344 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b233144a-53c3-46ce-8519-5ba3943f2e3b-cilium-config-path\") on node \"ci-4230.2.2-n-1143fb47ea\" DevicePath \"\"" Sep 4 23:48:56.976056 kubelet[3344]: I0904 23:48:56.975943 3344 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-bpf-maps\") on node \"ci-4230.2.2-n-1143fb47ea\" DevicePath \"\"" Sep 4 23:48:56.976056 kubelet[3344]: I0904 23:48:56.975952 3344 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xrdp\" (UniqueName: \"kubernetes.io/projected/b233144a-53c3-46ce-8519-5ba3943f2e3b-kube-api-access-7xrdp\") on node \"ci-4230.2.2-n-1143fb47ea\" DevicePath \"\"" Sep 4 23:48:56.976056 kubelet[3344]: I0904 23:48:56.975962 3344 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-host-proc-sys-kernel\") on node \"ci-4230.2.2-n-1143fb47ea\" DevicePath \"\"" Sep 4 23:48:56.976056 kubelet[3344]: I0904 23:48:56.975971 3344 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5340cd6d-1279-42f3-9174-d0003074c03e-cilium-config-path\") on node \"ci-4230.2.2-n-1143fb47ea\" DevicePath \"\"" Sep 4 23:48:56.976339 kubelet[3344]: I0904 23:48:56.975980 3344 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-cni-path\") on node \"ci-4230.2.2-n-1143fb47ea\" DevicePath \"\"" Sep 4 23:48:56.976339 kubelet[3344]: I0904 23:48:56.975988 3344 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b233144a-53c3-46ce-8519-5ba3943f2e3b-hubble-tls\") on node \"ci-4230.2.2-n-1143fb47ea\" DevicePath \"\"" Sep 4 23:48:56.976339 kubelet[3344]: I0904 23:48:56.975996 3344 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfjmt\" (UniqueName: \"kubernetes.io/projected/5340cd6d-1279-42f3-9174-d0003074c03e-kube-api-access-kfjmt\") on node \"ci-4230.2.2-n-1143fb47ea\" DevicePath \"\"" Sep 4 23:48:56.976339 kubelet[3344]: I0904 23:48:56.976004 3344 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-lib-modules\") on node \"ci-4230.2.2-n-1143fb47ea\" DevicePath \"\"" Sep 4 23:48:56.976339 kubelet[3344]: I0904 23:48:56.976012 3344 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-cilium-run\") on node \"ci-4230.2.2-n-1143fb47ea\" DevicePath \"\"" Sep 4 23:48:56.976339 kubelet[3344]: I0904 23:48:56.976021 3344 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-xtables-lock\") on node \"ci-4230.2.2-n-1143fb47ea\" DevicePath \"\"" Sep 4 23:48:56.976339 kubelet[3344]: I0904 23:48:56.976030 3344 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-etc-cni-netd\") on node \"ci-4230.2.2-n-1143fb47ea\" DevicePath \"\"" Sep 4 23:48:56.976339 kubelet[3344]: I0904 23:48:56.976037 3344 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b233144a-53c3-46ce-8519-5ba3943f2e3b-hostproc\") on node \"ci-4230.2.2-n-1143fb47ea\" DevicePath \"\"" Sep 4 23:48:57.068450 kubelet[3344]: I0904 23:48:57.066919 3344 scope.go:117] "RemoveContainer" containerID="ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a" Sep 4 23:48:57.070358 containerd[1735]: time="2025-09-04T23:48:57.070303497Z" level=info msg="RemoveContainer for \"ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a\"" Sep 4 23:48:57.076696 systemd[1]: Removed slice kubepods-besteffort-pod5340cd6d_1279_42f3_9174_d0003074c03e.slice - libcontainer container kubepods-besteffort-pod5340cd6d_1279_42f3_9174_d0003074c03e.slice. 
Sep 4 23:48:57.079738 containerd[1735]: time="2025-09-04T23:48:57.079690011Z" level=info msg="RemoveContainer for \"ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a\" returns successfully" Sep 4 23:48:57.081512 kubelet[3344]: I0904 23:48:57.081488 3344 scope.go:117] "RemoveContainer" containerID="ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a" Sep 4 23:48:57.082129 containerd[1735]: time="2025-09-04T23:48:57.082078479Z" level=error msg="ContainerStatus for \"ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a\": not found" Sep 4 23:48:57.082324 kubelet[3344]: E0904 23:48:57.082284 3344 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a\": not found" containerID="ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a" Sep 4 23:48:57.082529 kubelet[3344]: I0904 23:48:57.082422 3344 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a"} err="failed to get container status \"ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a\": rpc error: code = NotFound desc = an error occurred when try to find container \"ae44a585e7977afa4750566f4347c512ffd881d38f01691aa49db527516bdd0a\": not found" Sep 4 23:48:57.082847 kubelet[3344]: I0904 23:48:57.082830 3344 scope.go:117] "RemoveContainer" containerID="c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9" Sep 4 23:48:57.084781 systemd[1]: Removed slice kubepods-burstable-podb233144a_53c3_46ce_8519_5ba3943f2e3b.slice - libcontainer container kubepods-burstable-podb233144a_53c3_46ce_8519_5ba3943f2e3b.slice. 
Sep 4 23:48:57.085007 systemd[1]: kubepods-burstable-podb233144a_53c3_46ce_8519_5ba3943f2e3b.slice: Consumed 6.634s CPU time, 125.2M memory peak, 128K read from disk, 12.9M written to disk. Sep 4 23:48:57.087031 containerd[1735]: time="2025-09-04T23:48:57.086996455Z" level=info msg="RemoveContainer for \"c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9\"" Sep 4 23:48:57.096569 containerd[1735]: time="2025-09-04T23:48:57.096523168Z" level=info msg="RemoveContainer for \"c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9\" returns successfully" Sep 4 23:48:57.098281 kubelet[3344]: I0904 23:48:57.096980 3344 scope.go:117] "RemoveContainer" containerID="78376c8e86c89b7e28565cb93c8077c18531dd70260f10bd93f96e9d5abaf749" Sep 4 23:48:57.099369 containerd[1735]: time="2025-09-04T23:48:57.099336914Z" level=info msg="RemoveContainer for \"78376c8e86c89b7e28565cb93c8077c18531dd70260f10bd93f96e9d5abaf749\"" Sep 4 23:48:57.108592 containerd[1735]: time="2025-09-04T23:48:57.108548428Z" level=info msg="RemoveContainer for \"78376c8e86c89b7e28565cb93c8077c18531dd70260f10bd93f96e9d5abaf749\" returns successfully" Sep 4 23:48:57.109965 kubelet[3344]: I0904 23:48:57.108876 3344 scope.go:117] "RemoveContainer" containerID="e9b8b7988de25f62d754fc7b87925140e5e121eca5a7ff0da1f0a99938583dcf" Sep 4 23:48:57.111689 containerd[1735]: time="2025-09-04T23:48:57.111296374Z" level=info msg="RemoveContainer for \"e9b8b7988de25f62d754fc7b87925140e5e121eca5a7ff0da1f0a99938583dcf\"" Sep 4 23:48:57.119120 containerd[1735]: time="2025-09-04T23:48:57.119084096Z" level=info msg="RemoveContainer for \"e9b8b7988de25f62d754fc7b87925140e5e121eca5a7ff0da1f0a99938583dcf\" returns successfully" Sep 4 23:48:57.119464 kubelet[3344]: I0904 23:48:57.119444 3344 scope.go:117] "RemoveContainer" containerID="b5ef25ab1bfa53f9f97616f7a4a860eb79650e5ea8f0e344ef75115f66300706" Sep 4 23:48:57.122306 containerd[1735]: time="2025-09-04T23:48:57.122243880Z" level=info msg="RemoveContainer for 
\"b5ef25ab1bfa53f9f97616f7a4a860eb79650e5ea8f0e344ef75115f66300706\"" Sep 4 23:48:57.131255 containerd[1735]: time="2025-09-04T23:48:57.131081637Z" level=info msg="RemoveContainer for \"b5ef25ab1bfa53f9f97616f7a4a860eb79650e5ea8f0e344ef75115f66300706\" returns successfully" Sep 4 23:48:57.131343 kubelet[3344]: I0904 23:48:57.131301 3344 scope.go:117] "RemoveContainer" containerID="3747ef66415438edb7fafffecb4f8651dc6bd8c22a525c708c7047550efee46f" Sep 4 23:48:57.132812 containerd[1735]: time="2025-09-04T23:48:57.132717348Z" level=info msg="RemoveContainer for \"3747ef66415438edb7fafffecb4f8651dc6bd8c22a525c708c7047550efee46f\"" Sep 4 23:48:57.141273 containerd[1735]: time="2025-09-04T23:48:57.141230426Z" level=info msg="RemoveContainer for \"3747ef66415438edb7fafffecb4f8651dc6bd8c22a525c708c7047550efee46f\" returns successfully" Sep 4 23:48:57.141694 kubelet[3344]: I0904 23:48:57.141559 3344 scope.go:117] "RemoveContainer" containerID="c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9" Sep 4 23:48:57.141984 containerd[1735]: time="2025-09-04T23:48:57.141920423Z" level=error msg="ContainerStatus for \"c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9\": not found" Sep 4 23:48:57.142138 kubelet[3344]: E0904 23:48:57.142110 3344 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9\": not found" containerID="c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9" Sep 4 23:48:57.142181 kubelet[3344]: I0904 23:48:57.142143 3344 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9"} err="failed to get container 
status \"c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9\": rpc error: code = NotFound desc = an error occurred when try to find container \"c7799599c1df1ba19fe547d211ceb149f229f1c0bbddc3ea072bec21c3a572c9\": not found" Sep 4 23:48:57.142181 kubelet[3344]: I0904 23:48:57.142167 3344 scope.go:117] "RemoveContainer" containerID="78376c8e86c89b7e28565cb93c8077c18531dd70260f10bd93f96e9d5abaf749" Sep 4 23:48:57.142400 containerd[1735]: time="2025-09-04T23:48:57.142373981Z" level=error msg="ContainerStatus for \"78376c8e86c89b7e28565cb93c8077c18531dd70260f10bd93f96e9d5abaf749\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"78376c8e86c89b7e28565cb93c8077c18531dd70260f10bd93f96e9d5abaf749\": not found" Sep 4 23:48:57.142615 kubelet[3344]: E0904 23:48:57.142586 3344 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"78376c8e86c89b7e28565cb93c8077c18531dd70260f10bd93f96e9d5abaf749\": not found" containerID="78376c8e86c89b7e28565cb93c8077c18531dd70260f10bd93f96e9d5abaf749" Sep 4 23:48:57.142690 kubelet[3344]: I0904 23:48:57.142631 3344 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"78376c8e86c89b7e28565cb93c8077c18531dd70260f10bd93f96e9d5abaf749"} err="failed to get container status \"78376c8e86c89b7e28565cb93c8077c18531dd70260f10bd93f96e9d5abaf749\": rpc error: code = NotFound desc = an error occurred when try to find container \"78376c8e86c89b7e28565cb93c8077c18531dd70260f10bd93f96e9d5abaf749\": not found" Sep 4 23:48:57.142690 kubelet[3344]: I0904 23:48:57.142686 3344 scope.go:117] "RemoveContainer" containerID="e9b8b7988de25f62d754fc7b87925140e5e121eca5a7ff0da1f0a99938583dcf" Sep 4 23:48:57.143007 containerd[1735]: time="2025-09-04T23:48:57.142970138Z" level=error msg="ContainerStatus for \"e9b8b7988de25f62d754fc7b87925140e5e121eca5a7ff0da1f0a99938583dcf\" 
failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e9b8b7988de25f62d754fc7b87925140e5e121eca5a7ff0da1f0a99938583dcf\": not found" Sep 4 23:48:57.143128 kubelet[3344]: E0904 23:48:57.143101 3344 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e9b8b7988de25f62d754fc7b87925140e5e121eca5a7ff0da1f0a99938583dcf\": not found" containerID="e9b8b7988de25f62d754fc7b87925140e5e121eca5a7ff0da1f0a99938583dcf" Sep 4 23:48:57.143162 kubelet[3344]: I0904 23:48:57.143138 3344 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e9b8b7988de25f62d754fc7b87925140e5e121eca5a7ff0da1f0a99938583dcf"} err="failed to get container status \"e9b8b7988de25f62d754fc7b87925140e5e121eca5a7ff0da1f0a99938583dcf\": rpc error: code = NotFound desc = an error occurred when try to find container \"e9b8b7988de25f62d754fc7b87925140e5e121eca5a7ff0da1f0a99938583dcf\": not found" Sep 4 23:48:57.143162 kubelet[3344]: I0904 23:48:57.143155 3344 scope.go:117] "RemoveContainer" containerID="b5ef25ab1bfa53f9f97616f7a4a860eb79650e5ea8f0e344ef75115f66300706" Sep 4 23:48:57.143410 containerd[1735]: time="2025-09-04T23:48:57.143377936Z" level=error msg="ContainerStatus for \"b5ef25ab1bfa53f9f97616f7a4a860eb79650e5ea8f0e344ef75115f66300706\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5ef25ab1bfa53f9f97616f7a4a860eb79650e5ea8f0e344ef75115f66300706\": not found" Sep 4 23:48:57.143658 kubelet[3344]: E0904 23:48:57.143592 3344 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5ef25ab1bfa53f9f97616f7a4a860eb79650e5ea8f0e344ef75115f66300706\": not found" containerID="b5ef25ab1bfa53f9f97616f7a4a860eb79650e5ea8f0e344ef75115f66300706" Sep 4 23:48:57.143700 kubelet[3344]: I0904 23:48:57.143660 
3344 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5ef25ab1bfa53f9f97616f7a4a860eb79650e5ea8f0e344ef75115f66300706"} err="failed to get container status \"b5ef25ab1bfa53f9f97616f7a4a860eb79650e5ea8f0e344ef75115f66300706\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5ef25ab1bfa53f9f97616f7a4a860eb79650e5ea8f0e344ef75115f66300706\": not found" Sep 4 23:48:57.143700 kubelet[3344]: I0904 23:48:57.143676 3344 scope.go:117] "RemoveContainer" containerID="3747ef66415438edb7fafffecb4f8651dc6bd8c22a525c708c7047550efee46f" Sep 4 23:48:57.143979 containerd[1735]: time="2025-09-04T23:48:57.143949453Z" level=error msg="ContainerStatus for \"3747ef66415438edb7fafffecb4f8651dc6bd8c22a525c708c7047550efee46f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3747ef66415438edb7fafffecb4f8651dc6bd8c22a525c708c7047550efee46f\": not found" Sep 4 23:48:57.144115 kubelet[3344]: E0904 23:48:57.144065 3344 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3747ef66415438edb7fafffecb4f8651dc6bd8c22a525c708c7047550efee46f\": not found" containerID="3747ef66415438edb7fafffecb4f8651dc6bd8c22a525c708c7047550efee46f" Sep 4 23:48:57.144154 kubelet[3344]: I0904 23:48:57.144120 3344 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3747ef66415438edb7fafffecb4f8651dc6bd8c22a525c708c7047550efee46f"} err="failed to get container status \"3747ef66415438edb7fafffecb4f8651dc6bd8c22a525c708c7047550efee46f\": rpc error: code = NotFound desc = an error occurred when try to find container \"3747ef66415438edb7fafffecb4f8651dc6bd8c22a525c708c7047550efee46f\": not found" Sep 4 23:48:57.624395 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-73aa472956f2155a23b8f547ad0871156a2d75d8cd2f02e109110f279fbd2978-rootfs.mount: Deactivated successfully. Sep 4 23:48:57.624793 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73aa472956f2155a23b8f547ad0871156a2d75d8cd2f02e109110f279fbd2978-shm.mount: Deactivated successfully. Sep 4 23:48:57.624862 systemd[1]: var-lib-kubelet-pods-5340cd6d\x2d1279\x2d42f3\x2d9174\x2dd0003074c03e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkfjmt.mount: Deactivated successfully. Sep 4 23:48:57.624915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cee6119f15123a4eaa29b4f774815a9f568373873fa689d0475764a04d330cf-rootfs.mount: Deactivated successfully. Sep 4 23:48:57.624960 systemd[1]: var-lib-kubelet-pods-b233144a\x2d53c3\x2d46ce\x2d8519\x2d5ba3943f2e3b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7xrdp.mount: Deactivated successfully. Sep 4 23:48:57.625007 systemd[1]: var-lib-kubelet-pods-b233144a\x2d53c3\x2d46ce\x2d8519\x2d5ba3943f2e3b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 23:48:57.625055 systemd[1]: var-lib-kubelet-pods-b233144a\x2d53c3\x2d46ce\x2d8519\x2d5ba3943f2e3b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
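The `\x2d` and `\x7e` sequences in the mount-unit names above come from systemd's unit-name escaping of filesystem paths: the leading `/` is dropped, each remaining `/` becomes `-`, and any byte outside `[a-zA-Z0-9:_.]` (including `-` and `~` themselves) becomes `\xNN`. A minimal sketch of that rule set — the function name is mine, not from any tool appearing in the log, and it covers only the subset of the escaping rules these paths exercise:

```python
def systemd_escape_path(path: str) -> str:
    """Escape a filesystem path the way systemd derives mount-unit names.

    Covers the common cases seen in journal output: '/' -> '-', and any
    byte outside [a-zA-Z0-9:_.] -> \\xNN (lowercase hex). A leading '.'
    would also be escaped, which the i != 0 check handles.
    """
    path = path.strip("/")
    out = []
    for i, b in enumerate(path.encode()):
        c = chr(b)
        if c == "/":
            out.append("-")
        elif c.isalnum() or c in "_:" or (c == "." and i != 0):
            out.append(c)
        else:
            out.append("\\x%02x" % b)
    return "".join(out)

# Reproduces one of the mount units logged above:
path = ("/var/lib/kubelet/pods/b233144a-53c3-46ce-8519-5ba3943f2e3b"
        "/volumes/kubernetes.io~projected/hubble-tls")
print(systemd_escape_path(path) + ".mount")
# var-lib-kubelet-pods-b233144a\x2d53c3\x2d46ce\x2d8519\x2d5ba3943f2e3b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount
```

This matches the `var-lib-kubelet-pods-…-hubble\x2dtls.mount` unit that systemd reports deactivating above; `systemd-escape --path` performs the same transformation from the command line.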
Sep 4 23:48:57.649252 kubelet[3344]: I0904 23:48:57.649207 3344 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5340cd6d-1279-42f3-9174-d0003074c03e" path="/var/lib/kubelet/pods/5340cd6d-1279-42f3-9174-d0003074c03e/volumes" Sep 4 23:48:57.649619 kubelet[3344]: I0904 23:48:57.649595 3344 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b233144a-53c3-46ce-8519-5ba3943f2e3b" path="/var/lib/kubelet/pods/b233144a-53c3-46ce-8519-5ba3943f2e3b/volumes" Sep 4 23:48:58.658238 sshd[4961]: Connection closed by 10.200.16.10 port 43164 Sep 4 23:48:58.657640 sshd-session[4959]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:58.660777 systemd-logind[1704]: Session 25 logged out. Waiting for processes to exit. Sep 4 23:48:58.660964 systemd[1]: sshd@22-10.200.20.36:22-10.200.16.10:43164.service: Deactivated successfully. Sep 4 23:48:58.662915 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 23:48:58.665289 systemd-logind[1704]: Removed session 25. Sep 4 23:48:58.765909 systemd[1]: Started sshd@23-10.200.20.36:22-10.200.16.10:43178.service - OpenSSH per-connection server daemon (10.200.16.10:43178). Sep 4 23:48:59.261758 sshd[5117]: Accepted publickey for core from 10.200.16.10 port 43178 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:59.263131 sshd-session[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:59.267776 systemd-logind[1704]: New session 26 of user core. Sep 4 23:48:59.274855 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 4 23:49:00.732406 kubelet[3344]: E0904 23:49:00.732357 3344 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b233144a-53c3-46ce-8519-5ba3943f2e3b" containerName="mount-bpf-fs" Sep 4 23:49:00.732406 kubelet[3344]: E0904 23:49:00.732396 3344 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b233144a-53c3-46ce-8519-5ba3943f2e3b" containerName="cilium-agent" Sep 4 23:49:00.732406 kubelet[3344]: E0904 23:49:00.732403 3344 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b233144a-53c3-46ce-8519-5ba3943f2e3b" containerName="mount-cgroup" Sep 4 23:49:00.732406 kubelet[3344]: E0904 23:49:00.732409 3344 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b233144a-53c3-46ce-8519-5ba3943f2e3b" containerName="apply-sysctl-overwrites" Sep 4 23:49:00.732406 kubelet[3344]: E0904 23:49:00.732415 3344 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5340cd6d-1279-42f3-9174-d0003074c03e" containerName="cilium-operator" Sep 4 23:49:00.732406 kubelet[3344]: E0904 23:49:00.732421 3344 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b233144a-53c3-46ce-8519-5ba3943f2e3b" containerName="clean-cilium-state" Sep 4 23:49:00.733142 kubelet[3344]: I0904 23:49:00.732445 3344 memory_manager.go:354] "RemoveStaleState removing state" podUID="b233144a-53c3-46ce-8519-5ba3943f2e3b" containerName="cilium-agent" Sep 4 23:49:00.733142 kubelet[3344]: I0904 23:49:00.732452 3344 memory_manager.go:354] "RemoveStaleState removing state" podUID="5340cd6d-1279-42f3-9174-d0003074c03e" containerName="cilium-operator" Sep 4 23:49:00.740900 systemd[1]: Created slice kubepods-burstable-podd5a6e892_b20f_45ed_aec1_65346f8975a0.slice - libcontainer container kubepods-burstable-podd5a6e892_b20f_45ed_aec1_65346f8975a0.slice. 
Sep 4 23:49:00.790267 sshd[5119]: Connection closed by 10.200.16.10 port 43178 Sep 4 23:49:00.791126 sshd-session[5117]: pam_unix(sshd:session): session closed for user core Sep 4 23:49:00.795954 systemd-logind[1704]: Session 26 logged out. Waiting for processes to exit. Sep 4 23:49:00.796966 systemd[1]: sshd@23-10.200.20.36:22-10.200.16.10:43178.service: Deactivated successfully. Sep 4 23:49:00.797603 kubelet[3344]: I0904 23:49:00.797270 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d5a6e892-b20f-45ed-aec1-65346f8975a0-cilium-run\") pod \"cilium-mr2vs\" (UID: \"d5a6e892-b20f-45ed-aec1-65346f8975a0\") " pod="kube-system/cilium-mr2vs" Sep 4 23:49:00.797603 kubelet[3344]: I0904 23:49:00.797304 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5a6e892-b20f-45ed-aec1-65346f8975a0-hubble-tls\") pod \"cilium-mr2vs\" (UID: \"d5a6e892-b20f-45ed-aec1-65346f8975a0\") " pod="kube-system/cilium-mr2vs" Sep 4 23:49:00.797603 kubelet[3344]: I0904 23:49:00.797324 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5a6e892-b20f-45ed-aec1-65346f8975a0-cilium-config-path\") pod \"cilium-mr2vs\" (UID: \"d5a6e892-b20f-45ed-aec1-65346f8975a0\") " pod="kube-system/cilium-mr2vs" Sep 4 23:49:00.797603 kubelet[3344]: I0904 23:49:00.797340 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8gj9\" (UniqueName: \"kubernetes.io/projected/d5a6e892-b20f-45ed-aec1-65346f8975a0-kube-api-access-b8gj9\") pod \"cilium-mr2vs\" (UID: \"d5a6e892-b20f-45ed-aec1-65346f8975a0\") " pod="kube-system/cilium-mr2vs" Sep 4 23:49:00.797603 kubelet[3344]: I0904 23:49:00.797358 3344 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5a6e892-b20f-45ed-aec1-65346f8975a0-cilium-cgroup\") pod \"cilium-mr2vs\" (UID: \"d5a6e892-b20f-45ed-aec1-65346f8975a0\") " pod="kube-system/cilium-mr2vs" Sep 4 23:49:00.797603 kubelet[3344]: I0904 23:49:00.797375 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d5a6e892-b20f-45ed-aec1-65346f8975a0-clustermesh-secrets\") pod \"cilium-mr2vs\" (UID: \"d5a6e892-b20f-45ed-aec1-65346f8975a0\") " pod="kube-system/cilium-mr2vs" Sep 4 23:49:00.798556 kubelet[3344]: I0904 23:49:00.797390 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d5a6e892-b20f-45ed-aec1-65346f8975a0-cilium-ipsec-secrets\") pod \"cilium-mr2vs\" (UID: \"d5a6e892-b20f-45ed-aec1-65346f8975a0\") " pod="kube-system/cilium-mr2vs" Sep 4 23:49:00.798556 kubelet[3344]: I0904 23:49:00.797406 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5a6e892-b20f-45ed-aec1-65346f8975a0-xtables-lock\") pod \"cilium-mr2vs\" (UID: \"d5a6e892-b20f-45ed-aec1-65346f8975a0\") " pod="kube-system/cilium-mr2vs" Sep 4 23:49:00.798556 kubelet[3344]: I0904 23:49:00.797421 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5a6e892-b20f-45ed-aec1-65346f8975a0-hostproc\") pod \"cilium-mr2vs\" (UID: \"d5a6e892-b20f-45ed-aec1-65346f8975a0\") " pod="kube-system/cilium-mr2vs" Sep 4 23:49:00.798556 kubelet[3344]: I0904 23:49:00.797434 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/d5a6e892-b20f-45ed-aec1-65346f8975a0-lib-modules\") pod \"cilium-mr2vs\" (UID: \"d5a6e892-b20f-45ed-aec1-65346f8975a0\") " pod="kube-system/cilium-mr2vs" Sep 4 23:49:00.798556 kubelet[3344]: I0904 23:49:00.797449 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d5a6e892-b20f-45ed-aec1-65346f8975a0-host-proc-sys-kernel\") pod \"cilium-mr2vs\" (UID: \"d5a6e892-b20f-45ed-aec1-65346f8975a0\") " pod="kube-system/cilium-mr2vs" Sep 4 23:49:00.798556 kubelet[3344]: I0904 23:49:00.797464 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d5a6e892-b20f-45ed-aec1-65346f8975a0-cni-path\") pod \"cilium-mr2vs\" (UID: \"d5a6e892-b20f-45ed-aec1-65346f8975a0\") " pod="kube-system/cilium-mr2vs" Sep 4 23:49:00.798947 kubelet[3344]: I0904 23:49:00.797480 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d5a6e892-b20f-45ed-aec1-65346f8975a0-host-proc-sys-net\") pod \"cilium-mr2vs\" (UID: \"d5a6e892-b20f-45ed-aec1-65346f8975a0\") " pod="kube-system/cilium-mr2vs" Sep 4 23:49:00.798947 kubelet[3344]: I0904 23:49:00.797496 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d5a6e892-b20f-45ed-aec1-65346f8975a0-bpf-maps\") pod \"cilium-mr2vs\" (UID: \"d5a6e892-b20f-45ed-aec1-65346f8975a0\") " pod="kube-system/cilium-mr2vs" Sep 4 23:49:00.798947 kubelet[3344]: I0904 23:49:00.797514 3344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5a6e892-b20f-45ed-aec1-65346f8975a0-etc-cni-netd\") pod \"cilium-mr2vs\" (UID: 
\"d5a6e892-b20f-45ed-aec1-65346f8975a0\") " pod="kube-system/cilium-mr2vs" Sep 4 23:49:00.803068 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 23:49:00.806641 systemd[1]: session-26.scope: Consumed 1.079s CPU time, 23.6M memory peak. Sep 4 23:49:00.808781 systemd-logind[1704]: Removed session 26. Sep 4 23:49:00.881928 systemd[1]: Started sshd@24-10.200.20.36:22-10.200.16.10:44742.service - OpenSSH per-connection server daemon (10.200.16.10:44742). Sep 4 23:49:01.046573 containerd[1735]: time="2025-09-04T23:49:01.046454266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mr2vs,Uid:d5a6e892-b20f-45ed-aec1-65346f8975a0,Namespace:kube-system,Attempt:0,}" Sep 4 23:49:01.089407 containerd[1735]: time="2025-09-04T23:49:01.089301454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:49:01.090411 containerd[1735]: time="2025-09-04T23:49:01.090335289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:49:01.090526 containerd[1735]: time="2025-09-04T23:49:01.090377729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:49:01.090526 containerd[1735]: time="2025-09-04T23:49:01.090500928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:49:01.111874 systemd[1]: Started cri-containerd-f6f1e96ad5672066b949872aed7346d1b8544f454ff06f4c15e134b760315d8f.scope - libcontainer container f6f1e96ad5672066b949872aed7346d1b8544f454ff06f4c15e134b760315d8f. 
Sep 4 23:49:01.132952 containerd[1735]: time="2025-09-04T23:49:01.132912278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mr2vs,Uid:d5a6e892-b20f-45ed-aec1-65346f8975a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6f1e96ad5672066b949872aed7346d1b8544f454ff06f4c15e134b760315d8f\"" Sep 4 23:49:01.136778 containerd[1735]: time="2025-09-04T23:49:01.136739699Z" level=info msg="CreateContainer within sandbox \"f6f1e96ad5672066b949872aed7346d1b8544f454ff06f4c15e134b760315d8f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 23:49:01.170992 containerd[1735]: time="2025-09-04T23:49:01.170943650Z" level=info msg="CreateContainer within sandbox \"f6f1e96ad5672066b949872aed7346d1b8544f454ff06f4c15e134b760315d8f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c2889b4f0ccd2c2f1945fee83c1c9eb6021d089627dcbb8819525c55a59c678e\"" Sep 4 23:49:01.171885 containerd[1735]: time="2025-09-04T23:49:01.171856326Z" level=info msg="StartContainer for \"c2889b4f0ccd2c2f1945fee83c1c9eb6021d089627dcbb8819525c55a59c678e\"" Sep 4 23:49:01.194872 systemd[1]: Started cri-containerd-c2889b4f0ccd2c2f1945fee83c1c9eb6021d089627dcbb8819525c55a59c678e.scope - libcontainer container c2889b4f0ccd2c2f1945fee83c1c9eb6021d089627dcbb8819525c55a59c678e. Sep 4 23:49:01.227611 containerd[1735]: time="2025-09-04T23:49:01.227483690Z" level=info msg="StartContainer for \"c2889b4f0ccd2c2f1945fee83c1c9eb6021d089627dcbb8819525c55a59c678e\" returns successfully" Sep 4 23:49:01.234339 systemd[1]: cri-containerd-c2889b4f0ccd2c2f1945fee83c1c9eb6021d089627dcbb8819525c55a59c678e.scope: Deactivated successfully. 
Sep 4 23:49:01.299352 containerd[1735]: time="2025-09-04T23:49:01.299019697Z" level=info msg="shim disconnected" id=c2889b4f0ccd2c2f1945fee83c1c9eb6021d089627dcbb8819525c55a59c678e namespace=k8s.io Sep 4 23:49:01.299352 containerd[1735]: time="2025-09-04T23:49:01.299092896Z" level=warning msg="cleaning up after shim disconnected" id=c2889b4f0ccd2c2f1945fee83c1c9eb6021d089627dcbb8819525c55a59c678e namespace=k8s.io Sep 4 23:49:01.299352 containerd[1735]: time="2025-09-04T23:49:01.299101016Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:49:01.358943 sshd[5132]: Accepted publickey for core from 10.200.16.10 port 44742 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:49:01.360369 sshd-session[5132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:49:01.364707 systemd-logind[1704]: New session 27 of user core. Sep 4 23:49:01.368889 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 4 23:49:01.707948 sshd[5240]: Connection closed by 10.200.16.10 port 44742 Sep 4 23:49:01.708812 sshd-session[5132]: pam_unix(sshd:session): session closed for user core Sep 4 23:49:01.712507 systemd[1]: sshd@24-10.200.20.36:22-10.200.16.10:44742.service: Deactivated successfully. Sep 4 23:49:01.714337 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 23:49:01.716256 systemd-logind[1704]: Session 27 logged out. Waiting for processes to exit. Sep 4 23:49:01.717383 systemd-logind[1704]: Removed session 27. Sep 4 23:49:01.764681 kubelet[3344]: E0904 23:49:01.764539 3344 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 23:49:01.799917 systemd[1]: Started sshd@25-10.200.20.36:22-10.200.16.10:44756.service - OpenSSH per-connection server daemon (10.200.16.10:44756). 
Sep 4 23:49:02.092361 containerd[1735]: time="2025-09-04T23:49:02.091727175Z" level=info msg="CreateContainer within sandbox \"f6f1e96ad5672066b949872aed7346d1b8544f454ff06f4c15e134b760315d8f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 23:49:02.117014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1277824885.mount: Deactivated successfully. Sep 4 23:49:02.130208 containerd[1735]: time="2025-09-04T23:49:02.130066825Z" level=info msg="CreateContainer within sandbox \"f6f1e96ad5672066b949872aed7346d1b8544f454ff06f4c15e134b760315d8f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"81dfb1d629906521c5e3cffcb6427a63cc6de27e11c4e1d34c1d0d15b6c92ff3\"" Sep 4 23:49:02.132584 containerd[1735]: time="2025-09-04T23:49:02.130990741Z" level=info msg="StartContainer for \"81dfb1d629906521c5e3cffcb6427a63cc6de27e11c4e1d34c1d0d15b6c92ff3\"" Sep 4 23:49:02.161855 systemd[1]: Started cri-containerd-81dfb1d629906521c5e3cffcb6427a63cc6de27e11c4e1d34c1d0d15b6c92ff3.scope - libcontainer container 81dfb1d629906521c5e3cffcb6427a63cc6de27e11c4e1d34c1d0d15b6c92ff3. Sep 4 23:49:02.191116 containerd[1735]: time="2025-09-04T23:49:02.190991804Z" level=info msg="StartContainer for \"81dfb1d629906521c5e3cffcb6427a63cc6de27e11c4e1d34c1d0d15b6c92ff3\" returns successfully" Sep 4 23:49:02.193294 systemd[1]: cri-containerd-81dfb1d629906521c5e3cffcb6427a63cc6de27e11c4e1d34c1d0d15b6c92ff3.scope: Deactivated successfully. 
Sep 4 23:49:02.230990 containerd[1735]: time="2025-09-04T23:49:02.230759087Z" level=info msg="shim disconnected" id=81dfb1d629906521c5e3cffcb6427a63cc6de27e11c4e1d34c1d0d15b6c92ff3 namespace=k8s.io
Sep 4 23:49:02.230990 containerd[1735]: time="2025-09-04T23:49:02.230819007Z" level=warning msg="cleaning up after shim disconnected" id=81dfb1d629906521c5e3cffcb6427a63cc6de27e11c4e1d34c1d0d15b6c92ff3 namespace=k8s.io
Sep 4 23:49:02.230990 containerd[1735]: time="2025-09-04T23:49:02.230827247Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:49:02.240888 containerd[1735]: time="2025-09-04T23:49:02.240841797Z" level=warning msg="cleanup warnings time=\"2025-09-04T23:49:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 23:49:02.298684 sshd[5248]: Accepted publickey for core from 10.200.16.10 port 44756 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:49:02.300053 sshd-session[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:49:02.304994 systemd-logind[1704]: New session 28 of user core.
Sep 4 23:49:02.308806 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 4 23:49:02.902988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81dfb1d629906521c5e3cffcb6427a63cc6de27e11c4e1d34c1d0d15b6c92ff3-rootfs.mount: Deactivated successfully.
Sep 4 23:49:03.093346 containerd[1735]: time="2025-09-04T23:49:03.093225584Z" level=info msg="CreateContainer within sandbox \"f6f1e96ad5672066b949872aed7346d1b8544f454ff06f4c15e134b760315d8f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 23:49:03.134527 containerd[1735]: time="2025-09-04T23:49:03.134468633Z" level=info msg="CreateContainer within sandbox \"f6f1e96ad5672066b949872aed7346d1b8544f454ff06f4c15e134b760315d8f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"198f1c70d930dc897a9055c891aa3a4995ab0fdd576f651e7322c1eb8c806b51\""
Sep 4 23:49:03.135434 containerd[1735]: time="2025-09-04T23:49:03.135324908Z" level=info msg="StartContainer for \"198f1c70d930dc897a9055c891aa3a4995ab0fdd576f651e7322c1eb8c806b51\""
Sep 4 23:49:03.168829 systemd[1]: Started cri-containerd-198f1c70d930dc897a9055c891aa3a4995ab0fdd576f651e7322c1eb8c806b51.scope - libcontainer container 198f1c70d930dc897a9055c891aa3a4995ab0fdd576f651e7322c1eb8c806b51.
Sep 4 23:49:03.206268 systemd[1]: cri-containerd-198f1c70d930dc897a9055c891aa3a4995ab0fdd576f651e7322c1eb8c806b51.scope: Deactivated successfully.
Sep 4 23:49:03.210798 containerd[1735]: time="2025-09-04T23:49:03.210735887Z" level=info msg="StartContainer for \"198f1c70d930dc897a9055c891aa3a4995ab0fdd576f651e7322c1eb8c806b51\" returns successfully"
Sep 4 23:49:03.245407 containerd[1735]: time="2025-09-04T23:49:03.245340093Z" level=info msg="shim disconnected" id=198f1c70d930dc897a9055c891aa3a4995ab0fdd576f651e7322c1eb8c806b51 namespace=k8s.io
Sep 4 23:49:03.245407 containerd[1735]: time="2025-09-04T23:49:03.245398293Z" level=warning msg="cleaning up after shim disconnected" id=198f1c70d930dc897a9055c891aa3a4995ab0fdd576f651e7322c1eb8c806b51 namespace=k8s.io
Sep 4 23:49:03.245407 containerd[1735]: time="2025-09-04T23:49:03.245407413Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:49:03.255166 containerd[1735]: time="2025-09-04T23:49:03.255113639Z" level=warning msg="cleanup warnings time=\"2025-09-04T23:49:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 23:49:03.646571 kubelet[3344]: E0904 23:49:03.646291 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-hnwlm" podUID="3513f54b-128c-41e8-8539-a87cf013f339"
Sep 4 23:49:03.903130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-198f1c70d930dc897a9055c891aa3a4995ab0fdd576f651e7322c1eb8c806b51-rootfs.mount: Deactivated successfully.
Sep 4 23:49:04.103691 containerd[1735]: time="2025-09-04T23:49:04.103628656Z" level=info msg="CreateContainer within sandbox \"f6f1e96ad5672066b949872aed7346d1b8544f454ff06f4c15e134b760315d8f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 23:49:04.143308 containerd[1735]: time="2025-09-04T23:49:04.143222115Z" level=info msg="CreateContainer within sandbox \"f6f1e96ad5672066b949872aed7346d1b8544f454ff06f4c15e134b760315d8f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"670b9cf90ce2a1a257fdc00cdeafa2ef2d45a794279c625030ef41ab1714d184\""
Sep 4 23:49:04.143830 containerd[1735]: time="2025-09-04T23:49:04.143806392Z" level=info msg="StartContainer for \"670b9cf90ce2a1a257fdc00cdeafa2ef2d45a794279c625030ef41ab1714d184\""
Sep 4 23:49:04.172813 systemd[1]: Started cri-containerd-670b9cf90ce2a1a257fdc00cdeafa2ef2d45a794279c625030ef41ab1714d184.scope - libcontainer container 670b9cf90ce2a1a257fdc00cdeafa2ef2d45a794279c625030ef41ab1714d184.
Sep 4 23:49:04.196381 systemd[1]: cri-containerd-670b9cf90ce2a1a257fdc00cdeafa2ef2d45a794279c625030ef41ab1714d184.scope: Deactivated successfully.
Sep 4 23:49:04.203317 containerd[1735]: time="2025-09-04T23:49:04.203133500Z" level=info msg="StartContainer for \"670b9cf90ce2a1a257fdc00cdeafa2ef2d45a794279c625030ef41ab1714d184\" returns successfully"
Sep 4 23:49:04.221048 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-670b9cf90ce2a1a257fdc00cdeafa2ef2d45a794279c625030ef41ab1714d184-rootfs.mount: Deactivated successfully.
Sep 4 23:49:04.232551 containerd[1735]: time="2025-09-04T23:49:04.232315457Z" level=info msg="shim disconnected" id=670b9cf90ce2a1a257fdc00cdeafa2ef2d45a794279c625030ef41ab1714d184 namespace=k8s.io
Sep 4 23:49:04.232551 containerd[1735]: time="2025-09-04T23:49:04.232373496Z" level=warning msg="cleaning up after shim disconnected" id=670b9cf90ce2a1a257fdc00cdeafa2ef2d45a794279c625030ef41ab1714d184 namespace=k8s.io
Sep 4 23:49:04.232551 containerd[1735]: time="2025-09-04T23:49:04.232382656Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:49:05.108834 containerd[1735]: time="2025-09-04T23:49:05.108449575Z" level=info msg="CreateContainer within sandbox \"f6f1e96ad5672066b949872aed7346d1b8544f454ff06f4c15e134b760315d8f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 23:49:05.146910 containerd[1735]: time="2025-09-04T23:49:05.146855379Z" level=info msg="CreateContainer within sandbox \"f6f1e96ad5672066b949872aed7346d1b8544f454ff06f4c15e134b760315d8f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b3e8e01a7f201f1d691d65c82ee6383d6f515d135ae60d2953878469ab822163\""
Sep 4 23:49:05.148914 containerd[1735]: time="2025-09-04T23:49:05.148807971Z" level=info msg="StartContainer for \"b3e8e01a7f201f1d691d65c82ee6383d6f515d135ae60d2953878469ab822163\""
Sep 4 23:49:05.158583 kubelet[3344]: I0904 23:49:05.158526 3344 setters.go:600] "Node became not ready" node="ci-4230.2.2-n-1143fb47ea" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-04T23:49:05Z","lastTransitionTime":"2025-09-04T23:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 4 23:49:05.191847 systemd[1]: Started cri-containerd-b3e8e01a7f201f1d691d65c82ee6383d6f515d135ae60d2953878469ab822163.scope - libcontainer container b3e8e01a7f201f1d691d65c82ee6383d6f515d135ae60d2953878469ab822163.
Sep 4 23:49:05.229452 containerd[1735]: time="2025-09-04T23:49:05.229212886Z" level=info msg="StartContainer for \"b3e8e01a7f201f1d691d65c82ee6383d6f515d135ae60d2953878469ab822163\" returns successfully"
Sep 4 23:49:05.647620 kubelet[3344]: E0904 23:49:05.646842 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-hnwlm" podUID="3513f54b-128c-41e8-8539-a87cf013f339"
Sep 4 23:49:05.812719 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 4 23:49:08.522843 systemd-networkd[1558]: lxc_health: Link UP
Sep 4 23:49:08.531212 systemd-networkd[1558]: lxc_health: Gained carrier
Sep 4 23:49:09.073362 kubelet[3344]: I0904 23:49:09.073066 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mr2vs" podStartSLOduration=9.073047323 podStartE2EDuration="9.073047323s" podCreationTimestamp="2025-09-04 23:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:49:06.135496496 +0000 UTC m=+194.587514821" watchObservedRunningTime="2025-09-04 23:49:09.073047323 +0000 UTC m=+197.525065648"
Sep 4 23:49:10.330332 systemd-networkd[1558]: lxc_health: Gained IPv6LL
Sep 4 23:49:15.440213 sshd[5310]: Connection closed by 10.200.16.10 port 44756
Sep 4 23:49:15.440854 sshd-session[5248]: pam_unix(sshd:session): session closed for user core
Sep 4 23:49:15.444639 systemd[1]: sshd@25-10.200.20.36:22-10.200.16.10:44756.service: Deactivated successfully.
Sep 4 23:49:15.447251 systemd[1]: session-28.scope: Deactivated successfully.
Sep 4 23:49:15.448757 systemd-logind[1704]: Session 28 logged out. Waiting for processes to exit.
Sep 4 23:49:15.450336 systemd-logind[1704]: Removed session 28.