Jul 15 23:11:35.037782 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Jul 15 23:11:35.037800 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Jul 15 22:00:45 -00 2025 Jul 15 23:11:35.037806 kernel: KASLR enabled Jul 15 23:11:35.037810 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jul 15 23:11:35.037815 kernel: printk: legacy bootconsole [pl11] enabled Jul 15 23:11:35.037819 kernel: efi: EFI v2.7 by EDK II Jul 15 23:11:35.037824 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f216698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 Jul 15 23:11:35.037828 kernel: random: crng init done Jul 15 23:11:35.037832 kernel: secureboot: Secure boot disabled Jul 15 23:11:35.037835 kernel: ACPI: Early table checksum verification disabled Jul 15 23:11:35.037839 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jul 15 23:11:35.037843 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 23:11:35.037847 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 23:11:35.037852 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jul 15 23:11:35.037857 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 23:11:35.037861 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 23:11:35.037865 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 23:11:35.037870 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 23:11:35.037875 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 23:11:35.037879 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 23:11:35.037883 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jul 15 23:11:35.037887 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 23:11:35.037891 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jul 15 23:11:35.037896 kernel: ACPI: Use ACPI SPCR as default console: Yes Jul 15 23:11:35.037900 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jul 15 23:11:35.037904 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Jul 15 23:11:35.037908 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Jul 15 23:11:35.037912 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jul 15 23:11:35.037917 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jul 15 23:11:35.037922 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jul 15 23:11:35.037926 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jul 15 23:11:35.037930 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jul 15 23:11:35.037934 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jul 15 23:11:35.037939 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jul 15 23:11:35.037943 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jul 15 23:11:35.037947 kernel: ACPI: SRAT: Node 0 PXM 
0 [mem 0x800000000000-0xffffffffffff] hotplug Jul 15 23:11:35.037951 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Jul 15 23:11:35.037955 kernel: NODE_DATA(0) allocated [mem 0x1bf7fda00-0x1bf804fff] Jul 15 23:11:35.037959 kernel: Zone ranges: Jul 15 23:11:35.037963 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jul 15 23:11:35.037970 kernel: DMA32 empty Jul 15 23:11:35.037974 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jul 15 23:11:35.037979 kernel: Device empty Jul 15 23:11:35.037983 kernel: Movable zone start for each node Jul 15 23:11:35.037987 kernel: Early memory node ranges Jul 15 23:11:35.037992 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jul 15 23:11:35.037997 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] Jul 15 23:11:35.038001 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] Jul 15 23:11:35.038005 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] Jul 15 23:11:35.038010 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jul 15 23:11:35.038014 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jul 15 23:11:35.038018 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jul 15 23:11:35.038023 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jul 15 23:11:35.038027 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jul 15 23:11:35.038031 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jul 15 23:11:35.038035 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jul 15 23:11:35.038040 kernel: cma: Reserved 16 MiB at 0x000000003d400000 on node -1 Jul 15 23:11:35.038045 kernel: psci: probing for conduit method from ACPI. Jul 15 23:11:35.038049 kernel: psci: PSCIv1.1 detected in firmware. Jul 15 23:11:35.038053 kernel: psci: Using standard PSCI v0.2 function IDs Jul 15 23:11:35.038058 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jul 15 23:11:35.038062 kernel: psci: SMC Calling Convention v1.4 Jul 15 23:11:35.038066 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jul 15 23:11:35.038071 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jul 15 23:11:35.038075 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jul 15 23:11:35.038079 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jul 15 23:11:35.038084 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 15 23:11:35.038088 kernel: Detected PIPT I-cache on CPU0 Jul 15 23:11:35.038093 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Jul 15 23:11:35.038097 kernel: CPU features: detected: GIC system register CPU interface Jul 15 23:11:35.038102 kernel: CPU features: detected: Spectre-v4 Jul 15 23:11:35.038106 kernel: CPU features: detected: Spectre-BHB Jul 15 23:11:35.038110 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 15 23:11:35.038115 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 15 23:11:35.038119 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Jul 15 23:11:35.038123 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 15 23:11:35.038127 kernel: alternatives: applying boot alternatives Jul 15 23:11:35.038133 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=6efbcbd16e8e41b645be9f8e34b328753e37d282675200dab08e504f8e58a578 Jul 15 23:11:35.038138 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 15 23:11:35.038143 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 15 23:11:35.038147 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 15 23:11:35.038151 kernel: Fallback order for Node 0: 0 Jul 15 23:11:35.038156 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Jul 15 23:11:35.038160 kernel: Policy zone: Normal Jul 15 23:11:35.038164 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 15 23:11:35.038169 kernel: software IO TLB: area num 2. Jul 15 23:11:35.038173 kernel: software IO TLB: mapped [mem 0x0000000036200000-0x000000003a200000] (64MB) Jul 15 23:11:35.038178 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 15 23:11:35.038182 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 15 23:11:35.038187 kernel: rcu: RCU event tracing is enabled. Jul 15 23:11:35.038192 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 15 23:11:35.038197 kernel: Trampoline variant of Tasks RCU enabled. Jul 15 23:11:35.038201 kernel: Tracing variant of Tasks RCU enabled. Jul 15 23:11:35.038205 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 15 23:11:35.038210 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 15 23:11:35.038214 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 15 23:11:35.038219 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jul 15 23:11:35.038223 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 15 23:11:35.038227 kernel: GICv3: 960 SPIs implemented Jul 15 23:11:35.038232 kernel: GICv3: 0 Extended SPIs implemented Jul 15 23:11:35.038236 kernel: Root IRQ handler: gic_handle_irq Jul 15 23:11:35.038240 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jul 15 23:11:35.038245 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Jul 15 23:11:35.038250 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jul 15 23:11:35.038267 kernel: ITS: No ITS available, not enabling LPIs Jul 15 23:11:35.038272 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 15 23:11:35.038276 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Jul 15 23:11:35.038281 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 15 23:11:35.038285 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Jul 15 23:11:35.038289 kernel: Console: colour dummy device 80x25 Jul 15 23:11:35.038294 kernel: printk: legacy console [tty1] enabled Jul 15 23:11:35.038299 kernel: ACPI: Core revision 20240827 Jul 15 23:11:35.038303 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Jul 15 23:11:35.038309 kernel: pid_max: default: 32768 minimum: 301 Jul 15 23:11:35.038313 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 15 23:11:35.038318 kernel: landlock: Up and running. Jul 15 23:11:35.038322 kernel: SELinux: Initializing. Jul 15 23:11:35.038327 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 15 23:11:35.038334 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 15 23:11:35.038340 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1 Jul 15 23:11:35.038344 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0 Jul 15 23:11:35.038349 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jul 15 23:11:35.038354 kernel: rcu: Hierarchical SRCU implementation. Jul 15 23:11:35.038358 kernel: rcu: Max phase no-delay instances is 400. Jul 15 23:11:35.038364 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 15 23:11:35.038369 kernel: Remapping and enabling EFI services. Jul 15 23:11:35.038373 kernel: smp: Bringing up secondary CPUs ... Jul 15 23:11:35.038378 kernel: Detected PIPT I-cache on CPU1 Jul 15 23:11:35.038383 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jul 15 23:11:35.038388 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Jul 15 23:11:35.038393 kernel: smp: Brought up 1 node, 2 CPUs Jul 15 23:11:35.038397 kernel: SMP: Total of 2 processors activated. 
Jul 15 23:11:35.038402 kernel: CPU: All CPU(s) started at EL1 Jul 15 23:11:35.038407 kernel: CPU features: detected: 32-bit EL0 Support Jul 15 23:11:35.038412 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jul 15 23:11:35.038416 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 15 23:11:35.038421 kernel: CPU features: detected: Common not Private translations Jul 15 23:11:35.038426 kernel: CPU features: detected: CRC32 instructions Jul 15 23:11:35.038431 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Jul 15 23:11:35.038436 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 15 23:11:35.038441 kernel: CPU features: detected: LSE atomic instructions Jul 15 23:11:35.038445 kernel: CPU features: detected: Privileged Access Never Jul 15 23:11:35.038450 kernel: CPU features: detected: Speculation barrier (SB) Jul 15 23:11:35.038455 kernel: CPU features: detected: TLB range maintenance instructions Jul 15 23:11:35.038459 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 15 23:11:35.038464 kernel: CPU features: detected: Scalable Vector Extension Jul 15 23:11:35.038469 kernel: alternatives: applying system-wide alternatives Jul 15 23:11:35.038474 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Jul 15 23:11:35.038479 kernel: SVE: maximum available vector length 16 bytes per vector Jul 15 23:11:35.038484 kernel: SVE: default vector length 16 bytes per vector Jul 15 23:11:35.038489 kernel: Memory: 3959092K/4194160K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 213880K reserved, 16384K cma-reserved) Jul 15 23:11:35.038493 kernel: devtmpfs: initialized Jul 15 23:11:35.038498 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 15 23:11:35.038503 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 15 23:11:35.038508 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 15 23:11:35.038512 kernel: 0 pages in range for non-PLT usage Jul 15 23:11:35.038518 kernel: 508432 pages in range for PLT usage Jul 15 23:11:35.038522 kernel: pinctrl core: initialized pinctrl subsystem Jul 15 23:11:35.038527 kernel: SMBIOS 3.1.0 present. Jul 15 23:11:35.038532 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jul 15 23:11:35.038536 kernel: DMI: Memory slots populated: 2/2 Jul 15 23:11:35.038541 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 15 23:11:35.038546 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 15 23:11:35.038551 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 15 23:11:35.038555 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 15 23:11:35.038561 kernel: audit: initializing netlink subsys (disabled) Jul 15 23:11:35.038566 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Jul 15 23:11:35.038570 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 15 23:11:35.038575 kernel: cpuidle: using governor menu Jul 15 23:11:35.038580 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jul 15 23:11:35.038584 kernel: ASID allocator initialised with 32768 entries Jul 15 23:11:35.038589 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 15 23:11:35.038594 kernel: Serial: AMBA PL011 UART driver Jul 15 23:11:35.038598 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 15 23:11:35.038604 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 15 23:11:35.038608 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 15 23:11:35.038613 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 15 23:11:35.038618 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 15 23:11:35.038622 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 15 23:11:35.038627 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 15 23:11:35.038632 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 15 23:11:35.038636 kernel: ACPI: Added _OSI(Module Device) Jul 15 23:11:35.038641 kernel: ACPI: Added _OSI(Processor Device) Jul 15 23:11:35.038646 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 15 23:11:35.038651 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 15 23:11:35.038656 kernel: ACPI: Interpreter enabled Jul 15 23:11:35.038660 kernel: ACPI: Using GIC for interrupt routing Jul 15 23:11:35.038665 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jul 15 23:11:35.038670 kernel: printk: legacy console [ttyAMA0] enabled Jul 15 23:11:35.038675 kernel: printk: legacy bootconsole [pl11] disabled Jul 15 23:11:35.038679 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jul 15 23:11:35.038684 kernel: ACPI: CPU0 has been hot-added Jul 15 23:11:35.038689 kernel: ACPI: CPU1 has been hot-added Jul 15 23:11:35.038694 kernel: iommu: Default domain type: Translated Jul 15 23:11:35.038699 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 15 23:11:35.038703 kernel: efivars: Registered efivars operations Jul 15 23:11:35.038708 kernel: vgaarb: loaded Jul 15 23:11:35.038713 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 15 23:11:35.038717 kernel: VFS: Disk quotas dquot_6.6.0 Jul 15 23:11:35.038722 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 15 23:11:35.038727 kernel: pnp: PnP ACPI init Jul 15 23:11:35.038732 kernel: pnp: PnP ACPI: found 0 devices Jul 15 23:11:35.038737 kernel: NET: Registered PF_INET protocol family Jul 15 23:11:35.038741 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 15 23:11:35.038746 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 15 23:11:35.038751 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 15 23:11:35.038756 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 15 23:11:35.038760 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 15 23:11:35.038765 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 15 23:11:35.038770 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 15 23:11:35.038775 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 15 23:11:35.038780 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 15 23:11:35.038785 kernel: PCI: CLS 0 bytes, default 64 Jul 15 23:11:35.038789 kernel: kvm [1]: HYP mode not available Jul 
15 23:11:35.038794 kernel: Initialise system trusted keyrings Jul 15 23:11:35.038799 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 15 23:11:35.038803 kernel: Key type asymmetric registered Jul 15 23:11:35.038808 kernel: Asymmetric key parser 'x509' registered Jul 15 23:11:35.038813 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 15 23:11:35.038818 kernel: io scheduler mq-deadline registered Jul 15 23:11:35.038823 kernel: io scheduler kyber registered Jul 15 23:11:35.038827 kernel: io scheduler bfq registered Jul 15 23:11:35.038832 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 15 23:11:35.038837 kernel: thunder_xcv, ver 1.0 Jul 15 23:11:35.038841 kernel: thunder_bgx, ver 1.0 Jul 15 23:11:35.038846 kernel: nicpf, ver 1.0 Jul 15 23:11:35.038851 kernel: nicvf, ver 1.0 Jul 15 23:11:35.038958 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 15 23:11:35.039009 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-15T23:11:34 UTC (1752621094) Jul 15 23:11:35.039015 kernel: efifb: probing for efifb Jul 15 23:11:35.039020 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 15 23:11:35.039025 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 15 23:11:35.039029 kernel: efifb: scrolling: redraw Jul 15 23:11:35.039034 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 15 23:11:35.039039 kernel: Console: switching to colour frame buffer device 128x48 Jul 15 23:11:35.039044 kernel: fb0: EFI VGA frame buffer device Jul 15 23:11:35.039049 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jul 15 23:11:35.039054 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 15 23:11:35.039059 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jul 15 23:11:35.039063 kernel: NET: Registered PF_INET6 protocol family Jul 15 23:11:35.039068 kernel: watchdog: NMI not fully supported Jul 15 23:11:35.039073 kernel: watchdog: Hard watchdog permanently disabled Jul 15 23:11:35.039077 kernel: Segment Routing with IPv6 Jul 15 23:11:35.039082 kernel: In-situ OAM (IOAM) with IPv6 Jul 15 23:11:35.039087 kernel: NET: Registered PF_PACKET protocol family Jul 15 23:11:35.039092 kernel: Key type dns_resolver registered Jul 15 23:11:35.039097 kernel: registered taskstats version 1 Jul 15 23:11:35.039102 kernel: Loading compiled-in X.509 certificates Jul 15 23:11:35.039106 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 2e049b1166d7080a2074348abe7e86e115624bdd' Jul 15 23:11:35.039111 kernel: Demotion targets for Node 0: null Jul 15 23:11:35.039116 kernel: Key type .fscrypt registered Jul 15 23:11:35.039120 kernel: Key type fscrypt-provisioning registered Jul 15 23:11:35.039125 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 15 23:11:35.039130 kernel: ima: Allocated hash algorithm: sha1 Jul 15 23:11:35.039135 kernel: ima: No architecture policies found Jul 15 23:11:35.039140 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 15 23:11:35.039145 kernel: clk: Disabling unused clocks Jul 15 23:11:35.039149 kernel: PM: genpd: Disabling unused power domains Jul 15 23:11:35.039154 kernel: Warning: unable to open an initial console. 
Jul 15 23:11:35.039159 kernel: Freeing unused kernel memory: 39488K Jul 15 23:11:35.039163 kernel: Run /init as init process Jul 15 23:11:35.039168 kernel: with arguments: Jul 15 23:11:35.039173 kernel: /init Jul 15 23:11:35.039178 kernel: with environment: Jul 15 23:11:35.039183 kernel: HOME=/ Jul 15 23:11:35.039187 kernel: TERM=linux Jul 15 23:11:35.039192 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 15 23:11:35.039198 systemd[1]: Successfully made /usr/ read-only. Jul 15 23:11:35.039205 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 23:11:35.039210 systemd[1]: Detected virtualization microsoft. Jul 15 23:11:35.039216 systemd[1]: Detected architecture arm64. Jul 15 23:11:35.039221 systemd[1]: Running in initrd. Jul 15 23:11:35.039226 systemd[1]: No hostname configured, using default hostname. Jul 15 23:11:35.039231 systemd[1]: Hostname set to . Jul 15 23:11:35.039236 systemd[1]: Initializing machine ID from random generator. Jul 15 23:11:35.039241 systemd[1]: Queued start job for default target initrd.target. Jul 15 23:11:35.039246 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 23:11:35.039261 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 23:11:35.039268 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 15 23:11:35.039274 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 23:11:35.039279 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 15 23:11:35.039285 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 15 23:11:35.039291 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 15 23:11:35.039296 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 15 23:11:35.039301 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 23:11:35.039307 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 23:11:35.039312 systemd[1]: Reached target paths.target - Path Units. Jul 15 23:11:35.039317 systemd[1]: Reached target slices.target - Slice Units. Jul 15 23:11:35.039322 systemd[1]: Reached target swap.target - Swaps. Jul 15 23:11:35.039328 systemd[1]: Reached target timers.target - Timer Units. Jul 15 23:11:35.039333 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 23:11:35.039338 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 23:11:35.039343 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 15 23:11:35.039348 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 15 23:11:35.039354 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 23:11:35.039360 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jul 15 23:11:35.039365 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 23:11:35.039370 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 23:11:35.039375 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 15 23:11:35.039380 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 23:11:35.039385 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 15 23:11:35.039391 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 15 23:11:35.039396 systemd[1]: Starting systemd-fsck-usr.service... Jul 15 23:11:35.039402 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 23:11:35.039407 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 23:11:35.039422 systemd-journald[224]: Collecting audit messages is disabled. Jul 15 23:11:35.039437 systemd-journald[224]: Journal started Jul 15 23:11:35.039451 systemd-journald[224]: Runtime Journal (/run/log/journal/7b583ecf59cd4d569fda8956319e4709) is 8M, max 78.5M, 70.5M free. Jul 15 23:11:35.043289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:11:35.047947 systemd-modules-load[226]: Inserted module 'overlay' Jul 15 23:11:35.070436 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 15 23:11:35.070485 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 23:11:35.070496 kernel: Bridge firewalling registered Jul 15 23:11:35.072751 systemd-modules-load[226]: Inserted module 'br_netfilter' Jul 15 23:11:35.077367 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 15 23:11:35.081787 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 23:11:35.093526 systemd[1]: Finished systemd-fsck-usr.service. Jul 15 23:11:35.100270 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 23:11:35.109276 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:11:35.117162 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 15 23:11:35.130296 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:11:35.141623 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 23:11:35.159906 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 23:11:35.180619 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 23:11:35.186284 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:11:35.197293 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 23:11:35.201816 systemd-tmpfiles[247]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 15 23:11:35.213500 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 23:11:35.225823 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 15 23:11:35.246403 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 15 23:11:35.251666 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 23:11:35.273546 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 23:11:35.286413 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=6efbcbd16e8e41b645be9f8e34b328753e37d282675200dab08e504f8e58a578 Jul 15 23:11:35.319769 systemd-resolved[262]: Positive Trust Anchors: Jul 15 23:11:35.319786 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 23:11:35.319805 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 23:11:35.321473 systemd-resolved[262]: Defaulting to hostname 'linux'. Jul 15 23:11:35.323084 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 23:11:35.328288 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 23:11:35.423273 kernel: SCSI subsystem initialized Jul 15 23:11:35.429270 kernel: Loading iSCSI transport class v2.0-870. Jul 15 23:11:35.436271 kernel: iscsi: registered transport (tcp) Jul 15 23:11:35.449057 kernel: iscsi: registered transport (qla4xxx) Jul 15 23:11:35.449091 kernel: QLogic iSCSI HBA Driver Jul 15 23:11:35.461234 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 23:11:35.479479 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 23:11:35.490783 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 23:11:35.534702 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 15 23:11:35.540737 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 15 23:11:35.597266 kernel: raid6: neonx8 gen() 18546 MB/s Jul 15 23:11:35.617259 kernel: raid6: neonx4 gen() 18561 MB/s Jul 15 23:11:35.636258 kernel: raid6: neonx2 gen() 17077 MB/s Jul 15 23:11:35.655259 kernel: raid6: neonx1 gen() 15007 MB/s Jul 15 23:11:35.675260 kernel: raid6: int64x8 gen() 10536 MB/s Jul 15 23:11:35.694258 kernel: raid6: int64x4 gen() 10606 MB/s Jul 15 23:11:35.713259 kernel: raid6: int64x2 gen() 8979 MB/s Jul 15 23:11:35.735544 kernel: raid6: int64x1 gen() 6991 MB/s Jul 15 23:11:35.735554 kernel: raid6: using algorithm neonx4 gen() 18561 MB/s Jul 15 23:11:35.757368 kernel: raid6: .... 
xor() 15153 MB/s, rmw enabled Jul 15 23:11:35.757375 kernel: raid6: using neon recovery algorithm Jul 15 23:11:35.765275 kernel: xor: measuring software checksum speed Jul 15 23:11:35.765283 kernel: 8regs : 28593 MB/sec Jul 15 23:11:35.767908 kernel: 32regs : 28800 MB/sec Jul 15 23:11:35.770405 kernel: arm64_neon : 37364 MB/sec Jul 15 23:11:35.773326 kernel: xor: using function: arm64_neon (37364 MB/sec) Jul 15 23:11:35.812279 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 15 23:11:35.816834 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 15 23:11:35.825586 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 23:11:35.855080 systemd-udevd[473]: Using default interface naming scheme 'v255'. Jul 15 23:11:35.858755 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 23:11:35.870191 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 15 23:11:35.892236 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation Jul 15 23:11:35.911309 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 23:11:35.917170 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 23:11:35.960278 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 23:11:35.976319 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 15 23:11:36.028856 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 23:11:36.033083 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:11:36.046760 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:11:36.064396 kernel: hv_vmbus: Vmbus version:5.3 Jul 15 23:11:36.064414 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 15 23:11:36.064421 kernel: hv_vmbus: registering driver hid_hyperv Jul 15 23:11:36.060718 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:11:36.079700 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 15 23:11:36.079719 kernel: PTP clock support registered Jul 15 23:11:36.075215 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 23:11:36.096197 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 23:11:36.115116 kernel: hv_utils: Registering HyperV Utility Driver Jul 15 23:11:36.115139 kernel: hv_vmbus: registering driver hv_utils Jul 15 23:11:36.115147 kernel: hv_utils: Heartbeat IC version 3.0 Jul 15 23:11:36.115153 kernel: hv_vmbus: registering driver hv_netvsc Jul 15 23:11:36.115159 kernel: hv_utils: Shutdown IC version 3.2 Jul 15 23:11:36.115166 kernel: hv_utils: TimeSync IC version 4.0 Jul 15 23:11:36.096296 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 15 23:11:36.138115 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 15 23:11:36.138135 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jul 15 23:11:36.138143 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jul 15 23:11:36.138150 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 15 23:11:36.138271 kernel: hv_vmbus: registering driver hv_storvsc Jul 15 23:11:36.100786 systemd-resolved[262]: Clock change detected. Flushing caches. Jul 15 23:11:36.119677 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 23:11:36.157086 kernel: scsi host0: storvsc_host_t Jul 15 23:11:36.157123 kernel: scsi host1: storvsc_host_t Jul 15 23:11:36.157238 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jul 15 23:11:36.124308 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:11:36.165642 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jul 15 23:11:36.180543 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 15 23:11:36.180715 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Jul 15 23:11:36.189686 kernel: sd 1:0:0:0: [sda] Write Protect is off Jul 15 23:11:36.189858 kernel: hv_netvsc 0022487e-48d5-0022-487e-48d50022487e eth0: VF slot 1 added Jul 15 23:11:36.189996 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 15 23:11:36.195718 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 15 23:11:36.190894 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 15 23:11:36.210307 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#303 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jul 15 23:11:36.217571 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#310 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jul 15 23:11:36.224466 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 23:11:36.224493 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Jul 15 23:11:36.228652 kernel: hv_vmbus: registering driver hv_pci Jul 15 23:11:36.228688 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Jul 15 23:11:36.236314 kernel: hv_pci cc1b10dc-3ce2-48cb-a9cd-2568da939670: PCI VMBus probing: Using version 0x10004 Jul 15 23:11:36.236476 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 15 23:11:36.244845 kernel: hv_pci cc1b10dc-3ce2-48cb-a9cd-2568da939670: PCI host bridge to bus 3ce2:00 Jul 15 23:11:36.245004 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Jul 15 23:11:36.251547 kernel: pci_bus 3ce2:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jul 15 23:11:36.251693 kernel: pci_bus 3ce2:00: No busn resource found for root bus, will use [bus 00-ff] Jul 15 23:11:36.262587 kernel: pci 3ce2:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Jul 15 23:11:36.273057 kernel: pci 3ce2:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 15 23:11:36.273115 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#259 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 15 23:11:36.277568 kernel: pci 3ce2:00:02.0: enabling Extended Tags Jul 15 23:11:36.300084 kernel: pci 3ce2:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 3ce2:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Jul 15 23:11:36.300262 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#298 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 15 23:11:36.300335 kernel: pci_bus 3ce2:00: busn_res: [bus 00-ff] end is updated to 00 Jul 15 23:11:36.309565 kernel: pci 3ce2:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Jul 15 23:11:36.369484 kernel: mlx5_core 3ce2:00:02.0: enabling device (0000 -> 0002) Jul 15 23:11:36.377222 kernel: mlx5_core 3ce2:00:02.0: PTM is not supported by PCIe Jul 15 23:11:36.377349 kernel: mlx5_core 3ce2:00:02.0: firmware version: 16.30.5006 Jul 15 23:11:36.545082 kernel: hv_netvsc 0022487e-48d5-0022-487e-48d50022487e eth0: VF registering: eth1 Jul 15 23:11:36.545273 kernel: mlx5_core 3ce2:00:02.0 eth1: joined to eth0 Jul 15 23:11:36.551636 kernel: mlx5_core 3ce2:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jul 15 23:11:36.562248 kernel: mlx5_core 3ce2:00:02.0 enP15586s1: renamed from eth1 Jul 15 23:11:36.725881 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jul 15 23:11:36.758563 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jul 15 23:11:36.780720 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jul 15 23:11:36.786656 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jul 15 23:11:36.806781 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 15 23:11:36.823849 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 15 23:11:36.828961 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jul 15 23:11:36.837708 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 23:11:36.846914 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 23:11:36.860698 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 15 23:11:36.870758 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 15 23:11:36.889947 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 15 23:11:36.901757 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#251 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jul 15 23:11:36.907549 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 23:11:37.918042 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#199 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jul 15 23:11:37.933559 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 23:11:37.933795 disk-uuid[658]: The operation has completed successfully. Jul 15 23:11:37.997357 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 23:11:37.997453 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 15 23:11:38.021214 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 15 23:11:38.039755 sh[821]: Success Jul 15 23:11:38.069891 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 23:11:38.069948 kernel: device-mapper: uevent: version 1.0.3 Jul 15 23:11:38.074985 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 15 23:11:38.086560 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 15 23:11:38.261862 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 15 23:11:38.267199 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 15 23:11:38.287700 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 15 23:11:38.311508 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 15 23:11:38.311556 kernel: BTRFS: device fsid e70e9257-c19d-4e0a-b2ee-631da7d0eb2b devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (839) Jul 15 23:11:38.316723 kernel: BTRFS info (device dm-0): first mount of filesystem e70e9257-c19d-4e0a-b2ee-631da7d0eb2b Jul 15 23:11:38.321084 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 15 23:11:38.324082 kernel: BTRFS info (device dm-0): using free-space-tree Jul 15 23:11:38.555503 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 15 23:11:38.559720 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 15 23:11:38.566711 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 15 23:11:38.567414 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 15 23:11:38.589416 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 15 23:11:38.610563 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (864) Jul 15 23:11:38.620616 kernel: BTRFS info (device sda6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 15 23:11:38.620677 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 23:11:38.623907 kernel: BTRFS info (device sda6): using free-space-tree Jul 15 23:11:38.645591 kernel: BTRFS info (device sda6): last unmount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 15 23:11:38.647629 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 15 23:11:38.657628 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 15 23:11:38.708190 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 23:11:38.714638 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 23:11:38.749647 systemd-networkd[1008]: lo: Link UP Jul 15 23:11:38.749653 systemd-networkd[1008]: lo: Gained carrier Jul 15 23:11:38.750889 systemd-networkd[1008]: Enumeration completed Jul 15 23:11:38.752330 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 23:11:38.752872 systemd-networkd[1008]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:11:38.752875 systemd-networkd[1008]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 23:11:38.760175 systemd[1]: Reached target network.target - Network. Jul 15 23:11:38.828546 kernel: mlx5_core 3ce2:00:02.0 enP15586s1: Link up Jul 15 23:11:38.859632 kernel: hv_netvsc 0022487e-48d5-0022-487e-48d50022487e eth0: Data path switched to VF: enP15586s1 Jul 15 23:11:38.860018 systemd-networkd[1008]: enP15586s1: Link UP Jul 15 23:11:38.862966 systemd-networkd[1008]: eth0: Link UP Jul 15 23:11:38.863050 systemd-networkd[1008]: eth0: Gained carrier Jul 15 23:11:38.863062 systemd-networkd[1008]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:11:38.877810 systemd-networkd[1008]: enP15586s1: Gained carrier Jul 15 23:11:38.887561 systemd-networkd[1008]: eth0: DHCPv4 address 10.200.20.39/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 15 23:11:39.514367 ignition[935]: Ignition 2.21.0 Jul 15 23:11:39.516818 ignition[935]: Stage: fetch-offline Jul 15 23:11:39.516920 ignition[935]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:11:39.520973 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 23:11:39.516926 ignition[935]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 23:11:39.531879 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 15 23:11:39.517026 ignition[935]: parsed url from cmdline: "" Jul 15 23:11:39.517029 ignition[935]: no config URL provided Jul 15 23:11:39.517032 ignition[935]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 23:11:39.517037 ignition[935]: no config at "/usr/lib/ignition/user.ign" Jul 15 23:11:39.517040 ignition[935]: failed to fetch config: resource requires networking Jul 15 23:11:39.517170 ignition[935]: Ignition finished successfully Jul 15 23:11:39.561261 ignition[1017]: Ignition 2.21.0 Jul 15 23:11:39.561272 ignition[1017]: Stage: fetch Jul 15 23:11:39.561530 ignition[1017]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:11:39.561542 ignition[1017]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 23:11:39.561621 ignition[1017]: parsed url from cmdline: "" Jul 15 23:11:39.561624 ignition[1017]: no config URL provided Jul 15 23:11:39.561628 ignition[1017]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 23:11:39.561633 ignition[1017]: no config at "/usr/lib/ignition/user.ign" Jul 15 23:11:39.561668 ignition[1017]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 15 23:11:39.632891 ignition[1017]: GET result: OK Jul 15 23:11:39.632985 ignition[1017]: config has been read from IMDS userdata Jul 15 23:11:39.633011 ignition[1017]: parsing config with SHA512: 28a84abb6da337ecb783d89d2c6143db0f13fb8c9dfd6a25af7d7bbaf8e654b6d883b867fbf65c5a650c0288e8bfb19ce983783ddce6a224b8622364a3f7ada8 Jul 15 23:11:39.640792 unknown[1017]: fetched base config from "system" Jul 15 23:11:39.640807 unknown[1017]: fetched base config from "system" Jul 15 23:11:39.641258 ignition[1017]: fetch: fetch complete Jul 15 23:11:39.640811 unknown[1017]: fetched user config from "azure" Jul 15 23:11:39.641266 ignition[1017]: fetch: fetch passed Jul 15 23:11:39.643408 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 15 23:11:39.641314 ignition[1017]: Ignition finished successfully Jul 15 23:11:39.649108 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 15 23:11:39.684763 ignition[1024]: Ignition 2.21.0 Jul 15 23:11:39.687172 ignition[1024]: Stage: kargs Jul 15 23:11:39.687426 ignition[1024]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:11:39.691244 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 15 23:11:39.687436 ignition[1024]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 23:11:39.699148 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 15 23:11:39.688394 ignition[1024]: kargs: kargs passed Jul 15 23:11:39.688455 ignition[1024]: Ignition finished successfully Jul 15 23:11:39.726513 ignition[1030]: Ignition 2.21.0 Jul 15 23:11:39.726561 ignition[1030]: Stage: disks Jul 15 23:11:39.726703 ignition[1030]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:11:39.732948 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 15 23:11:39.726710 ignition[1030]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 23:11:39.739756 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 15 23:11:39.730458 ignition[1030]: disks: disks passed Jul 15 23:11:39.747911 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 15 23:11:39.730512 ignition[1030]: Ignition finished successfully Jul 15 23:11:39.757109 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jul 15 23:11:39.765574 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 23:11:39.771903 systemd[1]: Reached target basic.target - Basic System. Jul 15 23:11:39.780911 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 15 23:11:39.853974 systemd-fsck[1039]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jul 15 23:11:39.862658 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 15 23:11:39.868633 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 15 23:11:40.034544 kernel: EXT4-fs (sda9): mounted filesystem db08fdf6-07fd-45a1-bb3b-a7d0399d70fd r/w with ordered data mode. Quota mode: none. Jul 15 23:11:40.034609 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 15 23:11:40.038621 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 15 23:11:40.045661 systemd-networkd[1008]: eth0: Gained IPv6LL Jul 15 23:11:40.045880 systemd-networkd[1008]: enP15586s1: Gained IPv6LL Jul 15 23:11:40.058369 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 23:11:40.066259 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 15 23:11:40.074310 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 15 23:11:40.084358 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 23:11:40.084422 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 23:11:40.090131 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 15 23:11:40.113298 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 15 23:11:40.138299 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1053) Jul 15 23:11:40.138322 kernel: BTRFS info (device sda6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 15 23:11:40.138356 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 23:11:40.138365 kernel: BTRFS info (device sda6): using free-space-tree Jul 15 23:11:40.139739 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 23:11:40.538338 coreos-metadata[1055]: Jul 15 23:11:40.538 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 15 23:11:40.545661 coreos-metadata[1055]: Jul 15 23:11:40.545 INFO Fetch successful Jul 15 23:11:40.550917 coreos-metadata[1055]: Jul 15 23:11:40.545 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 15 23:11:40.558700 coreos-metadata[1055]: Jul 15 23:11:40.555 INFO Fetch successful Jul 15 23:11:40.568449 coreos-metadata[1055]: Jul 15 23:11:40.568 INFO wrote hostname ci-4372.0.1-n-7068735510 to /sysroot/etc/hostname Jul 15 23:11:40.574885 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 15 23:11:40.702546 initrd-setup-root[1083]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 23:11:40.720352 initrd-setup-root[1090]: cut: /sysroot/etc/group: No such file or directory Jul 15 23:11:40.738546 initrd-setup-root[1097]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 23:11:40.743521 initrd-setup-root[1104]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 23:11:41.462581 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Jul 15 23:11:41.468713 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 15 23:11:41.484225 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 15 23:11:41.495350 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 15 23:11:41.505406 kernel: BTRFS info (device sda6): last unmount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 15 23:11:41.520633 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 15 23:11:41.534668 ignition[1172]: INFO : Ignition 2.21.0 Jul 15 23:11:41.534668 ignition[1172]: INFO : Stage: mount Jul 15 23:11:41.542605 ignition[1172]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:11:41.542605 ignition[1172]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 23:11:41.542605 ignition[1172]: INFO : mount: mount passed Jul 15 23:11:41.542605 ignition[1172]: INFO : Ignition finished successfully Jul 15 23:11:41.541043 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 15 23:11:41.546683 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 15 23:11:41.570625 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 23:11:41.594277 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1184) Jul 15 23:11:41.594328 kernel: BTRFS info (device sda6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 15 23:11:41.598603 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 23:11:41.601900 kernel: BTRFS info (device sda6): using free-space-tree Jul 15 23:11:41.604513 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 23:11:41.633559 ignition[1201]: INFO : Ignition 2.21.0 Jul 15 23:11:41.633559 ignition[1201]: INFO : Stage: files Jul 15 23:11:41.633559 ignition[1201]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:11:41.645061 ignition[1201]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 23:11:41.645061 ignition[1201]: DEBUG : files: compiled without relabeling support, skipping Jul 15 23:11:41.645061 ignition[1201]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 23:11:41.645061 ignition[1201]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 23:11:41.665384 ignition[1201]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 23:11:41.665384 ignition[1201]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 23:11:41.665384 ignition[1201]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 23:11:41.653118 unknown[1201]: wrote ssh authorized keys file for user: core Jul 15 23:11:41.730167 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 15 23:11:41.737915 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jul 15 23:11:41.760616 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 15 23:11:41.824813 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 15 23:11:41.833437 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 15 23:11:41.833437 ignition[1201]: 
INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 15 23:11:42.354230 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 15 23:11:42.561309 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 15 23:11:42.568403 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 15 23:11:42.568403 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 23:11:42.568403 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 23:11:42.568403 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 23:11:42.568403 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 23:11:42.568403 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 23:11:42.568403 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 23:11:42.568403 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 23:11:42.623313 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 23:11:42.623313 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 23:11:42.623313 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 15 23:11:42.623313 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 15 23:11:42.623313 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 15 23:11:42.623313 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jul 15 23:11:43.067546 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 15 23:11:43.805606 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 15 23:11:43.805606 ignition[1201]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 15 23:11:43.831572 ignition[1201]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 23:11:43.840728 ignition[1201]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 23:11:43.840728 
ignition[1201]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 15 23:11:43.853472 ignition[1201]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jul 15 23:11:43.853472 ignition[1201]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 23:11:43.853472 ignition[1201]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 23:11:43.853472 ignition[1201]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 23:11:43.853472 ignition[1201]: INFO : files: files passed Jul 15 23:11:43.853472 ignition[1201]: INFO : Ignition finished successfully Jul 15 23:11:43.849306 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 15 23:11:43.859258 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 15 23:11:43.881067 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 15 23:11:43.896122 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 23:11:43.907656 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 15 23:11:43.928382 initrd-setup-root-after-ignition[1230]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:11:43.928382 initrd-setup-root-after-ignition[1230]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:11:43.941223 initrd-setup-root-after-ignition[1234]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:11:43.941771 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 23:11:43.952517 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 15 23:11:43.962486 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 15 23:11:44.005384 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 23:11:44.005501 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 15 23:11:44.014964 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 15 23:11:44.023510 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 15 23:11:44.031229 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 15 23:11:44.031945 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 15 23:11:44.066618 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 23:11:44.073154 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 15 23:11:44.092835 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 15 23:11:44.097458 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 23:11:44.106349 systemd[1]: Stopped target timers.target - Timer Units. Jul 15 23:11:44.114119 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 23:11:44.114229 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 23:11:44.125711 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 15 23:11:44.134235 systemd[1]: Stopped target basic.target - Basic System. 
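The file writes, the kubernetes.raw link, and the prepare-helm.service preset recorded by ignition[1201] above are all driven by the instance's Ignition config, which is not itself reproduced in the log. The fragment below is only a minimal sketch of the kind of Ignition v3 storage/systemd sections that would produce entries like op(3), op(a) and op(e); a real config would also carry the unit contents, SSH keys and the remaining files:

    {
      "ignition": { "version": "3.4.0" },
      "storage": {
        "files": [
          {
            "path": "/opt/helm-v3.17.3-linux-arm64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz" }
          }
        ],
        "links": [
          {
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
          }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true }
        ]
      }
    }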
Jul 15 23:11:44.141543 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 15 23:11:44.149227 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 23:11:44.157640 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 15 23:11:44.166368 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 15 23:11:44.174960 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 15 23:11:44.183175 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 23:11:44.191593 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 15 23:11:44.200107 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 15 23:11:44.207883 systemd[1]: Stopped target swap.target - Swaps. Jul 15 23:11:44.215288 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 23:11:44.215396 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 15 23:11:44.225807 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 15 23:11:44.233903 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 23:11:44.242666 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 15 23:11:44.242763 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 23:11:44.252066 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 23:11:44.252205 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 15 23:11:44.265563 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 23:11:44.265696 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 23:11:44.274287 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 23:11:44.274394 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 15 23:11:44.282142 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 15 23:11:44.282239 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 15 23:11:44.337135 ignition[1254]: INFO : Ignition 2.21.0 Jul 15 23:11:44.337135 ignition[1254]: INFO : Stage: umount Jul 15 23:11:44.337135 ignition[1254]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:11:44.337135 ignition[1254]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 23:11:44.337135 ignition[1254]: INFO : umount: umount passed Jul 15 23:11:44.337135 ignition[1254]: INFO : Ignition finished successfully Jul 15 23:11:44.293644 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 15 23:11:44.302731 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 15 23:11:44.309083 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 23:11:44.309200 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 23:11:44.314357 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 23:11:44.314481 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 23:11:44.335682 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 15 23:11:44.336792 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 23:11:44.336892 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jul 15 23:11:44.341125 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 23:11:44.343268 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 15 23:11:44.349261 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 15 23:11:44.349398 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 15 23:11:44.359449 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 23:11:44.359511 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 15 23:11:44.371930 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 15 23:11:44.371992 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 15 23:11:44.380011 systemd[1]: Stopped target network.target - Network. Jul 15 23:11:44.387567 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 23:11:44.387636 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 23:11:44.395643 systemd[1]: Stopped target paths.target - Path Units. Jul 15 23:11:44.402956 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 23:11:44.411588 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 23:11:44.419976 systemd[1]: Stopped target slices.target - Slice Units. Jul 15 23:11:44.427364 systemd[1]: Stopped target sockets.target - Socket Units. Jul 15 23:11:44.434948 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 23:11:44.435004 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 23:11:44.442281 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 23:11:44.442315 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 23:11:44.450417 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 23:11:44.450474 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 15 23:11:44.459298 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 15 23:11:44.459331 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 15 23:11:44.468062 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 15 23:11:44.475645 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 15 23:11:44.485033 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 15 23:11:44.485150 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 15 23:11:44.498489 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 15 23:11:44.498724 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 23:11:44.670244 kernel: hv_netvsc 0022487e-48d5-0022-487e-48d50022487e eth0: Data path switched from VF: enP15586s1 Jul 15 23:11:44.498830 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 15 23:11:44.510504 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 15 23:11:44.511295 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 15 23:11:44.518969 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 23:11:44.519015 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 15 23:11:44.531814 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 15 23:11:44.547742 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jul 15 23:11:44.547806 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 23:11:44.556409 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 23:11:44.556457 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:11:44.563971 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 23:11:44.564010 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 15 23:11:44.568421 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 15 23:11:44.568450 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 23:11:44.580661 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 23:11:44.588243 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 23:11:44.588301 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 15 23:11:44.621196 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 23:11:44.621333 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 23:11:44.630423 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 23:11:44.630504 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 15 23:11:44.638371 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 23:11:44.638397 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 23:11:44.646685 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 23:11:44.646731 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 15 23:11:44.659772 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 23:11:44.659823 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 15 23:11:44.670069 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 23:11:44.670127 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 23:11:44.679831 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 15 23:11:44.871319 systemd-journald[224]: Received SIGTERM from PID 1 (systemd). Jul 15 23:11:44.694735 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 15 23:11:44.694808 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 23:11:44.711861 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 15 23:11:44.711920 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 23:11:44.720480 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 15 23:11:44.720692 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 23:11:44.729466 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 23:11:44.729514 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 23:11:44.734585 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 23:11:44.734627 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:11:44.745983 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. 
Jul 15 23:11:44.746033 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 15 23:11:44.746056 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 15 23:11:44.746081 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 23:11:44.746326 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 23:11:44.746392 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 15 23:11:44.753435 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 23:11:44.753503 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 15 23:11:44.765893 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 23:11:44.766000 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 15 23:11:44.770435 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 15 23:11:44.774794 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 23:11:44.774869 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 15 23:11:44.783901 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 15 23:11:44.807932 systemd[1]: Switching root. Jul 15 23:11:44.978171 systemd-journald[224]: Journal stopped Jul 15 23:11:49.783948 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 23:11:49.783967 kernel: SELinux: policy capability open_perms=1 Jul 15 23:11:49.783974 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 23:11:49.783980 kernel: SELinux: policy capability always_check_network=0 Jul 15 23:11:49.783986 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 23:11:49.783991 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 23:11:49.783997 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 23:11:49.784002 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 23:11:49.784008 kernel: SELinux: policy capability userspace_initial_context=0 Jul 15 23:11:49.784016 systemd[1]: Successfully loaded SELinux policy in 139.179ms. Jul 15 23:11:49.784023 kernel: audit: type=1403 audit(1752621105.671:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 23:11:49.784029 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.684ms. Jul 15 23:11:49.784036 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 23:11:49.784042 systemd[1]: Detected virtualization microsoft. Jul 15 23:11:49.784048 systemd[1]: Detected architecture arm64. Jul 15 23:11:49.784054 systemd[1]: Detected first boot. Jul 15 23:11:49.784061 systemd[1]: Hostname set to . Jul 15 23:11:49.784066 systemd[1]: Initializing machine ID from random generator. Jul 15 23:11:49.784072 zram_generator::config[1296]: No configuration found. Jul 15 23:11:49.784078 kernel: NET: Registered PF_VSOCK protocol family Jul 15 23:11:49.784084 systemd[1]: Populated /etc with preset unit settings. Jul 15 23:11:49.784090 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 15 23:11:49.784097 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Jul 15 23:11:49.784103 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 15 23:11:49.784109 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 23:11:49.784114 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 15 23:11:49.784121 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 15 23:11:49.784126 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 15 23:11:49.784132 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 15 23:11:49.784139 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 15 23:11:49.784146 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 15 23:11:49.784152 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 15 23:11:49.784158 systemd[1]: Created slice user.slice - User and Session Slice. Jul 15 23:11:49.784164 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 23:11:49.784170 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 23:11:49.784176 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 15 23:11:49.784182 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 15 23:11:49.784188 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 15 23:11:49.784195 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 23:11:49.784201 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 15 23:11:49.784209 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 23:11:49.784215 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 23:11:49.784221 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 15 23:11:49.784227 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 15 23:11:49.784233 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 15 23:11:49.784240 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 15 23:11:49.784246 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 23:11:49.784251 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 23:11:49.784258 systemd[1]: Reached target slices.target - Slice Units. Jul 15 23:11:49.784263 systemd[1]: Reached target swap.target - Swaps. Jul 15 23:11:49.784269 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 15 23:11:49.784276 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 15 23:11:49.784284 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 15 23:11:49.784290 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 23:11:49.784296 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 23:11:49.784302 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 23:11:49.784308 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jul 15 23:11:49.784314 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 15 23:11:49.784321 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 15 23:11:49.784327 systemd[1]: Mounting media.mount - External Media Directory... Jul 15 23:11:49.784333 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 15 23:11:49.784340 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 15 23:11:49.784346 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 15 23:11:49.784352 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 23:11:49.784358 systemd[1]: Reached target machines.target - Containers. Jul 15 23:11:49.784364 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 15 23:11:49.784371 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:11:49.784378 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 23:11:49.784384 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 15 23:11:49.784390 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 23:11:49.784396 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 23:11:49.784402 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 23:11:49.784408 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 15 23:11:49.784415 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 23:11:49.784421 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 23:11:49.784428 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 23:11:49.784435 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 15 23:11:49.784441 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 15 23:11:49.784447 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 23:11:49.784453 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 23:11:49.784459 kernel: fuse: init (API version 7.41) Jul 15 23:11:49.784465 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 23:11:49.784471 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 23:11:49.784478 kernel: loop: module loaded Jul 15 23:11:49.784484 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 23:11:49.784490 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 15 23:11:49.784496 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 15 23:11:49.784502 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 23:11:49.784508 kernel: ACPI: bus type drm_connector registered Jul 15 23:11:49.784514 systemd[1]: verity-setup.service: Deactivated successfully. 
Jul 15 23:11:49.784646 systemd-journald[1400]: Collecting audit messages is disabled. Jul 15 23:11:49.784669 systemd[1]: Stopped verity-setup.service. Jul 15 23:11:49.784676 systemd-journald[1400]: Journal started Jul 15 23:11:49.784691 systemd-journald[1400]: Runtime Journal (/run/log/journal/c670331541f341e6bf13c1fa6c35caed) is 8M, max 78.5M, 70.5M free. Jul 15 23:11:49.056144 systemd[1]: Queued start job for default target multi-user.target. Jul 15 23:11:49.063117 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 15 23:11:49.063480 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 15 23:11:49.063756 systemd[1]: systemd-journald.service: Consumed 2.373s CPU time. Jul 15 23:11:49.797614 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 23:11:49.803254 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 15 23:11:49.808186 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 15 23:11:49.812960 systemd[1]: Mounted media.mount - External Media Directory. Jul 15 23:11:49.816974 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 15 23:11:49.821635 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 15 23:11:49.828718 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 15 23:11:49.832906 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 15 23:11:49.838021 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 23:11:49.843341 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 15 23:11:49.843480 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 15 23:11:49.848497 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 23:11:49.848711 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:11:49.853917 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 23:11:49.854041 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 23:11:49.858842 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 23:11:49.858953 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 23:11:49.864223 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 23:11:49.864342 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 15 23:11:49.869260 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 23:11:49.869384 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 23:11:49.874141 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 23:11:49.878993 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 23:11:49.884806 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 15 23:11:49.890077 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 15 23:11:49.895674 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 23:11:49.908732 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 23:11:49.914392 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 15 23:11:49.921075 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jul 15 23:11:49.925595 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 23:11:49.925623 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 23:11:49.930584 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 15 23:11:49.936247 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 15 23:11:49.940415 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 23:11:49.941283 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 15 23:11:49.946298 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 15 23:11:49.950886 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 23:11:49.953587 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 15 23:11:49.957790 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 23:11:49.958468 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:11:49.963543 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 15 23:11:49.970640 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 23:11:49.976846 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 15 23:11:49.982328 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 15 23:11:50.000207 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 15 23:11:50.006232 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 15 23:11:50.015769 kernel: loop0: detected capacity change from 0 to 28936 Jul 15 23:11:50.015877 systemd-journald[1400]: Time spent on flushing to /var/log/journal/c670331541f341e6bf13c1fa6c35caed is 9.060ms for 947 entries. Jul 15 23:11:50.015877 systemd-journald[1400]: System Journal (/var/log/journal/c670331541f341e6bf13c1fa6c35caed) is 8M, max 2.6G, 2.6G free. Jul 15 23:11:50.084485 systemd-journald[1400]: Received client request to flush runtime journal. Jul 15 23:11:50.021009 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 15 23:11:50.090176 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:11:50.105099 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 15 23:11:50.115007 systemd-tmpfiles[1437]: ACLs are not supported, ignoring. Jul 15 23:11:50.115018 systemd-tmpfiles[1437]: ACLs are not supported, ignoring. Jul 15 23:11:50.129946 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 23:11:50.136587 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 15 23:11:50.155333 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 23:11:50.156458 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
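The journal-flush step above reports the runtime journal capped at 78.5M and the persistent system journal under /var/log/journal capped at 2.6G; both caps are journald's computed defaults for this disk, not explicit settings. As a sketch only of where such limits would be tuned (values here are illustrative, not taken from this machine), a journald drop-in looks like:

    # /etc/systemd/journald.conf.d/size.conf  (illustrative values)
    [Journal]
    Storage=persistent
    RuntimeMaxUse=64M
    SystemMaxUse=2G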
Jul 15 23:11:50.362556 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 23:11:50.396091 kernel: loop1: detected capacity change from 0 to 211168 Jul 15 23:11:50.396888 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 15 23:11:50.405077 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 23:11:50.421872 systemd-tmpfiles[1454]: ACLs are not supported, ignoring. Jul 15 23:11:50.422102 systemd-tmpfiles[1454]: ACLs are not supported, ignoring. Jul 15 23:11:50.425112 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 23:11:50.449554 kernel: loop2: detected capacity change from 0 to 138376 Jul 15 23:11:50.797557 kernel: loop3: detected capacity change from 0 to 107312 Jul 15 23:11:50.932083 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 23:11:50.939396 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 23:11:50.965098 systemd-udevd[1461]: Using default interface naming scheme 'v255'. Jul 15 23:11:51.051551 kernel: loop4: detected capacity change from 0 to 28936 Jul 15 23:11:51.058555 kernel: loop5: detected capacity change from 0 to 211168 Jul 15 23:11:51.065540 kernel: loop6: detected capacity change from 0 to 138376 Jul 15 23:11:51.073549 kernel: loop7: detected capacity change from 0 to 107312 Jul 15 23:11:51.075625 (sd-merge)[1463]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 15 23:11:51.075977 (sd-merge)[1463]: Merged extensions into '/usr'. Jul 15 23:11:51.078421 systemd[1]: Reload requested from client PID 1435 ('systemd-sysext') (unit systemd-sysext.service)... Jul 15 23:11:51.078551 systemd[1]: Reloading... Jul 15 23:11:51.132581 zram_generator::config[1489]: No configuration found. Jul 15 23:11:51.205728 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:11:51.294149 systemd[1]: Reloading finished in 215 ms. Jul 15 23:11:51.318126 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 23:11:51.327979 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 15 23:11:51.352060 systemd[1]: Starting ensure-sysext.service... Jul 15 23:11:51.362161 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 23:11:51.372985 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 23:11:51.416815 systemd-tmpfiles[1575]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 15 23:11:51.416836 systemd-tmpfiles[1575]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 15 23:11:51.416994 systemd-tmpfiles[1575]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 23:11:51.417123 systemd-tmpfiles[1575]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 15 23:11:51.417552 systemd-tmpfiles[1575]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 23:11:51.417691 systemd-tmpfiles[1575]: ACLs are not supported, ignoring. 
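The sd-merge lines above show systemd-sysext overlaying four extension images onto /usr ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'), including the kubernetes image Ignition linked into /etc/extensions earlier. For orientation only, the same merge state can be inspected and redone by hand with the systemd-sysext tool:

    # List the system extension images currently visible to systemd-sysext.
    systemd-sysext list
    # Re-scan /etc/extensions and the other hierarchies, then re-merge into /usr and /opt.
    systemd-sysext refresh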
Jul 15 23:11:51.417723 systemd-tmpfiles[1575]: ACLs are not supported, ignoring. Jul 15 23:11:51.434793 systemd-tmpfiles[1575]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 23:11:51.434803 systemd-tmpfiles[1575]: Skipping /boot Jul 15 23:11:51.442730 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 23:11:51.445993 systemd-tmpfiles[1575]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 23:11:51.446004 systemd-tmpfiles[1575]: Skipping /boot Jul 15 23:11:51.448745 systemd[1]: Reload requested from client PID 1573 ('systemctl') (unit ensure-sysext.service)... Jul 15 23:11:51.448757 systemd[1]: Reloading... Jul 15 23:11:51.538560 zram_generator::config[1610]: No configuration found. Jul 15 23:11:51.558993 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#254 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 15 23:11:51.559228 kernel: mousedev: PS/2 mouse device common for all mice Jul 15 23:11:51.661221 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:11:51.665445 kernel: hv_vmbus: registering driver hv_balloon Jul 15 23:11:51.665506 kernel: hv_vmbus: registering driver hyperv_fb Jul 15 23:11:51.665520 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 15 23:11:51.668800 kernel: hv_balloon: Memory hot add disabled on ARM64 Jul 15 23:11:51.693867 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 15 23:11:51.703273 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 15 23:11:51.710831 kernel: Console: switching to colour dummy device 80x25 Jul 15 23:11:51.719543 kernel: Console: switching to colour frame buffer device 128x48 Jul 15 23:11:51.747366 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 15 23:11:51.747983 systemd[1]: Reloading finished in 299 ms. Jul 15 23:11:51.762625 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 23:11:51.775495 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 23:11:51.806740 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 23:11:51.815608 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 15 23:11:51.825136 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:11:51.828846 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 23:11:51.835699 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 23:11:51.847794 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 23:11:51.852848 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 23:11:51.852979 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 23:11:51.854448 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 15 23:11:51.864047 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 15 23:11:51.873415 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 15 23:11:51.880062 systemd-networkd[1574]: lo: Link UP Jul 15 23:11:51.881813 systemd-networkd[1574]: lo: Gained carrier Jul 15 23:11:51.883853 systemd-networkd[1574]: Enumeration completed Jul 15 23:11:51.884259 systemd-networkd[1574]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:11:51.884330 systemd-networkd[1574]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 23:11:51.886784 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:11:51.897664 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 23:11:51.904546 kernel: MACsec IEEE 802.1AE Jul 15 23:11:51.907137 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 23:11:51.907374 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:11:51.919248 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 23:11:51.919406 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 23:11:51.926160 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 23:11:51.926593 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 23:11:51.942591 kernel: mlx5_core 3ce2:00:02.0 enP15586s1: Link up Jul 15 23:11:51.966459 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 23:11:51.971548 kernel: hv_netvsc 0022487e-48d5-0022-487e-48d50022487e eth0: Data path switched to VF: enP15586s1 Jul 15 23:11:51.972740 systemd-networkd[1574]: enP15586s1: Link UP Jul 15 23:11:51.972979 systemd-networkd[1574]: eth0: Link UP Jul 15 23:11:51.973049 systemd-networkd[1574]: eth0: Gained carrier Jul 15 23:11:51.973430 systemd-networkd[1574]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:11:51.975997 systemd-networkd[1574]: enP15586s1: Gained carrier Jul 15 23:11:51.978723 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:11:51.980695 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 23:11:51.981606 systemd-networkd[1574]: eth0: DHCPv4 address 10.200.20.39/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 15 23:11:51.990287 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 23:11:51.999055 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 23:11:52.010739 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 23:11:52.017912 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 23:11:52.018039 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 23:11:52.021248 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 23:11:52.026602 augenrules[1797]: No rules Jul 15 23:11:52.032487 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
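Here systemd-networkd matches eth0 against /usr/lib/systemd/network/zz-default.network and acquires 10.200.20.39/24 over DHCP from 168.63.129.16. The shipped unit is not reproduced in the log; the fragment below is only a rough sketch of a catch-all DHCP .network unit of that kind (Flatcar's actual file carries additional match and link options):

    # zz-default.network (sketch): match any interface not claimed earlier and run DHCP on it.
    [Match]
    Name=*

    [Network]
    DHCP=yes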
Jul 15 23:11:52.040856 systemd[1]: Reached target time-set.target - System Time Set. Jul 15 23:11:52.049699 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 23:11:52.050021 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 23:11:52.054817 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 23:11:52.062009 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 23:11:52.062181 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:11:52.068268 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 23:11:52.068673 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 23:11:52.068817 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:11:52.075983 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 23:11:52.076410 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 23:11:52.083351 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 23:11:52.083657 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 23:11:52.083998 systemd-resolved[1743]: Positive Trust Anchors: Jul 15 23:11:52.084007 systemd-resolved[1743]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 23:11:52.084026 systemd-resolved[1743]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 23:11:52.089601 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 23:11:52.090562 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 23:11:52.097813 systemd-resolved[1743]: Using system hostname 'ci-4372.0.1-n-7068735510'. Jul 15 23:11:52.099183 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 15 23:11:52.105168 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 23:11:52.111563 systemd[1]: Finished ensure-sysext.service. Jul 15 23:11:52.121634 systemd[1]: Reached target network.target - Network. Jul 15 23:11:52.125282 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 23:11:52.131875 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 15 23:11:52.137574 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 23:11:52.137732 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 23:11:52.138667 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:11:52.166507 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Jul 15 23:11:52.179914 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 23:11:52.324569 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:11:52.455544 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 15 23:11:52.461867 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 23:11:53.222704 systemd-networkd[1574]: enP15586s1: Gained IPv6LL Jul 15 23:11:53.350682 systemd-networkd[1574]: eth0: Gained IPv6LL Jul 15 23:11:53.352787 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 23:11:53.358336 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 23:11:54.276201 ldconfig[1430]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 15 23:11:54.286320 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 15 23:11:54.291993 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 15 23:11:54.304387 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 23:11:54.309363 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 23:11:54.313868 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 15 23:11:54.318739 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 23:11:54.323835 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 23:11:54.328096 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 15 23:11:54.332971 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 23:11:54.337890 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 23:11:54.337914 systemd[1]: Reached target paths.target - Path Units. Jul 15 23:11:54.341555 systemd[1]: Reached target timers.target - Timer Units. Jul 15 23:11:54.346439 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 23:11:54.352752 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 15 23:11:54.358208 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 23:11:54.363369 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 23:11:54.368470 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 23:11:54.383142 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 23:11:54.387782 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 23:11:54.392948 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 23:11:54.397386 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 23:11:54.401041 systemd[1]: Reached target basic.target - Basic System. Jul 15 23:11:54.404580 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jul 15 23:11:54.404599 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 15 23:11:54.406485 systemd[1]: Starting chronyd.service - NTP client/server... Jul 15 23:11:54.420625 systemd[1]: Starting containerd.service - containerd container runtime... Jul 15 23:11:54.430095 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 15 23:11:54.437163 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 23:11:54.443042 (chronyd)[1831]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 15 23:11:54.446517 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 15 23:11:54.452096 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 23:11:54.459593 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 23:11:54.462663 chronyd[1842]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 15 23:11:54.463722 jq[1839]: false Jul 15 23:11:54.464208 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 23:11:54.468728 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 15 23:11:54.472910 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jul 15 23:11:54.479286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:11:54.480566 KVP[1843]: KVP starting; pid is:1843 Jul 15 23:11:54.486713 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 15 23:11:54.486966 KVP[1843]: KVP LIC Version: 3.1 Jul 15 23:11:54.487637 kernel: hv_utils: KVP IC version 4.0 Jul 15 23:11:54.497783 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 15 23:11:54.504663 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 23:11:54.511783 chronyd[1842]: Timezone right/UTC failed leap second check, ignoring Jul 15 23:11:54.512273 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 23:11:54.511953 chronyd[1842]: Loaded seccomp filter (level 2) Jul 15 23:11:54.518348 extend-filesystems[1840]: Found /dev/sda6 Jul 15 23:11:54.522705 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 23:11:54.538320 extend-filesystems[1840]: Found /dev/sda9 Jul 15 23:11:54.538320 extend-filesystems[1840]: Checking size of /dev/sda9 Jul 15 23:11:54.536714 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 23:11:54.568788 extend-filesystems[1840]: Old size kept for /dev/sda9 Jul 15 23:11:54.543255 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 23:11:54.543704 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 23:11:54.547786 systemd[1]: Starting update-engine.service - Update Engine... Jul 15 23:11:54.564571 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jul 15 23:11:54.587212 jq[1874]: true Jul 15 23:11:54.575548 systemd[1]: Started chronyd.service - NTP client/server. Jul 15 23:11:54.588561 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 23:11:54.598771 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 23:11:54.598973 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 23:11:54.599202 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 23:11:54.600554 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 15 23:11:54.609979 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 23:11:54.610609 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 23:11:54.618413 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 15 23:11:54.626035 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 23:11:54.626209 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 15 23:11:54.650354 update_engine[1867]: I20250715 23:11:54.650265 1867 main.cc:92] Flatcar Update Engine starting Jul 15 23:11:54.657288 (ntainerd)[1893]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 23:11:54.662012 jq[1892]: true Jul 15 23:11:54.697881 tar[1890]: linux-arm64/LICENSE Jul 15 23:11:54.697881 tar[1890]: linux-arm64/helm Jul 15 23:11:54.703119 systemd-logind[1858]: New seat seat0. Jul 15 23:11:54.705966 systemd-logind[1858]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jul 15 23:11:54.706137 systemd[1]: Started systemd-logind.service - User Login Management. Jul 15 23:11:54.798967 dbus-daemon[1834]: [system] SELinux support is enabled Jul 15 23:11:54.799361 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 15 23:11:54.807678 bash[1958]: Updated "/home/core/.ssh/authorized_keys" Jul 15 23:11:54.807727 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 23:11:54.807757 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 15 23:11:54.816621 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 23:11:54.816639 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 15 23:11:54.823452 update_engine[1867]: I20250715 23:11:54.822299 1867 update_check_scheduler.cc:74] Next update check in 9m51s Jul 15 23:11:54.826559 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 15 23:11:54.834919 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 15 23:11:54.835471 dbus-daemon[1834]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 15 23:11:54.835733 systemd[1]: Started update-engine.service - Update Engine. Jul 15 23:11:54.849732 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
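update-engine and locksmithd come up here; the locksmithd line further below reports strategy="reboot", and the /etc/flatcar/update.conf that Ignition wrote earlier is where that reboot strategy and the release group are normally set. A hedged sketch of such a file (the keys are standard Flatcar update options, the GROUP value is illustrative):

    # /etc/flatcar/update.conf  (illustrative)
    GROUP=stable
    REBOOT_STRATEGY=reboot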
Jul 15 23:11:54.900087 coreos-metadata[1833]: Jul 15 23:11:54.900 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 15 23:11:54.909533 coreos-metadata[1833]: Jul 15 23:11:54.908 INFO Fetch successful Jul 15 23:11:54.909533 coreos-metadata[1833]: Jul 15 23:11:54.908 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 15 23:11:54.914892 coreos-metadata[1833]: Jul 15 23:11:54.914 INFO Fetch successful Jul 15 23:11:54.914892 coreos-metadata[1833]: Jul 15 23:11:54.914 INFO Fetching http://168.63.129.16/machine/9853e1ca-d941-40c4-84f0-dd9f9e875df3/2bcacf35%2Dd3c0%2D40c5%2D809c%2Defdaa2828d4b.%5Fci%2D4372.0.1%2Dn%2D7068735510?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 15 23:11:54.916888 coreos-metadata[1833]: Jul 15 23:11:54.916 INFO Fetch successful Jul 15 23:11:54.917032 coreos-metadata[1833]: Jul 15 23:11:54.917 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 15 23:11:54.926196 coreos-metadata[1833]: Jul 15 23:11:54.926 INFO Fetch successful Jul 15 23:11:54.970633 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 15 23:11:54.978329 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 15 23:11:55.080625 sshd_keygen[1873]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 23:11:55.100933 locksmithd[1983]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 23:11:55.104903 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 23:11:55.118449 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 23:11:55.125716 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 15 23:11:55.152510 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 23:11:55.153702 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 23:11:55.156358 containerd[1893]: time="2025-07-15T23:11:55Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 23:11:55.158308 containerd[1893]: time="2025-07-15T23:11:55.158279300Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 15 23:11:55.166035 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jul 15 23:11:55.166436 containerd[1893]: time="2025-07-15T23:11:55.166410364Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.336µs" Jul 15 23:11:55.166505 containerd[1893]: time="2025-07-15T23:11:55.166485220Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 23:11:55.166614 containerd[1893]: time="2025-07-15T23:11:55.166598900Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 23:11:55.168454 containerd[1893]: time="2025-07-15T23:11:55.166773668Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 23:11:55.168454 containerd[1893]: time="2025-07-15T23:11:55.166792716Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 23:11:55.168454 containerd[1893]: time="2025-07-15T23:11:55.166810436Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 23:11:55.168454 containerd[1893]: time="2025-07-15T23:11:55.166846780Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 23:11:55.168454 containerd[1893]: time="2025-07-15T23:11:55.166853308Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 23:11:55.168454 containerd[1893]: time="2025-07-15T23:11:55.167009676Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 23:11:55.168454 containerd[1893]: time="2025-07-15T23:11:55.167018484Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 23:11:55.168454 containerd[1893]: time="2025-07-15T23:11:55.167025660Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 23:11:55.168454 containerd[1893]: time="2025-07-15T23:11:55.167030700Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 23:11:55.168454 containerd[1893]: time="2025-07-15T23:11:55.167075540Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 23:11:55.168454 containerd[1893]: time="2025-07-15T23:11:55.167221252Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 23:11:55.168655 containerd[1893]: time="2025-07-15T23:11:55.167243124Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 23:11:55.168655 containerd[1893]: time="2025-07-15T23:11:55.167250036Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 23:11:55.168783 containerd[1893]: time="2025-07-15T23:11:55.168727340Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 23:11:55.168993 containerd[1893]: 
time="2025-07-15T23:11:55.168973964Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 23:11:55.169137 containerd[1893]: time="2025-07-15T23:11:55.169117964Z" level=info msg="metadata content store policy set" policy=shared Jul 15 23:11:55.183561 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 15 23:11:55.196898 containerd[1893]: time="2025-07-15T23:11:55.192877636Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 23:11:55.196898 containerd[1893]: time="2025-07-15T23:11:55.192944708Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 23:11:55.196898 containerd[1893]: time="2025-07-15T23:11:55.192962372Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 23:11:55.196898 containerd[1893]: time="2025-07-15T23:11:55.192972716Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 23:11:55.196898 containerd[1893]: time="2025-07-15T23:11:55.192981740Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 23:11:55.196898 containerd[1893]: time="2025-07-15T23:11:55.192991308Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 23:11:55.196898 containerd[1893]: time="2025-07-15T23:11:55.193003836Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 23:11:55.196898 containerd[1893]: time="2025-07-15T23:11:55.193012092Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 23:11:55.196898 containerd[1893]: time="2025-07-15T23:11:55.193019644Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 23:11:55.196898 containerd[1893]: time="2025-07-15T23:11:55.193025908Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 23:11:55.196898 containerd[1893]: time="2025-07-15T23:11:55.193032020Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 23:11:55.196898 containerd[1893]: time="2025-07-15T23:11:55.193041764Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 23:11:55.196898 containerd[1893]: time="2025-07-15T23:11:55.193177188Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 23:11:55.196898 containerd[1893]: time="2025-07-15T23:11:55.193192116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 23:11:55.197154 containerd[1893]: time="2025-07-15T23:11:55.193203948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 23:11:55.197154 containerd[1893]: time="2025-07-15T23:11:55.193211356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 23:11:55.197154 containerd[1893]: time="2025-07-15T23:11:55.193220260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 23:11:55.197154 containerd[1893]: time="2025-07-15T23:11:55.193227596Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images 
type=io.containerd.grpc.v1 Jul 15 23:11:55.197154 containerd[1893]: time="2025-07-15T23:11:55.193234452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 23:11:55.197154 containerd[1893]: time="2025-07-15T23:11:55.193240644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 23:11:55.197154 containerd[1893]: time="2025-07-15T23:11:55.193247820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 23:11:55.197154 containerd[1893]: time="2025-07-15T23:11:55.193254596Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 23:11:55.197154 containerd[1893]: time="2025-07-15T23:11:55.193260596Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 23:11:55.197154 containerd[1893]: time="2025-07-15T23:11:55.193320588Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 23:11:55.197154 containerd[1893]: time="2025-07-15T23:11:55.193331044Z" level=info msg="Start snapshots syncer" Jul 15 23:11:55.197154 containerd[1893]: time="2025-07-15T23:11:55.193348820Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 23:11:55.197287 containerd[1893]: time="2025-07-15T23:11:55.196359060Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 23:11:55.197287 containerd[1893]: time="2025-07-15T23:11:55.196438572Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 23:11:55.197369 containerd[1893]: time="2025-07-15T23:11:55.196557300Z" level=info msg="loading plugin" 
id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 15 23:11:55.197369 containerd[1893]: time="2025-07-15T23:11:55.196700692Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 23:11:55.197369 containerd[1893]: time="2025-07-15T23:11:55.196717580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 23:11:55.197369 containerd[1893]: time="2025-07-15T23:11:55.196725236Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 23:11:55.197369 containerd[1893]: time="2025-07-15T23:11:55.196732740Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 23:11:55.197369 containerd[1893]: time="2025-07-15T23:11:55.196743156Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 23:11:55.197369 containerd[1893]: time="2025-07-15T23:11:55.196757220Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 23:11:55.197369 containerd[1893]: time="2025-07-15T23:11:55.196764460Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 23:11:55.197369 containerd[1893]: time="2025-07-15T23:11:55.196784964Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 23:11:55.197369 containerd[1893]: time="2025-07-15T23:11:55.196792132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 23:11:55.197369 containerd[1893]: time="2025-07-15T23:11:55.196800372Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 23:11:55.197369 containerd[1893]: time="2025-07-15T23:11:55.196838156Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 23:11:55.197369 containerd[1893]: time="2025-07-15T23:11:55.196853316Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 23:11:55.197369 containerd[1893]: time="2025-07-15T23:11:55.196859084Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 23:11:55.197546 containerd[1893]: time="2025-07-15T23:11:55.196865404Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 23:11:55.197546 containerd[1893]: time="2025-07-15T23:11:55.196869836Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 23:11:55.197546 containerd[1893]: time="2025-07-15T23:11:55.196875348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 23:11:55.197546 containerd[1893]: time="2025-07-15T23:11:55.196882556Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 23:11:55.197810 containerd[1893]: time="2025-07-15T23:11:55.197623868Z" level=info msg="runtime interface created" Jul 15 23:11:55.197810 containerd[1893]: time="2025-07-15T23:11:55.197645676Z" level=info msg="created NRI interface" Jul 15 23:11:55.197810 containerd[1893]: time="2025-07-15T23:11:55.197655292Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 23:11:55.197810 containerd[1893]: time="2025-07-15T23:11:55.197669708Z" level=info msg="Connect containerd service" Jul 15 23:11:55.197810 containerd[1893]: time="2025-07-15T23:11:55.197692972Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 23:11:55.199878 containerd[1893]: time="2025-07-15T23:11:55.199517908Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 23:11:55.200743 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 23:11:55.207940 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 23:11:55.216462 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 15 23:11:55.223798 systemd[1]: Reached target getty.target - Login Prompts. Jul 15 23:11:55.288070 tar[1890]: linux-arm64/README.md Jul 15 23:11:55.305570 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 15 23:11:55.421489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:11:55.473410 (kubelet)[2038]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:11:55.680596 containerd[1893]: time="2025-07-15T23:11:55.680362340Z" level=info msg="Start subscribing containerd event" Jul 15 23:11:55.680884 containerd[1893]: time="2025-07-15T23:11:55.680856468Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 23:11:55.680993 containerd[1893]: time="2025-07-15T23:11:55.680978756Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 23:11:55.681058 containerd[1893]: time="2025-07-15T23:11:55.680926012Z" level=info msg="Start recovering state" Jul 15 23:11:55.681555 containerd[1893]: time="2025-07-15T23:11:55.681396932Z" level=info msg="Start event monitor" Jul 15 23:11:55.681555 containerd[1893]: time="2025-07-15T23:11:55.681422596Z" level=info msg="Start cni network conf syncer for default" Jul 15 23:11:55.681555 containerd[1893]: time="2025-07-15T23:11:55.681434908Z" level=info msg="Start streaming server" Jul 15 23:11:55.681555 containerd[1893]: time="2025-07-15T23:11:55.681442380Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 23:11:55.681555 containerd[1893]: time="2025-07-15T23:11:55.681448156Z" level=info msg="runtime interface starting up..." Jul 15 23:11:55.681555 containerd[1893]: time="2025-07-15T23:11:55.681454332Z" level=info msg="starting plugins..." Jul 15 23:11:55.681555 containerd[1893]: time="2025-07-15T23:11:55.681464532Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 23:11:55.682021 systemd[1]: Started containerd.service - containerd container runtime. Jul 15 23:11:55.686981 containerd[1893]: time="2025-07-15T23:11:55.686944780Z" level=info msg="containerd successfully booted in 0.530885s" Jul 15 23:11:55.687654 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 23:11:55.695604 systemd[1]: Startup finished in 1.630s (kernel) + 10.935s (initrd) + 10.161s (userspace) = 22.728s. 
Jul 15 23:11:55.782022 kubelet[2038]: E0715 23:11:55.781948 2038 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:11:55.784832 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:11:55.785084 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:11:55.786253 systemd[1]: kubelet.service: Consumed 557ms CPU time, 258.4M memory peak. Jul 15 23:11:55.864498 login[2024]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jul 15 23:11:55.865598 login[2025]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:11:55.877224 systemd-logind[1858]: New session 2 of user core. Jul 15 23:11:55.877507 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 23:11:55.880738 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 15 23:11:55.914148 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 23:11:55.915383 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 15 23:11:55.923016 (systemd)[2057]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 23:11:55.925017 systemd-logind[1858]: New session c1 of user core. Jul 15 23:11:56.084027 systemd[2057]: Queued start job for default target default.target. Jul 15 23:11:56.096262 systemd[2057]: Created slice app.slice - User Application Slice. Jul 15 23:11:56.096287 systemd[2057]: Reached target paths.target - Paths. Jul 15 23:11:56.096415 systemd[2057]: Reached target timers.target - Timers. Jul 15 23:11:56.097425 systemd[2057]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 23:11:56.104480 systemd[2057]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 23:11:56.104538 systemd[2057]: Reached target sockets.target - Sockets. Jul 15 23:11:56.104571 systemd[2057]: Reached target basic.target - Basic System. Jul 15 23:11:56.104592 systemd[2057]: Reached target default.target - Main User Target. Jul 15 23:11:56.104614 systemd[2057]: Startup finished in 174ms. Jul 15 23:11:56.104871 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 23:11:56.106195 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 15 23:11:56.506808 waagent[2020]: 2025-07-15T23:11:56.506670Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jul 15 23:11:56.514592 waagent[2020]: 2025-07-15T23:11:56.511291Z INFO Daemon Daemon OS: flatcar 4372.0.1 Jul 15 23:11:56.514759 waagent[2020]: 2025-07-15T23:11:56.514728Z INFO Daemon Daemon Python: 3.11.12 Jul 15 23:11:56.517969 waagent[2020]: 2025-07-15T23:11:56.517907Z INFO Daemon Daemon Run daemon Jul 15 23:11:56.520744 waagent[2020]: 2025-07-15T23:11:56.520716Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4372.0.1' Jul 15 23:11:56.527360 waagent[2020]: 2025-07-15T23:11:56.527322Z INFO Daemon Daemon Using waagent for provisioning Jul 15 23:11:56.531227 waagent[2020]: 2025-07-15T23:11:56.531194Z INFO Daemon Daemon Activate resource disk Jul 15 23:11:56.534669 waagent[2020]: 2025-07-15T23:11:56.534640Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 15 23:11:56.542488 waagent[2020]: 2025-07-15T23:11:56.542452Z INFO Daemon Daemon Found device: None Jul 15 23:11:56.546046 waagent[2020]: 2025-07-15T23:11:56.546016Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 15 23:11:56.552032 waagent[2020]: 2025-07-15T23:11:56.552005Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 15 23:11:56.561007 waagent[2020]: 2025-07-15T23:11:56.560969Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 15 23:11:56.565245 waagent[2020]: 2025-07-15T23:11:56.565218Z INFO Daemon Daemon Running default provisioning handler Jul 15 23:11:56.573882 waagent[2020]: 2025-07-15T23:11:56.573840Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jul 15 23:11:56.583698 waagent[2020]: 2025-07-15T23:11:56.583659Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 15 23:11:56.590465 waagent[2020]: 2025-07-15T23:11:56.590436Z INFO Daemon Daemon cloud-init is enabled: False Jul 15 23:11:56.594099 waagent[2020]: 2025-07-15T23:11:56.594079Z INFO Daemon Daemon Copying ovf-env.xml Jul 15 23:11:56.668338 waagent[2020]: 2025-07-15T23:11:56.666862Z INFO Daemon Daemon Successfully mounted dvd Jul 15 23:11:56.678403 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 15 23:11:56.680237 waagent[2020]: 2025-07-15T23:11:56.680184Z INFO Daemon Daemon Detect protocol endpoint Jul 15 23:11:56.683775 waagent[2020]: 2025-07-15T23:11:56.683733Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 15 23:11:56.688114 waagent[2020]: 2025-07-15T23:11:56.688078Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jul 15 23:11:56.692927 waagent[2020]: 2025-07-15T23:11:56.692898Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 15 23:11:56.696822 waagent[2020]: 2025-07-15T23:11:56.696789Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 15 23:11:56.700374 waagent[2020]: 2025-07-15T23:11:56.700347Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 15 23:11:56.737841 waagent[2020]: 2025-07-15T23:11:56.737798Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 15 23:11:56.742644 waagent[2020]: 2025-07-15T23:11:56.742621Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 15 23:11:56.746407 waagent[2020]: 2025-07-15T23:11:56.746379Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 15 23:11:56.865887 login[2024]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:11:56.870139 systemd-logind[1858]: New session 1 of user core. Jul 15 23:11:56.877659 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 23:11:56.907242 waagent[2020]: 2025-07-15T23:11:56.907153Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 15 23:11:56.911977 waagent[2020]: 2025-07-15T23:11:56.911934Z INFO Daemon Daemon Forcing an update of the goal state. Jul 15 23:11:56.918984 waagent[2020]: 2025-07-15T23:11:56.918941Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 15 23:11:56.936655 waagent[2020]: 2025-07-15T23:11:56.936620Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 15 23:11:56.941150 waagent[2020]: 2025-07-15T23:11:56.941114Z INFO Daemon Jul 15 23:11:56.943223 waagent[2020]: 2025-07-15T23:11:56.943193Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: a0b493a1-c00d-4873-a68b-dccec6751dfa eTag: 1435550390624563333 source: Fabric] Jul 15 23:11:56.951212 waagent[2020]: 2025-07-15T23:11:56.951180Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 15 23:11:56.956031 waagent[2020]: 2025-07-15T23:11:56.956001Z INFO Daemon Jul 15 23:11:56.958100 waagent[2020]: 2025-07-15T23:11:56.958076Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 15 23:11:56.966983 waagent[2020]: 2025-07-15T23:11:56.966952Z INFO Daemon Daemon Downloading artifacts profile blob Jul 15 23:11:57.029882 waagent[2020]: 2025-07-15T23:11:57.029811Z INFO Daemon Downloaded certificate {'thumbprint': 'ECE920A70B7426608CF89B90A1A6ADA80F765B3A', 'hasPrivateKey': False} Jul 15 23:11:57.037004 waagent[2020]: 2025-07-15T23:11:57.036965Z INFO Daemon Downloaded certificate {'thumbprint': '2A92EE4FEF5109CC102D7F3E8F2603A9E2682282', 'hasPrivateKey': True} Jul 15 23:11:57.044176 waagent[2020]: 2025-07-15T23:11:57.044140Z INFO Daemon Fetch goal state completed Jul 15 23:11:57.053619 waagent[2020]: 2025-07-15T23:11:57.053587Z INFO Daemon Daemon Starting provisioning Jul 15 23:11:57.057317 waagent[2020]: 2025-07-15T23:11:57.057288Z INFO Daemon Daemon Handle ovf-env.xml. 
Jul 15 23:11:57.060921 waagent[2020]: 2025-07-15T23:11:57.060899Z INFO Daemon Daemon Set hostname [ci-4372.0.1-n-7068735510] Jul 15 23:11:57.080101 waagent[2020]: 2025-07-15T23:11:57.080048Z INFO Daemon Daemon Publish hostname [ci-4372.0.1-n-7068735510] Jul 15 23:11:57.084836 waagent[2020]: 2025-07-15T23:11:57.084799Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 15 23:11:57.089311 waagent[2020]: 2025-07-15T23:11:57.089282Z INFO Daemon Daemon Primary interface is [eth0] Jul 15 23:11:57.098980 systemd-networkd[1574]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:11:57.098989 systemd-networkd[1574]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 23:11:57.099023 systemd-networkd[1574]: eth0: DHCP lease lost Jul 15 23:11:57.099856 waagent[2020]: 2025-07-15T23:11:57.099811Z INFO Daemon Daemon Create user account if not exists Jul 15 23:11:57.103960 waagent[2020]: 2025-07-15T23:11:57.103930Z INFO Daemon Daemon User core already exists, skip useradd Jul 15 23:11:57.108134 waagent[2020]: 2025-07-15T23:11:57.108109Z INFO Daemon Daemon Configure sudoer Jul 15 23:11:57.116168 waagent[2020]: 2025-07-15T23:11:57.116123Z INFO Daemon Daemon Configure sshd Jul 15 23:11:57.123411 waagent[2020]: 2025-07-15T23:11:57.123266Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 15 23:11:57.132509 waagent[2020]: 2025-07-15T23:11:57.132471Z INFO Daemon Daemon Deploy ssh public key. Jul 15 23:11:57.138605 systemd-networkd[1574]: eth0: DHCPv4 address 10.200.20.39/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 15 23:11:58.228164 waagent[2020]: 2025-07-15T23:11:58.228104Z INFO Daemon Daemon Provisioning complete Jul 15 23:11:58.241700 waagent[2020]: 2025-07-15T23:11:58.241662Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 15 23:11:58.245964 waagent[2020]: 2025-07-15T23:11:58.245927Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jul 15 23:11:58.252986 waagent[2020]: 2025-07-15T23:11:58.252957Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jul 15 23:11:58.351550 waagent[2111]: 2025-07-15T23:11:58.351058Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jul 15 23:11:58.351550 waagent[2111]: 2025-07-15T23:11:58.351193Z INFO ExtHandler ExtHandler OS: flatcar 4372.0.1 Jul 15 23:11:58.351550 waagent[2111]: 2025-07-15T23:11:58.351230Z INFO ExtHandler ExtHandler Python: 3.11.12 Jul 15 23:11:58.351550 waagent[2111]: 2025-07-15T23:11:58.351264Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jul 15 23:11:58.368648 waagent[2111]: 2025-07-15T23:11:58.368588Z INFO ExtHandler ExtHandler Distro: flatcar-4372.0.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jul 15 23:11:58.368941 waagent[2111]: 2025-07-15T23:11:58.368910Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 15 23:11:58.369057 waagent[2111]: 2025-07-15T23:11:58.369033Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 15 23:11:58.375077 waagent[2111]: 2025-07-15T23:11:58.375024Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 15 23:11:58.380567 waagent[2111]: 2025-07-15T23:11:58.380470Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 15 23:11:58.380952 waagent[2111]: 2025-07-15T23:11:58.380914Z INFO ExtHandler Jul 15 23:11:58.381002 waagent[2111]: 2025-07-15T23:11:58.380983Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 69adeb69-04b8-4f6e-a969-de1b971f39b0 eTag: 1435550390624563333 source: Fabric] Jul 15 23:11:58.381229 waagent[2111]: 2025-07-15T23:11:58.381202Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jul 15 23:11:58.381677 waagent[2111]: 2025-07-15T23:11:58.381646Z INFO ExtHandler Jul 15 23:11:58.381717 waagent[2111]: 2025-07-15T23:11:58.381701Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 15 23:11:58.385230 waagent[2111]: 2025-07-15T23:11:58.385203Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 15 23:11:58.445931 waagent[2111]: 2025-07-15T23:11:58.445857Z INFO ExtHandler Downloaded certificate {'thumbprint': 'ECE920A70B7426608CF89B90A1A6ADA80F765B3A', 'hasPrivateKey': False} Jul 15 23:11:58.446245 waagent[2111]: 2025-07-15T23:11:58.446213Z INFO ExtHandler Downloaded certificate {'thumbprint': '2A92EE4FEF5109CC102D7F3E8F2603A9E2682282', 'hasPrivateKey': True} Jul 15 23:11:58.446578 waagent[2111]: 2025-07-15T23:11:58.446522Z INFO ExtHandler Fetch goal state completed Jul 15 23:11:58.459031 waagent[2111]: 2025-07-15T23:11:58.458976Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jul 15 23:11:58.462494 waagent[2111]: 2025-07-15T23:11:58.462444Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2111 Jul 15 23:11:58.462625 waagent[2111]: 2025-07-15T23:11:58.462598Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 15 23:11:58.462876 waagent[2111]: 2025-07-15T23:11:58.462849Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jul 15 23:11:58.464108 waagent[2111]: 2025-07-15T23:11:58.464072Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4372.0.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 15 23:11:58.464432 waagent[2111]: 2025-07-15T23:11:58.464401Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4372.0.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jul 15 23:11:58.464574 waagent[2111]: 2025-07-15T23:11:58.464522Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jul 15 23:11:58.465001 waagent[2111]: 2025-07-15T23:11:58.464973Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 15 23:11:58.492003 waagent[2111]: 2025-07-15T23:11:58.491911Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 15 23:11:58.492123 waagent[2111]: 2025-07-15T23:11:58.492094Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 15 23:11:58.496376 waagent[2111]: 2025-07-15T23:11:58.496350Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 15 23:11:58.500938 systemd[1]: Reload requested from client PID 2128 ('systemctl') (unit waagent.service)... Jul 15 23:11:58.500999 systemd[1]: Reloading... Jul 15 23:11:58.574568 zram_generator::config[2166]: No configuration found. Jul 15 23:11:58.642803 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:11:58.729272 systemd[1]: Reloading finished in 228 ms. 
Jul 15 23:11:58.754154 waagent[2111]: 2025-07-15T23:11:58.753846Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 15 23:11:58.754154 waagent[2111]: 2025-07-15T23:11:58.753989Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 15 23:11:58.978556 waagent[2111]: 2025-07-15T23:11:58.978468Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 15 23:11:58.978840 waagent[2111]: 2025-07-15T23:11:58.978808Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jul 15 23:11:58.979468 waagent[2111]: 2025-07-15T23:11:58.979430Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 15 23:11:58.979759 waagent[2111]: 2025-07-15T23:11:58.979718Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 15 23:11:58.980123 waagent[2111]: 2025-07-15T23:11:58.980084Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 15 23:11:58.980260 waagent[2111]: 2025-07-15T23:11:58.980228Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 15 23:11:58.980538 waagent[2111]: 2025-07-15T23:11:58.980490Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 15 23:11:58.980647 waagent[2111]: 2025-07-15T23:11:58.980619Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jul 15 23:11:58.981888 waagent[2111]: 2025-07-15T23:11:58.981287Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 15 23:11:58.981888 waagent[2111]: 2025-07-15T23:11:58.981333Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 15 23:11:58.981888 waagent[2111]: 2025-07-15T23:11:58.981438Z INFO EnvHandler ExtHandler Configure routes Jul 15 23:11:58.981888 waagent[2111]: 2025-07-15T23:11:58.981475Z INFO EnvHandler ExtHandler Gateway:None Jul 15 23:11:58.981888 waagent[2111]: 2025-07-15T23:11:58.981497Z INFO EnvHandler ExtHandler Routes:None Jul 15 23:11:58.982082 waagent[2111]: 2025-07-15T23:11:58.981213Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 15 23:11:58.982207 waagent[2111]: 2025-07-15T23:11:58.982179Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 15 23:11:58.982401 waagent[2111]: 2025-07-15T23:11:58.982369Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 15 23:11:58.982673 waagent[2111]: 2025-07-15T23:11:58.982638Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jul 15 23:11:58.983744 waagent[2111]: 2025-07-15T23:11:58.983717Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 15 23:11:58.983744 waagent[2111]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 15 23:11:58.983744 waagent[2111]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 15 23:11:58.983744 waagent[2111]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 15 23:11:58.983744 waagent[2111]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 15 23:11:58.983744 waagent[2111]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 15 23:11:58.983744 waagent[2111]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 15 23:11:58.986988 waagent[2111]: 2025-07-15T23:11:58.986951Z INFO ExtHandler ExtHandler Jul 15 23:11:58.987043 waagent[2111]: 2025-07-15T23:11:58.987010Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 742013ea-9239-47a7-a320-c4c38a4e523a correlation ffe3f747-4bf3-4e86-908e-40d2a3e8b327 created: 2025-07-15T23:10:56.409882Z] Jul 15 23:11:58.987291 waagent[2111]: 2025-07-15T23:11:58.987260Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 15 23:11:58.987711 waagent[2111]: 2025-07-15T23:11:58.987681Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jul 15 23:11:59.029064 waagent[2111]: 2025-07-15T23:11:59.028960Z INFO MonitorHandler ExtHandler Network interfaces: Jul 15 23:11:59.029064 waagent[2111]: Executing ['ip', '-a', '-o', 'link']: Jul 15 23:11:59.029064 waagent[2111]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 15 23:11:59.029064 waagent[2111]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7e:48:d5 brd ff:ff:ff:ff:ff:ff Jul 15 23:11:59.029064 waagent[2111]: 3: enP15586s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7e:48:d5 brd ff:ff:ff:ff:ff:ff\ altname enP15586p0s2 Jul 15 23:11:59.029064 waagent[2111]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 15 23:11:59.029064 waagent[2111]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 15 23:11:59.029064 waagent[2111]: 2: eth0 inet 10.200.20.39/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 15 23:11:59.029064 waagent[2111]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 15 23:11:59.029064 waagent[2111]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 15 23:11:59.029064 waagent[2111]: 2: eth0 inet6 fe80::222:48ff:fe7e:48d5/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 15 23:11:59.029064 waagent[2111]: 3: enP15586s1 inet6 fe80::222:48ff:fe7e:48d5/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 15 23:11:59.029420 waagent[2111]: 2025-07-15T23:11:59.029383Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jul 15 23:11:59.029420 waagent[2111]: Try `iptables -h' or 'iptables --help' for more information.) 
Jul 15 23:11:59.030226 waagent[2111]: 2025-07-15T23:11:59.030155Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 2E697B44-5C12-498C-B2B9-58A99295EA92;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jul 15 23:11:59.121557 waagent[2111]: 2025-07-15T23:11:59.121100Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jul 15 23:11:59.121557 waagent[2111]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 15 23:11:59.121557 waagent[2111]: pkts bytes target prot opt in out source destination Jul 15 23:11:59.121557 waagent[2111]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 15 23:11:59.121557 waagent[2111]: pkts bytes target prot opt in out source destination Jul 15 23:11:59.121557 waagent[2111]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 15 23:11:59.121557 waagent[2111]: pkts bytes target prot opt in out source destination Jul 15 23:11:59.121557 waagent[2111]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 15 23:11:59.121557 waagent[2111]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 15 23:11:59.121557 waagent[2111]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 15 23:11:59.123508 waagent[2111]: 2025-07-15T23:11:59.123463Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 15 23:11:59.123508 waagent[2111]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 15 23:11:59.123508 waagent[2111]: pkts bytes target prot opt in out source destination Jul 15 23:11:59.123508 waagent[2111]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 15 23:11:59.123508 waagent[2111]: pkts bytes target prot opt in out source destination Jul 15 23:11:59.123508 waagent[2111]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 15 23:11:59.123508 waagent[2111]: pkts bytes target prot opt in out source destination Jul 15 23:11:59.123508 waagent[2111]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 15 23:11:59.123508 waagent[2111]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 15 23:11:59.123508 waagent[2111]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 15 23:11:59.123729 waagent[2111]: 2025-07-15T23:11:59.123705Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 15 23:12:06.035887 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 23:12:06.037178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:12:06.134661 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:12:06.137105 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:12:06.219896 kubelet[2261]: E0715 23:12:06.219846 2261 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:12:06.222722 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:12:06.222944 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:12:06.223485 systemd[1]: kubelet.service: Consumed 163ms CPU time, 107.7M memory peak. 
Jul 15 23:12:13.158547 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 23:12:13.160016 systemd[1]: Started sshd@0-10.200.20.39:22-10.200.16.10:49540.service - OpenSSH per-connection server daemon (10.200.16.10:49540). Jul 15 23:12:13.747763 sshd[2269]: Accepted publickey for core from 10.200.16.10 port 49540 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:12:13.748796 sshd-session[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:13.752467 systemd-logind[1858]: New session 3 of user core. Jul 15 23:12:13.760809 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 23:12:14.166504 systemd[1]: Started sshd@1-10.200.20.39:22-10.200.16.10:49542.service - OpenSSH per-connection server daemon (10.200.16.10:49542). Jul 15 23:12:14.639331 sshd[2274]: Accepted publickey for core from 10.200.16.10 port 49542 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:12:14.640447 sshd-session[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:14.643913 systemd-logind[1858]: New session 4 of user core. Jul 15 23:12:14.650645 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 23:12:14.989284 sshd[2276]: Connection closed by 10.200.16.10 port 49542 Jul 15 23:12:14.989806 sshd-session[2274]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:14.992694 systemd[1]: sshd@1-10.200.20.39:22-10.200.16.10:49542.service: Deactivated successfully. Jul 15 23:12:14.994050 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 23:12:14.996018 systemd-logind[1858]: Session 4 logged out. Waiting for processes to exit. Jul 15 23:12:14.997051 systemd-logind[1858]: Removed session 4. Jul 15 23:12:15.074128 systemd[1]: Started sshd@2-10.200.20.39:22-10.200.16.10:49556.service - OpenSSH per-connection server daemon (10.200.16.10:49556). Jul 15 23:12:15.514328 sshd[2282]: Accepted publickey for core from 10.200.16.10 port 49556 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:12:15.515494 sshd-session[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:15.519070 systemd-logind[1858]: New session 5 of user core. Jul 15 23:12:15.528831 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 15 23:12:15.835662 sshd[2284]: Connection closed by 10.200.16.10 port 49556 Jul 15 23:12:15.836144 sshd-session[2282]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:15.839220 systemd[1]: sshd@2-10.200.20.39:22-10.200.16.10:49556.service: Deactivated successfully. Jul 15 23:12:15.840492 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 23:12:15.841053 systemd-logind[1858]: Session 5 logged out. Waiting for processes to exit. Jul 15 23:12:15.842024 systemd-logind[1858]: Removed session 5. Jul 15 23:12:15.919838 systemd[1]: Started sshd@3-10.200.20.39:22-10.200.16.10:49558.service - OpenSSH per-connection server daemon (10.200.16.10:49558). Jul 15 23:12:16.305779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 15 23:12:16.307389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 15 23:12:16.390560 sshd[2290]: Accepted publickey for core from 10.200.16.10 port 49558 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:12:16.391688 sshd-session[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:16.395210 systemd-logind[1858]: New session 6 of user core. Jul 15 23:12:16.397621 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 15 23:12:16.403653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:12:16.411882 (kubelet)[2301]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:12:16.523943 kubelet[2301]: E0715 23:12:16.523874 2301 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:12:16.526286 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:12:16.526500 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:12:16.527026 systemd[1]: kubelet.service: Consumed 103ms CPU time, 105.3M memory peak. Jul 15 23:12:16.728675 sshd[2300]: Connection closed by 10.200.16.10 port 49558 Jul 15 23:12:16.729360 sshd-session[2290]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:16.732155 systemd-logind[1858]: Session 6 logged out. Waiting for processes to exit. Jul 15 23:12:16.732946 systemd[1]: sshd@3-10.200.20.39:22-10.200.16.10:49558.service: Deactivated successfully. Jul 15 23:12:16.734436 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 23:12:16.736343 systemd-logind[1858]: Removed session 6. Jul 15 23:12:16.806149 systemd[1]: Started sshd@4-10.200.20.39:22-10.200.16.10:49570.service - OpenSSH per-connection server daemon (10.200.16.10:49570). Jul 15 23:12:17.240001 sshd[2313]: Accepted publickey for core from 10.200.16.10 port 49570 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:12:17.241075 sshd-session[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:17.244981 systemd-logind[1858]: New session 7 of user core. Jul 15 23:12:17.255685 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 15 23:12:17.559511 sudo[2316]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 23:12:17.559753 sudo[2316]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:12:17.573469 sudo[2316]: pam_unix(sudo:session): session closed for user root Jul 15 23:12:17.654322 sshd[2315]: Connection closed by 10.200.16.10 port 49570 Jul 15 23:12:17.654985 sshd-session[2313]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:17.658191 systemd[1]: sshd@4-10.200.20.39:22-10.200.16.10:49570.service: Deactivated successfully. Jul 15 23:12:17.659448 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 23:12:17.660043 systemd-logind[1858]: Session 7 logged out. Waiting for processes to exit. Jul 15 23:12:17.661477 systemd-logind[1858]: Removed session 7. Jul 15 23:12:17.736720 systemd[1]: Started sshd@5-10.200.20.39:22-10.200.16.10:49586.service - OpenSSH per-connection server daemon (10.200.16.10:49586). 
Jul 15 23:12:18.194550 sshd[2322]: Accepted publickey for core from 10.200.16.10 port 49586 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:12:18.195704 sshd-session[2322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:18.199172 systemd-logind[1858]: New session 8 of user core. Jul 15 23:12:18.205654 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 15 23:12:18.304042 chronyd[1842]: Selected source PHC0 Jul 15 23:12:18.451782 sudo[2326]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 23:12:18.451988 sudo[2326]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:12:18.459446 sudo[2326]: pam_unix(sudo:session): session closed for user root Jul 15 23:12:18.462874 sudo[2325]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 15 23:12:18.463063 sudo[2325]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:12:18.470044 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 23:12:18.498024 augenrules[2348]: No rules Jul 15 23:12:18.499110 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 23:12:18.499297 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 23:12:18.500319 sudo[2325]: pam_unix(sudo:session): session closed for user root Jul 15 23:12:18.568572 sshd[2324]: Connection closed by 10.200.16.10 port 49586 Jul 15 23:12:18.569041 sshd-session[2322]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:18.572038 systemd-logind[1858]: Session 8 logged out. Waiting for processes to exit. Jul 15 23:12:18.572617 systemd[1]: sshd@5-10.200.20.39:22-10.200.16.10:49586.service: Deactivated successfully. Jul 15 23:12:18.573858 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 23:12:18.575057 systemd-logind[1858]: Removed session 8. Jul 15 23:12:18.652705 systemd[1]: Started sshd@6-10.200.20.39:22-10.200.16.10:49592.service - OpenSSH per-connection server daemon (10.200.16.10:49592). Jul 15 23:12:19.126736 sshd[2357]: Accepted publickey for core from 10.200.16.10 port 49592 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:12:19.127839 sshd-session[2357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:19.131299 systemd-logind[1858]: New session 9 of user core. Jul 15 23:12:19.141641 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 15 23:12:19.390958 sudo[2360]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 23:12:19.391160 sudo[2360]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:12:20.314484 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 15 23:12:20.321817 (dockerd)[2377]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 23:12:20.856558 dockerd[2377]: time="2025-07-15T23:12:20.855574230Z" level=info msg="Starting up" Jul 15 23:12:20.857713 dockerd[2377]: time="2025-07-15T23:12:20.857658627Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 23:12:20.966941 systemd[1]: var-lib-docker-metacopy\x2dcheck851175050-merged.mount: Deactivated successfully. 
Jul 15 23:12:20.982098 dockerd[2377]: time="2025-07-15T23:12:20.982065000Z" level=info msg="Loading containers: start." Jul 15 23:12:20.997551 kernel: Initializing XFRM netlink socket Jul 15 23:12:21.257066 systemd-networkd[1574]: docker0: Link UP Jul 15 23:12:21.291010 dockerd[2377]: time="2025-07-15T23:12:21.290915640Z" level=info msg="Loading containers: done." Jul 15 23:12:22.160013 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3055673752-merged.mount: Deactivated successfully. Jul 15 23:12:22.313027 dockerd[2377]: time="2025-07-15T23:12:22.312974631Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 23:12:22.313377 dockerd[2377]: time="2025-07-15T23:12:22.313072153Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 15 23:12:22.313377 dockerd[2377]: time="2025-07-15T23:12:22.313188780Z" level=info msg="Initializing buildkit" Jul 15 23:12:22.363831 dockerd[2377]: time="2025-07-15T23:12:22.363792664Z" level=info msg="Completed buildkit initialization" Jul 15 23:12:22.368442 dockerd[2377]: time="2025-07-15T23:12:22.368409111Z" level=info msg="Daemon has completed initialization" Jul 15 23:12:22.368601 dockerd[2377]: time="2025-07-15T23:12:22.368561843Z" level=info msg="API listen on /run/docker.sock" Jul 15 23:12:22.368803 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 23:12:22.850350 containerd[1893]: time="2025-07-15T23:12:22.850254300Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\"" Jul 15 23:12:23.852959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3572530917.mount: Deactivated successfully. 
Jul 15 23:12:25.415469 containerd[1893]: time="2025-07-15T23:12:25.414860503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:25.417953 containerd[1893]: time="2025-07-15T23:12:25.417928742Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.3: active requests=0, bytes read=27352094" Jul 15 23:12:25.427349 containerd[1893]: time="2025-07-15T23:12:25.427323767Z" level=info msg="ImageCreate event name:\"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:25.431603 containerd[1893]: time="2025-07-15T23:12:25.431572892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:25.432282 containerd[1893]: time="2025-07-15T23:12:25.432262110Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.3\" with image id \"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\", size \"27348894\" in 2.581971521s" Jul 15 23:12:25.432340 containerd[1893]: time="2025-07-15T23:12:25.432287734Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\"" Jul 15 23:12:25.433756 containerd[1893]: time="2025-07-15T23:12:25.433736092Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\"" Jul 15 23:12:26.776794 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 15 23:12:26.778195 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:12:26.882336 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:12:26.884789 (kubelet)[2638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:12:27.162022 kubelet[2638]: E0715 23:12:27.161878 2638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:12:27.164326 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:12:27.164610 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:12:27.165157 systemd[1]: kubelet.service: Consumed 107ms CPU time, 107.3M memory peak. 
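The kubelet exit above is caused by /var/lib/kubelet/config.yaml not existing yet; on a kubeadm-bootstrapped node that file is only written during kubeadm init/join, so systemd keeps scheduling restarts until it appears. As a minimal sketch only (assuming the upstream k8s.io/kubelet/config/v1beta1 types and sigs.k8s.io/yaml, neither of which appears in this log), decoding that file once it exists looks roughly like:

package main

import (
	"fmt"
	"os"

	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Path taken from the kubelet error above; kubeadm writes this file during init/join.
	data, err := os.ReadFile("/var/lib/kubelet/config.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, "kubelet config not written yet:", err)
		os.Exit(1)
	}

	// Decode the YAML into the public KubeletConfiguration type.
	var cfg kubeletv1beta1.KubeletConfiguration
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, "cannot decode KubeletConfiguration:", err)
		os.Exit(1)
	}

	// Fields relevant to this boot log: cgroup driver (systemd, per the later
	// "Using cgroup driver setting received from the CRI runtime" entry) and
	// the static pod manifest path (/etc/kubernetes/manifests).
	fmt.Println("cgroupDriver:", cfg.CgroupDriver)
	fmt.Println("staticPodPath:", cfg.StaticPodPath)
}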
Jul 15 23:12:27.807839 containerd[1893]: time="2025-07-15T23:12:27.807776521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:27.813542 containerd[1893]: time="2025-07-15T23:12:27.813488718Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.3: active requests=0, bytes read=23537846" Jul 15 23:12:27.819988 containerd[1893]: time="2025-07-15T23:12:27.819942069Z" level=info msg="ImageCreate event name:\"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:27.837015 containerd[1893]: time="2025-07-15T23:12:27.836952063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:27.837541 containerd[1893]: time="2025-07-15T23:12:27.837504290Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.3\" with image id \"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\", size \"25092764\" in 2.403618075s" Jul 15 23:12:27.837600 containerd[1893]: time="2025-07-15T23:12:27.837550092Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\"" Jul 15 23:12:27.838141 containerd[1893]: time="2025-07-15T23:12:27.837950682Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\"" Jul 15 23:12:29.414221 containerd[1893]: time="2025-07-15T23:12:29.413668490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:29.416499 containerd[1893]: time="2025-07-15T23:12:29.416474019Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.3: active requests=0, bytes read=18293524" Jul 15 23:12:29.419088 containerd[1893]: time="2025-07-15T23:12:29.419068045Z" level=info msg="ImageCreate event name:\"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:29.425556 containerd[1893]: time="2025-07-15T23:12:29.425211665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:29.425673 containerd[1893]: time="2025-07-15T23:12:29.425646224Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.3\" with image id \"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\", size \"19848460\" in 1.587664909s" Jul 15 23:12:29.425759 containerd[1893]: time="2025-07-15T23:12:29.425745827Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\"" Jul 15 23:12:29.426656 
containerd[1893]: time="2025-07-15T23:12:29.426611209Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\"" Jul 15 23:12:30.724352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2821653785.mount: Deactivated successfully. Jul 15 23:12:31.031463 containerd[1893]: time="2025-07-15T23:12:31.030794382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:31.035394 containerd[1893]: time="2025-07-15T23:12:31.035366068Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=28199472" Jul 15 23:12:31.038803 containerd[1893]: time="2025-07-15T23:12:31.038779873Z" level=info msg="ImageCreate event name:\"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:31.043782 containerd[1893]: time="2025-07-15T23:12:31.043715548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:31.044109 containerd[1893]: time="2025-07-15T23:12:31.043985253Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"28198491\" in 1.617243712s" Jul 15 23:12:31.044109 containerd[1893]: time="2025-07-15T23:12:31.044009558Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\"" Jul 15 23:12:31.044501 containerd[1893]: time="2025-07-15T23:12:31.044474270Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 15 23:12:31.800917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount946143875.mount: Deactivated successfully. 
Jul 15 23:12:33.155348 containerd[1893]: time="2025-07-15T23:12:33.155290237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:33.158546 containerd[1893]: time="2025-07-15T23:12:33.158415810Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Jul 15 23:12:33.164659 containerd[1893]: time="2025-07-15T23:12:33.164617530Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:33.169584 containerd[1893]: time="2025-07-15T23:12:33.169544727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:33.170289 containerd[1893]: time="2025-07-15T23:12:33.170145471Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.125646809s" Jul 15 23:12:33.170289 containerd[1893]: time="2025-07-15T23:12:33.170174168Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 15 23:12:33.170649 containerd[1893]: time="2025-07-15T23:12:33.170614132Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 23:12:33.775795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1743959352.mount: Deactivated successfully. 
Jul 15 23:12:33.821229 containerd[1893]: time="2025-07-15T23:12:33.820732511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:12:33.823651 containerd[1893]: time="2025-07-15T23:12:33.823626909Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 15 23:12:33.829543 containerd[1893]: time="2025-07-15T23:12:33.829506780Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:12:33.836057 containerd[1893]: time="2025-07-15T23:12:33.836032781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:12:33.836368 containerd[1893]: time="2025-07-15T23:12:33.836339997Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 665.702561ms" Jul 15 23:12:33.836368 containerd[1893]: time="2025-07-15T23:12:33.836368502Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 15 23:12:33.836854 containerd[1893]: time="2025-07-15T23:12:33.836813058Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 15 23:12:34.484384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1698937004.mount: Deactivated successfully. 
Jul 15 23:12:37.016874 containerd[1893]: time="2025-07-15T23:12:37.016815019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:37.019556 containerd[1893]: time="2025-07-15T23:12:37.019355255Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334599" Jul 15 23:12:37.025076 containerd[1893]: time="2025-07-15T23:12:37.025023788Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:37.035676 containerd[1893]: time="2025-07-15T23:12:37.035613706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:37.036471 containerd[1893]: time="2025-07-15T23:12:37.036315830Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.199475451s" Jul 15 23:12:37.036471 containerd[1893]: time="2025-07-15T23:12:37.036387297Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 15 23:12:37.301348 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 15 23:12:37.303596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:12:37.422031 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:12:37.427747 (kubelet)[2795]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:12:37.530544 kubelet[2795]: E0715 23:12:37.530490 2795 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:12:37.533044 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:12:37.533151 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:12:37.533400 systemd[1]: kubelet.service: Consumed 106ms CPU time, 107.1M memory peak. Jul 15 23:12:39.447544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:12:39.447649 systemd[1]: kubelet.service: Consumed 106ms CPU time, 107.1M memory peak. Jul 15 23:12:39.449179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:12:39.468636 systemd[1]: Reload requested from client PID 2817 ('systemctl') (unit session-9.scope)... Jul 15 23:12:39.468728 systemd[1]: Reloading... Jul 15 23:12:39.572560 zram_generator::config[2863]: No configuration found. Jul 15 23:12:39.640891 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:12:39.724468 systemd[1]: Reloading finished in 255 ms. 
Jul 15 23:12:39.766944 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 15 23:12:39.767010 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 15 23:12:39.767523 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:12:39.767579 systemd[1]: kubelet.service: Consumed 70ms CPU time, 95M memory peak. Jul 15 23:12:39.769143 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:12:39.793541 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jul 15 23:12:39.986999 update_engine[1867]: I20250715 23:12:39.986522 1867 update_attempter.cc:509] Updating boot flags... Jul 15 23:12:40.120637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:12:40.123729 (kubelet)[2932]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 23:12:40.174569 kubelet[2932]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:12:40.174569 kubelet[2932]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 23:12:40.174569 kubelet[2932]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:12:40.174569 kubelet[2932]: I0715 23:12:40.174076 2932 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 23:12:40.424565 kubelet[2932]: I0715 23:12:40.423674 2932 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 15 23:12:40.424565 kubelet[2932]: I0715 23:12:40.423701 2932 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 23:12:40.424565 kubelet[2932]: I0715 23:12:40.423875 2932 server.go:956] "Client rotation is on, will bootstrap in background" Jul 15 23:12:40.439089 kubelet[2932]: E0715 23:12:40.439047 2932 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.39:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 15 23:12:40.440630 kubelet[2932]: I0715 23:12:40.440609 2932 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:12:40.446921 kubelet[2932]: I0715 23:12:40.446908 2932 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 23:12:40.449448 kubelet[2932]: I0715 23:12:40.449430 2932 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 23:12:40.450252 kubelet[2932]: I0715 23:12:40.450226 2932 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 23:12:40.450479 kubelet[2932]: I0715 23:12:40.450323 2932 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-n-7068735510","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 23:12:40.450641 kubelet[2932]: I0715 23:12:40.450627 2932 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 23:12:40.450696 kubelet[2932]: I0715 23:12:40.450690 2932 container_manager_linux.go:303] "Creating device plugin manager" Jul 15 23:12:40.451247 kubelet[2932]: I0715 23:12:40.451231 2932 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:12:40.452825 kubelet[2932]: I0715 23:12:40.452809 2932 kubelet.go:480] "Attempting to sync node with API server" Jul 15 23:12:40.452906 kubelet[2932]: I0715 23:12:40.452897 2932 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 23:12:40.452986 kubelet[2932]: I0715 23:12:40.452966 2932 kubelet.go:386] "Adding apiserver pod source" Jul 15 23:12:40.453844 kubelet[2932]: I0715 23:12:40.453830 2932 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 23:12:40.454899 kubelet[2932]: E0715 23:12:40.454862 2932 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-n-7068735510&limit=500&resourceVersion=0\": dial tcp 10.200.20.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 15 23:12:40.454963 kubelet[2932]: I0715 23:12:40.454949 2932 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 15 23:12:40.455334 kubelet[2932]: I0715 23:12:40.455309 2932 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection 
featuregate is disabled" Jul 15 23:12:40.455372 kubelet[2932]: W0715 23:12:40.455355 2932 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 15 23:12:40.457034 kubelet[2932]: I0715 23:12:40.457015 2932 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 23:12:40.457094 kubelet[2932]: I0715 23:12:40.457048 2932 server.go:1289] "Started kubelet" Jul 15 23:12:40.459375 kubelet[2932]: E0715 23:12:40.459353 2932 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 15 23:12:40.460101 kubelet[2932]: E0715 23:12:40.459495 2932 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.39:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.39:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.0.1-n-7068735510.18528fa8dedfcc94 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.1-n-7068735510,UID:ci-4372.0.1-n-7068735510,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.1-n-7068735510,},FirstTimestamp:2025-07-15 23:12:40.457030804 +0000 UTC m=+0.330246991,LastTimestamp:2025-07-15 23:12:40.457030804 +0000 UTC m=+0.330246991,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.1-n-7068735510,}" Jul 15 23:12:40.460360 kubelet[2932]: I0715 23:12:40.460096 2932 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 23:12:40.460732 kubelet[2932]: I0715 23:12:40.460681 2932 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 23:12:40.460912 kubelet[2932]: I0715 23:12:40.460891 2932 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 23:12:40.461318 kubelet[2932]: I0715 23:12:40.460114 2932 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 23:12:40.461937 kubelet[2932]: I0715 23:12:40.461920 2932 server.go:317] "Adding debug handlers to kubelet server" Jul 15 23:12:40.462833 kubelet[2932]: I0715 23:12:40.462801 2932 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 23:12:40.463132 kubelet[2932]: I0715 23:12:40.463115 2932 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 23:12:40.463476 kubelet[2932]: E0715 23:12:40.463454 2932 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.1-n-7068735510\" not found" Jul 15 23:12:40.465820 kubelet[2932]: E0715 23:12:40.465014 2932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-n-7068735510?timeout=10s\": dial tcp 10.200.20.39:6443: connect: connection refused" interval="200ms" Jul 15 23:12:40.465820 kubelet[2932]: I0715 23:12:40.465245 2932 factory.go:223] Registration of the systemd container factory successfully Jul 15 
23:12:40.465820 kubelet[2932]: I0715 23:12:40.465319 2932 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 23:12:40.467834 kubelet[2932]: E0715 23:12:40.467807 2932 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 15 23:12:40.468019 kubelet[2932]: I0715 23:12:40.468002 2932 reconciler.go:26] "Reconciler: start to sync state" Jul 15 23:12:40.468041 kubelet[2932]: I0715 23:12:40.468026 2932 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 23:12:40.468958 kubelet[2932]: I0715 23:12:40.468885 2932 factory.go:223] Registration of the containerd container factory successfully Jul 15 23:12:40.486336 kubelet[2932]: I0715 23:12:40.486297 2932 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 15 23:12:40.487154 kubelet[2932]: I0715 23:12:40.487137 2932 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 23:12:40.487315 kubelet[2932]: I0715 23:12:40.487191 2932 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 23:12:40.487315 kubelet[2932]: I0715 23:12:40.487206 2932 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:12:40.488427 kubelet[2932]: I0715 23:12:40.488375 2932 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 15 23:12:40.488427 kubelet[2932]: I0715 23:12:40.488395 2932 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 15 23:12:40.488427 kubelet[2932]: I0715 23:12:40.488414 2932 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 15 23:12:40.488674 kubelet[2932]: I0715 23:12:40.488419 2932 kubelet.go:2436] "Starting kubelet main sync loop" Jul 15 23:12:40.488674 kubelet[2932]: E0715 23:12:40.488621 2932 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 23:12:40.489794 kubelet[2932]: E0715 23:12:40.489763 2932 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 15 23:12:40.562398 kubelet[2932]: I0715 23:12:40.562369 2932 policy_none.go:49] "None policy: Start" Jul 15 23:12:40.562796 kubelet[2932]: I0715 23:12:40.562544 2932 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 23:12:40.563024 kubelet[2932]: I0715 23:12:40.562889 2932 state_mem.go:35] "Initializing new in-memory state store" Jul 15 23:12:40.563633 kubelet[2932]: E0715 23:12:40.563617 2932 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.1-n-7068735510\" not found" Jul 15 23:12:40.580642 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 23:12:40.588974 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jul 15 23:12:40.589211 kubelet[2932]: E0715 23:12:40.589097 2932 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 23:12:40.592492 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 15 23:12:40.599176 kubelet[2932]: E0715 23:12:40.599156 2932 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 15 23:12:40.599572 kubelet[2932]: I0715 23:12:40.599503 2932 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 23:12:40.599572 kubelet[2932]: I0715 23:12:40.599521 2932 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 23:12:40.599843 kubelet[2932]: I0715 23:12:40.599822 2932 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 23:12:40.603248 kubelet[2932]: E0715 23:12:40.603198 2932 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 23:12:40.603248 kubelet[2932]: E0715 23:12:40.603233 2932 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.0.1-n-7068735510\" not found" Jul 15 23:12:40.665711 kubelet[2932]: E0715 23:12:40.665677 2932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-n-7068735510?timeout=10s\": dial tcp 10.200.20.39:6443: connect: connection refused" interval="400ms" Jul 15 23:12:40.701562 kubelet[2932]: I0715 23:12:40.701197 2932 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-n-7068735510" Jul 15 23:12:40.702103 kubelet[2932]: E0715 23:12:40.702018 2932 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.39:6443/api/v1/nodes\": dial tcp 10.200.20.39:6443: connect: connection refused" node="ci-4372.0.1-n-7068735510" Jul 15 23:12:40.801602 systemd[1]: Created slice kubepods-burstable-podb8d1a9bcf36bdd8519c53ed929d5ed39.slice - libcontainer container kubepods-burstable-podb8d1a9bcf36bdd8519c53ed929d5ed39.slice. Jul 15 23:12:40.811608 kubelet[2932]: E0715 23:12:40.811357 2932 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-7068735510\" not found" node="ci-4372.0.1-n-7068735510" Jul 15 23:12:40.815601 systemd[1]: Created slice kubepods-burstable-pod613a8c3311863c8470b3dd787fb6ed9a.slice - libcontainer container kubepods-burstable-pod613a8c3311863c8470b3dd787fb6ed9a.slice. Jul 15 23:12:40.817763 kubelet[2932]: E0715 23:12:40.817311 2932 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-7068735510\" not found" node="ci-4372.0.1-n-7068735510" Jul 15 23:12:40.819808 systemd[1]: Created slice kubepods-burstable-pod4d735bb552d8ba9cf4321b09d02c98b6.slice - libcontainer container kubepods-burstable-pod4d735bb552d8ba9cf4321b09d02c98b6.slice. 
Jul 15 23:12:40.821171 kubelet[2932]: E0715 23:12:40.821040 2932 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-7068735510\" not found" node="ci-4372.0.1-n-7068735510" Jul 15 23:12:40.871479 kubelet[2932]: I0715 23:12:40.871438 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8d1a9bcf36bdd8519c53ed929d5ed39-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-n-7068735510\" (UID: \"b8d1a9bcf36bdd8519c53ed929d5ed39\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-7068735510" Jul 15 23:12:40.871733 kubelet[2932]: I0715 23:12:40.871667 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/613a8c3311863c8470b3dd787fb6ed9a-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-7068735510\" (UID: \"613a8c3311863c8470b3dd787fb6ed9a\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7068735510" Jul 15 23:12:40.871733 kubelet[2932]: I0715 23:12:40.871691 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/613a8c3311863c8470b3dd787fb6ed9a-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-n-7068735510\" (UID: \"613a8c3311863c8470b3dd787fb6ed9a\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7068735510" Jul 15 23:12:40.871733 kubelet[2932]: I0715 23:12:40.871702 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/613a8c3311863c8470b3dd787fb6ed9a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-n-7068735510\" (UID: \"613a8c3311863c8470b3dd787fb6ed9a\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7068735510" Jul 15 23:12:40.871733 kubelet[2932]: I0715 23:12:40.871716 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8d1a9bcf36bdd8519c53ed929d5ed39-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-n-7068735510\" (UID: \"b8d1a9bcf36bdd8519c53ed929d5ed39\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-7068735510" Jul 15 23:12:40.871927 kubelet[2932]: I0715 23:12:40.871742 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/613a8c3311863c8470b3dd787fb6ed9a-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-7068735510\" (UID: \"613a8c3311863c8470b3dd787fb6ed9a\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7068735510" Jul 15 23:12:40.871927 kubelet[2932]: I0715 23:12:40.871769 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/613a8c3311863c8470b3dd787fb6ed9a-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-n-7068735510\" (UID: \"613a8c3311863c8470b3dd787fb6ed9a\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7068735510" Jul 15 23:12:40.871927 kubelet[2932]: I0715 23:12:40.871811 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4d735bb552d8ba9cf4321b09d02c98b6-kubeconfig\") pod 
\"kube-scheduler-ci-4372.0.1-n-7068735510\" (UID: \"4d735bb552d8ba9cf4321b09d02c98b6\") " pod="kube-system/kube-scheduler-ci-4372.0.1-n-7068735510" Jul 15 23:12:40.871927 kubelet[2932]: I0715 23:12:40.871832 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8d1a9bcf36bdd8519c53ed929d5ed39-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-n-7068735510\" (UID: \"b8d1a9bcf36bdd8519c53ed929d5ed39\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-7068735510" Jul 15 23:12:40.903951 kubelet[2932]: I0715 23:12:40.903906 2932 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-n-7068735510" Jul 15 23:12:40.904237 kubelet[2932]: E0715 23:12:40.904213 2932 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.39:6443/api/v1/nodes\": dial tcp 10.200.20.39:6443: connect: connection refused" node="ci-4372.0.1-n-7068735510" Jul 15 23:12:41.066887 kubelet[2932]: E0715 23:12:41.066782 2932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-n-7068735510?timeout=10s\": dial tcp 10.200.20.39:6443: connect: connection refused" interval="800ms" Jul 15 23:12:41.113087 containerd[1893]: time="2025-07-15T23:12:41.113047847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-n-7068735510,Uid:b8d1a9bcf36bdd8519c53ed929d5ed39,Namespace:kube-system,Attempt:0,}" Jul 15 23:12:41.118550 containerd[1893]: time="2025-07-15T23:12:41.118499283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-n-7068735510,Uid:613a8c3311863c8470b3dd787fb6ed9a,Namespace:kube-system,Attempt:0,}" Jul 15 23:12:41.122176 containerd[1893]: time="2025-07-15T23:12:41.122152460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-n-7068735510,Uid:4d735bb552d8ba9cf4321b09d02c98b6,Namespace:kube-system,Attempt:0,}" Jul 15 23:12:41.305743 kubelet[2932]: I0715 23:12:41.305693 2932 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-n-7068735510" Jul 15 23:12:41.306034 kubelet[2932]: E0715 23:12:41.305997 2932 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.39:6443/api/v1/nodes\": dial tcp 10.200.20.39:6443: connect: connection refused" node="ci-4372.0.1-n-7068735510" Jul 15 23:12:41.679955 kubelet[2932]: E0715 23:12:41.679885 2932 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 15 23:12:41.697498 kubelet[2932]: E0715 23:12:41.697466 2932 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 15 23:12:41.773523 kubelet[2932]: E0715 23:12:41.773485 2932 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.200.20.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-n-7068735510&limit=500&resourceVersion=0\": dial tcp 10.200.20.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 15 23:12:41.867521 kubelet[2932]: E0715 23:12:41.867479 2932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-n-7068735510?timeout=10s\": dial tcp 10.200.20.39:6443: connect: connection refused" interval="1.6s" Jul 15 23:12:42.026163 kubelet[2932]: E0715 23:12:42.026051 2932 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 15 23:12:42.108508 kubelet[2932]: I0715 23:12:42.108445 2932 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-n-7068735510" Jul 15 23:12:42.108821 kubelet[2932]: E0715 23:12:42.108799 2932 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.39:6443/api/v1/nodes\": dial tcp 10.200.20.39:6443: connect: connection refused" node="ci-4372.0.1-n-7068735510" Jul 15 23:12:42.429507 containerd[1893]: time="2025-07-15T23:12:42.429423835Z" level=info msg="connecting to shim 4f1bec432667a4a1495119a49a93c4d562ec2ffdc46968a9c4558303084452a7" address="unix:///run/containerd/s/3d8fcac188160f72bbd654fa6bee0499646af900fb4bad1df3a74ca1cc361107" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:12:42.453249 containerd[1893]: time="2025-07-15T23:12:42.453202875Z" level=info msg="connecting to shim 46fd2296cf297e1259211b20c46f34e1fa1e257c1acead5fab575d22856f9bae" address="unix:///run/containerd/s/14214f3d7940dd874ac701aa871f1b8def4d0c609506e89659074281313e2b51" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:12:42.455680 systemd[1]: Started cri-containerd-4f1bec432667a4a1495119a49a93c4d562ec2ffdc46968a9c4558303084452a7.scope - libcontainer container 4f1bec432667a4a1495119a49a93c4d562ec2ffdc46968a9c4558303084452a7. Jul 15 23:12:42.459256 containerd[1893]: time="2025-07-15T23:12:42.459219215Z" level=info msg="connecting to shim 4bab8410e3890b4b9662d4a924d2ff12db5affde02d33bdce198429e7937ca6e" address="unix:///run/containerd/s/82cf545e27614e7ad9ca30e08bef94f4188e2ecf47ea879f01637c4385de4db5" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:12:42.483794 systemd[1]: Started cri-containerd-46fd2296cf297e1259211b20c46f34e1fa1e257c1acead5fab575d22856f9bae.scope - libcontainer container 46fd2296cf297e1259211b20c46f34e1fa1e257c1acead5fab575d22856f9bae. Jul 15 23:12:42.488206 systemd[1]: Started cri-containerd-4bab8410e3890b4b9662d4a924d2ff12db5affde02d33bdce198429e7937ca6e.scope - libcontainer container 4bab8410e3890b4b9662d4a924d2ff12db5affde02d33bdce198429e7937ca6e. 
Jul 15 23:12:42.503200 kubelet[2932]: E0715 23:12:42.503155 2932 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.39:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 15 23:12:42.526880 containerd[1893]: time="2025-07-15T23:12:42.526840243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-n-7068735510,Uid:b8d1a9bcf36bdd8519c53ed929d5ed39,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f1bec432667a4a1495119a49a93c4d562ec2ffdc46968a9c4558303084452a7\"" Jul 15 23:12:42.535982 containerd[1893]: time="2025-07-15T23:12:42.535419293Z" level=info msg="CreateContainer within sandbox \"4f1bec432667a4a1495119a49a93c4d562ec2ffdc46968a9c4558303084452a7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 23:12:42.537208 containerd[1893]: time="2025-07-15T23:12:42.537174517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-n-7068735510,Uid:4d735bb552d8ba9cf4321b09d02c98b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"46fd2296cf297e1259211b20c46f34e1fa1e257c1acead5fab575d22856f9bae\"" Jul 15 23:12:42.543493 containerd[1893]: time="2025-07-15T23:12:42.543458192Z" level=info msg="CreateContainer within sandbox \"46fd2296cf297e1259211b20c46f34e1fa1e257c1acead5fab575d22856f9bae\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 23:12:42.548472 containerd[1893]: time="2025-07-15T23:12:42.548441400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-n-7068735510,Uid:613a8c3311863c8470b3dd787fb6ed9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bab8410e3890b4b9662d4a924d2ff12db5affde02d33bdce198429e7937ca6e\"" Jul 15 23:12:42.556113 containerd[1893]: time="2025-07-15T23:12:42.556071544Z" level=info msg="CreateContainer within sandbox \"4bab8410e3890b4b9662d4a924d2ff12db5affde02d33bdce198429e7937ca6e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 23:12:42.591848 containerd[1893]: time="2025-07-15T23:12:42.591755693Z" level=info msg="Container f40d874dccb1250f16b1ae6c6405203f6fb1345f2e7a381e38461baeedba095d: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:12:42.618779 containerd[1893]: time="2025-07-15T23:12:42.618734541Z" level=info msg="CreateContainer within sandbox \"4f1bec432667a4a1495119a49a93c4d562ec2ffdc46968a9c4558303084452a7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f40d874dccb1250f16b1ae6c6405203f6fb1345f2e7a381e38461baeedba095d\"" Jul 15 23:12:42.619791 containerd[1893]: time="2025-07-15T23:12:42.619767105Z" level=info msg="StartContainer for \"f40d874dccb1250f16b1ae6c6405203f6fb1345f2e7a381e38461baeedba095d\"" Jul 15 23:12:42.620553 containerd[1893]: time="2025-07-15T23:12:42.620272967Z" level=info msg="Container cc85ef2837aa391ed9964e7d8edd013bd46d82b676142078ea04fc2795715050: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:12:42.620553 containerd[1893]: time="2025-07-15T23:12:42.620519158Z" level=info msg="connecting to shim f40d874dccb1250f16b1ae6c6405203f6fb1345f2e7a381e38461baeedba095d" address="unix:///run/containerd/s/3d8fcac188160f72bbd654fa6bee0499646af900fb4bad1df3a74ca1cc361107" protocol=ttrpc version=3 Jul 15 23:12:42.630843 containerd[1893]: 
time="2025-07-15T23:12:42.630805230Z" level=info msg="Container 099d7d94d4d9564e64bcd7be953d978843f5750ddb6df84b31c6110790ef43b2: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:12:42.635681 systemd[1]: Started cri-containerd-f40d874dccb1250f16b1ae6c6405203f6fb1345f2e7a381e38461baeedba095d.scope - libcontainer container f40d874dccb1250f16b1ae6c6405203f6fb1345f2e7a381e38461baeedba095d. Jul 15 23:12:42.656488 containerd[1893]: time="2025-07-15T23:12:42.655988109Z" level=info msg="CreateContainer within sandbox \"4bab8410e3890b4b9662d4a924d2ff12db5affde02d33bdce198429e7937ca6e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cc85ef2837aa391ed9964e7d8edd013bd46d82b676142078ea04fc2795715050\"" Jul 15 23:12:42.657467 containerd[1893]: time="2025-07-15T23:12:42.657443973Z" level=info msg="StartContainer for \"cc85ef2837aa391ed9964e7d8edd013bd46d82b676142078ea04fc2795715050\"" Jul 15 23:12:42.658735 containerd[1893]: time="2025-07-15T23:12:42.658681606Z" level=info msg="connecting to shim cc85ef2837aa391ed9964e7d8edd013bd46d82b676142078ea04fc2795715050" address="unix:///run/containerd/s/82cf545e27614e7ad9ca30e08bef94f4188e2ecf47ea879f01637c4385de4db5" protocol=ttrpc version=3 Jul 15 23:12:42.661134 containerd[1893]: time="2025-07-15T23:12:42.661098328Z" level=info msg="CreateContainer within sandbox \"46fd2296cf297e1259211b20c46f34e1fa1e257c1acead5fab575d22856f9bae\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"099d7d94d4d9564e64bcd7be953d978843f5750ddb6df84b31c6110790ef43b2\"" Jul 15 23:12:42.661899 containerd[1893]: time="2025-07-15T23:12:42.661875557Z" level=info msg="StartContainer for \"099d7d94d4d9564e64bcd7be953d978843f5750ddb6df84b31c6110790ef43b2\"" Jul 15 23:12:42.662941 containerd[1893]: time="2025-07-15T23:12:42.662899921Z" level=info msg="connecting to shim 099d7d94d4d9564e64bcd7be953d978843f5750ddb6df84b31c6110790ef43b2" address="unix:///run/containerd/s/14214f3d7940dd874ac701aa871f1b8def4d0c609506e89659074281313e2b51" protocol=ttrpc version=3 Jul 15 23:12:42.676446 containerd[1893]: time="2025-07-15T23:12:42.676011887Z" level=info msg="StartContainer for \"f40d874dccb1250f16b1ae6c6405203f6fb1345f2e7a381e38461baeedba095d\" returns successfully" Jul 15 23:12:42.686789 systemd[1]: Started cri-containerd-cc85ef2837aa391ed9964e7d8edd013bd46d82b676142078ea04fc2795715050.scope - libcontainer container cc85ef2837aa391ed9964e7d8edd013bd46d82b676142078ea04fc2795715050. Jul 15 23:12:42.694710 systemd[1]: Started cri-containerd-099d7d94d4d9564e64bcd7be953d978843f5750ddb6df84b31c6110790ef43b2.scope - libcontainer container 099d7d94d4d9564e64bcd7be953d978843f5750ddb6df84b31c6110790ef43b2. 
Jul 15 23:12:42.749386 containerd[1893]: time="2025-07-15T23:12:42.749304357Z" level=info msg="StartContainer for \"cc85ef2837aa391ed9964e7d8edd013bd46d82b676142078ea04fc2795715050\" returns successfully" Jul 15 23:12:42.755284 containerd[1893]: time="2025-07-15T23:12:42.755252879Z" level=info msg="StartContainer for \"099d7d94d4d9564e64bcd7be953d978843f5750ddb6df84b31c6110790ef43b2\" returns successfully" Jul 15 23:12:43.500262 kubelet[2932]: E0715 23:12:43.500226 2932 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-7068735510\" not found" node="ci-4372.0.1-n-7068735510" Jul 15 23:12:43.503612 kubelet[2932]: E0715 23:12:43.503586 2932 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-7068735510\" not found" node="ci-4372.0.1-n-7068735510" Jul 15 23:12:43.506128 kubelet[2932]: E0715 23:12:43.506108 2932 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-7068735510\" not found" node="ci-4372.0.1-n-7068735510" Jul 15 23:12:43.714368 kubelet[2932]: I0715 23:12:43.714339 2932 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-n-7068735510" Jul 15 23:12:43.771639 kubelet[2932]: E0715 23:12:43.771518 2932 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.0.1-n-7068735510\" not found" node="ci-4372.0.1-n-7068735510" Jul 15 23:12:43.845506 kubelet[2932]: I0715 23:12:43.845300 2932 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.1-n-7068735510" Jul 15 23:12:43.845506 kubelet[2932]: E0715 23:12:43.845339 2932 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4372.0.1-n-7068735510\": node \"ci-4372.0.1-n-7068735510\" not found" Jul 15 23:12:43.885690 kubelet[2932]: E0715 23:12:43.885645 2932 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.1-n-7068735510\" not found" Jul 15 23:12:43.986078 kubelet[2932]: E0715 23:12:43.986037 2932 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.1-n-7068735510\" not found" Jul 15 23:12:44.164694 kubelet[2932]: I0715 23:12:44.164655 2932 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-n-7068735510" Jul 15 23:12:44.168574 kubelet[2932]: E0715 23:12:44.168546 2932 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-n-7068735510\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.0.1-n-7068735510" Jul 15 23:12:44.168700 kubelet[2932]: I0715 23:12:44.168688 2932 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7068735510" Jul 15 23:12:44.170141 kubelet[2932]: E0715 23:12:44.170113 2932 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.0.1-n-7068735510\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7068735510" Jul 15 23:12:44.170244 kubelet[2932]: I0715 23:12:44.170232 2932 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-n-7068735510" Jul 15 23:12:44.171370 kubelet[2932]: E0715 23:12:44.171337 2932 kubelet.go:3311] "Failed creating 
a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-n-7068735510\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.0.1-n-7068735510" Jul 15 23:12:44.460026 kubelet[2932]: I0715 23:12:44.459739 2932 apiserver.go:52] "Watching apiserver" Jul 15 23:12:44.468940 kubelet[2932]: I0715 23:12:44.468903 2932 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 23:12:44.504539 kubelet[2932]: I0715 23:12:44.504473 2932 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-n-7068735510" Jul 15 23:12:44.505269 kubelet[2932]: I0715 23:12:44.504812 2932 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-n-7068735510" Jul 15 23:12:44.506543 kubelet[2932]: E0715 23:12:44.506421 2932 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-n-7068735510\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.0.1-n-7068735510" Jul 15 23:12:44.506745 kubelet[2932]: E0715 23:12:44.506724 2932 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-n-7068735510\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.0.1-n-7068735510" Jul 15 23:12:45.506217 kubelet[2932]: I0715 23:12:45.506073 2932 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-n-7068735510" Jul 15 23:12:45.514776 kubelet[2932]: I0715 23:12:45.514723 2932 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 15 23:12:46.087477 systemd[1]: Reload requested from client PID 3274 ('systemctl') (unit session-9.scope)... Jul 15 23:12:46.087491 systemd[1]: Reloading... Jul 15 23:12:46.160560 zram_generator::config[3317]: No configuration found. Jul 15 23:12:46.232469 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:12:46.324568 systemd[1]: Reloading finished in 236 ms. Jul 15 23:12:46.354411 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:12:46.369502 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 23:12:46.369737 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:12:46.369790 systemd[1]: kubelet.service: Consumed 560ms CPU time, 126.1M memory peak. Jul 15 23:12:46.371797 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:12:46.464587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:12:46.470881 (kubelet)[3384]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 23:12:46.494153 kubelet[3384]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:12:46.494438 kubelet[3384]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jul 15 23:12:46.494474 kubelet[3384]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:12:46.494610 kubelet[3384]: I0715 23:12:46.494583 3384 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 23:12:46.498842 kubelet[3384]: I0715 23:12:46.498822 3384 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 15 23:12:46.498931 kubelet[3384]: I0715 23:12:46.498921 3384 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 23:12:46.499116 kubelet[3384]: I0715 23:12:46.499104 3384 server.go:956] "Client rotation is on, will bootstrap in background" Jul 15 23:12:46.500014 kubelet[3384]: I0715 23:12:46.499997 3384 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 15 23:12:46.501754 kubelet[3384]: I0715 23:12:46.501734 3384 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:12:46.504945 kubelet[3384]: I0715 23:12:46.504925 3384 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 23:12:46.509046 kubelet[3384]: I0715 23:12:46.509028 3384 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 15 23:12:46.509192 kubelet[3384]: I0715 23:12:46.509174 3384 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 23:12:46.509295 kubelet[3384]: I0715 23:12:46.509191 3384 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-n-7068735510","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 23:12:46.509351 kubelet[3384]: I0715 23:12:46.509301 3384 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 
23:12:46.509351 kubelet[3384]: I0715 23:12:46.509308 3384 container_manager_linux.go:303] "Creating device plugin manager" Jul 15 23:12:46.509351 kubelet[3384]: I0715 23:12:46.509341 3384 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:12:46.509450 kubelet[3384]: I0715 23:12:46.509438 3384 kubelet.go:480] "Attempting to sync node with API server" Jul 15 23:12:46.509450 kubelet[3384]: I0715 23:12:46.509449 3384 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 23:12:46.509482 kubelet[3384]: I0715 23:12:46.509469 3384 kubelet.go:386] "Adding apiserver pod source" Jul 15 23:12:46.509482 kubelet[3384]: I0715 23:12:46.509480 3384 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 23:12:46.511604 kubelet[3384]: I0715 23:12:46.511585 3384 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 15 23:12:46.511947 kubelet[3384]: I0715 23:12:46.511931 3384 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 15 23:12:46.515451 kubelet[3384]: I0715 23:12:46.515435 3384 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 23:12:46.515504 kubelet[3384]: I0715 23:12:46.515466 3384 server.go:1289] "Started kubelet" Jul 15 23:12:46.516827 kubelet[3384]: I0715 23:12:46.516808 3384 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 23:12:46.527418 kubelet[3384]: I0715 23:12:46.527387 3384 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 15 23:12:46.530601 kubelet[3384]: I0715 23:12:46.530565 3384 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 23:12:46.531237 kubelet[3384]: I0715 23:12:46.530728 3384 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 23:12:46.532414 kubelet[3384]: I0715 23:12:46.532383 3384 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 23:12:46.534032 kubelet[3384]: I0715 23:12:46.532597 3384 reconciler.go:26] "Reconciler: start to sync state" Jul 15 23:12:46.536088 kubelet[3384]: I0715 23:12:46.532754 3384 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 23:12:46.536223 kubelet[3384]: I0715 23:12:46.536209 3384 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 23:12:46.536387 kubelet[3384]: I0715 23:12:46.536044 3384 server.go:317] "Adding debug handlers to kubelet server" Jul 15 23:12:46.537179 kubelet[3384]: I0715 23:12:46.537159 3384 factory.go:223] Registration of the systemd container factory successfully Jul 15 23:12:46.537333 kubelet[3384]: I0715 23:12:46.537318 3384 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 23:12:46.538035 kubelet[3384]: I0715 23:12:46.535070 3384 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 23:12:46.539077 kubelet[3384]: E0715 23:12:46.539052 3384 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 23:12:46.540309 kubelet[3384]: I0715 23:12:46.540294 3384 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 15 23:12:46.540407 kubelet[3384]: I0715 23:12:46.540398 3384 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 15 23:12:46.540464 kubelet[3384]: I0715 23:12:46.540457 3384 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 15 23:12:46.540506 kubelet[3384]: I0715 23:12:46.540498 3384 kubelet.go:2436] "Starting kubelet main sync loop" Jul 15 23:12:46.540599 kubelet[3384]: E0715 23:12:46.540576 3384 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 23:12:46.542242 kubelet[3384]: I0715 23:12:46.541389 3384 factory.go:223] Registration of the containerd container factory successfully Jul 15 23:12:46.571143 kubelet[3384]: I0715 23:12:46.570805 3384 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 23:12:46.571348 kubelet[3384]: I0715 23:12:46.571334 3384 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 23:12:46.571433 kubelet[3384]: I0715 23:12:46.571423 3384 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:12:46.571615 kubelet[3384]: I0715 23:12:46.571601 3384 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 23:12:46.571680 kubelet[3384]: I0715 23:12:46.571661 3384 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 23:12:46.571727 kubelet[3384]: I0715 23:12:46.571720 3384 policy_none.go:49] "None policy: Start" Jul 15 23:12:46.571771 kubelet[3384]: I0715 23:12:46.571764 3384 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 23:12:46.571810 kubelet[3384]: I0715 23:12:46.571804 3384 state_mem.go:35] "Initializing new in-memory state store" Jul 15 23:12:46.571933 kubelet[3384]: I0715 23:12:46.571921 3384 state_mem.go:75] "Updated machine memory state" Jul 15 23:12:46.576578 kubelet[3384]: E0715 23:12:46.576506 3384 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 15 23:12:46.577644 kubelet[3384]: I0715 23:12:46.577348 3384 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 23:12:46.577644 kubelet[3384]: I0715 23:12:46.577363 3384 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 23:12:46.577644 kubelet[3384]: I0715 23:12:46.577580 3384 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 23:12:46.579417 kubelet[3384]: E0715 23:12:46.579402 3384 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 15 23:12:46.641849 kubelet[3384]: I0715 23:12:46.641740 3384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-n-7068735510" Jul 15 23:12:46.642505 kubelet[3384]: I0715 23:12:46.642116 3384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-n-7068735510" Jul 15 23:12:46.642505 kubelet[3384]: I0715 23:12:46.642354 3384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7068735510" Jul 15 23:12:46.656832 kubelet[3384]: I0715 23:12:46.656740 3384 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 15 23:12:46.656832 kubelet[3384]: I0715 23:12:46.656784 3384 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 15 23:12:46.656832 kubelet[3384]: E0715 23:12:46.656803 3384 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-n-7068735510\" already exists" pod="kube-system/kube-scheduler-ci-4372.0.1-n-7068735510" Jul 15 23:12:46.657464 kubelet[3384]: I0715 23:12:46.657327 3384 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 15 23:12:46.685984 kubelet[3384]: I0715 23:12:46.685897 3384 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-n-7068735510" Jul 15 23:12:46.699756 kubelet[3384]: I0715 23:12:46.699729 3384 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372.0.1-n-7068735510" Jul 15 23:12:46.699869 kubelet[3384]: I0715 23:12:46.699806 3384 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.1-n-7068735510" Jul 15 23:12:46.736937 kubelet[3384]: I0715 23:12:46.736815 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8d1a9bcf36bdd8519c53ed929d5ed39-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-n-7068735510\" (UID: \"b8d1a9bcf36bdd8519c53ed929d5ed39\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-7068735510" Jul 15 23:12:46.736937 kubelet[3384]: I0715 23:12:46.736848 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/613a8c3311863c8470b3dd787fb6ed9a-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-7068735510\" (UID: \"613a8c3311863c8470b3dd787fb6ed9a\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7068735510" Jul 15 23:12:46.736937 kubelet[3384]: I0715 23:12:46.736864 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4d735bb552d8ba9cf4321b09d02c98b6-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-n-7068735510\" (UID: \"4d735bb552d8ba9cf4321b09d02c98b6\") " pod="kube-system/kube-scheduler-ci-4372.0.1-n-7068735510" Jul 15 23:12:46.736937 kubelet[3384]: I0715 23:12:46.736873 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8d1a9bcf36bdd8519c53ed929d5ed39-ca-certs\") pod 
\"kube-apiserver-ci-4372.0.1-n-7068735510\" (UID: \"b8d1a9bcf36bdd8519c53ed929d5ed39\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-7068735510" Jul 15 23:12:46.736937 kubelet[3384]: I0715 23:12:46.736886 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8d1a9bcf36bdd8519c53ed929d5ed39-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-n-7068735510\" (UID: \"b8d1a9bcf36bdd8519c53ed929d5ed39\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-7068735510" Jul 15 23:12:46.737130 kubelet[3384]: I0715 23:12:46.736898 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/613a8c3311863c8470b3dd787fb6ed9a-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-n-7068735510\" (UID: \"613a8c3311863c8470b3dd787fb6ed9a\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7068735510" Jul 15 23:12:46.737130 kubelet[3384]: I0715 23:12:46.736928 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/613a8c3311863c8470b3dd787fb6ed9a-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-7068735510\" (UID: \"613a8c3311863c8470b3dd787fb6ed9a\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7068735510" Jul 15 23:12:46.737130 kubelet[3384]: I0715 23:12:46.736958 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/613a8c3311863c8470b3dd787fb6ed9a-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-n-7068735510\" (UID: \"613a8c3311863c8470b3dd787fb6ed9a\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7068735510" Jul 15 23:12:46.737130 kubelet[3384]: I0715 23:12:46.736971 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/613a8c3311863c8470b3dd787fb6ed9a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-n-7068735510\" (UID: \"613a8c3311863c8470b3dd787fb6ed9a\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7068735510" Jul 15 23:12:47.511369 kubelet[3384]: I0715 23:12:47.511326 3384 apiserver.go:52] "Watching apiserver" Jul 15 23:12:47.535146 kubelet[3384]: I0715 23:12:47.535114 3384 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 23:12:47.558396 kubelet[3384]: I0715 23:12:47.558369 3384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-n-7068735510" Jul 15 23:12:47.559314 kubelet[3384]: I0715 23:12:47.559288 3384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-n-7068735510" Jul 15 23:12:47.569343 kubelet[3384]: I0715 23:12:47.569318 3384 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 15 23:12:47.569343 kubelet[3384]: I0715 23:12:47.569349 3384 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 15 23:12:47.569479 kubelet[3384]: E0715 23:12:47.569378 3384 kubelet.go:3311] "Failed creating a mirror 
pod" err="pods \"kube-apiserver-ci-4372.0.1-n-7068735510\" already exists" pod="kube-system/kube-apiserver-ci-4372.0.1-n-7068735510" Jul 15 23:12:47.569623 kubelet[3384]: E0715 23:12:47.569593 3384 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-n-7068735510\" already exists" pod="kube-system/kube-scheduler-ci-4372.0.1-n-7068735510" Jul 15 23:12:47.578883 kubelet[3384]: I0715 23:12:47.578843 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.0.1-n-7068735510" podStartSLOduration=2.578830037 podStartE2EDuration="2.578830037s" podCreationTimestamp="2025-07-15 23:12:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:12:47.578584063 +0000 UTC m=+1.104727108" watchObservedRunningTime="2025-07-15 23:12:47.578830037 +0000 UTC m=+1.104973074" Jul 15 23:12:47.592117 sudo[3419]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 15 23:12:47.592328 sudo[3419]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 15 23:12:47.602234 kubelet[3384]: I0715 23:12:47.602184 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.0.1-n-7068735510" podStartSLOduration=1.602170746 podStartE2EDuration="1.602170746s" podCreationTimestamp="2025-07-15 23:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:12:47.589994118 +0000 UTC m=+1.116137155" watchObservedRunningTime="2025-07-15 23:12:47.602170746 +0000 UTC m=+1.128313783" Jul 15 23:12:47.951409 sudo[3419]: pam_unix(sudo:session): session closed for user root Jul 15 23:12:49.076346 sudo[2360]: pam_unix(sudo:session): session closed for user root Jul 15 23:12:49.162866 sshd[2359]: Connection closed by 10.200.16.10 port 49592 Jul 15 23:12:49.162213 sshd-session[2357]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:49.164946 systemd-logind[1858]: Session 9 logged out. Waiting for processes to exit. Jul 15 23:12:49.165277 systemd[1]: sshd@6-10.200.20.39:22-10.200.16.10:49592.service: Deactivated successfully. Jul 15 23:12:49.166846 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 23:12:49.167372 systemd[1]: session-9.scope: Consumed 3.514s CPU time, 271.7M memory peak. Jul 15 23:12:49.169894 systemd-logind[1858]: Removed session 9. Jul 15 23:12:51.854187 kubelet[3384]: I0715 23:12:51.854130 3384 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 23:12:51.854960 containerd[1893]: time="2025-07-15T23:12:51.854930656Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 15 23:12:51.855477 kubelet[3384]: I0715 23:12:51.855309 3384 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 23:12:52.321718 kubelet[3384]: I0715 23:12:52.321561 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7068735510" podStartSLOduration=6.321545439 podStartE2EDuration="6.321545439s" podCreationTimestamp="2025-07-15 23:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:12:47.603635442 +0000 UTC m=+1.129778479" watchObservedRunningTime="2025-07-15 23:12:52.321545439 +0000 UTC m=+5.847688476" Jul 15 23:12:52.342959 systemd[1]: Created slice kubepods-besteffort-podcbaa45fe_d3e0_4129_abe5_535c073ea88e.slice - libcontainer container kubepods-besteffort-podcbaa45fe_d3e0_4129_abe5_535c073ea88e.slice. Jul 15 23:12:52.358432 systemd[1]: Created slice kubepods-burstable-pod6c383ca1_9f09_448b_a6c8_6030ba209ebb.slice - libcontainer container kubepods-burstable-pod6c383ca1_9f09_448b_a6c8_6030ba209ebb.slice. Jul 15 23:12:52.365005 kubelet[3384]: I0715 23:12:52.364922 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cbaa45fe-d3e0-4129-abe5-535c073ea88e-lib-modules\") pod \"kube-proxy-zhcbf\" (UID: \"cbaa45fe-d3e0-4129-abe5-535c073ea88e\") " pod="kube-system/kube-proxy-zhcbf" Jul 15 23:12:52.365121 kubelet[3384]: I0715 23:12:52.365095 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-etc-cni-netd\") pod \"cilium-kv6x4\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " pod="kube-system/cilium-kv6x4" Jul 15 23:12:52.365121 kubelet[3384]: I0715 23:12:52.365112 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c383ca1-9f09-448b-a6c8-6030ba209ebb-clustermesh-secrets\") pod \"cilium-kv6x4\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " pod="kube-system/cilium-kv6x4" Jul 15 23:12:52.365161 kubelet[3384]: I0715 23:12:52.365123 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-host-proc-sys-net\") pod \"cilium-kv6x4\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " pod="kube-system/cilium-kv6x4" Jul 15 23:12:52.365161 kubelet[3384]: I0715 23:12:52.365133 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxbfv\" (UniqueName: \"kubernetes.io/projected/6c383ca1-9f09-448b-a6c8-6030ba209ebb-kube-api-access-gxbfv\") pod \"cilium-kv6x4\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " pod="kube-system/cilium-kv6x4" Jul 15 23:12:52.365327 kubelet[3384]: I0715 23:12:52.365312 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-cni-path\") pod \"cilium-kv6x4\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " pod="kube-system/cilium-kv6x4" Jul 15 23:12:52.365375 kubelet[3384]: I0715 23:12:52.365329 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-xtables-lock\") pod \"cilium-kv6x4\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " pod="kube-system/cilium-kv6x4" Jul 15 23:12:52.365375 kubelet[3384]: I0715 23:12:52.365340 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c383ca1-9f09-448b-a6c8-6030ba209ebb-cilium-config-path\") pod \"cilium-kv6x4\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " pod="kube-system/cilium-kv6x4" Jul 15 23:12:52.365375 kubelet[3384]: I0715 23:12:52.365349 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-host-proc-sys-kernel\") pod \"cilium-kv6x4\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " pod="kube-system/cilium-kv6x4" Jul 15 23:12:52.365375 kubelet[3384]: I0715 23:12:52.365359 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-bpf-maps\") pod \"cilium-kv6x4\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " pod="kube-system/cilium-kv6x4" Jul 15 23:12:52.365436 kubelet[3384]: I0715 23:12:52.365377 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-hostproc\") pod \"cilium-kv6x4\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " pod="kube-system/cilium-kv6x4" Jul 15 23:12:52.365436 kubelet[3384]: I0715 23:12:52.365389 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cbaa45fe-d3e0-4129-abe5-535c073ea88e-kube-proxy\") pod \"kube-proxy-zhcbf\" (UID: \"cbaa45fe-d3e0-4129-abe5-535c073ea88e\") " pod="kube-system/kube-proxy-zhcbf" Jul 15 23:12:52.365436 kubelet[3384]: I0715 23:12:52.365406 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thjmg\" (UniqueName: \"kubernetes.io/projected/cbaa45fe-d3e0-4129-abe5-535c073ea88e-kube-api-access-thjmg\") pod \"kube-proxy-zhcbf\" (UID: \"cbaa45fe-d3e0-4129-abe5-535c073ea88e\") " pod="kube-system/kube-proxy-zhcbf" Jul 15 23:12:52.365436 kubelet[3384]: I0715 23:12:52.365415 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-cilium-run\") pod \"cilium-kv6x4\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " pod="kube-system/cilium-kv6x4" Jul 15 23:12:52.365436 kubelet[3384]: I0715 23:12:52.365425 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-cilium-cgroup\") pod \"cilium-kv6x4\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " pod="kube-system/cilium-kv6x4" Jul 15 23:12:52.365436 kubelet[3384]: I0715 23:12:52.365434 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-lib-modules\") pod \"cilium-kv6x4\" (UID: 
\"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " pod="kube-system/cilium-kv6x4" Jul 15 23:12:52.365538 kubelet[3384]: I0715 23:12:52.365449 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c383ca1-9f09-448b-a6c8-6030ba209ebb-hubble-tls\") pod \"cilium-kv6x4\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " pod="kube-system/cilium-kv6x4" Jul 15 23:12:52.365538 kubelet[3384]: I0715 23:12:52.365457 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cbaa45fe-d3e0-4129-abe5-535c073ea88e-xtables-lock\") pod \"kube-proxy-zhcbf\" (UID: \"cbaa45fe-d3e0-4129-abe5-535c073ea88e\") " pod="kube-system/kube-proxy-zhcbf" Jul 15 23:12:52.483701 kubelet[3384]: E0715 23:12:52.483033 3384 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 15 23:12:52.483701 kubelet[3384]: E0715 23:12:52.483065 3384 projected.go:194] Error preparing data for projected volume kube-api-access-gxbfv for pod kube-system/cilium-kv6x4: configmap "kube-root-ca.crt" not found Jul 15 23:12:52.483701 kubelet[3384]: E0715 23:12:52.483086 3384 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 15 23:12:52.483701 kubelet[3384]: E0715 23:12:52.483105 3384 projected.go:194] Error preparing data for projected volume kube-api-access-thjmg for pod kube-system/kube-proxy-zhcbf: configmap "kube-root-ca.crt" not found Jul 15 23:12:52.483701 kubelet[3384]: E0715 23:12:52.483110 3384 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c383ca1-9f09-448b-a6c8-6030ba209ebb-kube-api-access-gxbfv podName:6c383ca1-9f09-448b-a6c8-6030ba209ebb nodeName:}" failed. No retries permitted until 2025-07-15 23:12:52.983092561 +0000 UTC m=+6.509235598 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gxbfv" (UniqueName: "kubernetes.io/projected/6c383ca1-9f09-448b-a6c8-6030ba209ebb-kube-api-access-gxbfv") pod "cilium-kv6x4" (UID: "6c383ca1-9f09-448b-a6c8-6030ba209ebb") : configmap "kube-root-ca.crt" not found Jul 15 23:12:52.483701 kubelet[3384]: E0715 23:12:52.483135 3384 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cbaa45fe-d3e0-4129-abe5-535c073ea88e-kube-api-access-thjmg podName:cbaa45fe-d3e0-4129-abe5-535c073ea88e nodeName:}" failed. No retries permitted until 2025-07-15 23:12:52.983124538 +0000 UTC m=+6.509267575 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-thjmg" (UniqueName: "kubernetes.io/projected/cbaa45fe-d3e0-4129-abe5-535c073ea88e-kube-api-access-thjmg") pod "kube-proxy-zhcbf" (UID: "cbaa45fe-d3e0-4129-abe5-535c073ea88e") : configmap "kube-root-ca.crt" not found Jul 15 23:12:52.947001 systemd[1]: Created slice kubepods-besteffort-pod449257be_4ff3_4039_aba5_08424b0ae9f1.slice - libcontainer container kubepods-besteffort-pod449257be_4ff3_4039_aba5_08424b0ae9f1.slice. 
Jul 15 23:12:52.970231 kubelet[3384]: I0715 23:12:52.970191 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/449257be-4ff3-4039-aba5-08424b0ae9f1-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-s6qln\" (UID: \"449257be-4ff3-4039-aba5-08424b0ae9f1\") " pod="kube-system/cilium-operator-6c4d7847fc-s6qln" Jul 15 23:12:52.970231 kubelet[3384]: I0715 23:12:52.970240 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j976h\" (UniqueName: \"kubernetes.io/projected/449257be-4ff3-4039-aba5-08424b0ae9f1-kube-api-access-j976h\") pod \"cilium-operator-6c4d7847fc-s6qln\" (UID: \"449257be-4ff3-4039-aba5-08424b0ae9f1\") " pod="kube-system/cilium-operator-6c4d7847fc-s6qln" Jul 15 23:12:53.251235 containerd[1893]: time="2025-07-15T23:12:53.251121455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-s6qln,Uid:449257be-4ff3-4039-aba5-08424b0ae9f1,Namespace:kube-system,Attempt:0,}" Jul 15 23:12:53.255858 containerd[1893]: time="2025-07-15T23:12:53.255639163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zhcbf,Uid:cbaa45fe-d3e0-4129-abe5-535c073ea88e,Namespace:kube-system,Attempt:0,}" Jul 15 23:12:53.262694 containerd[1893]: time="2025-07-15T23:12:53.262579639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kv6x4,Uid:6c383ca1-9f09-448b-a6c8-6030ba209ebb,Namespace:kube-system,Attempt:0,}" Jul 15 23:12:53.374660 containerd[1893]: time="2025-07-15T23:12:53.374622788Z" level=info msg="connecting to shim e5dfbff2add4b1ed3dcd969a26847f12a1961856d4b14cbce091dc5fb5b0aa31" address="unix:///run/containerd/s/3eb3a3c84e0d8a31220a55c5e51a00ba18e0d60b45c738cb93bd292e46ffff70" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:12:53.393682 systemd[1]: Started cri-containerd-e5dfbff2add4b1ed3dcd969a26847f12a1961856d4b14cbce091dc5fb5b0aa31.scope - libcontainer container e5dfbff2add4b1ed3dcd969a26847f12a1961856d4b14cbce091dc5fb5b0aa31. Jul 15 23:12:53.402547 containerd[1893]: time="2025-07-15T23:12:53.402490142Z" level=info msg="connecting to shim bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c" address="unix:///run/containerd/s/2059e39a197eaaca7cacdf614516e5943d466d94785ca382fffdf8bec10340e5" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:12:53.418025 containerd[1893]: time="2025-07-15T23:12:53.417969037Z" level=info msg="connecting to shim daca3cc41c7d01c51db4840117bfe4da42858c7f2f6b7a8a3fceaeeaddc0891f" address="unix:///run/containerd/s/2b58b0dfe384feff235969c504a2bbac91d70405710d1a5e8fbddf989553ced7" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:12:53.426696 systemd[1]: Started cri-containerd-bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c.scope - libcontainer container bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c. Jul 15 23:12:53.445494 containerd[1893]: time="2025-07-15T23:12:53.445389763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-s6qln,Uid:449257be-4ff3-4039-aba5-08424b0ae9f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5dfbff2add4b1ed3dcd969a26847f12a1961856d4b14cbce091dc5fb5b0aa31\"" Jul 15 23:12:53.445700 systemd[1]: Started cri-containerd-daca3cc41c7d01c51db4840117bfe4da42858c7f2f6b7a8a3fceaeeaddc0891f.scope - libcontainer container daca3cc41c7d01c51db4840117bfe4da42858c7f2f6b7a8a3fceaeeaddc0891f. 
Jul 15 23:12:53.449306 containerd[1893]: time="2025-07-15T23:12:53.449275981Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 15 23:12:53.476932 containerd[1893]: time="2025-07-15T23:12:53.476811310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kv6x4,Uid:6c383ca1-9f09-448b-a6c8-6030ba209ebb,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c\"" Jul 15 23:12:53.490860 containerd[1893]: time="2025-07-15T23:12:53.490765840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zhcbf,Uid:cbaa45fe-d3e0-4129-abe5-535c073ea88e,Namespace:kube-system,Attempt:0,} returns sandbox id \"daca3cc41c7d01c51db4840117bfe4da42858c7f2f6b7a8a3fceaeeaddc0891f\"" Jul 15 23:12:53.500759 containerd[1893]: time="2025-07-15T23:12:53.500713372Z" level=info msg="CreateContainer within sandbox \"daca3cc41c7d01c51db4840117bfe4da42858c7f2f6b7a8a3fceaeeaddc0891f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 23:12:53.527611 containerd[1893]: time="2025-07-15T23:12:53.526022572Z" level=info msg="Container 85272fd42828a8ff71db714918ec7e6c31ae1184b8b2c5674e7f2242ac7c5487: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:12:53.546940 containerd[1893]: time="2025-07-15T23:12:53.546899721Z" level=info msg="CreateContainer within sandbox \"daca3cc41c7d01c51db4840117bfe4da42858c7f2f6b7a8a3fceaeeaddc0891f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"85272fd42828a8ff71db714918ec7e6c31ae1184b8b2c5674e7f2242ac7c5487\"" Jul 15 23:12:53.548477 containerd[1893]: time="2025-07-15T23:12:53.547936912Z" level=info msg="StartContainer for \"85272fd42828a8ff71db714918ec7e6c31ae1184b8b2c5674e7f2242ac7c5487\"" Jul 15 23:12:53.549732 containerd[1893]: time="2025-07-15T23:12:53.549708524Z" level=info msg="connecting to shim 85272fd42828a8ff71db714918ec7e6c31ae1184b8b2c5674e7f2242ac7c5487" address="unix:///run/containerd/s/2b58b0dfe384feff235969c504a2bbac91d70405710d1a5e8fbddf989553ced7" protocol=ttrpc version=3 Jul 15 23:12:53.566693 systemd[1]: Started cri-containerd-85272fd42828a8ff71db714918ec7e6c31ae1184b8b2c5674e7f2242ac7c5487.scope - libcontainer container 85272fd42828a8ff71db714918ec7e6c31ae1184b8b2c5674e7f2242ac7c5487. Jul 15 23:12:53.599454 containerd[1893]: time="2025-07-15T23:12:53.599422145Z" level=info msg="StartContainer for \"85272fd42828a8ff71db714918ec7e6c31ae1184b8b2c5674e7f2242ac7c5487\" returns successfully" Jul 15 23:12:54.587504 kubelet[3384]: I0715 23:12:54.587412 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zhcbf" podStartSLOduration=2.587401238 podStartE2EDuration="2.587401238s" podCreationTimestamp="2025-07-15 23:12:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:12:54.587300379 +0000 UTC m=+8.113443424" watchObservedRunningTime="2025-07-15 23:12:54.587401238 +0000 UTC m=+8.113544275" Jul 15 23:12:56.062919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2927248958.mount: Deactivated successfully. 
Jul 15 23:12:56.671574 containerd[1893]: time="2025-07-15T23:12:56.671403510Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:56.673970 containerd[1893]: time="2025-07-15T23:12:56.673837573Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 15 23:12:56.677783 containerd[1893]: time="2025-07-15T23:12:56.677754313Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:56.678896 containerd[1893]: time="2025-07-15T23:12:56.678698476Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.229391894s" Jul 15 23:12:56.678896 containerd[1893]: time="2025-07-15T23:12:56.678724645Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 15 23:12:56.679851 containerd[1893]: time="2025-07-15T23:12:56.679816141Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 15 23:12:56.685158 containerd[1893]: time="2025-07-15T23:12:56.685134105Z" level=info msg="CreateContainer within sandbox \"e5dfbff2add4b1ed3dcd969a26847f12a1961856d4b14cbce091dc5fb5b0aa31\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 15 23:12:56.718062 containerd[1893]: time="2025-07-15T23:12:56.716872750Z" level=info msg="Container 89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:12:56.737361 containerd[1893]: time="2025-07-15T23:12:56.737318031Z" level=info msg="CreateContainer within sandbox \"e5dfbff2add4b1ed3dcd969a26847f12a1961856d4b14cbce091dc5fb5b0aa31\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b\"" Jul 15 23:12:56.737894 containerd[1893]: time="2025-07-15T23:12:56.737878519Z" level=info msg="StartContainer for \"89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b\"" Jul 15 23:12:56.738752 containerd[1893]: time="2025-07-15T23:12:56.738714544Z" level=info msg="connecting to shim 89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b" address="unix:///run/containerd/s/3eb3a3c84e0d8a31220a55c5e51a00ba18e0d60b45c738cb93bd292e46ffff70" protocol=ttrpc version=3 Jul 15 23:12:56.751651 systemd[1]: Started cri-containerd-89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b.scope - libcontainer container 89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b. 
Jul 15 23:12:56.776463 containerd[1893]: time="2025-07-15T23:12:56.776422980Z" level=info msg="StartContainer for \"89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b\" returns successfully" Jul 15 23:12:57.595697 kubelet[3384]: I0715 23:12:57.595639 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-s6qln" podStartSLOduration=2.364073924 podStartE2EDuration="5.595613137s" podCreationTimestamp="2025-07-15 23:12:52 +0000 UTC" firstStartedPulling="2025-07-15 23:12:53.448084762 +0000 UTC m=+6.974227799" lastFinishedPulling="2025-07-15 23:12:56.679623975 +0000 UTC m=+10.205767012" observedRunningTime="2025-07-15 23:12:57.595236798 +0000 UTC m=+11.121379843" watchObservedRunningTime="2025-07-15 23:12:57.595613137 +0000 UTC m=+11.121756190" Jul 15 23:13:04.026305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1674551760.mount: Deactivated successfully. Jul 15 23:13:05.915162 containerd[1893]: time="2025-07-15T23:13:05.914627528Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:13:05.920018 containerd[1893]: time="2025-07-15T23:13:05.919995221Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 15 23:13:05.925135 containerd[1893]: time="2025-07-15T23:13:05.925114634Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:13:05.926406 containerd[1893]: time="2025-07-15T23:13:05.926385607Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.246541808s" Jul 15 23:13:05.926539 containerd[1893]: time="2025-07-15T23:13:05.926476873Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 15 23:13:05.934405 containerd[1893]: time="2025-07-15T23:13:05.934337534Z" level=info msg="CreateContainer within sandbox \"bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 23:13:05.966882 containerd[1893]: time="2025-07-15T23:13:05.966838840Z" level=info msg="Container 555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:13:05.984320 containerd[1893]: time="2025-07-15T23:13:05.984254490Z" level=info msg="CreateContainer within sandbox \"bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275\"" Jul 15 23:13:05.985519 containerd[1893]: time="2025-07-15T23:13:05.985373371Z" level=info msg="StartContainer for \"555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275\"" Jul 15 23:13:05.986070 containerd[1893]: 
time="2025-07-15T23:13:05.986050783Z" level=info msg="connecting to shim 555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275" address="unix:///run/containerd/s/2059e39a197eaaca7cacdf614516e5943d466d94785ca382fffdf8bec10340e5" protocol=ttrpc version=3 Jul 15 23:13:06.001650 systemd[1]: Started cri-containerd-555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275.scope - libcontainer container 555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275. Jul 15 23:13:06.025166 containerd[1893]: time="2025-07-15T23:13:06.025125175Z" level=info msg="StartContainer for \"555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275\" returns successfully" Jul 15 23:13:06.030040 systemd[1]: cri-containerd-555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275.scope: Deactivated successfully. Jul 15 23:13:06.031652 containerd[1893]: time="2025-07-15T23:13:06.031614724Z" level=info msg="received exit event container_id:\"555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275\" id:\"555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275\" pid:3854 exited_at:{seconds:1752621186 nanos:31287323}" Jul 15 23:13:06.031784 containerd[1893]: time="2025-07-15T23:13:06.031736160Z" level=info msg="TaskExit event in podsandbox handler container_id:\"555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275\" id:\"555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275\" pid:3854 exited_at:{seconds:1752621186 nanos:31287323}" Jul 15 23:13:06.964776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275-rootfs.mount: Deactivated successfully. Jul 15 23:13:08.609950 containerd[1893]: time="2025-07-15T23:13:08.609873184Z" level=info msg="CreateContainer within sandbox \"bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 23:13:08.640857 containerd[1893]: time="2025-07-15T23:13:08.640735367Z" level=info msg="Container c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:13:08.658350 containerd[1893]: time="2025-07-15T23:13:08.658307657Z" level=info msg="CreateContainer within sandbox \"bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1\"" Jul 15 23:13:08.659180 containerd[1893]: time="2025-07-15T23:13:08.659134168Z" level=info msg="StartContainer for \"c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1\"" Jul 15 23:13:08.660618 containerd[1893]: time="2025-07-15T23:13:08.660590274Z" level=info msg="connecting to shim c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1" address="unix:///run/containerd/s/2059e39a197eaaca7cacdf614516e5943d466d94785ca382fffdf8bec10340e5" protocol=ttrpc version=3 Jul 15 23:13:08.678674 systemd[1]: Started cri-containerd-c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1.scope - libcontainer container c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1. Jul 15 23:13:08.705840 containerd[1893]: time="2025-07-15T23:13:08.705795638Z" level=info msg="StartContainer for \"c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1\" returns successfully" Jul 15 23:13:08.715483 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jul 15 23:13:08.715662 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:13:08.715974 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:13:08.718709 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:13:08.719902 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 23:13:08.722426 systemd[1]: cri-containerd-c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1.scope: Deactivated successfully. Jul 15 23:13:08.723743 containerd[1893]: time="2025-07-15T23:13:08.723693073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1\" id:\"c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1\" pid:3899 exited_at:{seconds:1752621188 nanos:723084863}" Jul 15 23:13:08.724236 containerd[1893]: time="2025-07-15T23:13:08.723952464Z" level=info msg="received exit event container_id:\"c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1\" id:\"c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1\" pid:3899 exited_at:{seconds:1752621188 nanos:723084863}" Jul 15 23:13:08.740258 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:13:09.612621 containerd[1893]: time="2025-07-15T23:13:09.612566366Z" level=info msg="CreateContainer within sandbox \"bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 23:13:09.638732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1-rootfs.mount: Deactivated successfully. Jul 15 23:13:09.651658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2874829086.mount: Deactivated successfully. Jul 15 23:13:09.653878 containerd[1893]: time="2025-07-15T23:13:09.653785711Z" level=info msg="Container 843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:13:09.678157 containerd[1893]: time="2025-07-15T23:13:09.678115682Z" level=info msg="CreateContainer within sandbox \"bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4\"" Jul 15 23:13:09.678665 containerd[1893]: time="2025-07-15T23:13:09.678642313Z" level=info msg="StartContainer for \"843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4\"" Jul 15 23:13:09.679940 containerd[1893]: time="2025-07-15T23:13:09.679889693Z" level=info msg="connecting to shim 843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4" address="unix:///run/containerd/s/2059e39a197eaaca7cacdf614516e5943d466d94785ca382fffdf8bec10340e5" protocol=ttrpc version=3 Jul 15 23:13:09.694650 systemd[1]: Started cri-containerd-843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4.scope - libcontainer container 843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4. Jul 15 23:13:09.719061 systemd[1]: cri-containerd-843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4.scope: Deactivated successfully. 
Jul 15 23:13:09.721823 containerd[1893]: time="2025-07-15T23:13:09.721221521Z" level=info msg="TaskExit event in podsandbox handler container_id:\"843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4\" id:\"843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4\" pid:3947 exited_at:{seconds:1752621189 nanos:720924409}" Jul 15 23:13:09.724221 containerd[1893]: time="2025-07-15T23:13:09.723661016Z" level=info msg="received exit event container_id:\"843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4\" id:\"843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4\" pid:3947 exited_at:{seconds:1752621189 nanos:720924409}" Jul 15 23:13:09.729336 containerd[1893]: time="2025-07-15T23:13:09.729307802Z" level=info msg="StartContainer for \"843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4\" returns successfully" Jul 15 23:13:10.619362 containerd[1893]: time="2025-07-15T23:13:10.619264622Z" level=info msg="CreateContainer within sandbox \"bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 15 23:13:10.639199 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4-rootfs.mount: Deactivated successfully. Jul 15 23:13:10.652975 containerd[1893]: time="2025-07-15T23:13:10.652949422Z" level=info msg="Container 7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:13:10.654038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1943022404.mount: Deactivated successfully. Jul 15 23:13:10.695315 containerd[1893]: time="2025-07-15T23:13:10.695286848Z" level=info msg="CreateContainer within sandbox \"bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8\"" Jul 15 23:13:10.695716 containerd[1893]: time="2025-07-15T23:13:10.695693579Z" level=info msg="StartContainer for \"7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8\"" Jul 15 23:13:10.696503 containerd[1893]: time="2025-07-15T23:13:10.696462089Z" level=info msg="connecting to shim 7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8" address="unix:///run/containerd/s/2059e39a197eaaca7cacdf614516e5943d466d94785ca382fffdf8bec10340e5" protocol=ttrpc version=3 Jul 15 23:13:10.714643 systemd[1]: Started cri-containerd-7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8.scope - libcontainer container 7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8. Jul 15 23:13:10.732153 systemd[1]: cri-containerd-7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8.scope: Deactivated successfully. 
Jul 15 23:13:10.734045 containerd[1893]: time="2025-07-15T23:13:10.733993144Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8\" id:\"7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8\" pid:3985 exited_at:{seconds:1752621190 nanos:732985259}" Jul 15 23:13:10.741469 containerd[1893]: time="2025-07-15T23:13:10.741358964Z" level=info msg="received exit event container_id:\"7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8\" id:\"7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8\" pid:3985 exited_at:{seconds:1752621190 nanos:732985259}" Jul 15 23:13:10.742754 containerd[1893]: time="2025-07-15T23:13:10.742716035Z" level=info msg="StartContainer for \"7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8\" returns successfully" Jul 15 23:13:10.755718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8-rootfs.mount: Deactivated successfully. Jul 15 23:13:11.626999 containerd[1893]: time="2025-07-15T23:13:11.626948491Z" level=info msg="CreateContainer within sandbox \"bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 15 23:13:11.660500 containerd[1893]: time="2025-07-15T23:13:11.660458758Z" level=info msg="Container ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:13:11.665733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3821589474.mount: Deactivated successfully. Jul 15 23:13:11.681679 containerd[1893]: time="2025-07-15T23:13:11.681621223Z" level=info msg="CreateContainer within sandbox \"bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d\"" Jul 15 23:13:11.683592 containerd[1893]: time="2025-07-15T23:13:11.683539150Z" level=info msg="StartContainer for \"ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d\"" Jul 15 23:13:11.684214 containerd[1893]: time="2025-07-15T23:13:11.684183856Z" level=info msg="connecting to shim ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d" address="unix:///run/containerd/s/2059e39a197eaaca7cacdf614516e5943d466d94785ca382fffdf8bec10340e5" protocol=ttrpc version=3 Jul 15 23:13:11.703667 systemd[1]: Started cri-containerd-ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d.scope - libcontainer container ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d. 
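Taken together, the lifecycle events from 23:13:05 onward trace the usual Cilium pod bring-up inside sandbox bd8ab680…: four short-lived init containers, each created, started and exited in turn (hence the paired TaskExit and scope-deactivation entries), followed by the long-running agent. A skeleton of how that sequence typically appears in the Cilium DaemonSet pod spec; container names and order are taken from the log, the image is the tag pulled at 23:13:05, and everything else (mounts, env, security context) is omitted:

```yaml
# Skeleton only: container names/order from the log above; all other fields
# of the real DaemonSet spec are omitted, and the shared image is assumed.
initContainers:
  - name: mount-cgroup
    image: quay.io/cilium/cilium:v1.12.5
  - name: apply-sysctl-overwrites
    image: quay.io/cilium/cilium:v1.12.5
  - name: mount-bpf-fs
    image: quay.io/cilium/cilium:v1.12.5
  - name: clean-cilium-state
    image: quay.io/cilium/cilium:v1.12.5
containers:
  - name: cilium-agent
    image: quay.io/cilium/cilium:v1.12.5
```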
Jul 15 23:13:11.731468 containerd[1893]: time="2025-07-15T23:13:11.731434471Z" level=info msg="StartContainer for \"ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d\" returns successfully" Jul 15 23:13:11.779509 containerd[1893]: time="2025-07-15T23:13:11.779335640Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d\" id:\"336e83e8305c3157484d434eaf9e7bb81c8450d19184e0bba9dea80e8b208672\" pid:4055 exited_at:{seconds:1752621191 nanos:779077665}" Jul 15 23:13:11.800952 kubelet[3384]: I0715 23:13:11.800921 3384 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 15 23:13:11.864808 systemd[1]: Created slice kubepods-burstable-pod71a96da8_b421_4783_b4b8_a1271d11f090.slice - libcontainer container kubepods-burstable-pod71a96da8_b421_4783_b4b8_a1271d11f090.slice. Jul 15 23:13:11.872172 systemd[1]: Created slice kubepods-burstable-pod01eec298_5490_4414_a993_856e564209fb.slice - libcontainer container kubepods-burstable-pod01eec298_5490_4414_a993_856e564209fb.slice. Jul 15 23:13:11.883052 kubelet[3384]: I0715 23:13:11.882874 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8rgj\" (UniqueName: \"kubernetes.io/projected/01eec298-5490-4414-a993-856e564209fb-kube-api-access-v8rgj\") pod \"coredns-674b8bbfcf-k8tjs\" (UID: \"01eec298-5490-4414-a993-856e564209fb\") " pod="kube-system/coredns-674b8bbfcf-k8tjs" Jul 15 23:13:11.883052 kubelet[3384]: I0715 23:13:11.882939 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71a96da8-b421-4783-b4b8-a1271d11f090-config-volume\") pod \"coredns-674b8bbfcf-k8d7t\" (UID: \"71a96da8-b421-4783-b4b8-a1271d11f090\") " pod="kube-system/coredns-674b8bbfcf-k8d7t" Jul 15 23:13:11.883052 kubelet[3384]: I0715 23:13:11.882953 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhgrf\" (UniqueName: \"kubernetes.io/projected/71a96da8-b421-4783-b4b8-a1271d11f090-kube-api-access-zhgrf\") pod \"coredns-674b8bbfcf-k8d7t\" (UID: \"71a96da8-b421-4783-b4b8-a1271d11f090\") " pod="kube-system/coredns-674b8bbfcf-k8d7t" Jul 15 23:13:11.883052 kubelet[3384]: I0715 23:13:11.882971 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01eec298-5490-4414-a993-856e564209fb-config-volume\") pod \"coredns-674b8bbfcf-k8tjs\" (UID: \"01eec298-5490-4414-a993-856e564209fb\") " pod="kube-system/coredns-674b8bbfcf-k8tjs" Jul 15 23:13:12.170692 containerd[1893]: time="2025-07-15T23:13:12.170378363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k8d7t,Uid:71a96da8-b421-4783-b4b8-a1271d11f090,Namespace:kube-system,Attempt:0,}" Jul 15 23:13:12.175704 containerd[1893]: time="2025-07-15T23:13:12.175589609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k8tjs,Uid:01eec298-5490-4414-a993-856e564209fb,Namespace:kube-system,Attempt:0,}" Jul 15 23:13:13.767963 systemd-networkd[1574]: cilium_host: Link UP Jul 15 23:13:13.768520 systemd-networkd[1574]: cilium_net: Link UP Jul 15 23:13:13.769254 systemd-networkd[1574]: cilium_net: Gained carrier Jul 15 23:13:13.769427 systemd-networkd[1574]: cilium_host: Gained carrier Jul 15 23:13:13.910765 systemd-networkd[1574]: cilium_vxlan: Link UP 
Jul 15 23:13:13.910770 systemd-networkd[1574]: cilium_vxlan: Gained carrier Jul 15 23:13:14.130559 kernel: NET: Registered PF_ALG protocol family Jul 15 23:13:14.310769 systemd-networkd[1574]: cilium_net: Gained IPv6LL Jul 15 23:13:14.661397 systemd-networkd[1574]: lxc_health: Link UP Jul 15 23:13:14.663821 systemd-networkd[1574]: lxc_health: Gained carrier Jul 15 23:13:14.758704 systemd-networkd[1574]: cilium_host: Gained IPv6LL Jul 15 23:13:15.216557 systemd-networkd[1574]: lxce3abc8d8b562: Link UP Jul 15 23:13:15.224583 kernel: eth0: renamed from tmp30588 Jul 15 23:13:15.225375 systemd-networkd[1574]: lxce3abc8d8b562: Gained carrier Jul 15 23:13:15.251033 systemd-networkd[1574]: lxcff229590d415: Link UP Jul 15 23:13:15.258565 kernel: eth0: renamed from tmpba70f Jul 15 23:13:15.259920 systemd-networkd[1574]: lxcff229590d415: Gained carrier Jul 15 23:13:15.290159 kubelet[3384]: I0715 23:13:15.289672 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kv6x4" podStartSLOduration=10.841406009 podStartE2EDuration="23.289656267s" podCreationTimestamp="2025-07-15 23:12:52 +0000 UTC" firstStartedPulling="2025-07-15 23:12:53.478759807 +0000 UTC m=+7.004902852" lastFinishedPulling="2025-07-15 23:13:05.927010073 +0000 UTC m=+19.453153110" observedRunningTime="2025-07-15 23:13:12.633262592 +0000 UTC m=+26.159405629" watchObservedRunningTime="2025-07-15 23:13:15.289656267 +0000 UTC m=+28.815799344" Jul 15 23:13:15.335682 systemd-networkd[1574]: cilium_vxlan: Gained IPv6LL Jul 15 23:13:16.167641 systemd-networkd[1574]: lxc_health: Gained IPv6LL Jul 15 23:13:16.486686 systemd-networkd[1574]: lxcff229590d415: Gained IPv6LL Jul 15 23:13:16.550665 systemd-networkd[1574]: lxce3abc8d8b562: Gained IPv6LL Jul 15 23:13:17.771370 containerd[1893]: time="2025-07-15T23:13:17.771322644Z" level=info msg="connecting to shim ba70f85b0e7fbe307e2b6e04fee8745c751bdd612942e5ff7625611158590763" address="unix:///run/containerd/s/25ee48d9e898cd15f87f54a6196d6be5753c1e2213234737e17ae0b5ca2294f1" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:13:17.787146 containerd[1893]: time="2025-07-15T23:13:17.786952024Z" level=info msg="connecting to shim 30588c0250cffc658c71ef2cca048d77c264bef4cbe3bfa5c40a6285aa08489e" address="unix:///run/containerd/s/433dd867e950d644b29cf4e68c8387ee12f30f55788e1013c44a63d58413eefe" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:13:17.792690 systemd[1]: Started cri-containerd-ba70f85b0e7fbe307e2b6e04fee8745c751bdd612942e5ff7625611158590763.scope - libcontainer container ba70f85b0e7fbe307e2b6e04fee8745c751bdd612942e5ff7625611158590763. Jul 15 23:13:17.811812 systemd[1]: Started cri-containerd-30588c0250cffc658c71ef2cca048d77c264bef4cbe3bfa5c40a6285aa08489e.scope - libcontainer container 30588c0250cffc658c71ef2cca048d77c264bef4cbe3bfa5c40a6285aa08489e. 
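The pod-startup metrics in the cilium-kv6x4 entry above are internally consistent: podStartE2EDuration is 23.289656267 s (pod created 23:12:52, observed running 23:13:15.289), the image-pull window is lastFinishedPulling − firstStartedPulling = 23:13:05.927010073 − 23:12:53.478759807 ≈ 12.448 s, and subtracting the pull window from the E2E duration gives the reported podStartSLOduration of ≈ 10.841 s. The tracker excludes pull time from the SLO figure, which is why the earlier kube-proxy and static-pod entries, whose pull timestamps are the zero value, report identical SLO and E2E durations.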
Jul 15 23:13:17.838163 containerd[1893]: time="2025-07-15T23:13:17.837469208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k8tjs,Uid:01eec298-5490-4414-a993-856e564209fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba70f85b0e7fbe307e2b6e04fee8745c751bdd612942e5ff7625611158590763\"" Jul 15 23:13:17.848136 containerd[1893]: time="2025-07-15T23:13:17.848104321Z" level=info msg="CreateContainer within sandbox \"ba70f85b0e7fbe307e2b6e04fee8745c751bdd612942e5ff7625611158590763\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 23:13:17.853820 containerd[1893]: time="2025-07-15T23:13:17.853772512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k8d7t,Uid:71a96da8-b421-4783-b4b8-a1271d11f090,Namespace:kube-system,Attempt:0,} returns sandbox id \"30588c0250cffc658c71ef2cca048d77c264bef4cbe3bfa5c40a6285aa08489e\"" Jul 15 23:13:17.863349 containerd[1893]: time="2025-07-15T23:13:17.863319625Z" level=info msg="CreateContainer within sandbox \"30588c0250cffc658c71ef2cca048d77c264bef4cbe3bfa5c40a6285aa08489e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 23:13:17.885951 containerd[1893]: time="2025-07-15T23:13:17.885876497Z" level=info msg="Container 512e87a5630fdc0836a617323e7d6bc007b3dfe4cdd50b1cb71dc6d302b82d2c: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:13:17.893827 containerd[1893]: time="2025-07-15T23:13:17.893795738Z" level=info msg="Container cad5e8dd64833e9beb30d5b6f108336d732f2249044e8bb71ee9078def859294: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:13:17.907799 containerd[1893]: time="2025-07-15T23:13:17.907766933Z" level=info msg="CreateContainer within sandbox \"ba70f85b0e7fbe307e2b6e04fee8745c751bdd612942e5ff7625611158590763\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"512e87a5630fdc0836a617323e7d6bc007b3dfe4cdd50b1cb71dc6d302b82d2c\"" Jul 15 23:13:17.908496 containerd[1893]: time="2025-07-15T23:13:17.908468986Z" level=info msg="StartContainer for \"512e87a5630fdc0836a617323e7d6bc007b3dfe4cdd50b1cb71dc6d302b82d2c\"" Jul 15 23:13:17.909323 containerd[1893]: time="2025-07-15T23:13:17.909297450Z" level=info msg="connecting to shim 512e87a5630fdc0836a617323e7d6bc007b3dfe4cdd50b1cb71dc6d302b82d2c" address="unix:///run/containerd/s/25ee48d9e898cd15f87f54a6196d6be5753c1e2213234737e17ae0b5ca2294f1" protocol=ttrpc version=3 Jul 15 23:13:17.921279 containerd[1893]: time="2025-07-15T23:13:17.921216865Z" level=info msg="CreateContainer within sandbox \"30588c0250cffc658c71ef2cca048d77c264bef4cbe3bfa5c40a6285aa08489e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cad5e8dd64833e9beb30d5b6f108336d732f2249044e8bb71ee9078def859294\"" Jul 15 23:13:17.921984 containerd[1893]: time="2025-07-15T23:13:17.921953383Z" level=info msg="StartContainer for \"cad5e8dd64833e9beb30d5b6f108336d732f2249044e8bb71ee9078def859294\"" Jul 15 23:13:17.923490 containerd[1893]: time="2025-07-15T23:13:17.923453331Z" level=info msg="connecting to shim cad5e8dd64833e9beb30d5b6f108336d732f2249044e8bb71ee9078def859294" address="unix:///run/containerd/s/433dd867e950d644b29cf4e68c8387ee12f30f55788e1013c44a63d58413eefe" protocol=ttrpc version=3 Jul 15 23:13:17.926093 systemd[1]: Started cri-containerd-512e87a5630fdc0836a617323e7d6bc007b3dfe4cdd50b1cb71dc6d302b82d2c.scope - libcontainer container 512e87a5630fdc0836a617323e7d6bc007b3dfe4cdd50b1cb71dc6d302b82d2c. 
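The containerd entries above are logfmt-style key=value pairs (time=, level=, msg=, address=, protocol=). A small, hypothetical helper for pulling those fields out of one such entry; the regex and the parse_containerd_fields name are illustrative, not part of containerd:

```python
import re

# Hypothetical helper: extract key=value / key="..." pairs from one containerd
# journal entry like the CreateContainer/StartContainer lines above.
PAIR = re.compile(r'(\w+)=(".*?(?<!\\)"|\S+)')

def parse_containerd_fields(entry: str) -> dict:
    fields = {}
    for key, raw in PAIR.findall(entry):
        # Strip surrounding quotes and unescape embedded \" sequences.
        fields[key] = raw[1:-1].replace('\\"', '"') if raw.startswith('"') else raw
    return fields

sample = ('time="2025-07-15T23:13:17.908468986Z" level=info '
          'msg="StartContainer for \\"512e87a5630fdc0836a617323e7d6bc007b3dfe4cdd50b1cb71dc6d302b82d2c\\""')
print(parse_containerd_fields(sample)["level"])  # -> info
```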
Jul 15 23:13:17.943649 systemd[1]: Started cri-containerd-cad5e8dd64833e9beb30d5b6f108336d732f2249044e8bb71ee9078def859294.scope - libcontainer container cad5e8dd64833e9beb30d5b6f108336d732f2249044e8bb71ee9078def859294. Jul 15 23:13:17.970340 containerd[1893]: time="2025-07-15T23:13:17.970259965Z" level=info msg="StartContainer for \"512e87a5630fdc0836a617323e7d6bc007b3dfe4cdd50b1cb71dc6d302b82d2c\" returns successfully" Jul 15 23:13:17.981156 containerd[1893]: time="2025-07-15T23:13:17.981119613Z" level=info msg="StartContainer for \"cad5e8dd64833e9beb30d5b6f108336d732f2249044e8bb71ee9078def859294\" returns successfully" Jul 15 23:13:18.645262 kubelet[3384]: I0715 23:13:18.644148 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-k8d7t" podStartSLOduration=26.644134765 podStartE2EDuration="26.644134765s" podCreationTimestamp="2025-07-15 23:12:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:13:18.643063534 +0000 UTC m=+32.169206579" watchObservedRunningTime="2025-07-15 23:13:18.644134765 +0000 UTC m=+32.170277802" Jul 15 23:14:21.846976 systemd[1]: Started sshd@7-10.200.20.39:22-10.200.16.10:46092.service - OpenSSH per-connection server daemon (10.200.16.10:46092). Jul 15 23:14:22.307694 sshd[4710]: Accepted publickey for core from 10.200.16.10 port 46092 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:14:22.309160 sshd-session[4710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:14:22.312620 systemd-logind[1858]: New session 10 of user core. Jul 15 23:14:22.324805 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 15 23:14:22.705766 sshd[4713]: Connection closed by 10.200.16.10 port 46092 Jul 15 23:14:22.705110 sshd-session[4710]: pam_unix(sshd:session): session closed for user core Jul 15 23:14:22.708506 systemd-logind[1858]: Session 10 logged out. Waiting for processes to exit. Jul 15 23:14:22.710065 systemd[1]: sshd@7-10.200.20.39:22-10.200.16.10:46092.service: Deactivated successfully. Jul 15 23:14:22.711880 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 23:14:22.713736 systemd-logind[1858]: Removed session 10. Jul 15 23:14:27.790747 systemd[1]: Started sshd@8-10.200.20.39:22-10.200.16.10:46094.service - OpenSSH per-connection server daemon (10.200.16.10:46094). Jul 15 23:14:28.262654 sshd[4728]: Accepted publickey for core from 10.200.16.10 port 46094 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:14:28.263747 sshd-session[4728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:14:28.267574 systemd-logind[1858]: New session 11 of user core. Jul 15 23:14:28.278650 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 15 23:14:28.641562 sshd[4730]: Connection closed by 10.200.16.10 port 46094 Jul 15 23:14:28.642084 sshd-session[4728]: pam_unix(sshd:session): session closed for user core Jul 15 23:14:28.645142 systemd[1]: sshd@8-10.200.20.39:22-10.200.16.10:46094.service: Deactivated successfully. Jul 15 23:14:28.647000 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 23:14:28.648619 systemd-logind[1858]: Session 11 logged out. Waiting for processes to exit. Jul 15 23:14:28.650071 systemd-logind[1858]: Removed session 11. 
Jul 15 23:14:33.731563 systemd[1]: Started sshd@9-10.200.20.39:22-10.200.16.10:40570.service - OpenSSH per-connection server daemon (10.200.16.10:40570). Jul 15 23:14:34.188934 sshd[4743]: Accepted publickey for core from 10.200.16.10 port 40570 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:14:34.189936 sshd-session[4743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:14:34.193306 systemd-logind[1858]: New session 12 of user core. Jul 15 23:14:34.198649 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 15 23:14:34.551870 sshd[4745]: Connection closed by 10.200.16.10 port 40570 Jul 15 23:14:34.551320 sshd-session[4743]: pam_unix(sshd:session): session closed for user core Jul 15 23:14:34.554620 systemd-logind[1858]: Session 12 logged out. Waiting for processes to exit. Jul 15 23:14:34.554771 systemd[1]: sshd@9-10.200.20.39:22-10.200.16.10:40570.service: Deactivated successfully. Jul 15 23:14:34.556058 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 23:14:34.557341 systemd-logind[1858]: Removed session 12. Jul 15 23:14:39.638825 systemd[1]: Started sshd@10-10.200.20.39:22-10.200.16.10:40574.service - OpenSSH per-connection server daemon (10.200.16.10:40574). Jul 15 23:14:40.096328 sshd[4759]: Accepted publickey for core from 10.200.16.10 port 40574 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:14:40.097355 sshd-session[4759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:14:40.100999 systemd-logind[1858]: New session 13 of user core. Jul 15 23:14:40.108646 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 15 23:14:40.456510 sshd[4761]: Connection closed by 10.200.16.10 port 40574 Jul 15 23:14:40.457151 sshd-session[4759]: pam_unix(sshd:session): session closed for user core Jul 15 23:14:40.460065 systemd-logind[1858]: Session 13 logged out. Waiting for processes to exit. Jul 15 23:14:40.460451 systemd[1]: sshd@10-10.200.20.39:22-10.200.16.10:40574.service: Deactivated successfully. Jul 15 23:14:40.462968 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 23:14:40.464786 systemd-logind[1858]: Removed session 13. Jul 15 23:14:40.534725 systemd[1]: Started sshd@11-10.200.20.39:22-10.200.16.10:48364.service - OpenSSH per-connection server daemon (10.200.16.10:48364). Jul 15 23:14:40.967421 sshd[4773]: Accepted publickey for core from 10.200.16.10 port 48364 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:14:40.968390 sshd-session[4773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:14:40.972211 systemd-logind[1858]: New session 14 of user core. Jul 15 23:14:40.977702 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 15 23:14:41.358705 sshd[4775]: Connection closed by 10.200.16.10 port 48364 Jul 15 23:14:41.359002 sshd-session[4773]: pam_unix(sshd:session): session closed for user core Jul 15 23:14:41.361968 systemd[1]: sshd@11-10.200.20.39:22-10.200.16.10:48364.service: Deactivated successfully. Jul 15 23:14:41.363418 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 23:14:41.364485 systemd-logind[1858]: Session 14 logged out. Waiting for processes to exit. Jul 15 23:14:41.365842 systemd-logind[1858]: Removed session 14. Jul 15 23:14:41.446506 systemd[1]: Started sshd@12-10.200.20.39:22-10.200.16.10:48368.service - OpenSSH per-connection server daemon (10.200.16.10:48368). 
Jul 15 23:14:41.918334 sshd[4784]: Accepted publickey for core from 10.200.16.10 port 48368 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:14:41.919349 sshd-session[4784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:14:41.923611 systemd-logind[1858]: New session 15 of user core. Jul 15 23:14:41.940651 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 15 23:14:42.305122 sshd[4786]: Connection closed by 10.200.16.10 port 48368 Jul 15 23:14:42.305596 sshd-session[4784]: pam_unix(sshd:session): session closed for user core Jul 15 23:14:42.308449 systemd[1]: sshd@12-10.200.20.39:22-10.200.16.10:48368.service: Deactivated successfully. Jul 15 23:14:42.310304 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 23:14:42.311134 systemd-logind[1858]: Session 15 logged out. Waiting for processes to exit. Jul 15 23:14:42.312378 systemd-logind[1858]: Removed session 15. Jul 15 23:14:47.383723 systemd[1]: Started sshd@13-10.200.20.39:22-10.200.16.10:48378.service - OpenSSH per-connection server daemon (10.200.16.10:48378). Jul 15 23:14:47.819341 sshd[4801]: Accepted publickey for core from 10.200.16.10 port 48378 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:14:47.820407 sshd-session[4801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:14:47.824651 systemd-logind[1858]: New session 16 of user core. Jul 15 23:14:47.829659 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 15 23:14:48.182460 sshd[4803]: Connection closed by 10.200.16.10 port 48378 Jul 15 23:14:48.181620 sshd-session[4801]: pam_unix(sshd:session): session closed for user core Jul 15 23:14:48.184077 systemd[1]: sshd@13-10.200.20.39:22-10.200.16.10:48378.service: Deactivated successfully. Jul 15 23:14:48.185845 systemd[1]: session-16.scope: Deactivated successfully. Jul 15 23:14:48.187754 systemd-logind[1858]: Session 16 logged out. Waiting for processes to exit. Jul 15 23:14:48.188984 systemd-logind[1858]: Removed session 16. Jul 15 23:14:48.270808 systemd[1]: Started sshd@14-10.200.20.39:22-10.200.16.10:48390.service - OpenSSH per-connection server daemon (10.200.16.10:48390). Jul 15 23:14:48.742757 sshd[4815]: Accepted publickey for core from 10.200.16.10 port 48390 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:14:48.743810 sshd-session[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:14:48.747652 systemd-logind[1858]: New session 17 of user core. Jul 15 23:14:48.753625 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 15 23:14:49.174560 sshd[4817]: Connection closed by 10.200.16.10 port 48390 Jul 15 23:14:49.175111 sshd-session[4815]: pam_unix(sshd:session): session closed for user core Jul 15 23:14:49.178073 systemd-logind[1858]: Session 17 logged out. Waiting for processes to exit. Jul 15 23:14:49.178215 systemd[1]: sshd@14-10.200.20.39:22-10.200.16.10:48390.service: Deactivated successfully. Jul 15 23:14:49.179953 systemd[1]: session-17.scope: Deactivated successfully. Jul 15 23:14:49.182327 systemd-logind[1858]: Removed session 17. Jul 15 23:14:49.260165 systemd[1]: Started sshd@15-10.200.20.39:22-10.200.16.10:48402.service - OpenSSH per-connection server daemon (10.200.16.10:48402). 
Jul 15 23:14:49.720493 sshd[4826]: Accepted publickey for core from 10.200.16.10 port 48402 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:14:49.721611 sshd-session[4826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:14:49.725719 systemd-logind[1858]: New session 18 of user core. Jul 15 23:14:49.729661 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 15 23:14:50.612282 sshd[4828]: Connection closed by 10.200.16.10 port 48402 Jul 15 23:14:50.612851 sshd-session[4826]: pam_unix(sshd:session): session closed for user core Jul 15 23:14:50.615737 systemd-logind[1858]: Session 18 logged out. Waiting for processes to exit. Jul 15 23:14:50.615878 systemd[1]: sshd@15-10.200.20.39:22-10.200.16.10:48402.service: Deactivated successfully. Jul 15 23:14:50.617508 systemd[1]: session-18.scope: Deactivated successfully. Jul 15 23:14:50.620173 systemd-logind[1858]: Removed session 18. Jul 15 23:14:50.698562 systemd[1]: Started sshd@16-10.200.20.39:22-10.200.16.10:37132.service - OpenSSH per-connection server daemon (10.200.16.10:37132). Jul 15 23:14:51.156734 sshd[4845]: Accepted publickey for core from 10.200.16.10 port 37132 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:14:51.157833 sshd-session[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:14:51.162254 systemd-logind[1858]: New session 19 of user core. Jul 15 23:14:51.169845 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 15 23:14:51.597079 sshd[4847]: Connection closed by 10.200.16.10 port 37132 Jul 15 23:14:51.596398 sshd-session[4845]: pam_unix(sshd:session): session closed for user core Jul 15 23:14:51.599784 systemd-logind[1858]: Session 19 logged out. Waiting for processes to exit. Jul 15 23:14:51.599978 systemd[1]: sshd@16-10.200.20.39:22-10.200.16.10:37132.service: Deactivated successfully. Jul 15 23:14:51.602298 systemd[1]: session-19.scope: Deactivated successfully. Jul 15 23:14:51.604174 systemd-logind[1858]: Removed session 19. Jul 15 23:14:51.675669 systemd[1]: Started sshd@17-10.200.20.39:22-10.200.16.10:37144.service - OpenSSH per-connection server daemon (10.200.16.10:37144). Jul 15 23:14:52.108370 sshd[4857]: Accepted publickey for core from 10.200.16.10 port 37144 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:14:52.109675 sshd-session[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:14:52.114187 systemd-logind[1858]: New session 20 of user core. Jul 15 23:14:52.123666 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 15 23:14:52.473115 sshd[4859]: Connection closed by 10.200.16.10 port 37144 Jul 15 23:14:52.473627 sshd-session[4857]: pam_unix(sshd:session): session closed for user core Jul 15 23:14:52.477102 systemd-logind[1858]: Session 20 logged out. Waiting for processes to exit. Jul 15 23:14:52.477249 systemd[1]: sshd@17-10.200.20.39:22-10.200.16.10:37144.service: Deactivated successfully. Jul 15 23:14:52.479721 systemd[1]: session-20.scope: Deactivated successfully. Jul 15 23:14:52.481383 systemd-logind[1858]: Removed session 20. Jul 15 23:14:57.556721 systemd[1]: Started sshd@18-10.200.20.39:22-10.200.16.10:37160.service - OpenSSH per-connection server daemon (10.200.16.10:37160). 
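The sshd entries in this stretch come in matched pairs: "Accepted publickey ... port N" followed later by "Connection closed by ... port N". A rough sketch of pairing them to measure session length, assuming one journal entry per input line and that the short journal timestamps belong to the given year (they omit it):

```python
import re
from datetime import datetime

STAMP = re.compile(r'^(\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+)')
OPENED = re.compile(r'Accepted publickey for (\S+) from (\S+) port (\d+)')
CLOSED = re.compile(r'Connection closed by (\S+) port (\d+)')

def session_durations(lines, year=2025):
    opened, durations = {}, []
    for line in lines:
        ts = datetime.strptime(f"{year} {STAMP.match(line).group(1)}", "%Y %b %d %H:%M:%S.%f")
        if m := OPENED.search(line):
            opened[(m.group(2), m.group(3))] = ts        # key on (client IP, port)
        elif m := CLOSED.search(line):
            start = opened.pop((m.group(1), m.group(2)), None)
            if start is not None:
                durations.append((m.group(2), (ts - start).total_seconds()))
    return durations
```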
Jul 15 23:14:58.014725 sshd[4874]: Accepted publickey for core from 10.200.16.10 port 37160 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:14:58.015732 sshd-session[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:14:58.019097 systemd-logind[1858]: New session 21 of user core. Jul 15 23:14:58.026721 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 15 23:14:58.376293 sshd[4876]: Connection closed by 10.200.16.10 port 37160 Jul 15 23:14:58.375816 sshd-session[4874]: pam_unix(sshd:session): session closed for user core Jul 15 23:14:58.378236 systemd-logind[1858]: Session 21 logged out. Waiting for processes to exit. Jul 15 23:14:58.378458 systemd[1]: sshd@18-10.200.20.39:22-10.200.16.10:37160.service: Deactivated successfully. Jul 15 23:14:58.379838 systemd[1]: session-21.scope: Deactivated successfully. Jul 15 23:14:58.381507 systemd-logind[1858]: Removed session 21. Jul 15 23:15:03.460570 systemd[1]: Started sshd@19-10.200.20.39:22-10.200.16.10:45284.service - OpenSSH per-connection server daemon (10.200.16.10:45284). Jul 15 23:15:03.932193 sshd[4887]: Accepted publickey for core from 10.200.16.10 port 45284 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:15:03.933202 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:15:03.936712 systemd-logind[1858]: New session 22 of user core. Jul 15 23:15:03.943641 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 15 23:15:04.307022 sshd[4889]: Connection closed by 10.200.16.10 port 45284 Jul 15 23:15:04.306882 sshd-session[4887]: pam_unix(sshd:session): session closed for user core Jul 15 23:15:04.309608 systemd-logind[1858]: Session 22 logged out. Waiting for processes to exit. Jul 15 23:15:04.309809 systemd[1]: sshd@19-10.200.20.39:22-10.200.16.10:45284.service: Deactivated successfully. Jul 15 23:15:04.311145 systemd[1]: session-22.scope: Deactivated successfully. Jul 15 23:15:04.312911 systemd-logind[1858]: Removed session 22. Jul 15 23:15:04.387748 systemd[1]: Started sshd@20-10.200.20.39:22-10.200.16.10:45296.service - OpenSSH per-connection server daemon (10.200.16.10:45296). Jul 15 23:15:04.825100 sshd[4901]: Accepted publickey for core from 10.200.16.10 port 45296 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:15:04.826121 sshd-session[4901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:15:04.829808 systemd-logind[1858]: New session 23 of user core. Jul 15 23:15:04.836641 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 15 23:15:06.342157 kubelet[3384]: I0715 23:15:06.341772 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-k8tjs" podStartSLOduration=134.341757204 podStartE2EDuration="2m14.341757204s" podCreationTimestamp="2025-07-15 23:12:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:13:18.673323617 +0000 UTC m=+32.199466654" watchObservedRunningTime="2025-07-15 23:15:06.341757204 +0000 UTC m=+139.867900241" Jul 15 23:15:06.354657 containerd[1893]: time="2025-07-15T23:15:06.354571827Z" level=info msg="StopContainer for \"89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b\" with timeout 30 (s)" Jul 15 23:15:06.355954 containerd[1893]: time="2025-07-15T23:15:06.355923522Z" level=info msg="Stop container \"89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b\" with signal terminated" Jul 15 23:15:06.369669 systemd[1]: cri-containerd-89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b.scope: Deactivated successfully. Jul 15 23:15:06.371190 containerd[1893]: time="2025-07-15T23:15:06.371164663Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 23:15:06.371836 containerd[1893]: time="2025-07-15T23:15:06.371806681Z" level=info msg="received exit event container_id:\"89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b\" id:\"89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b\" pid:3792 exited_at:{seconds:1752621306 nanos:371444519}" Jul 15 23:15:06.372233 containerd[1893]: time="2025-07-15T23:15:06.372209172Z" level=info msg="TaskExit event in podsandbox handler container_id:\"89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b\" id:\"89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b\" pid:3792 exited_at:{seconds:1752621306 nanos:371444519}" Jul 15 23:15:06.377120 containerd[1893]: time="2025-07-15T23:15:06.377100833Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d\" id:\"be72f78b8232f705295ec7c6bc51f10437127b9385a0b1de29daa280926570a0\" pid:4922 exited_at:{seconds:1752621306 nanos:376934196}" Jul 15 23:15:06.379453 containerd[1893]: time="2025-07-15T23:15:06.379403499Z" level=info msg="StopContainer for \"ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d\" with timeout 2 (s)" Jul 15 23:15:06.379783 containerd[1893]: time="2025-07-15T23:15:06.379757813Z" level=info msg="Stop container \"ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d\" with signal terminated" Jul 15 23:15:06.386634 systemd-networkd[1574]: lxc_health: Link DOWN Jul 15 23:15:06.386640 systemd-networkd[1574]: lxc_health: Lost carrier Jul 15 23:15:06.394941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b-rootfs.mount: Deactivated successfully. Jul 15 23:15:06.400423 systemd[1]: cri-containerd-ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d.scope: Deactivated successfully. 
Jul 15 23:15:06.400782 systemd[1]: cri-containerd-ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d.scope: Consumed 4.331s CPU time, 122.6M memory peak, 136K read from disk, 12.9M written to disk. Jul 15 23:15:06.402416 containerd[1893]: time="2025-07-15T23:15:06.402390189Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d\" id:\"ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d\" pid:4023 exited_at:{seconds:1752621306 nanos:402189903}" Jul 15 23:15:06.402604 containerd[1893]: time="2025-07-15T23:15:06.402451079Z" level=info msg="received exit event container_id:\"ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d\" id:\"ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d\" pid:4023 exited_at:{seconds:1752621306 nanos:402189903}" Jul 15 23:15:06.415833 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d-rootfs.mount: Deactivated successfully. Jul 15 23:15:06.496051 containerd[1893]: time="2025-07-15T23:15:06.496011727Z" level=info msg="StopContainer for \"ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d\" returns successfully" Jul 15 23:15:06.497004 containerd[1893]: time="2025-07-15T23:15:06.496951850Z" level=info msg="StopPodSandbox for \"bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c\"" Jul 15 23:15:06.497174 containerd[1893]: time="2025-07-15T23:15:06.497106726Z" level=info msg="Container to stop \"c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 23:15:06.497174 containerd[1893]: time="2025-07-15T23:15:06.497122207Z" level=info msg="Container to stop \"843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 23:15:06.497174 containerd[1893]: time="2025-07-15T23:15:06.497129687Z" level=info msg="Container to stop \"7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 23:15:06.497174 containerd[1893]: time="2025-07-15T23:15:06.497136247Z" level=info msg="Container to stop \"555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 23:15:06.497174 containerd[1893]: time="2025-07-15T23:15:06.497142143Z" level=info msg="Container to stop \"ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 23:15:06.499611 containerd[1893]: time="2025-07-15T23:15:06.499519595Z" level=info msg="StopContainer for \"89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b\" returns successfully" Jul 15 23:15:06.500431 containerd[1893]: time="2025-07-15T23:15:06.499970952Z" level=info msg="StopPodSandbox for \"e5dfbff2add4b1ed3dcd969a26847f12a1961856d4b14cbce091dc5fb5b0aa31\"" Jul 15 23:15:06.500431 containerd[1893]: time="2025-07-15T23:15:06.500022218Z" level=info msg="Container to stop \"89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 23:15:06.505620 systemd[1]: cri-containerd-bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c.scope: Deactivated successfully. 
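The exit events above carry exited_at as protobuf-style {seconds, nanos}; converting the epoch seconds back to UTC lines up with the 23:15:06.371444519Z journal timestamps in the same entries. A quick check:

```python
from datetime import datetime, timezone

# exited_at values copied from the TaskExit / received exit event entries above.
exited_at = {"seconds": 1752621306, "nanos": 371444519}
print(datetime.fromtimestamp(exited_at["seconds"], tz=timezone.utc).isoformat(),
      f'+{exited_at["nanos"]}ns')
# 2025-07-15T23:15:06+00:00 +371444519ns
```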
Jul 15 23:15:06.508991 systemd[1]: cri-containerd-e5dfbff2add4b1ed3dcd969a26847f12a1961856d4b14cbce091dc5fb5b0aa31.scope: Deactivated successfully. Jul 15 23:15:06.510138 containerd[1893]: time="2025-07-15T23:15:06.509803938Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c\" id:\"bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c\" pid:3571 exit_status:137 exited_at:{seconds:1752621306 nanos:508640593}" Jul 15 23:15:06.529025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c-rootfs.mount: Deactivated successfully. Jul 15 23:15:06.532267 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5dfbff2add4b1ed3dcd969a26847f12a1961856d4b14cbce091dc5fb5b0aa31-rootfs.mount: Deactivated successfully. Jul 15 23:15:06.558291 containerd[1893]: time="2025-07-15T23:15:06.558255683Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5dfbff2add4b1ed3dcd969a26847f12a1961856d4b14cbce091dc5fb5b0aa31\" id:\"e5dfbff2add4b1ed3dcd969a26847f12a1961856d4b14cbce091dc5fb5b0aa31\" pid:3496 exit_status:137 exited_at:{seconds:1752621306 nanos:513667471}" Jul 15 23:15:06.558540 containerd[1893]: time="2025-07-15T23:15:06.558378542Z" level=info msg="received exit event sandbox_id:\"e5dfbff2add4b1ed3dcd969a26847f12a1961856d4b14cbce091dc5fb5b0aa31\" exit_status:137 exited_at:{seconds:1752621306 nanos:513667471}" Jul 15 23:15:06.558875 containerd[1893]: time="2025-07-15T23:15:06.558838251Z" level=info msg="received exit event sandbox_id:\"bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c\" exit_status:137 exited_at:{seconds:1752621306 nanos:508640593}" Jul 15 23:15:06.559088 containerd[1893]: time="2025-07-15T23:15:06.559014928Z" level=info msg="shim disconnected" id=e5dfbff2add4b1ed3dcd969a26847f12a1961856d4b14cbce091dc5fb5b0aa31 namespace=k8s.io Jul 15 23:15:06.559088 containerd[1893]: time="2025-07-15T23:15:06.559031969Z" level=warning msg="cleaning up after shim disconnected" id=e5dfbff2add4b1ed3dcd969a26847f12a1961856d4b14cbce091dc5fb5b0aa31 namespace=k8s.io Jul 15 23:15:06.559088 containerd[1893]: time="2025-07-15T23:15:06.559052817Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 15 23:15:06.560253 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c-shm.mount: Deactivated successfully. Jul 15 23:15:06.560319 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e5dfbff2add4b1ed3dcd969a26847f12a1961856d4b14cbce091dc5fb5b0aa31-shm.mount: Deactivated successfully. 
Jul 15 23:15:06.562258 containerd[1893]: time="2025-07-15T23:15:06.562237011Z" level=info msg="shim disconnected" id=bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c namespace=k8s.io Jul 15 23:15:06.562462 containerd[1893]: time="2025-07-15T23:15:06.562344150Z" level=warning msg="cleaning up after shim disconnected" id=bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c namespace=k8s.io Jul 15 23:15:06.562462 containerd[1893]: time="2025-07-15T23:15:06.562370663Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 15 23:15:06.563210 containerd[1893]: time="2025-07-15T23:15:06.563172806Z" level=info msg="TearDown network for sandbox \"e5dfbff2add4b1ed3dcd969a26847f12a1961856d4b14cbce091dc5fb5b0aa31\" successfully" Jul 15 23:15:06.563210 containerd[1893]: time="2025-07-15T23:15:06.563194671Z" level=info msg="StopPodSandbox for \"e5dfbff2add4b1ed3dcd969a26847f12a1961856d4b14cbce091dc5fb5b0aa31\" returns successfully" Jul 15 23:15:06.563782 containerd[1893]: time="2025-07-15T23:15:06.563731734Z" level=info msg="TearDown network for sandbox \"bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c\" successfully" Jul 15 23:15:06.563782 containerd[1893]: time="2025-07-15T23:15:06.563745734Z" level=info msg="StopPodSandbox for \"bd8ab680c384da766f39e4fac7b82b567d01b45a86e0c4ca17ba8751f33e545c\" returns successfully" Jul 15 23:15:06.617601 kubelet[3384]: E0715 23:15:06.616785 3384 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 15 23:15:06.621184 kubelet[3384]: I0715 23:15:06.620754 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c383ca1-9f09-448b-a6c8-6030ba209ebb-cilium-config-path\") pod \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " Jul 15 23:15:06.621184 kubelet[3384]: I0715 23:15:06.620776 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-host-proc-sys-kernel\") pod \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " Jul 15 23:15:06.621184 kubelet[3384]: I0715 23:15:06.620788 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-etc-cni-netd\") pod \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " Jul 15 23:15:06.621184 kubelet[3384]: I0715 23:15:06.620800 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j976h\" (UniqueName: \"kubernetes.io/projected/449257be-4ff3-4039-aba5-08424b0ae9f1-kube-api-access-j976h\") pod \"449257be-4ff3-4039-aba5-08424b0ae9f1\" (UID: \"449257be-4ff3-4039-aba5-08424b0ae9f1\") " Jul 15 23:15:06.621184 kubelet[3384]: I0715 23:15:06.620811 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-lib-modules\") pod \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " Jul 15 23:15:06.621184 kubelet[3384]: I0715 23:15:06.620823 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-host-proc-sys-net\") pod \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " Jul 15 23:15:06.621334 kubelet[3384]: I0715 23:15:06.620831 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-cilium-run\") pod \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " Jul 15 23:15:06.621334 kubelet[3384]: I0715 23:15:06.620839 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-cilium-cgroup\") pod \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " Jul 15 23:15:06.621334 kubelet[3384]: I0715 23:15:06.620849 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/449257be-4ff3-4039-aba5-08424b0ae9f1-cilium-config-path\") pod \"449257be-4ff3-4039-aba5-08424b0ae9f1\" (UID: \"449257be-4ff3-4039-aba5-08424b0ae9f1\") " Jul 15 23:15:06.621334 kubelet[3384]: I0715 23:15:06.620860 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c383ca1-9f09-448b-a6c8-6030ba209ebb-clustermesh-secrets\") pod \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " Jul 15 23:15:06.621334 kubelet[3384]: I0715 23:15:06.620869 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxbfv\" (UniqueName: \"kubernetes.io/projected/6c383ca1-9f09-448b-a6c8-6030ba209ebb-kube-api-access-gxbfv\") pod \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " Jul 15 23:15:06.621334 kubelet[3384]: I0715 23:15:06.620877 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-cni-path\") pod \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " Jul 15 23:15:06.621422 kubelet[3384]: I0715 23:15:06.620886 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-xtables-lock\") pod \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " Jul 15 23:15:06.621422 kubelet[3384]: I0715 23:15:06.620895 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-bpf-maps\") pod \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " Jul 15 23:15:06.621422 kubelet[3384]: I0715 23:15:06.620904 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-hostproc\") pod \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " Jul 15 23:15:06.621422 kubelet[3384]: I0715 23:15:06.620916 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/6c383ca1-9f09-448b-a6c8-6030ba209ebb-hubble-tls\") pod \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\" (UID: \"6c383ca1-9f09-448b-a6c8-6030ba209ebb\") " Jul 15 23:15:06.621422 kubelet[3384]: I0715 23:15:06.621201 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6c383ca1-9f09-448b-a6c8-6030ba209ebb" (UID: "6c383ca1-9f09-448b-a6c8-6030ba209ebb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:15:06.622467 kubelet[3384]: I0715 23:15:06.622440 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c383ca1-9f09-448b-a6c8-6030ba209ebb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6c383ca1-9f09-448b-a6c8-6030ba209ebb" (UID: "6c383ca1-9f09-448b-a6c8-6030ba209ebb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 23:15:06.622553 kubelet[3384]: I0715 23:15:06.622483 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6c383ca1-9f09-448b-a6c8-6030ba209ebb" (UID: "6c383ca1-9f09-448b-a6c8-6030ba209ebb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:15:06.622553 kubelet[3384]: I0715 23:15:06.622498 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6c383ca1-9f09-448b-a6c8-6030ba209ebb" (UID: "6c383ca1-9f09-448b-a6c8-6030ba209ebb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:15:06.623430 kubelet[3384]: I0715 23:15:06.623409 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c383ca1-9f09-448b-a6c8-6030ba209ebb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6c383ca1-9f09-448b-a6c8-6030ba209ebb" (UID: "6c383ca1-9f09-448b-a6c8-6030ba209ebb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 23:15:06.624416 kubelet[3384]: I0715 23:15:06.624170 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/449257be-4ff3-4039-aba5-08424b0ae9f1-kube-api-access-j976h" (OuterVolumeSpecName: "kube-api-access-j976h") pod "449257be-4ff3-4039-aba5-08424b0ae9f1" (UID: "449257be-4ff3-4039-aba5-08424b0ae9f1"). InnerVolumeSpecName "kube-api-access-j976h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 23:15:06.624416 kubelet[3384]: I0715 23:15:06.624207 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6c383ca1-9f09-448b-a6c8-6030ba209ebb" (UID: "6c383ca1-9f09-448b-a6c8-6030ba209ebb"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:15:06.624416 kubelet[3384]: I0715 23:15:06.624217 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6c383ca1-9f09-448b-a6c8-6030ba209ebb" (UID: "6c383ca1-9f09-448b-a6c8-6030ba209ebb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:15:06.624416 kubelet[3384]: I0715 23:15:06.624226 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6c383ca1-9f09-448b-a6c8-6030ba209ebb" (UID: "6c383ca1-9f09-448b-a6c8-6030ba209ebb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:15:06.624416 kubelet[3384]: I0715 23:15:06.624236 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-cni-path" (OuterVolumeSpecName: "cni-path") pod "6c383ca1-9f09-448b-a6c8-6030ba209ebb" (UID: "6c383ca1-9f09-448b-a6c8-6030ba209ebb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:15:06.625841 kubelet[3384]: I0715 23:15:06.625712 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/449257be-4ff3-4039-aba5-08424b0ae9f1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "449257be-4ff3-4039-aba5-08424b0ae9f1" (UID: "449257be-4ff3-4039-aba5-08424b0ae9f1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 23:15:06.625841 kubelet[3384]: I0715 23:15:06.625749 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6c383ca1-9f09-448b-a6c8-6030ba209ebb" (UID: "6c383ca1-9f09-448b-a6c8-6030ba209ebb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:15:06.625841 kubelet[3384]: I0715 23:15:06.625760 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6c383ca1-9f09-448b-a6c8-6030ba209ebb" (UID: "6c383ca1-9f09-448b-a6c8-6030ba209ebb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:15:06.625841 kubelet[3384]: I0715 23:15:06.625782 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-hostproc" (OuterVolumeSpecName: "hostproc") pod "6c383ca1-9f09-448b-a6c8-6030ba209ebb" (UID: "6c383ca1-9f09-448b-a6c8-6030ba209ebb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:15:06.626932 kubelet[3384]: I0715 23:15:06.626879 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c383ca1-9f09-448b-a6c8-6030ba209ebb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6c383ca1-9f09-448b-a6c8-6030ba209ebb" (UID: "6c383ca1-9f09-448b-a6c8-6030ba209ebb"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 23:15:06.627327 kubelet[3384]: I0715 23:15:06.627303 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c383ca1-9f09-448b-a6c8-6030ba209ebb-kube-api-access-gxbfv" (OuterVolumeSpecName: "kube-api-access-gxbfv") pod "6c383ca1-9f09-448b-a6c8-6030ba209ebb" (UID: "6c383ca1-9f09-448b-a6c8-6030ba209ebb"). InnerVolumeSpecName "kube-api-access-gxbfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 23:15:06.721896 kubelet[3384]: I0715 23:15:06.721759 3384 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-etc-cni-netd\") on node \"ci-4372.0.1-n-7068735510\" DevicePath \"\"" Jul 15 23:15:06.721896 kubelet[3384]: I0715 23:15:06.721796 3384 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j976h\" (UniqueName: \"kubernetes.io/projected/449257be-4ff3-4039-aba5-08424b0ae9f1-kube-api-access-j976h\") on node \"ci-4372.0.1-n-7068735510\" DevicePath \"\"" Jul 15 23:15:06.721896 kubelet[3384]: I0715 23:15:06.721803 3384 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-lib-modules\") on node \"ci-4372.0.1-n-7068735510\" DevicePath \"\"" Jul 15 23:15:06.721896 kubelet[3384]: I0715 23:15:06.721809 3384 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-host-proc-sys-net\") on node \"ci-4372.0.1-n-7068735510\" DevicePath \"\"" Jul 15 23:15:06.721896 kubelet[3384]: I0715 23:15:06.721815 3384 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-cilium-run\") on node \"ci-4372.0.1-n-7068735510\" DevicePath \"\"" Jul 15 23:15:06.721896 kubelet[3384]: I0715 23:15:06.721820 3384 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-cilium-cgroup\") on node \"ci-4372.0.1-n-7068735510\" DevicePath \"\"" Jul 15 23:15:06.721896 kubelet[3384]: I0715 23:15:06.721826 3384 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/449257be-4ff3-4039-aba5-08424b0ae9f1-cilium-config-path\") on node \"ci-4372.0.1-n-7068735510\" DevicePath \"\"" Jul 15 23:15:06.721896 kubelet[3384]: I0715 23:15:06.721833 3384 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c383ca1-9f09-448b-a6c8-6030ba209ebb-clustermesh-secrets\") on node \"ci-4372.0.1-n-7068735510\" DevicePath \"\"" Jul 15 23:15:06.722155 kubelet[3384]: I0715 23:15:06.721840 3384 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gxbfv\" (UniqueName: \"kubernetes.io/projected/6c383ca1-9f09-448b-a6c8-6030ba209ebb-kube-api-access-gxbfv\") on node \"ci-4372.0.1-n-7068735510\" DevicePath \"\"" Jul 15 23:15:06.722155 kubelet[3384]: I0715 23:15:06.721845 3384 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-cni-path\") on node \"ci-4372.0.1-n-7068735510\" DevicePath \"\"" Jul 15 23:15:06.722155 kubelet[3384]: I0715 23:15:06.721851 3384 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-xtables-lock\") on node \"ci-4372.0.1-n-7068735510\" DevicePath \"\"" Jul 15 23:15:06.722155 kubelet[3384]: I0715 23:15:06.721856 3384 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-bpf-maps\") on node \"ci-4372.0.1-n-7068735510\" DevicePath \"\"" Jul 15 23:15:06.722155 kubelet[3384]: I0715 23:15:06.721861 3384 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-hostproc\") on node \"ci-4372.0.1-n-7068735510\" DevicePath \"\"" Jul 15 23:15:06.722155 kubelet[3384]: I0715 23:15:06.721866 3384 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c383ca1-9f09-448b-a6c8-6030ba209ebb-hubble-tls\") on node \"ci-4372.0.1-n-7068735510\" DevicePath \"\"" Jul 15 23:15:06.722155 kubelet[3384]: I0715 23:15:06.721871 3384 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c383ca1-9f09-448b-a6c8-6030ba209ebb-cilium-config-path\") on node \"ci-4372.0.1-n-7068735510\" DevicePath \"\"" Jul 15 23:15:06.722155 kubelet[3384]: I0715 23:15:06.721876 3384 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c383ca1-9f09-448b-a6c8-6030ba209ebb-host-proc-sys-kernel\") on node \"ci-4372.0.1-n-7068735510\" DevicePath \"\"" Jul 15 23:15:06.815511 kubelet[3384]: I0715 23:15:06.815460 3384 scope.go:117] "RemoveContainer" containerID="ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d" Jul 15 23:15:06.820084 containerd[1893]: time="2025-07-15T23:15:06.819666583Z" level=info msg="RemoveContainer for \"ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d\"" Jul 15 23:15:06.822866 systemd[1]: Removed slice kubepods-burstable-pod6c383ca1_9f09_448b_a6c8_6030ba209ebb.slice - libcontainer container kubepods-burstable-pod6c383ca1_9f09_448b_a6c8_6030ba209ebb.slice. Jul 15 23:15:06.822953 systemd[1]: kubepods-burstable-pod6c383ca1_9f09_448b_a6c8_6030ba209ebb.slice: Consumed 4.390s CPU time, 123.1M memory peak, 136K read from disk, 12.9M written to disk. Jul 15 23:15:06.826707 systemd[1]: Removed slice kubepods-besteffort-pod449257be_4ff3_4039_aba5_08424b0ae9f1.slice - libcontainer container kubepods-besteffort-pod449257be_4ff3_4039_aba5_08424b0ae9f1.slice. 
Jul 15 23:15:06.839808 containerd[1893]: time="2025-07-15T23:15:06.839777928Z" level=info msg="RemoveContainer for \"ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d\" returns successfully" Jul 15 23:15:06.840369 kubelet[3384]: I0715 23:15:06.840258 3384 scope.go:117] "RemoveContainer" containerID="7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8" Jul 15 23:15:06.841556 containerd[1893]: time="2025-07-15T23:15:06.841481488Z" level=info msg="RemoveContainer for \"7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8\"" Jul 15 23:15:06.852804 containerd[1893]: time="2025-07-15T23:15:06.852778391Z" level=info msg="RemoveContainer for \"7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8\" returns successfully" Jul 15 23:15:06.852987 kubelet[3384]: I0715 23:15:06.852933 3384 scope.go:117] "RemoveContainer" containerID="843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4" Jul 15 23:15:06.854830 containerd[1893]: time="2025-07-15T23:15:06.854755279Z" level=info msg="RemoveContainer for \"843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4\"" Jul 15 23:15:06.867409 containerd[1893]: time="2025-07-15T23:15:06.867375996Z" level=info msg="RemoveContainer for \"843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4\" returns successfully" Jul 15 23:15:06.867692 kubelet[3384]: I0715 23:15:06.867594 3384 scope.go:117] "RemoveContainer" containerID="c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1" Jul 15 23:15:06.869970 containerd[1893]: time="2025-07-15T23:15:06.869868794Z" level=info msg="RemoveContainer for \"c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1\"" Jul 15 23:15:06.881734 containerd[1893]: time="2025-07-15T23:15:06.881672432Z" level=info msg="RemoveContainer for \"c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1\" returns successfully" Jul 15 23:15:06.881932 kubelet[3384]: I0715 23:15:06.881887 3384 scope.go:117] "RemoveContainer" containerID="555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275" Jul 15 23:15:06.883611 containerd[1893]: time="2025-07-15T23:15:06.883438314Z" level=info msg="RemoveContainer for \"555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275\"" Jul 15 23:15:06.896576 containerd[1893]: time="2025-07-15T23:15:06.896550692Z" level=info msg="RemoveContainer for \"555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275\" returns successfully" Jul 15 23:15:06.896780 kubelet[3384]: I0715 23:15:06.896706 3384 scope.go:117] "RemoveContainer" containerID="ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d" Jul 15 23:15:06.897006 containerd[1893]: time="2025-07-15T23:15:06.896975736Z" level=error msg="ContainerStatus for \"ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d\": not found" Jul 15 23:15:06.897113 kubelet[3384]: E0715 23:15:06.897090 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d\": not found" containerID="ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d" Jul 15 23:15:06.897156 kubelet[3384]: I0715 23:15:06.897112 3384 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d"} err="failed to get container status \"ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d\": rpc error: code = NotFound desc = an error occurred when try to find container \"ae0aee44076cf56193df4856e8302b240e3c1bcdd0978852515da0a01394093d\": not found" Jul 15 23:15:06.897156 kubelet[3384]: I0715 23:15:06.897140 3384 scope.go:117] "RemoveContainer" containerID="7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8" Jul 15 23:15:06.897384 containerd[1893]: time="2025-07-15T23:15:06.897359459Z" level=error msg="ContainerStatus for \"7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8\": not found" Jul 15 23:15:06.897610 kubelet[3384]: E0715 23:15:06.897584 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8\": not found" containerID="7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8" Jul 15 23:15:06.897675 kubelet[3384]: I0715 23:15:06.897610 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8"} err="failed to get container status \"7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8\": rpc error: code = NotFound desc = an error occurred when try to find container \"7405a91a64599546bede6aaadaf63eb61509ac121e6fc2814336518f205f2cb8\": not found" Jul 15 23:15:06.897675 kubelet[3384]: I0715 23:15:06.897620 3384 scope.go:117] "RemoveContainer" containerID="843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4" Jul 15 23:15:06.897930 containerd[1893]: time="2025-07-15T23:15:06.897894978Z" level=error msg="ContainerStatus for \"843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4\": not found" Jul 15 23:15:06.898078 kubelet[3384]: E0715 23:15:06.898060 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4\": not found" containerID="843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4" Jul 15 23:15:06.898137 kubelet[3384]: I0715 23:15:06.898078 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4"} err="failed to get container status \"843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4\": rpc error: code = NotFound desc = an error occurred when try to find container \"843f51464d4d8530dca81d68b087af16f8b000ce8c9fd52399e88603ba0afcd4\": not found" Jul 15 23:15:06.898137 kubelet[3384]: I0715 23:15:06.898091 3384 scope.go:117] "RemoveContainer" containerID="c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1" Jul 15 23:15:06.898276 containerd[1893]: time="2025-07-15T23:15:06.898250732Z" level=error msg="ContainerStatus for \"c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1\": not found" Jul 15 23:15:06.898361 kubelet[3384]: E0715 23:15:06.898343 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1\": not found" containerID="c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1" Jul 15 23:15:06.898422 kubelet[3384]: I0715 23:15:06.898361 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1"} err="failed to get container status \"c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1\": rpc error: code = NotFound desc = an error occurred when try to find container \"c3dbac7483ef2e1e3fa813886081f203dad7d0b7b687ce0db6af382b575412b1\": not found" Jul 15 23:15:06.898422 kubelet[3384]: I0715 23:15:06.898390 3384 scope.go:117] "RemoveContainer" containerID="555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275" Jul 15 23:15:06.898709 containerd[1893]: time="2025-07-15T23:15:06.898663136Z" level=error msg="ContainerStatus for \"555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275\": not found" Jul 15 23:15:06.898857 kubelet[3384]: E0715 23:15:06.898839 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275\": not found" containerID="555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275" Jul 15 23:15:06.899009 kubelet[3384]: I0715 23:15:06.898858 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275"} err="failed to get container status \"555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275\": rpc error: code = NotFound desc = an error occurred when try to find container \"555d8b5e82aefcce975b54095cb52b15774cdaa17d87076dd1a7f9539e480275\": not found" Jul 15 23:15:06.899009 kubelet[3384]: I0715 23:15:06.898869 3384 scope.go:117] "RemoveContainer" containerID="89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b" Jul 15 23:15:06.900067 containerd[1893]: time="2025-07-15T23:15:06.900041063Z" level=info msg="RemoveContainer for \"89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b\"" Jul 15 23:15:06.910081 containerd[1893]: time="2025-07-15T23:15:06.910053706Z" level=info msg="RemoveContainer for \"89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b\" returns successfully" Jul 15 23:15:06.910347 kubelet[3384]: I0715 23:15:06.910324 3384 scope.go:117] "RemoveContainer" containerID="89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b" Jul 15 23:15:06.910664 containerd[1893]: time="2025-07-15T23:15:06.910636562Z" level=error msg="ContainerStatus for \"89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b\": not found" Jul 15 23:15:06.910850 
kubelet[3384]: E0715 23:15:06.910750 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b\": not found" containerID="89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b" Jul 15 23:15:06.910929 kubelet[3384]: I0715 23:15:06.910902 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b"} err="failed to get container status \"89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b\": rpc error: code = NotFound desc = an error occurred when try to find container \"89dc76f411fcba521489a78a5895c32f13d6961546078037cf4ab3fc436a068b\": not found" Jul 15 23:15:07.394900 systemd[1]: var-lib-kubelet-pods-449257be\x2d4ff3\x2d4039\x2daba5\x2d08424b0ae9f1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj976h.mount: Deactivated successfully. Jul 15 23:15:07.394991 systemd[1]: var-lib-kubelet-pods-6c383ca1\x2d9f09\x2d448b\x2da6c8\x2d6030ba209ebb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgxbfv.mount: Deactivated successfully. Jul 15 23:15:07.395046 systemd[1]: var-lib-kubelet-pods-6c383ca1\x2d9f09\x2d448b\x2da6c8\x2d6030ba209ebb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 15 23:15:07.395081 systemd[1]: var-lib-kubelet-pods-6c383ca1\x2d9f09\x2d448b\x2da6c8\x2d6030ba209ebb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 15 23:15:08.382445 sshd[4903]: Connection closed by 10.200.16.10 port 45296 Jul 15 23:15:08.382802 sshd-session[4901]: pam_unix(sshd:session): session closed for user core Jul 15 23:15:08.386083 systemd[1]: sshd@20-10.200.20.39:22-10.200.16.10:45296.service: Deactivated successfully. Jul 15 23:15:08.387549 systemd[1]: session-23.scope: Deactivated successfully. Jul 15 23:15:08.388756 systemd-logind[1858]: Session 23 logged out. Waiting for processes to exit. Jul 15 23:15:08.390164 systemd-logind[1858]: Removed session 23. Jul 15 23:15:08.460005 systemd[1]: Started sshd@21-10.200.20.39:22-10.200.16.10:45310.service - OpenSSH per-connection server daemon (10.200.16.10:45310). Jul 15 23:15:08.542815 kubelet[3384]: I0715 23:15:08.542773 3384 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="449257be-4ff3-4039-aba5-08424b0ae9f1" path="/var/lib/kubelet/pods/449257be-4ff3-4039-aba5-08424b0ae9f1/volumes" Jul 15 23:15:08.543173 kubelet[3384]: I0715 23:15:08.543041 3384 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c383ca1-9f09-448b-a6c8-6030ba209ebb" path="/var/lib/kubelet/pods/6c383ca1-9f09-448b-a6c8-6030ba209ebb/volumes" Jul 15 23:15:08.896214 sshd[5052]: Accepted publickey for core from 10.200.16.10 port 45310 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:15:08.897278 sshd-session[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:15:08.900861 systemd-logind[1858]: New session 24 of user core. Jul 15 23:15:08.904656 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 15 23:15:09.485031 systemd[1]: Created slice kubepods-burstable-pod54015b80_5ff4_4442_8b1e_6dd7d3707b20.slice - libcontainer container kubepods-burstable-pod54015b80_5ff4_4442_8b1e_6dd7d3707b20.slice. 
Jul 15 23:15:09.525623 sshd[5054]: Connection closed by 10.200.16.10 port 45310
Jul 15 23:15:09.526337 sshd-session[5052]: pam_unix(sshd:session): session closed for user core
Jul 15 23:15:09.531024 systemd[1]: sshd@21-10.200.20.39:22-10.200.16.10:45310.service: Deactivated successfully.
Jul 15 23:15:09.535423 kubelet[3384]: I0715 23:15:09.535381 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/54015b80-5ff4-4442-8b1e-6dd7d3707b20-hubble-tls\") pod \"cilium-kklmv\" (UID: \"54015b80-5ff4-4442-8b1e-6dd7d3707b20\") " pod="kube-system/cilium-kklmv"
Jul 15 23:15:09.535617 kubelet[3384]: I0715 23:15:09.535411 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hwcd\" (UniqueName: \"kubernetes.io/projected/54015b80-5ff4-4442-8b1e-6dd7d3707b20-kube-api-access-2hwcd\") pod \"cilium-kklmv\" (UID: \"54015b80-5ff4-4442-8b1e-6dd7d3707b20\") " pod="kube-system/cilium-kklmv"
Jul 15 23:15:09.535617 kubelet[3384]: I0715 23:15:09.535545 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/54015b80-5ff4-4442-8b1e-6dd7d3707b20-cilium-run\") pod \"cilium-kklmv\" (UID: \"54015b80-5ff4-4442-8b1e-6dd7d3707b20\") " pod="kube-system/cilium-kklmv"
Jul 15 23:15:09.535617 kubelet[3384]: I0715 23:15:09.535562 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/54015b80-5ff4-4442-8b1e-6dd7d3707b20-bpf-maps\") pod \"cilium-kklmv\" (UID: \"54015b80-5ff4-4442-8b1e-6dd7d3707b20\") " pod="kube-system/cilium-kklmv"
Jul 15 23:15:09.535617 kubelet[3384]: I0715 23:15:09.535572 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/54015b80-5ff4-4442-8b1e-6dd7d3707b20-etc-cni-netd\") pod \"cilium-kklmv\" (UID: \"54015b80-5ff4-4442-8b1e-6dd7d3707b20\") " pod="kube-system/cilium-kklmv"
Jul 15 23:15:09.535617 kubelet[3384]: I0715 23:15:09.535582 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/54015b80-5ff4-4442-8b1e-6dd7d3707b20-clustermesh-secrets\") pod \"cilium-kklmv\" (UID: \"54015b80-5ff4-4442-8b1e-6dd7d3707b20\") " pod="kube-system/cilium-kklmv"
Jul 15 23:15:09.535617 kubelet[3384]: I0715 23:15:09.535590 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/54015b80-5ff4-4442-8b1e-6dd7d3707b20-hostproc\") pod \"cilium-kklmv\" (UID: \"54015b80-5ff4-4442-8b1e-6dd7d3707b20\") " pod="kube-system/cilium-kklmv"
Jul 15 23:15:09.535737 kubelet[3384]: I0715 23:15:09.535601 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/54015b80-5ff4-4442-8b1e-6dd7d3707b20-cilium-cgroup\") pod \"cilium-kklmv\" (UID: \"54015b80-5ff4-4442-8b1e-6dd7d3707b20\") " pod="kube-system/cilium-kklmv"
Jul 15 23:15:09.535877 kubelet[3384]: I0715 23:15:09.535611 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/54015b80-5ff4-4442-8b1e-6dd7d3707b20-cni-path\") pod \"cilium-kklmv\" (UID: \"54015b80-5ff4-4442-8b1e-6dd7d3707b20\") " pod="kube-system/cilium-kklmv"
Jul 15 23:15:09.535877 kubelet[3384]: I0715 23:15:09.535792 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54015b80-5ff4-4442-8b1e-6dd7d3707b20-xtables-lock\") pod \"cilium-kklmv\" (UID: \"54015b80-5ff4-4442-8b1e-6dd7d3707b20\") " pod="kube-system/cilium-kklmv"
Jul 15 23:15:09.535877 kubelet[3384]: I0715 23:15:09.535807 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/54015b80-5ff4-4442-8b1e-6dd7d3707b20-cilium-config-path\") pod \"cilium-kklmv\" (UID: \"54015b80-5ff4-4442-8b1e-6dd7d3707b20\") " pod="kube-system/cilium-kklmv"
Jul 15 23:15:09.535877 kubelet[3384]: I0715 23:15:09.535817 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/54015b80-5ff4-4442-8b1e-6dd7d3707b20-cilium-ipsec-secrets\") pod \"cilium-kklmv\" (UID: \"54015b80-5ff4-4442-8b1e-6dd7d3707b20\") " pod="kube-system/cilium-kklmv"
Jul 15 23:15:09.535877 kubelet[3384]: I0715 23:15:09.535826 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54015b80-5ff4-4442-8b1e-6dd7d3707b20-lib-modules\") pod \"cilium-kklmv\" (UID: \"54015b80-5ff4-4442-8b1e-6dd7d3707b20\") " pod="kube-system/cilium-kklmv"
Jul 15 23:15:09.535877 kubelet[3384]: I0715 23:15:09.535845 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/54015b80-5ff4-4442-8b1e-6dd7d3707b20-host-proc-sys-kernel\") pod \"cilium-kklmv\" (UID: \"54015b80-5ff4-4442-8b1e-6dd7d3707b20\") " pod="kube-system/cilium-kklmv"
Jul 15 23:15:09.536651 kubelet[3384]: I0715 23:15:09.535855 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/54015b80-5ff4-4442-8b1e-6dd7d3707b20-host-proc-sys-net\") pod \"cilium-kklmv\" (UID: \"54015b80-5ff4-4442-8b1e-6dd7d3707b20\") " pod="kube-system/cilium-kklmv"
Jul 15 23:15:09.536813 systemd[1]: session-24.scope: Deactivated successfully.
Jul 15 23:15:09.538892 systemd-logind[1858]: Session 24 logged out. Waiting for processes to exit.
Jul 15 23:15:09.540444 systemd-logind[1858]: Removed session 24.
Jul 15 23:15:09.630120 systemd[1]: Started sshd@22-10.200.20.39:22-10.200.16.10:45312.service - OpenSSH per-connection server daemon (10.200.16.10:45312).
Jul 15 23:15:09.790385 containerd[1893]: time="2025-07-15T23:15:09.789978404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kklmv,Uid:54015b80-5ff4-4442-8b1e-6dd7d3707b20,Namespace:kube-system,Attempt:0,}"
Jul 15 23:15:09.835199 containerd[1893]: time="2025-07-15T23:15:09.835159409Z" level=info msg="connecting to shim 7dbc8c62da884d6ae2f62d651a73445b37b433732e483f2fc4c66efdd4524af2" address="unix:///run/containerd/s/e04977be5fc2d7dc6ba729691fc608d5c04254bc682b09f27acf22ed1af29671" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:15:09.857657 systemd[1]: Started cri-containerd-7dbc8c62da884d6ae2f62d651a73445b37b433732e483f2fc4c66efdd4524af2.scope - libcontainer container 7dbc8c62da884d6ae2f62d651a73445b37b433732e483f2fc4c66efdd4524af2.
Jul 15 23:15:09.876357 containerd[1893]: time="2025-07-15T23:15:09.876313540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kklmv,Uid:54015b80-5ff4-4442-8b1e-6dd7d3707b20,Namespace:kube-system,Attempt:0,} returns sandbox id \"7dbc8c62da884d6ae2f62d651a73445b37b433732e483f2fc4c66efdd4524af2\""
Jul 15 23:15:09.884884 containerd[1893]: time="2025-07-15T23:15:09.884848246Z" level=info msg="CreateContainer within sandbox \"7dbc8c62da884d6ae2f62d651a73445b37b433732e483f2fc4c66efdd4524af2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 15 23:15:09.915963 containerd[1893]: time="2025-07-15T23:15:09.915930892Z" level=info msg="Container 3e7e5f777220ca8f8be1c485a49292e534e83118b1c55d93cc8439f773c2f668: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:15:09.929953 containerd[1893]: time="2025-07-15T23:15:09.929919455Z" level=info msg="CreateContainer within sandbox \"7dbc8c62da884d6ae2f62d651a73445b37b433732e483f2fc4c66efdd4524af2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3e7e5f777220ca8f8be1c485a49292e534e83118b1c55d93cc8439f773c2f668\""
Jul 15 23:15:09.930692 containerd[1893]: time="2025-07-15T23:15:09.930599955Z" level=info msg="StartContainer for \"3e7e5f777220ca8f8be1c485a49292e534e83118b1c55d93cc8439f773c2f668\""
Jul 15 23:15:09.931811 containerd[1893]: time="2025-07-15T23:15:09.931785852Z" level=info msg="connecting to shim 3e7e5f777220ca8f8be1c485a49292e534e83118b1c55d93cc8439f773c2f668" address="unix:///run/containerd/s/e04977be5fc2d7dc6ba729691fc608d5c04254bc682b09f27acf22ed1af29671" protocol=ttrpc version=3
Jul 15 23:15:09.947650 systemd[1]: Started cri-containerd-3e7e5f777220ca8f8be1c485a49292e534e83118b1c55d93cc8439f773c2f668.scope - libcontainer container 3e7e5f777220ca8f8be1c485a49292e534e83118b1c55d93cc8439f773c2f668.
Jul 15 23:15:09.972742 containerd[1893]: time="2025-07-15T23:15:09.972711321Z" level=info msg="StartContainer for \"3e7e5f777220ca8f8be1c485a49292e534e83118b1c55d93cc8439f773c2f668\" returns successfully"
Jul 15 23:15:09.974946 systemd[1]: cri-containerd-3e7e5f777220ca8f8be1c485a49292e534e83118b1c55d93cc8439f773c2f668.scope: Deactivated successfully.
Jul 15 23:15:09.977361 containerd[1893]: time="2025-07-15T23:15:09.977329075Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e7e5f777220ca8f8be1c485a49292e534e83118b1c55d93cc8439f773c2f668\" id:\"3e7e5f777220ca8f8be1c485a49292e534e83118b1c55d93cc8439f773c2f668\" pid:5129 exited_at:{seconds:1752621309 nanos:977040987}"
Jul 15 23:15:09.977576 containerd[1893]: time="2025-07-15T23:15:09.977543337Z" level=info msg="received exit event container_id:\"3e7e5f777220ca8f8be1c485a49292e534e83118b1c55d93cc8439f773c2f668\" id:\"3e7e5f777220ca8f8be1c485a49292e534e83118b1c55d93cc8439f773c2f668\" pid:5129 exited_at:{seconds:1752621309 nanos:977040987}"
Jul 15 23:15:10.103328 sshd[5064]: Accepted publickey for core from 10.200.16.10 port 45312 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk
Jul 15 23:15:10.104518 sshd-session[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:15:10.108251 systemd-logind[1858]: New session 25 of user core.
Jul 15 23:15:10.115681 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 15 23:15:10.425697 sshd[5160]: Connection closed by 10.200.16.10 port 45312
Jul 15 23:15:10.426234 sshd-session[5064]: pam_unix(sshd:session): session closed for user core
Jul 15 23:15:10.429817 systemd-logind[1858]: Session 25 logged out. Waiting for processes to exit.
Jul 15 23:15:10.430246 systemd[1]: sshd@22-10.200.20.39:22-10.200.16.10:45312.service: Deactivated successfully.
Jul 15 23:15:10.432204 systemd[1]: session-25.scope: Deactivated successfully.
Jul 15 23:15:10.434488 systemd-logind[1858]: Removed session 25.
Jul 15 23:15:10.461404 kubelet[3384]: I0715 23:15:10.461338 3384 setters.go:618] "Node became not ready" node="ci-4372.0.1-n-7068735510" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-15T23:15:10Z","lastTransitionTime":"2025-07-15T23:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 15 23:15:10.523360 systemd[1]: Started sshd@23-10.200.20.39:22-10.200.16.10:46060.service - OpenSSH per-connection server daemon (10.200.16.10:46060).
Jul 15 23:15:10.839206 containerd[1893]: time="2025-07-15T23:15:10.839034662Z" level=info msg="CreateContainer within sandbox \"7dbc8c62da884d6ae2f62d651a73445b37b433732e483f2fc4c66efdd4524af2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 15 23:15:10.876794 containerd[1893]: time="2025-07-15T23:15:10.876757216Z" level=info msg="Container 80d32a6bc1a279a1b843181ba4f3b7098aa20aa9ae6646cb48f2c4334166c771: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:15:10.894376 containerd[1893]: time="2025-07-15T23:15:10.894344257Z" level=info msg="CreateContainer within sandbox \"7dbc8c62da884d6ae2f62d651a73445b37b433732e483f2fc4c66efdd4524af2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"80d32a6bc1a279a1b843181ba4f3b7098aa20aa9ae6646cb48f2c4334166c771\""
Jul 15 23:15:10.895091 containerd[1893]: time="2025-07-15T23:15:10.894767437Z" level=info msg="StartContainer for \"80d32a6bc1a279a1b843181ba4f3b7098aa20aa9ae6646cb48f2c4334166c771\""
Jul 15 23:15:10.896671 containerd[1893]: time="2025-07-15T23:15:10.896600488Z" level=info msg="connecting to shim 80d32a6bc1a279a1b843181ba4f3b7098aa20aa9ae6646cb48f2c4334166c771" address="unix:///run/containerd/s/e04977be5fc2d7dc6ba729691fc608d5c04254bc682b09f27acf22ed1af29671" protocol=ttrpc version=3
Jul 15 23:15:10.915648 systemd[1]: Started cri-containerd-80d32a6bc1a279a1b843181ba4f3b7098aa20aa9ae6646cb48f2c4334166c771.scope - libcontainer container 80d32a6bc1a279a1b843181ba4f3b7098aa20aa9ae6646cb48f2c4334166c771.
Jul 15 23:15:10.942266 systemd[1]: cri-containerd-80d32a6bc1a279a1b843181ba4f3b7098aa20aa9ae6646cb48f2c4334166c771.scope: Deactivated successfully.
Jul 15 23:15:10.943572 containerd[1893]: time="2025-07-15T23:15:10.943509582Z" level=info msg="received exit event container_id:\"80d32a6bc1a279a1b843181ba4f3b7098aa20aa9ae6646cb48f2c4334166c771\" id:\"80d32a6bc1a279a1b843181ba4f3b7098aa20aa9ae6646cb48f2c4334166c771\" pid:5181 exited_at:{seconds:1752621310 nanos:943271399}"
Jul 15 23:15:10.943842 containerd[1893]: time="2025-07-15T23:15:10.943822591Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80d32a6bc1a279a1b843181ba4f3b7098aa20aa9ae6646cb48f2c4334166c771\" id:\"80d32a6bc1a279a1b843181ba4f3b7098aa20aa9ae6646cb48f2c4334166c771\" pid:5181 exited_at:{seconds:1752621310 nanos:943271399}"
Jul 15 23:15:10.944235 containerd[1893]: time="2025-07-15T23:15:10.944213554Z" level=info msg="StartContainer for \"80d32a6bc1a279a1b843181ba4f3b7098aa20aa9ae6646cb48f2c4334166c771\" returns successfully"
Jul 15 23:15:10.959794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80d32a6bc1a279a1b843181ba4f3b7098aa20aa9ae6646cb48f2c4334166c771-rootfs.mount: Deactivated successfully.
Jul 15 23:15:11.002280 sshd[5167]: Accepted publickey for core from 10.200.16.10 port 46060 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk
Jul 15 23:15:11.003459 sshd-session[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:15:11.007613 systemd-logind[1858]: New session 26 of user core.
Jul 15 23:15:11.013044 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 15 23:15:11.618348 kubelet[3384]: E0715 23:15:11.618298 3384 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 15 23:15:11.842541 containerd[1893]: time="2025-07-15T23:15:11.842465189Z" level=info msg="CreateContainer within sandbox \"7dbc8c62da884d6ae2f62d651a73445b37b433732e483f2fc4c66efdd4524af2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 15 23:15:11.873242 containerd[1893]: time="2025-07-15T23:15:11.872502006Z" level=info msg="Container 865bebf909df4f406edff9bb1b2fdb44cdcc8afcaccc3598299aa4787ca294da: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:15:11.895068 containerd[1893]: time="2025-07-15T23:15:11.895034755Z" level=info msg="CreateContainer within sandbox \"7dbc8c62da884d6ae2f62d651a73445b37b433732e483f2fc4c66efdd4524af2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"865bebf909df4f406edff9bb1b2fdb44cdcc8afcaccc3598299aa4787ca294da\""
Jul 15 23:15:11.895713 containerd[1893]: time="2025-07-15T23:15:11.895621979Z" level=info msg="StartContainer for \"865bebf909df4f406edff9bb1b2fdb44cdcc8afcaccc3598299aa4787ca294da\""
Jul 15 23:15:11.896657 containerd[1893]: time="2025-07-15T23:15:11.896633664Z" level=info msg="connecting to shim 865bebf909df4f406edff9bb1b2fdb44cdcc8afcaccc3598299aa4787ca294da" address="unix:///run/containerd/s/e04977be5fc2d7dc6ba729691fc608d5c04254bc682b09f27acf22ed1af29671" protocol=ttrpc version=3
Jul 15 23:15:11.913651 systemd[1]: Started cri-containerd-865bebf909df4f406edff9bb1b2fdb44cdcc8afcaccc3598299aa4787ca294da.scope - libcontainer container 865bebf909df4f406edff9bb1b2fdb44cdcc8afcaccc3598299aa4787ca294da.
Jul 15 23:15:11.939580 systemd[1]: cri-containerd-865bebf909df4f406edff9bb1b2fdb44cdcc8afcaccc3598299aa4787ca294da.scope: Deactivated successfully.
Jul 15 23:15:11.940475 containerd[1893]: time="2025-07-15T23:15:11.940441982Z" level=info msg="TaskExit event in podsandbox handler container_id:\"865bebf909df4f406edff9bb1b2fdb44cdcc8afcaccc3598299aa4787ca294da\" id:\"865bebf909df4f406edff9bb1b2fdb44cdcc8afcaccc3598299aa4787ca294da\" pid:5234 exited_at:{seconds:1752621311 nanos:940120733}"
Jul 15 23:15:11.941356 containerd[1893]: time="2025-07-15T23:15:11.941325967Z" level=info msg="received exit event container_id:\"865bebf909df4f406edff9bb1b2fdb44cdcc8afcaccc3598299aa4787ca294da\" id:\"865bebf909df4f406edff9bb1b2fdb44cdcc8afcaccc3598299aa4787ca294da\" pid:5234 exited_at:{seconds:1752621311 nanos:940120733}"
Jul 15 23:15:11.942556 containerd[1893]: time="2025-07-15T23:15:11.942510216Z" level=info msg="StartContainer for \"865bebf909df4f406edff9bb1b2fdb44cdcc8afcaccc3598299aa4787ca294da\" returns successfully"
Jul 15 23:15:11.960009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-865bebf909df4f406edff9bb1b2fdb44cdcc8afcaccc3598299aa4787ca294da-rootfs.mount: Deactivated successfully.
Jul 15 23:15:12.846936 containerd[1893]: time="2025-07-15T23:15:12.846882520Z" level=info msg="CreateContainer within sandbox \"7dbc8c62da884d6ae2f62d651a73445b37b433732e483f2fc4c66efdd4524af2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 15 23:15:12.874252 containerd[1893]: time="2025-07-15T23:15:12.874221349Z" level=info msg="Container 48108d731c2f151e61e692d722384dc68174bc8a1e6e67c8117b45e39019a1e7: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:15:12.876297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4261690788.mount: Deactivated successfully.
Jul 15 23:15:12.895900 containerd[1893]: time="2025-07-15T23:15:12.895864304Z" level=info msg="CreateContainer within sandbox \"7dbc8c62da884d6ae2f62d651a73445b37b433732e483f2fc4c66efdd4524af2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"48108d731c2f151e61e692d722384dc68174bc8a1e6e67c8117b45e39019a1e7\""
Jul 15 23:15:12.896356 containerd[1893]: time="2025-07-15T23:15:12.896333814Z" level=info msg="StartContainer for \"48108d731c2f151e61e692d722384dc68174bc8a1e6e67c8117b45e39019a1e7\""
Jul 15 23:15:12.896978 containerd[1893]: time="2025-07-15T23:15:12.896894173Z" level=info msg="connecting to shim 48108d731c2f151e61e692d722384dc68174bc8a1e6e67c8117b45e39019a1e7" address="unix:///run/containerd/s/e04977be5fc2d7dc6ba729691fc608d5c04254bc682b09f27acf22ed1af29671" protocol=ttrpc version=3
Jul 15 23:15:12.915655 systemd[1]: Started cri-containerd-48108d731c2f151e61e692d722384dc68174bc8a1e6e67c8117b45e39019a1e7.scope - libcontainer container 48108d731c2f151e61e692d722384dc68174bc8a1e6e67c8117b45e39019a1e7.
Jul 15 23:15:12.933821 systemd[1]: cri-containerd-48108d731c2f151e61e692d722384dc68174bc8a1e6e67c8117b45e39019a1e7.scope: Deactivated successfully.
Jul 15 23:15:12.935041 containerd[1893]: time="2025-07-15T23:15:12.934996394Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48108d731c2f151e61e692d722384dc68174bc8a1e6e67c8117b45e39019a1e7\" id:\"48108d731c2f151e61e692d722384dc68174bc8a1e6e67c8117b45e39019a1e7\" pid:5276 exited_at:{seconds:1752621312 nanos:934698378}"
Jul 15 23:15:12.939199 containerd[1893]: time="2025-07-15T23:15:12.939170512Z" level=info msg="received exit event container_id:\"48108d731c2f151e61e692d722384dc68174bc8a1e6e67c8117b45e39019a1e7\" id:\"48108d731c2f151e61e692d722384dc68174bc8a1e6e67c8117b45e39019a1e7\" pid:5276 exited_at:{seconds:1752621312 nanos:934698378}"
Jul 15 23:15:12.945034 containerd[1893]: time="2025-07-15T23:15:12.945007573Z" level=info msg="StartContainer for \"48108d731c2f151e61e692d722384dc68174bc8a1e6e67c8117b45e39019a1e7\" returns successfully"
Jul 15 23:15:12.954488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48108d731c2f151e61e692d722384dc68174bc8a1e6e67c8117b45e39019a1e7-rootfs.mount: Deactivated successfully.
Jul 15 23:15:13.852374 containerd[1893]: time="2025-07-15T23:15:13.852275455Z" level=info msg="CreateContainer within sandbox \"7dbc8c62da884d6ae2f62d651a73445b37b433732e483f2fc4c66efdd4524af2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 15 23:15:13.887739 containerd[1893]: time="2025-07-15T23:15:13.887701288Z" level=info msg="Container c572e121e15de76dc3ba5ae3e58177a7880b32dab5cd57eac1eb6f702d6a0bf0: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:15:13.909135 containerd[1893]: time="2025-07-15T23:15:13.909067252Z" level=info msg="CreateContainer within sandbox \"7dbc8c62da884d6ae2f62d651a73445b37b433732e483f2fc4c66efdd4524af2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c572e121e15de76dc3ba5ae3e58177a7880b32dab5cd57eac1eb6f702d6a0bf0\""
Jul 15 23:15:13.909829 containerd[1893]: time="2025-07-15T23:15:13.909804905Z" level=info msg="StartContainer for \"c572e121e15de76dc3ba5ae3e58177a7880b32dab5cd57eac1eb6f702d6a0bf0\""
Jul 15 23:15:13.910503 containerd[1893]: time="2025-07-15T23:15:13.910461508Z" level=info msg="connecting to shim c572e121e15de76dc3ba5ae3e58177a7880b32dab5cd57eac1eb6f702d6a0bf0" address="unix:///run/containerd/s/e04977be5fc2d7dc6ba729691fc608d5c04254bc682b09f27acf22ed1af29671" protocol=ttrpc version=3
Jul 15 23:15:13.930642 systemd[1]: Started cri-containerd-c572e121e15de76dc3ba5ae3e58177a7880b32dab5cd57eac1eb6f702d6a0bf0.scope - libcontainer container c572e121e15de76dc3ba5ae3e58177a7880b32dab5cd57eac1eb6f702d6a0bf0.
Jul 15 23:15:13.956814 containerd[1893]: time="2025-07-15T23:15:13.956777080Z" level=info msg="StartContainer for \"c572e121e15de76dc3ba5ae3e58177a7880b32dab5cd57eac1eb6f702d6a0bf0\" returns successfully"
Jul 15 23:15:14.005178 containerd[1893]: time="2025-07-15T23:15:14.005127943Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c572e121e15de76dc3ba5ae3e58177a7880b32dab5cd57eac1eb6f702d6a0bf0\" id:\"a5d6057869b2cca0ebe7fbf613d6a791a4eec4e81d383f6de78af6c6d8dbee20\" pid:5344 exited_at:{seconds:1752621314 nanos:4774613}"
Jul 15 23:15:14.207553 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 15 23:15:14.867789 kubelet[3384]: I0715 23:15:14.867735 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kklmv" podStartSLOduration=5.86772103 podStartE2EDuration="5.86772103s" podCreationTimestamp="2025-07-15 23:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:15:14.867448782 +0000 UTC m=+148.393591819" watchObservedRunningTime="2025-07-15 23:15:14.86772103 +0000 UTC m=+148.393864067"
Jul 15 23:15:15.422065 containerd[1893]: time="2025-07-15T23:15:15.421742988Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c572e121e15de76dc3ba5ae3e58177a7880b32dab5cd57eac1eb6f702d6a0bf0\" id:\"97d515c15f9d98bf5674c3d13ad05f3da47a89f731ba29cfdaa97a6bd860537d\" pid:5419 exit_status:1 exited_at:{seconds:1752621315 nanos:420234370}"
Jul 15 23:15:16.579986 systemd-networkd[1574]: lxc_health: Link UP
Jul 15 23:15:16.594087 systemd-networkd[1574]: lxc_health: Gained carrier
Jul 15 23:15:17.498418 containerd[1893]: time="2025-07-15T23:15:17.498150382Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c572e121e15de76dc3ba5ae3e58177a7880b32dab5cd57eac1eb6f702d6a0bf0\" id:\"c28dbb7e30f7604964eb949699364eb6152e43fb5e00652f6e706fe220647a85\" pid:5874 exited_at:{seconds:1752621317 nanos:497349535}"
Jul 15 23:15:18.022667 systemd-networkd[1574]: lxc_health: Gained IPv6LL
Jul 15 23:15:19.614056 containerd[1893]: time="2025-07-15T23:15:19.614012514Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c572e121e15de76dc3ba5ae3e58177a7880b32dab5cd57eac1eb6f702d6a0bf0\" id:\"8a6e6ecb2efac05c22b7e0f5af8e95c9abd4bb215ad44cb7a22e91e4ab2353b3\" pid:5907 exited_at:{seconds:1752621319 nanos:613469507}"
Jul 15 23:15:21.686426 containerd[1893]: time="2025-07-15T23:15:21.686386408Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c572e121e15de76dc3ba5ae3e58177a7880b32dab5cd57eac1eb6f702d6a0bf0\" id:\"5fc0084a6ac01173c7e2d30ae8f949be2f4866099af2369fb2877a9710e1da7a\" pid:5929 exited_at:{seconds:1752621321 nanos:685603514}"
Jul 15 23:15:21.768050 sshd[5216]: Connection closed by 10.200.16.10 port 46060
Jul 15 23:15:21.768375 sshd-session[5167]: pam_unix(sshd:session): session closed for user core
Jul 15 23:15:21.772216 systemd[1]: sshd@23-10.200.20.39:22-10.200.16.10:46060.service: Deactivated successfully.
Jul 15 23:15:21.774695 systemd[1]: session-26.scope: Deactivated successfully.
Jul 15 23:15:21.776167 systemd-logind[1858]: Session 26 logged out. Waiting for processes to exit.
Jul 15 23:15:21.777073 systemd-logind[1858]: Removed session 26.