Jul 15 04:40:56.061969 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Jul 15 04:40:56.061988 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Jul 15 03:28:41 -00 2025 Jul 15 04:40:56.061994 kernel: KASLR enabled Jul 15 04:40:56.061998 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jul 15 04:40:56.062003 kernel: printk: legacy bootconsole [pl11] enabled Jul 15 04:40:56.062007 kernel: efi: EFI v2.7 by EDK II Jul 15 04:40:56.062012 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e018 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 Jul 15 04:40:56.062016 kernel: random: crng init done Jul 15 04:40:56.062020 kernel: secureboot: Secure boot disabled Jul 15 04:40:56.062024 kernel: ACPI: Early table checksum verification disabled Jul 15 04:40:56.062028 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jul 15 04:40:56.062031 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 04:40:56.062035 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 04:40:56.062040 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jul 15 04:40:56.062045 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 04:40:56.062049 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 04:40:56.062054 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 04:40:56.062058 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 04:40:56.062063 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 04:40:56.062067 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 04:40:56.062071 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jul 15 04:40:56.062075 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 04:40:56.062079 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jul 15 04:40:56.062083 kernel: ACPI: Use ACPI SPCR as default console: Yes Jul 15 04:40:56.062088 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jul 15 04:40:56.062092 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Jul 15 04:40:56.062096 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Jul 15 04:40:56.062100 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jul 15 04:40:56.062104 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jul 15 04:40:56.062109 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jul 15 04:40:56.062113 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jul 15 04:40:56.062133 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jul 15 04:40:56.062137 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jul 15 04:40:56.062142 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jul 15 04:40:56.062146 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jul 15 04:40:56.062150 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x800000000000-0xffffffffffff] hotplug Jul 15 04:40:56.062154 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Jul 15 04:40:56.062158 kernel: NODE_DATA(0) allocated [mem 0x1bf7fda00-0x1bf804fff] Jul 15 04:40:56.062162 kernel: Zone ranges: Jul 15 04:40:56.062167 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jul 15 04:40:56.062174 kernel: DMA32 empty Jul 15 04:40:56.062179 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jul 15 04:40:56.062183 kernel: Device empty Jul 15 04:40:56.062187 kernel: Movable zone start for each node Jul 15 04:40:56.062192 kernel: Early memory node ranges Jul 15 04:40:56.062197 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jul 15 04:40:56.062201 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] Jul 15 04:40:56.062205 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] Jul 15 04:40:56.062210 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] Jul 15 04:40:56.062214 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jul 15 04:40:56.062219 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jul 15 04:40:56.062223 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jul 15 04:40:56.062227 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jul 15 04:40:56.062231 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jul 15 04:40:56.062236 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jul 15 04:40:56.062240 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jul 15 04:40:56.062244 kernel: cma: Reserved 16 MiB at 0x000000003d400000 on node -1 Jul 15 04:40:56.062250 kernel: psci: probing for conduit method from ACPI. Jul 15 04:40:56.062254 kernel: psci: PSCIv1.1 detected in firmware. Jul 15 04:40:56.062259 kernel: psci: Using standard PSCI v0.2 function IDs Jul 15 04:40:56.062263 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jul 15 04:40:56.062267 kernel: psci: SMC Calling Convention v1.4 Jul 15 04:40:56.062271 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jul 15 04:40:56.062276 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jul 15 04:40:56.062280 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jul 15 04:40:56.062284 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jul 15 04:40:56.062289 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 15 04:40:56.062293 kernel: Detected PIPT I-cache on CPU0 Jul 15 04:40:56.062299 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Jul 15 04:40:56.062303 kernel: CPU features: detected: GIC system register CPU interface Jul 15 04:40:56.062307 kernel: CPU features: detected: Spectre-v4 Jul 15 04:40:56.062312 kernel: CPU features: detected: Spectre-BHB Jul 15 04:40:56.062316 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 15 04:40:56.062320 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 15 04:40:56.062325 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Jul 15 04:40:56.062329 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 15 04:40:56.062334 kernel: alternatives: applying boot alternatives Jul 15 04:40:56.062339 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=71133d47dc7355ed63f3db64861b54679726ebf08c2975c3bf327e76b39a3acd Jul 15 04:40:56.062344 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 15 04:40:56.062349 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 15 04:40:56.062353 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 15 04:40:56.062358 kernel: Fallback order for Node 0: 0 Jul 15 04:40:56.062362 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Jul 15 04:40:56.062366 kernel: Policy zone: Normal Jul 15 04:40:56.062371 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 15 04:40:56.062375 kernel: software IO TLB: area num 2. Jul 15 04:40:56.062379 kernel: software IO TLB: mapped [mem 0x0000000036210000-0x000000003a210000] (64MB) Jul 15 04:40:56.062384 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 15 04:40:56.062388 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 15 04:40:56.062393 kernel: rcu: RCU event tracing is enabled. Jul 15 04:40:56.062399 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 15 04:40:56.062403 kernel: Trampoline variant of Tasks RCU enabled. Jul 15 04:40:56.062407 kernel: Tracing variant of Tasks RCU enabled. Jul 15 04:40:56.062412 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 15 04:40:56.062416 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 15 04:40:56.062421 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 15 04:40:56.062425 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jul 15 04:40:56.062430 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 15 04:40:56.062434 kernel: GICv3: 960 SPIs implemented Jul 15 04:40:56.062438 kernel: GICv3: 0 Extended SPIs implemented Jul 15 04:40:56.062443 kernel: Root IRQ handler: gic_handle_irq Jul 15 04:40:56.062447 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jul 15 04:40:56.062452 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Jul 15 04:40:56.062457 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jul 15 04:40:56.062461 kernel: ITS: No ITS available, not enabling LPIs Jul 15 04:40:56.062466 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 15 04:40:56.062470 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Jul 15 04:40:56.062475 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 15 04:40:56.062479 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Jul 15 04:40:56.062483 kernel: Console: colour dummy device 80x25 Jul 15 04:40:56.062488 kernel: printk: legacy console [tty1] enabled Jul 15 04:40:56.062493 kernel: ACPI: Core revision 20240827 Jul 15 04:40:56.062497 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Jul 15 04:40:56.062503 kernel: pid_max: default: 32768 minimum: 301 Jul 15 04:40:56.062507 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 15 04:40:56.062512 kernel: landlock: Up and running. Jul 15 04:40:56.062516 kernel: SELinux: Initializing. Jul 15 04:40:56.062521 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 15 04:40:56.062529 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 15 04:40:56.062534 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1 Jul 15 04:40:56.062539 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0 Jul 15 04:40:56.062544 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jul 15 04:40:56.062548 kernel: rcu: Hierarchical SRCU implementation. Jul 15 04:40:56.062553 kernel: rcu: Max phase no-delay instances is 400. Jul 15 04:40:56.062559 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 15 04:40:56.062564 kernel: Remapping and enabling EFI services. Jul 15 04:40:56.062568 kernel: smp: Bringing up secondary CPUs ... Jul 15 04:40:56.062573 kernel: Detected PIPT I-cache on CPU1 Jul 15 04:40:56.062578 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jul 15 04:40:56.062583 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Jul 15 04:40:56.062588 kernel: smp: Brought up 1 node, 2 CPUs Jul 15 04:40:56.062593 kernel: SMP: Total of 2 processors activated. 
Jul 15 04:40:56.062597 kernel: CPU: All CPU(s) started at EL1 Jul 15 04:40:56.062602 kernel: CPU features: detected: 32-bit EL0 Support Jul 15 04:40:56.062607 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jul 15 04:40:56.062612 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 15 04:40:56.062617 kernel: CPU features: detected: Common not Private translations Jul 15 04:40:56.062621 kernel: CPU features: detected: CRC32 instructions Jul 15 04:40:56.062627 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Jul 15 04:40:56.062632 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 15 04:40:56.062636 kernel: CPU features: detected: LSE atomic instructions Jul 15 04:40:56.062641 kernel: CPU features: detected: Privileged Access Never Jul 15 04:40:56.062646 kernel: CPU features: detected: Speculation barrier (SB) Jul 15 04:40:56.062651 kernel: CPU features: detected: TLB range maintenance instructions Jul 15 04:40:56.062655 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 15 04:40:56.062660 kernel: CPU features: detected: Scalable Vector Extension Jul 15 04:40:56.062665 kernel: alternatives: applying system-wide alternatives Jul 15 04:40:56.062670 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Jul 15 04:40:56.062675 kernel: SVE: maximum available vector length 16 bytes per vector Jul 15 04:40:56.062680 kernel: SVE: default vector length 16 bytes per vector Jul 15 04:40:56.062685 kernel: Memory: 3959156K/4194160K available (11136K kernel code, 2436K rwdata, 9056K rodata, 39424K init, 1038K bss, 213816K reserved, 16384K cma-reserved) Jul 15 04:40:56.062689 kernel: devtmpfs: initialized Jul 15 04:40:56.062694 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 15 04:40:56.062699 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 15 04:40:56.062704 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 15 04:40:56.062708 kernel: 0 pages in range for non-PLT usage Jul 15 04:40:56.062714 kernel: 508448 pages in range for PLT usage Jul 15 04:40:56.062719 kernel: pinctrl core: initialized pinctrl subsystem Jul 15 04:40:56.062723 kernel: SMBIOS 3.1.0 present. Jul 15 04:40:56.062728 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jul 15 04:40:56.062733 kernel: DMI: Memory slots populated: 2/2 Jul 15 04:40:56.062738 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 15 04:40:56.062742 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 15 04:40:56.062747 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 15 04:40:56.062752 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 15 04:40:56.062758 kernel: audit: initializing netlink subsys (disabled) Jul 15 04:40:56.062762 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Jul 15 04:40:56.062767 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 15 04:40:56.062772 kernel: cpuidle: using governor menu Jul 15 04:40:56.062776 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jul 15 04:40:56.062781 kernel: ASID allocator initialised with 32768 entries Jul 15 04:40:56.062786 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 15 04:40:56.062791 kernel: Serial: AMBA PL011 UART driver Jul 15 04:40:56.062795 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 15 04:40:56.062801 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 15 04:40:56.062806 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 15 04:40:56.062810 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 15 04:40:56.062815 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 15 04:40:56.062820 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 15 04:40:56.062825 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 15 04:40:56.062829 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 15 04:40:56.062834 kernel: ACPI: Added _OSI(Module Device) Jul 15 04:40:56.062839 kernel: ACPI: Added _OSI(Processor Device) Jul 15 04:40:56.062844 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 15 04:40:56.062849 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 15 04:40:56.062854 kernel: ACPI: Interpreter enabled Jul 15 04:40:56.062859 kernel: ACPI: Using GIC for interrupt routing Jul 15 04:40:56.062863 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jul 15 04:40:56.062868 kernel: printk: legacy console [ttyAMA0] enabled Jul 15 04:40:56.062873 kernel: printk: legacy bootconsole [pl11] disabled Jul 15 04:40:56.062878 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jul 15 04:40:56.062882 kernel: ACPI: CPU0 has been hot-added Jul 15 04:40:56.062888 kernel: ACPI: CPU1 has been hot-added Jul 15 04:40:56.062892 kernel: iommu: Default domain type: Translated Jul 15 04:40:56.062897 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 15 04:40:56.062902 kernel: efivars: Registered efivars operations Jul 15 04:40:56.062907 kernel: vgaarb: loaded Jul 15 04:40:56.062911 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 15 04:40:56.062916 kernel: VFS: Disk quotas dquot_6.6.0 Jul 15 04:40:56.062921 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 15 04:40:56.062925 kernel: pnp: PnP ACPI init Jul 15 04:40:56.062931 kernel: pnp: PnP ACPI: found 0 devices Jul 15 04:40:56.062935 kernel: NET: Registered PF_INET protocol family Jul 15 04:40:56.062940 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 15 04:40:56.062945 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 15 04:40:56.062950 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 15 04:40:56.062955 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 15 04:40:56.062960 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 15 04:40:56.062964 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 15 04:40:56.062969 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 15 04:40:56.062974 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 15 04:40:56.062979 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 15 04:40:56.062984 kernel: PCI: CLS 0 bytes, default 64 Jul 15 04:40:56.062988 kernel: kvm [1]: HYP mode not available Jul 
15 04:40:56.062993 kernel: Initialise system trusted keyrings Jul 15 04:40:56.062998 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 15 04:40:56.063003 kernel: Key type asymmetric registered Jul 15 04:40:56.063007 kernel: Asymmetric key parser 'x509' registered Jul 15 04:40:56.063012 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 15 04:40:56.063017 kernel: io scheduler mq-deadline registered Jul 15 04:40:56.063022 kernel: io scheduler kyber registered Jul 15 04:40:56.063027 kernel: io scheduler bfq registered Jul 15 04:40:56.063031 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 15 04:40:56.063036 kernel: thunder_xcv, ver 1.0 Jul 15 04:40:56.063041 kernel: thunder_bgx, ver 1.0 Jul 15 04:40:56.063045 kernel: nicpf, ver 1.0 Jul 15 04:40:56.063050 kernel: nicvf, ver 1.0 Jul 15 04:40:56.063169 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 15 04:40:56.063234 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-15T04:40:55 UTC (1752554455) Jul 15 04:40:56.063242 kernel: efifb: probing for efifb Jul 15 04:40:56.063248 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 15 04:40:56.063253 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 15 04:40:56.063259 kernel: efifb: scrolling: redraw Jul 15 04:40:56.063265 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 15 04:40:56.063270 kernel: Console: switching to colour frame buffer device 128x48 Jul 15 04:40:56.063275 kernel: fb0: EFI VGA frame buffer device Jul 15 04:40:56.063282 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jul 15 04:40:56.063288 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 15 04:40:56.063293 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jul 15 04:40:56.063298 kernel: NET: Registered PF_INET6 protocol family Jul 15 04:40:56.063303 kernel: watchdog: NMI not fully supported Jul 15 04:40:56.063308 kernel: watchdog: Hard watchdog permanently disabled Jul 15 04:40:56.063312 kernel: Segment Routing with IPv6 Jul 15 04:40:56.063317 kernel: In-situ OAM (IOAM) with IPv6 Jul 15 04:40:56.063322 kernel: NET: Registered PF_PACKET protocol family Jul 15 04:40:56.063329 kernel: Key type dns_resolver registered Jul 15 04:40:56.063334 kernel: registered taskstats version 1 Jul 15 04:40:56.063340 kernel: Loading compiled-in X.509 certificates Jul 15 04:40:56.063346 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: b5c59c413839929aea5bd4b52ae6eaff0e245cd2' Jul 15 04:40:56.063351 kernel: Demotion targets for Node 0: null Jul 15 04:40:56.063357 kernel: Key type .fscrypt registered Jul 15 04:40:56.063363 kernel: Key type fscrypt-provisioning registered Jul 15 04:40:56.063368 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 15 04:40:56.063373 kernel: ima: Allocated hash algorithm: sha1 Jul 15 04:40:56.063380 kernel: ima: No architecture policies found Jul 15 04:40:56.063386 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 15 04:40:56.063392 kernel: clk: Disabling unused clocks Jul 15 04:40:56.063397 kernel: PM: genpd: Disabling unused power domains Jul 15 04:40:56.063402 kernel: Warning: unable to open an initial console. 
Jul 15 04:40:56.063406 kernel: Freeing unused kernel memory: 39424K Jul 15 04:40:56.063411 kernel: Run /init as init process Jul 15 04:40:56.063416 kernel: with arguments: Jul 15 04:40:56.063420 kernel: /init Jul 15 04:40:56.063426 kernel: with environment: Jul 15 04:40:56.063431 kernel: HOME=/ Jul 15 04:40:56.063435 kernel: TERM=linux Jul 15 04:40:56.063440 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 15 04:40:56.063446 systemd[1]: Successfully made /usr/ read-only. Jul 15 04:40:56.063452 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 04:40:56.063458 systemd[1]: Detected virtualization microsoft. Jul 15 04:40:56.063464 systemd[1]: Detected architecture arm64. Jul 15 04:40:56.063469 systemd[1]: Running in initrd. Jul 15 04:40:56.063474 systemd[1]: No hostname configured, using default hostname. Jul 15 04:40:56.063479 systemd[1]: Hostname set to . Jul 15 04:40:56.063484 systemd[1]: Initializing machine ID from random generator. Jul 15 04:40:56.063489 systemd[1]: Queued start job for default target initrd.target. Jul 15 04:40:56.063494 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 04:40:56.063500 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 04:40:56.063505 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 15 04:40:56.063511 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 04:40:56.063517 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 15 04:40:56.063523 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 15 04:40:56.063528 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 15 04:40:56.063534 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 15 04:40:56.063539 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 04:40:56.063545 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 04:40:56.063550 systemd[1]: Reached target paths.target - Path Units. Jul 15 04:40:56.063555 systemd[1]: Reached target slices.target - Slice Units. Jul 15 04:40:56.063560 systemd[1]: Reached target swap.target - Swaps. Jul 15 04:40:56.063565 systemd[1]: Reached target timers.target - Timer Units. Jul 15 04:40:56.063570 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 04:40:56.063575 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 04:40:56.063581 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 15 04:40:56.063586 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 15 04:40:56.063592 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 04:40:56.063597 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jul 15 04:40:56.063602 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 04:40:56.063608 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 04:40:56.063613 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 15 04:40:56.063618 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 04:40:56.063623 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 15 04:40:56.063629 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 15 04:40:56.063635 systemd[1]: Starting systemd-fsck-usr.service... Jul 15 04:40:56.063640 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 04:40:56.063645 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 04:40:56.063660 systemd-journald[224]: Collecting audit messages is disabled. Jul 15 04:40:56.063675 systemd-journald[224]: Journal started Jul 15 04:40:56.063688 systemd-journald[224]: Runtime Journal (/run/log/journal/37ff020254b442d08376c240bd7ff49d) is 8M, max 78.5M, 70.5M free. Jul 15 04:40:56.066151 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 04:40:56.071398 systemd-modules-load[226]: Inserted module 'overlay' Jul 15 04:40:56.090285 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 04:40:56.090325 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 15 04:40:56.103274 kernel: Bridge firewalling registered Jul 15 04:40:56.106157 systemd-modules-load[226]: Inserted module 'br_netfilter' Jul 15 04:40:56.111321 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 15 04:40:56.117392 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 04:40:56.129641 systemd[1]: Finished systemd-fsck-usr.service. Jul 15 04:40:56.140387 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 04:40:56.150755 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:40:56.164338 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 15 04:40:56.195789 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 04:40:56.202292 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 04:40:56.226358 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 04:40:56.250287 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 04:40:56.263506 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 04:40:56.269037 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 04:40:56.273987 systemd-tmpfiles[250]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 15 04:40:56.291352 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 04:40:56.309633 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 15 04:40:56.328885 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 15 04:40:56.337854 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 04:40:56.359408 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=71133d47dc7355ed63f3db64861b54679726ebf08c2975c3bf327e76b39a3acd Jul 15 04:40:56.393787 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 04:40:56.407338 systemd-resolved[263]: Positive Trust Anchors: Jul 15 04:40:56.407346 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 04:40:56.439274 kernel: SCSI subsystem initialized Jul 15 04:40:56.439293 kernel: Loading iSCSI transport class v2.0-870. Jul 15 04:40:56.407365 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 04:40:56.410562 systemd-resolved[263]: Defaulting to hostname 'linux'. Jul 15 04:40:56.476034 kernel: iscsi: registered transport (tcp) Jul 15 04:40:56.414453 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 04:40:56.428471 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 04:40:56.490024 kernel: iscsi: registered transport (qla4xxx) Jul 15 04:40:56.490037 kernel: QLogic iSCSI HBA Driver Jul 15 04:40:56.502599 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 04:40:56.517460 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 04:40:56.523190 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 04:40:56.573075 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 15 04:40:56.579552 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 15 04:40:56.639133 kernel: raid6: neonx8 gen() 18543 MB/s Jul 15 04:40:56.658124 kernel: raid6: neonx4 gen() 18544 MB/s Jul 15 04:40:56.678124 kernel: raid6: neonx2 gen() 17074 MB/s Jul 15 04:40:56.698124 kernel: raid6: neonx1 gen() 15027 MB/s Jul 15 04:40:56.717124 kernel: raid6: int64x8 gen() 10548 MB/s Jul 15 04:40:56.737125 kernel: raid6: int64x4 gen() 10615 MB/s Jul 15 04:40:56.757220 kernel: raid6: int64x2 gen() 8979 MB/s Jul 15 04:40:56.779521 kernel: raid6: int64x1 gen() 7006 MB/s Jul 15 04:40:56.779583 kernel: raid6: using algorithm neonx4 gen() 18544 MB/s Jul 15 04:40:56.801879 kernel: raid6: .... 
xor() 15147 MB/s, rmw enabled Jul 15 04:40:56.801929 kernel: raid6: using neon recovery algorithm Jul 15 04:40:56.809760 kernel: xor: measuring software checksum speed Jul 15 04:40:56.809800 kernel: 8regs : 28613 MB/sec Jul 15 04:40:56.812489 kernel: 32regs : 28830 MB/sec Jul 15 04:40:56.815894 kernel: arm64_neon : 37707 MB/sec Jul 15 04:40:56.818888 kernel: xor: using function: arm64_neon (37707 MB/sec) Jul 15 04:40:56.856131 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 15 04:40:56.861401 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 15 04:40:56.872254 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 04:40:56.896983 systemd-udevd[474]: Using default interface naming scheme 'v255'. Jul 15 04:40:56.900998 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 04:40:56.913057 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 15 04:40:56.935166 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation Jul 15 04:40:56.953880 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 04:40:56.960628 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 04:40:57.009147 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 04:40:57.023287 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 15 04:40:57.083255 kernel: hv_vmbus: Vmbus version:5.3 Jul 15 04:40:57.085617 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 04:40:57.089895 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:40:57.104898 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 04:40:57.133936 kernel: hv_vmbus: registering driver hid_hyperv Jul 15 04:40:57.133958 kernel: hv_vmbus: registering driver hv_netvsc Jul 15 04:40:57.133971 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 15 04:40:57.133978 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 15 04:40:57.133984 kernel: hv_vmbus: registering driver hv_storvsc Jul 15 04:40:57.121285 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 04:40:57.191381 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jul 15 04:40:57.191407 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 15 04:40:57.191538 kernel: scsi host1: storvsc_host_t Jul 15 04:40:57.191616 kernel: scsi host0: storvsc_host_t Jul 15 04:40:57.191684 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jul 15 04:40:57.191752 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jul 15 04:40:57.191820 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 15 04:40:57.176701 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Jul 15 04:40:57.224071 kernel: PTP clock support registered Jul 15 04:40:57.224089 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jul 15 04:40:57.224097 kernel: hv_utils: Registering HyperV Utility Driver Jul 15 04:40:57.224104 kernel: hv_vmbus: registering driver hv_utils Jul 15 04:40:57.224110 kernel: hv_utils: Heartbeat IC version 3.0 Jul 15 04:40:57.224128 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 15 04:40:57.224291 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Jul 15 04:40:57.224445 kernel: sd 1:0:0:0: [sda] Write Protect is off Jul 15 04:40:57.224526 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 15 04:40:57.224595 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 15 04:40:57.224660 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#173 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jul 15 04:40:57.224730 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#180 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jul 15 04:40:57.185523 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 04:40:57.000997 kernel: hv_utils: Shutdown IC version 3.2 Jul 15 04:40:57.011478 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 04:40:57.011494 kernel: hv_utils: TimeSync IC version 4.0 Jul 15 04:40:57.011500 kernel: hv_netvsc 000d3ac6-888a-000d-3ac6-888a000d3ac6 eth0: VF slot 1 added Jul 15 04:40:57.011608 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Jul 15 04:40:57.011683 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Jul 15 04:40:57.011748 systemd-journald[224]: Time jumped backwards, rotating. Jul 15 04:40:57.011776 kernel: hv_vmbus: registering driver hv_pci Jul 15 04:40:57.185599 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:40:57.020384 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 15 04:40:57.203250 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 04:40:57.035301 kernel: hv_pci 8827107f-e80e-4bae-99c9-a2ada26d0a92: PCI VMBus probing: Using version 0x10004 Jul 15 04:40:56.950904 systemd-resolved[263]: Clock change detected. Flushing caches. Jul 15 04:40:57.050798 kernel: hv_pci 8827107f-e80e-4bae-99c9-a2ada26d0a92: PCI host bridge to bus e80e:00 Jul 15 04:40:57.050980 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Jul 15 04:40:57.021727 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 15 04:40:57.062456 kernel: pci_bus e80e:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jul 15 04:40:57.067688 kernel: pci_bus e80e:00: No busn resource found for root bus, will use [bus 00-ff] Jul 15 04:40:57.077135 kernel: pci e80e:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Jul 15 04:40:57.083888 kernel: pci e80e:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 15 04:40:57.089874 kernel: pci e80e:00:02.0: enabling Extended Tags Jul 15 04:40:57.107906 kernel: pci e80e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e80e:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Jul 15 04:40:57.118445 kernel: pci_bus e80e:00: busn_res: [bus 00-ff] end is updated to 00 Jul 15 04:40:57.118573 kernel: pci e80e:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Jul 15 04:40:57.128874 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#185 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 15 04:40:57.152903 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#298 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 15 04:40:57.192251 kernel: mlx5_core e80e:00:02.0: enabling device (0000 -> 0002) Jul 15 04:40:57.202612 kernel: mlx5_core e80e:00:02.0: PTM is not supported by PCIe Jul 15 04:40:57.202838 kernel: mlx5_core e80e:00:02.0: firmware version: 16.30.5006 Jul 15 04:40:57.375538 kernel: hv_netvsc 000d3ac6-888a-000d-3ac6-888a000d3ac6 eth0: VF registering: eth1 Jul 15 04:40:57.375743 kernel: mlx5_core e80e:00:02.0 eth1: joined to eth0 Jul 15 04:40:57.381900 kernel: mlx5_core e80e:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jul 15 04:40:57.390883 kernel: mlx5_core e80e:00:02.0 enP59406s1: renamed from eth1 Jul 15 04:40:57.943920 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jul 15 04:40:57.967827 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 15 04:40:58.046632 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jul 15 04:40:58.052720 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jul 15 04:40:58.068920 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 15 04:40:58.103593 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jul 15 04:40:58.114438 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#273 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jul 15 04:40:58.115089 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 15 04:40:58.127816 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 04:40:58.145508 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 04:40:58.139230 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 04:40:58.151550 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 04:40:58.173569 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 15 04:40:58.211614 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 15 04:40:59.171888 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#177 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jul 15 04:40:59.183758 disk-uuid[648]: The operation has completed successfully. 
Jul 15 04:40:59.189185 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 04:40:59.244609 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 04:40:59.246677 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 15 04:40:59.279380 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 15 04:40:59.306004 sh[819]: Success Jul 15 04:40:59.341876 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 04:40:59.341937 kernel: device-mapper: uevent: version 1.0.3 Jul 15 04:40:59.347714 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 15 04:40:59.356893 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 15 04:40:59.536351 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 15 04:40:59.551452 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 15 04:40:59.558993 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 15 04:40:59.584466 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 15 04:40:59.584521 kernel: BTRFS: device fsid a7b7592d-2d1d-4236-b04f-dc58147b4692 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (837) Jul 15 04:40:59.589954 kernel: BTRFS info (device dm-0): first mount of filesystem a7b7592d-2d1d-4236-b04f-dc58147b4692 Jul 15 04:40:59.594743 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 15 04:40:59.598040 kernel: BTRFS info (device dm-0): using free-space-tree Jul 15 04:40:59.857193 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 15 04:40:59.862145 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 15 04:40:59.876302 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 15 04:40:59.877320 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 15 04:40:59.900446 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 15 04:40:59.929952 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (860) Jul 15 04:40:59.929996 kernel: BTRFS info (device sda6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8 Jul 15 04:40:59.935268 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 04:40:59.939540 kernel: BTRFS info (device sda6): using free-space-tree Jul 15 04:40:59.962926 kernel: BTRFS info (device sda6): last unmount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8 Jul 15 04:40:59.964624 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 15 04:40:59.970914 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 15 04:41:00.021817 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 04:41:00.036233 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 04:41:00.065406 systemd-networkd[1006]: lo: Link UP Jul 15 04:41:00.065415 systemd-networkd[1006]: lo: Gained carrier Jul 15 04:41:00.066666 systemd-networkd[1006]: Enumeration completed Jul 15 04:41:00.068991 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jul 15 04:41:00.069355 systemd-networkd[1006]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:41:00.069358 systemd-networkd[1006]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 04:41:00.075240 systemd[1]: Reached target network.target - Network. Jul 15 04:41:00.155876 kernel: mlx5_core e80e:00:02.0 enP59406s1: Link up Jul 15 04:41:00.194188 kernel: hv_netvsc 000d3ac6-888a-000d-3ac6-888a000d3ac6 eth0: Data path switched to VF: enP59406s1 Jul 15 04:41:00.193977 systemd-networkd[1006]: enP59406s1: Link UP Jul 15 04:41:00.194026 systemd-networkd[1006]: eth0: Link UP Jul 15 04:41:00.194088 systemd-networkd[1006]: eth0: Gained carrier Jul 15 04:41:00.194097 systemd-networkd[1006]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:41:00.209097 systemd-networkd[1006]: enP59406s1: Gained carrier Jul 15 04:41:00.230901 systemd-networkd[1006]: eth0: DHCPv4 address 10.200.20.23/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 15 04:41:00.887559 ignition[937]: Ignition 2.21.0 Jul 15 04:41:00.887575 ignition[937]: Stage: fetch-offline Jul 15 04:41:00.891810 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 04:41:00.887658 ignition[937]: no configs at "/usr/lib/ignition/base.d" Jul 15 04:41:00.898198 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 15 04:41:00.887664 ignition[937]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 04:41:00.887764 ignition[937]: parsed url from cmdline: "" Jul 15 04:41:00.887766 ignition[937]: no config URL provided Jul 15 04:41:00.887769 ignition[937]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 04:41:00.887774 ignition[937]: no config at "/usr/lib/ignition/user.ign" Jul 15 04:41:00.887777 ignition[937]: failed to fetch config: resource requires networking Jul 15 04:41:00.887998 ignition[937]: Ignition finished successfully Jul 15 04:41:00.928279 ignition[1018]: Ignition 2.21.0 Jul 15 04:41:00.928285 ignition[1018]: Stage: fetch Jul 15 04:41:00.928469 ignition[1018]: no configs at "/usr/lib/ignition/base.d" Jul 15 04:41:00.928476 ignition[1018]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 04:41:00.928537 ignition[1018]: parsed url from cmdline: "" Jul 15 04:41:00.928539 ignition[1018]: no config URL provided Jul 15 04:41:00.928542 ignition[1018]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 04:41:00.928547 ignition[1018]: no config at "/usr/lib/ignition/user.ign" Jul 15 04:41:00.928598 ignition[1018]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 15 04:41:01.005000 ignition[1018]: GET result: OK Jul 15 04:41:01.005623 ignition[1018]: config has been read from IMDS userdata Jul 15 04:41:01.005645 ignition[1018]: parsing config with SHA512: 17bb859b120dd8fa1f2827cb090c6e1ede8455cc919275d914035c91e40328804e057b8214e8c1a7f909db3b3e3909e05dde129cc9df75e1c3bf202124bc4a3a Jul 15 04:41:01.011977 unknown[1018]: fetched base config from "system" Jul 15 04:41:01.012129 unknown[1018]: fetched base config from "system" Jul 15 04:41:01.012479 ignition[1018]: fetch: fetch complete Jul 15 04:41:01.012135 unknown[1018]: fetched user config from "azure" Jul 15 04:41:01.012484 ignition[1018]: fetch: fetch passed Jul 15 04:41:01.014664 systemd[1]: Finished ignition-fetch.service - 
Ignition (fetch). Jul 15 04:41:01.012536 ignition[1018]: Ignition finished successfully Jul 15 04:41:01.020707 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 15 04:41:01.058294 ignition[1025]: Ignition 2.21.0 Jul 15 04:41:01.058312 ignition[1025]: Stage: kargs Jul 15 04:41:01.058483 ignition[1025]: no configs at "/usr/lib/ignition/base.d" Jul 15 04:41:01.058491 ignition[1025]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 04:41:01.065644 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 15 04:41:01.059967 ignition[1025]: kargs: kargs passed Jul 15 04:41:01.075206 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 15 04:41:01.060263 ignition[1025]: Ignition finished successfully Jul 15 04:41:01.106604 ignition[1032]: Ignition 2.21.0 Jul 15 04:41:01.106622 ignition[1032]: Stage: disks Jul 15 04:41:01.106813 ignition[1032]: no configs at "/usr/lib/ignition/base.d" Jul 15 04:41:01.113008 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 15 04:41:01.106821 ignition[1032]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 04:41:01.122049 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 15 04:41:01.107374 ignition[1032]: disks: disks passed Jul 15 04:41:01.132089 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 15 04:41:01.107417 ignition[1032]: Ignition finished successfully Jul 15 04:41:01.143321 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 04:41:01.153351 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 04:41:01.160928 systemd[1]: Reached target basic.target - Basic System. Jul 15 04:41:01.174540 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 15 04:41:01.254035 systemd-fsck[1040]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jul 15 04:41:01.263421 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 15 04:41:01.271966 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 15 04:41:01.457877 kernel: EXT4-fs (sda9): mounted filesystem 4818953b-9d82-47bd-ab58-d0aa5641a19a r/w with ordered data mode. Quota mode: none. Jul 15 04:41:01.458678 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 15 04:41:01.462420 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 15 04:41:01.485910 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 04:41:01.500413 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 15 04:41:01.508452 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 15 04:41:01.521255 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 04:41:01.562589 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1054) Jul 15 04:41:01.562614 kernel: BTRFS info (device sda6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8 Jul 15 04:41:01.562621 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 04:41:01.562628 kernel: BTRFS info (device sda6): using free-space-tree Jul 15 04:41:01.521290 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 04:41:01.544459 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jul 15 04:41:01.564252 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 15 04:41:01.583991 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 04:41:01.663982 systemd-networkd[1006]: enP59406s1: Gained IPv6LL Jul 15 04:41:01.961340 coreos-metadata[1056]: Jul 15 04:41:01.961 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 15 04:41:01.968937 coreos-metadata[1056]: Jul 15 04:41:01.968 INFO Fetch successful Jul 15 04:41:01.968937 coreos-metadata[1056]: Jul 15 04:41:01.968 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 15 04:41:01.983628 coreos-metadata[1056]: Jul 15 04:41:01.983 INFO Fetch successful Jul 15 04:41:01.988449 systemd-networkd[1006]: eth0: Gained IPv6LL Jul 15 04:41:01.996464 coreos-metadata[1056]: Jul 15 04:41:01.996 INFO wrote hostname ci-4396.0.0-n-efed024aac to /sysroot/etc/hostname Jul 15 04:41:02.004094 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 15 04:41:02.212078 initrd-setup-root[1085]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 04:41:02.252884 initrd-setup-root[1092]: cut: /sysroot/etc/group: No such file or directory Jul 15 04:41:02.258326 initrd-setup-root[1099]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 04:41:02.263720 initrd-setup-root[1106]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 04:41:03.127801 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 15 04:41:03.133977 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 15 04:41:03.151726 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 15 04:41:03.166990 kernel: BTRFS info (device sda6): last unmount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8 Jul 15 04:41:03.158551 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 15 04:41:03.190498 ignition[1174]: INFO : Ignition 2.21.0 Jul 15 04:41:03.190498 ignition[1174]: INFO : Stage: mount Jul 15 04:41:03.198914 ignition[1174]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 04:41:03.198914 ignition[1174]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 04:41:03.198914 ignition[1174]: INFO : mount: mount passed Jul 15 04:41:03.198914 ignition[1174]: INFO : Ignition finished successfully Jul 15 04:41:03.197788 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 15 04:41:03.205754 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 15 04:41:03.231005 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 15 04:41:03.243880 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 04:41:03.270924 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1185) Jul 15 04:41:03.270974 kernel: BTRFS info (device sda6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8 Jul 15 04:41:03.275586 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 04:41:03.279080 kernel: BTRFS info (device sda6): using free-space-tree Jul 15 04:41:03.283621 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 15 04:41:03.313753 ignition[1201]: INFO : Ignition 2.21.0 Jul 15 04:41:03.313753 ignition[1201]: INFO : Stage: files Jul 15 04:41:03.325325 ignition[1201]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 04:41:03.325325 ignition[1201]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 04:41:03.325325 ignition[1201]: DEBUG : files: compiled without relabeling support, skipping Jul 15 04:41:03.325325 ignition[1201]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 04:41:03.325325 ignition[1201]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 04:41:03.354324 ignition[1201]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 04:41:03.354324 ignition[1201]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 04:41:03.354324 ignition[1201]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 04:41:03.353956 unknown[1201]: wrote ssh authorized keys file for user: core Jul 15 04:41:03.406729 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 15 04:41:03.414919 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jul 15 04:41:03.511352 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 15 04:41:04.231692 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 15 04:41:04.243624 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 15 04:41:04.243624 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 15 04:41:04.284123 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 15 04:41:04.366183 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 15 04:41:04.374390 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 15 04:41:04.374390 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 04:41:04.374390 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 04:41:04.374390 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 04:41:04.374390 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 04:41:04.374390 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 04:41:04.374390 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 04:41:04.374390 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 
04:41:04.434893 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 04:41:04.434893 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 04:41:04.434893 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 15 04:41:04.434893 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 15 04:41:04.434893 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 15 04:41:04.434893 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 15 04:41:04.917080 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 15 04:41:05.222267 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 15 04:41:05.222267 ignition[1201]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 15 04:41:05.396358 ignition[1201]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 04:41:05.405587 ignition[1201]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 04:41:05.405587 ignition[1201]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 15 04:41:05.405587 ignition[1201]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jul 15 04:41:05.405587 ignition[1201]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 04:41:05.405587 ignition[1201]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 04:41:05.405587 ignition[1201]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 04:41:05.405587 ignition[1201]: INFO : files: files passed Jul 15 04:41:05.405587 ignition[1201]: INFO : Ignition finished successfully Jul 15 04:41:05.405914 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 15 04:41:05.418734 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 15 04:41:05.453844 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 15 04:41:05.467129 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 04:41:05.497546 initrd-setup-root-after-ignition[1235]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 04:41:05.473508 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jul 15 04:41:05.525953 initrd-setup-root-after-ignition[1232]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 04:41:05.525953 initrd-setup-root-after-ignition[1232]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 15 04:41:05.494550 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 04:41:05.503305 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 15 04:41:05.508717 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 15 04:41:05.551417 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 04:41:05.551520 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 15 04:41:05.560793 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 15 04:41:05.568609 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 15 04:41:05.578590 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 15 04:41:05.579449 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 15 04:41:05.623604 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 04:41:05.632973 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 15 04:41:05.656963 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 15 04:41:05.662420 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 04:41:05.673766 systemd[1]: Stopped target timers.target - Timer Units. Jul 15 04:41:05.683209 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 04:41:05.683329 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 04:41:05.695449 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 15 04:41:05.699844 systemd[1]: Stopped target basic.target - Basic System. Jul 15 04:41:05.708331 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 15 04:41:05.717991 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 04:41:05.728306 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 15 04:41:05.738002 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 15 04:41:05.748100 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 15 04:41:05.757497 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 04:41:05.768320 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 15 04:41:05.779481 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 15 04:41:05.791699 systemd[1]: Stopped target swap.target - Swaps. Jul 15 04:41:05.800314 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 04:41:05.800423 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 15 04:41:05.813832 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 15 04:41:05.819016 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 04:41:05.829615 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 15 04:41:05.834424 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 15 04:41:05.841085 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 04:41:05.841188 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 15 04:41:05.856236 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 04:41:05.856321 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 04:41:05.862156 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 04:41:05.921053 ignition[1256]: INFO : Ignition 2.21.0 Jul 15 04:41:05.921053 ignition[1256]: INFO : Stage: umount Jul 15 04:41:05.921053 ignition[1256]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 04:41:05.921053 ignition[1256]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 04:41:05.862225 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 15 04:41:05.967061 ignition[1256]: INFO : umount: umount passed Jul 15 04:41:05.967061 ignition[1256]: INFO : Ignition finished successfully Jul 15 04:41:05.870467 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 15 04:41:05.870531 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 15 04:41:05.883552 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 15 04:41:05.908407 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 15 04:41:05.920960 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 04:41:05.921168 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 04:41:05.929407 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 04:41:05.930139 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 04:41:05.948046 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 04:41:05.948125 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 15 04:41:05.957065 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 15 04:41:05.959788 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 15 04:41:05.962628 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 15 04:41:05.972459 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 04:41:05.972521 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 15 04:41:05.981022 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 15 04:41:05.981062 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 15 04:41:05.988204 systemd[1]: Stopped target network.target - Network. Jul 15 04:41:05.996594 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 04:41:05.996632 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 04:41:06.006714 systemd[1]: Stopped target paths.target - Path Units. Jul 15 04:41:06.019471 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 04:41:06.022882 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 04:41:06.029745 systemd[1]: Stopped target slices.target - Slice Units. Jul 15 04:41:06.038017 systemd[1]: Stopped target sockets.target - Socket Units. Jul 15 04:41:06.046273 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 04:41:06.046323 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 04:41:06.056383 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Jul 15 04:41:06.056409 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 04:41:06.064374 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 04:41:06.064424 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 15 04:41:06.072390 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 15 04:41:06.072416 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 15 04:41:06.084966 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 15 04:41:06.093321 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 15 04:41:06.103024 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 04:41:06.103111 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 15 04:41:06.316000 kernel: hv_netvsc 000d3ac6-888a-000d-3ac6-888a000d3ac6 eth0: Data path switched from VF: enP59406s1 Jul 15 04:41:06.113801 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 04:41:06.113935 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 15 04:41:06.127547 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 15 04:41:06.127737 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 15 04:41:06.127836 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 15 04:41:06.141618 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 15 04:41:06.142682 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 15 04:41:06.151428 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 04:41:06.151460 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 15 04:41:06.162908 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 15 04:41:06.178551 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 15 04:41:06.178636 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 04:41:06.191343 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 04:41:06.191397 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 04:41:06.205035 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 04:41:06.205120 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 15 04:41:06.211415 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 15 04:41:06.211475 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 04:41:06.226594 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 04:41:06.236246 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 04:41:06.236310 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 15 04:41:06.263332 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 04:41:06.263694 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 04:41:06.275252 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 04:41:06.275289 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 15 04:41:06.285146 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jul 15 04:41:06.285173 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 04:41:06.294848 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 04:41:06.294902 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 15 04:41:06.316158 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 04:41:06.316217 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 15 04:41:06.329496 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 04:41:06.329560 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 04:41:06.350052 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 15 04:41:06.368914 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 15 04:41:06.368978 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 04:41:06.384630 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 15 04:41:06.384681 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 04:41:06.399958 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 15 04:41:06.400008 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 04:41:06.412912 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 04:41:06.412955 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 04:41:06.418858 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 04:41:06.418898 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:41:06.435637 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 15 04:41:06.435685 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 15 04:41:06.435708 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 15 04:41:06.435735 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 04:41:06.436027 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 04:41:06.436148 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 15 04:41:06.443534 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 04:41:06.443607 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 15 04:41:10.113072 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 04:41:10.113913 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 15 04:41:10.123402 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 15 04:41:10.133071 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 04:41:10.133137 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 15 04:41:10.144175 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 15 04:41:10.422021 systemd[1]: Switching root. Jul 15 04:41:10.790685 systemd-journald[224]: Journal stopped Jul 15 04:41:23.801396 systemd-journald[224]: Received SIGTERM from PID 1 (systemd). 
Jul 15 04:41:23.801415 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 04:41:23.801422 kernel: SELinux: policy capability open_perms=1 Jul 15 04:41:23.801429 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 04:41:23.801434 kernel: SELinux: policy capability always_check_network=0 Jul 15 04:41:23.801439 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 04:41:23.801445 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 04:41:23.801451 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 04:41:23.801456 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 04:41:23.801461 kernel: SELinux: policy capability userspace_initial_context=0 Jul 15 04:41:23.801467 kernel: audit: type=1403 audit(1752554476.094:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 04:41:23.801473 systemd[1]: Successfully loaded SELinux policy in 409.751ms. Jul 15 04:41:23.801479 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.272ms. Jul 15 04:41:23.804020 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 04:41:23.804039 systemd[1]: Detected virtualization microsoft. Jul 15 04:41:23.804051 systemd[1]: Detected architecture arm64. Jul 15 04:41:23.804058 systemd[1]: Detected first boot. Jul 15 04:41:23.804065 systemd[1]: Hostname set to . Jul 15 04:41:23.804071 systemd[1]: Initializing machine ID from random generator. Jul 15 04:41:23.804077 zram_generator::config[1299]: No configuration found. Jul 15 04:41:23.804084 kernel: NET: Registered PF_VSOCK protocol family Jul 15 04:41:23.804090 systemd[1]: Populated /etc with preset unit settings. Jul 15 04:41:23.804099 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 15 04:41:23.804105 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 15 04:41:23.804111 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 15 04:41:23.804117 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 04:41:23.804123 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 15 04:41:23.804129 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 15 04:41:23.804135 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 15 04:41:23.804142 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 15 04:41:23.804149 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 15 04:41:23.804155 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 15 04:41:23.804163 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 15 04:41:23.804169 systemd[1]: Created slice user.slice - User and Session Slice. Jul 15 04:41:23.804175 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 04:41:23.804181 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 04:41:23.804188 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jul 15 04:41:23.804195 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 15 04:41:23.804201 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 15 04:41:23.804207 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 04:41:23.804215 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 15 04:41:23.804221 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 04:41:23.804227 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 04:41:23.804233 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 15 04:41:23.804240 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 15 04:41:23.804247 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 15 04:41:23.804253 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 15 04:41:23.804259 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 04:41:23.804266 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 04:41:23.804272 systemd[1]: Reached target slices.target - Slice Units. Jul 15 04:41:23.804278 systemd[1]: Reached target swap.target - Swaps. Jul 15 04:41:23.804284 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 15 04:41:23.804290 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 15 04:41:23.804298 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 15 04:41:23.804305 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 04:41:23.804311 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 04:41:23.804317 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 04:41:23.804324 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 15 04:41:23.804331 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 15 04:41:23.804337 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 15 04:41:23.804343 systemd[1]: Mounting media.mount - External Media Directory... Jul 15 04:41:23.804349 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 15 04:41:23.804355 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 15 04:41:23.804362 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 15 04:41:23.804368 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 04:41:23.804375 systemd[1]: Reached target machines.target - Containers. Jul 15 04:41:23.804382 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 15 04:41:23.804389 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 04:41:23.804395 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 04:41:23.804401 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 15 04:41:23.804407 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jul 15 04:41:23.804414 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 04:41:23.804420 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:41:23.804426 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 15 04:41:23.804434 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:41:23.804441 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 04:41:23.804447 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 04:41:23.804453 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 15 04:41:23.804460 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 15 04:41:23.804466 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 04:41:23.804472 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:41:23.804479 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 04:41:23.804486 kernel: loop: module loaded Jul 15 04:41:23.804491 kernel: fuse: init (API version 7.41) Jul 15 04:41:23.804497 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 04:41:23.804503 kernel: ACPI: bus type drm_connector registered Jul 15 04:41:23.804509 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 04:41:23.804516 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 15 04:41:23.804522 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 15 04:41:23.804556 systemd-journald[1403]: Collecting audit messages is disabled. Jul 15 04:41:23.804572 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 04:41:23.804580 systemd-journald[1403]: Journal started Jul 15 04:41:23.804596 systemd-journald[1403]: Runtime Journal (/run/log/journal/9ddaf05b23c24547b8570a8ea4ba5cf6) is 8M, max 78.5M, 70.5M free. Jul 15 04:41:23.010345 systemd[1]: Queued start job for default target multi-user.target. Jul 15 04:41:23.018280 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 15 04:41:23.018548 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 15 04:41:23.018814 systemd[1]: systemd-journald.service: Consumed 2.823s CPU time. Jul 15 04:41:23.820304 systemd[1]: verity-setup.service: Deactivated successfully. Jul 15 04:41:23.820369 systemd[1]: Stopped verity-setup.service. Jul 15 04:41:23.833176 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 04:41:23.833813 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 15 04:41:23.838744 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 15 04:41:23.843595 systemd[1]: Mounted media.mount - External Media Directory. Jul 15 04:41:23.848230 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 15 04:41:23.852821 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 15 04:41:23.857561 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 15 04:41:23.861906 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Jul 15 04:41:23.867221 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 04:41:23.872714 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 15 04:41:23.872849 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 15 04:41:23.878266 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:41:23.878386 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:41:23.884021 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 04:41:23.884130 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 04:41:23.890393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:41:23.890525 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 04:41:23.896776 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 04:41:23.896983 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 15 04:41:23.902010 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:41:23.902136 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 04:41:23.907441 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 04:41:23.912586 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 04:41:23.918165 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 15 04:41:23.923816 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 15 04:41:23.929580 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 04:41:23.943687 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 04:41:23.949722 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 15 04:41:23.959944 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 15 04:41:23.965173 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 04:41:23.965266 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 04:41:23.970628 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 15 04:41:23.977370 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 15 04:41:23.981922 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:41:23.988730 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 15 04:41:23.994428 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 15 04:41:24.000170 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 04:41:24.001063 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 15 04:41:24.006754 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 04:41:24.007599 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 04:41:24.014899 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jul 15 04:41:24.021696 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 04:41:24.031833 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 15 04:41:24.037976 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 15 04:41:24.047352 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 15 04:41:24.052900 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 15 04:41:24.060359 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 15 04:41:24.067376 systemd-journald[1403]: Time spent on flushing to /var/log/journal/9ddaf05b23c24547b8570a8ea4ba5cf6 is 45.241ms for 947 entries. Jul 15 04:41:24.067376 systemd-journald[1403]: System Journal (/var/log/journal/9ddaf05b23c24547b8570a8ea4ba5cf6) is 11.8M, max 2.6G, 2.6G free. Jul 15 04:41:24.206421 systemd-journald[1403]: Received client request to flush runtime journal. Jul 15 04:41:24.206471 kernel: loop0: detected capacity change from 0 to 207008 Jul 15 04:41:24.206486 systemd-journald[1403]: /var/log/journal/9ddaf05b23c24547b8570a8ea4ba5cf6/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jul 15 04:41:24.206503 systemd-journald[1403]: Rotating system journal. Jul 15 04:41:24.206522 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 04:41:24.206538 kernel: loop1: detected capacity change from 0 to 28800 Jul 15 04:41:24.099278 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 04:41:24.173060 systemd-tmpfiles[1440]: ACLs are not supported, ignoring. Jul 15 04:41:24.173068 systemd-tmpfiles[1440]: ACLs are not supported, ignoring. Jul 15 04:41:24.176506 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 04:41:24.177163 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 15 04:41:24.194176 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 04:41:24.201363 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 15 04:41:24.211079 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 15 04:41:24.366227 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 15 04:41:24.374131 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 04:41:24.392790 systemd-tmpfiles[1459]: ACLs are not supported, ignoring. Jul 15 04:41:24.392807 systemd-tmpfiles[1459]: ACLs are not supported, ignoring. Jul 15 04:41:24.395323 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 04:41:24.557035 kernel: loop2: detected capacity change from 0 to 134232 Jul 15 04:41:26.588898 kernel: loop3: detected capacity change from 0 to 105936 Jul 15 04:41:27.279782 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 04:41:27.286756 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 04:41:27.308333 systemd-udevd[1465]: Using default interface naming scheme 'v255'. 
Jul 15 04:41:28.098899 kernel: loop4: detected capacity change from 0 to 207008 Jul 15 04:41:28.105877 kernel: loop5: detected capacity change from 0 to 28800 Jul 15 04:41:28.110871 kernel: loop6: detected capacity change from 0 to 134232 Jul 15 04:41:28.118876 kernel: loop7: detected capacity change from 0 to 105936 Jul 15 04:41:28.121133 (sd-merge)[1467]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 15 04:41:28.121498 (sd-merge)[1467]: Merged extensions into '/usr'. Jul 15 04:41:28.123752 systemd[1]: Reload requested from client PID 1438 ('systemd-sysext') (unit systemd-sysext.service)... Jul 15 04:41:28.123852 systemd[1]: Reloading... Jul 15 04:41:28.166029 zram_generator::config[1496]: No configuration found. Jul 15 04:41:28.427389 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:41:28.679622 systemd[1]: Reloading finished in 555 ms. Jul 15 04:41:28.710124 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 15 04:41:28.720777 systemd[1]: Starting ensure-sysext.service... Jul 15 04:41:28.725981 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 04:41:28.759384 systemd[1]: Reload requested from client PID 1548 ('systemctl') (unit ensure-sysext.service)... Jul 15 04:41:28.759399 systemd[1]: Reloading... Jul 15 04:41:28.766757 systemd-tmpfiles[1549]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 15 04:41:28.766997 systemd-tmpfiles[1549]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 15 04:41:28.767233 systemd-tmpfiles[1549]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 04:41:28.767375 systemd-tmpfiles[1549]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 15 04:41:28.767791 systemd-tmpfiles[1549]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 04:41:28.767962 systemd-tmpfiles[1549]: ACLs are not supported, ignoring. Jul 15 04:41:28.767990 systemd-tmpfiles[1549]: ACLs are not supported, ignoring. Jul 15 04:41:28.770509 systemd-tmpfiles[1549]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 04:41:28.771100 systemd-tmpfiles[1549]: Skipping /boot Jul 15 04:41:28.777240 systemd-tmpfiles[1549]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 04:41:28.777312 systemd-tmpfiles[1549]: Skipping /boot Jul 15 04:41:28.812235 zram_generator::config[1573]: No configuration found. Jul 15 04:41:28.888016 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:41:28.948142 systemd[1]: Reloading finished in 188 ms. Jul 15 04:41:29.072764 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 04:41:29.081720 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 04:41:29.089061 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jul 15 04:41:29.094308 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 04:41:29.096511 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 04:41:29.103054 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:41:29.110086 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:41:29.115025 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:41:29.115128 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:41:29.124771 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 15 04:41:29.140052 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 04:41:29.146981 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 15 04:41:29.153043 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:41:29.159995 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:41:29.165474 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:41:29.165605 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 04:41:29.171371 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:41:29.171507 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 04:41:29.181731 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 04:41:29.182788 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 04:41:29.199576 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:41:29.205943 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:41:29.210311 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:41:29.210438 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:41:29.211382 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:41:29.216989 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:41:29.223842 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:41:29.224008 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 04:41:29.229802 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:41:29.230025 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 04:41:29.237894 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 04:41:29.246279 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv... Jul 15 04:41:29.250707 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jul 15 04:41:29.251763 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 04:41:29.264073 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 04:41:29.272567 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:41:29.288468 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:41:29.293134 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:41:29.293363 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:41:29.293596 systemd[1]: Reached target time-set.target - System Time Set. Jul 15 04:41:29.298970 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:41:29.302118 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:41:29.307513 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 04:41:29.307764 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 04:41:29.312584 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:41:29.312830 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 04:41:29.318637 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:41:29.318856 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 04:41:29.326133 systemd[1]: Finished ensure-sysext.service. Jul 15 04:41:29.332350 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 04:41:29.332408 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 04:41:29.478178 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 04:41:29.729781 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 04:41:29.828731 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 04:41:30.186264 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 04:41:30.192090 systemd-resolved[1640]: Positive Trust Anchors: Jul 15 04:41:30.192101 systemd-resolved[1640]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 04:41:30.192122 systemd-resolved[1640]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 04:41:30.196544 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 04:41:30.255818 systemd-resolved[1640]: Using system hostname 'ci-4396.0.0-n-efed024aac'. Jul 15 04:41:30.257367 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jul 15 04:41:30.266061 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 04:41:30.275036 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 15 04:41:30.339878 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#281 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 15 04:41:30.398168 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped. Jul 15 04:41:30.412111 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 04:41:30.630377 augenrules[1762]: No rules Jul 15 04:41:30.631569 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 04:41:30.631988 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 04:41:31.029887 kernel: mousedev: PS/2 mouse device common for all mice Jul 15 04:41:31.029975 kernel: hv_vmbus: registering driver hv_balloon Jul 15 04:41:31.032290 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 15 04:41:31.037935 kernel: hv_balloon: Memory hot add disabled on ARM64 Jul 15 04:41:31.392661 kernel: hv_vmbus: registering driver hyperv_fb Jul 15 04:41:31.392754 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 15 04:41:31.393400 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 15 04:41:31.400748 kernel: Console: switching to colour dummy device 80x25 Jul 15 04:41:31.403169 kernel: Console: switching to colour frame buffer device 128x48 Jul 15 04:41:31.575262 systemd-networkd[1701]: lo: Link UP Jul 15 04:41:31.575433 systemd-networkd[1701]: lo: Gained carrier Jul 15 04:41:31.576987 systemd-networkd[1701]: Enumeration completed Jul 15 04:41:31.577087 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 04:41:31.577489 systemd-networkd[1701]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:41:31.577492 systemd-networkd[1701]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 04:41:31.581933 systemd[1]: Reached target network.target - Network. Jul 15 04:41:31.586672 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 04:41:31.592458 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 15 04:41:31.627322 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 04:41:31.627571 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:41:31.627873 kernel: mlx5_core e80e:00:02.0 enP59406s1: Link up Jul 15 04:41:31.633340 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 04:41:31.634381 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 04:41:31.652896 kernel: hv_netvsc 000d3ac6-888a-000d-3ac6-888a000d3ac6 eth0: Data path switched to VF: enP59406s1 Jul 15 04:41:31.653703 systemd-networkd[1701]: enP59406s1: Link UP Jul 15 04:41:31.653841 systemd-networkd[1701]: eth0: Link UP Jul 15 04:41:31.653846 systemd-networkd[1701]: eth0: Gained carrier Jul 15 04:41:31.653877 systemd-networkd[1701]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:41:31.657043 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Jul 15 04:41:31.663162 systemd-networkd[1701]: enP59406s1: Gained carrier Jul 15 04:41:31.675900 systemd-networkd[1701]: eth0: DHCPv4 address 10.200.20.23/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 15 04:41:32.293359 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 15 04:41:32.299617 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 15 04:41:32.595413 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 04:41:32.610937 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:41:32.767985 systemd-networkd[1701]: eth0: Gained IPv6LL Jul 15 04:41:32.770242 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 04:41:32.775798 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 04:41:32.832070 systemd-networkd[1701]: enP59406s1: Gained IPv6LL Jul 15 04:41:33.034892 kernel: MACsec IEEE 802.1AE Jul 15 04:41:34.447349 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 15 04:41:34.452628 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 04:41:44.407823 ldconfig[1433]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 15 04:41:44.418536 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 15 04:41:44.424877 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 15 04:41:44.436761 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 04:41:44.441424 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 04:41:44.445659 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 15 04:41:44.450780 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 04:41:44.456192 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 04:41:44.460686 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 15 04:41:44.465993 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 04:41:44.471176 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 04:41:44.471201 systemd[1]: Reached target paths.target - Path Units. Jul 15 04:41:44.474742 systemd[1]: Reached target timers.target - Timer Units. Jul 15 04:41:44.479467 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 04:41:44.484792 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 15 04:41:44.489994 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 04:41:44.496279 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 04:41:44.503093 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 04:41:44.518507 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Jul 15 04:41:44.535624 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 04:41:44.540726 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 04:41:44.545065 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 04:41:44.548876 systemd[1]: Reached target basic.target - Basic System. Jul 15 04:41:44.552606 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 04:41:44.552625 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 15 04:41:44.554503 systemd[1]: Starting chronyd.service - NTP client/server... Jul 15 04:41:44.566959 systemd[1]: Starting containerd.service - containerd container runtime... Jul 15 04:41:44.578068 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 15 04:41:44.585052 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 04:41:44.591522 (chronyd)[1857]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 15 04:41:44.596057 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 15 04:41:44.613787 chronyd[1866]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 15 04:41:44.614016 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 04:41:44.621666 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 04:41:44.625856 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 04:41:44.630126 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 15 04:41:44.631231 jq[1867]: false Jul 15 04:41:44.636602 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jul 15 04:41:44.638711 KVP[1869]: KVP starting; pid is:1869 Jul 15 04:41:44.638985 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:41:44.644903 KVP[1869]: KVP LIC Version: 3.1 Jul 15 04:41:44.646877 kernel: hv_utils: KVP IC version 4.0 Jul 15 04:41:44.647990 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 15 04:41:44.655815 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 15 04:41:44.664234 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 04:41:44.671451 extend-filesystems[1868]: Found /dev/sda6 Jul 15 04:41:44.671931 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 04:41:44.684430 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 04:41:44.692693 extend-filesystems[1868]: Found /dev/sda9 Jul 15 04:41:44.696691 extend-filesystems[1868]: Checking size of /dev/sda9 Jul 15 04:41:44.695583 chronyd[1866]: Timezone right/UTC failed leap second check, ignoring Jul 15 04:41:44.696193 chronyd[1866]: Loaded seccomp filter (level 2) Jul 15 04:41:44.702959 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 04:41:44.708189 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jul 15 04:41:44.708696 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 04:41:44.712036 systemd[1]: Starting update-engine.service - Update Engine... Jul 15 04:41:44.724144 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 15 04:41:44.732059 systemd[1]: Started chronyd.service - NTP client/server. Jul 15 04:41:44.737184 extend-filesystems[1868]: Old size kept for /dev/sda9 Jul 15 04:41:44.744184 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 04:41:44.753391 jq[1894]: true Jul 15 04:41:44.755480 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 04:41:44.755642 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 04:41:44.755839 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 04:41:44.755983 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 15 04:41:44.763267 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 04:41:44.763413 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 04:41:44.768277 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 15 04:41:44.773750 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 04:41:44.774007 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 15 04:41:44.784818 systemd-logind[1891]: New seat seat0. Jul 15 04:41:44.792534 update_engine[1893]: I20250715 04:41:44.792466 1893 main.cc:92] Flatcar Update Engine starting Jul 15 04:41:44.795572 systemd-logind[1891]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jul 15 04:41:44.797601 systemd[1]: Started systemd-logind.service - User Login Management. Jul 15 04:41:44.804163 (ntainerd)[1922]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 04:41:44.809312 jq[1921]: true Jul 15 04:41:44.852480 tar[1915]: linux-arm64/LICENSE Jul 15 04:41:44.852745 tar[1915]: linux-arm64/helm Jul 15 04:41:44.936339 bash[1974]: Updated "/home/core/.ssh/authorized_keys" Jul 15 04:41:44.935931 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 15 04:41:44.951378 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 15 04:41:44.998428 sshd_keygen[1904]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 04:41:45.030619 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 04:41:45.036003 dbus-daemon[1860]: [system] SELinux support is enabled Jul 15 04:41:45.041479 update_engine[1893]: I20250715 04:41:45.041337 1893 update_check_scheduler.cc:74] Next update check in 8m7s Jul 15 04:41:45.043028 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 15 04:41:45.057854 dbus-daemon[1860]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 15 04:41:45.059287 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 04:41:45.066198 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jul 15 04:41:45.066229 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 15 04:41:45.073897 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 04:41:45.073917 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 15 04:41:45.087760 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 15 04:41:45.093815 systemd[1]: Started update-engine.service - Update Engine. Jul 15 04:41:45.104720 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 15 04:41:45.117991 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 04:41:45.119755 coreos-metadata[1859]: Jul 15 04:41:45.119 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 15 04:41:45.120104 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 04:41:45.124674 coreos-metadata[1859]: Jul 15 04:41:45.124 INFO Fetch successful Jul 15 04:41:45.125427 coreos-metadata[1859]: Jul 15 04:41:45.125 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 15 04:41:45.131163 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 04:41:45.133945 coreos-metadata[1859]: Jul 15 04:41:45.133 INFO Fetch successful Jul 15 04:41:45.136030 coreos-metadata[1859]: Jul 15 04:41:45.135 INFO Fetching http://168.63.129.16/machine/4dbc8ea3-9ec4-4a07-86a0-5cbc5d42694e/16157aef%2D4fca%2D44ba%2D9bf6%2D12c5e7284e68.%5Fci%2D4396.0.0%2Dn%2Defed024aac?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 15 04:41:45.138616 coreos-metadata[1859]: Jul 15 04:41:45.138 INFO Fetch successful Jul 15 04:41:45.138948 coreos-metadata[1859]: Jul 15 04:41:45.138 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 15 04:41:45.148962 coreos-metadata[1859]: Jul 15 04:41:45.148 INFO Fetch successful Jul 15 04:41:45.158151 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 15 04:41:45.169829 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 04:41:45.183968 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 04:41:45.191817 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 15 04:41:45.199187 systemd[1]: Reached target getty.target - Login Prompts. Jul 15 04:41:45.208234 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 15 04:41:45.215227 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 15 04:41:45.247542 tar[1915]: linux-arm64/README.md Jul 15 04:41:45.260036 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jul 15 04:41:45.326374 locksmithd[2019]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 04:41:45.425881 containerd[1922]: time="2025-07-15T04:41:45Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 04:41:45.427898 containerd[1922]: time="2025-07-15T04:41:45.427298204Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 15 04:41:45.434260 containerd[1922]: time="2025-07-15T04:41:45.434225772Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.688µs" Jul 15 04:41:45.434260 containerd[1922]: time="2025-07-15T04:41:45.434254404Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 04:41:45.434349 containerd[1922]: time="2025-07-15T04:41:45.434268332Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 04:41:45.434410 containerd[1922]: time="2025-07-15T04:41:45.434391836Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 04:41:45.434410 containerd[1922]: time="2025-07-15T04:41:45.434408788Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 04:41:45.434439 containerd[1922]: time="2025-07-15T04:41:45.434425564Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 04:41:45.434478 containerd[1922]: time="2025-07-15T04:41:45.434465452Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 04:41:45.434478 containerd[1922]: time="2025-07-15T04:41:45.434475908Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 04:41:45.434650 containerd[1922]: time="2025-07-15T04:41:45.434634012Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 04:41:45.434650 containerd[1922]: time="2025-07-15T04:41:45.434648308Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 04:41:45.434676 containerd[1922]: time="2025-07-15T04:41:45.434662676Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 04:41:45.434676 containerd[1922]: time="2025-07-15T04:41:45.434667916Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 04:41:45.434733 containerd[1922]: time="2025-07-15T04:41:45.434722116Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 04:41:45.435962 containerd[1922]: time="2025-07-15T04:41:45.435499220Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 04:41:45.435962 containerd[1922]: time="2025-07-15T04:41:45.435559340Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 04:41:45.435962 containerd[1922]: time="2025-07-15T04:41:45.435568780Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 04:41:45.435962 containerd[1922]: time="2025-07-15T04:41:45.435594684Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 04:41:45.435962 containerd[1922]: time="2025-07-15T04:41:45.435753820Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 04:41:45.435962 containerd[1922]: time="2025-07-15T04:41:45.435817828Z" level=info msg="metadata content store policy set" policy=shared Jul 15 04:41:45.447929 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:41:45.454770 containerd[1922]: time="2025-07-15T04:41:45.454424788Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 04:41:45.454770 containerd[1922]: time="2025-07-15T04:41:45.454477348Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 04:41:45.454770 containerd[1922]: time="2025-07-15T04:41:45.454488884Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 04:41:45.454770 containerd[1922]: time="2025-07-15T04:41:45.454498332Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 04:41:45.454770 containerd[1922]: time="2025-07-15T04:41:45.454511012Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 04:41:45.454770 containerd[1922]: time="2025-07-15T04:41:45.454520076Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 04:41:45.454770 containerd[1922]: time="2025-07-15T04:41:45.454528788Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 04:41:45.454770 containerd[1922]: time="2025-07-15T04:41:45.454535964Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 04:41:45.454770 containerd[1922]: time="2025-07-15T04:41:45.454543388Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 04:41:45.454770 containerd[1922]: time="2025-07-15T04:41:45.454549780Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 04:41:45.454770 containerd[1922]: time="2025-07-15T04:41:45.454555348Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 04:41:45.454770 containerd[1922]: time="2025-07-15T04:41:45.454563412Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 04:41:45.454770 containerd[1922]: time="2025-07-15T04:41:45.454667884Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 04:41:45.454770 containerd[1922]: time="2025-07-15T04:41:45.454682212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 04:41:45.455004 containerd[1922]: time="2025-07-15T04:41:45.454691708Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 04:41:45.455004 containerd[1922]: time="2025-07-15T04:41:45.454698420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 04:41:45.455004 containerd[1922]: time="2025-07-15T04:41:45.454706340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 04:41:45.455004 containerd[1922]: time="2025-07-15T04:41:45.454713180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 04:41:45.455004 containerd[1922]: time="2025-07-15T04:41:45.454721004Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 04:41:45.455004 containerd[1922]: time="2025-07-15T04:41:45.454727740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 04:41:45.455004 containerd[1922]: time="2025-07-15T04:41:45.454734580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 04:41:45.455004 containerd[1922]: time="2025-07-15T04:41:45.454740996Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 04:41:45.455004 containerd[1922]: time="2025-07-15T04:41:45.454747356Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 04:41:45.455004 containerd[1922]: time="2025-07-15T04:41:45.454798436Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 04:41:45.455004 containerd[1922]: time="2025-07-15T04:41:45.454812628Z" level=info msg="Start snapshots syncer" Jul 15 04:41:45.455004 containerd[1922]: time="2025-07-15T04:41:45.454827692Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 04:41:45.455142 containerd[1922]: time="2025-07-15T04:41:45.455093324Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 04:41:45.455142 containerd[1922]: time="2025-07-15T04:41:45.455137292Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 04:41:45.455646 containerd[1922]: time="2025-07-15T04:41:45.455615508Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 15 04:41:45.455771 containerd[1922]: time="2025-07-15T04:41:45.455755500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 04:41:45.455788 containerd[1922]: time="2025-07-15T04:41:45.455777548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 04:41:45.455788 containerd[1922]: time="2025-07-15T04:41:45.455785140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 04:41:45.455816 containerd[1922]: time="2025-07-15T04:41:45.455795508Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 04:41:45.455816 containerd[1922]: time="2025-07-15T04:41:45.455803436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 04:41:45.455839 containerd[1922]: time="2025-07-15T04:41:45.455815932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 04:41:45.455839 containerd[1922]: time="2025-07-15T04:41:45.455824508Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 04:41:45.455924 containerd[1922]: time="2025-07-15T04:41:45.455841612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 04:41:45.455924 containerd[1922]: 
time="2025-07-15T04:41:45.455849396Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 04:41:45.455924 containerd[1922]: time="2025-07-15T04:41:45.455856260Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 04:41:45.455924 containerd[1922]: time="2025-07-15T04:41:45.455898628Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 04:41:45.455924 containerd[1922]: time="2025-07-15T04:41:45.455908420Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 04:41:45.455924 containerd[1922]: time="2025-07-15T04:41:45.455913564Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 04:41:45.455924 containerd[1922]: time="2025-07-15T04:41:45.455919756Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 04:41:45.455924 containerd[1922]: time="2025-07-15T04:41:45.455924420Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 04:41:45.456029 containerd[1922]: time="2025-07-15T04:41:45.455930444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 04:41:45.456029 containerd[1922]: time="2025-07-15T04:41:45.455969724Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 04:41:45.456029 containerd[1922]: time="2025-07-15T04:41:45.455981868Z" level=info msg="runtime interface created" Jul 15 04:41:45.456029 containerd[1922]: time="2025-07-15T04:41:45.455985260Z" level=info msg="created NRI interface" Jul 15 04:41:45.456029 containerd[1922]: time="2025-07-15T04:41:45.455990708Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 04:41:45.456029 containerd[1922]: time="2025-07-15T04:41:45.455998500Z" level=info msg="Connect containerd service" Jul 15 04:41:45.456029 containerd[1922]: time="2025-07-15T04:41:45.456018236Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 04:41:45.456769 containerd[1922]: time="2025-07-15T04:41:45.456741228Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 04:41:45.486160 (kubelet)[2056]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 04:41:45.721106 kubelet[2056]: E0715 04:41:45.720990 2056 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 04:41:45.723151 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 04:41:45.723358 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 15 04:41:45.724932 systemd[1]: kubelet.service: Consumed 534ms CPU time, 253.5M memory peak. Jul 15 04:41:46.117358 containerd[1922]: time="2025-07-15T04:41:46.117082252Z" level=info msg="Start subscribing containerd event" Jul 15 04:41:46.117358 containerd[1922]: time="2025-07-15T04:41:46.117153964Z" level=info msg="Start recovering state" Jul 15 04:41:46.117358 containerd[1922]: time="2025-07-15T04:41:46.117235244Z" level=info msg="Start event monitor" Jul 15 04:41:46.117358 containerd[1922]: time="2025-07-15T04:41:46.117244916Z" level=info msg="Start cni network conf syncer for default" Jul 15 04:41:46.117358 containerd[1922]: time="2025-07-15T04:41:46.117251036Z" level=info msg="Start streaming server" Jul 15 04:41:46.117358 containerd[1922]: time="2025-07-15T04:41:46.117251884Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 04:41:46.117777 containerd[1922]: time="2025-07-15T04:41:46.117259044Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 04:41:46.117777 containerd[1922]: time="2025-07-15T04:41:46.117768772Z" level=info msg="runtime interface starting up..." Jul 15 04:41:46.117838 containerd[1922]: time="2025-07-15T04:41:46.117774044Z" level=info msg="starting plugins..." Jul 15 04:41:46.117838 containerd[1922]: time="2025-07-15T04:41:46.117803852Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 04:41:46.117926 containerd[1922]: time="2025-07-15T04:41:46.117907372Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 04:41:46.121847 containerd[1922]: time="2025-07-15T04:41:46.118087260Z" level=info msg="containerd successfully booted in 0.692629s" Jul 15 04:41:46.118212 systemd[1]: Started containerd.service - containerd container runtime. Jul 15 04:41:46.123441 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 04:41:46.133938 systemd[1]: Startup finished in 1.696s (kernel) + 20.358s (initrd) + 30.447s (userspace) = 52.502s. Jul 15 04:41:46.404799 login[2035]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:41:46.404992 login[2034]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:41:46.412674 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 04:41:46.415983 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 15 04:41:46.421189 systemd-logind[1891]: New session 2 of user core. Jul 15 04:41:46.424444 systemd-logind[1891]: New session 1 of user core. Jul 15 04:41:46.427677 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 04:41:46.429471 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 15 04:41:46.441367 (systemd)[2081]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 04:41:46.443176 systemd-logind[1891]: New session c1 of user core. 
Jul 15 04:41:46.591401 waagent[2030]: 2025-07-15T04:41:46.591326Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jul 15 04:41:46.598856 waagent[2030]: 2025-07-15T04:41:46.595586Z INFO Daemon Daemon OS: flatcar 4396.0.0 Jul 15 04:41:46.599055 waagent[2030]: 2025-07-15T04:41:46.599024Z INFO Daemon Daemon Python: 3.11.13 Jul 15 04:41:46.602507 waagent[2030]: 2025-07-15T04:41:46.602467Z INFO Daemon Daemon Run daemon Jul 15 04:41:46.605727 waagent[2030]: 2025-07-15T04:41:46.605690Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4396.0.0' Jul 15 04:41:46.612894 waagent[2030]: 2025-07-15T04:41:46.612839Z INFO Daemon Daemon Using waagent for provisioning Jul 15 04:41:46.616869 waagent[2030]: 2025-07-15T04:41:46.616823Z INFO Daemon Daemon Activate resource disk Jul 15 04:41:46.620434 waagent[2030]: 2025-07-15T04:41:46.620399Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 15 04:41:46.628618 waagent[2030]: 2025-07-15T04:41:46.628585Z INFO Daemon Daemon Found device: None Jul 15 04:41:46.631822 waagent[2030]: 2025-07-15T04:41:46.631793Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 15 04:41:46.638159 waagent[2030]: 2025-07-15T04:41:46.638134Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 15 04:41:46.646621 waagent[2030]: 2025-07-15T04:41:46.646587Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 15 04:41:46.650678 waagent[2030]: 2025-07-15T04:41:46.650648Z INFO Daemon Daemon Running default provisioning handler Jul 15 04:41:46.659822 waagent[2030]: 2025-07-15T04:41:46.659783Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jul 15 04:41:46.669674 waagent[2030]: 2025-07-15T04:41:46.669636Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 15 04:41:46.675607 systemd[2081]: Queued start job for default target default.target. Jul 15 04:41:46.677158 waagent[2030]: 2025-07-15T04:41:46.677121Z INFO Daemon Daemon cloud-init is enabled: False Jul 15 04:41:46.680850 waagent[2030]: 2025-07-15T04:41:46.680824Z INFO Daemon Daemon Copying ovf-env.xml Jul 15 04:41:46.682734 systemd[2081]: Created slice app.slice - User Application Slice. Jul 15 04:41:46.682759 systemd[2081]: Reached target paths.target - Paths. Jul 15 04:41:46.682783 systemd[2081]: Reached target timers.target - Timers. Jul 15 04:41:46.684969 systemd[2081]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 04:41:46.691923 systemd[2081]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 04:41:46.691967 systemd[2081]: Reached target sockets.target - Sockets. Jul 15 04:41:46.692003 systemd[2081]: Reached target basic.target - Basic System. Jul 15 04:41:46.692024 systemd[2081]: Reached target default.target - Main User Target. Jul 15 04:41:46.692044 systemd[2081]: Startup finished in 244ms. Jul 15 04:41:46.692139 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 04:41:46.693465 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 04:41:46.694426 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 15 04:41:46.974169 waagent[2030]: 2025-07-15T04:41:46.974056Z INFO Daemon Daemon Successfully mounted dvd Jul 15 04:41:47.278262 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 15 04:41:47.280882 waagent[2030]: 2025-07-15T04:41:47.280222Z INFO Daemon Daemon Detect protocol endpoint Jul 15 04:41:47.284014 waagent[2030]: 2025-07-15T04:41:47.283977Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 15 04:41:47.288238 waagent[2030]: 2025-07-15T04:41:47.288212Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jul 15 04:41:47.293032 waagent[2030]: 2025-07-15T04:41:47.293011Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 15 04:41:47.296813 waagent[2030]: 2025-07-15T04:41:47.296786Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 15 04:41:47.300513 waagent[2030]: 2025-07-15T04:41:47.300492Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 15 04:41:47.312422 waagent[2030]: 2025-07-15T04:41:47.312389Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 15 04:41:47.317290 waagent[2030]: 2025-07-15T04:41:47.317270Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 15 04:41:47.321253 waagent[2030]: 2025-07-15T04:41:47.321231Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 15 04:41:47.717710 waagent[2030]: 2025-07-15T04:41:47.717622Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 15 04:41:47.722517 waagent[2030]: 2025-07-15T04:41:47.722477Z INFO Daemon Daemon Forcing an update of the goal state. Jul 15 04:41:47.729873 waagent[2030]: 2025-07-15T04:41:47.729835Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 15 04:41:48.145421 waagent[2030]: 2025-07-15T04:41:48.145312Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 15 04:41:48.149848 waagent[2030]: 2025-07-15T04:41:48.149813Z INFO Daemon Jul 15 04:41:48.151942 waagent[2030]: 2025-07-15T04:41:48.151915Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 673dce21-6700-4bec-9c7d-2d9687d95c58 eTag: 6456985404166184270 source: Fabric] Jul 15 04:41:48.160255 waagent[2030]: 2025-07-15T04:41:48.160227Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 15 04:41:48.164997 waagent[2030]: 2025-07-15T04:41:48.164972Z INFO Daemon Jul 15 04:41:48.167006 waagent[2030]: 2025-07-15T04:41:48.166984Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 15 04:41:48.175672 waagent[2030]: 2025-07-15T04:41:48.175647Z INFO Daemon Daemon Downloading artifacts profile blob Jul 15 04:41:48.236895 waagent[2030]: 2025-07-15T04:41:48.236119Z INFO Daemon Downloaded certificate {'thumbprint': 'B1933713E613232BB413F0110A08C8F89949E769', 'hasPrivateKey': True} Jul 15 04:41:48.243374 waagent[2030]: 2025-07-15T04:41:48.243339Z INFO Daemon Downloaded certificate {'thumbprint': '27DA5D2881452654110FA40C1762B6F4F3ED3A1C', 'hasPrivateKey': False} Jul 15 04:41:48.250550 waagent[2030]: 2025-07-15T04:41:48.250518Z INFO Daemon Fetch goal state completed Jul 15 04:41:48.301670 waagent[2030]: 2025-07-15T04:41:48.301634Z INFO Daemon Daemon Starting provisioning Jul 15 04:41:48.305417 waagent[2030]: 2025-07-15T04:41:48.305383Z INFO Daemon Daemon Handle ovf-env.xml. 
Jul 15 04:41:48.309072 waagent[2030]: 2025-07-15T04:41:48.309048Z INFO Daemon Daemon Set hostname [ci-4396.0.0-n-efed024aac] Jul 15 04:41:48.327913 waagent[2030]: 2025-07-15T04:41:48.327844Z INFO Daemon Daemon Publish hostname [ci-4396.0.0-n-efed024aac] Jul 15 04:41:48.332767 waagent[2030]: 2025-07-15T04:41:48.332733Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 15 04:41:48.337369 waagent[2030]: 2025-07-15T04:41:48.337338Z INFO Daemon Daemon Primary interface is [eth0] Jul 15 04:41:48.346941 systemd-networkd[1701]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:41:48.346947 systemd-networkd[1701]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 04:41:48.346976 systemd-networkd[1701]: eth0: DHCP lease lost Jul 15 04:41:48.348199 waagent[2030]: 2025-07-15T04:41:48.348098Z INFO Daemon Daemon Create user account if not exists Jul 15 04:41:48.352271 waagent[2030]: 2025-07-15T04:41:48.352243Z INFO Daemon Daemon User core already exists, skip useradd Jul 15 04:41:48.356408 waagent[2030]: 2025-07-15T04:41:48.356379Z INFO Daemon Daemon Configure sudoer Jul 15 04:41:48.376907 systemd-networkd[1701]: eth0: DHCPv4 address 10.200.20.23/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 15 04:41:48.618438 waagent[2030]: 2025-07-15T04:41:48.618279Z INFO Daemon Daemon Configure sshd Jul 15 04:41:48.625624 waagent[2030]: 2025-07-15T04:41:48.625571Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 15 04:41:48.634937 waagent[2030]: 2025-07-15T04:41:48.634903Z INFO Daemon Daemon Deploy ssh public key. Jul 15 04:41:49.727207 waagent[2030]: 2025-07-15T04:41:49.727160Z INFO Daemon Daemon Provisioning complete Jul 15 04:41:49.740619 waagent[2030]: 2025-07-15T04:41:49.740584Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 15 04:41:49.745150 waagent[2030]: 2025-07-15T04:41:49.745116Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jul 15 04:41:49.752195 waagent[2030]: 2025-07-15T04:41:49.752169Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jul 15 04:41:49.849849 waagent[2141]: 2025-07-15T04:41:49.849789Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jul 15 04:41:49.850899 waagent[2141]: 2025-07-15T04:41:49.850244Z INFO ExtHandler ExtHandler OS: flatcar 4396.0.0 Jul 15 04:41:49.850899 waagent[2141]: 2025-07-15T04:41:49.850302Z INFO ExtHandler ExtHandler Python: 3.11.13 Jul 15 04:41:49.850899 waagent[2141]: 2025-07-15T04:41:49.850338Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jul 15 04:41:55.820452 waagent[2141]: 2025-07-15T04:41:55.820234Z INFO ExtHandler ExtHandler Distro: flatcar-4396.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jul 15 04:41:55.821585 waagent[2141]: 2025-07-15T04:41:55.821543Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 15 04:41:55.821653 waagent[2141]: 2025-07-15T04:41:55.821632Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 15 04:41:55.827495 waagent[2141]: 2025-07-15T04:41:55.827450Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 15 04:41:55.835033 waagent[2141]: 2025-07-15T04:41:55.835003Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 15 04:41:55.835380 waagent[2141]: 2025-07-15T04:41:55.835351Z INFO ExtHandler Jul 15 04:41:55.835430 waagent[2141]: 2025-07-15T04:41:55.835414Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 1027a24f-f488-4966-82fa-eaef7c43a384 eTag: 6456985404166184270 source: Fabric] Jul 15 04:41:55.835641 waagent[2141]: 2025-07-15T04:41:55.835617Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jul 15 04:41:55.836078 waagent[2141]: 2025-07-15T04:41:55.836046Z INFO ExtHandler Jul 15 04:41:55.836188 waagent[2141]: 2025-07-15T04:41:55.836108Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 15 04:41:55.839449 waagent[2141]: 2025-07-15T04:41:55.839423Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 15 04:41:55.906335 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 04:41:55.907490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 15 04:41:57.018889 waagent[2141]: 2025-07-15T04:41:57.018263Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B1933713E613232BB413F0110A08C8F89949E769', 'hasPrivateKey': True} Jul 15 04:41:57.018889 waagent[2141]: 2025-07-15T04:41:57.018715Z INFO ExtHandler Downloaded certificate {'thumbprint': '27DA5D2881452654110FA40C1762B6F4F3ED3A1C', 'hasPrivateKey': False} Jul 15 04:41:57.019304 waagent[2141]: 2025-07-15T04:41:57.019089Z INFO ExtHandler Fetch goal state completed Jul 15 04:41:57.032177 waagent[2141]: 2025-07-15T04:41:57.032127Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025) Jul 15 04:41:57.035446 waagent[2141]: 2025-07-15T04:41:57.035402Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2141 Jul 15 04:41:57.035556 waagent[2141]: 2025-07-15T04:41:57.035521Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 15 04:41:57.035791 waagent[2141]: 2025-07-15T04:41:57.035764Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jul 15 04:41:57.036899 waagent[2141]: 2025-07-15T04:41:57.036844Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4396.0.0', '', 'Flatcar Container Linux by Kinvolk'] Jul 15 04:41:57.037235 waagent[2141]: 2025-07-15T04:41:57.037205Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4396.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jul 15 04:41:57.037354 waagent[2141]: 2025-07-15T04:41:57.037331Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jul 15 04:41:57.037784 waagent[2141]: 2025-07-15T04:41:57.037756Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 15 04:41:58.780871 waagent[2141]: 2025-07-15T04:41:58.780823Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 15 04:41:58.781176 waagent[2141]: 2025-07-15T04:41:58.781035Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 15 04:41:58.785905 waagent[2141]: 2025-07-15T04:41:58.785535Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 15 04:41:58.790089 systemd[1]: Reload requested from client PID 2166 ('systemctl') (unit waagent.service)... Jul 15 04:41:58.790315 systemd[1]: Reloading... Jul 15 04:41:58.848896 zram_generator::config[2209]: No configuration found. Jul 15 04:41:58.912040 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:41:58.989931 systemd[1]: Reloading finished in 199 ms. Jul 15 04:41:58.999729 waagent[2141]: 2025-07-15T04:41:58.997957Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 15 04:41:58.999729 waagent[2141]: 2025-07-15T04:41:58.998087Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 15 04:41:59.546547 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 04:41:59.547488 systemd[1]: Started sshd@0-10.200.20.23:22-10.200.16.10:60998.service - OpenSSH per-connection server daemon (10.200.16.10:60998). 
Jul 15 04:42:02.571303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:42:02.576062 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 04:42:02.604042 kubelet[2271]: E0715 04:42:02.603982 2271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 04:42:02.606645 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 04:42:02.606840 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 04:42:02.607333 systemd[1]: kubelet.service: Consumed 110ms CPU time, 105.2M memory peak. Jul 15 04:42:04.603464 sshd[2262]: Accepted publickey for core from 10.200.16.10 port 60998 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:42:04.604568 sshd-session[2262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:04.608204 systemd-logind[1891]: New session 3 of user core. Jul 15 04:42:04.616156 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 04:42:04.844720 waagent[2141]: 2025-07-15T04:42:04.844635Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 15 04:42:04.845050 waagent[2141]: 2025-07-15T04:42:04.844979Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jul 15 04:42:04.845643 waagent[2141]: 2025-07-15T04:42:04.845605Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 15 04:42:04.845930 waagent[2141]: 2025-07-15T04:42:04.845870Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 15 04:42:04.846671 waagent[2141]: 2025-07-15T04:42:04.846102Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 15 04:42:04.846671 waagent[2141]: 2025-07-15T04:42:04.846172Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 15 04:42:04.846671 waagent[2141]: 2025-07-15T04:42:04.846331Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jul 15 04:42:04.846671 waagent[2141]: 2025-07-15T04:42:04.846458Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 15 04:42:04.846671 waagent[2141]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 15 04:42:04.846671 waagent[2141]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 15 04:42:04.846671 waagent[2141]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 15 04:42:04.846671 waagent[2141]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 15 04:42:04.846671 waagent[2141]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 15 04:42:04.846671 waagent[2141]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 15 04:42:04.846976 waagent[2141]: 2025-07-15T04:42:04.846921Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 15 04:42:04.847168 waagent[2141]: 2025-07-15T04:42:04.847131Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 15 04:42:04.847211 waagent[2141]: 2025-07-15T04:42:04.847172Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 15 04:42:04.847426 waagent[2141]: 2025-07-15T04:42:04.847391Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 15 04:42:04.847473 waagent[2141]: 2025-07-15T04:42:04.847433Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jul 15 04:42:04.847838 waagent[2141]: 2025-07-15T04:42:04.847807Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 15 04:42:04.848004 waagent[2141]: 2025-07-15T04:42:04.847975Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 15 04:42:04.848197 waagent[2141]: 2025-07-15T04:42:04.848164Z INFO EnvHandler ExtHandler Configure routes Jul 15 04:42:04.848742 waagent[2141]: 2025-07-15T04:42:04.848715Z INFO EnvHandler ExtHandler Gateway:None Jul 15 04:42:04.848957 waagent[2141]: 2025-07-15T04:42:04.848934Z INFO EnvHandler ExtHandler Routes:None Jul 15 04:42:04.857205 waagent[2141]: 2025-07-15T04:42:04.855971Z INFO ExtHandler ExtHandler Jul 15 04:42:04.857205 waagent[2141]: 2025-07-15T04:42:04.856027Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: b244107f-86dc-4274-84e4-ca6b275fc4e4 correlation a63cd995-c532-4163-9232-88dc5f499c6c created: 2025-07-15T04:40:12.807152Z] Jul 15 04:42:04.857205 waagent[2141]: 2025-07-15T04:42:04.856264Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 15 04:42:04.857205 waagent[2141]: 2025-07-15T04:42:04.856662Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jul 15 04:42:04.879170 waagent[2141]: 2025-07-15T04:42:04.879130Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jul 15 04:42:04.879170 waagent[2141]: Try `iptables -h' or 'iptables --help' for more information.) 
Jul 15 04:42:04.879448 waagent[2141]: 2025-07-15T04:42:04.879418Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: D0B2991C-3946-4AF1-88EC-10077150124E;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jul 15 04:42:04.902404 waagent[2141]: 2025-07-15T04:42:04.902088Z INFO MonitorHandler ExtHandler Network interfaces: Jul 15 04:42:04.902404 waagent[2141]: Executing ['ip', '-a', '-o', 'link']: Jul 15 04:42:04.902404 waagent[2141]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 15 04:42:04.902404 waagent[2141]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c6:88:8a brd ff:ff:ff:ff:ff:ff Jul 15 04:42:04.902404 waagent[2141]: 3: enP59406s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c6:88:8a brd ff:ff:ff:ff:ff:ff\ altname enP59406p0s2 Jul 15 04:42:04.902404 waagent[2141]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 15 04:42:04.902404 waagent[2141]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 15 04:42:04.902404 waagent[2141]: 2: eth0 inet 10.200.20.23/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 15 04:42:04.902404 waagent[2141]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 15 04:42:04.902404 waagent[2141]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 15 04:42:04.902404 waagent[2141]: 2: eth0 inet6 fe80::20d:3aff:fec6:888a/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 15 04:42:04.902404 waagent[2141]: 3: enP59406s1 inet6 fe80::20d:3aff:fec6:888a/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 15 04:42:04.942711 waagent[2141]: 2025-07-15T04:42:04.942669Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jul 15 04:42:04.942711 waagent[2141]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 15 04:42:04.942711 waagent[2141]: pkts bytes target prot opt in out source destination Jul 15 04:42:04.942711 waagent[2141]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 15 04:42:04.942711 waagent[2141]: pkts bytes target prot opt in out source destination Jul 15 04:42:04.942711 waagent[2141]: Chain OUTPUT (policy ACCEPT 3 packets, 534 bytes) Jul 15 04:42:04.942711 waagent[2141]: pkts bytes target prot opt in out source destination Jul 15 04:42:04.942711 waagent[2141]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 15 04:42:04.942711 waagent[2141]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 15 04:42:04.942711 waagent[2141]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 15 04:42:04.945311 waagent[2141]: 2025-07-15T04:42:04.945280Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 15 04:42:04.945311 waagent[2141]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 15 04:42:04.945311 waagent[2141]: pkts bytes target prot opt in out source destination Jul 15 04:42:04.945311 waagent[2141]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 15 04:42:04.945311 waagent[2141]: pkts bytes target prot opt in out source destination Jul 15 04:42:04.945311 waagent[2141]: Chain OUTPUT (policy ACCEPT 3 packets, 534 bytes) Jul 15 04:42:04.945311 waagent[2141]: pkts bytes target prot opt in out source destination Jul 15 
04:42:04.945311 waagent[2141]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 15 04:42:04.945311 waagent[2141]: 6 520 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 15 04:42:04.945311 waagent[2141]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 15 04:42:04.945758 waagent[2141]: 2025-07-15T04:42:04.945734Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 15 04:42:05.012089 systemd[1]: Started sshd@1-10.200.20.23:22-10.200.16.10:36168.service - OpenSSH per-connection server daemon (10.200.16.10:36168). Jul 15 04:42:05.470830 sshd[2312]: Accepted publickey for core from 10.200.16.10 port 36168 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:42:05.471917 sshd-session[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:05.475267 systemd-logind[1891]: New session 4 of user core. Jul 15 04:42:05.484982 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 04:42:05.813023 sshd[2315]: Connection closed by 10.200.16.10 port 36168 Jul 15 04:42:05.813536 sshd-session[2312]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:05.816687 systemd[1]: sshd@1-10.200.20.23:22-10.200.16.10:36168.service: Deactivated successfully. Jul 15 04:42:05.818076 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 04:42:05.818936 systemd-logind[1891]: Session 4 logged out. Waiting for processes to exit. Jul 15 04:42:05.819996 systemd-logind[1891]: Removed session 4. Jul 15 04:42:05.894280 systemd[1]: Started sshd@2-10.200.20.23:22-10.200.16.10:36176.service - OpenSSH per-connection server daemon (10.200.16.10:36176). Jul 15 04:42:06.351005 sshd[2321]: Accepted publickey for core from 10.200.16.10 port 36176 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:42:06.353219 sshd-session[2321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:06.356611 systemd-logind[1891]: New session 5 of user core. Jul 15 04:42:06.367125 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 15 04:42:06.679947 sshd[2324]: Connection closed by 10.200.16.10 port 36176 Jul 15 04:42:06.679306 sshd-session[2321]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:06.682515 systemd[1]: sshd@2-10.200.20.23:22-10.200.16.10:36176.service: Deactivated successfully. Jul 15 04:42:06.683832 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 04:42:06.684430 systemd-logind[1891]: Session 5 logged out. Waiting for processes to exit. Jul 15 04:42:06.685623 systemd-logind[1891]: Removed session 5. Jul 15 04:42:06.764179 systemd[1]: Started sshd@3-10.200.20.23:22-10.200.16.10:36192.service - OpenSSH per-connection server daemon (10.200.16.10:36192). Jul 15 04:42:07.220928 sshd[2330]: Accepted publickey for core from 10.200.16.10 port 36192 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:42:07.221967 sshd-session[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:07.225554 systemd-logind[1891]: New session 6 of user core. Jul 15 04:42:07.235979 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 15 04:42:07.563226 sshd[2333]: Connection closed by 10.200.16.10 port 36192 Jul 15 04:42:07.563679 sshd-session[2330]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:07.566676 systemd[1]: sshd@3-10.200.20.23:22-10.200.16.10:36192.service: Deactivated successfully. 
Jul 15 04:42:07.568343 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 04:42:07.569008 systemd-logind[1891]: Session 6 logged out. Waiting for processes to exit. Jul 15 04:42:07.570228 systemd-logind[1891]: Removed session 6. Jul 15 04:42:07.644264 systemd[1]: Started sshd@4-10.200.20.23:22-10.200.16.10:36196.service - OpenSSH per-connection server daemon (10.200.16.10:36196). Jul 15 04:42:08.101402 sshd[2339]: Accepted publickey for core from 10.200.16.10 port 36196 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:42:08.102493 sshd-session[2339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:08.106075 systemd-logind[1891]: New session 7 of user core. Jul 15 04:42:08.113990 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 15 04:42:08.483240 chronyd[1866]: Selected source PHC0 Jul 15 04:42:08.830971 sudo[2343]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 04:42:08.831192 sudo[2343]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 04:42:08.844169 sudo[2343]: pam_unix(sudo:session): session closed for user root Jul 15 04:42:08.919617 sshd[2342]: Connection closed by 10.200.16.10 port 36196 Jul 15 04:42:08.920277 sshd-session[2339]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:08.923683 systemd[1]: sshd@4-10.200.20.23:22-10.200.16.10:36196.service: Deactivated successfully. Jul 15 04:42:08.925301 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 04:42:08.925895 systemd-logind[1891]: Session 7 logged out. Waiting for processes to exit. Jul 15 04:42:08.927022 systemd-logind[1891]: Removed session 7. Jul 15 04:42:09.010059 systemd[1]: Started sshd@5-10.200.20.23:22-10.200.16.10:36210.service - OpenSSH per-connection server daemon (10.200.16.10:36210). Jul 15 04:42:09.491739 sshd[2349]: Accepted publickey for core from 10.200.16.10 port 36210 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:42:09.492929 sshd-session[2349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:09.496620 systemd-logind[1891]: New session 8 of user core. Jul 15 04:42:09.503025 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 15 04:42:09.760876 sudo[2354]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 04:42:09.761308 sudo[2354]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 04:42:09.767271 sudo[2354]: pam_unix(sudo:session): session closed for user root Jul 15 04:42:09.770741 sudo[2353]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 15 04:42:09.770952 sudo[2353]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 04:42:09.777789 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 04:42:09.803280 augenrules[2376]: No rules Jul 15 04:42:09.804478 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 04:42:09.804747 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jul 15 04:42:09.805520 sudo[2353]: pam_unix(sudo:session): session closed for user root Jul 15 04:42:09.883707 sshd[2352]: Connection closed by 10.200.16.10 port 36210 Jul 15 04:42:09.883622 sshd-session[2349]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:09.886346 systemd[1]: sshd@5-10.200.20.23:22-10.200.16.10:36210.service: Deactivated successfully. Jul 15 04:42:09.888013 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 04:42:09.888785 systemd-logind[1891]: Session 8 logged out. Waiting for processes to exit. Jul 15 04:42:09.890394 systemd-logind[1891]: Removed session 8. Jul 15 04:42:09.971647 systemd[1]: Started sshd@6-10.200.20.23:22-10.200.16.10:36222.service - OpenSSH per-connection server daemon (10.200.16.10:36222). Jul 15 04:42:10.467123 sshd[2385]: Accepted publickey for core from 10.200.16.10 port 36222 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:42:10.468166 sshd-session[2385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:10.471945 systemd-logind[1891]: New session 9 of user core. Jul 15 04:42:10.477974 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 15 04:42:10.742722 sudo[2389]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 04:42:10.742947 sudo[2389]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 04:42:11.818484 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 15 04:42:11.827114 (dockerd)[2407]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 04:42:12.494742 dockerd[2407]: time="2025-07-15T04:42:12.494490222Z" level=info msg="Starting up" Jul 15 04:42:12.495308 dockerd[2407]: time="2025-07-15T04:42:12.495287747Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 04:42:12.503147 dockerd[2407]: time="2025-07-15T04:42:12.503114869Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 15 04:42:12.656445 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 15 04:42:12.657952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:42:13.398582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:42:13.401102 (kubelet)[2435]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 04:42:13.545569 kubelet[2435]: E0715 04:42:13.545512 2435 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 04:42:13.547683 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 04:42:13.547913 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 04:42:13.548459 systemd[1]: kubelet.service: Consumed 100ms CPU time, 107.2M memory peak. Jul 15 04:42:16.907645 dockerd[2407]: time="2025-07-15T04:42:16.907603696Z" level=info msg="Loading containers: start." 
Jul 15 04:42:16.946878 kernel: Initializing XFRM netlink socket Jul 15 04:42:17.264917 systemd-networkd[1701]: docker0: Link UP Jul 15 04:42:17.285969 dockerd[2407]: time="2025-07-15T04:42:17.285928337Z" level=info msg="Loading containers: done." Jul 15 04:42:17.311209 dockerd[2407]: time="2025-07-15T04:42:17.311132872Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 04:42:17.311603 dockerd[2407]: time="2025-07-15T04:42:17.311381286Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 15 04:42:17.311603 dockerd[2407]: time="2025-07-15T04:42:17.311473824Z" level=info msg="Initializing buildkit" Jul 15 04:42:17.366113 dockerd[2407]: time="2025-07-15T04:42:17.366079120Z" level=info msg="Completed buildkit initialization" Jul 15 04:42:17.371776 dockerd[2407]: time="2025-07-15T04:42:17.371732727Z" level=info msg="Daemon has completed initialization" Jul 15 04:42:17.371875 dockerd[2407]: time="2025-07-15T04:42:17.371789857Z" level=info msg="API listen on /run/docker.sock" Jul 15 04:42:17.372095 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 04:42:18.162896 containerd[1922]: time="2025-07-15T04:42:18.162851531Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 15 04:42:19.139500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount474831527.mount: Deactivated successfully. Jul 15 04:42:19.150035 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jul 15 04:42:22.740837 containerd[1922]: time="2025-07-15T04:42:22.740783760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:22.745761 containerd[1922]: time="2025-07-15T04:42:22.745732605Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328194" Jul 15 04:42:22.751067 containerd[1922]: time="2025-07-15T04:42:22.751028982Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:22.756801 containerd[1922]: time="2025-07-15T04:42:22.756758295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:22.757408 containerd[1922]: time="2025-07-15T04:42:22.757287186Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 4.594393174s" Jul 15 04:42:22.757408 containerd[1922]: time="2025-07-15T04:42:22.757316715Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 15 04:42:22.757842 containerd[1922]: time="2025-07-15T04:42:22.757825493Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 15 04:42:23.656371 systemd[1]: kubelet.service: Scheduled 
restart job, restart counter is at 3. Jul 15 04:42:23.657916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:42:23.760602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:42:23.765111 (kubelet)[2696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 04:42:23.890167 kubelet[2696]: E0715 04:42:23.890099 2696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 04:42:23.892477 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 04:42:23.892780 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 04:42:23.893378 systemd[1]: kubelet.service: Consumed 106ms CPU time, 107.9M memory peak. Jul 15 04:42:24.649896 containerd[1922]: time="2025-07-15T04:42:24.649310256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:24.655141 containerd[1922]: time="2025-07-15T04:42:24.655110235Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529228" Jul 15 04:42:24.661550 containerd[1922]: time="2025-07-15T04:42:24.661503627Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:24.670811 containerd[1922]: time="2025-07-15T04:42:24.670776507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:24.671358 containerd[1922]: time="2025-07-15T04:42:24.671335085Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.913366652s" Jul 15 04:42:24.671432 containerd[1922]: time="2025-07-15T04:42:24.671420802Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 15 04:42:24.672066 containerd[1922]: time="2025-07-15T04:42:24.672033343Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 15 04:42:25.793611 containerd[1922]: time="2025-07-15T04:42:25.793516095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:25.800316 containerd[1922]: time="2025-07-15T04:42:25.800283513Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484141" Jul 15 04:42:25.803669 containerd[1922]: time="2025-07-15T04:42:25.803630680Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:25.814061 containerd[1922]: time="2025-07-15T04:42:25.813483227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:25.814061 containerd[1922]: time="2025-07-15T04:42:25.813936817Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.141875641s" Jul 15 04:42:25.814061 containerd[1922]: time="2025-07-15T04:42:25.813967378Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 15 04:42:25.814678 containerd[1922]: time="2025-07-15T04:42:25.814568327Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 15 04:42:26.824669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2674285594.mount: Deactivated successfully. Jul 15 04:42:27.102651 containerd[1922]: time="2025-07-15T04:42:27.102460026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:27.110735 containerd[1922]: time="2025-07-15T04:42:27.110702177Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378406" Jul 15 04:42:27.114119 containerd[1922]: time="2025-07-15T04:42:27.114071737Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:27.120849 containerd[1922]: time="2025-07-15T04:42:27.120807200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:27.121256 containerd[1922]: time="2025-07-15T04:42:27.121144672Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.306550904s" Jul 15 04:42:27.121256 containerd[1922]: time="2025-07-15T04:42:27.121172002Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 15 04:42:27.121595 containerd[1922]: time="2025-07-15T04:42:27.121581645Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 15 04:42:28.578427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3225860072.mount: Deactivated successfully. Jul 15 04:42:30.645983 update_engine[1893]: I20250715 04:42:30.645583 1893 update_attempter.cc:509] Updating boot flags... Jul 15 04:42:33.906406 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 15 04:42:33.908136 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 15 04:42:34.012534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:42:34.014915 (kubelet)[2875]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 04:42:34.041440 kubelet[2875]: E0715 04:42:34.041392 2875 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 04:42:34.043459 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 04:42:34.043657 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 04:42:34.044137 systemd[1]: kubelet.service: Consumed 102ms CPU time, 106.7M memory peak. Jul 15 04:42:42.349243 containerd[1922]: time="2025-07-15T04:42:42.349182327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:42.352598 containerd[1922]: time="2025-07-15T04:42:42.352414795Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 15 04:42:42.355993 containerd[1922]: time="2025-07-15T04:42:42.355968380Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:42.361515 containerd[1922]: time="2025-07-15T04:42:42.361478928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:42.362272 containerd[1922]: time="2025-07-15T04:42:42.362247254Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 15.240569221s" Jul 15 04:42:42.362272 containerd[1922]: time="2025-07-15T04:42:42.362273895Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 15 04:42:42.362703 containerd[1922]: time="2025-07-15T04:42:42.362682110Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 04:42:43.099456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3181949622.mount: Deactivated successfully. 
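The "Pulled image ... in <duration>" entries above give enough to compare effective pull throughput: kube-apiserver (26,324,994 bytes in about 4.59 s) came in at roughly 5.5 MiB/s, while coredns (16,948,420 bytes in about 15.24 s) managed only around 1.1 MiB/s, so the slow coredns pull reflects transfer rate rather than image size. A quick sketch of that arithmetic, using only figures copied from the log:

# Effective throughput from the "Pulled image ... size ... in <duration>" lines.
# Sizes (bytes) and durations (seconds) are copied from the log entries above;
# nothing here contacts a registry.
pulls = {
    "registry.k8s.io/kube-apiserver:v1.32.6": (26_324_994, 4.594393174),
    "registry.k8s.io/coredns/coredns:v1.11.3": (16_948_420, 15.240569221),
}

for image, (size_bytes, seconds) in pulls.items():
    mib_per_s = size_bytes / seconds / (1024 * 1024)
    print(f"{image}: {mib_per_s:.2f} MiB/s")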
Jul 15 04:42:43.137707 containerd[1922]: time="2025-07-15T04:42:43.137640175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 04:42:43.141334 containerd[1922]: time="2025-07-15T04:42:43.141307828Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 15 04:42:43.145684 containerd[1922]: time="2025-07-15T04:42:43.145643179Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 04:42:43.151362 containerd[1922]: time="2025-07-15T04:42:43.151325022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 04:42:43.151916 containerd[1922]: time="2025-07-15T04:42:43.151625097Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 788.918242ms" Jul 15 04:42:43.151916 containerd[1922]: time="2025-07-15T04:42:43.151657514Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 15 04:42:43.152169 containerd[1922]: time="2025-07-15T04:42:43.152125284Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 15 04:42:44.156400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 15 04:42:44.158544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:42:44.188232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2176832426.mount: Deactivated successfully. Jul 15 04:42:44.261823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:42:44.264330 (kubelet)[2917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 04:42:44.288559 kubelet[2917]: E0715 04:42:44.288503 2917 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 04:42:44.290572 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 04:42:44.290780 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 04:42:44.291326 systemd[1]: kubelet.service: Consumed 105ms CPU time, 106.9M memory peak. 
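The cadence of the crash loop is set by systemd's restart handling rather than by anything the kubelet does: the "Scheduled restart job" entries for restart counters 2 through 5 are spaced roughly 10 to 11 seconds apart. A small sketch that takes those four timestamps as copied from the log and prints the gaps:

# Gaps between the "kubelet.service: Scheduled restart job" entries.
# Timestamps are copied from the log; all fall on the same day, so only the
# time-of-day portion is parsed.
from datetime import datetime

stamps = [
    "04:42:12.656445",  # restart counter is at 2
    "04:42:23.656371",  # restart counter is at 3
    "04:42:33.906406",  # restart counter is at 4
    "04:42:44.156400",  # restart counter is at 5
]

times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
for earlier, later in zip(times, times[1:]):
    print(f"{(later - earlier).total_seconds():.3f} s between scheduled restarts")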
Jul 15 04:42:48.898823 containerd[1922]: time="2025-07-15T04:42:48.898760940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:48.902155 containerd[1922]: time="2025-07-15T04:42:48.901964019Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" Jul 15 04:42:48.905413 containerd[1922]: time="2025-07-15T04:42:48.905389258Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:48.911348 containerd[1922]: time="2025-07-15T04:42:48.911309837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:48.912272 containerd[1922]: time="2025-07-15T04:42:48.911818504Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 5.759667435s" Jul 15 04:42:48.912272 containerd[1922]: time="2025-07-15T04:42:48.911844105Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 15 04:42:51.627538 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:42:51.627998 systemd[1]: kubelet.service: Consumed 105ms CPU time, 106.9M memory peak. Jul 15 04:42:51.629583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:42:51.651134 systemd[1]: Reload requested from client PID 3000 ('systemctl') (unit session-9.scope)... Jul 15 04:42:51.651234 systemd[1]: Reloading... Jul 15 04:42:51.741895 zram_generator::config[3043]: No configuration found. Jul 15 04:42:51.818601 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:42:51.900962 systemd[1]: Reloading finished in 249 ms. Jul 15 04:42:51.932204 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 15 04:42:51.932401 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 15 04:42:51.933894 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:42:51.933932 systemd[1]: kubelet.service: Consumed 69ms CPU time, 95M memory peak. Jul 15 04:42:51.934989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:42:58.589949 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:42:58.597066 (kubelet)[3113]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 04:42:58.622588 kubelet[3113]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 04:42:58.622588 kubelet[3113]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jul 15 04:42:58.622588 kubelet[3113]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 04:42:58.622588 kubelet[3113]: I0715 04:42:58.622468 3113 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 04:42:59.129834 kubelet[3113]: I0715 04:42:59.129791 3113 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 15 04:42:59.129834 kubelet[3113]: I0715 04:42:59.129825 3113 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 04:42:59.130082 kubelet[3113]: I0715 04:42:59.130063 3113 server.go:954] "Client rotation is on, will bootstrap in background" Jul 15 04:42:59.144885 kubelet[3113]: E0715 04:42:59.144835 3113 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:42:59.148199 kubelet[3113]: I0715 04:42:59.148061 3113 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 04:42:59.152602 kubelet[3113]: I0715 04:42:59.152485 3113 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 04:42:59.155013 kubelet[3113]: I0715 04:42:59.154996 3113 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 04:42:59.155530 kubelet[3113]: I0715 04:42:59.155499 3113 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 04:42:59.155657 kubelet[3113]: I0715 04:42:59.155531 3113 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4396.0.0-n-efed024aac","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 04:42:59.155750 kubelet[3113]: I0715 04:42:59.155666 3113 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 04:42:59.155750 kubelet[3113]: I0715 04:42:59.155673 3113 container_manager_linux.go:304] "Creating device plugin manager" Jul 15 04:42:59.155807 kubelet[3113]: I0715 04:42:59.155794 3113 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:42:59.157698 kubelet[3113]: I0715 04:42:59.157681 3113 kubelet.go:446] "Attempting to sync node with API server" Jul 15 04:42:59.157844 kubelet[3113]: I0715 04:42:59.157704 3113 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 04:42:59.157844 kubelet[3113]: I0715 04:42:59.157724 3113 kubelet.go:352] "Adding apiserver pod source" Jul 15 04:42:59.157844 kubelet[3113]: I0715 04:42:59.157737 3113 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 04:42:59.160182 kubelet[3113]: W0715 04:42:59.160020 3113 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4396.0.0-n-efed024aac&limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused Jul 15 04:42:59.160318 kubelet[3113]: E0715 04:42:59.160299 3113 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4396.0.0-n-efed024aac&limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:42:59.161118 
kubelet[3113]: I0715 04:42:59.160449 3113 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 04:42:59.161118 kubelet[3113]: I0715 04:42:59.160742 3113 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 04:42:59.161118 kubelet[3113]: W0715 04:42:59.160785 3113 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 15 04:42:59.161232 kubelet[3113]: I0715 04:42:59.161224 3113 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 04:42:59.161895 kubelet[3113]: I0715 04:42:59.161250 3113 server.go:1287] "Started kubelet" Jul 15 04:42:59.164625 kubelet[3113]: W0715 04:42:59.163857 3113 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused Jul 15 04:42:59.164625 kubelet[3113]: E0715 04:42:59.163906 3113 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:42:59.164625 kubelet[3113]: I0715 04:42:59.164021 3113 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 04:42:59.164625 kubelet[3113]: I0715 04:42:59.164342 3113 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 04:42:59.164625 kubelet[3113]: I0715 04:42:59.164576 3113 server.go:479] "Adding debug handlers to kubelet server" Jul 15 04:42:59.165443 kubelet[3113]: I0715 04:42:59.165392 3113 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 04:42:59.165593 kubelet[3113]: I0715 04:42:59.165576 3113 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 04:42:59.169189 kubelet[3113]: I0715 04:42:59.169161 3113 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 04:42:59.170891 kubelet[3113]: E0715 04:42:59.170755 3113 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.23:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.23:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4396.0.0-n-efed024aac.1852531ab463b755 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4396.0.0-n-efed024aac,UID:ci-4396.0.0-n-efed024aac,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4396.0.0-n-efed024aac,},FirstTimestamp:2025-07-15 04:42:59.161233237 +0000 UTC m=+0.561968463,LastTimestamp:2025-07-15 04:42:59.161233237 +0000 UTC m=+0.561968463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4396.0.0-n-efed024aac,}" Jul 15 04:42:59.170891 kubelet[3113]: I0715 04:42:59.170886 3113 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 04:42:59.171617 kubelet[3113]: E0715 04:42:59.171413 
3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:42:59.171617 kubelet[3113]: I0715 04:42:59.171593 3113 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 04:42:59.171696 kubelet[3113]: I0715 04:42:59.171635 3113 reconciler.go:26] "Reconciler: start to sync state" Jul 15 04:42:59.171956 kubelet[3113]: W0715 04:42:59.171873 3113 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused Jul 15 04:42:59.171956 kubelet[3113]: E0715 04:42:59.171907 3113 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:42:59.171956 kubelet[3113]: E0715 04:42:59.171950 3113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4396.0.0-n-efed024aac?timeout=10s\": dial tcp 10.200.20.23:6443: connect: connection refused" interval="200ms" Jul 15 04:42:59.173387 kubelet[3113]: E0715 04:42:59.173336 3113 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 04:42:59.173989 kubelet[3113]: I0715 04:42:59.173824 3113 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 04:42:59.175133 kubelet[3113]: I0715 04:42:59.175117 3113 factory.go:221] Registration of the containerd container factory successfully Jul 15 04:42:59.175922 kubelet[3113]: I0715 04:42:59.175247 3113 factory.go:221] Registration of the systemd container factory successfully Jul 15 04:42:59.193132 kubelet[3113]: I0715 04:42:59.193117 3113 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 04:42:59.193319 kubelet[3113]: I0715 04:42:59.193293 3113 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 04:42:59.193385 kubelet[3113]: I0715 04:42:59.193373 3113 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:42:59.241366 kubelet[3113]: I0715 04:42:59.241315 3113 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 04:42:59.664670 kubelet[3113]: I0715 04:42:59.242385 3113 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 04:42:59.664670 kubelet[3113]: I0715 04:42:59.242409 3113 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 15 04:42:59.664670 kubelet[3113]: I0715 04:42:59.242717 3113 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
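From here on, every outbound call the kubelet makes (the certificate signing request, the node lease, the reflector list/watch calls) fails with "connect: connection refused" against https://10.200.20.23:6443, because the kube-apiserver it is trying to reach is itself one of the static pods this kubelet has not started yet; the lease controller's retry interval doubles from 200 ms up to 6.4 s over the following entries. A hedged sketch of an equivalent external probe of that port, assuming only the address from the log:

# Not kubelet code: a plain TCP probe of the endpoint that is refusing
# connections above. It reproduces the "dial tcp 10.200.20.23:6443: connect:
# connection refused" condition and waits for the port to open once the static
# kube-apiserver pod is running, doubling its delay the way the lease retries do.
import socket
import time

ADDR = ("10.200.20.23", 6443)  # endpoint taken from the log
delay = 0.2                    # the lease retry interval also starts at 200 ms

while True:
    try:
        with socket.create_connection(ADDR, timeout=2):
            print("port 6443 is accepting connections")
            break
    except OSError as exc:
        print(f"connect failed ({exc}); retrying in {delay:.1f} s")
        time.sleep(delay)
        delay = min(delay * 2, 6.4)  # mirrors the observed 200ms ... 6.4s ramp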
Jul 15 04:42:59.664670 kubelet[3113]: I0715 04:42:59.242728 3113 kubelet.go:2382] "Starting kubelet main sync loop" Jul 15 04:42:59.664670 kubelet[3113]: E0715 04:42:59.242760 3113 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 04:42:59.664670 kubelet[3113]: W0715 04:42:59.243823 3113 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused Jul 15 04:42:59.664670 kubelet[3113]: E0715 04:42:59.243844 3113 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:42:59.664670 kubelet[3113]: E0715 04:42:59.272183 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:42:59.664670 kubelet[3113]: E0715 04:42:59.344516 3113 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 04:42:59.664670 kubelet[3113]: E0715 04:42:59.372709 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:42:59.664670 kubelet[3113]: E0715 04:42:59.373121 3113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4396.0.0-n-efed024aac?timeout=10s\": dial tcp 10.200.20.23:6443: connect: connection refused" interval="400ms" Jul 15 04:42:59.665124 kubelet[3113]: E0715 04:42:59.473339 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:42:59.665124 kubelet[3113]: E0715 04:42:59.545566 3113 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 04:42:59.665124 kubelet[3113]: E0715 04:42:59.573854 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:42:59.667114 kubelet[3113]: I0715 04:42:59.667087 3113 policy_none.go:49] "None policy: Start" Jul 15 04:42:59.667287 kubelet[3113]: I0715 04:42:59.667221 3113 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 04:42:59.667287 kubelet[3113]: I0715 04:42:59.667237 3113 state_mem.go:35] "Initializing new in-memory state store" Jul 15 04:42:59.674465 kubelet[3113]: E0715 04:42:59.674437 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:42:59.774151 kubelet[3113]: E0715 04:42:59.774108 3113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4396.0.0-n-efed024aac?timeout=10s\": dial tcp 10.200.20.23:6443: connect: connection refused" interval="800ms" Jul 15 04:42:59.775142 kubelet[3113]: E0715 04:42:59.775115 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:42:59.875503 kubelet[3113]: E0715 04:42:59.875466 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:42:59.946806 kubelet[3113]: E0715 04:42:59.946681 3113 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 04:42:59.976060 kubelet[3113]: E0715 04:42:59.976027 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:42:59.983659 kubelet[3113]: W0715 04:42:59.983519 3113 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused Jul 15 04:42:59.983659 kubelet[3113]: E0715 04:42:59.983554 3113 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:43:00.054282 kubelet[3113]: W0715 04:43:00.054233 3113 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused Jul 15 04:43:00.054359 kubelet[3113]: E0715 04:43:00.054292 3113 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:43:00.076778 kubelet[3113]: E0715 04:43:00.076738 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:00.177768 kubelet[3113]: E0715 04:43:00.177736 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:00.247039 kubelet[3113]: W0715 04:43:00.246940 3113 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused Jul 15 04:43:00.247336 kubelet[3113]: E0715 04:43:00.247301 3113 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:43:00.278778 kubelet[3113]: E0715 04:43:00.278745 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:00.360381 kubelet[3113]: W0715 04:43:00.360284 3113 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.200.20.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4396.0.0-n-efed024aac&limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused Jul 15 04:43:00.360381 kubelet[3113]: E0715 04:43:00.360353 3113 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4396.0.0-n-efed024aac&limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:43:00.379801 kubelet[3113]: E0715 04:43:00.379769 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:00.480158 kubelet[3113]: E0715 04:43:00.480123 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:00.575105 kubelet[3113]: E0715 04:43:00.575014 3113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4396.0.0-n-efed024aac?timeout=10s\": dial tcp 10.200.20.23:6443: connect: connection refused" interval="1.6s" Jul 15 04:43:00.580511 kubelet[3113]: E0715 04:43:00.580484 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:00.681151 kubelet[3113]: E0715 04:43:00.681110 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262331 kubelet[3113]: E0715 04:43:00.747272 3113 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 04:43:04.262331 kubelet[3113]: E0715 04:43:00.781763 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262331 kubelet[3113]: E0715 04:43:00.882105 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262331 kubelet[3113]: E0715 04:43:00.982540 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262331 kubelet[3113]: E0715 04:43:01.082909 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262331 kubelet[3113]: E0715 04:43:01.183874 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262331 kubelet[3113]: E0715 04:43:01.265651 3113 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:43:04.262331 kubelet[3113]: E0715 04:43:01.283962 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262331 kubelet[3113]: E0715 04:43:01.384352 3113 kubelet_node_status.go:466] "Error getting the current 
node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262331 kubelet[3113]: E0715 04:43:01.484774 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262331 kubelet[3113]: E0715 04:43:01.585267 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262773 kubelet[3113]: E0715 04:43:01.685908 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262773 kubelet[3113]: E0715 04:43:01.786364 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262773 kubelet[3113]: E0715 04:43:01.886983 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262773 kubelet[3113]: E0715 04:43:01.987507 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262773 kubelet[3113]: E0715 04:43:02.088058 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262773 kubelet[3113]: E0715 04:43:02.176252 3113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4396.0.0-n-efed024aac?timeout=10s\": dial tcp 10.200.20.23:6443: connect: connection refused" interval="3.2s" Jul 15 04:43:04.262773 kubelet[3113]: E0715 04:43:02.188338 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262773 kubelet[3113]: E0715 04:43:02.288565 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262773 kubelet[3113]: E0715 04:43:02.347838 3113 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 04:43:04.262773 kubelet[3113]: E0715 04:43:02.389128 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262773 kubelet[3113]: E0715 04:43:02.489513 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262773 kubelet[3113]: E0715 04:43:02.590079 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262975 kubelet[3113]: E0715 04:43:02.690794 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262975 kubelet[3113]: E0715 04:43:02.791219 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262975 kubelet[3113]: W0715 04:43:02.798797 3113 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused Jul 15 
04:43:04.262975 kubelet[3113]: E0715 04:43:02.798823 3113 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:43:04.262975 kubelet[3113]: E0715 04:43:02.891277 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262975 kubelet[3113]: W0715 04:43:02.937011 3113 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused Jul 15 04:43:04.262975 kubelet[3113]: E0715 04:43:02.937055 3113 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:43:04.262975 kubelet[3113]: E0715 04:43:02.991497 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.262975 kubelet[3113]: E0715 04:43:03.091965 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.263103 kubelet[3113]: W0715 04:43:03.131485 3113 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused Jul 15 04:43:04.263103 kubelet[3113]: E0715 04:43:03.131525 3113 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:43:04.263103 kubelet[3113]: E0715 04:43:03.192569 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.263103 kubelet[3113]: E0715 04:43:03.292737 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.263103 kubelet[3113]: W0715 04:43:03.368292 3113 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4396.0.0-n-efed024aac&limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused Jul 15 04:43:04.263103 kubelet[3113]: E0715 04:43:03.368330 3113 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4396.0.0-n-efed024aac&limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:43:04.263103 kubelet[3113]: E0715 04:43:03.392937 3113 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.263103 kubelet[3113]: E0715 04:43:03.493310 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.263240 kubelet[3113]: E0715 04:43:03.593829 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.263240 kubelet[3113]: E0715 04:43:03.694963 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.263240 kubelet[3113]: E0715 04:43:03.795483 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.263240 kubelet[3113]: E0715 04:43:03.896042 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.263240 kubelet[3113]: E0715 04:43:03.996633 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.263240 kubelet[3113]: E0715 04:43:04.097117 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.263240 kubelet[3113]: E0715 04:43:04.198231 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.270408 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 04:43:04.281822 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 04:43:04.285409 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
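With the sync loop about to run pods, the kubelet has systemd create the QoS-tier slices (kubepods.slice, kubepods-burstable.slice, kubepods-besteffort.slice) that pod cgroups will live under. Given the NodeConfig earlier in the log (CgroupDriver systemd, CgroupRoot "/", CgroupVersion 2), these should appear under /sys/fs/cgroup; a quick check sketch, assuming the unified cgroup v2 hierarchy is mounted there:

# Check that the QoS slices created above exist in the cgroup hierarchy.
# Paths assume cgroup v2 mounted at /sys/fs/cgroup with the systemd driver and
# CgroupRoot "/", matching the container manager NodeConfig printed earlier.
from pathlib import Path

root = Path("/sys/fs/cgroup/kubepods.slice")
for slice_dir in (root,
                  root / "kubepods-burstable.slice",
                  root / "kubepods-besteffort.slice"):
    state = "present" if slice_dir.is_dir() else "missing"
    print(f"{slice_dir}: {state}")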
Jul 15 04:43:04.286592 kubelet[3113]: E0715 04:43:04.286498 3113 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.23:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.23:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4396.0.0-n-efed024aac.1852531ab463b755 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4396.0.0-n-efed024aac,UID:ci-4396.0.0-n-efed024aac,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4396.0.0-n-efed024aac,},FirstTimestamp:2025-07-15 04:42:59.161233237 +0000 UTC m=+0.561968463,LastTimestamp:2025-07-15 04:42:59.161233237 +0000 UTC m=+0.561968463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4396.0.0-n-efed024aac,}" Jul 15 04:43:04.295576 kubelet[3113]: I0715 04:43:04.295488 3113 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 04:43:04.295685 kubelet[3113]: I0715 04:43:04.295669 3113 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 04:43:04.295715 kubelet[3113]: I0715 04:43:04.295683 3113 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 04:43:04.296008 kubelet[3113]: I0715 04:43:04.295915 3113 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 04:43:04.297033 kubelet[3113]: E0715 04:43:04.297006 3113 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 04:43:04.297175 kubelet[3113]: E0715 04:43:04.297132 3113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:04.398714 kubelet[3113]: I0715 04:43:04.398613 3113 kubelet_node_status.go:75] "Attempting to register node" node="ci-4396.0.0-n-efed024aac" Jul 15 04:43:04.398997 kubelet[3113]: E0715 04:43:04.398977 3113 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.23:6443/api/v1/nodes\": dial tcp 10.200.20.23:6443: connect: connection refused" node="ci-4396.0.0-n-efed024aac" Jul 15 04:43:04.601523 kubelet[3113]: I0715 04:43:04.601417 3113 kubelet_node_status.go:75] "Attempting to register node" node="ci-4396.0.0-n-efed024aac" Jul 15 04:43:04.602012 kubelet[3113]: E0715 04:43:04.601987 3113 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.23:6443/api/v1/nodes\": dial tcp 10.200.20.23:6443: connect: connection refused" node="ci-4396.0.0-n-efed024aac" Jul 15 04:43:05.003500 kubelet[3113]: I0715 04:43:05.003476 3113 kubelet_node_status.go:75] "Attempting to register node" node="ci-4396.0.0-n-efed024aac" Jul 15 04:43:05.003782 kubelet[3113]: E0715 04:43:05.003762 3113 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.23:6443/api/v1/nodes\": dial tcp 10.200.20.23:6443: connect: connection refused" node="ci-4396.0.0-n-efed024aac" Jul 15 04:43:05.326661 kubelet[3113]: E0715 04:43:05.326545 3113 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create 
certificate signing request: Post \"https://10.200.20.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:43:05.377320 kubelet[3113]: E0715 04:43:05.377272 3113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4396.0.0-n-efed024aac?timeout=10s\": dial tcp 10.200.20.23:6443: connect: connection refused" interval="6.4s" Jul 15 04:43:05.557691 systemd[1]: Created slice kubepods-burstable-podcaaa0440d0815a2e63a0e5fc2adaead5.slice - libcontainer container kubepods-burstable-podcaaa0440d0815a2e63a0e5fc2adaead5.slice. Jul 15 04:43:05.564838 kubelet[3113]: E0715 04:43:05.564367 3113 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4396.0.0-n-efed024aac\" not found" node="ci-4396.0.0-n-efed024aac" Jul 15 04:43:05.566701 systemd[1]: Created slice kubepods-burstable-pod481885ea80b2ba5f485a4a8b0303d3da.slice - libcontainer container kubepods-burstable-pod481885ea80b2ba5f485a4a8b0303d3da.slice. Jul 15 04:43:05.568472 kubelet[3113]: E0715 04:43:05.568449 3113 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4396.0.0-n-efed024aac\" not found" node="ci-4396.0.0-n-efed024aac" Jul 15 04:43:05.570468 systemd[1]: Created slice kubepods-burstable-podfac573add4257e49b202e6754777d405.slice - libcontainer container kubepods-burstable-podfac573add4257e49b202e6754777d405.slice. Jul 15 04:43:05.572008 kubelet[3113]: E0715 04:43:05.571883 3113 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4396.0.0-n-efed024aac\" not found" node="ci-4396.0.0-n-efed024aac" Jul 15 04:43:05.605335 kubelet[3113]: I0715 04:43:05.605248 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/caaa0440d0815a2e63a0e5fc2adaead5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4396.0.0-n-efed024aac\" (UID: \"caaa0440d0815a2e63a0e5fc2adaead5\") " pod="kube-system/kube-apiserver-ci-4396.0.0-n-efed024aac" Jul 15 04:43:05.605583 kubelet[3113]: I0715 04:43:05.605563 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/481885ea80b2ba5f485a4a8b0303d3da-flexvolume-dir\") pod \"kube-controller-manager-ci-4396.0.0-n-efed024aac\" (UID: \"481885ea80b2ba5f485a4a8b0303d3da\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-efed024aac" Jul 15 04:43:05.605744 kubelet[3113]: I0715 04:43:05.605687 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/481885ea80b2ba5f485a4a8b0303d3da-k8s-certs\") pod \"kube-controller-manager-ci-4396.0.0-n-efed024aac\" (UID: \"481885ea80b2ba5f485a4a8b0303d3da\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-efed024aac" Jul 15 04:43:05.605744 kubelet[3113]: I0715 04:43:05.605705 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/caaa0440d0815a2e63a0e5fc2adaead5-ca-certs\") pod \"kube-apiserver-ci-4396.0.0-n-efed024aac\" (UID: \"caaa0440d0815a2e63a0e5fc2adaead5\") " 
pod="kube-system/kube-apiserver-ci-4396.0.0-n-efed024aac" Jul 15 04:43:05.605744 kubelet[3113]: I0715 04:43:05.605717 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/caaa0440d0815a2e63a0e5fc2adaead5-k8s-certs\") pod \"kube-apiserver-ci-4396.0.0-n-efed024aac\" (UID: \"caaa0440d0815a2e63a0e5fc2adaead5\") " pod="kube-system/kube-apiserver-ci-4396.0.0-n-efed024aac" Jul 15 04:43:05.605744 kubelet[3113]: I0715 04:43:05.605728 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/481885ea80b2ba5f485a4a8b0303d3da-ca-certs\") pod \"kube-controller-manager-ci-4396.0.0-n-efed024aac\" (UID: \"481885ea80b2ba5f485a4a8b0303d3da\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-efed024aac" Jul 15 04:43:05.605937 kubelet[3113]: I0715 04:43:05.605894 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/481885ea80b2ba5f485a4a8b0303d3da-kubeconfig\") pod \"kube-controller-manager-ci-4396.0.0-n-efed024aac\" (UID: \"481885ea80b2ba5f485a4a8b0303d3da\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-efed024aac" Jul 15 04:43:05.605937 kubelet[3113]: I0715 04:43:05.605914 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/481885ea80b2ba5f485a4a8b0303d3da-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4396.0.0-n-efed024aac\" (UID: \"481885ea80b2ba5f485a4a8b0303d3da\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-efed024aac" Jul 15 04:43:05.605937 kubelet[3113]: I0715 04:43:05.605925 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fac573add4257e49b202e6754777d405-kubeconfig\") pod \"kube-scheduler-ci-4396.0.0-n-efed024aac\" (UID: \"fac573add4257e49b202e6754777d405\") " pod="kube-system/kube-scheduler-ci-4396.0.0-n-efed024aac" Jul 15 04:43:05.805774 kubelet[3113]: I0715 04:43:05.805720 3113 kubelet_node_status.go:75] "Attempting to register node" node="ci-4396.0.0-n-efed024aac" Jul 15 04:43:05.806074 kubelet[3113]: E0715 04:43:05.806054 3113 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.23:6443/api/v1/nodes\": dial tcp 10.200.20.23:6443: connect: connection refused" node="ci-4396.0.0-n-efed024aac" Jul 15 04:43:05.866088 containerd[1922]: time="2025-07-15T04:43:05.865956680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4396.0.0-n-efed024aac,Uid:caaa0440d0815a2e63a0e5fc2adaead5,Namespace:kube-system,Attempt:0,}" Jul 15 04:43:05.869507 containerd[1922]: time="2025-07-15T04:43:05.869435213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4396.0.0-n-efed024aac,Uid:481885ea80b2ba5f485a4a8b0303d3da,Namespace:kube-system,Attempt:0,}" Jul 15 04:43:05.874880 containerd[1922]: time="2025-07-15T04:43:05.874665002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4396.0.0-n-efed024aac,Uid:fac573add4257e49b202e6754777d405,Namespace:kube-system,Attempt:0,}" Jul 15 04:43:06.016059 containerd[1922]: time="2025-07-15T04:43:06.015929522Z" level=info msg="connecting to shim 
c2e6d5d172d35e371e4d5a1a98df19970ce60f6e0e3371b4369267c9ac538546" address="unix:///run/containerd/s/d1244bf287901934bd2c824fafd43ba6f403e3f16631147254558bcc150f5f25" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:43:06.034018 systemd[1]: Started cri-containerd-c2e6d5d172d35e371e4d5a1a98df19970ce60f6e0e3371b4369267c9ac538546.scope - libcontainer container c2e6d5d172d35e371e4d5a1a98df19970ce60f6e0e3371b4369267c9ac538546. Jul 15 04:43:06.048149 containerd[1922]: time="2025-07-15T04:43:06.047764515Z" level=info msg="connecting to shim 2b5e90fc7a585d58bb32d5a1743a2aa397b07a5c4aa7dacbaa99fad2863d4768" address="unix:///run/containerd/s/f6815e84671a3be5f24cbe2ce27f571f735b6f5c2032042eff7ae426c3ff3040" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:43:06.048858 containerd[1922]: time="2025-07-15T04:43:06.048811361Z" level=info msg="connecting to shim bcab83aeea3620d987248f3296e0e61c52876685959c1fa8eb1646a0301cbad0" address="unix:///run/containerd/s/053ea8396edd978b5f4050b4b2c536d08526d1534a05c63068a8f179cc1919e1" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:43:06.069033 systemd[1]: Started cri-containerd-2b5e90fc7a585d58bb32d5a1743a2aa397b07a5c4aa7dacbaa99fad2863d4768.scope - libcontainer container 2b5e90fc7a585d58bb32d5a1743a2aa397b07a5c4aa7dacbaa99fad2863d4768. Jul 15 04:43:06.069773 systemd[1]: Started cri-containerd-bcab83aeea3620d987248f3296e0e61c52876685959c1fa8eb1646a0301cbad0.scope - libcontainer container bcab83aeea3620d987248f3296e0e61c52876685959c1fa8eb1646a0301cbad0. Jul 15 04:43:06.103190 containerd[1922]: time="2025-07-15T04:43:06.103152059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4396.0.0-n-efed024aac,Uid:caaa0440d0815a2e63a0e5fc2adaead5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2e6d5d172d35e371e4d5a1a98df19970ce60f6e0e3371b4369267c9ac538546\"" Jul 15 04:43:06.110946 containerd[1922]: time="2025-07-15T04:43:06.110913090Z" level=info msg="CreateContainer within sandbox \"c2e6d5d172d35e371e4d5a1a98df19970ce60f6e0e3371b4369267c9ac538546\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 04:43:06.120664 containerd[1922]: time="2025-07-15T04:43:06.120334621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4396.0.0-n-efed024aac,Uid:fac573add4257e49b202e6754777d405,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b5e90fc7a585d58bb32d5a1743a2aa397b07a5c4aa7dacbaa99fad2863d4768\"" Jul 15 04:43:06.125340 containerd[1922]: time="2025-07-15T04:43:06.125312616Z" level=info msg="CreateContainer within sandbox \"2b5e90fc7a585d58bb32d5a1743a2aa397b07a5c4aa7dacbaa99fad2863d4768\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 04:43:06.138267 containerd[1922]: time="2025-07-15T04:43:06.138161470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4396.0.0-n-efed024aac,Uid:481885ea80b2ba5f485a4a8b0303d3da,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcab83aeea3620d987248f3296e0e61c52876685959c1fa8eb1646a0301cbad0\"" Jul 15 04:43:06.140088 containerd[1922]: time="2025-07-15T04:43:06.140061587Z" level=info msg="CreateContainer within sandbox \"bcab83aeea3620d987248f3296e0e61c52876685959c1fa8eb1646a0301cbad0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 04:43:06.171079 containerd[1922]: time="2025-07-15T04:43:06.171044285Z" level=info msg="Container 616c2bb381a1bde01bc4136e7f7125398066cd838f84aaf09ff6d517fe91e3bb: CDI devices from CRI Config.CDIDevices: []" Jul 15 
04:43:06.200666 containerd[1922]: time="2025-07-15T04:43:06.200203414Z" level=info msg="Container c65426bcf9f14bcf477c08c4c9af8746fc5b5e0c5dfc909e977693d5d0bc724e: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:43:06.209856 containerd[1922]: time="2025-07-15T04:43:06.209823335Z" level=info msg="Container adcaaea40431ad4ecc1a36e4ce4201f7c8fb3583d7819f00f9fc0c8ca267b9e7: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:43:06.222175 containerd[1922]: time="2025-07-15T04:43:06.222143179Z" level=info msg="CreateContainer within sandbox \"c2e6d5d172d35e371e4d5a1a98df19970ce60f6e0e3371b4369267c9ac538546\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"616c2bb381a1bde01bc4136e7f7125398066cd838f84aaf09ff6d517fe91e3bb\"" Jul 15 04:43:06.222750 containerd[1922]: time="2025-07-15T04:43:06.222725847Z" level=info msg="StartContainer for \"616c2bb381a1bde01bc4136e7f7125398066cd838f84aaf09ff6d517fe91e3bb\"" Jul 15 04:43:06.223567 containerd[1922]: time="2025-07-15T04:43:06.223540701Z" level=info msg="connecting to shim 616c2bb381a1bde01bc4136e7f7125398066cd838f84aaf09ff6d517fe91e3bb" address="unix:///run/containerd/s/d1244bf287901934bd2c824fafd43ba6f403e3f16631147254558bcc150f5f25" protocol=ttrpc version=3 Jul 15 04:43:06.235449 containerd[1922]: time="2025-07-15T04:43:06.235408872Z" level=info msg="CreateContainer within sandbox \"2b5e90fc7a585d58bb32d5a1743a2aa397b07a5c4aa7dacbaa99fad2863d4768\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c65426bcf9f14bcf477c08c4c9af8746fc5b5e0c5dfc909e977693d5d0bc724e\"" Jul 15 04:43:06.235904 containerd[1922]: time="2025-07-15T04:43:06.235839399Z" level=info msg="StartContainer for \"c65426bcf9f14bcf477c08c4c9af8746fc5b5e0c5dfc909e977693d5d0bc724e\"" Jul 15 04:43:06.236721 containerd[1922]: time="2025-07-15T04:43:06.236698398Z" level=info msg="connecting to shim c65426bcf9f14bcf477c08c4c9af8746fc5b5e0c5dfc909e977693d5d0bc724e" address="unix:///run/containerd/s/f6815e84671a3be5f24cbe2ce27f571f735b6f5c2032042eff7ae426c3ff3040" protocol=ttrpc version=3 Jul 15 04:43:06.239015 systemd[1]: Started cri-containerd-616c2bb381a1bde01bc4136e7f7125398066cd838f84aaf09ff6d517fe91e3bb.scope - libcontainer container 616c2bb381a1bde01bc4136e7f7125398066cd838f84aaf09ff6d517fe91e3bb. Jul 15 04:43:06.250590 containerd[1922]: time="2025-07-15T04:43:06.250552576Z" level=info msg="CreateContainer within sandbox \"bcab83aeea3620d987248f3296e0e61c52876685959c1fa8eb1646a0301cbad0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"adcaaea40431ad4ecc1a36e4ce4201f7c8fb3583d7819f00f9fc0c8ca267b9e7\"" Jul 15 04:43:06.252833 containerd[1922]: time="2025-07-15T04:43:06.252641107Z" level=info msg="StartContainer for \"adcaaea40431ad4ecc1a36e4ce4201f7c8fb3583d7819f00f9fc0c8ca267b9e7\"" Jul 15 04:43:06.254250 containerd[1922]: time="2025-07-15T04:43:06.254199067Z" level=info msg="connecting to shim adcaaea40431ad4ecc1a36e4ce4201f7c8fb3583d7819f00f9fc0c8ca267b9e7" address="unix:///run/containerd/s/053ea8396edd978b5f4050b4b2c536d08526d1534a05c63068a8f179cc1919e1" protocol=ttrpc version=3 Jul 15 04:43:06.257068 systemd[1]: Started cri-containerd-c65426bcf9f14bcf477c08c4c9af8746fc5b5e0c5dfc909e977693d5d0bc724e.scope - libcontainer container c65426bcf9f14bcf477c08c4c9af8746fc5b5e0c5dfc909e977693d5d0bc724e. 
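
The repeated "connect: connection refused" failures against https://10.200.20.23:6443 earlier in this stretch are the kubelet retrying an API server whose static pod has not come up yet. A minimal sketch of that connectivity check, in Python with only the standard library; the host and port are taken from the log, the helper itself is illustrative and not part of the transcript:

```python
# Probe the API server endpoint the kubelet keeps retrying.
# Host/port (10.200.20.23:6443) come from the log lines above.
import socket

def probe(host: str, port: int, timeout: float = 3.0) -> str:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        # What the kubelet reports as "dial tcp ...: connect: connection refused"
        return "connection refused"
    except OSError as exc:
        return f"error: {exc}"

if __name__ == "__main__":
    print(probe("10.200.20.23", 6443))
```
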
Jul 15 04:43:06.276145 systemd[1]: Started cri-containerd-adcaaea40431ad4ecc1a36e4ce4201f7c8fb3583d7819f00f9fc0c8ca267b9e7.scope - libcontainer container adcaaea40431ad4ecc1a36e4ce4201f7c8fb3583d7819f00f9fc0c8ca267b9e7. Jul 15 04:43:06.291757 containerd[1922]: time="2025-07-15T04:43:06.291722625Z" level=info msg="StartContainer for \"616c2bb381a1bde01bc4136e7f7125398066cd838f84aaf09ff6d517fe91e3bb\" returns successfully" Jul 15 04:43:06.325957 containerd[1922]: time="2025-07-15T04:43:06.325901542Z" level=info msg="StartContainer for \"c65426bcf9f14bcf477c08c4c9af8746fc5b5e0c5dfc909e977693d5d0bc724e\" returns successfully" Jul 15 04:43:06.339163 containerd[1922]: time="2025-07-15T04:43:06.339066064Z" level=info msg="StartContainer for \"adcaaea40431ad4ecc1a36e4ce4201f7c8fb3583d7819f00f9fc0c8ca267b9e7\" returns successfully" Jul 15 04:43:07.265524 kubelet[3113]: E0715 04:43:07.265376 3113 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4396.0.0-n-efed024aac\" not found" node="ci-4396.0.0-n-efed024aac" Jul 15 04:43:07.268801 kubelet[3113]: E0715 04:43:07.268665 3113 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4396.0.0-n-efed024aac\" not found" node="ci-4396.0.0-n-efed024aac" Jul 15 04:43:07.271173 kubelet[3113]: E0715 04:43:07.271153 3113 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4396.0.0-n-efed024aac\" not found" node="ci-4396.0.0-n-efed024aac" Jul 15 04:43:07.409098 kubelet[3113]: I0715 04:43:07.408881 3113 kubelet_node_status.go:75] "Attempting to register node" node="ci-4396.0.0-n-efed024aac" Jul 15 04:43:07.545752 kubelet[3113]: I0715 04:43:07.545650 3113 kubelet_node_status.go:78] "Successfully registered node" node="ci-4396.0.0-n-efed024aac" Jul 15 04:43:07.546115 kubelet[3113]: E0715 04:43:07.545993 3113 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4396.0.0-n-efed024aac\": node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:07.557508 kubelet[3113]: E0715 04:43:07.557485 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:07.658070 kubelet[3113]: E0715 04:43:07.658029 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:07.759096 kubelet[3113]: E0715 04:43:07.759053 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:07.859734 kubelet[3113]: E0715 04:43:07.859615 3113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-efed024aac\" not found" Jul 15 04:43:07.971084 kubelet[3113]: I0715 04:43:07.970897 3113 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4396.0.0-n-efed024aac" Jul 15 04:43:07.994942 kubelet[3113]: E0715 04:43:07.994899 3113 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4396.0.0-n-efed024aac\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4396.0.0-n-efed024aac" Jul 15 04:43:07.994942 kubelet[3113]: I0715 04:43:07.994942 3113 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4396.0.0-n-efed024aac" Jul 15 04:43:07.996314 
kubelet[3113]: E0715 04:43:07.996285 3113 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4396.0.0-n-efed024aac\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4396.0.0-n-efed024aac" Jul 15 04:43:07.996314 kubelet[3113]: I0715 04:43:07.996306 3113 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4396.0.0-n-efed024aac" Jul 15 04:43:07.997462 kubelet[3113]: E0715 04:43:07.997437 3113 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4396.0.0-n-efed024aac\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4396.0.0-n-efed024aac" Jul 15 04:43:08.168741 kubelet[3113]: I0715 04:43:08.168685 3113 apiserver.go:52] "Watching apiserver" Jul 15 04:43:08.172616 kubelet[3113]: I0715 04:43:08.172574 3113 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 04:43:08.271970 kubelet[3113]: I0715 04:43:08.271929 3113 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4396.0.0-n-efed024aac" Jul 15 04:43:08.272281 kubelet[3113]: I0715 04:43:08.272231 3113 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4396.0.0-n-efed024aac" Jul 15 04:43:08.272950 kubelet[3113]: I0715 04:43:08.272667 3113 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4396.0.0-n-efed024aac" Jul 15 04:43:08.274622 kubelet[3113]: E0715 04:43:08.274374 3113 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4396.0.0-n-efed024aac\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4396.0.0-n-efed024aac" Jul 15 04:43:08.274622 kubelet[3113]: E0715 04:43:08.274547 3113 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4396.0.0-n-efed024aac\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4396.0.0-n-efed024aac" Jul 15 04:43:08.274792 kubelet[3113]: E0715 04:43:08.274777 3113 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4396.0.0-n-efed024aac\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4396.0.0-n-efed024aac" Jul 15 04:43:09.273420 kubelet[3113]: I0715 04:43:09.273383 3113 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4396.0.0-n-efed024aac" Jul 15 04:43:09.273771 kubelet[3113]: I0715 04:43:09.273723 3113 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4396.0.0-n-efed024aac" Jul 15 04:43:09.285234 kubelet[3113]: W0715 04:43:09.285149 3113 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 15 04:43:09.290549 kubelet[3113]: W0715 04:43:09.290491 3113 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 15 04:43:10.024196 systemd[1]: Reload requested from client PID 3387 ('systemctl') (unit session-9.scope)... Jul 15 04:43:10.024209 systemd[1]: Reloading... Jul 15 04:43:10.092892 zram_generator::config[3433]: No configuration found. 
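
The warnings.go:70 messages above fire because the node name ci-4396.0.0-n-efed024aac ends up in the static pods' metadata.name and hence in the Pod hostname, where a DNS-1123 label ("must not contain dots") is expected. A small illustrative check against the standard label pattern; the name comes from the log, the regex and script are assumptions added here for clarity:

```python
# Sketch of the DNS-1123 label check behind the "must not contain dots" warning.
import re

DNS1123_LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]{0,61}[a-z0-9])?$")

name = "ci-4396.0.0-n-efed024aac"
# Prints False: the dots in the node name are not allowed in a DNS label.
print(name, "is a valid DNS label:", bool(DNS1123_LABEL.match(name)))
```
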
Jul 15 04:43:10.161619 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:43:10.255415 systemd[1]: Reloading finished in 230 ms. Jul 15 04:43:10.287579 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:43:10.306639 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 04:43:10.307022 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:43:10.307176 systemd[1]: kubelet.service: Consumed 829ms CPU time, 127.6M memory peak. Jul 15 04:43:10.308691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:43:10.439635 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:43:10.445125 (kubelet)[3497]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 04:43:10.474974 kubelet[3497]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 04:43:10.475291 kubelet[3497]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 04:43:10.475291 kubelet[3497]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 04:43:10.475291 kubelet[3497]: I0715 04:43:10.475205 3497 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 04:43:10.479297 kubelet[3497]: I0715 04:43:10.479270 3497 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 15 04:43:10.479297 kubelet[3497]: I0715 04:43:10.479293 3497 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 04:43:10.479477 kubelet[3497]: I0715 04:43:10.479460 3497 server.go:954] "Client rotation is on, will bootstrap in background" Jul 15 04:43:10.480382 kubelet[3497]: I0715 04:43:10.480365 3497 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 15 04:43:10.481907 kubelet[3497]: I0715 04:43:10.481887 3497 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 04:43:10.485113 kubelet[3497]: I0715 04:43:10.485027 3497 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 04:43:10.488463 kubelet[3497]: I0715 04:43:10.488287 3497 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 04:43:10.488463 kubelet[3497]: I0715 04:43:10.488443 3497 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 04:43:10.488573 kubelet[3497]: I0715 04:43:10.488459 3497 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4396.0.0-n-efed024aac","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 04:43:10.488652 kubelet[3497]: I0715 04:43:10.488576 3497 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 04:43:10.488652 kubelet[3497]: I0715 04:43:10.488583 3497 container_manager_linux.go:304] "Creating device plugin manager" Jul 15 04:43:10.488652 kubelet[3497]: I0715 04:43:10.488614 3497 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:43:10.488724 kubelet[3497]: I0715 04:43:10.488706 3497 kubelet.go:446] "Attempting to sync node with API server" Jul 15 04:43:10.488724 kubelet[3497]: I0715 04:43:10.488717 3497 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 04:43:10.489319 kubelet[3497]: I0715 04:43:10.488735 3497 kubelet.go:352] "Adding apiserver pod source" Jul 15 04:43:10.489319 kubelet[3497]: I0715 04:43:10.488745 3497 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 04:43:10.491210 kubelet[3497]: I0715 04:43:10.491195 3497 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 04:43:10.491560 kubelet[3497]: I0715 04:43:10.491546 3497 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 04:43:10.491918 kubelet[3497]: I0715 04:43:10.491902 3497 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 04:43:10.492009 kubelet[3497]: I0715 04:43:10.492000 3497 server.go:1287] "Started kubelet" Jul 15 04:43:10.494045 kubelet[3497]: I0715 04:43:10.494029 3497 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 04:43:10.502995 kubelet[3497]: I0715 04:43:10.500456 3497 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jul 15 04:43:10.502995 kubelet[3497]: I0715 04:43:10.501172 3497 server.go:479] "Adding debug handlers to kubelet server" Jul 15 04:43:10.502995 kubelet[3497]: I0715 04:43:10.501812 3497 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 04:43:10.502995 kubelet[3497]: I0715 04:43:10.502007 3497 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 04:43:10.502995 kubelet[3497]: I0715 04:43:10.502202 3497 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 04:43:10.503413 kubelet[3497]: I0715 04:43:10.503366 3497 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 04:43:10.504058 kubelet[3497]: I0715 04:43:10.504039 3497 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 04:43:10.504156 kubelet[3497]: I0715 04:43:10.504144 3497 reconciler.go:26] "Reconciler: start to sync state" Jul 15 04:43:10.507226 kubelet[3497]: I0715 04:43:10.507199 3497 factory.go:221] Registration of the systemd container factory successfully Jul 15 04:43:10.507292 kubelet[3497]: I0715 04:43:10.507277 3497 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 04:43:10.509631 kubelet[3497]: I0715 04:43:10.509607 3497 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 04:43:10.511381 kubelet[3497]: I0715 04:43:10.511158 3497 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 04:43:10.511381 kubelet[3497]: I0715 04:43:10.511178 3497 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 15 04:43:10.511381 kubelet[3497]: I0715 04:43:10.511191 3497 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 15 04:43:10.511381 kubelet[3497]: I0715 04:43:10.511196 3497 kubelet.go:2382] "Starting kubelet main sync loop" Jul 15 04:43:10.511381 kubelet[3497]: E0715 04:43:10.511227 3497 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 04:43:10.512649 kubelet[3497]: I0715 04:43:10.512074 3497 factory.go:221] Registration of the containerd container factory successfully Jul 15 04:43:10.514386 kubelet[3497]: E0715 04:43:10.514343 3497 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 04:43:10.553141 kubelet[3497]: I0715 04:43:10.553049 3497 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 04:43:10.553460 kubelet[3497]: I0715 04:43:10.553256 3497 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 04:43:10.553722 kubelet[3497]: I0715 04:43:10.553711 3497 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:43:10.554038 kubelet[3497]: I0715 04:43:10.553951 3497 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 04:43:10.554205 kubelet[3497]: I0715 04:43:10.554120 3497 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 04:43:10.554205 kubelet[3497]: I0715 04:43:10.554149 3497 policy_none.go:49] "None policy: Start" Jul 15 04:43:10.554205 kubelet[3497]: I0715 04:43:10.554158 3497 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 04:43:10.554205 kubelet[3497]: I0715 04:43:10.554169 3497 state_mem.go:35] "Initializing new in-memory state store" Jul 15 04:43:10.554432 kubelet[3497]: I0715 04:43:10.554410 3497 state_mem.go:75] "Updated machine memory state" Jul 15 04:43:10.557758 kubelet[3497]: I0715 04:43:10.557739 3497 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 04:43:10.558049 kubelet[3497]: I0715 04:43:10.557901 3497 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 04:43:10.558049 kubelet[3497]: I0715 04:43:10.557912 3497 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 04:43:10.558417 kubelet[3497]: I0715 04:43:10.558138 3497 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 04:43:10.560890 kubelet[3497]: E0715 04:43:10.560646 3497 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 15 04:43:10.611955 kubelet[3497]: I0715 04:43:10.611924 3497 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4396.0.0-n-efed024aac" Jul 15 04:43:10.612227 kubelet[3497]: I0715 04:43:10.611926 3497 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4396.0.0-n-efed024aac" Jul 15 04:43:10.612699 kubelet[3497]: I0715 04:43:10.612030 3497 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4396.0.0-n-efed024aac" Jul 15 04:43:10.636389 kubelet[3497]: W0715 04:43:10.636364 3497 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 15 04:43:10.636660 kubelet[3497]: E0715 04:43:10.636549 3497 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4396.0.0-n-efed024aac\" already exists" pod="kube-system/kube-scheduler-ci-4396.0.0-n-efed024aac" Jul 15 04:43:10.636660 kubelet[3497]: W0715 04:43:10.636420 3497 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 15 04:43:10.636725 kubelet[3497]: W0715 04:43:10.636485 3497 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 15 04:43:10.636757 kubelet[3497]: E0715 04:43:10.636740 3497 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4396.0.0-n-efed024aac\" already exists" pod="kube-system/kube-apiserver-ci-4396.0.0-n-efed024aac" Jul 15 04:43:10.663402 kubelet[3497]: I0715 04:43:10.663095 3497 kubelet_node_status.go:75] "Attempting to register node" node="ci-4396.0.0-n-efed024aac" Jul 15 04:43:10.676660 kubelet[3497]: I0715 04:43:10.676609 3497 kubelet_node_status.go:124] "Node was previously registered" node="ci-4396.0.0-n-efed024aac" Jul 15 04:43:10.676946 kubelet[3497]: I0715 04:43:10.676912 3497 kubelet_node_status.go:78] "Successfully registered node" node="ci-4396.0.0-n-efed024aac" Jul 15 04:43:10.805508 kubelet[3497]: I0715 04:43:10.805379 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/481885ea80b2ba5f485a4a8b0303d3da-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4396.0.0-n-efed024aac\" (UID: \"481885ea80b2ba5f485a4a8b0303d3da\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-efed024aac" Jul 15 04:43:10.806062 kubelet[3497]: I0715 04:43:10.805846 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/481885ea80b2ba5f485a4a8b0303d3da-ca-certs\") pod \"kube-controller-manager-ci-4396.0.0-n-efed024aac\" (UID: \"481885ea80b2ba5f485a4a8b0303d3da\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-efed024aac" Jul 15 04:43:10.806062 kubelet[3497]: I0715 04:43:10.806001 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/481885ea80b2ba5f485a4a8b0303d3da-flexvolume-dir\") pod \"kube-controller-manager-ci-4396.0.0-n-efed024aac\" (UID: \"481885ea80b2ba5f485a4a8b0303d3da\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-efed024aac" Jul 15 
04:43:10.806062 kubelet[3497]: I0715 04:43:10.806035 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/481885ea80b2ba5f485a4a8b0303d3da-k8s-certs\") pod \"kube-controller-manager-ci-4396.0.0-n-efed024aac\" (UID: \"481885ea80b2ba5f485a4a8b0303d3da\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-efed024aac" Jul 15 04:43:10.806322 kubelet[3497]: I0715 04:43:10.806212 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/481885ea80b2ba5f485a4a8b0303d3da-kubeconfig\") pod \"kube-controller-manager-ci-4396.0.0-n-efed024aac\" (UID: \"481885ea80b2ba5f485a4a8b0303d3da\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-efed024aac" Jul 15 04:43:10.806322 kubelet[3497]: I0715 04:43:10.806239 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fac573add4257e49b202e6754777d405-kubeconfig\") pod \"kube-scheduler-ci-4396.0.0-n-efed024aac\" (UID: \"fac573add4257e49b202e6754777d405\") " pod="kube-system/kube-scheduler-ci-4396.0.0-n-efed024aac" Jul 15 04:43:10.806322 kubelet[3497]: I0715 04:43:10.806251 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/caaa0440d0815a2e63a0e5fc2adaead5-ca-certs\") pod \"kube-apiserver-ci-4396.0.0-n-efed024aac\" (UID: \"caaa0440d0815a2e63a0e5fc2adaead5\") " pod="kube-system/kube-apiserver-ci-4396.0.0-n-efed024aac" Jul 15 04:43:10.806322 kubelet[3497]: I0715 04:43:10.806287 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/caaa0440d0815a2e63a0e5fc2adaead5-k8s-certs\") pod \"kube-apiserver-ci-4396.0.0-n-efed024aac\" (UID: \"caaa0440d0815a2e63a0e5fc2adaead5\") " pod="kube-system/kube-apiserver-ci-4396.0.0-n-efed024aac" Jul 15 04:43:10.806322 kubelet[3497]: I0715 04:43:10.806298 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/caaa0440d0815a2e63a0e5fc2adaead5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4396.0.0-n-efed024aac\" (UID: \"caaa0440d0815a2e63a0e5fc2adaead5\") " pod="kube-system/kube-apiserver-ci-4396.0.0-n-efed024aac" Jul 15 04:43:11.489756 kubelet[3497]: I0715 04:43:11.489706 3497 apiserver.go:52] "Watching apiserver" Jul 15 04:43:13.714200 kubelet[3497]: I0715 04:43:11.505031 3497 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 04:43:13.714200 kubelet[3497]: I0715 04:43:11.563274 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4396.0.0-n-efed024aac" podStartSLOduration=2.563255978 podStartE2EDuration="2.563255978s" podCreationTimestamp="2025-07-15 04:43:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:43:11.562473214 +0000 UTC m=+1.114671969" watchObservedRunningTime="2025-07-15 04:43:11.563255978 +0000 UTC m=+1.115454725" Jul 15 04:43:13.714200 kubelet[3497]: I0715 04:43:11.589800 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-4396.0.0-n-efed024aac" podStartSLOduration=2.58966256 podStartE2EDuration="2.58966256s" podCreationTimestamp="2025-07-15 04:43:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:43:11.578753207 +0000 UTC m=+1.130951962" watchObservedRunningTime="2025-07-15 04:43:11.58966256 +0000 UTC m=+1.141861315" Jul 15 04:43:13.714200 kubelet[3497]: I0715 04:43:11.589964 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4396.0.0-n-efed024aac" podStartSLOduration=1.589936306 podStartE2EDuration="1.589936306s" podCreationTimestamp="2025-07-15 04:43:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:43:11.589934289 +0000 UTC m=+1.142133044" watchObservedRunningTime="2025-07-15 04:43:11.589936306 +0000 UTC m=+1.142135061" Jul 15 04:43:13.737934 sudo[3529]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 15 04:43:13.738510 sudo[3529]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 15 04:43:13.975379 sudo[3529]: pam_unix(sudo:session): session closed for user root Jul 15 04:43:15.121059 sudo[2389]: pam_unix(sudo:session): session closed for user root Jul 15 04:43:15.207927 sshd[2388]: Connection closed by 10.200.16.10 port 36222 Jul 15 04:43:15.207043 sshd-session[2385]: pam_unix(sshd:session): session closed for user core Jul 15 04:43:15.210168 systemd[1]: sshd@6-10.200.20.23:22-10.200.16.10:36222.service: Deactivated successfully. Jul 15 04:43:15.211720 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 04:43:15.211896 systemd[1]: session-9.scope: Consumed 3.156s CPU time, 261.5M memory peak. Jul 15 04:43:15.212868 systemd-logind[1891]: Session 9 logged out. Waiting for processes to exit. Jul 15 04:43:15.214673 systemd-logind[1891]: Removed session 9. Jul 15 04:43:16.345281 kubelet[3497]: I0715 04:43:16.345229 3497 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 04:43:16.346282 containerd[1922]: time="2025-07-15T04:43:16.346215118Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 04:43:16.346535 kubelet[3497]: I0715 04:43:16.346454 3497 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 04:43:17.044924 systemd[1]: Created slice kubepods-besteffort-pod1a6f37fb_828a_44e1_b45f_c2ae1b99f4be.slice - libcontainer container kubepods-besteffort-pod1a6f37fb_828a_44e1_b45f_c2ae1b99f4be.slice. Jul 15 04:43:17.054287 systemd[1]: Created slice kubepods-burstable-poda8242d21_e741_47a7_8237_0adc0e3e9fec.slice - libcontainer container kubepods-burstable-poda8242d21_e741_47a7_8237_0adc0e3e9fec.slice. 
Jul 15 04:43:17.139663 kubelet[3497]: I0715 04:43:17.139580 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1a6f37fb-828a-44e1-b45f-c2ae1b99f4be-kube-proxy\") pod \"kube-proxy-w7drp\" (UID: \"1a6f37fb-828a-44e1-b45f-c2ae1b99f4be\") " pod="kube-system/kube-proxy-w7drp" Jul 15 04:43:17.139663 kubelet[3497]: I0715 04:43:17.139618 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-host-proc-sys-net\") pod \"cilium-5frr4\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " pod="kube-system/cilium-5frr4" Jul 15 04:43:17.139663 kubelet[3497]: I0715 04:43:17.139630 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a6f37fb-828a-44e1-b45f-c2ae1b99f4be-xtables-lock\") pod \"kube-proxy-w7drp\" (UID: \"1a6f37fb-828a-44e1-b45f-c2ae1b99f4be\") " pod="kube-system/kube-proxy-w7drp" Jul 15 04:43:17.139663 kubelet[3497]: I0715 04:43:17.139645 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-host-proc-sys-kernel\") pod \"cilium-5frr4\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " pod="kube-system/cilium-5frr4" Jul 15 04:43:17.139663 kubelet[3497]: I0715 04:43:17.139658 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8242d21-e741-47a7-8237-0adc0e3e9fec-hubble-tls\") pod \"cilium-5frr4\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " pod="kube-system/cilium-5frr4" Jul 15 04:43:17.139663 kubelet[3497]: I0715 04:43:17.139669 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-cilium-run\") pod \"cilium-5frr4\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " pod="kube-system/cilium-5frr4" Jul 15 04:43:17.140114 kubelet[3497]: I0715 04:43:17.139681 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-bpf-maps\") pod \"cilium-5frr4\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " pod="kube-system/cilium-5frr4" Jul 15 04:43:17.140114 kubelet[3497]: I0715 04:43:17.139690 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-cilium-cgroup\") pod \"cilium-5frr4\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " pod="kube-system/cilium-5frr4" Jul 15 04:43:17.140114 kubelet[3497]: I0715 04:43:17.139701 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-etc-cni-netd\") pod \"cilium-5frr4\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " pod="kube-system/cilium-5frr4" Jul 15 04:43:17.140114 kubelet[3497]: I0715 04:43:17.139711 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlsb8\" 
(UniqueName: \"kubernetes.io/projected/a8242d21-e741-47a7-8237-0adc0e3e9fec-kube-api-access-hlsb8\") pod \"cilium-5frr4\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " pod="kube-system/cilium-5frr4" Jul 15 04:43:17.140114 kubelet[3497]: I0715 04:43:17.139722 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a6f37fb-828a-44e1-b45f-c2ae1b99f4be-lib-modules\") pod \"kube-proxy-w7drp\" (UID: \"1a6f37fb-828a-44e1-b45f-c2ae1b99f4be\") " pod="kube-system/kube-proxy-w7drp" Jul 15 04:43:17.140114 kubelet[3497]: I0715 04:43:17.139741 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-hostproc\") pod \"cilium-5frr4\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " pod="kube-system/cilium-5frr4" Jul 15 04:43:17.140207 kubelet[3497]: I0715 04:43:17.139752 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-xtables-lock\") pod \"cilium-5frr4\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " pod="kube-system/cilium-5frr4" Jul 15 04:43:17.140207 kubelet[3497]: I0715 04:43:17.139763 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8242d21-e741-47a7-8237-0adc0e3e9fec-clustermesh-secrets\") pod \"cilium-5frr4\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " pod="kube-system/cilium-5frr4" Jul 15 04:43:17.140207 kubelet[3497]: I0715 04:43:17.139809 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-lib-modules\") pod \"cilium-5frr4\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " pod="kube-system/cilium-5frr4" Jul 15 04:43:17.140207 kubelet[3497]: I0715 04:43:17.139836 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8242d21-e741-47a7-8237-0adc0e3e9fec-cilium-config-path\") pod \"cilium-5frr4\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " pod="kube-system/cilium-5frr4" Jul 15 04:43:17.140207 kubelet[3497]: I0715 04:43:17.139879 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-cni-path\") pod \"cilium-5frr4\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " pod="kube-system/cilium-5frr4" Jul 15 04:43:17.140207 kubelet[3497]: I0715 04:43:17.139893 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbjjc\" (UniqueName: \"kubernetes.io/projected/1a6f37fb-828a-44e1-b45f-c2ae1b99f4be-kube-api-access-fbjjc\") pod \"kube-proxy-w7drp\" (UID: \"1a6f37fb-828a-44e1-b45f-c2ae1b99f4be\") " pod="kube-system/kube-proxy-w7drp" Jul 15 04:43:17.353279 containerd[1922]: time="2025-07-15T04:43:17.352878877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w7drp,Uid:1a6f37fb-828a-44e1-b45f-c2ae1b99f4be,Namespace:kube-system,Attempt:0,}" Jul 15 04:43:17.358513 containerd[1922]: time="2025-07-15T04:43:17.358381232Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-5frr4,Uid:a8242d21-e741-47a7-8237-0adc0e3e9fec,Namespace:kube-system,Attempt:0,}" Jul 15 04:43:17.393668 systemd[1]: Created slice kubepods-besteffort-podf1f7f9c9_f118_42b7_b358_1687a352ea1a.slice - libcontainer container kubepods-besteffort-podf1f7f9c9_f118_42b7_b358_1687a352ea1a.slice. Jul 15 04:43:17.431152 containerd[1922]: time="2025-07-15T04:43:17.431068697Z" level=info msg="connecting to shim d0a8ebab3780ba9c4acad5b0d01c09b776c06e61facfe3c6cfff53dc7dea733b" address="unix:///run/containerd/s/274a9349fc7fb75948e667da25f6a7ef4421d29614d25c14de5fb25bc06df190" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:43:17.441948 kubelet[3497]: I0715 04:43:17.441855 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1f7f9c9-f118-42b7-b358-1687a352ea1a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-jvlp7\" (UID: \"f1f7f9c9-f118-42b7-b358-1687a352ea1a\") " pod="kube-system/cilium-operator-6c4d7847fc-jvlp7" Jul 15 04:43:17.441948 kubelet[3497]: I0715 04:43:17.441904 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwnj2\" (UniqueName: \"kubernetes.io/projected/f1f7f9c9-f118-42b7-b358-1687a352ea1a-kube-api-access-pwnj2\") pod \"cilium-operator-6c4d7847fc-jvlp7\" (UID: \"f1f7f9c9-f118-42b7-b358-1687a352ea1a\") " pod="kube-system/cilium-operator-6c4d7847fc-jvlp7" Jul 15 04:43:17.446597 systemd[1]: Started cri-containerd-d0a8ebab3780ba9c4acad5b0d01c09b776c06e61facfe3c6cfff53dc7dea733b.scope - libcontainer container d0a8ebab3780ba9c4acad5b0d01c09b776c06e61facfe3c6cfff53dc7dea733b. Jul 15 04:43:17.452514 containerd[1922]: time="2025-07-15T04:43:17.452264573Z" level=info msg="connecting to shim edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c" address="unix:///run/containerd/s/df303501caa4ebb55b1db2970d3543687fcc92b5b0aab4cb2caac8dcbdeb4a23" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:43:17.473291 containerd[1922]: time="2025-07-15T04:43:17.473168142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w7drp,Uid:1a6f37fb-828a-44e1-b45f-c2ae1b99f4be,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0a8ebab3780ba9c4acad5b0d01c09b776c06e61facfe3c6cfff53dc7dea733b\"" Jul 15 04:43:17.475947 containerd[1922]: time="2025-07-15T04:43:17.475919115Z" level=info msg="CreateContainer within sandbox \"d0a8ebab3780ba9c4acad5b0d01c09b776c06e61facfe3c6cfff53dc7dea733b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 04:43:17.477114 systemd[1]: Started cri-containerd-edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c.scope - libcontainer container edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c. 
Jul 15 04:43:17.505190 containerd[1922]: time="2025-07-15T04:43:17.505084276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5frr4,Uid:a8242d21-e741-47a7-8237-0adc0e3e9fec,Namespace:kube-system,Attempt:0,} returns sandbox id \"edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c\"" Jul 15 04:43:17.506732 containerd[1922]: time="2025-07-15T04:43:17.506690487Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 15 04:43:17.517773 containerd[1922]: time="2025-07-15T04:43:17.517735166Z" level=info msg="Container 09408f8e1e6e9e9a876f84b9385ad8b8231ad2106ec4601f3a386e72508f5217: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:43:17.548841 containerd[1922]: time="2025-07-15T04:43:17.548697529Z" level=info msg="CreateContainer within sandbox \"d0a8ebab3780ba9c4acad5b0d01c09b776c06e61facfe3c6cfff53dc7dea733b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"09408f8e1e6e9e9a876f84b9385ad8b8231ad2106ec4601f3a386e72508f5217\"" Jul 15 04:43:17.549554 containerd[1922]: time="2025-07-15T04:43:17.549447532Z" level=info msg="StartContainer for \"09408f8e1e6e9e9a876f84b9385ad8b8231ad2106ec4601f3a386e72508f5217\"" Jul 15 04:43:17.551117 containerd[1922]: time="2025-07-15T04:43:17.550957460Z" level=info msg="connecting to shim 09408f8e1e6e9e9a876f84b9385ad8b8231ad2106ec4601f3a386e72508f5217" address="unix:///run/containerd/s/274a9349fc7fb75948e667da25f6a7ef4421d29614d25c14de5fb25bc06df190" protocol=ttrpc version=3 Jul 15 04:43:17.576999 systemd[1]: Started cri-containerd-09408f8e1e6e9e9a876f84b9385ad8b8231ad2106ec4601f3a386e72508f5217.scope - libcontainer container 09408f8e1e6e9e9a876f84b9385ad8b8231ad2106ec4601f3a386e72508f5217. Jul 15 04:43:17.609509 containerd[1922]: time="2025-07-15T04:43:17.609110279Z" level=info msg="StartContainer for \"09408f8e1e6e9e9a876f84b9385ad8b8231ad2106ec4601f3a386e72508f5217\" returns successfully" Jul 15 04:43:17.697390 containerd[1922]: time="2025-07-15T04:43:17.697241601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jvlp7,Uid:f1f7f9c9-f118-42b7-b358-1687a352ea1a,Namespace:kube-system,Attempt:0,}" Jul 15 04:43:17.758191 containerd[1922]: time="2025-07-15T04:43:17.757932257Z" level=info msg="connecting to shim f01bef271b4d2baca9e0a1b2e55ce17657f4e8a6bb4f6350454761b25707c52e" address="unix:///run/containerd/s/57d7d41f295b736b3165412e84750d391c0060e36b9cb938e81630a177b95c8e" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:43:17.775006 systemd[1]: Started cri-containerd-f01bef271b4d2baca9e0a1b2e55ce17657f4e8a6bb4f6350454761b25707c52e.scope - libcontainer container f01bef271b4d2baca9e0a1b2e55ce17657f4e8a6bb4f6350454761b25707c52e. 
Jul 15 04:43:17.807373 containerd[1922]: time="2025-07-15T04:43:17.807330171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jvlp7,Uid:f1f7f9c9-f118-42b7-b358-1687a352ea1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f01bef271b4d2baca9e0a1b2e55ce17657f4e8a6bb4f6350454761b25707c52e\"" Jul 15 04:43:18.581890 kubelet[3497]: I0715 04:43:18.581678 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w7drp" podStartSLOduration=1.5816623920000001 podStartE2EDuration="1.581662392s" podCreationTimestamp="2025-07-15 04:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:43:18.581541043 +0000 UTC m=+8.133739790" watchObservedRunningTime="2025-07-15 04:43:18.581662392 +0000 UTC m=+8.133861139" Jul 15 04:43:24.390191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1935385424.mount: Deactivated successfully. Jul 15 04:43:31.920547 containerd[1922]: time="2025-07-15T04:43:31.920486295Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:43:31.923903 containerd[1922]: time="2025-07-15T04:43:31.923877182Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 15 04:43:31.932251 containerd[1922]: time="2025-07-15T04:43:31.932228582Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:43:31.933449 containerd[1922]: time="2025-07-15T04:43:31.933422794Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 14.426595758s" Jul 15 04:43:31.933471 containerd[1922]: time="2025-07-15T04:43:31.933454731Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 15 04:43:31.935171 containerd[1922]: time="2025-07-15T04:43:31.934966684Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 15 04:43:31.936396 containerd[1922]: time="2025-07-15T04:43:31.936362816Z" level=info msg="CreateContainer within sandbox \"edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 04:43:31.967741 containerd[1922]: time="2025-07-15T04:43:31.967687617Z" level=info msg="Container 43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:43:31.987298 containerd[1922]: time="2025-07-15T04:43:31.987247828Z" level=info msg="CreateContainer within sandbox \"edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80\"" Jul 15 04:43:31.987770 containerd[1922]: time="2025-07-15T04:43:31.987747023Z" level=info msg="StartContainer for \"43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80\"" Jul 15 04:43:31.988746 containerd[1922]: time="2025-07-15T04:43:31.988686354Z" level=info msg="connecting to shim 43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80" address="unix:///run/containerd/s/df303501caa4ebb55b1db2970d3543687fcc92b5b0aab4cb2caac8dcbdeb4a23" protocol=ttrpc version=3 Jul 15 04:43:32.007989 systemd[1]: Started cri-containerd-43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80.scope - libcontainer container 43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80. Jul 15 04:43:32.035336 systemd[1]: cri-containerd-43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80.scope: Deactivated successfully. Jul 15 04:43:32.036784 containerd[1922]: time="2025-07-15T04:43:32.036728923Z" level=info msg="StartContainer for \"43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80\" returns successfully" Jul 15 04:43:32.038591 containerd[1922]: time="2025-07-15T04:43:32.038550375Z" level=info msg="received exit event container_id:\"43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80\" id:\"43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80\" pid:3917 exited_at:{seconds:1752554612 nanos:38266645}" Jul 15 04:43:32.038787 containerd[1922]: time="2025-07-15T04:43:32.038763503Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80\" id:\"43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80\" pid:3917 exited_at:{seconds:1752554612 nanos:38266645}" Jul 15 04:43:32.054205 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80-rootfs.mount: Deactivated successfully. Jul 15 04:43:37.605445 containerd[1922]: time="2025-07-15T04:43:37.605400294Z" level=info msg="CreateContainer within sandbox \"edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 04:43:37.636172 containerd[1922]: time="2025-07-15T04:43:37.636075822Z" level=info msg="Container cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:43:37.661729 containerd[1922]: time="2025-07-15T04:43:37.661687466Z" level=info msg="CreateContainer within sandbox \"edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097\"" Jul 15 04:43:37.662547 containerd[1922]: time="2025-07-15T04:43:37.662137109Z" level=info msg="StartContainer for \"cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097\"" Jul 15 04:43:37.663150 containerd[1922]: time="2025-07-15T04:43:37.663121830Z" level=info msg="connecting to shim cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097" address="unix:///run/containerd/s/df303501caa4ebb55b1db2970d3543687fcc92b5b0aab4cb2caac8dcbdeb4a23" protocol=ttrpc version=3 Jul 15 04:43:37.679973 systemd[1]: Started cri-containerd-cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097.scope - libcontainer container cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097. 
Jul 15 04:43:37.705668 containerd[1922]: time="2025-07-15T04:43:37.705572184Z" level=info msg="StartContainer for \"cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097\" returns successfully" Jul 15 04:43:37.714835 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 04:43:37.715153 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 04:43:37.715553 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 15 04:43:37.717696 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 04:43:37.721327 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 04:43:37.721628 systemd[1]: cri-containerd-cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097.scope: Deactivated successfully. Jul 15 04:43:37.724009 containerd[1922]: time="2025-07-15T04:43:37.723980312Z" level=info msg="received exit event container_id:\"cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097\" id:\"cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097\" pid:3963 exited_at:{seconds:1752554617 nanos:721177099}" Jul 15 04:43:37.724355 containerd[1922]: time="2025-07-15T04:43:37.724336942Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097\" id:\"cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097\" pid:3963 exited_at:{seconds:1752554617 nanos:721177099}" Jul 15 04:43:37.736571 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 04:43:38.609349 containerd[1922]: time="2025-07-15T04:43:38.609244822Z" level=info msg="CreateContainer within sandbox \"edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 04:43:38.635542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097-rootfs.mount: Deactivated successfully. Jul 15 04:43:39.473672 containerd[1922]: time="2025-07-15T04:43:39.473568684Z" level=info msg="Container b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:43:39.475927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4846505.mount: Deactivated successfully. Jul 15 04:43:39.878138 containerd[1922]: time="2025-07-15T04:43:39.877891865Z" level=info msg="CreateContainer within sandbox \"edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a\"" Jul 15 04:43:39.879159 containerd[1922]: time="2025-07-15T04:43:39.878959990Z" level=info msg="StartContainer for \"b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a\"" Jul 15 04:43:39.880305 containerd[1922]: time="2025-07-15T04:43:39.880285317Z" level=info msg="connecting to shim b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a" address="unix:///run/containerd/s/df303501caa4ebb55b1db2970d3543687fcc92b5b0aab4cb2caac8dcbdeb4a23" protocol=ttrpc version=3 Jul 15 04:43:39.900996 systemd[1]: Started cri-containerd-b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a.scope - libcontainer container b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a. 
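
The exited_at fields in the TaskExit events above are plain Unix epochs and line up with the journal timestamps; for example seconds:1752554612 is 04:43:32 UTC on Jul 15 2025. A small conversion sketch with the epoch values copied from the log (the container-name labels follow the matching CreateContainer lines):

```python
# Convert the exited_at epochs from the TaskExit events into UTC wall-clock times.
from datetime import datetime, timezone

for name, epoch in [
    ("mount-cgroup",            1752554612),
    ("apply-sysctl-overwrites", 1752554617),
]:
    print(name, datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
# mount-cgroup 2025-07-15T04:43:32+00:00, apply-sysctl-overwrites ...04:43:37+00:00
```
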
Jul 15 04:43:39.925979 systemd[1]: cri-containerd-b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a.scope: Deactivated successfully. Jul 15 04:43:39.927848 containerd[1922]: time="2025-07-15T04:43:39.927817667Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a\" id:\"b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a\" pid:4017 exited_at:{seconds:1752554619 nanos:927086045}" Jul 15 04:43:39.973821 containerd[1922]: time="2025-07-15T04:43:39.973736686Z" level=info msg="received exit event container_id:\"b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a\" id:\"b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a\" pid:4017 exited_at:{seconds:1752554619 nanos:927086045}" Jul 15 04:43:39.975879 containerd[1922]: time="2025-07-15T04:43:39.975843134Z" level=info msg="StartContainer for \"b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a\" returns successfully" Jul 15 04:43:39.994409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a-rootfs.mount: Deactivated successfully. Jul 15 04:43:41.618243 containerd[1922]: time="2025-07-15T04:43:41.618199282Z" level=info msg="CreateContainer within sandbox \"edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 15 04:43:41.823418 containerd[1922]: time="2025-07-15T04:43:41.823370535Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:43:41.871982 containerd[1922]: time="2025-07-15T04:43:41.871848036Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 15 04:43:41.920600 containerd[1922]: time="2025-07-15T04:43:41.920489513Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:43:41.970741 containerd[1922]: time="2025-07-15T04:43:41.970644748Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 10.035651752s" Jul 15 04:43:41.971016 containerd[1922]: time="2025-07-15T04:43:41.970970225Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 15 04:43:41.971549 containerd[1922]: time="2025-07-15T04:43:41.970725327Z" level=info msg="Container 696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:43:41.975138 containerd[1922]: time="2025-07-15T04:43:41.975117350Z" level=info msg="CreateContainer within sandbox \"f01bef271b4d2baca9e0a1b2e55ce17657f4e8a6bb4f6350454761b25707c52e\" for container 
&ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 15 04:43:42.171269 containerd[1922]: time="2025-07-15T04:43:42.171230490Z" level=info msg="CreateContainer within sandbox \"edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb\"" Jul 15 04:43:42.171798 containerd[1922]: time="2025-07-15T04:43:42.171759528Z" level=info msg="StartContainer for \"696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb\"" Jul 15 04:43:42.173188 containerd[1922]: time="2025-07-15T04:43:42.172909584Z" level=info msg="connecting to shim 696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb" address="unix:///run/containerd/s/df303501caa4ebb55b1db2970d3543687fcc92b5b0aab4cb2caac8dcbdeb4a23" protocol=ttrpc version=3 Jul 15 04:43:42.191975 systemd[1]: Started cri-containerd-696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb.scope - libcontainer container 696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb. Jul 15 04:43:42.210701 systemd[1]: cri-containerd-696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb.scope: Deactivated successfully. Jul 15 04:43:42.214371 containerd[1922]: time="2025-07-15T04:43:42.213913358Z" level=info msg="TaskExit event in podsandbox handler container_id:\"696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb\" id:\"696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb\" pid:4064 exited_at:{seconds:1752554622 nanos:213664668}" Jul 15 04:43:42.221801 containerd[1922]: time="2025-07-15T04:43:42.221769494Z" level=info msg="received exit event container_id:\"696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb\" id:\"696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb\" pid:4064 exited_at:{seconds:1752554622 nanos:213664668}" Jul 15 04:43:42.223265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1408676848.mount: Deactivated successfully. Jul 15 04:43:42.224258 containerd[1922]: time="2025-07-15T04:43:42.223620171Z" level=info msg="Container d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:43:42.224743 containerd[1922]: time="2025-07-15T04:43:42.224720209Z" level=info msg="StartContainer for \"696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb\" returns successfully" Jul 15 04:43:42.970310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb-rootfs.mount: Deactivated successfully. 
Jul 15 04:43:44.587345 containerd[1922]: time="2025-07-15T04:43:44.587290427Z" level=info msg="CreateContainer within sandbox \"f01bef271b4d2baca9e0a1b2e55ce17657f4e8a6bb4f6350454761b25707c52e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df\"" Jul 15 04:43:44.589096 containerd[1922]: time="2025-07-15T04:43:44.589034588Z" level=info msg="StartContainer for \"d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df\"" Jul 15 04:43:44.590006 containerd[1922]: time="2025-07-15T04:43:44.589934502Z" level=info msg="connecting to shim d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df" address="unix:///run/containerd/s/57d7d41f295b736b3165412e84750d391c0060e36b9cb938e81630a177b95c8e" protocol=ttrpc version=3 Jul 15 04:43:44.608995 systemd[1]: Started cri-containerd-d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df.scope - libcontainer container d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df. Jul 15 04:43:44.633112 containerd[1922]: time="2025-07-15T04:43:44.632519604Z" level=info msg="CreateContainer within sandbox \"edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 15 04:43:44.640487 containerd[1922]: time="2025-07-15T04:43:44.640396465Z" level=info msg="StartContainer for \"d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df\" returns successfully" Jul 15 04:43:44.662523 containerd[1922]: time="2025-07-15T04:43:44.662036240Z" level=info msg="Container 37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:43:44.666133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3794341236.mount: Deactivated successfully. Jul 15 04:43:44.682050 containerd[1922]: time="2025-07-15T04:43:44.681204636Z" level=info msg="CreateContainer within sandbox \"edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86\"" Jul 15 04:43:44.682050 containerd[1922]: time="2025-07-15T04:43:44.681802708Z" level=info msg="StartContainer for \"37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86\"" Jul 15 04:43:44.682677 containerd[1922]: time="2025-07-15T04:43:44.682644254Z" level=info msg="connecting to shim 37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86" address="unix:///run/containerd/s/df303501caa4ebb55b1db2970d3543687fcc92b5b0aab4cb2caac8dcbdeb4a23" protocol=ttrpc version=3 Jul 15 04:43:44.708289 systemd[1]: Started cri-containerd-37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86.scope - libcontainer container 37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86. 
Jul 15 04:43:44.746571 containerd[1922]: time="2025-07-15T04:43:44.746470440Z" level=info msg="StartContainer for \"37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86\" returns successfully" Jul 15 04:43:44.886812 containerd[1922]: time="2025-07-15T04:43:44.886774266Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86\" id:\"1a4c9fe0d6d19c35cc6c13c1ce04ebfb17500964f7a4ac443044e2bfb71c1596\" pid:4168 exited_at:{seconds:1752554624 nanos:885200123}" Jul 15 04:43:44.989340 kubelet[3497]: I0715 04:43:44.989232 3497 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 15 04:43:45.104241 systemd[1]: Created slice kubepods-burstable-pod04f0d867_99cb_4c0b_ad66_f888bd9517a8.slice - libcontainer container kubepods-burstable-pod04f0d867_99cb_4c0b_ad66_f888bd9517a8.slice. Jul 15 04:43:45.111109 systemd[1]: Created slice kubepods-burstable-pod19e0ccda_2de2_437e_9553_ac01f0b927e1.slice - libcontainer container kubepods-burstable-pod19e0ccda_2de2_437e_9553_ac01f0b927e1.slice. Jul 15 04:43:45.193495 kubelet[3497]: I0715 04:43:45.193458 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktnsr\" (UniqueName: \"kubernetes.io/projected/19e0ccda-2de2-437e-9553-ac01f0b927e1-kube-api-access-ktnsr\") pod \"coredns-668d6bf9bc-zb6n8\" (UID: \"19e0ccda-2de2-437e-9553-ac01f0b927e1\") " pod="kube-system/coredns-668d6bf9bc-zb6n8" Jul 15 04:43:45.193495 kubelet[3497]: I0715 04:43:45.193494 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04f0d867-99cb-4c0b-ad66-f888bd9517a8-config-volume\") pod \"coredns-668d6bf9bc-dtp7j\" (UID: \"04f0d867-99cb-4c0b-ad66-f888bd9517a8\") " pod="kube-system/coredns-668d6bf9bc-dtp7j" Jul 15 04:43:45.193495 kubelet[3497]: I0715 04:43:45.193510 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfn4v\" (UniqueName: \"kubernetes.io/projected/04f0d867-99cb-4c0b-ad66-f888bd9517a8-kube-api-access-tfn4v\") pod \"coredns-668d6bf9bc-dtp7j\" (UID: \"04f0d867-99cb-4c0b-ad66-f888bd9517a8\") " pod="kube-system/coredns-668d6bf9bc-dtp7j" Jul 15 04:43:45.193685 kubelet[3497]: I0715 04:43:45.193540 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19e0ccda-2de2-437e-9553-ac01f0b927e1-config-volume\") pod \"coredns-668d6bf9bc-zb6n8\" (UID: \"19e0ccda-2de2-437e-9553-ac01f0b927e1\") " pod="kube-system/coredns-668d6bf9bc-zb6n8" Jul 15 04:43:45.410169 containerd[1922]: time="2025-07-15T04:43:45.410129918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dtp7j,Uid:04f0d867-99cb-4c0b-ad66-f888bd9517a8,Namespace:kube-system,Attempt:0,}" Jul 15 04:43:45.416292 containerd[1922]: time="2025-07-15T04:43:45.416254221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zb6n8,Uid:19e0ccda-2de2-437e-9553-ac01f0b927e1,Namespace:kube-system,Attempt:0,}" Jul 15 04:43:45.674057 kubelet[3497]: I0715 04:43:45.673739 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5frr4" podStartSLOduration=14.24568929 podStartE2EDuration="28.673721661s" podCreationTimestamp="2025-07-15 04:43:17 +0000 UTC" firstStartedPulling="2025-07-15 04:43:17.506188997 +0000 UTC 
m=+7.058387752" lastFinishedPulling="2025-07-15 04:43:31.934221376 +0000 UTC m=+21.486420123" observedRunningTime="2025-07-15 04:43:45.66201755 +0000 UTC m=+35.214216305" watchObservedRunningTime="2025-07-15 04:43:45.673721661 +0000 UTC m=+35.225920408" Jul 15 04:43:47.783841 systemd-networkd[1701]: cilium_host: Link UP Jul 15 04:43:47.786266 systemd-networkd[1701]: cilium_net: Link UP Jul 15 04:43:47.786579 systemd-networkd[1701]: cilium_net: Gained carrier Jul 15 04:43:47.787001 systemd-networkd[1701]: cilium_host: Gained carrier Jul 15 04:43:47.915187 systemd-networkd[1701]: cilium_vxlan: Link UP Jul 15 04:43:47.915452 systemd-networkd[1701]: cilium_vxlan: Gained carrier Jul 15 04:43:48.008990 systemd-networkd[1701]: cilium_host: Gained IPv6LL Jul 15 04:43:48.209925 kernel: NET: Registered PF_ALG protocol family Jul 15 04:43:48.384178 systemd-networkd[1701]: cilium_net: Gained IPv6LL Jul 15 04:43:48.707269 systemd-networkd[1701]: lxc_health: Link UP Jul 15 04:43:48.716907 systemd-networkd[1701]: lxc_health: Gained carrier Jul 15 04:43:48.959971 kernel: eth0: renamed from tmpef19f Jul 15 04:43:48.959829 systemd-networkd[1701]: lxc56113787286e: Link UP Jul 15 04:43:48.961737 systemd-networkd[1701]: lxc033aceed4911: Link UP Jul 15 04:43:48.969763 systemd-networkd[1701]: lxc56113787286e: Gained carrier Jul 15 04:43:48.974878 kernel: eth0: renamed from tmp868b3 Jul 15 04:43:48.977608 systemd-networkd[1701]: lxc033aceed4911: Gained carrier Jul 15 04:43:49.152009 systemd-networkd[1701]: cilium_vxlan: Gained IPv6LL Jul 15 04:43:49.379389 kubelet[3497]: I0715 04:43:49.379090 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-jvlp7" podStartSLOduration=8.215120533 podStartE2EDuration="32.379071903s" podCreationTimestamp="2025-07-15 04:43:17 +0000 UTC" firstStartedPulling="2025-07-15 04:43:17.808621842 +0000 UTC m=+7.360820589" lastFinishedPulling="2025-07-15 04:43:41.972573212 +0000 UTC m=+31.524771959" observedRunningTime="2025-07-15 04:43:45.674408961 +0000 UTC m=+35.226607724" watchObservedRunningTime="2025-07-15 04:43:49.379071903 +0000 UTC m=+38.931270778" Jul 15 04:43:49.856031 systemd-networkd[1701]: lxc_health: Gained IPv6LL Jul 15 04:43:50.496010 systemd-networkd[1701]: lxc033aceed4911: Gained IPv6LL Jul 15 04:43:50.624742 systemd-networkd[1701]: lxc56113787286e: Gained IPv6LL Jul 15 04:43:51.538538 containerd[1922]: time="2025-07-15T04:43:51.538458806Z" level=info msg="connecting to shim 868b3506c84e74190428718399fdf5e50890d0c3a718f98160cad00a5b204e0c" address="unix:///run/containerd/s/9b1d3d99d3b423e5f8b64666f11de94e44f21dea71f3b7116f669869779fcc7f" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:43:51.539386 containerd[1922]: time="2025-07-15T04:43:51.539212043Z" level=info msg="connecting to shim ef19f1620f89600e6b51653fd5ea76e8148ffe665efb63cc0eb5e13467ae5a19" address="unix:///run/containerd/s/56ea83c8360900628682633f2c87928eab8685207f8d458d9319eb914daf036f" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:43:51.569019 systemd[1]: Started cri-containerd-868b3506c84e74190428718399fdf5e50890d0c3a718f98160cad00a5b204e0c.scope - libcontainer container 868b3506c84e74190428718399fdf5e50890d0c3a718f98160cad00a5b204e0c. Jul 15 04:43:51.572061 systemd[1]: Started cri-containerd-ef19f1620f89600e6b51653fd5ea76e8148ffe665efb63cc0eb5e13467ae5a19.scope - libcontainer container ef19f1620f89600e6b51653fd5ea76e8148ffe665efb63cc0eb5e13467ae5a19. 
Jul 15 04:43:51.607900 containerd[1922]: time="2025-07-15T04:43:51.607834482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zb6n8,Uid:19e0ccda-2de2-437e-9553-ac01f0b927e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"868b3506c84e74190428718399fdf5e50890d0c3a718f98160cad00a5b204e0c\"" Jul 15 04:43:51.611089 containerd[1922]: time="2025-07-15T04:43:51.611051285Z" level=info msg="CreateContainer within sandbox \"868b3506c84e74190428718399fdf5e50890d0c3a718f98160cad00a5b204e0c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 04:43:51.612572 containerd[1922]: time="2025-07-15T04:43:51.612488963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dtp7j,Uid:04f0d867-99cb-4c0b-ad66-f888bd9517a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef19f1620f89600e6b51653fd5ea76e8148ffe665efb63cc0eb5e13467ae5a19\"" Jul 15 04:43:51.615953 containerd[1922]: time="2025-07-15T04:43:51.615922102Z" level=info msg="CreateContainer within sandbox \"ef19f1620f89600e6b51653fd5ea76e8148ffe665efb63cc0eb5e13467ae5a19\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 04:43:51.653157 containerd[1922]: time="2025-07-15T04:43:51.652995715Z" level=info msg="Container b88f839bc228255759480ba10c2b2c6e490efbce6eda02d5bb4e1767e18eb395: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:43:51.659015 containerd[1922]: time="2025-07-15T04:43:51.658648451Z" level=info msg="Container 578b7b13c2789d0f7ac22c7613e6a06652480732752385c93a8d03fb06d03bd9: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:43:51.678848 containerd[1922]: time="2025-07-15T04:43:51.678813635Z" level=info msg="CreateContainer within sandbox \"868b3506c84e74190428718399fdf5e50890d0c3a718f98160cad00a5b204e0c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b88f839bc228255759480ba10c2b2c6e490efbce6eda02d5bb4e1767e18eb395\"" Jul 15 04:43:51.679252 containerd[1922]: time="2025-07-15T04:43:51.679177497Z" level=info msg="StartContainer for \"b88f839bc228255759480ba10c2b2c6e490efbce6eda02d5bb4e1767e18eb395\"" Jul 15 04:43:51.679924 containerd[1922]: time="2025-07-15T04:43:51.679897925Z" level=info msg="connecting to shim b88f839bc228255759480ba10c2b2c6e490efbce6eda02d5bb4e1767e18eb395" address="unix:///run/containerd/s/9b1d3d99d3b423e5f8b64666f11de94e44f21dea71f3b7116f669869779fcc7f" protocol=ttrpc version=3 Jul 15 04:43:51.683473 containerd[1922]: time="2025-07-15T04:43:51.683437404Z" level=info msg="CreateContainer within sandbox \"ef19f1620f89600e6b51653fd5ea76e8148ffe665efb63cc0eb5e13467ae5a19\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"578b7b13c2789d0f7ac22c7613e6a06652480732752385c93a8d03fb06d03bd9\"" Jul 15 04:43:51.684891 containerd[1922]: time="2025-07-15T04:43:51.684169031Z" level=info msg="StartContainer for \"578b7b13c2789d0f7ac22c7613e6a06652480732752385c93a8d03fb06d03bd9\"" Jul 15 04:43:51.684891 containerd[1922]: time="2025-07-15T04:43:51.684685891Z" level=info msg="connecting to shim 578b7b13c2789d0f7ac22c7613e6a06652480732752385c93a8d03fb06d03bd9" address="unix:///run/containerd/s/56ea83c8360900628682633f2c87928eab8685207f8d458d9319eb914daf036f" protocol=ttrpc version=3 Jul 15 04:43:51.701270 systemd[1]: Started cri-containerd-578b7b13c2789d0f7ac22c7613e6a06652480732752385c93a8d03fb06d03bd9.scope - libcontainer container 578b7b13c2789d0f7ac22c7613e6a06652480732752385c93a8d03fb06d03bd9. 
Jul 15 04:43:51.702070 systemd[1]: Started cri-containerd-b88f839bc228255759480ba10c2b2c6e490efbce6eda02d5bb4e1767e18eb395.scope - libcontainer container b88f839bc228255759480ba10c2b2c6e490efbce6eda02d5bb4e1767e18eb395. Jul 15 04:43:51.749524 containerd[1922]: time="2025-07-15T04:43:51.749477472Z" level=info msg="StartContainer for \"578b7b13c2789d0f7ac22c7613e6a06652480732752385c93a8d03fb06d03bd9\" returns successfully" Jul 15 04:43:51.750056 containerd[1922]: time="2025-07-15T04:43:51.750032470Z" level=info msg="StartContainer for \"b88f839bc228255759480ba10c2b2c6e490efbce6eda02d5bb4e1767e18eb395\" returns successfully" Jul 15 04:43:52.672500 kubelet[3497]: I0715 04:43:52.672397 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zb6n8" podStartSLOduration=35.672383151 podStartE2EDuration="35.672383151s" podCreationTimestamp="2025-07-15 04:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:43:52.672364806 +0000 UTC m=+42.224563561" watchObservedRunningTime="2025-07-15 04:43:52.672383151 +0000 UTC m=+42.224581898" Jul 15 04:43:52.687927 kubelet[3497]: I0715 04:43:52.687766 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dtp7j" podStartSLOduration=35.687750548 podStartE2EDuration="35.687750548s" podCreationTimestamp="2025-07-15 04:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:43:52.686998152 +0000 UTC m=+42.239196899" watchObservedRunningTime="2025-07-15 04:43:52.687750548 +0000 UTC m=+42.239949319" Jul 15 04:44:47.810220 systemd[1]: Started sshd@7-10.200.20.23:22-10.200.16.10:37840.service - OpenSSH per-connection server daemon (10.200.16.10:37840). Jul 15 04:44:48.293999 sshd[4823]: Accepted publickey for core from 10.200.16.10 port 37840 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:44:48.295160 sshd-session[4823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:44:48.298925 systemd-logind[1891]: New session 10 of user core. Jul 15 04:44:48.307981 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 15 04:44:48.696684 sshd[4826]: Connection closed by 10.200.16.10 port 37840 Jul 15 04:44:48.697252 sshd-session[4823]: pam_unix(sshd:session): session closed for user core Jul 15 04:44:48.700398 systemd[1]: sshd@7-10.200.20.23:22-10.200.16.10:37840.service: Deactivated successfully. Jul 15 04:44:48.702035 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 04:44:48.703662 systemd-logind[1891]: Session 10 logged out. Waiting for processes to exit. Jul 15 04:44:48.704773 systemd-logind[1891]: Removed session 10. Jul 15 04:44:53.780142 systemd[1]: Started sshd@8-10.200.20.23:22-10.200.16.10:47080.service - OpenSSH per-connection server daemon (10.200.16.10:47080). Jul 15 04:44:54.241636 sshd[4839]: Accepted publickey for core from 10.200.16.10 port 47080 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:44:54.242856 sshd-session[4839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:44:54.246378 systemd-logind[1891]: New session 11 of user core. Jul 15 04:44:54.259989 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 15 04:44:54.628844 sshd[4842]: Connection closed by 10.200.16.10 port 47080 Jul 15 04:44:54.629490 sshd-session[4839]: pam_unix(sshd:session): session closed for user core Jul 15 04:44:54.632623 systemd[1]: sshd@8-10.200.20.23:22-10.200.16.10:47080.service: Deactivated successfully. Jul 15 04:44:54.634343 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 04:44:54.635084 systemd-logind[1891]: Session 11 logged out. Waiting for processes to exit. Jul 15 04:44:54.636223 systemd-logind[1891]: Removed session 11. Jul 15 04:44:59.718649 systemd[1]: Started sshd@9-10.200.20.23:22-10.200.16.10:47082.service - OpenSSH per-connection server daemon (10.200.16.10:47082). Jul 15 04:45:00.212929 sshd[4856]: Accepted publickey for core from 10.200.16.10 port 47082 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:45:00.214056 sshd-session[4856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:45:00.217927 systemd-logind[1891]: New session 12 of user core. Jul 15 04:45:00.226978 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 15 04:45:00.605084 sshd[4859]: Connection closed by 10.200.16.10 port 47082 Jul 15 04:45:00.605701 sshd-session[4856]: pam_unix(sshd:session): session closed for user core Jul 15 04:45:00.608892 systemd[1]: sshd@9-10.200.20.23:22-10.200.16.10:47082.service: Deactivated successfully. Jul 15 04:45:00.610876 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 04:45:00.612857 systemd-logind[1891]: Session 12 logged out. Waiting for processes to exit. Jul 15 04:45:00.614028 systemd-logind[1891]: Removed session 12. Jul 15 04:45:05.695752 systemd[1]: Started sshd@10-10.200.20.23:22-10.200.16.10:58962.service - OpenSSH per-connection server daemon (10.200.16.10:58962). Jul 15 04:45:06.179145 sshd[4871]: Accepted publickey for core from 10.200.16.10 port 58962 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:45:06.180244 sshd-session[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:45:06.183795 systemd-logind[1891]: New session 13 of user core. Jul 15 04:45:06.189990 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 15 04:45:06.556990 sshd[4875]: Connection closed by 10.200.16.10 port 58962 Jul 15 04:45:06.557719 sshd-session[4871]: pam_unix(sshd:session): session closed for user core Jul 15 04:45:06.562186 systemd[1]: sshd@10-10.200.20.23:22-10.200.16.10:58962.service: Deactivated successfully. Jul 15 04:45:06.563849 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 04:45:06.565034 systemd-logind[1891]: Session 13 logged out. Waiting for processes to exit. Jul 15 04:45:06.566517 systemd-logind[1891]: Removed session 13. Jul 15 04:45:06.647050 systemd[1]: Started sshd@11-10.200.20.23:22-10.200.16.10:58974.service - OpenSSH per-connection server daemon (10.200.16.10:58974). Jul 15 04:45:07.145230 sshd[4888]: Accepted publickey for core from 10.200.16.10 port 58974 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:45:07.146311 sshd-session[4888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:45:07.150048 systemd-logind[1891]: New session 14 of user core. Jul 15 04:45:07.152973 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 15 04:45:07.569228 sshd[4891]: Connection closed by 10.200.16.10 port 58974 Jul 15 04:45:07.568762 sshd-session[4888]: pam_unix(sshd:session): session closed for user core Jul 15 04:45:07.571444 systemd-logind[1891]: Session 14 logged out. Waiting for processes to exit. Jul 15 04:45:07.571797 systemd[1]: sshd@11-10.200.20.23:22-10.200.16.10:58974.service: Deactivated successfully. Jul 15 04:45:07.573478 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 04:45:07.575452 systemd-logind[1891]: Removed session 14. Jul 15 04:45:07.654457 systemd[1]: Started sshd@12-10.200.20.23:22-10.200.16.10:58988.service - OpenSSH per-connection server daemon (10.200.16.10:58988). Jul 15 04:45:08.147131 sshd[4900]: Accepted publickey for core from 10.200.16.10 port 58988 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:45:08.148658 sshd-session[4900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:45:08.152432 systemd-logind[1891]: New session 15 of user core. Jul 15 04:45:08.156980 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 15 04:45:08.526948 sshd[4903]: Connection closed by 10.200.16.10 port 58988 Jul 15 04:45:08.526456 sshd-session[4900]: pam_unix(sshd:session): session closed for user core Jul 15 04:45:08.529551 systemd-logind[1891]: Session 15 logged out. Waiting for processes to exit. Jul 15 04:45:08.529717 systemd[1]: sshd@12-10.200.20.23:22-10.200.16.10:58988.service: Deactivated successfully. Jul 15 04:45:08.532462 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 04:45:08.534327 systemd-logind[1891]: Removed session 15. Jul 15 04:45:13.615067 systemd[1]: Started sshd@13-10.200.20.23:22-10.200.16.10:55296.service - OpenSSH per-connection server daemon (10.200.16.10:55296). Jul 15 04:45:14.106581 sshd[4916]: Accepted publickey for core from 10.200.16.10 port 55296 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:45:14.107662 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:45:14.111366 systemd-logind[1891]: New session 16 of user core. Jul 15 04:45:14.112987 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 15 04:45:14.498795 sshd[4919]: Connection closed by 10.200.16.10 port 55296 Jul 15 04:45:14.499391 sshd-session[4916]: pam_unix(sshd:session): session closed for user core Jul 15 04:45:14.502660 systemd-logind[1891]: Session 16 logged out. Waiting for processes to exit. Jul 15 04:45:14.502884 systemd[1]: sshd@13-10.200.20.23:22-10.200.16.10:55296.service: Deactivated successfully. Jul 15 04:45:14.504489 systemd[1]: session-16.scope: Deactivated successfully. Jul 15 04:45:14.507131 systemd-logind[1891]: Removed session 16. Jul 15 04:45:14.591236 systemd[1]: Started sshd@14-10.200.20.23:22-10.200.16.10:55310.service - OpenSSH per-connection server daemon (10.200.16.10:55310). Jul 15 04:45:15.079743 sshd[4930]: Accepted publickey for core from 10.200.16.10 port 55310 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:45:15.080837 sshd-session[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:45:15.084903 systemd-logind[1891]: New session 17 of user core. Jul 15 04:45:15.090968 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jul 15 04:45:15.511804 sshd[4933]: Connection closed by 10.200.16.10 port 55310 Jul 15 04:45:15.512632 sshd-session[4930]: pam_unix(sshd:session): session closed for user core Jul 15 04:45:15.515732 systemd-logind[1891]: Session 17 logged out. Waiting for processes to exit. Jul 15 04:45:15.515904 systemd[1]: sshd@14-10.200.20.23:22-10.200.16.10:55310.service: Deactivated successfully. Jul 15 04:45:15.517402 systemd[1]: session-17.scope: Deactivated successfully. Jul 15 04:45:15.519257 systemd-logind[1891]: Removed session 17. Jul 15 04:45:15.599648 systemd[1]: Started sshd@15-10.200.20.23:22-10.200.16.10:55318.service - OpenSSH per-connection server daemon (10.200.16.10:55318). Jul 15 04:45:16.094486 sshd[4943]: Accepted publickey for core from 10.200.16.10 port 55318 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:45:16.095582 sshd-session[4943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:45:16.099405 systemd-logind[1891]: New session 18 of user core. Jul 15 04:45:16.107987 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 15 04:45:17.166110 sshd[4946]: Connection closed by 10.200.16.10 port 55318 Jul 15 04:45:17.166703 sshd-session[4943]: pam_unix(sshd:session): session closed for user core Jul 15 04:45:17.170840 systemd-logind[1891]: Session 18 logged out. Waiting for processes to exit. Jul 15 04:45:17.171189 systemd[1]: sshd@15-10.200.20.23:22-10.200.16.10:55318.service: Deactivated successfully. Jul 15 04:45:17.172974 systemd[1]: session-18.scope: Deactivated successfully. Jul 15 04:45:17.174572 systemd-logind[1891]: Removed session 18. Jul 15 04:45:17.248463 systemd[1]: Started sshd@16-10.200.20.23:22-10.200.16.10:55324.service - OpenSSH per-connection server daemon (10.200.16.10:55324). Jul 15 04:45:17.710738 sshd[4963]: Accepted publickey for core from 10.200.16.10 port 55324 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:45:17.711819 sshd-session[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:45:17.715596 systemd-logind[1891]: New session 19 of user core. Jul 15 04:45:17.719999 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 15 04:45:18.165155 sshd[4968]: Connection closed by 10.200.16.10 port 55324 Jul 15 04:45:18.165480 sshd-session[4963]: pam_unix(sshd:session): session closed for user core Jul 15 04:45:18.169181 systemd[1]: sshd@16-10.200.20.23:22-10.200.16.10:55324.service: Deactivated successfully. Jul 15 04:45:18.171218 systemd[1]: session-19.scope: Deactivated successfully. Jul 15 04:45:18.172167 systemd-logind[1891]: Session 19 logged out. Waiting for processes to exit. Jul 15 04:45:18.173184 systemd-logind[1891]: Removed session 19. Jul 15 04:45:18.253646 systemd[1]: Started sshd@17-10.200.20.23:22-10.200.16.10:55332.service - OpenSSH per-connection server daemon (10.200.16.10:55332). Jul 15 04:45:18.746579 sshd[4978]: Accepted publickey for core from 10.200.16.10 port 55332 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:45:18.747665 sshd-session[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:45:18.751428 systemd-logind[1891]: New session 20 of user core. Jul 15 04:45:18.755968 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 15 04:45:19.138978 sshd[4981]: Connection closed by 10.200.16.10 port 55332 Jul 15 04:45:19.139518 sshd-session[4978]: pam_unix(sshd:session): session closed for user core Jul 15 04:45:19.142661 systemd[1]: sshd@17-10.200.20.23:22-10.200.16.10:55332.service: Deactivated successfully. Jul 15 04:45:19.144394 systemd[1]: session-20.scope: Deactivated successfully. Jul 15 04:45:19.145102 systemd-logind[1891]: Session 20 logged out. Waiting for processes to exit. Jul 15 04:45:19.146151 systemd-logind[1891]: Removed session 20. Jul 15 04:45:24.233467 systemd[1]: Started sshd@18-10.200.20.23:22-10.200.16.10:60862.service - OpenSSH per-connection server daemon (10.200.16.10:60862). Jul 15 04:45:24.713241 sshd[4994]: Accepted publickey for core from 10.200.16.10 port 60862 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:45:24.714370 sshd-session[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:45:24.717934 systemd-logind[1891]: New session 21 of user core. Jul 15 04:45:24.728097 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 15 04:45:25.091320 sshd[4997]: Connection closed by 10.200.16.10 port 60862 Jul 15 04:45:25.091847 sshd-session[4994]: pam_unix(sshd:session): session closed for user core Jul 15 04:45:25.095130 systemd[1]: sshd@18-10.200.20.23:22-10.200.16.10:60862.service: Deactivated successfully. Jul 15 04:45:25.096699 systemd[1]: session-21.scope: Deactivated successfully. Jul 15 04:45:25.097363 systemd-logind[1891]: Session 21 logged out. Waiting for processes to exit. Jul 15 04:45:25.098573 systemd-logind[1891]: Removed session 21. Jul 15 04:45:30.188843 systemd[1]: Started sshd@19-10.200.20.23:22-10.200.16.10:34168.service - OpenSSH per-connection server daemon (10.200.16.10:34168). Jul 15 04:45:30.683523 sshd[5008]: Accepted publickey for core from 10.200.16.10 port 34168 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:45:30.684585 sshd-session[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:45:30.688125 systemd-logind[1891]: New session 22 of user core. Jul 15 04:45:30.699981 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 15 04:45:31.066831 sshd[5011]: Connection closed by 10.200.16.10 port 34168 Jul 15 04:45:31.066288 sshd-session[5008]: pam_unix(sshd:session): session closed for user core Jul 15 04:45:31.069627 systemd[1]: sshd@19-10.200.20.23:22-10.200.16.10:34168.service: Deactivated successfully. Jul 15 04:45:31.071687 systemd[1]: session-22.scope: Deactivated successfully. Jul 15 04:45:31.072469 systemd-logind[1891]: Session 22 logged out. Waiting for processes to exit. Jul 15 04:45:31.075506 systemd-logind[1891]: Removed session 22. Jul 15 04:45:36.150980 systemd[1]: Started sshd@20-10.200.20.23:22-10.200.16.10:34180.service - OpenSSH per-connection server daemon (10.200.16.10:34180). Jul 15 04:45:36.616439 sshd[5022]: Accepted publickey for core from 10.200.16.10 port 34180 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:45:36.617573 sshd-session[5022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:45:36.621313 systemd-logind[1891]: New session 23 of user core. Jul 15 04:45:36.629998 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 15 04:45:36.995708 sshd[5025]: Connection closed by 10.200.16.10 port 34180 Jul 15 04:45:36.996289 sshd-session[5022]: pam_unix(sshd:session): session closed for user core Jul 15 04:45:36.999487 systemd[1]: sshd@20-10.200.20.23:22-10.200.16.10:34180.service: Deactivated successfully. Jul 15 04:45:37.000834 systemd[1]: session-23.scope: Deactivated successfully. Jul 15 04:45:37.002233 systemd-logind[1891]: Session 23 logged out. Waiting for processes to exit. Jul 15 04:45:37.003894 systemd-logind[1891]: Removed session 23. Jul 15 04:45:37.084577 systemd[1]: Started sshd@21-10.200.20.23:22-10.200.16.10:34188.service - OpenSSH per-connection server daemon (10.200.16.10:34188). Jul 15 04:45:37.581311 sshd[5036]: Accepted publickey for core from 10.200.16.10 port 34188 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:45:37.582449 sshd-session[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:45:37.586245 systemd-logind[1891]: New session 24 of user core. Jul 15 04:45:37.594991 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 15 04:45:39.146432 containerd[1922]: time="2025-07-15T04:45:39.146305790Z" level=info msg="StopContainer for \"d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df\" with timeout 30 (s)" Jul 15 04:45:39.147856 containerd[1922]: time="2025-07-15T04:45:39.147792173Z" level=info msg="Stop container \"d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df\" with signal terminated" Jul 15 04:45:39.158356 containerd[1922]: time="2025-07-15T04:45:39.158322173Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 04:45:39.159655 systemd[1]: cri-containerd-d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df.scope: Deactivated successfully. 
Jul 15 04:45:39.161720 containerd[1922]: time="2025-07-15T04:45:39.161678026Z" level=info msg="received exit event container_id:\"d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df\" id:\"d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df\" pid:4103 exited_at:{seconds:1752554739 nanos:161391303}" Jul 15 04:45:39.162005 containerd[1922]: time="2025-07-15T04:45:39.161836448Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df\" id:\"d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df\" pid:4103 exited_at:{seconds:1752554739 nanos:161391303}" Jul 15 04:45:39.163877 containerd[1922]: time="2025-07-15T04:45:39.163672708Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86\" id:\"712d0fa4e00be44c9e6bae223584f0fd1bff6099276ac597674b8d0c589ac832\" pid:5058 exited_at:{seconds:1752554739 nanos:163092223}" Jul 15 04:45:39.168735 containerd[1922]: time="2025-07-15T04:45:39.168469487Z" level=info msg="StopContainer for \"37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86\" with timeout 2 (s)" Jul 15 04:45:39.170333 containerd[1922]: time="2025-07-15T04:45:39.170303835Z" level=info msg="Stop container \"37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86\" with signal terminated" Jul 15 04:45:39.176562 systemd-networkd[1701]: lxc_health: Link DOWN Jul 15 04:45:39.176970 systemd-networkd[1701]: lxc_health: Lost carrier Jul 15 04:45:39.185456 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df-rootfs.mount: Deactivated successfully. Jul 15 04:45:39.191120 systemd[1]: cri-containerd-37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86.scope: Deactivated successfully. Jul 15 04:45:39.191369 systemd[1]: cri-containerd-37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86.scope: Consumed 4.327s CPU time, 124M memory peak, 136K read from disk, 12.9M written to disk. Jul 15 04:45:39.193112 containerd[1922]: time="2025-07-15T04:45:39.193080492Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86\" id:\"37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86\" pid:4137 exited_at:{seconds:1752554739 nanos:192185210}" Jul 15 04:45:39.193197 containerd[1922]: time="2025-07-15T04:45:39.193140446Z" level=info msg="received exit event container_id:\"37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86\" id:\"37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86\" pid:4137 exited_at:{seconds:1752554739 nanos:192185210}" Jul 15 04:45:39.208216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86-rootfs.mount: Deactivated successfully. 
Jul 15 04:45:39.245350 containerd[1922]: time="2025-07-15T04:45:39.245308517Z" level=info msg="StopContainer for \"d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df\" returns successfully" Jul 15 04:45:39.246041 containerd[1922]: time="2025-07-15T04:45:39.246011751Z" level=info msg="StopPodSandbox for \"f01bef271b4d2baca9e0a1b2e55ce17657f4e8a6bb4f6350454761b25707c52e\"" Jul 15 04:45:39.246107 containerd[1922]: time="2025-07-15T04:45:39.246066473Z" level=info msg="Container to stop \"d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 04:45:39.249767 containerd[1922]: time="2025-07-15T04:45:39.249715361Z" level=info msg="StopContainer for \"37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86\" returns successfully" Jul 15 04:45:39.250164 containerd[1922]: time="2025-07-15T04:45:39.250103735Z" level=info msg="StopPodSandbox for \"edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c\"" Jul 15 04:45:39.250164 containerd[1922]: time="2025-07-15T04:45:39.250150217Z" level=info msg="Container to stop \"cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 04:45:39.250164 containerd[1922]: time="2025-07-15T04:45:39.250158241Z" level=info msg="Container to stop \"b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 04:45:39.250164 containerd[1922]: time="2025-07-15T04:45:39.250164130Z" level=info msg="Container to stop \"43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 04:45:39.250164 containerd[1922]: time="2025-07-15T04:45:39.250169938Z" level=info msg="Container to stop \"696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 04:45:39.250287 containerd[1922]: time="2025-07-15T04:45:39.250175642Z" level=info msg="Container to stop \"37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 04:45:39.252758 systemd[1]: cri-containerd-f01bef271b4d2baca9e0a1b2e55ce17657f4e8a6bb4f6350454761b25707c52e.scope: Deactivated successfully. Jul 15 04:45:39.254261 containerd[1922]: time="2025-07-15T04:45:39.254228961Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f01bef271b4d2baca9e0a1b2e55ce17657f4e8a6bb4f6350454761b25707c52e\" id:\"f01bef271b4d2baca9e0a1b2e55ce17657f4e8a6bb4f6350454761b25707c52e\" pid:3771 exit_status:137 exited_at:{seconds:1752554739 nanos:253811634}" Jul 15 04:45:39.257818 systemd[1]: cri-containerd-edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c.scope: Deactivated successfully. Jul 15 04:45:39.278435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c-rootfs.mount: Deactivated successfully. Jul 15 04:45:39.282004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f01bef271b4d2baca9e0a1b2e55ce17657f4e8a6bb4f6350454761b25707c52e-rootfs.mount: Deactivated successfully. 
Jul 15 04:45:39.299572 containerd[1922]: time="2025-07-15T04:45:39.299483263Z" level=info msg="TaskExit event in podsandbox handler container_id:\"edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c\" id:\"edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c\" pid:3652 exit_status:137 exited_at:{seconds:1752554739 nanos:263026561}" Jul 15 04:45:39.301324 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f01bef271b4d2baca9e0a1b2e55ce17657f4e8a6bb4f6350454761b25707c52e-shm.mount: Deactivated successfully. Jul 15 04:45:39.301788 containerd[1922]: time="2025-07-15T04:45:39.301702337Z" level=info msg="received exit event sandbox_id:\"f01bef271b4d2baca9e0a1b2e55ce17657f4e8a6bb4f6350454761b25707c52e\" exit_status:137 exited_at:{seconds:1752554739 nanos:253811634}" Jul 15 04:45:39.301931 containerd[1922]: time="2025-07-15T04:45:39.301907761Z" level=info msg="received exit event sandbox_id:\"edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c\" exit_status:137 exited_at:{seconds:1752554739 nanos:263026561}" Jul 15 04:45:39.303949 containerd[1922]: time="2025-07-15T04:45:39.302196236Z" level=info msg="shim disconnected" id=f01bef271b4d2baca9e0a1b2e55ce17657f4e8a6bb4f6350454761b25707c52e namespace=k8s.io Jul 15 04:45:39.303949 containerd[1922]: time="2025-07-15T04:45:39.303762046Z" level=warning msg="cleaning up after shim disconnected" id=f01bef271b4d2baca9e0a1b2e55ce17657f4e8a6bb4f6350454761b25707c52e namespace=k8s.io Jul 15 04:45:39.303949 containerd[1922]: time="2025-07-15T04:45:39.303785039Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 15 04:45:39.304436 containerd[1922]: time="2025-07-15T04:45:39.304395822Z" level=info msg="shim disconnected" id=edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c namespace=k8s.io Jul 15 04:45:39.304603 containerd[1922]: time="2025-07-15T04:45:39.304416998Z" level=warning msg="cleaning up after shim disconnected" id=edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c namespace=k8s.io Jul 15 04:45:39.304603 containerd[1922]: time="2025-07-15T04:45:39.304511146Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 15 04:45:39.305737 containerd[1922]: time="2025-07-15T04:45:39.305704334Z" level=info msg="TearDown network for sandbox \"edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c\" successfully" Jul 15 04:45:39.305737 containerd[1922]: time="2025-07-15T04:45:39.305727887Z" level=info msg="StopPodSandbox for \"edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c\" returns successfully" Jul 15 04:45:39.305824 containerd[1922]: time="2025-07-15T04:45:39.302758009Z" level=info msg="TearDown network for sandbox \"f01bef271b4d2baca9e0a1b2e55ce17657f4e8a6bb4f6350454761b25707c52e\" successfully" Jul 15 04:45:39.305824 containerd[1922]: time="2025-07-15T04:45:39.305773121Z" level=info msg="StopPodSandbox for \"f01bef271b4d2baca9e0a1b2e55ce17657f4e8a6bb4f6350454761b25707c52e\" returns successfully" Jul 15 04:45:39.467338 kubelet[3497]: I0715 04:45:39.467283 3497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-lib-modules\") pod \"a8242d21-e741-47a7-8237-0adc0e3e9fec\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " Jul 15 04:45:39.467338 kubelet[3497]: I0715 04:45:39.467334 3497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-host-proc-sys-net\") pod \"a8242d21-e741-47a7-8237-0adc0e3e9fec\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " Jul 15 04:45:39.467338 kubelet[3497]: I0715 04:45:39.467346 3497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-xtables-lock\") pod \"a8242d21-e741-47a7-8237-0adc0e3e9fec\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " Jul 15 04:45:39.467782 kubelet[3497]: I0715 04:45:39.467365 3497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwnj2\" (UniqueName: \"kubernetes.io/projected/f1f7f9c9-f118-42b7-b358-1687a352ea1a-kube-api-access-pwnj2\") pod \"f1f7f9c9-f118-42b7-b358-1687a352ea1a\" (UID: \"f1f7f9c9-f118-42b7-b358-1687a352ea1a\") " Jul 15 04:45:39.467782 kubelet[3497]: I0715 04:45:39.467400 3497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8242d21-e741-47a7-8237-0adc0e3e9fec-clustermesh-secrets\") pod \"a8242d21-e741-47a7-8237-0adc0e3e9fec\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " Jul 15 04:45:39.467782 kubelet[3497]: I0715 04:45:39.467409 3497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-host-proc-sys-kernel\") pod \"a8242d21-e741-47a7-8237-0adc0e3e9fec\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " Jul 15 04:45:39.467782 kubelet[3497]: I0715 04:45:39.467419 3497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-hostproc\") pod \"a8242d21-e741-47a7-8237-0adc0e3e9fec\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " Jul 15 04:45:39.467782 kubelet[3497]: I0715 04:45:39.467429 3497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8242d21-e741-47a7-8237-0adc0e3e9fec-cilium-config-path\") pod \"a8242d21-e741-47a7-8237-0adc0e3e9fec\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " Jul 15 04:45:39.467782 kubelet[3497]: I0715 04:45:39.467440 3497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8242d21-e741-47a7-8237-0adc0e3e9fec-hubble-tls\") pod \"a8242d21-e741-47a7-8237-0adc0e3e9fec\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " Jul 15 04:45:39.467925 kubelet[3497]: I0715 04:45:39.467450 3497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-cilium-run\") pod \"a8242d21-e741-47a7-8237-0adc0e3e9fec\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " Jul 15 04:45:39.467925 kubelet[3497]: I0715 04:45:39.467459 3497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-cni-path\") pod \"a8242d21-e741-47a7-8237-0adc0e3e9fec\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " Jul 15 04:45:39.467925 kubelet[3497]: I0715 04:45:39.467468 3497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-etc-cni-netd\") pod \"a8242d21-e741-47a7-8237-0adc0e3e9fec\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " Jul 15 04:45:39.467925 kubelet[3497]: I0715 04:45:39.467480 3497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlsb8\" (UniqueName: \"kubernetes.io/projected/a8242d21-e741-47a7-8237-0adc0e3e9fec-kube-api-access-hlsb8\") pod \"a8242d21-e741-47a7-8237-0adc0e3e9fec\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " Jul 15 04:45:39.467925 kubelet[3497]: I0715 04:45:39.467492 3497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1f7f9c9-f118-42b7-b358-1687a352ea1a-cilium-config-path\") pod \"f1f7f9c9-f118-42b7-b358-1687a352ea1a\" (UID: \"f1f7f9c9-f118-42b7-b358-1687a352ea1a\") " Jul 15 04:45:39.467925 kubelet[3497]: I0715 04:45:39.467501 3497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-bpf-maps\") pod \"a8242d21-e741-47a7-8237-0adc0e3e9fec\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " Jul 15 04:45:39.468014 kubelet[3497]: I0715 04:45:39.467518 3497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-cilium-cgroup\") pod \"a8242d21-e741-47a7-8237-0adc0e3e9fec\" (UID: \"a8242d21-e741-47a7-8237-0adc0e3e9fec\") " Jul 15 04:45:39.468014 kubelet[3497]: I0715 04:45:39.467594 3497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a8242d21-e741-47a7-8237-0adc0e3e9fec" (UID: "a8242d21-e741-47a7-8237-0adc0e3e9fec"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 04:45:39.468014 kubelet[3497]: I0715 04:45:39.467625 3497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a8242d21-e741-47a7-8237-0adc0e3e9fec" (UID: "a8242d21-e741-47a7-8237-0adc0e3e9fec"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 04:45:39.468014 kubelet[3497]: I0715 04:45:39.467634 3497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a8242d21-e741-47a7-8237-0adc0e3e9fec" (UID: "a8242d21-e741-47a7-8237-0adc0e3e9fec"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 04:45:39.468014 kubelet[3497]: I0715 04:45:39.467644 3497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a8242d21-e741-47a7-8237-0adc0e3e9fec" (UID: "a8242d21-e741-47a7-8237-0adc0e3e9fec"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 04:45:39.468417 kubelet[3497]: I0715 04:45:39.468178 3497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a8242d21-e741-47a7-8237-0adc0e3e9fec" (UID: "a8242d21-e741-47a7-8237-0adc0e3e9fec"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 04:45:39.469476 kubelet[3497]: I0715 04:45:39.469455 3497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-cni-path" (OuterVolumeSpecName: "cni-path") pod "a8242d21-e741-47a7-8237-0adc0e3e9fec" (UID: "a8242d21-e741-47a7-8237-0adc0e3e9fec"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 04:45:39.469594 kubelet[3497]: I0715 04:45:39.469581 3497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a8242d21-e741-47a7-8237-0adc0e3e9fec" (UID: "a8242d21-e741-47a7-8237-0adc0e3e9fec"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 04:45:39.469684 kubelet[3497]: I0715 04:45:39.469659 3497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a8242d21-e741-47a7-8237-0adc0e3e9fec" (UID: "a8242d21-e741-47a7-8237-0adc0e3e9fec"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 04:45:39.469724 kubelet[3497]: I0715 04:45:39.469690 3497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-hostproc" (OuterVolumeSpecName: "hostproc") pod "a8242d21-e741-47a7-8237-0adc0e3e9fec" (UID: "a8242d21-e741-47a7-8237-0adc0e3e9fec"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 04:45:39.470175 kubelet[3497]: I0715 04:45:39.470118 3497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a8242d21-e741-47a7-8237-0adc0e3e9fec" (UID: "a8242d21-e741-47a7-8237-0adc0e3e9fec"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 04:45:39.472847 kubelet[3497]: I0715 04:45:39.472811 3497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8242d21-e741-47a7-8237-0adc0e3e9fec-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a8242d21-e741-47a7-8237-0adc0e3e9fec" (UID: "a8242d21-e741-47a7-8237-0adc0e3e9fec"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 04:45:39.473043 kubelet[3497]: I0715 04:45:39.473025 3497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1f7f9c9-f118-42b7-b358-1687a352ea1a-kube-api-access-pwnj2" (OuterVolumeSpecName: "kube-api-access-pwnj2") pod "f1f7f9c9-f118-42b7-b358-1687a352ea1a" (UID: "f1f7f9c9-f118-42b7-b358-1687a352ea1a"). InnerVolumeSpecName "kube-api-access-pwnj2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 04:45:39.473180 kubelet[3497]: I0715 04:45:39.473142 3497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8242d21-e741-47a7-8237-0adc0e3e9fec-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a8242d21-e741-47a7-8237-0adc0e3e9fec" (UID: "a8242d21-e741-47a7-8237-0adc0e3e9fec"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 04:45:39.473223 kubelet[3497]: I0715 04:45:39.473196 3497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8242d21-e741-47a7-8237-0adc0e3e9fec-kube-api-access-hlsb8" (OuterVolumeSpecName: "kube-api-access-hlsb8") pod "a8242d21-e741-47a7-8237-0adc0e3e9fec" (UID: "a8242d21-e741-47a7-8237-0adc0e3e9fec"). InnerVolumeSpecName "kube-api-access-hlsb8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 04:45:39.473340 kubelet[3497]: I0715 04:45:39.473297 3497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8242d21-e741-47a7-8237-0adc0e3e9fec-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a8242d21-e741-47a7-8237-0adc0e3e9fec" (UID: "a8242d21-e741-47a7-8237-0adc0e3e9fec"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 04:45:39.473930 kubelet[3497]: I0715 04:45:39.473882 3497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1f7f9c9-f118-42b7-b358-1687a352ea1a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f1f7f9c9-f118-42b7-b358-1687a352ea1a" (UID: "f1f7f9c9-f118-42b7-b358-1687a352ea1a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 04:45:39.568589 kubelet[3497]: I0715 04:45:39.568443 3497 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-host-proc-sys-net\") on node \"ci-4396.0.0-n-efed024aac\" DevicePath \"\"" Jul 15 04:45:39.568589 kubelet[3497]: I0715 04:45:39.568478 3497 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-xtables-lock\") on node \"ci-4396.0.0-n-efed024aac\" DevicePath \"\"" Jul 15 04:45:39.568589 kubelet[3497]: I0715 04:45:39.568485 3497 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8242d21-e741-47a7-8237-0adc0e3e9fec-clustermesh-secrets\") on node \"ci-4396.0.0-n-efed024aac\" DevicePath \"\"" Jul 15 04:45:39.568589 kubelet[3497]: I0715 04:45:39.568493 3497 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pwnj2\" (UniqueName: \"kubernetes.io/projected/f1f7f9c9-f118-42b7-b358-1687a352ea1a-kube-api-access-pwnj2\") on node \"ci-4396.0.0-n-efed024aac\" DevicePath \"\"" Jul 15 04:45:39.568589 kubelet[3497]: I0715 04:45:39.568499 3497 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8242d21-e741-47a7-8237-0adc0e3e9fec-hubble-tls\") on node \"ci-4396.0.0-n-efed024aac\" DevicePath \"\"" Jul 15 04:45:39.568589 kubelet[3497]: I0715 04:45:39.568506 3497 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-host-proc-sys-kernel\") on node \"ci-4396.0.0-n-efed024aac\" DevicePath \"\"" Jul 15 04:45:39.568589 kubelet[3497]: I0715 04:45:39.568511 3497 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-hostproc\") on node \"ci-4396.0.0-n-efed024aac\" DevicePath \"\"" Jul 15 04:45:39.568589 kubelet[3497]: I0715 04:45:39.568517 3497 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8242d21-e741-47a7-8237-0adc0e3e9fec-cilium-config-path\") on node \"ci-4396.0.0-n-efed024aac\" DevicePath \"\"" Jul 15 04:45:39.568892 kubelet[3497]: I0715 04:45:39.568525 3497 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-cilium-run\") on node \"ci-4396.0.0-n-efed024aac\" DevicePath \"\"" Jul 15 04:45:39.568892 kubelet[3497]: I0715 04:45:39.568534 3497 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-cni-path\") on node \"ci-4396.0.0-n-efed024aac\" DevicePath \"\"" Jul 15 04:45:39.568892 kubelet[3497]: I0715 04:45:39.568539 3497 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-etc-cni-netd\") on node \"ci-4396.0.0-n-efed024aac\" DevicePath \"\"" Jul 15 04:45:39.568892 kubelet[3497]: I0715 04:45:39.568544 3497 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-bpf-maps\") on node \"ci-4396.0.0-n-efed024aac\" DevicePath \"\"" Jul 15 04:45:39.568892 kubelet[3497]: I0715 04:45:39.568549 3497 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hlsb8\" (UniqueName: \"kubernetes.io/projected/a8242d21-e741-47a7-8237-0adc0e3e9fec-kube-api-access-hlsb8\") on node \"ci-4396.0.0-n-efed024aac\" DevicePath \"\"" Jul 15 04:45:39.568892 kubelet[3497]: I0715 04:45:39.568555 3497 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1f7f9c9-f118-42b7-b358-1687a352ea1a-cilium-config-path\") on node \"ci-4396.0.0-n-efed024aac\" DevicePath \"\"" Jul 15 04:45:39.568892 kubelet[3497]: I0715 04:45:39.568560 3497 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-cilium-cgroup\") on node \"ci-4396.0.0-n-efed024aac\" DevicePath \"\"" Jul 15 04:45:39.568892 kubelet[3497]: I0715 04:45:39.568567 3497 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8242d21-e741-47a7-8237-0adc0e3e9fec-lib-modules\") on node \"ci-4396.0.0-n-efed024aac\" DevicePath \"\"" Jul 15 04:45:39.851622 kubelet[3497]: I0715 04:45:39.851444 3497 scope.go:117] "RemoveContainer" containerID="37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86" Jul 15 04:45:39.856769 systemd[1]: Removed slice kubepods-burstable-poda8242d21_e741_47a7_8237_0adc0e3e9fec.slice - libcontainer container kubepods-burstable-poda8242d21_e741_47a7_8237_0adc0e3e9fec.slice. Jul 15 04:45:39.856868 systemd[1]: kubepods-burstable-poda8242d21_e741_47a7_8237_0adc0e3e9fec.slice: Consumed 4.384s CPU time, 124.4M memory peak, 136K read from disk, 12.9M written to disk. Jul 15 04:45:39.858808 containerd[1922]: time="2025-07-15T04:45:39.858131774Z" level=info msg="RemoveContainer for \"37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86\"" Jul 15 04:45:39.863257 systemd[1]: Removed slice kubepods-besteffort-podf1f7f9c9_f118_42b7_b358_1687a352ea1a.slice - libcontainer container kubepods-besteffort-podf1f7f9c9_f118_42b7_b358_1687a352ea1a.slice. 
Jul 15 04:45:39.873378 containerd[1922]: time="2025-07-15T04:45:39.873324988Z" level=info msg="RemoveContainer for \"37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86\" returns successfully" Jul 15 04:45:39.873804 kubelet[3497]: I0715 04:45:39.873751 3497 scope.go:117] "RemoveContainer" containerID="696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb" Jul 15 04:45:39.877001 containerd[1922]: time="2025-07-15T04:45:39.876970867Z" level=info msg="RemoveContainer for \"696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb\"" Jul 15 04:45:39.891813 containerd[1922]: time="2025-07-15T04:45:39.891774659Z" level=info msg="RemoveContainer for \"696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb\" returns successfully" Jul 15 04:45:39.892227 kubelet[3497]: I0715 04:45:39.892196 3497 scope.go:117] "RemoveContainer" containerID="b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a" Jul 15 04:45:39.894759 containerd[1922]: time="2025-07-15T04:45:39.894717096Z" level=info msg="RemoveContainer for \"b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a\"" Jul 15 04:45:39.906612 containerd[1922]: time="2025-07-15T04:45:39.906570754Z" level=info msg="RemoveContainer for \"b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a\" returns successfully" Jul 15 04:45:39.906867 kubelet[3497]: I0715 04:45:39.906836 3497 scope.go:117] "RemoveContainer" containerID="cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097" Jul 15 04:45:39.908341 containerd[1922]: time="2025-07-15T04:45:39.908249320Z" level=info msg="RemoveContainer for \"cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097\"" Jul 15 04:45:39.921077 containerd[1922]: time="2025-07-15T04:45:39.921040949Z" level=info msg="RemoveContainer for \"cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097\" returns successfully" Jul 15 04:45:39.921295 kubelet[3497]: I0715 04:45:39.921270 3497 scope.go:117] "RemoveContainer" containerID="43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80" Jul 15 04:45:39.923047 containerd[1922]: time="2025-07-15T04:45:39.923000902Z" level=info msg="RemoveContainer for \"43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80\"" Jul 15 04:45:39.933886 containerd[1922]: time="2025-07-15T04:45:39.933816481Z" level=info msg="RemoveContainer for \"43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80\" returns successfully" Jul 15 04:45:39.934066 kubelet[3497]: I0715 04:45:39.934038 3497 scope.go:117] "RemoveContainer" containerID="37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86" Jul 15 04:45:39.934295 containerd[1922]: time="2025-07-15T04:45:39.934262281Z" level=error msg="ContainerStatus for \"37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86\": not found" Jul 15 04:45:39.934564 kubelet[3497]: E0715 04:45:39.934409 3497 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86\": not found" containerID="37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86" Jul 15 04:45:39.934564 kubelet[3497]: I0715 04:45:39.934438 3497 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86"} err="failed to get container status \"37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86\": rpc error: code = NotFound desc = an error occurred when try to find container \"37daf7a26e634212d6642c4f6e3daa1c92cd2524120a83fbe162fdbb4570ce86\": not found" Jul 15 04:45:39.934564 kubelet[3497]: I0715 04:45:39.934496 3497 scope.go:117] "RemoveContainer" containerID="696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb" Jul 15 04:45:39.934668 containerd[1922]: time="2025-07-15T04:45:39.934619751Z" level=error msg="ContainerStatus for \"696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb\": not found" Jul 15 04:45:39.934790 kubelet[3497]: E0715 04:45:39.934762 3497 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb\": not found" containerID="696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb" Jul 15 04:45:39.934878 kubelet[3497]: I0715 04:45:39.934840 3497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb"} err="failed to get container status \"696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"696e239145cdc88a775fe3841b9d42465a8d451c0b668f64dbe861b8a9c057fb\": not found" Jul 15 04:45:39.935006 kubelet[3497]: I0715 04:45:39.934933 3497 scope.go:117] "RemoveContainer" containerID="b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a" Jul 15 04:45:39.935187 containerd[1922]: time="2025-07-15T04:45:39.935154091Z" level=error msg="ContainerStatus for \"b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a\": not found" Jul 15 04:45:39.935307 kubelet[3497]: E0715 04:45:39.935283 3497 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a\": not found" containerID="b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a" Jul 15 04:45:39.935350 kubelet[3497]: I0715 04:45:39.935310 3497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a"} err="failed to get container status \"b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a\": rpc error: code = NotFound desc = an error occurred when try to find container \"b3465385db2c28ac45e5bbe5cfe67c57f9fefea83603224148793d950c9ada8a\": not found" Jul 15 04:45:39.935350 kubelet[3497]: I0715 04:45:39.935324 3497 scope.go:117] "RemoveContainer" containerID="cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097" Jul 15 04:45:39.935578 containerd[1922]: time="2025-07-15T04:45:39.935510448Z" level=error msg="ContainerStatus for \"cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097\": not found" Jul 15 04:45:39.935627 kubelet[3497]: E0715 04:45:39.935596 3497 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097\": not found" containerID="cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097" Jul 15 04:45:39.935627 kubelet[3497]: I0715 04:45:39.935611 3497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097"} err="failed to get container status \"cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097\": rpc error: code = NotFound desc = an error occurred when try to find container \"cd0b9bd03dc6fcc052f1899a85c338a0eddc33f1b9138426f07dd1c922f2c097\": not found" Jul 15 04:45:39.935627 kubelet[3497]: I0715 04:45:39.935624 3497 scope.go:117] "RemoveContainer" containerID="43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80" Jul 15 04:45:39.935776 containerd[1922]: time="2025-07-15T04:45:39.935743865Z" level=error msg="ContainerStatus for \"43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80\": not found" Jul 15 04:45:39.935975 kubelet[3497]: E0715 04:45:39.935953 3497 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80\": not found" containerID="43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80" Jul 15 04:45:39.936033 kubelet[3497]: I0715 04:45:39.935974 3497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80"} err="failed to get container status \"43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80\": rpc error: code = NotFound desc = an error occurred when try to find container \"43d335c09e3c389982bcd619cb9643de0976556af023da924d41e2cc745c2a80\": not found" Jul 15 04:45:39.936033 kubelet[3497]: I0715 04:45:39.935991 3497 scope.go:117] "RemoveContainer" containerID="d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df" Jul 15 04:45:39.941068 containerd[1922]: time="2025-07-15T04:45:39.941041070Z" level=info msg="RemoveContainer for \"d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df\"" Jul 15 04:45:39.953147 containerd[1922]: time="2025-07-15T04:45:39.953113632Z" level=info msg="RemoveContainer for \"d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df\" returns successfully" Jul 15 04:45:39.953501 kubelet[3497]: I0715 04:45:39.953481 3497 scope.go:117] "RemoveContainer" containerID="d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df" Jul 15 04:45:39.953991 containerd[1922]: time="2025-07-15T04:45:39.953909957Z" level=error msg="ContainerStatus for \"d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df\": not found" Jul 15 04:45:39.954095 
kubelet[3497]: E0715 04:45:39.954041 3497 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df\": not found" containerID="d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df" Jul 15 04:45:39.954095 kubelet[3497]: I0715 04:45:39.954064 3497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df"} err="failed to get container status \"d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df\": rpc error: code = NotFound desc = an error occurred when try to find container \"d2449d2dcdb126bf12860f97d25d3509fd58c4481b6c51189804e99ab6b6b0df\": not found" Jul 15 04:45:40.185578 systemd[1]: var-lib-kubelet-pods-f1f7f9c9\x2df118\x2d42b7\x2db358\x2d1687a352ea1a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpwnj2.mount: Deactivated successfully. Jul 15 04:45:40.185664 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-edc07c109a0e8a5dd084c697c3a9ffb12df38a0011d3b8f61b6872ab407a510c-shm.mount: Deactivated successfully. Jul 15 04:45:40.185704 systemd[1]: var-lib-kubelet-pods-a8242d21\x2de741\x2d47a7\x2d8237\x2d0adc0e3e9fec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhlsb8.mount: Deactivated successfully. Jul 15 04:45:40.185750 systemd[1]: var-lib-kubelet-pods-a8242d21\x2de741\x2d47a7\x2d8237\x2d0adc0e3e9fec-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 15 04:45:40.185783 systemd[1]: var-lib-kubelet-pods-a8242d21\x2de741\x2d47a7\x2d8237\x2d0adc0e3e9fec-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 15 04:45:40.513499 kubelet[3497]: I0715 04:45:40.513394 3497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8242d21-e741-47a7-8237-0adc0e3e9fec" path="/var/lib/kubelet/pods/a8242d21-e741-47a7-8237-0adc0e3e9fec/volumes" Jul 15 04:45:40.513816 kubelet[3497]: I0715 04:45:40.513793 3497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1f7f9c9-f118-42b7-b358-1687a352ea1a" path="/var/lib/kubelet/pods/f1f7f9c9-f118-42b7-b358-1687a352ea1a/volumes" Jul 15 04:45:40.592531 kubelet[3497]: E0715 04:45:40.592491 3497 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 15 04:45:41.163794 sshd[5039]: Connection closed by 10.200.16.10 port 34188 Jul 15 04:45:41.164382 sshd-session[5036]: pam_unix(sshd:session): session closed for user core Jul 15 04:45:41.167900 systemd[1]: sshd@21-10.200.20.23:22-10.200.16.10:34188.service: Deactivated successfully. Jul 15 04:45:41.169286 systemd[1]: session-24.scope: Deactivated successfully. Jul 15 04:45:41.169904 systemd-logind[1891]: Session 24 logged out. Waiting for processes to exit. Jul 15 04:45:41.171138 systemd-logind[1891]: Removed session 24. Jul 15 04:45:41.250731 systemd[1]: Started sshd@22-10.200.20.23:22-10.200.16.10:55614.service - OpenSSH per-connection server daemon (10.200.16.10:55614). 
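
Editor's note: the `var-lib-kubelet-pods-…\x2d….mount` units systemd deactivates above are mount units whose names are escaped mount-point paths: `/` becomes `-`, and bytes that are not alphanumerics, `:`, `_` or `.` become `\xNN`, which is why every literal `-` in the pod UIDs shows up as `\x2d` and the `~` in `kubernetes.io~projected` as `\x7e`. A simplified sketch of that escaping (the real `systemd-escape --path` handles more edge cases, such as a leading dot or an empty path):

```go
// unitname.go - simplified sketch of systemd's path escaping as seen in the
// "var-lib-kubelet-pods-...\x2d...mount" units above. Not a full replacement
// for `systemd-escape --path --suffix=mount`.
package main

import (
	"fmt"
	"strings"
)

func escapePathToMountUnit(path string) string {
	p := strings.Trim(path, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String() + ".mount"
}

func main() {
	fmt.Println(escapePathToMountUnit(
		"/var/lib/kubelet/pods/f1f7f9c9-f118-42b7-b358-1687a352ea1a/volumes/kubernetes.io~projected/kube-api-access-pwnj2"))
	// var-lib-kubelet-pods-f1f7f9c9\x2df118\x2d42b7\x2db358\x2d1687a352ea1a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpwnj2.mount
}
```
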
Jul 15 04:45:41.731600 sshd[5191]: Accepted publickey for core from 10.200.16.10 port 55614 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:45:41.732692 sshd-session[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:45:41.736598 systemd-logind[1891]: New session 25 of user core. Jul 15 04:45:41.743031 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 15 04:45:42.546421 kubelet[3497]: I0715 04:45:42.546291 3497 memory_manager.go:355] "RemoveStaleState removing state" podUID="f1f7f9c9-f118-42b7-b358-1687a352ea1a" containerName="cilium-operator" Jul 15 04:45:42.546421 kubelet[3497]: I0715 04:45:42.546321 3497 memory_manager.go:355] "RemoveStaleState removing state" podUID="a8242d21-e741-47a7-8237-0adc0e3e9fec" containerName="cilium-agent" Jul 15 04:45:42.554345 systemd[1]: Created slice kubepods-burstable-pod58355e73_597c_47c5_9fb4_da744404aba9.slice - libcontainer container kubepods-burstable-pod58355e73_597c_47c5_9fb4_da744404aba9.slice. Jul 15 04:45:42.565029 sshd[5194]: Connection closed by 10.200.16.10 port 55614 Jul 15 04:45:42.566459 sshd-session[5191]: pam_unix(sshd:session): session closed for user core Jul 15 04:45:42.570731 systemd-logind[1891]: Session 25 logged out. Waiting for processes to exit. Jul 15 04:45:42.572244 systemd[1]: sshd@22-10.200.20.23:22-10.200.16.10:55614.service: Deactivated successfully. Jul 15 04:45:42.574673 systemd[1]: session-25.scope: Deactivated successfully. Jul 15 04:45:42.578312 systemd-logind[1891]: Removed session 25. Jul 15 04:45:42.664332 systemd[1]: Started sshd@23-10.200.20.23:22-10.200.16.10:55618.service - OpenSSH per-connection server daemon (10.200.16.10:55618). Jul 15 04:45:42.684430 kubelet[3497]: I0715 04:45:42.684393 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58355e73-597c-47c5-9fb4-da744404aba9-cilium-run\") pod \"cilium-j8wdx\" (UID: \"58355e73-597c-47c5-9fb4-da744404aba9\") " pod="kube-system/cilium-j8wdx" Jul 15 04:45:42.684430 kubelet[3497]: I0715 04:45:42.684429 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58355e73-597c-47c5-9fb4-da744404aba9-lib-modules\") pod \"cilium-j8wdx\" (UID: \"58355e73-597c-47c5-9fb4-da744404aba9\") " pod="kube-system/cilium-j8wdx" Jul 15 04:45:42.684699 kubelet[3497]: I0715 04:45:42.684443 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58355e73-597c-47c5-9fb4-da744404aba9-xtables-lock\") pod \"cilium-j8wdx\" (UID: \"58355e73-597c-47c5-9fb4-da744404aba9\") " pod="kube-system/cilium-j8wdx" Jul 15 04:45:42.684699 kubelet[3497]: I0715 04:45:42.684458 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58355e73-597c-47c5-9fb4-da744404aba9-etc-cni-netd\") pod \"cilium-j8wdx\" (UID: \"58355e73-597c-47c5-9fb4-da744404aba9\") " pod="kube-system/cilium-j8wdx" Jul 15 04:45:42.684699 kubelet[3497]: I0715 04:45:42.684469 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58355e73-597c-47c5-9fb4-da744404aba9-cilium-config-path\") pod \"cilium-j8wdx\" (UID: \"58355e73-597c-47c5-9fb4-da744404aba9\") " 
pod="kube-system/cilium-j8wdx" Jul 15 04:45:42.684699 kubelet[3497]: I0715 04:45:42.684479 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/58355e73-597c-47c5-9fb4-da744404aba9-cilium-ipsec-secrets\") pod \"cilium-j8wdx\" (UID: \"58355e73-597c-47c5-9fb4-da744404aba9\") " pod="kube-system/cilium-j8wdx" Jul 15 04:45:42.684699 kubelet[3497]: I0715 04:45:42.684488 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58355e73-597c-47c5-9fb4-da744404aba9-hubble-tls\") pod \"cilium-j8wdx\" (UID: \"58355e73-597c-47c5-9fb4-da744404aba9\") " pod="kube-system/cilium-j8wdx" Jul 15 04:45:42.684699 kubelet[3497]: I0715 04:45:42.684502 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58355e73-597c-47c5-9fb4-da744404aba9-hostproc\") pod \"cilium-j8wdx\" (UID: \"58355e73-597c-47c5-9fb4-da744404aba9\") " pod="kube-system/cilium-j8wdx" Jul 15 04:45:42.684826 kubelet[3497]: I0715 04:45:42.684513 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58355e73-597c-47c5-9fb4-da744404aba9-cilium-cgroup\") pod \"cilium-j8wdx\" (UID: \"58355e73-597c-47c5-9fb4-da744404aba9\") " pod="kube-system/cilium-j8wdx" Jul 15 04:45:42.684826 kubelet[3497]: I0715 04:45:42.684525 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58355e73-597c-47c5-9fb4-da744404aba9-host-proc-sys-kernel\") pod \"cilium-j8wdx\" (UID: \"58355e73-597c-47c5-9fb4-da744404aba9\") " pod="kube-system/cilium-j8wdx" Jul 15 04:45:42.684826 kubelet[3497]: I0715 04:45:42.684539 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/58355e73-597c-47c5-9fb4-da744404aba9-cni-path\") pod \"cilium-j8wdx\" (UID: \"58355e73-597c-47c5-9fb4-da744404aba9\") " pod="kube-system/cilium-j8wdx" Jul 15 04:45:42.684826 kubelet[3497]: I0715 04:45:42.684547 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58355e73-597c-47c5-9fb4-da744404aba9-clustermesh-secrets\") pod \"cilium-j8wdx\" (UID: \"58355e73-597c-47c5-9fb4-da744404aba9\") " pod="kube-system/cilium-j8wdx" Jul 15 04:45:42.684826 kubelet[3497]: I0715 04:45:42.684558 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/58355e73-597c-47c5-9fb4-da744404aba9-host-proc-sys-net\") pod \"cilium-j8wdx\" (UID: \"58355e73-597c-47c5-9fb4-da744404aba9\") " pod="kube-system/cilium-j8wdx" Jul 15 04:45:42.684826 kubelet[3497]: I0715 04:45:42.684569 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58355e73-597c-47c5-9fb4-da744404aba9-bpf-maps\") pod \"cilium-j8wdx\" (UID: \"58355e73-597c-47c5-9fb4-da744404aba9\") " pod="kube-system/cilium-j8wdx" Jul 15 04:45:42.684936 kubelet[3497]: I0715 04:45:42.684580 3497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5rdkn\" (UniqueName: \"kubernetes.io/projected/58355e73-597c-47c5-9fb4-da744404aba9-kube-api-access-5rdkn\") pod \"cilium-j8wdx\" (UID: \"58355e73-597c-47c5-9fb4-da744404aba9\") " pod="kube-system/cilium-j8wdx" Jul 15 04:45:42.859393 containerd[1922]: time="2025-07-15T04:45:42.859254767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j8wdx,Uid:58355e73-597c-47c5-9fb4-da744404aba9,Namespace:kube-system,Attempt:0,}" Jul 15 04:45:42.915063 containerd[1922]: time="2025-07-15T04:45:42.914990643Z" level=info msg="connecting to shim de0071bc120de97c6cc79746aa84d244d63800c8b1d7bf815e9d243915622e38" address="unix:///run/containerd/s/526661f8f197b594c7ce4977e59334bbf7140300e841dc46feda7ae22e821c90" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:45:42.935005 systemd[1]: Started cri-containerd-de0071bc120de97c6cc79746aa84d244d63800c8b1d7bf815e9d243915622e38.scope - libcontainer container de0071bc120de97c6cc79746aa84d244d63800c8b1d7bf815e9d243915622e38. Jul 15 04:45:42.955966 containerd[1922]: time="2025-07-15T04:45:42.955922799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j8wdx,Uid:58355e73-597c-47c5-9fb4-da744404aba9,Namespace:kube-system,Attempt:0,} returns sandbox id \"de0071bc120de97c6cc79746aa84d244d63800c8b1d7bf815e9d243915622e38\"" Jul 15 04:45:42.959491 containerd[1922]: time="2025-07-15T04:45:42.959463475Z" level=info msg="CreateContainer within sandbox \"de0071bc120de97c6cc79746aa84d244d63800c8b1d7bf815e9d243915622e38\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 04:45:42.990893 containerd[1922]: time="2025-07-15T04:45:42.990842228Z" level=info msg="Container c17dd22fdbf80095e72911ee1e98585549da23f768fb2e858d9d4f0447eae522: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:45:43.009706 containerd[1922]: time="2025-07-15T04:45:43.009659329Z" level=info msg="CreateContainer within sandbox \"de0071bc120de97c6cc79746aa84d244d63800c8b1d7bf815e9d243915622e38\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c17dd22fdbf80095e72911ee1e98585549da23f768fb2e858d9d4f0447eae522\"" Jul 15 04:45:43.010379 containerd[1922]: time="2025-07-15T04:45:43.010302849Z" level=info msg="StartContainer for \"c17dd22fdbf80095e72911ee1e98585549da23f768fb2e858d9d4f0447eae522\"" Jul 15 04:45:43.011483 containerd[1922]: time="2025-07-15T04:45:43.011443331Z" level=info msg="connecting to shim c17dd22fdbf80095e72911ee1e98585549da23f768fb2e858d9d4f0447eae522" address="unix:///run/containerd/s/526661f8f197b594c7ce4977e59334bbf7140300e841dc46feda7ae22e821c90" protocol=ttrpc version=3 Jul 15 04:45:43.031072 systemd[1]: Started cri-containerd-c17dd22fdbf80095e72911ee1e98585549da23f768fb2e858d9d4f0447eae522.scope - libcontainer container c17dd22fdbf80095e72911ee1e98585549da23f768fb2e858d9d4f0447eae522. Jul 15 04:45:43.056880 containerd[1922]: time="2025-07-15T04:45:43.056814309Z" level=info msg="StartContainer for \"c17dd22fdbf80095e72911ee1e98585549da23f768fb2e858d9d4f0447eae522\" returns successfully" Jul 15 04:45:43.061046 systemd[1]: cri-containerd-c17dd22fdbf80095e72911ee1e98585549da23f768fb2e858d9d4f0447eae522.scope: Deactivated successfully. 
Jul 15 04:45:43.065586 containerd[1922]: time="2025-07-15T04:45:43.065553251Z" level=info msg="received exit event container_id:\"c17dd22fdbf80095e72911ee1e98585549da23f768fb2e858d9d4f0447eae522\" id:\"c17dd22fdbf80095e72911ee1e98585549da23f768fb2e858d9d4f0447eae522\" pid:5271 exited_at:{seconds:1752554743 nanos:65170044}" Jul 15 04:45:43.065812 containerd[1922]: time="2025-07-15T04:45:43.065669935Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c17dd22fdbf80095e72911ee1e98585549da23f768fb2e858d9d4f0447eae522\" id:\"c17dd22fdbf80095e72911ee1e98585549da23f768fb2e858d9d4f0447eae522\" pid:5271 exited_at:{seconds:1752554743 nanos:65170044}" Jul 15 04:45:43.152945 sshd[5205]: Accepted publickey for core from 10.200.16.10 port 55618 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:45:43.151227 sshd-session[5205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:45:43.155240 systemd-logind[1891]: New session 26 of user core. Jul 15 04:45:43.159985 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 15 04:45:43.504415 sshd[5304]: Connection closed by 10.200.16.10 port 55618 Jul 15 04:45:43.504319 sshd-session[5205]: pam_unix(sshd:session): session closed for user core Jul 15 04:45:43.507852 systemd-logind[1891]: Session 26 logged out. Waiting for processes to exit. Jul 15 04:45:43.508021 systemd[1]: sshd@23-10.200.20.23:22-10.200.16.10:55618.service: Deactivated successfully. Jul 15 04:45:43.509553 systemd[1]: session-26.scope: Deactivated successfully. Jul 15 04:45:43.511250 systemd-logind[1891]: Removed session 26. Jul 15 04:45:43.587281 systemd[1]: Started sshd@24-10.200.20.23:22-10.200.16.10:55634.service - OpenSSH per-connection server daemon (10.200.16.10:55634). Jul 15 04:45:43.853744 kubelet[3497]: I0715 04:45:43.853101 3497 setters.go:602] "Node became not ready" node="ci-4396.0.0-n-efed024aac" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-15T04:45:43Z","lastTransitionTime":"2025-07-15T04:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 15 04:45:43.873896 containerd[1922]: time="2025-07-15T04:45:43.873847539Z" level=info msg="CreateContainer within sandbox \"de0071bc120de97c6cc79746aa84d244d63800c8b1d7bf815e9d243915622e38\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 04:45:43.905040 containerd[1922]: time="2025-07-15T04:45:43.903479707Z" level=info msg="Container 7b1fd3efac91ae5c6a72c94825a4fcec30cb41e62c27bfb07811208cf57ffa2a: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:45:43.905735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount14162697.mount: Deactivated successfully. 
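
Editor's note: containerd reports container exits as an `exited_at:{seconds:… nanos:…}` pair, i.e. a Unix epoch timestamp. Converting the mount-cgroup exit event above reproduces the 04:45:43.065 wall-clock timestamps on the surrounding containerd lines:

```go
// exitedat.go - converts the exited_at {seconds, nanos} pair from the
// mount-cgroup exit event above back to wall-clock time.
package main

import (
	"fmt"
	"time"
)

func main() {
	exitedAt := time.Unix(1752554743, 65170044).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-07-15T04:45:43.065170044Z
}
```
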
Jul 15 04:45:43.926111 containerd[1922]: time="2025-07-15T04:45:43.926067852Z" level=info msg="CreateContainer within sandbox \"de0071bc120de97c6cc79746aa84d244d63800c8b1d7bf815e9d243915622e38\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7b1fd3efac91ae5c6a72c94825a4fcec30cb41e62c27bfb07811208cf57ffa2a\"" Jul 15 04:45:43.926587 containerd[1922]: time="2025-07-15T04:45:43.926564830Z" level=info msg="StartContainer for \"7b1fd3efac91ae5c6a72c94825a4fcec30cb41e62c27bfb07811208cf57ffa2a\"" Jul 15 04:45:43.927372 containerd[1922]: time="2025-07-15T04:45:43.927349131Z" level=info msg="connecting to shim 7b1fd3efac91ae5c6a72c94825a4fcec30cb41e62c27bfb07811208cf57ffa2a" address="unix:///run/containerd/s/526661f8f197b594c7ce4977e59334bbf7140300e841dc46feda7ae22e821c90" protocol=ttrpc version=3 Jul 15 04:45:43.948037 systemd[1]: Started cri-containerd-7b1fd3efac91ae5c6a72c94825a4fcec30cb41e62c27bfb07811208cf57ffa2a.scope - libcontainer container 7b1fd3efac91ae5c6a72c94825a4fcec30cb41e62c27bfb07811208cf57ffa2a. Jul 15 04:45:43.974845 containerd[1922]: time="2025-07-15T04:45:43.974789914Z" level=info msg="StartContainer for \"7b1fd3efac91ae5c6a72c94825a4fcec30cb41e62c27bfb07811208cf57ffa2a\" returns successfully" Jul 15 04:45:43.975886 systemd[1]: cri-containerd-7b1fd3efac91ae5c6a72c94825a4fcec30cb41e62c27bfb07811208cf57ffa2a.scope: Deactivated successfully. Jul 15 04:45:43.977786 containerd[1922]: time="2025-07-15T04:45:43.977097408Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b1fd3efac91ae5c6a72c94825a4fcec30cb41e62c27bfb07811208cf57ffa2a\" id:\"7b1fd3efac91ae5c6a72c94825a4fcec30cb41e62c27bfb07811208cf57ffa2a\" pid:5326 exited_at:{seconds:1752554743 nanos:976832702}" Jul 15 04:45:43.978059 containerd[1922]: time="2025-07-15T04:45:43.977985297Z" level=info msg="received exit event container_id:\"7b1fd3efac91ae5c6a72c94825a4fcec30cb41e62c27bfb07811208cf57ffa2a\" id:\"7b1fd3efac91ae5c6a72c94825a4fcec30cb41e62c27bfb07811208cf57ffa2a\" pid:5326 exited_at:{seconds:1752554743 nanos:976832702}" Jul 15 04:45:44.047410 sshd[5311]: Accepted publickey for core from 10.200.16.10 port 55634 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:45:44.048932 sshd-session[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:45:44.052745 systemd-logind[1891]: New session 27 of user core. Jul 15 04:45:44.056989 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 15 04:45:44.792542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b1fd3efac91ae5c6a72c94825a4fcec30cb41e62c27bfb07811208cf57ffa2a-rootfs.mount: Deactivated successfully. 
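
Editor's note: apply-sysctl-overwrites, the init step that just ran above, is conventionally how Cilium forces a handful of kernel settings (for example rp_filter on relevant interfaces) by writing under /proc/sys before the agent starts. The log does not record which keys are set, so the sysctl name below is only an example of the mechanism.

```go
// sysctl.go - minimal sketch of what an "apply-sysctl-overwrites"-style init
// step does mechanically: write values under /proc/sys. The key used here is
// an example only; the log does not record which sysctls Cilium sets.
package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

func writeSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	// Example key; requires root and a writable /proc/sys.
	if err := writeSysctl("net.ipv4.conf.all.rp_filter", "0"); err != nil {
		log.Fatal(err)
	}
}
```
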
Jul 15 04:45:44.879255 containerd[1922]: time="2025-07-15T04:45:44.879156291Z" level=info msg="CreateContainer within sandbox \"de0071bc120de97c6cc79746aa84d244d63800c8b1d7bf815e9d243915622e38\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 04:45:44.926499 containerd[1922]: time="2025-07-15T04:45:44.926186928Z" level=info msg="Container 211ff2eb9f5da345a81428f781697b2741e56508db2e45c232da9aba177c6e79: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:45:44.956608 containerd[1922]: time="2025-07-15T04:45:44.956561351Z" level=info msg="CreateContainer within sandbox \"de0071bc120de97c6cc79746aa84d244d63800c8b1d7bf815e9d243915622e38\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"211ff2eb9f5da345a81428f781697b2741e56508db2e45c232da9aba177c6e79\"" Jul 15 04:45:44.958243 containerd[1922]: time="2025-07-15T04:45:44.957325388Z" level=info msg="StartContainer for \"211ff2eb9f5da345a81428f781697b2741e56508db2e45c232da9aba177c6e79\"" Jul 15 04:45:44.958553 containerd[1922]: time="2025-07-15T04:45:44.958520656Z" level=info msg="connecting to shim 211ff2eb9f5da345a81428f781697b2741e56508db2e45c232da9aba177c6e79" address="unix:///run/containerd/s/526661f8f197b594c7ce4977e59334bbf7140300e841dc46feda7ae22e821c90" protocol=ttrpc version=3 Jul 15 04:45:44.979026 systemd[1]: Started cri-containerd-211ff2eb9f5da345a81428f781697b2741e56508db2e45c232da9aba177c6e79.scope - libcontainer container 211ff2eb9f5da345a81428f781697b2741e56508db2e45c232da9aba177c6e79. Jul 15 04:45:45.003777 systemd[1]: cri-containerd-211ff2eb9f5da345a81428f781697b2741e56508db2e45c232da9aba177c6e79.scope: Deactivated successfully. Jul 15 04:45:45.005598 containerd[1922]: time="2025-07-15T04:45:45.005558061Z" level=info msg="TaskExit event in podsandbox handler container_id:\"211ff2eb9f5da345a81428f781697b2741e56508db2e45c232da9aba177c6e79\" id:\"211ff2eb9f5da345a81428f781697b2741e56508db2e45c232da9aba177c6e79\" pid:5377 exited_at:{seconds:1752554745 nanos:5246538}" Jul 15 04:45:45.010675 containerd[1922]: time="2025-07-15T04:45:45.010502846Z" level=info msg="received exit event container_id:\"211ff2eb9f5da345a81428f781697b2741e56508db2e45c232da9aba177c6e79\" id:\"211ff2eb9f5da345a81428f781697b2741e56508db2e45c232da9aba177c6e79\" pid:5377 exited_at:{seconds:1752554745 nanos:5246538}" Jul 15 04:45:45.012071 containerd[1922]: time="2025-07-15T04:45:45.012047336Z" level=info msg="StartContainer for \"211ff2eb9f5da345a81428f781697b2741e56508db2e45c232da9aba177c6e79\" returns successfully" Jul 15 04:45:45.027143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-211ff2eb9f5da345a81428f781697b2741e56508db2e45c232da9aba177c6e79-rootfs.mount: Deactivated successfully. 
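
Editor's note: the mount-bpf-fs step that just completed conventionally ensures a BPF filesystem is mounted at /sys/fs/bpf so the agent's maps (the bpf-maps hostPath volume listed earlier) survive pod restarts. The log only records that the init container ran and exited, so the mount target and flags below are assumptions about that convention, not values from the log.

```go
// mountbpf.go - sketch of the mount(2) call a "mount-bpf-fs"-style init step
// performs, assuming the conventional /sys/fs/bpf target. Requires root; a
// real implementation would first check whether bpffs is already mounted
// (e.g. by scanning /proc/self/mountinfo).
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		log.Fatalf("mount bpffs: %v", err)
	}
}
```
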
Jul 15 04:45:45.593706 kubelet[3497]: E0715 04:45:45.593646 3497 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 15 04:45:45.884950 containerd[1922]: time="2025-07-15T04:45:45.883234591Z" level=info msg="CreateContainer within sandbox \"de0071bc120de97c6cc79746aa84d244d63800c8b1d7bf815e9d243915622e38\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 15 04:45:45.909390 containerd[1922]: time="2025-07-15T04:45:45.908972320Z" level=info msg="Container 59bd39adefb27d23bd8792742b233b01bcb57bb152fef041fbdc115e5a4cedc3: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:45:45.931486 containerd[1922]: time="2025-07-15T04:45:45.931446360Z" level=info msg="CreateContainer within sandbox \"de0071bc120de97c6cc79746aa84d244d63800c8b1d7bf815e9d243915622e38\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"59bd39adefb27d23bd8792742b233b01bcb57bb152fef041fbdc115e5a4cedc3\"" Jul 15 04:45:45.932298 containerd[1922]: time="2025-07-15T04:45:45.932278095Z" level=info msg="StartContainer for \"59bd39adefb27d23bd8792742b233b01bcb57bb152fef041fbdc115e5a4cedc3\"" Jul 15 04:45:45.934033 containerd[1922]: time="2025-07-15T04:45:45.933979046Z" level=info msg="connecting to shim 59bd39adefb27d23bd8792742b233b01bcb57bb152fef041fbdc115e5a4cedc3" address="unix:///run/containerd/s/526661f8f197b594c7ce4977e59334bbf7140300e841dc46feda7ae22e821c90" protocol=ttrpc version=3 Jul 15 04:45:45.950991 systemd[1]: Started cri-containerd-59bd39adefb27d23bd8792742b233b01bcb57bb152fef041fbdc115e5a4cedc3.scope - libcontainer container 59bd39adefb27d23bd8792742b233b01bcb57bb152fef041fbdc115e5a4cedc3. Jul 15 04:45:45.970302 systemd[1]: cri-containerd-59bd39adefb27d23bd8792742b233b01bcb57bb152fef041fbdc115e5a4cedc3.scope: Deactivated successfully. Jul 15 04:45:45.971682 containerd[1922]: time="2025-07-15T04:45:45.971648958Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59bd39adefb27d23bd8792742b233b01bcb57bb152fef041fbdc115e5a4cedc3\" id:\"59bd39adefb27d23bd8792742b233b01bcb57bb152fef041fbdc115e5a4cedc3\" pid:5417 exited_at:{seconds:1752554745 nanos:971325321}" Jul 15 04:45:45.974971 containerd[1922]: time="2025-07-15T04:45:45.974920952Z" level=info msg="received exit event container_id:\"59bd39adefb27d23bd8792742b233b01bcb57bb152fef041fbdc115e5a4cedc3\" id:\"59bd39adefb27d23bd8792742b233b01bcb57bb152fef041fbdc115e5a4cedc3\" pid:5417 exited_at:{seconds:1752554745 nanos:971325321}" Jul 15 04:45:45.980423 containerd[1922]: time="2025-07-15T04:45:45.980388948Z" level=info msg="StartContainer for \"59bd39adefb27d23bd8792742b233b01bcb57bb152fef041fbdc115e5a4cedc3\" returns successfully" Jul 15 04:45:45.989956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59bd39adefb27d23bd8792742b233b01bcb57bb152fef041fbdc115e5a4cedc3-rootfs.mount: Deactivated successfully. 
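
Editor's note: each init step shows up in containerd as a "CreateContainer within sandbox … for container &ContainerMetadata{Name:…,Attempt:0,}" line, so the order of the cilium-j8wdx init containers seen so far (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, with the cilium-agent container following next) can be recovered straight from the log. A small extraction sketch, assuming the captured journal has been saved to a file:

```go
// initorder.go - pulls the ContainerMetadata names out of the containerd
// "CreateContainer within sandbox ..." lines above to recover the container
// creation order for cilium-j8wdx. The log file path is an assumption.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"regexp"
)

var metaName = regexp.MustCompile(
	`CreateContainer within sandbox .* for container &ContainerMetadata\{Name:([^,]+),`)

func main() {
	f, err := os.Open("node.log") // assumed path to the captured journal
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // these log lines are long
	for sc.Scan() {
		if m := metaName.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Println(m[1]) // mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, cilium-agent
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
```
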
Jul 15 04:45:46.888104 containerd[1922]: time="2025-07-15T04:45:46.887962867Z" level=info msg="CreateContainer within sandbox \"de0071bc120de97c6cc79746aa84d244d63800c8b1d7bf815e9d243915622e38\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 15 04:45:46.921681 containerd[1922]: time="2025-07-15T04:45:46.921546306Z" level=info msg="Container a49cda1315ce24b9adf75a769a3d615598b9b84d8dd6ac3658b7949dde47ef3f: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:45:46.946473 containerd[1922]: time="2025-07-15T04:45:46.946424563Z" level=info msg="CreateContainer within sandbox \"de0071bc120de97c6cc79746aa84d244d63800c8b1d7bf815e9d243915622e38\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a49cda1315ce24b9adf75a769a3d615598b9b84d8dd6ac3658b7949dde47ef3f\"" Jul 15 04:45:46.947359 containerd[1922]: time="2025-07-15T04:45:46.947335205Z" level=info msg="StartContainer for \"a49cda1315ce24b9adf75a769a3d615598b9b84d8dd6ac3658b7949dde47ef3f\"" Jul 15 04:45:46.948038 containerd[1922]: time="2025-07-15T04:45:46.948012630Z" level=info msg="connecting to shim a49cda1315ce24b9adf75a769a3d615598b9b84d8dd6ac3658b7949dde47ef3f" address="unix:///run/containerd/s/526661f8f197b594c7ce4977e59334bbf7140300e841dc46feda7ae22e821c90" protocol=ttrpc version=3 Jul 15 04:45:46.968998 systemd[1]: Started cri-containerd-a49cda1315ce24b9adf75a769a3d615598b9b84d8dd6ac3658b7949dde47ef3f.scope - libcontainer container a49cda1315ce24b9adf75a769a3d615598b9b84d8dd6ac3658b7949dde47ef3f. Jul 15 04:45:46.994923 containerd[1922]: time="2025-07-15T04:45:46.994819123Z" level=info msg="StartContainer for \"a49cda1315ce24b9adf75a769a3d615598b9b84d8dd6ac3658b7949dde47ef3f\" returns successfully" Jul 15 04:45:47.048601 containerd[1922]: time="2025-07-15T04:45:47.048557978Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a49cda1315ce24b9adf75a769a3d615598b9b84d8dd6ac3658b7949dde47ef3f\" id:\"e86f96c03984a2db86d832b9d590896cdf4f101834f73d43febb77dc896b369f\" pid:5485 exited_at:{seconds:1752554747 nanos:48137811}" Jul 15 04:45:47.439934 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 15 04:45:48.465725 containerd[1922]: time="2025-07-15T04:45:48.465685620Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a49cda1315ce24b9adf75a769a3d615598b9b84d8dd6ac3658b7949dde47ef3f\" id:\"4061f0fc51089624fdd8be392a2ba7ed5d0526b923e032fcc6919d233d0224f4\" pid:5564 exit_status:1 exited_at:{seconds:1752554748 nanos:465340967}" Jul 15 04:45:49.816390 systemd-networkd[1701]: lxc_health: Link UP Jul 15 04:45:49.822673 systemd-networkd[1701]: lxc_health: Gained carrier Jul 15 04:45:50.575210 containerd[1922]: time="2025-07-15T04:45:50.575067337Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a49cda1315ce24b9adf75a769a3d615598b9b84d8dd6ac3658b7949dde47ef3f\" id:\"2a7a9797fc38bbb00d4fd87f0dbb907391eaa27932164166ce4e9b864d182725\" pid:6016 exited_at:{seconds:1752554750 nanos:574583943}" Jul 15 04:45:50.878622 kubelet[3497]: I0715 04:45:50.878001 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j8wdx" podStartSLOduration=8.877984253 podStartE2EDuration="8.877984253s" podCreationTimestamp="2025-07-15 04:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:45:47.90512316 +0000 UTC m=+157.457321907" watchObservedRunningTime="2025-07-15 04:45:50.877984253 +0000 UTC 
m=+160.430183008" Jul 15 04:45:51.136143 systemd-networkd[1701]: lxc_health: Gained IPv6LL Jul 15 04:45:52.656125 containerd[1922]: time="2025-07-15T04:45:52.656075658Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a49cda1315ce24b9adf75a769a3d615598b9b84d8dd6ac3658b7949dde47ef3f\" id:\"797571bf91d4d51bf2452b627817e156f59c0783bd741f4cdf64d2527f3aa67c\" pid:6051 exited_at:{seconds:1752554752 nanos:655747598}" Jul 15 04:45:54.726568 containerd[1922]: time="2025-07-15T04:45:54.726454208Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a49cda1315ce24b9adf75a769a3d615598b9b84d8dd6ac3658b7949dde47ef3f\" id:\"ce61ca9a4906e131c6c0a84c332c465de29082495b4ed00e49df7023f15ba24f\" pid:6074 exited_at:{seconds:1752554754 nanos:725998695}" Jul 15 04:45:56.809749 containerd[1922]: time="2025-07-15T04:45:56.809680099Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a49cda1315ce24b9adf75a769a3d615598b9b84d8dd6ac3658b7949dde47ef3f\" id:\"28dba024b203e9e05a5f62e51ff489f26d213f35fa029986fb428832735272a4\" pid:6097 exited_at:{seconds:1752554756 nanos:809241218}" Jul 15 04:45:56.813339 kubelet[3497]: E0715 04:45:56.812722 3497 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40314->127.0.0.1:35883: write tcp 127.0.0.1:40314->127.0.0.1:35883: write: connection reset by peer Jul 15 04:45:56.901068 sshd[5358]: Connection closed by 10.200.16.10 port 55634 Jul 15 04:45:56.901698 sshd-session[5311]: pam_unix(sshd:session): session closed for user core Jul 15 04:45:56.904395 systemd-logind[1891]: Session 27 logged out. Waiting for processes to exit. Jul 15 04:45:56.904754 systemd[1]: sshd@24-10.200.20.23:22-10.200.16.10:55634.service: Deactivated successfully. Jul 15 04:45:56.906526 systemd[1]: session-27.scope: Deactivated successfully. Jul 15 04:45:56.908836 systemd-logind[1891]: Removed session 27.
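
Editor's note: the pod_startup_latency_tracker entry above reports podStartE2EDuration="8.877984253s" for cilium-j8wdx with both image-pull timestamps at the zero time (no pulls were needed). That duration is exactly the gap between podCreationTimestamp and watchObservedRunningTime in the same entry, which the two wall-clock timestamps reproduce:

```go
// startup.go - reproduces the 8.877984253s startup duration reported above for
// cilium-j8wdx from the two wall-clock timestamps in the same log entry.
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-07-15 04:45:42 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	running, err := time.Parse(layout, "2025-07-15 04:45:50.877984253 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(running.Sub(created)) // 8.877984253s
}
```
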