Jul 9 23:45:49.072819 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Jul 9 23:45:49.072837 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Jul 9 22:19:33 -00 2025
Jul 9 23:45:49.072843 kernel: KASLR enabled
Jul 9 23:45:49.072847 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 9 23:45:49.072851 kernel: printk: legacy bootconsole [pl11] enabled
Jul 9 23:45:49.072855 kernel: efi: EFI v2.7 by EDK II
Jul 9 23:45:49.072860 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Jul 9 23:45:49.072864 kernel: random: crng init done
Jul 9 23:45:49.072868 kernel: secureboot: Secure boot disabled
Jul 9 23:45:49.072872 kernel: ACPI: Early table checksum verification disabled
Jul 9 23:45:49.072876 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jul 9 23:45:49.072880 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:45:49.072883 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:45:49.072888 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 9 23:45:49.072893 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:45:49.072897 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:45:49.072901 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:45:49.072906 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:45:49.072910 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:45:49.072915 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:45:49.072919 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 9 23:45:49.072923 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:45:49.072927 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 9 23:45:49.072931 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 9 23:45:49.072935 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jul 9 23:45:49.072939 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Jul 9 23:45:49.072943 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Jul 9 23:45:49.072947 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jul 9 23:45:49.072951 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jul 9 23:45:49.072956 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jul 9 23:45:49.072960 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jul 9 23:45:49.072964 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jul 9 23:45:49.072968 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jul 9 23:45:49.072972 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jul 9 23:45:49.072976 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jul 9 23:45:49.072980 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jul 9 23:45:49.072985 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Jul 9 23:45:49.072989 kernel: NODE_DATA(0) allocated [mem 0x1bf7fddc0-0x1bf804fff]
Jul 9 23:45:49.072993 kernel: Zone ranges:
Jul 9 23:45:49.072997 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 9 23:45:49.073004 kernel: DMA32 empty
Jul 9 23:45:49.073008 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 9 23:45:49.073012 kernel: Device empty
Jul 9 23:45:49.073016 kernel: Movable zone start for each node
Jul 9 23:45:49.073021 kernel: Early memory node ranges
Jul 9 23:45:49.073026 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 9 23:45:49.073030 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Jul 9 23:45:49.073034 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Jul 9 23:45:49.073039 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Jul 9 23:45:49.073043 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jul 9 23:45:49.073047 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jul 9 23:45:49.073052 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jul 9 23:45:49.073056 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jul 9 23:45:49.073060 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 9 23:45:49.073064 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 9 23:45:49.073069 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 9 23:45:49.073073 kernel: psci: probing for conduit method from ACPI.
Jul 9 23:45:49.073078 kernel: psci: PSCIv1.1 detected in firmware.
Jul 9 23:45:49.073082 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 9 23:45:49.073086 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 9 23:45:49.073091 kernel: psci: SMC Calling Convention v1.4
Jul 9 23:45:49.073095 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 9 23:45:49.073099 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jul 9 23:45:49.073103 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 9 23:45:49.073108 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 9 23:45:49.073112 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 9 23:45:49.073117 kernel: Detected PIPT I-cache on CPU0
Jul 9 23:45:49.073121 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Jul 9 23:45:49.073126 kernel: CPU features: detected: GIC system register CPU interface
Jul 9 23:45:49.073130 kernel: CPU features: detected: Spectre-v4
Jul 9 23:45:49.073135 kernel: CPU features: detected: Spectre-BHB
Jul 9 23:45:49.073139 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 9 23:45:49.073143 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 9 23:45:49.073148 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Jul 9 23:45:49.073152 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 9 23:45:49.073156 kernel: alternatives: applying boot alternatives
Jul 9 23:45:49.073161 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=da23c3aa7de24c290e5e9aff0a0fccd6a322ecaa9bbfc71c29b2f39446459116
Jul 9 23:45:49.073166 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 9 23:45:49.073170 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 9 23:45:49.073175 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 9 23:45:49.073180 kernel: Fallback order for Node 0: 0
Jul 9 23:45:49.073184 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Jul 9 23:45:49.073188 kernel: Policy zone: Normal
Jul 9 23:45:49.073192 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 9 23:45:49.073197 kernel: software IO TLB: area num 2.
Jul 9 23:45:49.073201 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB)
Jul 9 23:45:49.073205 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 9 23:45:49.073210 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 9 23:45:49.073215 kernel: rcu: RCU event tracing is enabled.
Jul 9 23:45:49.073219 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 9 23:45:49.073224 kernel: Trampoline variant of Tasks RCU enabled.
Jul 9 23:45:49.073229 kernel: Tracing variant of Tasks RCU enabled.
Jul 9 23:45:49.073233 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 9 23:45:49.073237 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 9 23:45:49.073242 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 9 23:45:49.073246 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 9 23:45:49.073251 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 9 23:45:49.073255 kernel: GICv3: 960 SPIs implemented
Jul 9 23:45:49.073259 kernel: GICv3: 0 Extended SPIs implemented
Jul 9 23:45:49.073263 kernel: Root IRQ handler: gic_handle_irq
Jul 9 23:45:49.073268 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jul 9 23:45:49.073272 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Jul 9 23:45:49.073277 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 9 23:45:49.073281 kernel: ITS: No ITS available, not enabling LPIs
Jul 9 23:45:49.073286 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 9 23:45:49.073290 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Jul 9 23:45:49.073294 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 9 23:45:49.073299 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Jul 9 23:45:49.073303 kernel: Console: colour dummy device 80x25
Jul 9 23:45:49.073308 kernel: printk: legacy console [tty1] enabled
Jul 9 23:45:49.073312 kernel: ACPI: Core revision 20240827
Jul 9 23:45:49.073317 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Jul 9 23:45:49.073322 kernel: pid_max: default: 32768 minimum: 301
Jul 9 23:45:49.073327 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 9 23:45:49.073331 kernel: landlock: Up and running.
Jul 9 23:45:49.073336 kernel: SELinux: Initializing.
Jul 9 23:45:49.073340 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 23:45:49.073345 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 23:45:49.073352 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1
Jul 9 23:45:49.073358 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Jul 9 23:45:49.073362 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 9 23:45:49.073367 kernel: rcu: Hierarchical SRCU implementation.
Jul 9 23:45:49.073372 kernel: rcu: Max phase no-delay instances is 400.
Jul 9 23:45:49.073377 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 9 23:45:49.073382 kernel: Remapping and enabling EFI services.
Jul 9 23:45:49.073387 kernel: smp: Bringing up secondary CPUs ...
Jul 9 23:45:49.073391 kernel: Detected PIPT I-cache on CPU1
Jul 9 23:45:49.073396 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 9 23:45:49.073401 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Jul 9 23:45:49.073406 kernel: smp: Brought up 1 node, 2 CPUs
Jul 9 23:45:49.073411 kernel: SMP: Total of 2 processors activated.
Jul 9 23:45:49.073415 kernel: CPU: All CPU(s) started at EL1
Jul 9 23:45:49.073420 kernel: CPU features: detected: 32-bit EL0 Support
Jul 9 23:45:49.073425 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 9 23:45:49.073430 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 9 23:45:49.073434 kernel: CPU features: detected: Common not Private translations
Jul 9 23:45:49.073439 kernel: CPU features: detected: CRC32 instructions
Jul 9 23:45:49.073444 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Jul 9 23:45:49.073449 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 9 23:45:49.073454 kernel: CPU features: detected: LSE atomic instructions
Jul 9 23:45:49.073458 kernel: CPU features: detected: Privileged Access Never
Jul 9 23:45:49.073463 kernel: CPU features: detected: Speculation barrier (SB)
Jul 9 23:45:49.073468 kernel: CPU features: detected: TLB range maintenance instructions
Jul 9 23:45:49.073472 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 9 23:45:49.073477 kernel: CPU features: detected: Scalable Vector Extension
Jul 9 23:45:49.073482 kernel: alternatives: applying system-wide alternatives
Jul 9 23:45:49.073486 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jul 9 23:45:49.073492 kernel: SVE: maximum available vector length 16 bytes per vector
Jul 9 23:45:49.073496 kernel: SVE: default vector length 16 bytes per vector
Jul 9 23:45:49.073501 kernel: Memory: 3975544K/4194160K available (11136K kernel code, 2428K rwdata, 9032K rodata, 39488K init, 1035K bss, 213816K reserved, 0K cma-reserved)
Jul 9 23:45:49.073506 kernel: devtmpfs: initialized
Jul 9 23:45:49.073511 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 9 23:45:49.073516 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 9 23:45:49.073520 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 9 23:45:49.073525 kernel: 0 pages in range for non-PLT usage
Jul 9 23:45:49.073530 kernel: 508448 pages in range for PLT usage
Jul 9 23:45:49.073535 kernel: pinctrl core: initialized pinctrl subsystem
Jul 9 23:45:49.073540 kernel: SMBIOS 3.1.0 present.
Jul 9 23:45:49.073544 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jul 9 23:45:49.073549 kernel: DMI: Memory slots populated: 2/2
Jul 9 23:45:49.073554 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 9 23:45:49.073558 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 9 23:45:49.073563 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 9 23:45:49.073568 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 9 23:45:49.073573 kernel: audit: initializing netlink subsys (disabled)
Jul 9 23:45:49.073578 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Jul 9 23:45:49.073583 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 9 23:45:49.073587 kernel: cpuidle: using governor menu
Jul 9 23:45:49.073602 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 9 23:45:49.073607 kernel: ASID allocator initialised with 32768 entries
Jul 9 23:45:49.073611 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 9 23:45:49.073616 kernel: Serial: AMBA PL011 UART driver
Jul 9 23:45:49.073621 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 9 23:45:49.073626 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 9 23:45:49.073631 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 9 23:45:49.073636 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 9 23:45:49.073641 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 9 23:45:49.073646 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 9 23:45:49.073650 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 9 23:45:49.073655 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 9 23:45:49.073659 kernel: ACPI: Added _OSI(Module Device)
Jul 9 23:45:49.073664 kernel: ACPI: Added _OSI(Processor Device)
Jul 9 23:45:49.073669 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 9 23:45:49.073674 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 9 23:45:49.073679 kernel: ACPI: Interpreter enabled
Jul 9 23:45:49.073684 kernel: ACPI: Using GIC for interrupt routing
Jul 9 23:45:49.073688 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 9 23:45:49.073693 kernel: printk: legacy console [ttyAMA0] enabled
Jul 9 23:45:49.073698 kernel: printk: legacy bootconsole [pl11] disabled
Jul 9 23:45:49.073702 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 9 23:45:49.073707 kernel: ACPI: CPU0 has been hot-added
Jul 9 23:45:49.073712 kernel: ACPI: CPU1 has been hot-added
Jul 9 23:45:49.073717 kernel: iommu: Default domain type: Translated
Jul 9 23:45:49.073722 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 9 23:45:49.073727 kernel: efivars: Registered efivars operations
Jul 9 23:45:49.073731 kernel: vgaarb: loaded
Jul 9 23:45:49.073736 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 9 23:45:49.073741 kernel: VFS: Disk quotas dquot_6.6.0
Jul 9 23:45:49.073745 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 9 23:45:49.073750 kernel: pnp: PnP ACPI init
Jul 9 23:45:49.073755 kernel: pnp: PnP ACPI: found 0 devices
Jul 9 23:45:49.073760 kernel: NET: Registered PF_INET protocol family
Jul 9 23:45:49.073765 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 9 23:45:49.073770 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 9 23:45:49.073774 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 9 23:45:49.073779 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 9 23:45:49.073784 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 9 23:45:49.073789 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 9 23:45:49.073793 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 23:45:49.073798 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 23:45:49.073803 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 9 23:45:49.073808 kernel: PCI: CLS 0 bytes, default 64
Jul 9 23:45:49.073813 kernel: kvm [1]: HYP mode not available
Jul 9 23:45:49.073817 kernel: Initialise system trusted keyrings
Jul 9 23:45:49.073822 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 9 23:45:49.073827 kernel: Key type asymmetric registered
Jul 9 23:45:49.073831 kernel: Asymmetric key parser 'x509' registered
Jul 9 23:45:49.073836 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 9 23:45:49.073840 kernel: io scheduler mq-deadline registered
Jul 9 23:45:49.073846 kernel: io scheduler kyber registered
Jul 9 23:45:49.073850 kernel: io scheduler bfq registered
Jul 9 23:45:49.073855 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 9 23:45:49.073860 kernel: thunder_xcv, ver 1.0
Jul 9 23:45:49.073864 kernel: thunder_bgx, ver 1.0
Jul 9 23:45:49.073869 kernel: nicpf, ver 1.0
Jul 9 23:45:49.073874 kernel: nicvf, ver 1.0
Jul 9 23:45:49.073973 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 9 23:45:49.074024 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-09T23:45:48 UTC (1752104748)
Jul 9 23:45:49.074030 kernel: efifb: probing for efifb
Jul 9 23:45:49.074035 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 9 23:45:49.074040 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 9 23:45:49.074045 kernel: efifb: scrolling: redraw
Jul 9 23:45:49.074049 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 9 23:45:49.074054 kernel: Console: switching to colour frame buffer device 128x48
Jul 9 23:45:49.074059 kernel: fb0: EFI VGA frame buffer device
Jul 9 23:45:49.074063 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 9 23:45:49.074069 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 9 23:45:49.074074 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 9 23:45:49.074079 kernel: watchdog: NMI not fully supported
Jul 9 23:45:49.074083 kernel: watchdog: Hard watchdog permanently disabled
Jul 9 23:45:49.074088 kernel: NET: Registered PF_INET6 protocol family
Jul 9 23:45:49.074093 kernel: Segment Routing with IPv6
Jul 9 23:45:49.074097 kernel: In-situ OAM (IOAM) with IPv6
Jul 9 23:45:49.074102 kernel: NET: Registered PF_PACKET protocol family
Jul 9 23:45:49.074107 kernel: Key type dns_resolver registered
Jul 9 23:45:49.074112 kernel: registered taskstats version 1
Jul 9 23:45:49.074117 kernel: Loading compiled-in X.509 certificates
Jul 9 23:45:49.074121 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 11eff9deb028731c4f89f27f6fac8d1c08902e5a'
Jul 9 23:45:49.074126 kernel: Demotion targets for Node 0: null
Jul 9 23:45:49.074131 kernel: Key type .fscrypt registered
Jul 9 23:45:49.074135 kernel: Key type fscrypt-provisioning registered
Jul 9 23:45:49.074140 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 9 23:45:49.074145 kernel: ima: Allocated hash algorithm: sha1
Jul 9 23:45:49.074149 kernel: ima: No architecture policies found
Jul 9 23:45:49.074155 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 9 23:45:49.074160 kernel: clk: Disabling unused clocks
Jul 9 23:45:49.074164 kernel: PM: genpd: Disabling unused power domains
Jul 9 23:45:49.074169 kernel: Warning: unable to open an initial console.
Jul 9 23:45:49.074174 kernel: Freeing unused kernel memory: 39488K
Jul 9 23:45:49.074178 kernel: Run /init as init process
Jul 9 23:45:49.074183 kernel: with arguments:
Jul 9 23:45:49.074188 kernel: /init
Jul 9 23:45:49.074192 kernel: with environment:
Jul 9 23:45:49.074198 kernel: HOME=/
Jul 9 23:45:49.074202 kernel: TERM=linux
Jul 9 23:45:49.074207 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 9 23:45:49.074212 systemd[1]: Successfully made /usr/ read-only.
Jul 9 23:45:49.074219 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 23:45:49.074225 systemd[1]: Detected virtualization microsoft.
Jul 9 23:45:49.074230 systemd[1]: Detected architecture arm64.
Jul 9 23:45:49.074235 systemd[1]: Running in initrd.
Jul 9 23:45:49.074240 systemd[1]: No hostname configured, using default hostname.
Jul 9 23:45:49.074246 systemd[1]: Hostname set to .
Jul 9 23:45:49.074251 systemd[1]: Initializing machine ID from random generator.
Jul 9 23:45:49.074256 systemd[1]: Queued start job for default target initrd.target.
Jul 9 23:45:49.074261 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 23:45:49.074266 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 23:45:49.074271 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 9 23:45:49.074277 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 23:45:49.074283 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 9 23:45:49.074288 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 9 23:45:49.074294 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 9 23:45:49.074299 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 9 23:45:49.074304 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 23:45:49.074309 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 23:45:49.074315 systemd[1]: Reached target paths.target - Path Units.
Jul 9 23:45:49.074320 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 23:45:49.074325 systemd[1]: Reached target swap.target - Swaps.
Jul 9 23:45:49.074330 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 23:45:49.074336 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 9 23:45:49.074341 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 9 23:45:49.074346 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 9 23:45:49.074351 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 9 23:45:49.074356 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 23:45:49.074362 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 23:45:49.074367 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 23:45:49.074372 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 23:45:49.074377 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 9 23:45:49.074382 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 23:45:49.074388 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 9 23:45:49.074393 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 9 23:45:49.074398 systemd[1]: Starting systemd-fsck-usr.service...
Jul 9 23:45:49.074404 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 23:45:49.074409 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 23:45:49.074424 systemd-journald[224]: Collecting audit messages is disabled.
Jul 9 23:45:49.074437 systemd-journald[224]: Journal started
Jul 9 23:45:49.074452 systemd-journald[224]: Runtime Journal (/run/log/journal/b8ff60718f6f4d3f854107d1d865ce7c) is 8M, max 78.5M, 70.5M free.
Jul 9 23:45:49.087889 systemd-modules-load[226]: Inserted module 'overlay'
Jul 9 23:45:49.095195 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 23:45:49.111994 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 9 23:45:49.112028 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 23:45:49.112619 kernel: Bridge firewalling registered
Jul 9 23:45:49.114484 systemd-modules-load[226]: Inserted module 'br_netfilter'
Jul 9 23:45:49.122490 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 9 23:45:49.132848 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 23:45:49.140896 systemd[1]: Finished systemd-fsck-usr.service.
Jul 9 23:45:49.149815 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 23:45:49.158055 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:45:49.171451 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 9 23:45:49.185648 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 23:45:49.193910 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 9 23:45:49.212550 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 23:45:49.226922 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 23:45:49.232649 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 23:45:49.244473 systemd-tmpfiles[253]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 9 23:45:49.246616 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 23:45:49.256776 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 23:45:49.274734 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 9 23:45:49.294750 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 23:45:49.310430 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=da23c3aa7de24c290e5e9aff0a0fccd6a322ecaa9bbfc71c29b2f39446459116
Jul 9 23:45:49.311022 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 23:45:49.359634 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 23:45:49.380432 systemd-resolved[261]: Positive Trust Anchors:
Jul 9 23:45:49.380449 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 23:45:49.380469 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 23:45:49.382154 systemd-resolved[261]: Defaulting to hostname 'linux'.
Jul 9 23:45:49.383755 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 23:45:49.395355 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 23:45:49.445606 kernel: SCSI subsystem initialized
Jul 9 23:45:49.451603 kernel: Loading iSCSI transport class v2.0-870.
Jul 9 23:45:49.458624 kernel: iscsi: registered transport (tcp)
Jul 9 23:45:49.472308 kernel: iscsi: registered transport (qla4xxx)
Jul 9 23:45:49.472323 kernel: QLogic iSCSI HBA Driver
Jul 9 23:45:49.485359 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 9 23:45:49.509828 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 23:45:49.516749 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 9 23:45:49.567116 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 9 23:45:49.573738 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 9 23:45:49.632604 kernel: raid6: neonx8 gen() 18556 MB/s
Jul 9 23:45:49.651598 kernel: raid6: neonx4 gen() 18562 MB/s
Jul 9 23:45:49.671598 kernel: raid6: neonx2 gen() 17087 MB/s
Jul 9 23:45:49.691598 kernel: raid6: neonx1 gen() 15105 MB/s
Jul 9 23:45:49.710597 kernel: raid6: int64x8 gen() 10536 MB/s
Jul 9 23:45:49.729598 kernel: raid6: int64x4 gen() 10617 MB/s
Jul 9 23:45:49.749680 kernel: raid6: int64x2 gen() 8988 MB/s
Jul 9 23:45:49.771237 kernel: raid6: int64x1 gen() 7031 MB/s
Jul 9 23:45:49.771258 kernel: raid6: using algorithm neonx4 gen() 18562 MB/s
Jul 9 23:45:49.792879 kernel: raid6: .... xor() 15166 MB/s, rmw enabled
Jul 9 23:45:49.792887 kernel: raid6: using neon recovery algorithm
Jul 9 23:45:49.799598 kernel: xor: measuring software checksum speed
Jul 9 23:45:49.804591 kernel: 8regs : 27428 MB/sec
Jul 9 23:45:49.804598 kernel: 32regs : 28810 MB/sec
Jul 9 23:45:49.807135 kernel: arm64_neon : 37656 MB/sec
Jul 9 23:45:49.810134 kernel: xor: using function: arm64_neon (37656 MB/sec)
Jul 9 23:45:49.848606 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 9 23:45:49.853661 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 9 23:45:49.862543 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 23:45:49.888433 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Jul 9 23:45:49.893340 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 23:45:49.906515 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 9 23:45:49.935781 dracut-pre-trigger[488]: rd.md=0: removing MD RAID activation
Jul 9 23:45:49.955544 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 9 23:45:49.961580 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 9 23:45:50.008058 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 23:45:50.020415 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 9 23:45:50.067611 kernel: hv_vmbus: Vmbus version:5.3
Jul 9 23:45:50.096070 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 9 23:45:50.096103 kernel: hv_vmbus: registering driver hid_hyperv
Jul 9 23:45:50.096110 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 9 23:45:50.091166 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 23:45:50.117632 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 9 23:45:50.117649 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jul 9 23:45:50.117658 kernel: hv_vmbus: registering driver hv_netvsc
Jul 9 23:45:50.091288 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:45:50.152860 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 9 23:45:50.152995 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jul 9 23:45:50.153004 kernel: PTP clock support registered
Jul 9 23:45:50.153010 kernel: hv_utils: Registering HyperV Utility Driver
Jul 9 23:45:50.153016 kernel: hv_vmbus: registering driver hv_utils
Jul 9 23:45:50.124926 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 23:45:50.171730 kernel: hv_vmbus: registering driver hv_storvsc
Jul 9 23:45:50.171751 kernel: hv_utils: Shutdown IC version 3.2
Jul 9 23:45:50.163937 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 23:45:49.788945 kernel: hv_utils: TimeSync IC version 4.0
Jul 9 23:45:49.804535 kernel: hv_utils: Heartbeat IC version 3.0
Jul 9 23:45:49.804594 kernel: scsi host0: storvsc_host_t
Jul 9 23:45:49.805049 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 9 23:45:49.805108 kernel: scsi host1: storvsc_host_t
Jul 9 23:45:49.805988 systemd-journald[224]: Time jumped backwards, rotating.
Jul 9 23:45:49.806025 kernel: hv_netvsc 000d3af6-2acf-000d-3af6-2acf000d3af6 eth0: VF slot 1 added
Jul 9 23:45:49.806128 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jul 9 23:45:49.773574 systemd-resolved[261]: Clock change detected. Flushing caches.
Jul 9 23:45:49.779025 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 9 23:45:49.863914 kernel: hv_vmbus: registering driver hv_pci
Jul 9 23:45:49.863936 kernel: hv_pci f5e1ae20-39eb-4f40-a397-87b30b3da903: PCI VMBus probing: Using version 0x10004
Jul 9 23:45:49.864083 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 9 23:45:49.864165 kernel: hv_pci f5e1ae20-39eb-4f40-a397-87b30b3da903: PCI host bridge to bus 39eb:00
Jul 9 23:45:49.864223 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 9 23:45:49.864287 kernel: pci_bus 39eb:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 9 23:45:49.864353 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 9 23:45:49.864413 kernel: pci_bus 39eb:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 9 23:45:49.864466 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 9 23:45:49.782534 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 23:45:49.881666 kernel: pci 39eb:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Jul 9 23:45:49.881698 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 9 23:45:49.881806 kernel: pci 39eb:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 9 23:45:49.881818 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 9 23:45:49.782611 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:45:49.895005 kernel: pci 39eb:00:02.0: enabling Extended Tags
Jul 9 23:45:49.813125 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 23:45:49.906353 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#269 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 9 23:45:49.909285 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:45:49.928750 kernel: pci 39eb:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 39eb:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Jul 9 23:45:49.928902 kernel: pci_bus 39eb:00: busn_res: [bus 00-ff] end is updated to 00
Jul 9 23:45:49.937680 kernel: pci 39eb:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Jul 9 23:45:49.945887 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 9 23:45:49.945915 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 9 23:45:49.946557 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 9 23:45:49.952669 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 9 23:45:49.954214 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 9 23:45:49.972088 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#89 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 9 23:45:49.997056 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#306 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 9 23:45:50.021403 kernel: mlx5_core 39eb:00:02.0: enabling device (0000 -> 0002)
Jul 9 23:45:50.030467 kernel: mlx5_core 39eb:00:02.0: PTM is not supported by PCIe
Jul 9 23:45:50.030615 kernel: mlx5_core 39eb:00:02.0: firmware version: 16.30.5006
Jul 9 23:45:50.204321 kernel: hv_netvsc 000d3af6-2acf-000d-3af6-2acf000d3af6 eth0: VF registering: eth1
Jul 9 23:45:50.204528 kernel: mlx5_core 39eb:00:02.0 eth1: joined to eth0
Jul 9 23:45:50.209880 kernel: mlx5_core 39eb:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 9 23:45:50.219125 kernel: mlx5_core 39eb:00:02.0 enP14827s1: renamed from eth1
Jul 9 23:45:50.421829 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 9 23:45:50.536822 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 9 23:45:50.542442 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 9 23:45:50.559546 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 9 23:45:50.565929 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 9 23:45:50.609283 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 9 23:45:50.715062 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 9 23:45:50.720438 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 9 23:45:50.732834 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 23:45:50.743226 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 9 23:45:50.753430 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 9 23:45:50.776287 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 9 23:45:51.606143 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#120 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 9 23:45:51.619113 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 9 23:45:51.619154 disk-uuid[649]: The operation has completed successfully.
Jul 9 23:45:51.679362 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 9 23:45:51.679446 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 9 23:45:51.707885 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 9 23:45:51.721002 sh[826]: Success
Jul 9 23:45:51.757782 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 9 23:45:51.757847 kernel: device-mapper: uevent: version 1.0.3
Jul 9 23:45:51.758058 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 9 23:45:51.772065 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 9 23:45:51.952229 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 9 23:45:51.959524 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 9 23:45:51.971773 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 9 23:45:51.997426 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 9 23:45:51.997462 kernel: BTRFS: device fsid 0f8170d9-c2a5-4c49-82bc-4e538bfc9b9b devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (844)
Jul 9 23:45:52.007289 kernel: BTRFS info (device dm-0): first mount of filesystem 0f8170d9-c2a5-4c49-82bc-4e538bfc9b9b
Jul 9 23:45:52.007311 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 9 23:45:52.010665 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 9 23:45:52.301650 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 9 23:45:52.308802 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 9 23:45:52.317838 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 9 23:45:52.326651 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 9 23:45:52.340425 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 9 23:45:52.366763 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (873)
Jul 9 23:45:52.366804 kernel: BTRFS info (device sda6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce
Jul 9 23:45:52.375733 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 9 23:45:52.378895 kernel: BTRFS info (device sda6): using free-space-tree
Jul 9 23:45:52.402059 kernel: BTRFS info (device sda6): last unmount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce
Jul 9 23:45:52.402858 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 9 23:45:52.408696 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 9 23:45:52.458068 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 9 23:45:52.475621 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 9 23:45:52.505988 systemd-networkd[1014]: lo: Link UP
Jul 9 23:45:52.505997 systemd-networkd[1014]: lo: Gained carrier
Jul 9 23:45:52.507563 systemd-networkd[1014]: Enumeration completed
Jul 9 23:45:52.509051 systemd-networkd[1014]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 23:45:52.509055 systemd-networkd[1014]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 9 23:45:52.510060 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 9 23:45:52.519645 systemd[1]: Reached target network.target - Network.
Jul 9 23:45:52.587060 kernel: mlx5_core 39eb:00:02.0 enP14827s1: Link up
Jul 9 23:45:52.621365 kernel: hv_netvsc 000d3af6-2acf-000d-3af6-2acf000d3af6 eth0: Data path switched to VF: enP14827s1
Jul 9 23:45:52.621097 systemd-networkd[1014]: enP14827s1: Link UP
Jul 9 23:45:52.621170 systemd-networkd[1014]: eth0: Link UP
Jul 9 23:45:52.621258 systemd-networkd[1014]: eth0: Gained carrier
Jul 9 23:45:52.621266 systemd-networkd[1014]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 23:45:52.638797 systemd-networkd[1014]: enP14827s1: Gained carrier
Jul 9 23:45:52.647073 systemd-networkd[1014]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 9 23:45:53.473545 ignition[952]: Ignition 2.21.0
Jul 9 23:45:53.473560 ignition[952]: Stage: fetch-offline
Jul 9 23:45:53.479236 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 9 23:45:53.473632 ignition[952]: no configs at "/usr/lib/ignition/base.d"
Jul 9 23:45:53.485672 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 9 23:45:53.473638 ignition[952]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 9 23:45:53.473734 ignition[952]: parsed url from cmdline: ""
Jul 9 23:45:53.473736 ignition[952]: no config URL provided
Jul 9 23:45:53.473739 ignition[952]: reading system config file "/usr/lib/ignition/user.ign"
Jul 9 23:45:53.473744 ignition[952]: no config at "/usr/lib/ignition/user.ign"
Jul 9 23:45:53.473747 ignition[952]: failed to fetch config: resource requires networking
Jul 9 23:45:53.474072 ignition[952]: Ignition finished successfully
Jul 9 23:45:53.524106 ignition[1034]: Ignition 2.21.0
Jul 9 23:45:53.524114 ignition[1034]: Stage: fetch
Jul 9 23:45:53.524435 ignition[1034]: no configs at "/usr/lib/ignition/base.d"
Jul 9 23:45:53.524446 ignition[1034]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 9 23:45:53.524528 ignition[1034]: parsed url from cmdline: ""
Jul 9 23:45:53.524531 ignition[1034]: no config URL provided
Jul 9 23:45:53.524535 ignition[1034]: reading system config file "/usr/lib/ignition/user.ign"
Jul 9 23:45:53.524540 ignition[1034]: no config at "/usr/lib/ignition/user.ign"
Jul 9 23:45:53.524564 ignition[1034]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 9 23:45:53.635299 ignition[1034]: GET result: OK
Jul 9 23:45:53.635359 ignition[1034]: config has been read from IMDS userdata
Jul 9 23:45:53.637930 unknown[1034]: fetched base config from "system"
Jul 9 23:45:53.635385 ignition[1034]: parsing config with SHA512: d949ad2f1255206d0dd74f9a5a74ea574c626e309fc1543227a58cea5cfb0aee1113f04a361935fdec1dcb017db721b4412ecb1173f1c7e5e25ecab0348d52a2
Jul 9 23:45:53.637935 unknown[1034]: fetched base config from "system"
Jul 9 23:45:53.638170 ignition[1034]: fetch: fetch complete
Jul 9 23:45:53.637938 unknown[1034]: fetched user config from "azure"
Jul 9 23:45:53.638173 ignition[1034]: fetch: fetch passed
Jul 9 23:45:53.640658 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 9 23:45:53.638218 ignition[1034]: Ignition finished successfully
Jul 9 23:45:53.649116 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 9 23:45:53.687907 ignition[1041]: Ignition 2.21.0
Jul 9 23:45:53.690821 ignition[1041]: Stage: kargs
Jul 9 23:45:53.691138 ignition[1041]: no configs at "/usr/lib/ignition/base.d"
Jul 9 23:45:53.695226 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 9 23:45:53.691146 ignition[1041]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 9 23:45:53.703957 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 9 23:45:53.692011 ignition[1041]: kargs: kargs passed
Jul 9 23:45:53.692064 ignition[1041]: Ignition finished successfully
Jul 9 23:45:53.731119 ignition[1047]: Ignition 2.21.0
Jul 9 23:45:53.731131 ignition[1047]: Stage: disks
Jul 9 23:45:53.733326 ignition[1047]: no configs at "/usr/lib/ignition/base.d"
Jul 9 23:45:53.735839 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 9 23:45:53.733337 ignition[1047]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 9 23:45:53.741901 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 9 23:45:53.733994 ignition[1047]: disks: disks passed
Jul 9 23:45:53.749972 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 9 23:45:53.734315 ignition[1047]: Ignition finished successfully
Jul 9 23:45:53.758815 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 9 23:45:53.767559 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 9 23:45:53.775937 systemd[1]: Reached target basic.target - Basic System.
Jul 9 23:45:53.784860 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 9 23:45:53.862348 systemd-fsck[1055]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Jul 9 23:45:53.870630 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 9 23:45:53.877201 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 9 23:45:54.031166 systemd-networkd[1014]: eth0: Gained IPv6LL
Jul 9 23:45:54.086053 kernel: EXT4-fs (sda9): mounted filesystem 961fd3ec-635c-4a87-8aef-ca8f12cd8be8 r/w with ordered data mode. Quota mode: none.
Jul 9 23:45:54.087141 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 9 23:45:54.091228 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 9 23:45:54.099209 systemd-networkd[1014]: enP14827s1: Gained IPv6LL
Jul 9 23:45:54.114128 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 9 23:45:54.121665 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 9 23:45:54.132285 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 9 23:45:54.143662 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 9 23:45:54.143782 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 9 23:45:54.159174 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 9 23:45:54.169960 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 9 23:45:54.187017 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1069)
Jul 9 23:45:54.187069 kernel: BTRFS info (device sda6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce
Jul 9 23:45:54.197187 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 9 23:45:54.200447 kernel: BTRFS info (device sda6): using free-space-tree
Jul 9 23:45:54.203119 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 9 23:45:54.588598 coreos-metadata[1071]: Jul 09 23:45:54.588 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 9 23:45:54.596577 coreos-metadata[1071]: Jul 09 23:45:54.596 INFO Fetch successful
Jul 9 23:45:54.601238 coreos-metadata[1071]: Jul 09 23:45:54.596 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 9 23:45:54.609375 coreos-metadata[1071]: Jul 09 23:45:54.605 INFO Fetch successful
Jul 9 23:45:54.619121 coreos-metadata[1071]: Jul 09 23:45:54.619 INFO wrote hostname ci-4344.1.1-n-4a8bce7214 to /sysroot/etc/hostname
Jul 9 23:45:54.626027 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 9 23:45:54.826525 initrd-setup-root[1100]: cut: /sysroot/etc/passwd: No such file or directory
Jul 9 23:45:54.859017 initrd-setup-root[1107]: cut: /sysroot/etc/group: No such file or directory
Jul 9 23:45:54.863483 initrd-setup-root[1114]: cut: /sysroot/etc/shadow: No such file or directory
Jul 9 23:45:54.867865 initrd-setup-root[1121]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 9 23:45:55.671941 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 9 23:45:55.677878 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 9 23:45:55.693590 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 9 23:45:55.703975 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 9 23:45:55.715058 kernel: BTRFS info (device sda6): last unmount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce
Jul 9 23:45:55.730095 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 9 23:45:55.742468 ignition[1190]: INFO : Ignition 2.21.0
Jul 9 23:45:55.746275 ignition[1190]: INFO : Stage: mount
Jul 9 23:45:55.746275 ignition[1190]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 23:45:55.746275 ignition[1190]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 9 23:45:55.759211 ignition[1190]: INFO : mount: mount passed
Jul 9 23:45:55.759211 ignition[1190]: INFO : Ignition finished successfully
Jul 9 23:45:55.753243 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 9 23:45:55.764932 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 9 23:45:55.786346 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 9 23:45:55.812383 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1200)
Jul 9 23:45:55.812420 kernel: BTRFS info (device sda6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce
Jul 9 23:45:55.817230 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 9 23:45:55.820340 kernel: BTRFS info (device sda6): using free-space-tree
Jul 9 23:45:55.822546 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 9 23:45:55.850811 ignition[1217]: INFO : Ignition 2.21.0
Jul 9 23:45:55.850811 ignition[1217]: INFO : Stage: files
Jul 9 23:45:55.856623 ignition[1217]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 23:45:55.856623 ignition[1217]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 9 23:45:55.856623 ignition[1217]: DEBUG : files: compiled without relabeling support, skipping
Jul 9 23:45:55.887314 ignition[1217]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 9 23:45:55.887314 ignition[1217]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 9 23:45:55.944087 ignition[1217]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 9 23:45:55.949584 ignition[1217]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 9 23:45:55.949584 ignition[1217]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 9 23:45:55.944446 unknown[1217]: wrote ssh authorized keys file for user: core
Jul 9 23:45:55.963955 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 9 23:45:55.971638 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 9 23:45:56.008873 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 9 23:45:56.156313 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 9 23:45:56.164367 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 9 23:45:56.164367 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 9 23:45:56.486468 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 9 23:45:56.559686 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 9 23:45:56.567136 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 9 23:45:56.567136 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 9 23:45:56.567136 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 9 23:45:56.567136 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 9 23:45:56.567136 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 9 23:45:56.567136 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 9 23:45:56.567136 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 9 23:45:56.567136 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 9 23:45:56.624342 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 9 23:45:56.624342 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 9 23:45:56.624342 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 9 23:45:56.624342 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 9 23:45:56.624342 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 9 23:45:56.624342 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 9 23:45:57.183437 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 9 23:45:57.374235 ignition[1217]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 9 23:45:57.374235 ignition[1217]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 9 23:45:57.388082 ignition[1217]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 9 23:45:57.396267 ignition[1217]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 9 23:45:57.396267 ignition[1217]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 9 23:45:57.396267 ignition[1217]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 9 23:45:57.396267 ignition[1217]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 9 23:45:57.396267 ignition[1217]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 9 23:45:57.396267 ignition[1217]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 9 23:45:57.396267 ignition[1217]: INFO : files: files passed
Jul 9 23:45:57.396267 ignition[1217]: INFO : Ignition finished successfully
Jul 9 23:45:57.404278 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 9 23:45:57.414977 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 9 23:45:57.445881 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 9 23:45:57.453913 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 9 23:45:57.454033 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 9 23:45:57.487972 initrd-setup-root-after-ignition[1247]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 23:45:57.487972 initrd-setup-root-after-ignition[1247]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 23:45:57.501602 initrd-setup-root-after-ignition[1251]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 23:45:57.496573 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 9 23:45:57.506825 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 9 23:45:57.517633 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 9 23:45:57.570538 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 9 23:45:57.570627 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 9 23:45:57.576433 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 9 23:45:57.584205 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 9 23:45:57.593655 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 9 23:45:57.594361 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 9 23:45:57.628946 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 9 23:45:57.635282 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 9 23:45:57.660084 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 9 23:45:57.665227 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 23:45:57.674720 systemd[1]: Stopped target timers.target - Timer Units.
Jul 9 23:45:57.683044 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 9 23:45:57.683133 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 9 23:45:57.695017 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 9 23:45:57.699804 systemd[1]: Stopped target basic.target - Basic System.
Jul 9 23:45:57.708068 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 9 23:45:57.716552 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 9 23:45:57.724517 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 9 23:45:57.733423 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 9 23:45:57.742307 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 9 23:45:57.751037 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 9 23:45:57.760925 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 9 23:45:57.769694 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 9 23:45:57.779437 systemd[1]: Stopped target swap.target - Swaps.
Jul 9 23:45:57.786741 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 9 23:45:57.786854 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 9 23:45:57.799007 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 9 23:45:57.803583 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 23:45:57.812972 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 9 23:45:57.813055 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 23:45:57.822412 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 9 23:45:57.822515 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 9 23:45:57.835326 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 9 23:45:57.835412 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 9 23:45:57.841427 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 9 23:45:57.841508 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 9 23:45:57.849234 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 9 23:45:57.903100 ignition[1271]: INFO : Ignition 2.21.0
Jul 9 23:45:57.903100 ignition[1271]: INFO : Stage: umount
Jul 9 23:45:57.903100 ignition[1271]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 23:45:57.903100 ignition[1271]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 9 23:45:57.903100 ignition[1271]: INFO : umount: umount passed
Jul 9 23:45:57.903100 ignition[1271]: INFO : Ignition finished successfully
Jul 9 23:45:57.849305 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 9 23:45:57.861079 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 9 23:45:57.889189 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 9 23:45:57.899777 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 9 23:45:57.899925 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 23:45:57.905131 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 9 23:45:57.905232 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 9 23:45:57.918353 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 9 23:45:57.918431 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 9 23:45:57.933307 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 9 23:45:57.935724 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 9 23:45:57.935786 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 9 23:45:57.940266 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 9 23:45:57.940308 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 9 23:45:57.944433 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 9 23:45:57.944464 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 9 23:45:57.952367 systemd[1]: Stopped target network.target - Network.
Jul 9 23:45:57.961859 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 9 23:45:57.961915 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 9 23:45:57.971225 systemd[1]: Stopped target paths.target - Path Units.
Jul 9 23:45:57.978486 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 9 23:45:57.982805 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 23:45:57.988155 systemd[1]: Stopped target slices.target - Slice Units.
Jul 9 23:45:57.996691 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 9 23:45:58.004863 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 9 23:45:58.004904 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 9 23:45:58.013542 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 9 23:45:58.013568 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 9 23:45:58.021729 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 9 23:45:58.021773 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 9 23:45:58.025686 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 9 23:45:58.025712 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 9 23:45:58.042795 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 9 23:45:58.047175 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 9 23:45:58.055667 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 9 23:45:58.055734 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 9 23:45:58.066393 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 9 23:45:58.068064 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 9 23:45:58.080923 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 9 23:45:58.081134 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 9 23:45:58.081286 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 9 23:45:58.281525 kernel: hv_netvsc 000d3af6-2acf-000d-3af6-2acf000d3af6 eth0: Data path switched from VF: enP14827s1
Jul 9 23:45:58.092430 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 9 23:45:58.093861 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 9 23:45:58.101875 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 9 23:45:58.101915 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 23:45:58.120184 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 9 23:45:58.136241 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 9 23:45:58.136315 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 9 23:45:58.146575 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 9 23:45:58.146631 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 9 23:45:58.151347 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 9 23:45:58.151378 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 9 23:45:58.161189 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 9 23:45:58.161231 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 23:45:58.173768 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 23:45:58.182256 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 9 23:45:58.182311 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 9 23:45:58.204243 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 9 23:45:58.208240 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 9 23:45:58.217557 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 9 23:45:58.217669 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 23:45:58.227888 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 9 23:45:58.227943 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 9 23:45:58.235846 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 9 23:45:58.235877 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 23:45:58.244859 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 9 23:45:58.244900 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 9 23:45:58.257729 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 9 23:45:58.257775 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 9 23:45:58.277101 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 9 23:45:58.277145 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 23:45:58.291238 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 9 23:45:58.291289 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 9 23:45:58.300863 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 9 23:45:58.315970 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 9 23:45:58.316032 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 23:45:58.325453 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 9 23:45:58.526982 systemd-journald[224]: Received SIGTERM from PID 1 (systemd).
Jul 9 23:45:58.325490 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 23:45:58.343753 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 9 23:45:58.343794 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 23:45:58.352948 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 9 23:45:58.352984 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 23:45:58.359368 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 23:45:58.359399 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:45:58.376691 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 9 23:45:58.376735 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 9 23:45:58.376764 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 9 23:45:58.376836 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 9 23:45:58.377234 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 9 23:45:58.379057 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 9 23:45:58.390286 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 9 23:45:58.390382 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 9 23:45:58.399640 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 9 23:45:58.411841 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 9 23:45:58.441929 systemd[1]: Switching root.
Jul 9 23:45:58.616926 systemd-journald[224]: Journal stopped
Jul 9 23:46:03.060415 kernel: SELinux: policy capability network_peer_controls=1
Jul 9 23:46:03.060432 kernel: SELinux: policy capability open_perms=1
Jul 9 23:46:03.060441 kernel: SELinux: policy capability extended_socket_class=1
Jul 9 23:46:03.060446 kernel: SELinux: policy capability always_check_network=0
Jul 9 23:46:03.060453 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 9 23:46:03.060458 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 9 23:46:03.060464 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 9 23:46:03.060470 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 9 23:46:03.060475 kernel: SELinux: policy capability userspace_initial_context=0
Jul 9 23:46:03.060481 kernel: audit: type=1403 audit(1752104759.752:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 9 23:46:03.060488 systemd[1]: Successfully loaded SELinux policy in 129.024ms.
Jul 9 23:46:03.060496 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.897ms.
Jul 9 23:46:03.060502 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 23:46:03.060508 systemd[1]: Detected virtualization microsoft.
Jul 9 23:46:03.060515 systemd[1]: Detected architecture arm64.
Jul 9 23:46:03.060522 systemd[1]: Detected first boot.
Jul 9 23:46:03.060528 systemd[1]: Hostname set to .
Jul 9 23:46:03.060534 systemd[1]: Initializing machine ID from random generator.
Jul 9 23:46:03.060541 zram_generator::config[1314]: No configuration found.
Jul 9 23:46:03.060548 kernel: NET: Registered PF_VSOCK protocol family
Jul 9 23:46:03.060554 systemd[1]: Populated /etc with preset unit settings.
Jul 9 23:46:03.060560 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 9 23:46:03.060567 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 9 23:46:03.060573 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 9 23:46:03.060579 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 9 23:46:03.060585 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 9 23:46:03.060591 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 9 23:46:03.060597 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 9 23:46:03.060604 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 9 23:46:03.060611 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 9 23:46:03.060618 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 9 23:46:03.060624 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 9 23:46:03.060630 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 9 23:46:03.060637 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 23:46:03.060643 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 23:46:03.060649 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 9 23:46:03.060655 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 9 23:46:03.060661 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 9 23:46:03.060668 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 23:46:03.060674 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 9 23:46:03.060682 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 23:46:03.060688 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 23:46:03.060694 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 9 23:46:03.060700 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 9 23:46:03.060706 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 9 23:46:03.060713 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 9 23:46:03.060719 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 23:46:03.060725 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 9 23:46:03.060731 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 23:46:03.060737 systemd[1]: Reached target swap.target - Swaps.
Jul 9 23:46:03.060743 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 9 23:46:03.060749 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 9 23:46:03.060757 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 9 23:46:03.060763 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 23:46:03.060769 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 23:46:03.060776 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 23:46:03.060782 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 9 23:46:03.060788 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 9 23:46:03.060795 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 9 23:46:03.060802 systemd[1]: Mounting media.mount - External Media Directory...
Jul 9 23:46:03.060808 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 9 23:46:03.060814 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 9 23:46:03.060821 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 9 23:46:03.060828 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 9 23:46:03.060834 systemd[1]: Reached target machines.target - Containers.
Jul 9 23:46:03.060840 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 9 23:46:03.060848 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 23:46:03.060854 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 23:46:03.060860 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 9 23:46:03.060867 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 23:46:03.060873 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 9 23:46:03.060879 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 23:46:03.060885 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 9 23:46:03.060892 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 23:46:03.060898 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 9 23:46:03.060905 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 9 23:46:03.060912 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 9 23:46:03.060918 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 9 23:46:03.060924 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 9 23:46:03.060930 kernel: fuse: init (API version 7.41)
Jul 9 23:46:03.060936 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 23:46:03.060943 kernel: loop: module loaded
Jul 9 23:46:03.060949 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 23:46:03.060956 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 23:46:03.060963 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 9 23:46:03.060969 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 9 23:46:03.060975 kernel: ACPI: bus type drm_connector registered
Jul 9 23:46:03.060982 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 9 23:46:03.060997 systemd-journald[1418]: Collecting audit messages is disabled.
Jul 9 23:46:03.061012 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 9 23:46:03.061020 systemd-journald[1418]: Journal started
Jul 9 23:46:03.061033 systemd-journald[1418]: Runtime Journal (/run/log/journal/9de0e317e5934e759ca5104a8d73cf20) is 8M, max 78.5M, 70.5M free.
Jul 9 23:46:02.297379 systemd[1]: Queued start job for default target multi-user.target.
Jul 9 23:46:02.301429 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 9 23:46:02.301773 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 9 23:46:02.302008 systemd[1]: systemd-journald.service: Consumed 2.535s CPU time.
Jul 9 23:46:03.079804 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 9 23:46:03.079834 systemd[1]: Stopped verity-setup.service.
Jul 9 23:46:03.093620 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 23:46:03.096546 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 9 23:46:03.101172 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 9 23:46:03.106132 systemd[1]: Mounted media.mount - External Media Directory.
Jul 9 23:46:03.110876 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 9 23:46:03.115624 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 9 23:46:03.120511 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 9 23:46:03.124986 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 9 23:46:03.130500 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 23:46:03.136448 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 9 23:46:03.136576 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 9 23:46:03.142552 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 23:46:03.142690 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 23:46:03.147894 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 9 23:46:03.148073 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 9 23:46:03.152988 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 23:46:03.153355 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 23:46:03.160351 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 9 23:46:03.160510 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 9 23:46:03.166372 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 23:46:03.166519 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 23:46:03.173253 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 23:46:03.181703 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 23:46:03.188206 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 9 23:46:03.193791 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 9 23:46:03.199530 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 23:46:03.213949 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 9 23:46:03.220632 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 9 23:46:03.231081 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 9 23:46:03.236233 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 9 23:46:03.236261 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 9 23:46:03.241384 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 9 23:46:03.247794 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 9 23:46:03.252533 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 23:46:03.267069 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 9 23:46:03.277169 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 9 23:46:03.282020 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 23:46:03.290588 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 9 23:46:03.295272 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 23:46:03.298091 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 23:46:03.303695 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 9 23:46:03.310153 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 9 23:46:03.317399 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 9 23:46:03.324449 systemd-journald[1418]: Time spent on flushing to /var/log/journal/9de0e317e5934e759ca5104a8d73cf20 is 41.798ms for 940 entries.
Jul 9 23:46:03.324449 systemd-journald[1418]: System Journal (/var/log/journal/9de0e317e5934e759ca5104a8d73cf20) is 11.8M, max 2.6G, 2.6G free.
Jul 9 23:46:03.469304 systemd-journald[1418]: Received client request to flush runtime journal.
Jul 9 23:46:03.469340 systemd-journald[1418]: /var/log/journal/9de0e317e5934e759ca5104a8d73cf20/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Jul 9 23:46:03.469357 systemd-journald[1418]: Rotating system journal.
Jul 9 23:46:03.469373 kernel: loop0: detected capacity change from 0 to 28936
Jul 9 23:46:03.331998 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 9 23:46:03.341600 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 9 23:46:03.347462 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 9 23:46:03.373518 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 9 23:46:03.401949 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 23:46:03.453565 systemd-tmpfiles[1455]: ACLs are not supported, ignoring.
Jul 9 23:46:03.453573 systemd-tmpfiles[1455]: ACLs are not supported, ignoring.
Jul 9 23:46:03.457003 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 23:46:03.463356 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 9 23:46:03.472222 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 9 23:46:03.515616 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 9 23:46:03.516963 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 9 23:46:03.782073 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 9 23:46:03.908066 kernel: loop1: detected capacity change from 0 to 107312
Jul 9 23:46:03.917590 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 9 23:46:03.926231 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 23:46:03.948717 systemd-tmpfiles[1473]: ACLs are not supported, ignoring.
Jul 9 23:46:03.949005 systemd-tmpfiles[1473]: ACLs are not supported, ignoring.
Jul 9 23:46:03.952679 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 23:46:04.203207 kernel: loop2: detected capacity change from 0 to 207008
Jul 9 23:46:04.241112 kernel: loop3: detected capacity change from 0 to 138376
Jul 9 23:46:04.569746 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 9 23:46:04.576826 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 23:46:04.603529 systemd-udevd[1479]: Using default interface naming scheme 'v255'.
Jul 9 23:46:04.635056 kernel: loop4: detected capacity change from 0 to 28936
Jul 9 23:46:04.642053 kernel: loop5: detected capacity change from 0 to 107312
Jul 9 23:46:04.648074 kernel: loop6: detected capacity change from 0 to 207008
Jul 9 23:46:04.656052 kernel: loop7: detected capacity change from 0 to 138376
Jul 9 23:46:04.659671 (sd-merge)[1481]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jul 9 23:46:04.660021 (sd-merge)[1481]: Merged extensions into '/usr'.
Jul 9 23:46:04.662403 systemd[1]: Reload requested from client PID 1453 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 9 23:46:04.662499 systemd[1]: Reloading...
Jul 9 23:46:04.717111 zram_generator::config[1506]: No configuration found.
Jul 9 23:46:04.847242 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 23:46:04.883055 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 9 23:46:04.931048 kernel: mousedev: PS/2 mouse device common for all mice
Jul 9 23:46:04.963184 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 9 23:46:04.963319 systemd[1]: Reloading finished in 300 ms.
Jul 9 23:46:05.007931 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 23:46:05.017109 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 9 23:46:05.027209 kernel: hv_vmbus: registering driver hv_balloon
Jul 9 23:46:05.027264 kernel: hv_vmbus: registering driver hyperv_fb
Jul 9 23:46:05.027280 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jul 9 23:46:05.032404 kernel: hv_balloon: Memory hot add disabled on ARM64
Jul 9 23:46:05.049041 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jul 9 23:46:05.049102 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jul 9 23:46:05.053299 kernel: Console: switching to colour dummy device 80x25
Jul 9 23:46:05.057066 kernel: Console: switching to colour frame buffer device 128x48
Jul 9 23:46:05.061004 systemd[1]: Starting ensure-sysext.service...
Jul 9 23:46:05.067731 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 9 23:46:05.078421 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 23:46:05.086222 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 23:46:05.101988 systemd[1]: Reload requested from client PID 1626 ('systemctl') (unit ensure-sysext.service)...
Jul 9 23:46:05.101999 systemd[1]: Reloading...
Jul 9 23:46:05.106370 systemd-tmpfiles[1629]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 9 23:46:05.108319 systemd-tmpfiles[1629]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 9 23:46:05.108710 systemd-tmpfiles[1629]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 9 23:46:05.109286 systemd-tmpfiles[1629]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 9 23:46:05.111147 systemd-tmpfiles[1629]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 9 23:46:05.111890 systemd-tmpfiles[1629]: ACLs are not supported, ignoring.
Jul 9 23:46:05.112260 systemd-tmpfiles[1629]: ACLs are not supported, ignoring.
Jul 9 23:46:05.140480 systemd-tmpfiles[1629]: Detected autofs mount point /boot during canonicalization of boot.
Jul 9 23:46:05.140812 systemd-tmpfiles[1629]: Skipping /boot
Jul 9 23:46:05.158723 systemd-tmpfiles[1629]: Detected autofs mount point /boot during canonicalization of boot.
Jul 9 23:46:05.159708 systemd-tmpfiles[1629]: Skipping /boot
Jul 9 23:46:05.176077 kernel: MACsec IEEE 802.1AE
Jul 9 23:46:05.227062 zram_generator::config[1712]: No configuration found.
Jul 9 23:46:05.288895 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 23:46:05.367940 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 9 23:46:05.372924 systemd[1]: Reloading finished in 270 ms.
Jul 9 23:46:05.401178 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 23:46:05.428016 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 9 23:46:05.440224 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 9 23:46:05.444627 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 23:46:05.445690 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 23:46:05.459099 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 23:46:05.465736 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 23:46:05.470390 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 23:46:05.472074 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 9 23:46:05.477154 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 23:46:05.479210 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 9 23:46:05.486209 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 23:46:05.491233 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 9 23:46:05.502248 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 9 23:46:05.508890 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 23:46:05.509046 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 23:46:05.514378 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 23:46:05.518004 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 23:46:05.525350 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 23:46:05.525511 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 23:46:05.532164 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 23:46:05.532716 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:46:05.541110 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 9 23:46:05.541694 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 9 23:46:05.558412 systemd[1]: Finished ensure-sysext.service.
Jul 9 23:46:05.562459 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 9 23:46:05.570455 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 23:46:05.571425 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 23:46:05.585162 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 9 23:46:05.599333 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 23:46:05.607108 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 23:46:05.612931 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 23:46:05.612971 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 23:46:05.613012 systemd[1]: Reached target time-set.target - System Time Set.
Jul 9 23:46:05.620181 augenrules[1817]: No rules
Jul 9 23:46:05.622251 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 23:46:05.628354 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 9 23:46:05.637404 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 9 23:46:05.637550 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 9 23:46:05.641882 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 23:46:05.646749 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 23:46:05.652074 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 9 23:46:05.658555 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 9 23:46:05.659435 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 9 23:46:05.668282 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 23:46:05.670092 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 23:46:05.675449 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 23:46:05.675598 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 23:46:05.684107 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 23:46:05.684186 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 23:46:05.738460 systemd-resolved[1787]: Positive Trust Anchors:
Jul 9 23:46:05.738753 systemd-resolved[1787]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 23:46:05.738822 systemd-resolved[1787]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 23:46:05.741286 systemd-resolved[1787]: Using system hostname 'ci-4344.1.1-n-4a8bce7214'.
Jul 9 23:46:05.742577 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 23:46:05.747544 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 23:46:05.760164 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:46:05.767783 systemd-networkd[1627]: lo: Link UP
Jul 9 23:46:05.767791 systemd-networkd[1627]: lo: Gained carrier
Jul 9 23:46:05.771168 systemd-networkd[1627]: Enumeration completed
Jul 9 23:46:05.771262 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 9 23:46:05.772671 systemd-networkd[1627]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 23:46:05.772679 systemd-networkd[1627]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 9 23:46:05.776689 systemd[1]: Reached target network.target - Network.
Jul 9 23:46:05.781400 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 9 23:46:05.788185 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 9 23:46:05.831050 kernel: mlx5_core 39eb:00:02.0 enP14827s1: Link up
Jul 9 23:46:05.853054 kernel: hv_netvsc 000d3af6-2acf-000d-3af6-2acf000d3af6 eth0: Data path switched to VF: enP14827s1
Jul 9 23:46:05.853761 systemd-networkd[1627]: enP14827s1: Link UP
Jul 9 23:46:05.853831 systemd-networkd[1627]: eth0: Link UP
Jul 9 23:46:05.853833 systemd-networkd[1627]: eth0: Gained carrier
Jul 9 23:46:05.853845 systemd-networkd[1627]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 23:46:05.862376 systemd-networkd[1627]: enP14827s1: Gained carrier
Jul 9 23:46:05.864105 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 9 23:46:05.874119 systemd-networkd[1627]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 9 23:46:06.122631 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 9 23:46:06.128631 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 9 23:46:07.151347 systemd-networkd[1627]: eth0: Gained IPv6LL
Jul 9 23:46:07.155109 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 9 23:46:07.160712 systemd[1]: Reached target network-online.target - Network is Online.
Jul 9 23:46:07.343207 systemd-networkd[1627]: enP14827s1: Gained IPv6LL
Jul 9 23:46:08.362871 ldconfig[1448]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 9 23:46:08.379267 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 9 23:46:08.384940 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 9 23:46:08.409427 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 9 23:46:08.414755 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 9 23:46:08.420080 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 9 23:46:08.425083 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 9 23:46:08.430195 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 9 23:46:08.434199 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 9 23:46:08.439256 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 9 23:46:08.443992 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 9 23:46:08.444018 systemd[1]: Reached target paths.target - Path Units.
Jul 9 23:46:08.447474 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 23:46:08.451918 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 9 23:46:08.457491 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 9 23:46:08.462768 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 9 23:46:08.468086 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 9 23:46:08.472872 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 9 23:46:08.483604 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 9 23:46:08.487976 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 9 23:46:08.492959 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 9 23:46:08.497663 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 23:46:08.501728 systemd[1]: Reached target basic.target - Basic System.
Jul 9 23:46:08.505726 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 9 23:46:08.505747 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 9 23:46:08.507529 systemd[1]: Starting chronyd.service - NTP client/server...
Jul 9 23:46:08.522122 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 9 23:46:08.529174 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 9 23:46:08.534361 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 9 23:46:08.541167 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 9 23:46:08.554343 (chronyd)[1850]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jul 9 23:46:08.561136 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 9 23:46:08.568164 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 9 23:46:08.568304 jq[1858]: false
Jul 9 23:46:08.572248 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 9 23:46:08.573181 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jul 9 23:46:08.579362 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jul 9 23:46:08.580385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 23:46:08.585950 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 9 23:46:08.591244 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 9 23:46:08.597600 extend-filesystems[1859]: Found /dev/sda6
Jul 9 23:46:08.605241 KVP[1860]: KVP starting; pid is:1860
Jul 9 23:46:08.604305 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 9 23:46:08.606958 chronyd[1872]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jul 9 23:46:08.611739 kernel: hv_utils: KVP IC version 4.0
Jul 9 23:46:08.612883 KVP[1860]: KVP LIC Version: 3.1
Jul 9 23:46:08.612687 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 9 23:46:08.623567 extend-filesystems[1859]: Found /dev/sda9
Jul 9 23:46:08.619802 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 9 23:46:08.629161 extend-filesystems[1859]: Checking size of /dev/sda9
Jul 9 23:46:08.634246 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 9 23:46:08.645425 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 9 23:46:08.647274 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 9 23:46:08.649171 systemd[1]: Starting update-engine.service - Update Engine...
Jul 9 23:46:08.653740 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 9 23:46:08.664069 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 9 23:46:08.664849 jq[1888]: true
Jul 9 23:46:08.672049 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 9 23:46:08.672198 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 9 23:46:08.674363 systemd[1]: motdgen.service: Deactivated successfully.
Jul 9 23:46:08.674507 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 9 23:46:08.680370 extend-filesystems[1859]: Old size kept for /dev/sda9
Jul 9 23:46:08.687878 chronyd[1872]: Timezone right/UTC failed leap second check, ignoring
Jul 9 23:46:08.686751 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 9 23:46:08.688023 chronyd[1872]: Loaded seccomp filter (level 2)
Jul 9 23:46:08.686920 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 9 23:46:08.702622 systemd[1]: Started chronyd.service - NTP client/server.
Jul 9 23:46:08.708144 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 9 23:46:08.708302 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 9 23:46:08.714663 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 9 23:46:08.720127 update_engine[1887]: I20250709 23:46:08.720057 1887 main.cc:92] Flatcar Update Engine starting
Jul 9 23:46:08.739406 (ntainerd)[1903]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 9 23:46:08.743834 jq[1902]: true
Jul 9 23:46:08.759281 tar[1896]: linux-arm64/LICENSE
Jul 9 23:46:08.762539 tar[1896]: linux-arm64/helm
Jul 9 23:46:08.788648 systemd-logind[1884]: New seat seat0.
Jul 9 23:46:08.797602 systemd-logind[1884]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jul 9 23:46:08.797771 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 9 23:46:08.864184 bash[1944]: Updated "/home/core/.ssh/authorized_keys"
Jul 9 23:46:08.865862 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 9 23:46:08.873265 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 9 23:46:08.875207 dbus-daemon[1853]: [system] SELinux support is enabled
Jul 9 23:46:08.875622 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 9 23:46:08.880880 update_engine[1887]: I20250709 23:46:08.880837 1887 update_check_scheduler.cc:74] Next update check in 3m12s
Jul 9 23:46:08.885204 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 9 23:46:08.885229 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 9 23:46:08.886138 dbus-daemon[1853]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 9 23:46:08.894127 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 9 23:46:08.894151 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 9 23:46:08.901828 systemd[1]: Started update-engine.service - Update Engine.
Jul 9 23:46:08.918566 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 9 23:46:08.943942 coreos-metadata[1852]: Jul 09 23:46:08.943 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 9 23:46:08.948496 coreos-metadata[1852]: Jul 09 23:46:08.948 INFO Fetch successful
Jul 9 23:46:08.948496 coreos-metadata[1852]: Jul 09 23:46:08.948 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jul 9 23:46:08.953001 coreos-metadata[1852]: Jul 09 23:46:08.952 INFO Fetch successful
Jul 9 23:46:08.953001 coreos-metadata[1852]: Jul 09 23:46:08.952 INFO Fetching http://168.63.129.16/machine/54409634-c196-4941-90de-a631e31b3f33/e72dc0ad%2Dde57%2D4715%2D8460%2D4bb712ad8d19.%5Fci%2D4344.1.1%2Dn%2D4a8bce7214?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jul 9 23:46:08.955858 coreos-metadata[1852]: Jul 09 23:46:08.955 INFO Fetch successful
Jul 9 23:46:08.955972 coreos-metadata[1852]: Jul 09 23:46:08.955 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jul 9 23:46:08.965621 coreos-metadata[1852]: Jul 09 23:46:08.965 INFO Fetch successful
Jul 9 23:46:09.037542 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 9 23:46:09.043028 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 9 23:46:09.150672 sshd_keygen[1890]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 9 23:46:09.174135 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 9 23:46:09.182233 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 9 23:46:09.190783 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jul 9 23:46:09.212762 systemd[1]: issuegen.service: Deactivated successfully.
Jul 9 23:46:09.212913 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 9 23:46:09.225727 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 9 23:46:09.232067 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jul 9 23:46:09.269074 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 9 23:46:09.278396 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 9 23:46:09.286350 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 9 23:46:09.292782 systemd[1]: Reached target getty.target - Login Prompts.
Jul 9 23:46:09.305907 locksmithd[1988]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 9 23:46:09.341829 containerd[1903]: time="2025-07-09T23:46:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 9 23:46:09.345567 containerd[1903]: time="2025-07-09T23:46:09.344333352Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 9 23:46:09.358011 containerd[1903]: time="2025-07-09T23:46:09.357973720Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.304µs"
Jul 9 23:46:09.358011 containerd[1903]: time="2025-07-09T23:46:09.358006704Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 9 23:46:09.358121 containerd[1903]: time="2025-07-09T23:46:09.358021192Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 9 23:46:09.358179 containerd[1903]: time="2025-07-09T23:46:09.358160776Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 9 23:46:09.358203 containerd[1903]: time="2025-07-09T23:46:09.358178576Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 9 23:46:09.358203 containerd[1903]: time="2025-07-09T23:46:09.358198832Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 9 23:46:09.358250 containerd[1903]: time="2025-07-09T23:46:09.358238576Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 9 23:46:09.358250 containerd[1903]: time="2025-07-09T23:46:09.358248304Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 9 23:46:09.358454 containerd[1903]: time="2025-07-09T23:46:09.358438800Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 9 23:46:09.358475 containerd[1903]: time="2025-07-09T23:46:09.358453144Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 9 23:46:09.358475 containerd[1903]: time="2025-07-09T23:46:09.358460640Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 9 23:46:09.358475 containerd[1903]: time="2025-07-09T23:46:09.358465728Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 9 23:46:09.358538 containerd[1903]: time="2025-07-09T23:46:09.358527712Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 9 23:46:09.358718 containerd[1903]: time="2025-07-09T23:46:09.358692144Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 9 23:46:09.358718 containerd[1903]: time="2025-07-09T23:46:09.358718280Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 9 23:46:09.358776 containerd[1903]: time="2025-07-09T23:46:09.358725824Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 9 23:46:09.358776 containerd[1903]: time="2025-07-09T23:46:09.358749904Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 9 23:46:09.359058 containerd[1903]: time="2025-07-09T23:46:09.358894808Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 9 23:46:09.359058 containerd[1903]: time="2025-07-09T23:46:09.358950192Z" level=info msg="metadata content store policy set" policy=shared
Jul 9 23:46:09.378754 containerd[1903]: time="2025-07-09T23:46:09.378641048Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 9 23:46:09.378754 containerd[1903]: time="2025-07-09T23:46:09.378695880Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 9 23:46:09.378754 containerd[1903]: time="2025-07-09T23:46:09.378707640Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 9 23:46:09.378754 containerd[1903]: time="2025-07-09T23:46:09.378716568Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 9 23:46:09.378754 containerd[1903]: time="2025-07-09T23:46:09.378724776Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 9 23:46:09.378754 containerd[1903]: time="2025-07-09T23:46:09.378732104Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 9 23:46:09.380060 containerd[1903]: time="2025-07-09T23:46:09.378740168Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 9 23:46:09.380060 containerd[1903]: time="2025-07-09T23:46:09.378944888Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 9 23:46:09.380060 containerd[1903]: time="2025-07-09T23:46:09.378958240Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 9 23:46:09.380060 containerd[1903]: time="2025-07-09T23:46:09.378975712Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 9 23:46:09.380060 containerd[1903]: time="2025-07-09T23:46:09.378982696Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 9 23:46:09.380060 containerd[1903]: time="2025-07-09T23:46:09.378992328Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 9 23:46:09.380060 containerd[1903]: time="2025-07-09T23:46:09.379846544Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 9 23:46:09.380060 containerd[1903]: time="2025-07-09T23:46:09.379870352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 9 23:46:09.380060 containerd[1903]: time="2025-07-09T23:46:09.379882688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 9 23:46:09.380060 containerd[1903]: time="2025-07-09T23:46:09.379890280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 9 23:46:09.380060 containerd[1903]: time="2025-07-09T23:46:09.379897704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 9 23:46:09.380060 containerd[1903]: time="2025-07-09T23:46:09.379905120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 9 23:46:09.380060 containerd[1903]: time="2025-07-09T23:46:09.379918496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 9 23:46:09.380060 containerd[1903]: time="2025-07-09T23:46:09.379924984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 9 23:46:09.380060 containerd[1903]: time="2025-07-09T23:46:09.379931872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 9 23:46:09.380287 containerd[1903]: time="2025-07-09T23:46:09.379939232Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 9 23:46:09.380287 containerd[1903]: time="2025-07-09T23:46:09.379945544Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 9 23:46:09.380287 containerd[1903]: time="2025-07-09T23:46:09.380007992Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 9 23:46:09.380287 containerd[1903]: time="2025-07-09T23:46:09.380018272Z" level=info msg="Start snapshots syncer"
Jul 9 23:46:09.380374 containerd[1903]: time="2025-07-09T23:46:09.380357520Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 9 23:46:09.383064 containerd[1903]: time="2025-07-09T23:46:09.380626608Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 9 23:46:09.383064 containerd[1903]: time="2025-07-09T23:46:09.382739976Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 9 23:46:09.383206 containerd[1903]: time="2025-07-09T23:46:09.382819480Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 9 23:46:09.383206 containerd[1903]: time="2025-07-09T23:46:09.382930280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 9 23:46:09.383206 containerd[1903]: time="2025-07-09T23:46:09.382951184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 9 23:46:09.383206 containerd[1903]: time="2025-07-09T23:46:09.382963192Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 9 23:46:09.383206 containerd[1903]: time="2025-07-09T23:46:09.382972592Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 9 23:46:09.383206 containerd[1903]: time="2025-07-09T23:46:09.382981152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 9 23:46:09.383206 containerd[1903]: time="2025-07-09T23:46:09.382990680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 9 23:46:09.383206 containerd[1903]: time="2025-07-09T23:46:09.383002552Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 9 23:46:09.383206 containerd[1903]: time="2025-07-09T23:46:09.383028000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 9 23:46:09.385692 containerd[1903]: time="2025-07-09T23:46:09.383999952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 9 23:46:09.385692 containerd[1903]: time="2025-07-09T23:46:09.384059224Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 9 23:46:09.385692 containerd[1903]: time="2025-07-09T23:46:09.384105152Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 9 23:46:09.385692 containerd[1903]: time="2025-07-09T23:46:09.384117976Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 9 23:46:09.385692 containerd[1903]: time="2025-07-09T23:46:09.384126392Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 9 23:46:09.385692 containerd[1903]: time="2025-07-09T23:46:09.384135112Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 9 23:46:09.385692 containerd[1903]: time="2025-07-09T23:46:09.384141928Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 9 23:46:09.385692 containerd[1903]: time="2025-07-09T23:46:09.384148896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 9 23:46:09.385692 containerd[1903]: time="2025-07-09T23:46:09.384158040Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 9 23:46:09.385692 containerd[1903]: time="2025-07-09T23:46:09.384172768Z" level=info msg="runtime interface created"
Jul 9 23:46:09.385692 containerd[1903]: time="2025-07-09T23:46:09.384176120Z" level=info msg="created NRI interface"
Jul 9 23:46:09.385692 containerd[1903]: time="2025-07-09T23:46:09.384181600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 9 23:46:09.385692 containerd[1903]: time="2025-07-09T23:46:09.384193616Z" level=info msg="Connect containerd service"
Jul 9 23:46:09.385692 containerd[1903]: time="2025-07-09T23:46:09.384226792Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 9 23:46:09.385926 containerd[1903]: time="2025-07-09T23:46:09.384857728Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 9 23:46:09.427416 tar[1896]: linux-arm64/README.md
Jul 9 23:46:09.442092 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 9 23:46:09.472247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:46:09.481349 (kubelet)[2054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 9 23:46:09.704067 kubelet[2054]: E0709 23:46:09.703936 2054 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 9 23:46:09.706238 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 9 23:46:09.706468 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 9 23:46:09.708119 systemd[1]: kubelet.service: Consumed 537ms CPU time, 252.5M memory peak.
Jul 9 23:46:09.886921 containerd[1903]: time="2025-07-09T23:46:09.886825896Z" level=info msg="Start subscribing containerd event" Jul 9 23:46:09.886921 containerd[1903]: time="2025-07-09T23:46:09.886894856Z" level=info msg="Start recovering state" Jul 9 23:46:09.887072 containerd[1903]: time="2025-07-09T23:46:09.886990352Z" level=info msg="Start event monitor" Jul 9 23:46:09.887072 containerd[1903]: time="2025-07-09T23:46:09.887003656Z" level=info msg="Start cni network conf syncer for default" Jul 9 23:46:09.887072 containerd[1903]: time="2025-07-09T23:46:09.887010640Z" level=info msg="Start streaming server" Jul 9 23:46:09.887072 containerd[1903]: time="2025-07-09T23:46:09.887017392Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 9 23:46:09.887072 containerd[1903]: time="2025-07-09T23:46:09.887022488Z" level=info msg="runtime interface starting up..." Jul 9 23:46:09.887072 containerd[1903]: time="2025-07-09T23:46:09.887026848Z" level=info msg="starting plugins..." Jul 9 23:46:09.887072 containerd[1903]: time="2025-07-09T23:46:09.887049784Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 9 23:46:09.887383 containerd[1903]: time="2025-07-09T23:46:09.887250184Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 9 23:46:09.887383 containerd[1903]: time="2025-07-09T23:46:09.887298840Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 9 23:46:09.887383 containerd[1903]: time="2025-07-09T23:46:09.887344152Z" level=info msg="containerd successfully booted in 0.545838s" Jul 9 23:46:09.887537 systemd[1]: Started containerd.service - containerd container runtime. Jul 9 23:46:09.893718 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 9 23:46:09.899557 systemd[1]: Startup finished in 1.644s (kernel) + 11.404s (initrd) + 10.275s (userspace) = 23.323s. 
Jul 9 23:46:10.166592 login[2039]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jul 9 23:46:10.167576 login[2040]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:10.202901 systemd-logind[1884]: New session 2 of user core. Jul 9 23:46:10.204685 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 9 23:46:10.205703 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 9 23:46:10.224851 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 9 23:46:10.226703 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 9 23:46:10.250394 (systemd)[2076]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 9 23:46:10.253510 systemd-logind[1884]: New session c1 of user core. Jul 9 23:46:10.415859 systemd[2076]: Queued start job for default target default.target. Jul 9 23:46:10.422769 systemd[2076]: Created slice app.slice - User Application Slice. Jul 9 23:46:10.422791 systemd[2076]: Reached target paths.target - Paths. Jul 9 23:46:10.422822 systemd[2076]: Reached target timers.target - Timers. Jul 9 23:46:10.423807 systemd[2076]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 9 23:46:10.430235 systemd[2076]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 9 23:46:10.430276 systemd[2076]: Reached target sockets.target - Sockets. Jul 9 23:46:10.430304 systemd[2076]: Reached target basic.target - Basic System. Jul 9 23:46:10.430324 systemd[2076]: Reached target default.target - Main User Target. Jul 9 23:46:10.430342 systemd[2076]: Startup finished in 172ms. Jul 9 23:46:10.430599 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 9 23:46:10.432606 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 9 23:46:10.710547 waagent[2032]: 2025-07-09T23:46:10.710428Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jul 9 23:46:10.714741 waagent[2032]: 2025-07-09T23:46:10.714702Z INFO Daemon Daemon OS: flatcar 4344.1.1 Jul 9 23:46:10.717927 waagent[2032]: 2025-07-09T23:46:10.717898Z INFO Daemon Daemon Python: 3.11.12 Jul 9 23:46:10.721046 waagent[2032]: 2025-07-09T23:46:10.720993Z INFO Daemon Daemon Run daemon Jul 9 23:46:10.724400 waagent[2032]: 2025-07-09T23:46:10.724148Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.1.1' Jul 9 23:46:10.730779 waagent[2032]: 2025-07-09T23:46:10.730729Z INFO Daemon Daemon Using waagent for provisioning Jul 9 23:46:10.734569 waagent[2032]: 2025-07-09T23:46:10.734532Z INFO Daemon Daemon Activate resource disk Jul 9 23:46:10.738029 waagent[2032]: 2025-07-09T23:46:10.737986Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 9 23:46:10.746395 waagent[2032]: 2025-07-09T23:46:10.746351Z INFO Daemon Daemon Found device: None Jul 9 23:46:10.749522 waagent[2032]: 2025-07-09T23:46:10.749492Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 9 23:46:10.757126 waagent[2032]: 2025-07-09T23:46:10.757073Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 9 23:46:10.765412 waagent[2032]: 2025-07-09T23:46:10.765356Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 9 23:46:10.769493 waagent[2032]: 2025-07-09T23:46:10.769460Z INFO Daemon Daemon Running default provisioning handler Jul 9 23:46:10.777827 waagent[2032]: 2025-07-09T23:46:10.777788Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jul 9 23:46:10.787607 waagent[2032]: 2025-07-09T23:46:10.787571Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 9 23:46:10.794557 waagent[2032]: 2025-07-09T23:46:10.794529Z INFO Daemon Daemon cloud-init is enabled: False Jul 9 23:46:10.798014 waagent[2032]: 2025-07-09T23:46:10.797993Z INFO Daemon Daemon Copying ovf-env.xml Jul 9 23:46:10.885972 waagent[2032]: 2025-07-09T23:46:10.885377Z INFO Daemon Daemon Successfully mounted dvd Jul 9 23:46:10.913871 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 9 23:46:10.915791 waagent[2032]: 2025-07-09T23:46:10.915741Z INFO Daemon Daemon Detect protocol endpoint Jul 9 23:46:10.919458 waagent[2032]: 2025-07-09T23:46:10.919427Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 9 23:46:10.923950 waagent[2032]: 2025-07-09T23:46:10.923919Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jul 9 23:46:10.929146 waagent[2032]: 2025-07-09T23:46:10.929120Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 9 23:46:10.933277 waagent[2032]: 2025-07-09T23:46:10.933248Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 9 23:46:10.937289 waagent[2032]: 2025-07-09T23:46:10.937266Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 9 23:46:10.982594 waagent[2032]: 2025-07-09T23:46:10.982511Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 9 23:46:10.987418 waagent[2032]: 2025-07-09T23:46:10.987399Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 9 23:46:10.991388 waagent[2032]: 2025-07-09T23:46:10.991366Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 9 23:46:11.131744 waagent[2032]: 2025-07-09T23:46:11.131657Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 9 23:46:11.136642 waagent[2032]: 2025-07-09T23:46:11.136606Z INFO Daemon Daemon Forcing an update of the goal state. 
Jul 9 23:46:11.143951 waagent[2032]: 2025-07-09T23:46:11.143917Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 9 23:46:11.161621 waagent[2032]: 2025-07-09T23:46:11.161592Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 9 23:46:11.166038 waagent[2032]: 2025-07-09T23:46:11.166008Z INFO Daemon Jul 9 23:46:11.168356 login[2039]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:11.168796 waagent[2032]: 2025-07-09T23:46:11.168690Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 6f9872cb-50fe-4043-8018-1c82a812edd8 eTag: 16699051084462981123 source: Fabric] Jul 9 23:46:11.177185 waagent[2032]: 2025-07-09T23:46:11.177104Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 9 23:46:11.182749 waagent[2032]: 2025-07-09T23:46:11.182699Z INFO Daemon Jul 9 23:46:11.184927 waagent[2032]: 2025-07-09T23:46:11.184896Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 9 23:46:11.185164 systemd-logind[1884]: New session 1 of user core. Jul 9 23:46:11.193757 waagent[2032]: 2025-07-09T23:46:11.193728Z INFO Daemon Daemon Downloading artifacts profile blob Jul 9 23:46:11.198178 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 9 23:46:11.324889 waagent[2032]: 2025-07-09T23:46:11.324778Z INFO Daemon Downloaded certificate {'thumbprint': '9128ABDD232547D3B52C8CAD7D4820E1307F1F0C', 'hasPrivateKey': True} Jul 9 23:46:11.331799 waagent[2032]: 2025-07-09T23:46:11.331766Z INFO Daemon Downloaded certificate {'thumbprint': '7242974C056C3DDD1A4CCB11F125ECCED99638EA', 'hasPrivateKey': False} Jul 9 23:46:11.338513 waagent[2032]: 2025-07-09T23:46:11.338482Z INFO Daemon Fetch goal state completed Jul 9 23:46:11.348526 waagent[2032]: 2025-07-09T23:46:11.348489Z INFO Daemon Daemon Starting provisioning Jul 9 23:46:11.352391 waagent[2032]: 2025-07-09T23:46:11.352360Z INFO Daemon Daemon Handle ovf-env.xml. 
Jul 9 23:46:11.356037 waagent[2032]: 2025-07-09T23:46:11.356012Z INFO Daemon Daemon Set hostname [ci-4344.1.1-n-4a8bce7214] Jul 9 23:46:11.376029 waagent[2032]: 2025-07-09T23:46:11.375992Z INFO Daemon Daemon Publish hostname [ci-4344.1.1-n-4a8bce7214] Jul 9 23:46:11.380458 waagent[2032]: 2025-07-09T23:46:11.380426Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 9 23:46:11.385040 waagent[2032]: 2025-07-09T23:46:11.385006Z INFO Daemon Daemon Primary interface is [eth0] Jul 9 23:46:11.394673 systemd-networkd[1627]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:46:11.394680 systemd-networkd[1627]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 9 23:46:11.394709 systemd-networkd[1627]: eth0: DHCP lease lost Jul 9 23:46:11.395215 waagent[2032]: 2025-07-09T23:46:11.395181Z INFO Daemon Daemon Create user account if not exists Jul 9 23:46:11.400061 waagent[2032]: 2025-07-09T23:46:11.399283Z INFO Daemon Daemon User core already exists, skip useradd Jul 9 23:46:11.404128 waagent[2032]: 2025-07-09T23:46:11.404074Z INFO Daemon Daemon Configure sudoer Jul 9 23:46:11.412982 waagent[2032]: 2025-07-09T23:46:11.412943Z INFO Daemon Daemon Configure sshd Jul 9 23:46:11.417099 systemd-networkd[1627]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 9 23:46:11.424052 waagent[2032]: 2025-07-09T23:46:11.419347Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 9 23:46:11.428755 waagent[2032]: 2025-07-09T23:46:11.428725Z INFO Daemon Daemon Deploy ssh public key. 
Jul 9 23:46:12.492434 waagent[2032]: 2025-07-09T23:46:12.492393Z INFO Daemon Daemon Provisioning complete Jul 9 23:46:12.505994 waagent[2032]: 2025-07-09T23:46:12.505958Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 9 23:46:12.510945 waagent[2032]: 2025-07-09T23:46:12.510911Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jul 9 23:46:12.518306 waagent[2032]: 2025-07-09T23:46:12.518278Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jul 9 23:46:12.614073 waagent[2130]: 2025-07-09T23:46:12.613987Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jul 9 23:46:12.614767 waagent[2130]: 2025-07-09T23:46:12.614447Z INFO ExtHandler ExtHandler OS: flatcar 4344.1.1 Jul 9 23:46:12.614767 waagent[2130]: 2025-07-09T23:46:12.614504Z INFO ExtHandler ExtHandler Python: 3.11.12 Jul 9 23:46:12.614767 waagent[2130]: 2025-07-09T23:46:12.614539Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jul 9 23:46:12.635256 waagent[2130]: 2025-07-09T23:46:12.635217Z INFO ExtHandler ExtHandler Distro: flatcar-4344.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jul 9 23:46:12.635467 waagent[2130]: 2025-07-09T23:46:12.635438Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 9 23:46:12.635572 waagent[2130]: 2025-07-09T23:46:12.635551Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 9 23:46:12.641211 waagent[2130]: 2025-07-09T23:46:12.641168Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 9 23:46:12.646073 waagent[2130]: 2025-07-09T23:46:12.645901Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 9 23:46:12.646298 waagent[2130]: 2025-07-09T23:46:12.646263Z INFO ExtHandler Jul 9 23:46:12.646345 waagent[2130]: 2025-07-09T23:46:12.646328Z INFO 
ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 85f9d8db-e1ec-498d-b053-b27aeff0ab91 eTag: 16699051084462981123 source: Fabric] Jul 9 23:46:12.646554 waagent[2130]: 2025-07-09T23:46:12.646529Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jul 9 23:46:12.646936 waagent[2130]: 2025-07-09T23:46:12.646908Z INFO ExtHandler Jul 9 23:46:12.646972 waagent[2130]: 2025-07-09T23:46:12.646956Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 9 23:46:12.650039 waagent[2130]: 2025-07-09T23:46:12.650013Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 9 23:46:12.702010 waagent[2130]: 2025-07-09T23:46:12.701959Z INFO ExtHandler Downloaded certificate {'thumbprint': '9128ABDD232547D3B52C8CAD7D4820E1307F1F0C', 'hasPrivateKey': True} Jul 9 23:46:12.702300 waagent[2130]: 2025-07-09T23:46:12.702269Z INFO ExtHandler Downloaded certificate {'thumbprint': '7242974C056C3DDD1A4CCB11F125ECCED99638EA', 'hasPrivateKey': False} Jul 9 23:46:12.702581 waagent[2130]: 2025-07-09T23:46:12.702553Z INFO ExtHandler Fetch goal state completed Jul 9 23:46:12.714180 waagent[2130]: 2025-07-09T23:46:12.714137Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jul 9 23:46:12.717366 waagent[2130]: 2025-07-09T23:46:12.717323Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2130 Jul 9 23:46:12.717462 waagent[2130]: 2025-07-09T23:46:12.717439Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 9 23:46:12.717690 waagent[2130]: 2025-07-09T23:46:12.717665Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jul 9 23:46:12.718759 waagent[2130]: 2025-07-09T23:46:12.718725Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 9 
23:46:12.719122 waagent[2130]: 2025-07-09T23:46:12.719091Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jul 9 23:46:12.719243 waagent[2130]: 2025-07-09T23:46:12.719221Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jul 9 23:46:12.719653 waagent[2130]: 2025-07-09T23:46:12.719624Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 9 23:46:12.781837 waagent[2130]: 2025-07-09T23:46:12.781756Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 9 23:46:12.781942 waagent[2130]: 2025-07-09T23:46:12.781914Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 9 23:46:12.786017 waagent[2130]: 2025-07-09T23:46:12.785994Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 9 23:46:12.790613 systemd[1]: Reload requested from client PID 2147 ('systemctl') (unit waagent.service)... Jul 9 23:46:12.790626 systemd[1]: Reloading... Jul 9 23:46:12.855065 zram_generator::config[2185]: No configuration found. Jul 9 23:46:12.919303 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:46:13.001589 systemd[1]: Reloading finished in 210 ms. 
Jul 9 23:46:13.024477 waagent[2130]: 2025-07-09T23:46:13.024219Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 9 23:46:13.024477 waagent[2130]: 2025-07-09T23:46:13.024358Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 9 23:46:13.560083 waagent[2130]: 2025-07-09T23:46:13.559725Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 9 23:46:13.560083 waagent[2130]: 2025-07-09T23:46:13.560025Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jul 9 23:46:13.560707 waagent[2130]: 2025-07-09T23:46:13.560667Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 9 23:46:13.560990 waagent[2130]: 2025-07-09T23:46:13.560952Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 9 23:46:13.561327 waagent[2130]: 2025-07-09T23:46:13.561286Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 9 23:46:13.561441 waagent[2130]: 2025-07-09T23:46:13.561409Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 9 23:46:13.561743 waagent[2130]: 2025-07-09T23:46:13.561698Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 9 23:46:13.561914 waagent[2130]: 2025-07-09T23:46:13.561883Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jul 9 23:46:13.562455 waagent[2130]: 2025-07-09T23:46:13.562391Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 9 23:46:13.563075 waagent[2130]: 2025-07-09T23:46:13.562436Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 9 23:46:13.563549 waagent[2130]: 2025-07-09T23:46:13.563531Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 9 23:46:13.563605 waagent[2130]: 2025-07-09T23:46:13.563489Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 9 23:46:13.564061 waagent[2130]: 2025-07-09T23:46:13.564009Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 9 23:46:13.564409 waagent[2130]: 2025-07-09T23:46:13.564377Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 9 23:46:13.565074 waagent[2130]: 2025-07-09T23:46:13.565029Z INFO EnvHandler ExtHandler Configure routes Jul 9 23:46:13.565650 waagent[2130]: 2025-07-09T23:46:13.565625Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 9 23:46:13.565650 waagent[2130]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 9 23:46:13.565650 waagent[2130]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 9 23:46:13.565650 waagent[2130]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 9 23:46:13.565650 waagent[2130]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 9 23:46:13.565650 waagent[2130]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 9 23:46:13.565650 waagent[2130]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 9 23:46:13.566847 waagent[2130]: 2025-07-09T23:46:13.566755Z INFO EnvHandler ExtHandler Gateway:None Jul 9 23:46:13.567589 waagent[2130]: 2025-07-09T23:46:13.567553Z INFO EnvHandler ExtHandler Routes:None Jul 9 23:46:13.569017 waagent[2130]: 2025-07-09T23:46:13.568988Z INFO ExtHandler ExtHandler Jul 9 23:46:13.569334 waagent[2130]: 
2025-07-09T23:46:13.569306Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: afe52c30-3179-4691-8230-99feb32de800 correlation 75153ceb-71ec-407b-85f9-b212780bfb66 created: 2025-07-09T23:45:02.789734Z] Jul 9 23:46:13.569952 waagent[2130]: 2025-07-09T23:46:13.569923Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 9 23:46:13.570447 waagent[2130]: 2025-07-09T23:46:13.570417Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jul 9 23:46:13.597179 waagent[2130]: 2025-07-09T23:46:13.597142Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jul 9 23:46:13.597179 waagent[2130]: Try `iptables -h' or 'iptables --help' for more information.) Jul 9 23:46:13.597453 waagent[2130]: 2025-07-09T23:46:13.597423Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 6EC64C28-9579-40E8-AAEA-608229A583FF;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jul 9 23:46:13.616384 waagent[2130]: 2025-07-09T23:46:13.616112Z INFO MonitorHandler ExtHandler Network interfaces: Jul 9 23:46:13.616384 waagent[2130]: Executing ['ip', '-a', '-o', 'link']: Jul 9 23:46:13.616384 waagent[2130]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 9 23:46:13.616384 waagent[2130]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f6:2a:cf brd ff:ff:ff:ff:ff:ff Jul 9 23:46:13.616384 waagent[2130]: 3: enP14827s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f6:2a:cf brd ff:ff:ff:ff:ff:ff\ altname enP14827p0s2 Jul 9 
23:46:13.616384 waagent[2130]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 9 23:46:13.616384 waagent[2130]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 9 23:46:13.616384 waagent[2130]: 2: eth0 inet 10.200.20.10/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 9 23:46:13.616384 waagent[2130]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 9 23:46:13.616384 waagent[2130]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 9 23:46:13.616384 waagent[2130]: 2: eth0 inet6 fe80::20d:3aff:fef6:2acf/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 9 23:46:13.616384 waagent[2130]: 3: enP14827s1 inet6 fe80::20d:3aff:fef6:2acf/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 9 23:46:13.641150 waagent[2130]: 2025-07-09T23:46:13.641118Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jul 9 23:46:13.641150 waagent[2130]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 9 23:46:13.641150 waagent[2130]: pkts bytes target prot opt in out source destination Jul 9 23:46:13.641150 waagent[2130]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 9 23:46:13.641150 waagent[2130]: pkts bytes target prot opt in out source destination Jul 9 23:46:13.641150 waagent[2130]: Chain OUTPUT (policy ACCEPT 3 packets, 164 bytes) Jul 9 23:46:13.641150 waagent[2130]: pkts bytes target prot opt in out source destination Jul 9 23:46:13.641150 waagent[2130]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 9 23:46:13.641150 waagent[2130]: 7 940 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 9 23:46:13.641150 waagent[2130]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 9 23:46:13.643945 waagent[2130]: 2025-07-09T23:46:13.643916Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 9 23:46:13.643945 waagent[2130]: Chain INPUT (policy ACCEPT 
0 packets, 0 bytes) Jul 9 23:46:13.643945 waagent[2130]: pkts bytes target prot opt in out source destination Jul 9 23:46:13.643945 waagent[2130]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 9 23:46:13.643945 waagent[2130]: pkts bytes target prot opt in out source destination Jul 9 23:46:13.643945 waagent[2130]: Chain OUTPUT (policy ACCEPT 3 packets, 164 bytes) Jul 9 23:46:13.643945 waagent[2130]: pkts bytes target prot opt in out source destination Jul 9 23:46:13.643945 waagent[2130]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 9 23:46:13.643945 waagent[2130]: 11 1356 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 9 23:46:13.643945 waagent[2130]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 9 23:46:13.644335 waagent[2130]: 2025-07-09T23:46:13.644312Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 9 23:46:19.520952 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 9 23:46:19.521935 systemd[1]: Started sshd@0-10.200.20.10:22-10.200.16.10:56464.service - OpenSSH per-connection server daemon (10.200.16.10:56464). Jul 9 23:46:19.957142 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 9 23:46:19.958651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:46:20.064945 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:46:20.067303 (kubelet)[2284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:46:20.082099 sshd[2274]: Accepted publickey for core from 10.200.16.10 port 56464 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:46:20.082744 sshd-session[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:20.086381 systemd-logind[1884]: New session 3 of user core. 
Jul 9 23:46:20.093176 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 9 23:46:20.208497 kubelet[2284]: E0709 23:46:20.208387 2284 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:46:20.211404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:46:20.211625 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:46:20.212172 systemd[1]: kubelet.service: Consumed 107ms CPU time, 105M memory peak. Jul 9 23:46:20.505204 systemd[1]: Started sshd@1-10.200.20.10:22-10.200.16.10:49764.service - OpenSSH per-connection server daemon (10.200.16.10:49764). Jul 9 23:46:20.981270 sshd[2294]: Accepted publickey for core from 10.200.16.10 port 49764 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:46:20.982365 sshd-session[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:20.985881 systemd-logind[1884]: New session 4 of user core. Jul 9 23:46:20.997295 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 9 23:46:21.329997 sshd[2296]: Connection closed by 10.200.16.10 port 49764 Jul 9 23:46:21.330645 sshd-session[2294]: pam_unix(sshd:session): session closed for user core Jul 9 23:46:21.333744 systemd[1]: sshd@1-10.200.20.10:22-10.200.16.10:49764.service: Deactivated successfully. Jul 9 23:46:21.335481 systemd[1]: session-4.scope: Deactivated successfully. Jul 9 23:46:21.336079 systemd-logind[1884]: Session 4 logged out. Waiting for processes to exit. Jul 9 23:46:21.338262 systemd-logind[1884]: Removed session 4. 
Jul 9 23:46:21.419233 systemd[1]: Started sshd@2-10.200.20.10:22-10.200.16.10:49778.service - OpenSSH per-connection server daemon (10.200.16.10:49778). Jul 9 23:46:21.896900 sshd[2302]: Accepted publickey for core from 10.200.16.10 port 49778 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:46:21.898025 sshd-session[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:21.901451 systemd-logind[1884]: New session 5 of user core. Jul 9 23:46:21.909314 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 9 23:46:22.237685 sshd[2304]: Connection closed by 10.200.16.10 port 49778 Jul 9 23:46:22.238339 sshd-session[2302]: pam_unix(sshd:session): session closed for user core Jul 9 23:46:22.241676 systemd-logind[1884]: Session 5 logged out. Waiting for processes to exit. Jul 9 23:46:22.241823 systemd[1]: sshd@2-10.200.20.10:22-10.200.16.10:49778.service: Deactivated successfully. Jul 9 23:46:22.243141 systemd[1]: session-5.scope: Deactivated successfully. Jul 9 23:46:22.244518 systemd-logind[1884]: Removed session 5. Jul 9 23:46:22.319245 systemd[1]: Started sshd@3-10.200.20.10:22-10.200.16.10:49784.service - OpenSSH per-connection server daemon (10.200.16.10:49784). Jul 9 23:46:22.789487 sshd[2310]: Accepted publickey for core from 10.200.16.10 port 49784 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:46:22.790565 sshd-session[2310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:22.794129 systemd-logind[1884]: New session 6 of user core. Jul 9 23:46:22.805138 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 9 23:46:23.134598 sshd[2312]: Connection closed by 10.200.16.10 port 49784 Jul 9 23:46:23.135252 sshd-session[2310]: pam_unix(sshd:session): session closed for user core Jul 9 23:46:23.138494 systemd-logind[1884]: Session 6 logged out. Waiting for processes to exit. 
Jul 9 23:46:23.138968 systemd[1]: sshd@3-10.200.20.10:22-10.200.16.10:49784.service: Deactivated successfully. Jul 9 23:46:23.140399 systemd[1]: session-6.scope: Deactivated successfully. Jul 9 23:46:23.141830 systemd-logind[1884]: Removed session 6. Jul 9 23:46:23.232243 systemd[1]: Started sshd@4-10.200.20.10:22-10.200.16.10:49792.service - OpenSSH per-connection server daemon (10.200.16.10:49792). Jul 9 23:46:23.726950 sshd[2318]: Accepted publickey for core from 10.200.16.10 port 49792 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:46:23.728161 sshd-session[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:23.731887 systemd-logind[1884]: New session 7 of user core. Jul 9 23:46:23.739169 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 9 23:46:24.089467 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 9 23:46:24.089684 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:46:24.116858 sudo[2321]: pam_unix(sudo:session): session closed for user root Jul 9 23:46:24.204990 sshd[2320]: Connection closed by 10.200.16.10 port 49792 Jul 9 23:46:24.205664 sshd-session[2318]: pam_unix(sshd:session): session closed for user core Jul 9 23:46:24.209054 systemd[1]: sshd@4-10.200.20.10:22-10.200.16.10:49792.service: Deactivated successfully. Jul 9 23:46:24.210472 systemd[1]: session-7.scope: Deactivated successfully. Jul 9 23:46:24.211022 systemd-logind[1884]: Session 7 logged out. Waiting for processes to exit. Jul 9 23:46:24.212281 systemd-logind[1884]: Removed session 7. Jul 9 23:46:24.293899 systemd[1]: Started sshd@5-10.200.20.10:22-10.200.16.10:49806.service - OpenSSH per-connection server daemon (10.200.16.10:49806). 
Jul 9 23:46:24.747736 sshd[2327]: Accepted publickey for core from 10.200.16.10 port 49806 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:46:24.748829 sshd-session[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:24.752389 systemd-logind[1884]: New session 8 of user core. Jul 9 23:46:24.759319 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 9 23:46:25.002619 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 9 23:46:25.002824 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:46:25.010126 sudo[2331]: pam_unix(sudo:session): session closed for user root Jul 9 23:46:25.013641 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 9 23:46:25.013839 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:46:25.021216 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 9 23:46:25.054800 augenrules[2353]: No rules Jul 9 23:46:25.056072 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 23:46:25.056257 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 23:46:25.058202 sudo[2330]: pam_unix(sudo:session): session closed for user root Jul 9 23:46:25.146938 sshd[2329]: Connection closed by 10.200.16.10 port 49806 Jul 9 23:46:25.146836 sshd-session[2327]: pam_unix(sshd:session): session closed for user core Jul 9 23:46:25.149537 systemd-logind[1884]: Session 8 logged out. Waiting for processes to exit. Jul 9 23:46:25.149658 systemd[1]: sshd@5-10.200.20.10:22-10.200.16.10:49806.service: Deactivated successfully. Jul 9 23:46:25.152413 systemd[1]: session-8.scope: Deactivated successfully. Jul 9 23:46:25.154206 systemd-logind[1884]: Removed session 8. 
Jul 9 23:46:25.231670 systemd[1]: Started sshd@6-10.200.20.10:22-10.200.16.10:49822.service - OpenSSH per-connection server daemon (10.200.16.10:49822). Jul 9 23:46:25.713400 sshd[2362]: Accepted publickey for core from 10.200.16.10 port 49822 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:46:25.714504 sshd-session[2362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:25.718075 systemd-logind[1884]: New session 9 of user core. Jul 9 23:46:25.725334 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 9 23:46:25.981456 sudo[2365]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 9 23:46:25.981657 sudo[2365]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:46:27.051241 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 9 23:46:27.062306 (dockerd)[2383]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 9 23:46:27.594107 dockerd[2383]: time="2025-07-09T23:46:27.593842128Z" level=info msg="Starting up" Jul 9 23:46:27.595719 dockerd[2383]: time="2025-07-09T23:46:27.595694968Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 9 23:46:27.765202 dockerd[2383]: time="2025-07-09T23:46:27.765164552Z" level=info msg="Loading containers: start." Jul 9 23:46:27.793060 kernel: Initializing XFRM netlink socket Jul 9 23:46:28.036407 systemd-networkd[1627]: docker0: Link UP Jul 9 23:46:28.052921 dockerd[2383]: time="2025-07-09T23:46:28.052852888Z" level=info msg="Loading containers: done." Jul 9 23:46:28.061259 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3987846880-merged.mount: Deactivated successfully. 
Jul 9 23:46:28.081070 dockerd[2383]: time="2025-07-09T23:46:28.080721256Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 9 23:46:28.081070 dockerd[2383]: time="2025-07-09T23:46:28.080796576Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 9 23:46:28.081070 dockerd[2383]: time="2025-07-09T23:46:28.080904080Z" level=info msg="Initializing buildkit" Jul 9 23:46:28.136616 dockerd[2383]: time="2025-07-09T23:46:28.136567736Z" level=info msg="Completed buildkit initialization" Jul 9 23:46:28.142486 dockerd[2383]: time="2025-07-09T23:46:28.142445072Z" level=info msg="Daemon has completed initialization" Jul 9 23:46:28.142793 dockerd[2383]: time="2025-07-09T23:46:28.142560456Z" level=info msg="API listen on /run/docker.sock" Jul 9 23:46:28.142642 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 9 23:46:29.140156 containerd[1903]: time="2025-07-09T23:46:29.140106600Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 9 23:46:30.125986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1386386590.mount: Deactivated successfully. Jul 9 23:46:30.380419 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 9 23:46:30.381651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:46:30.941917 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 9 23:46:30.944219 (kubelet)[2599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:46:30.973430 kubelet[2599]: E0709 23:46:30.973374 2599 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:46:30.975617 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:46:30.975836 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:46:30.977206 systemd[1]: kubelet.service: Consumed 104ms CPU time, 107.7M memory peak. Jul 9 23:46:32.059223 containerd[1903]: time="2025-07-09T23:46:32.059177512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:32.064253 containerd[1903]: time="2025-07-09T23:46:32.064227232Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328194" Jul 9 23:46:32.070819 containerd[1903]: time="2025-07-09T23:46:32.070784400Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:32.085170 containerd[1903]: time="2025-07-09T23:46:32.085131160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:32.085783 containerd[1903]: time="2025-07-09T23:46:32.085655224Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id 
\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 2.945512488s" Jul 9 23:46:32.085783 containerd[1903]: time="2025-07-09T23:46:32.085683360Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 9 23:46:32.086243 containerd[1903]: time="2025-07-09T23:46:32.086227408Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 9 23:46:32.479013 chronyd[1872]: Selected source PHC0 Jul 9 23:46:33.538671 containerd[1903]: time="2025-07-09T23:46:33.538616699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:33.543736 containerd[1903]: time="2025-07-09T23:46:33.543694058Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529228" Jul 9 23:46:33.551791 containerd[1903]: time="2025-07-09T23:46:33.551741712Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:33.564526 containerd[1903]: time="2025-07-09T23:46:33.564442057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:33.565191 containerd[1903]: time="2025-07-09T23:46:33.565052299Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag 
\"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.478682634s" Jul 9 23:46:33.565191 containerd[1903]: time="2025-07-09T23:46:33.565089004Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 9 23:46:33.566014 containerd[1903]: time="2025-07-09T23:46:33.565991127Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 9 23:46:35.282757 containerd[1903]: time="2025-07-09T23:46:35.282184356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:35.288005 containerd[1903]: time="2025-07-09T23:46:35.287981649Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484141" Jul 9 23:46:35.295630 containerd[1903]: time="2025-07-09T23:46:35.295609109Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:35.303924 containerd[1903]: time="2025-07-09T23:46:35.303902901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:35.305064 containerd[1903]: time="2025-07-09T23:46:35.304952821Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size 
\"19019949\" in 1.738846851s" Jul 9 23:46:35.305064 containerd[1903]: time="2025-07-09T23:46:35.304979030Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 9 23:46:35.305374 containerd[1903]: time="2025-07-09T23:46:35.305362129Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 9 23:46:36.542072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3219715267.mount: Deactivated successfully. Jul 9 23:46:37.091064 containerd[1903]: time="2025-07-09T23:46:37.090948496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:37.103612 containerd[1903]: time="2025-07-09T23:46:37.103574593Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378406" Jul 9 23:46:37.108623 containerd[1903]: time="2025-07-09T23:46:37.108574015Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:37.113435 containerd[1903]: time="2025-07-09T23:46:37.113402799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:37.113811 containerd[1903]: time="2025-07-09T23:46:37.113736233Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.808340135s" Jul 9 23:46:37.113811 containerd[1903]: 
time="2025-07-09T23:46:37.113763866Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 9 23:46:37.114350 containerd[1903]: time="2025-07-09T23:46:37.114332459Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 9 23:46:37.915557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2060432740.mount: Deactivated successfully. Jul 9 23:46:39.242841 containerd[1903]: time="2025-07-09T23:46:39.242229839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:39.249642 containerd[1903]: time="2025-07-09T23:46:39.249614628Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 9 23:46:39.261547 containerd[1903]: time="2025-07-09T23:46:39.261517800Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:39.266057 containerd[1903]: time="2025-07-09T23:46:39.266012214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:39.266413 containerd[1903]: time="2025-07-09T23:46:39.266386033Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.151863177s" Jul 9 23:46:39.266465 containerd[1903]: time="2025-07-09T23:46:39.266416410Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 9 23:46:39.266977 containerd[1903]: time="2025-07-09T23:46:39.266943658Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 9 23:46:40.454668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2194214377.mount: Deactivated successfully. Jul 9 23:46:40.497427 containerd[1903]: time="2025-07-09T23:46:40.497333219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:46:40.500637 containerd[1903]: time="2025-07-09T23:46:40.500603351Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 9 23:46:40.508671 containerd[1903]: time="2025-07-09T23:46:40.508630097Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:46:40.517545 containerd[1903]: time="2025-07-09T23:46:40.517491935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:46:40.518097 containerd[1903]: time="2025-07-09T23:46:40.517812969Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.250828014s" Jul 9 23:46:40.518097 containerd[1903]: time="2025-07-09T23:46:40.517840850Z" 
level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 9 23:46:40.518332 containerd[1903]: time="2025-07-09T23:46:40.518311794Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 9 23:46:41.130274 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 9 23:46:41.132021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:46:41.233971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:46:41.238294 (kubelet)[2729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:46:41.262714 kubelet[2729]: E0709 23:46:41.262665 2729 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:46:41.265009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:46:41.265238 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:46:41.267101 systemd[1]: kubelet.service: Consumed 100ms CPU time, 107M memory peak. Jul 9 23:46:41.780790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1176292965.mount: Deactivated successfully. 
Jul 9 23:46:44.725179 containerd[1903]: time="2025-07-09T23:46:44.725127733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:44.737060 containerd[1903]: time="2025-07-09T23:46:44.736981910Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" Jul 9 23:46:44.741747 containerd[1903]: time="2025-07-09T23:46:44.741709555Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:44.748241 containerd[1903]: time="2025-07-09T23:46:44.748179681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:44.748871 containerd[1903]: time="2025-07-09T23:46:44.748632240Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.230295534s" Jul 9 23:46:44.748871 containerd[1903]: time="2025-07-09T23:46:44.748658649Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 9 23:46:47.755570 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:46:47.756268 systemd[1]: kubelet.service: Consumed 100ms CPU time, 107M memory peak. Jul 9 23:46:47.758461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:46:47.776699 systemd[1]: Reload requested from client PID 2816 ('systemctl') (unit session-9.scope)... 
Jul 9 23:46:47.776801 systemd[1]: Reloading... Jul 9 23:46:47.847059 zram_generator::config[2862]: No configuration found. Jul 9 23:46:47.925754 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:46:48.008093 systemd[1]: Reloading finished in 230 ms. Jul 9 23:46:48.039839 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 9 23:46:48.039899 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 9 23:46:48.040176 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:46:48.040217 systemd[1]: kubelet.service: Consumed 52ms CPU time, 75.3M memory peak. Jul 9 23:46:48.042769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:46:48.235139 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:46:48.238088 (kubelet)[2926]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 23:46:48.262297 kubelet[2926]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 23:46:48.262297 kubelet[2926]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 9 23:46:48.262297 kubelet[2926]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 9 23:46:48.262297 kubelet[2926]: I0709 23:46:48.262275 2926 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 23:46:48.458054 kubelet[2926]: I0709 23:46:48.456854 2926 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 9 23:46:48.458054 kubelet[2926]: I0709 23:46:48.456883 2926 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 23:46:48.458054 kubelet[2926]: I0709 23:46:48.457121 2926 server.go:954] "Client rotation is on, will bootstrap in background" Jul 9 23:46:48.470355 kubelet[2926]: E0709 23:46:48.470322 2926 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:46:48.475460 kubelet[2926]: I0709 23:46:48.475055 2926 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 23:46:48.479499 kubelet[2926]: I0709 23:46:48.479448 2926 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 9 23:46:48.482946 kubelet[2926]: I0709 23:46:48.482927 2926 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 9 23:46:48.483479 kubelet[2926]: I0709 23:46:48.483453 2926 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 23:46:48.483680 kubelet[2926]: I0709 23:46:48.483557 2926 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-n-4a8bce7214","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 23:46:48.483797 kubelet[2926]: I0709 23:46:48.483785 2926 topology_manager.go:138] "Creating topology manager with 
none policy" Jul 9 23:46:48.483843 kubelet[2926]: I0709 23:46:48.483835 2926 container_manager_linux.go:304] "Creating device plugin manager" Jul 9 23:46:48.483990 kubelet[2926]: I0709 23:46:48.483978 2926 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:46:48.485784 kubelet[2926]: I0709 23:46:48.485603 2926 kubelet.go:446] "Attempting to sync node with API server" Jul 9 23:46:48.485784 kubelet[2926]: I0709 23:46:48.485622 2926 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 23:46:48.485784 kubelet[2926]: I0709 23:46:48.485639 2926 kubelet.go:352] "Adding apiserver pod source" Jul 9 23:46:48.485784 kubelet[2926]: I0709 23:46:48.485649 2926 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 23:46:48.489647 kubelet[2926]: W0709 23:46:48.489617 2926 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Jul 9 23:46:48.489760 kubelet[2926]: E0709 23:46:48.489744 2926 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:46:48.490152 kubelet[2926]: W0709 23:46:48.490128 2926 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-n-4a8bce7214&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Jul 9 23:46:48.490244 kubelet[2926]: E0709 23:46:48.490231 2926 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Node: failed to list *v1.Node: Get \"https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-n-4a8bce7214&limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError"
Jul 9 23:46:48.490370 kubelet[2926]: I0709 23:46:48.490359 2926 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 9 23:46:48.490784 kubelet[2926]: I0709 23:46:48.490772 2926 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 9 23:46:48.490890 kubelet[2926]: W0709 23:46:48.490881 2926 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 9 23:46:48.491379 kubelet[2926]: I0709 23:46:48.491362 2926 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 9 23:46:48.491464 kubelet[2926]: I0709 23:46:48.491455 2926 server.go:1287] "Started kubelet"
Jul 9 23:46:48.491958 kubelet[2926]: I0709 23:46:48.491932 2926 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 9 23:46:48.493065 kubelet[2926]: I0709 23:46:48.492697 2926 server.go:479] "Adding debug handlers to kubelet server"
Jul 9 23:46:48.493065 kubelet[2926]: I0709 23:46:48.492863 2926 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 9 23:46:48.494831 kubelet[2926]: I0709 23:46:48.494785 2926 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 9 23:46:48.494987 kubelet[2926]: I0709 23:46:48.494972 2926 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 9 23:46:48.496740 kubelet[2926]: I0709 23:46:48.496721 2926 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 9 23:46:48.498069 kubelet[2926]: I0709 23:46:48.498054 2926 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 9 23:46:48.498236 kubelet[2926]: E0709 23:46:48.498223 2926 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-4a8bce7214\" not found"
Jul 9 23:46:48.499727 kubelet[2926]: I0709 23:46:48.499711 2926 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 9 23:46:48.499857 kubelet[2926]: I0709 23:46:48.499847 2926 reconciler.go:26] "Reconciler: start to sync state"
Jul 9 23:46:48.500226 kubelet[2926]: I0709 23:46:48.500212 2926 factory.go:221] Registration of the systemd container factory successfully
Jul 9 23:46:48.500384 kubelet[2926]: I0709 23:46:48.500369 2926 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 9 23:46:48.500861 kubelet[2926]: E0709 23:46:48.500812 2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-4a8bce7214?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="200ms"
Jul 9 23:46:48.500941 kubelet[2926]: E0709 23:46:48.500866 2926 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.10:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.1-n-4a8bce7214.1850ba0a4f62d194 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.1-n-4a8bce7214,UID:ci-4344.1.1-n-4a8bce7214,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.1-n-4a8bce7214,},FirstTimestamp:2025-07-09 23:46:48.491438484 +0000 UTC m=+0.250811196,LastTimestamp:2025-07-09 23:46:48.491438484 +0000 UTC m=+0.250811196,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.1-n-4a8bce7214,}"
Jul 9 23:46:48.501013 kubelet[2926]: W0709 23:46:48.500989 2926 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused
Jul 9 23:46:48.501049 kubelet[2926]: E0709 23:46:48.501014 2926 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError"
Jul 9 23:46:48.501714 kubelet[2926]: I0709 23:46:48.501701 2926 factory.go:221] Registration of the containerd container factory successfully
Jul 9 23:46:48.509545 kubelet[2926]: I0709 23:46:48.509471 2926 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 9 23:46:48.510334 kubelet[2926]: I0709 23:46:48.510307 2926 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 9 23:46:48.510334 kubelet[2926]: I0709 23:46:48.510331 2926 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 9 23:46:48.510413 kubelet[2926]: I0709 23:46:48.510346 2926 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 9 23:46:48.510413 kubelet[2926]: I0709 23:46:48.510350 2926 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 9 23:46:48.510413 kubelet[2926]: E0709 23:46:48.510378 2926 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 9 23:46:48.515490 kubelet[2926]: W0709 23:46:48.515426 2926 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused
Jul 9 23:46:48.515490 kubelet[2926]: E0709 23:46:48.515446 2926 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError"
Jul 9 23:46:48.522457 kubelet[2926]: E0709 23:46:48.521405 2926 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 9 23:46:48.525246 kubelet[2926]: I0709 23:46:48.525180 2926 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 9 23:46:48.525246 kubelet[2926]: I0709 23:46:48.525193 2926 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 9 23:46:48.525322 kubelet[2926]: I0709 23:46:48.525265 2926 state_mem.go:36] "Initialized new in-memory state store"
Jul 9 23:46:48.598590 kubelet[2926]: E0709 23:46:48.598546 2926 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-4a8bce7214\" not found"
Jul 9 23:46:48.610781 kubelet[2926]: E0709 23:46:48.610757 2926 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 9 23:46:48.698984 kubelet[2926]: E0709 23:46:48.698948 2926 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-4a8bce7214\" not found"
Jul 9 23:46:48.701526 kubelet[2926]: E0709 23:46:48.701487 2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-4a8bce7214?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="400ms"
Jul 9 23:46:48.735921 kubelet[2926]: I0709 23:46:48.735894 2926 policy_none.go:49] "None policy: Start"
Jul 9 23:46:48.735921 kubelet[2926]: I0709 23:46:48.735925 2926 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 9 23:46:48.735994 kubelet[2926]: I0709 23:46:48.735938 2926 state_mem.go:35] "Initializing new in-memory state store"
Jul 9 23:46:48.747107 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 9 23:46:48.760426 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 9 23:46:48.763551 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 9 23:46:48.775117 kubelet[2926]: I0709 23:46:48.774707 2926 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 9 23:46:48.775391 kubelet[2926]: I0709 23:46:48.775378 2926 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 9 23:46:48.775482 kubelet[2926]: I0709 23:46:48.775454 2926 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 9 23:46:48.775709 kubelet[2926]: I0709 23:46:48.775695 2926 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 9 23:46:48.776731 kubelet[2926]: E0709 23:46:48.776715 2926 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 9 23:46:48.776852 kubelet[2926]: E0709 23:46:48.776842 2926 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.1-n-4a8bce7214\" not found"
Jul 9 23:46:48.821157 systemd[1]: Created slice kubepods-burstable-pod6e641268d9a7b0a4c49472d40761dae0.slice - libcontainer container kubepods-burstable-pod6e641268d9a7b0a4c49472d40761dae0.slice.
Jul 9 23:46:48.837350 kubelet[2926]: E0709 23:46:48.837175 2926 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-4a8bce7214\" not found" node="ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:48.839990 systemd[1]: Created slice kubepods-burstable-podb06e87a255e1a9884557baf171cd2243.slice - libcontainer container kubepods-burstable-podb06e87a255e1a9884557baf171cd2243.slice.
Jul 9 23:46:48.847000 kubelet[2926]: E0709 23:46:48.846852 2926 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-4a8bce7214\" not found" node="ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:48.849113 systemd[1]: Created slice kubepods-burstable-pod33c4bb09593e3f049b2febda3353d8ec.slice - libcontainer container kubepods-burstable-pod33c4bb09593e3f049b2febda3353d8ec.slice.
Jul 9 23:46:48.850460 kubelet[2926]: E0709 23:46:48.850340 2926 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-4a8bce7214\" not found" node="ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:48.878958 kubelet[2926]: I0709 23:46:48.878943 2926 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:48.879447 kubelet[2926]: E0709 23:46:48.879408 2926 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:48.901798 kubelet[2926]: I0709 23:46:48.901649 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b06e87a255e1a9884557baf171cd2243-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-n-4a8bce7214\" (UID: \"b06e87a255e1a9884557baf171cd2243\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:48.901798 kubelet[2926]: I0709 23:46:48.901674 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33c4bb09593e3f049b2febda3353d8ec-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-4a8bce7214\" (UID: \"33c4bb09593e3f049b2febda3353d8ec\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:48.901798 kubelet[2926]: I0709 23:46:48.901687 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33c4bb09593e3f049b2febda3353d8ec-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-n-4a8bce7214\" (UID: \"33c4bb09593e3f049b2febda3353d8ec\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:48.901798 kubelet[2926]: I0709 23:46:48.901698 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33c4bb09593e3f049b2febda3353d8ec-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-4a8bce7214\" (UID: \"33c4bb09593e3f049b2febda3353d8ec\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:48.901798 kubelet[2926]: I0709 23:46:48.901708 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33c4bb09593e3f049b2febda3353d8ec-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-n-4a8bce7214\" (UID: \"33c4bb09593e3f049b2febda3353d8ec\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:48.901931 kubelet[2926]: I0709 23:46:48.901718 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33c4bb09593e3f049b2febda3353d8ec-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-n-4a8bce7214\" (UID: \"33c4bb09593e3f049b2febda3353d8ec\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:48.901931 kubelet[2926]: I0709 23:46:48.901729 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e641268d9a7b0a4c49472d40761dae0-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-n-4a8bce7214\" (UID: \"6e641268d9a7b0a4c49472d40761dae0\") " pod="kube-system/kube-scheduler-ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:48.901931 kubelet[2926]: I0709 23:46:48.901737 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b06e87a255e1a9884557baf171cd2243-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-n-4a8bce7214\" (UID: \"b06e87a255e1a9884557baf171cd2243\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:48.901931 kubelet[2926]: I0709 23:46:48.901746 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b06e87a255e1a9884557baf171cd2243-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-n-4a8bce7214\" (UID: \"b06e87a255e1a9884557baf171cd2243\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:49.081508 kubelet[2926]: I0709 23:46:49.081363 2926 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:49.081975 kubelet[2926]: E0709 23:46:49.081913 2926 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:49.102501 kubelet[2926]: E0709 23:46:49.102468 2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-4a8bce7214?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="800ms"
Jul 9 23:46:49.138429 containerd[1903]: time="2025-07-09T23:46:49.138387896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-n-4a8bce7214,Uid:6e641268d9a7b0a4c49472d40761dae0,Namespace:kube-system,Attempt:0,}"
Jul 9 23:46:49.148031 containerd[1903]: time="2025-07-09T23:46:49.148002291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-n-4a8bce7214,Uid:b06e87a255e1a9884557baf171cd2243,Namespace:kube-system,Attempt:0,}"
Jul 9 23:46:49.151284 containerd[1903]: time="2025-07-09T23:46:49.151250245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-n-4a8bce7214,Uid:33c4bb09593e3f049b2febda3353d8ec,Namespace:kube-system,Attempt:0,}"
Jul 9 23:46:49.304387 containerd[1903]: time="2025-07-09T23:46:49.304240704Z" level=info msg="connecting to shim 0848201fc87f4e38ae5f29dc7f2fe8100d7b8e6427cdcad6db37499d07c0bfb9" address="unix:///run/containerd/s/ea63871405dbdcda436ab7a9fce1bc53ec13ab9a5d307a4249826d938f609d6f" namespace=k8s.io protocol=ttrpc version=3
Jul 9 23:46:49.305805 containerd[1903]: time="2025-07-09T23:46:49.305781942Z" level=info msg="connecting to shim 94709a0f45bc17e9b081fa04b58e2413f736305102f61b5346be0dc42487b50d" address="unix:///run/containerd/s/16654173e5921c7c538e3d9d9bb7a6549ef250e14695c4cfdf5873ec712b1777" namespace=k8s.io protocol=ttrpc version=3
Jul 9 23:46:49.326099 kubelet[2926]: W0709 23:46:49.326052 2926 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused
Jul 9 23:46:49.326494 kubelet[2926]: E0709 23:46:49.326346 2926 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError"
Jul 9 23:46:49.330571 containerd[1903]: time="2025-07-09T23:46:49.330263852Z" level=info msg="connecting to shim ebfcb025336d67b5225150fe0b36920386f7192381b27aa330b5a48c2f57bf1e" address="unix:///run/containerd/s/41405dc9aea6eba24d31a8c0c929a308a3e081f20d19ea516684dc3f8d2d9f8a" namespace=k8s.io protocol=ttrpc version=3
Jul 9 23:46:49.331180 systemd[1]: Started cri-containerd-0848201fc87f4e38ae5f29dc7f2fe8100d7b8e6427cdcad6db37499d07c0bfb9.scope - libcontainer container 0848201fc87f4e38ae5f29dc7f2fe8100d7b8e6427cdcad6db37499d07c0bfb9.
Jul 9 23:46:49.332342 systemd[1]: Started cri-containerd-94709a0f45bc17e9b081fa04b58e2413f736305102f61b5346be0dc42487b50d.scope - libcontainer container 94709a0f45bc17e9b081fa04b58e2413f736305102f61b5346be0dc42487b50d.
Jul 9 23:46:49.352168 systemd[1]: Started cri-containerd-ebfcb025336d67b5225150fe0b36920386f7192381b27aa330b5a48c2f57bf1e.scope - libcontainer container ebfcb025336d67b5225150fe0b36920386f7192381b27aa330b5a48c2f57bf1e.
Jul 9 23:46:49.382781 containerd[1903]: time="2025-07-09T23:46:49.382686169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-n-4a8bce7214,Uid:b06e87a255e1a9884557baf171cd2243,Namespace:kube-system,Attempt:0,} returns sandbox id \"94709a0f45bc17e9b081fa04b58e2413f736305102f61b5346be0dc42487b50d\""
Jul 9 23:46:49.385924 containerd[1903]: time="2025-07-09T23:46:49.385891938Z" level=info msg="CreateContainer within sandbox \"94709a0f45bc17e9b081fa04b58e2413f736305102f61b5346be0dc42487b50d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 9 23:46:49.393453 containerd[1903]: time="2025-07-09T23:46:49.393426822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-n-4a8bce7214,Uid:6e641268d9a7b0a4c49472d40761dae0,Namespace:kube-system,Attempt:0,} returns sandbox id \"0848201fc87f4e38ae5f29dc7f2fe8100d7b8e6427cdcad6db37499d07c0bfb9\""
Jul 9 23:46:49.395698 containerd[1903]: time="2025-07-09T23:46:49.395467020Z" level=info msg="CreateContainer within sandbox \"0848201fc87f4e38ae5f29dc7f2fe8100d7b8e6427cdcad6db37499d07c0bfb9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 9 23:46:49.414754 containerd[1903]: time="2025-07-09T23:46:49.414717012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-n-4a8bce7214,Uid:33c4bb09593e3f049b2febda3353d8ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebfcb025336d67b5225150fe0b36920386f7192381b27aa330b5a48c2f57bf1e\""
Jul 9 23:46:49.416216 containerd[1903]: time="2025-07-09T23:46:49.416190104Z" level=info msg="CreateContainer within sandbox \"ebfcb025336d67b5225150fe0b36920386f7192381b27aa330b5a48c2f57bf1e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 9 23:46:49.432862 containerd[1903]: time="2025-07-09T23:46:49.432830184Z" level=info msg="Container 1da9ce6211386348b61f10c2dfcdf8c147f0308f7e9ae28ec11e166b13c9fad1: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:46:49.474713 containerd[1903]: time="2025-07-09T23:46:49.474673364Z" level=info msg="Container 4aa942755986ee9a7c4ed2ebc434649be186f87b7a621f3cf148eb70c2e25540: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:46:49.483681 kubelet[2926]: I0709 23:46:49.483654 2926 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:49.484959 containerd[1903]: time="2025-07-09T23:46:49.484378034Z" level=info msg="CreateContainer within sandbox \"94709a0f45bc17e9b081fa04b58e2413f736305102f61b5346be0dc42487b50d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1da9ce6211386348b61f10c2dfcdf8c147f0308f7e9ae28ec11e166b13c9fad1\""
Jul 9 23:46:49.485025 kubelet[2926]: E0709 23:46:49.484795 2926 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:49.485092 containerd[1903]: time="2025-07-09T23:46:49.485068943Z" level=info msg="StartContainer for \"1da9ce6211386348b61f10c2dfcdf8c147f0308f7e9ae28ec11e166b13c9fad1\""
Jul 9 23:46:49.486307 containerd[1903]: time="2025-07-09T23:46:49.486284732Z" level=info msg="connecting to shim 1da9ce6211386348b61f10c2dfcdf8c147f0308f7e9ae28ec11e166b13c9fad1" address="unix:///run/containerd/s/16654173e5921c7c538e3d9d9bb7a6549ef250e14695c4cfdf5873ec712b1777" protocol=ttrpc version=3
Jul 9 23:46:49.487086 containerd[1903]: time="2025-07-09T23:46:49.487064084Z" level=info msg="Container b66cb3acb58341d45cff00495b451ca7a8a398e65711e6652756f8d5967ee821: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:46:49.501174 systemd[1]: Started cri-containerd-1da9ce6211386348b61f10c2dfcdf8c147f0308f7e9ae28ec11e166b13c9fad1.scope - libcontainer container 1da9ce6211386348b61f10c2dfcdf8c147f0308f7e9ae28ec11e166b13c9fad1.
Jul 9 23:46:49.509206 containerd[1903]: time="2025-07-09T23:46:49.509142265Z" level=info msg="CreateContainer within sandbox \"0848201fc87f4e38ae5f29dc7f2fe8100d7b8e6427cdcad6db37499d07c0bfb9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4aa942755986ee9a7c4ed2ebc434649be186f87b7a621f3cf148eb70c2e25540\""
Jul 9 23:46:49.509884 containerd[1903]: time="2025-07-09T23:46:49.509859487Z" level=info msg="StartContainer for \"4aa942755986ee9a7c4ed2ebc434649be186f87b7a621f3cf148eb70c2e25540\""
Jul 9 23:46:49.514541 containerd[1903]: time="2025-07-09T23:46:49.514467074Z" level=info msg="connecting to shim 4aa942755986ee9a7c4ed2ebc434649be186f87b7a621f3cf148eb70c2e25540" address="unix:///run/containerd/s/ea63871405dbdcda436ab7a9fce1bc53ec13ab9a5d307a4249826d938f609d6f" protocol=ttrpc version=3
Jul 9 23:46:49.541795 containerd[1903]: time="2025-07-09T23:46:49.540093627Z" level=info msg="CreateContainer within sandbox \"ebfcb025336d67b5225150fe0b36920386f7192381b27aa330b5a48c2f57bf1e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b66cb3acb58341d45cff00495b451ca7a8a398e65711e6652756f8d5967ee821\""
Jul 9 23:46:49.541795 containerd[1903]: time="2025-07-09T23:46:49.540345723Z" level=info msg="StartContainer for \"b66cb3acb58341d45cff00495b451ca7a8a398e65711e6652756f8d5967ee821\""
Jul 9 23:46:49.541795 containerd[1903]: time="2025-07-09T23:46:49.541027135Z" level=info msg="connecting to shim b66cb3acb58341d45cff00495b451ca7a8a398e65711e6652756f8d5967ee821" address="unix:///run/containerd/s/41405dc9aea6eba24d31a8c0c929a308a3e081f20d19ea516684dc3f8d2d9f8a" protocol=ttrpc version=3
Jul 9 23:46:49.541183 systemd[1]: Started cri-containerd-4aa942755986ee9a7c4ed2ebc434649be186f87b7a621f3cf148eb70c2e25540.scope - libcontainer container 4aa942755986ee9a7c4ed2ebc434649be186f87b7a621f3cf148eb70c2e25540.
Jul 9 23:46:49.553845 containerd[1903]: time="2025-07-09T23:46:49.553780858Z" level=info msg="StartContainer for \"1da9ce6211386348b61f10c2dfcdf8c147f0308f7e9ae28ec11e166b13c9fad1\" returns successfully"
Jul 9 23:46:49.581332 systemd[1]: Started cri-containerd-b66cb3acb58341d45cff00495b451ca7a8a398e65711e6652756f8d5967ee821.scope - libcontainer container b66cb3acb58341d45cff00495b451ca7a8a398e65711e6652756f8d5967ee821.
Jul 9 23:46:50.287445 kubelet[2926]: I0709 23:46:50.287417 2926 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:50.389335 containerd[1903]: time="2025-07-09T23:46:50.389225279Z" level=info msg="StartContainer for \"b66cb3acb58341d45cff00495b451ca7a8a398e65711e6652756f8d5967ee821\" returns successfully"
Jul 9 23:46:50.390515 containerd[1903]: time="2025-07-09T23:46:50.390211653Z" level=info msg="StartContainer for \"4aa942755986ee9a7c4ed2ebc434649be186f87b7a621f3cf148eb70c2e25540\" returns successfully"
Jul 9 23:46:50.546389 kubelet[2926]: E0709 23:46:50.545865 2926 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-4a8bce7214\" not found" node="ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:50.550504 kubelet[2926]: E0709 23:46:50.550269 2926 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-4a8bce7214\" not found" node="ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:50.550504 kubelet[2926]: E0709 23:46:50.550414 2926 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-4a8bce7214\" not found" node="ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:50.900157 kubelet[2926]: E0709 23:46:50.900117 2926 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.1-n-4a8bce7214\" not found" node="ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:50.938767 kubelet[2926]: I0709 23:46:50.938693 2926 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:50.999828 kubelet[2926]: I0709 23:46:50.999790 2926 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:51.007116 kubelet[2926]: E0709 23:46:51.007088 2926 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.1-n-4a8bce7214\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:51.007116 kubelet[2926]: I0709 23:46:51.007111 2926 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:51.008471 kubelet[2926]: E0709 23:46:51.008445 2926 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-n-4a8bce7214\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:51.008471 kubelet[2926]: I0709 23:46:51.008465 2926 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:51.009965 kubelet[2926]: E0709 23:46:51.009925 2926 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-n-4a8bce7214\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:51.488775 kubelet[2926]: I0709 23:46:51.488742 2926 apiserver.go:52] "Watching apiserver"
Jul 9 23:46:51.500781 kubelet[2926]: I0709 23:46:51.500759 2926 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 9 23:46:51.548503 kubelet[2926]: I0709 23:46:51.548249 2926 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:51.548503 kubelet[2926]: I0709 23:46:51.548267 2926 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:51.548503 kubelet[2926]: I0709 23:46:51.548394 2926 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:51.550759 kubelet[2926]: E0709 23:46:51.550725 2926 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-n-4a8bce7214\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:51.550935 kubelet[2926]: E0709 23:46:51.550915 2926 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-n-4a8bce7214\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:51.551019 kubelet[2926]: E0709 23:46:51.551004 2926 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.1-n-4a8bce7214\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:52.549737 kubelet[2926]: I0709 23:46:52.549541 2926 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:52.549737 kubelet[2926]: I0709 23:46:52.549622 2926 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-n-4a8bce7214"
Jul 9 23:46:52.561221 kubelet[2926]: W0709 23:46:52.561123 2926 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 9 23:46:52.568723 kubelet[2926]: W0709 23:46:52.568637 2926 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 9 23:46:53.146339 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jul 9 23:46:53.219333 systemd[1]: Reload requested from client PID 3195 ('systemctl') (unit session-9.scope)...
Jul 9 23:46:53.219591 systemd[1]: Reloading...
Jul 9 23:46:53.294062 zram_generator::config[3241]: No configuration found.
Jul 9 23:46:53.359542 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 23:46:53.450007 systemd[1]: Reloading finished in 230 ms.
Jul 9 23:46:53.474510 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 23:46:53.486392 systemd[1]: kubelet.service: Deactivated successfully.
Jul 9 23:46:53.486570 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:46:53.486609 systemd[1]: kubelet.service: Consumed 503ms CPU time, 127.9M memory peak.
Jul 9 23:46:53.488913 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 23:46:53.664664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:46:53.668162 (kubelet)[3305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 9 23:46:53.700679 kubelet[3305]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 9 23:46:53.700679 kubelet[3305]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 9 23:46:53.700679 kubelet[3305]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 9 23:46:53.700679 kubelet[3305]: I0709 23:46:53.700528 3305 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 9 23:46:53.708070 kubelet[3305]: I0709 23:46:53.707438 3305 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 9 23:46:53.708070 kubelet[3305]: I0709 23:46:53.707462 3305 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 9 23:46:53.708070 kubelet[3305]: I0709 23:46:53.707792 3305 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 9 23:46:53.709314 kubelet[3305]: I0709 23:46:53.709294 3305 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 9 23:46:53.711458 kubelet[3305]: I0709 23:46:53.711436 3305 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 9 23:46:53.715902 kubelet[3305]: I0709 23:46:53.715884 3305 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 9 23:46:53.718146 kubelet[3305]: I0709 23:46:53.718125 3305 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 9 23:46:53.718289 kubelet[3305]: I0709 23:46:53.718263 3305 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 9 23:46:53.718401 kubelet[3305]: I0709 23:46:53.718285 3305 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-n-4a8bce7214","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 9 23:46:53.718481 kubelet[3305]: I0709 23:46:53.718406 3305 topology_manager.go:138] "Creating topology manager with none policy"
Jul 9 23:46:53.718481 kubelet[3305]: I0709 23:46:53.718413 3305 container_manager_linux.go:304] "Creating device plugin manager"
Jul 9 23:46:53.718481 kubelet[3305]: I0709 23:46:53.718446 3305 state_mem.go:36] "Initialized new in-memory state store"
Jul 9 23:46:53.718551 kubelet[3305]: I0709 23:46:53.718532 3305 kubelet.go:446] "Attempting to sync node with API server"
Jul 9 23:46:53.718551 kubelet[3305]: I0709 23:46:53.718541 3305 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 9 23:46:53.718582 kubelet[3305]: I0709 23:46:53.718557 3305 kubelet.go:352] "Adding apiserver pod source"
Jul 9 23:46:53.718582 kubelet[3305]: I0709 23:46:53.718564 3305 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 9 23:46:53.721086 kubelet[3305]: I0709 23:46:53.719763 3305 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 9 23:46:53.721086 kubelet[3305]: I0709 23:46:53.720032 3305 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 9 23:46:53.721086 kubelet[3305]: I0709 23:46:53.720325 3305 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 9 23:46:53.721086 kubelet[3305]: I0709 23:46:53.720344 3305 server.go:1287] "Started kubelet"
Jul 9 23:46:53.723812 kubelet[3305]: I0709 23:46:53.723335 3305 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 9 23:46:53.726449 kubelet[3305]: I0709 23:46:53.725328 3305 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 9 23:46:53.726449 kubelet[3305]: I0709 23:46:53.725605 3305 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 9 23:46:53.728085 kubelet[3305]: I0709 23:46:53.728064 3305 server.go:479] "Adding debug handlers to kubelet server"
Jul 9 23:46:53.729688 kubelet[3305]: I0709 23:46:53.729648 3305 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 9 23:46:53.730755 kubelet[3305]: I0709 23:46:53.730732 3305 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 9 23:46:53.731619 kubelet[3305]: I0709 23:46:53.731594 3305 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 9 23:46:53.731731 kubelet[3305]: E0709 23:46:53.731710 3305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-4a8bce7214\" not found"
Jul 9 23:46:53.732245 kubelet[3305]: I0709 23:46:53.732224 3305 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 9 23:46:53.733067 kubelet[3305]: I0709 23:46:53.732373 3305 reconciler.go:26] "Reconciler: start to sync state"
Jul 9 23:46:53.741206 kubelet[3305]: I0709 23:46:53.741167 3305 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 9 23:46:53.743087 kubelet[3305]: I0709 23:46:53.743061 3305 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 9 23:46:53.743087 kubelet[3305]: I0709 23:46:53.743087 3305 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 9 23:46:53.743167 kubelet[3305]: I0709 23:46:53.743103 3305 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 9 23:46:53.743167 kubelet[3305]: I0709 23:46:53.743108 3305 kubelet.go:2382] "Starting kubelet main sync loop" Jul 9 23:46:53.743167 kubelet[3305]: E0709 23:46:53.743143 3305 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 23:46:53.748046 kubelet[3305]: I0709 23:46:53.747166 3305 factory.go:221] Registration of the systemd container factory successfully Jul 9 23:46:53.748046 kubelet[3305]: I0709 23:46:53.747261 3305 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 23:46:53.755080 kubelet[3305]: I0709 23:46:53.755018 3305 factory.go:221] Registration of the containerd container factory successfully Jul 9 23:46:53.788778 kubelet[3305]: I0709 23:46:53.788756 3305 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 9 23:46:53.788778 kubelet[3305]: I0709 23:46:53.788770 3305 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 9 23:46:53.788925 kubelet[3305]: I0709 23:46:53.788796 3305 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:46:53.788925 kubelet[3305]: I0709 23:46:53.788908 3305 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 9 23:46:53.788925 kubelet[3305]: I0709 23:46:53.788916 3305 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 9 23:46:53.788984 kubelet[3305]: I0709 23:46:53.788930 3305 policy_none.go:49] "None policy: Start" Jul 9 23:46:53.788984 kubelet[3305]: I0709 23:46:53.788936 3305 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 9 23:46:53.788984 kubelet[3305]: I0709 23:46:53.788943 3305 state_mem.go:35] "Initializing new in-memory state store" Jul 9 23:46:53.789045 kubelet[3305]: I0709 23:46:53.789003 3305 state_mem.go:75] "Updated machine memory state" Jul 9 23:46:53.792656 kubelet[3305]: I0709 
23:46:53.792637 3305 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 23:46:53.792998 kubelet[3305]: I0709 23:46:53.792980 3305 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 23:46:53.793031 kubelet[3305]: I0709 23:46:53.792994 3305 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 23:46:53.793509 kubelet[3305]: I0709 23:46:53.793456 3305 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 23:46:53.795235 kubelet[3305]: E0709 23:46:53.795213 3305 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 9 23:46:53.844347 kubelet[3305]: I0709 23:46:53.844091 3305 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-n-4a8bce7214" Jul 9 23:46:53.844347 kubelet[3305]: I0709 23:46:53.844220 3305 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4a8bce7214" Jul 9 23:46:53.844347 kubelet[3305]: I0709 23:46:53.844101 3305 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-n-4a8bce7214" Jul 9 23:46:53.858241 kubelet[3305]: W0709 23:46:53.858220 3305 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 23:46:53.858638 kubelet[3305]: E0709 23:46:53.858561 3305 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-n-4a8bce7214\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.1-n-4a8bce7214" Jul 9 23:46:53.859584 kubelet[3305]: W0709 23:46:53.859569 3305 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: 
[must not contain dots] Jul 9 23:46:53.859808 kubelet[3305]: W0709 23:46:53.859693 3305 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 23:46:53.859808 kubelet[3305]: E0709 23:46:53.859767 3305 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-n-4a8bce7214\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-n-4a8bce7214" Jul 9 23:46:53.900234 kubelet[3305]: I0709 23:46:53.900127 3305 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-4a8bce7214" Jul 9 23:46:53.911880 kubelet[3305]: I0709 23:46:53.911847 3305 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.1.1-n-4a8bce7214" Jul 9 23:46:53.911946 kubelet[3305]: I0709 23:46:53.911905 3305 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.1-n-4a8bce7214" Jul 9 23:46:54.033106 kubelet[3305]: I0709 23:46:54.032911 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33c4bb09593e3f049b2febda3353d8ec-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-n-4a8bce7214\" (UID: \"33c4bb09593e3f049b2febda3353d8ec\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4a8bce7214" Jul 9 23:46:54.033106 kubelet[3305]: I0709 23:46:54.032945 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33c4bb09593e3f049b2febda3353d8ec-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-4a8bce7214\" (UID: \"33c4bb09593e3f049b2febda3353d8ec\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4a8bce7214" Jul 9 23:46:54.033106 kubelet[3305]: I0709 23:46:54.032956 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/33c4bb09593e3f049b2febda3353d8ec-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-n-4a8bce7214\" (UID: \"33c4bb09593e3f049b2febda3353d8ec\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4a8bce7214" Jul 9 23:46:54.033106 kubelet[3305]: I0709 23:46:54.032972 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33c4bb09593e3f049b2febda3353d8ec-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-n-4a8bce7214\" (UID: \"33c4bb09593e3f049b2febda3353d8ec\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4a8bce7214" Jul 9 23:46:54.033106 kubelet[3305]: I0709 23:46:54.032986 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e641268d9a7b0a4c49472d40761dae0-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-n-4a8bce7214\" (UID: \"6e641268d9a7b0a4c49472d40761dae0\") " pod="kube-system/kube-scheduler-ci-4344.1.1-n-4a8bce7214" Jul 9 23:46:54.033271 kubelet[3305]: I0709 23:46:54.032999 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b06e87a255e1a9884557baf171cd2243-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-n-4a8bce7214\" (UID: \"b06e87a255e1a9884557baf171cd2243\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-4a8bce7214" Jul 9 23:46:54.033271 kubelet[3305]: I0709 23:46:54.033012 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b06e87a255e1a9884557baf171cd2243-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-n-4a8bce7214\" (UID: \"b06e87a255e1a9884557baf171cd2243\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-4a8bce7214" Jul 9 
23:46:54.033271 kubelet[3305]: I0709 23:46:54.033053 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b06e87a255e1a9884557baf171cd2243-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-n-4a8bce7214\" (UID: \"b06e87a255e1a9884557baf171cd2243\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-4a8bce7214" Jul 9 23:46:54.033271 kubelet[3305]: I0709 23:46:54.033079 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33c4bb09593e3f049b2febda3353d8ec-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-4a8bce7214\" (UID: \"33c4bb09593e3f049b2febda3353d8ec\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4a8bce7214" Jul 9 23:46:54.239497 sudo[3337]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 9 23:46:54.240078 sudo[3337]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 9 23:46:54.576472 update_engine[1887]: I20250709 23:46:54.576400 1887 update_attempter.cc:509] Updating boot flags... 
Jul 9 23:46:54.583937 sudo[3337]: pam_unix(sudo:session): session closed for user root Jul 9 23:46:54.719090 kubelet[3305]: I0709 23:46:54.718795 3305 apiserver.go:52] "Watching apiserver" Jul 9 23:46:54.733308 kubelet[3305]: I0709 23:46:54.733270 3305 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 9 23:46:54.776191 kubelet[3305]: I0709 23:46:54.776168 3305 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-n-4a8bce7214" Jul 9 23:46:54.777443 kubelet[3305]: I0709 23:46:54.777257 3305 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-n-4a8bce7214" Jul 9 23:46:54.807476 kubelet[3305]: W0709 23:46:54.807238 3305 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 23:46:54.807476 kubelet[3305]: E0709 23:46:54.807292 3305 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-n-4a8bce7214\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-n-4a8bce7214" Jul 9 23:46:54.809001 kubelet[3305]: I0709 23:46:54.808910 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4a8bce7214" podStartSLOduration=1.8089010220000001 podStartE2EDuration="1.808901022s" podCreationTimestamp="2025-07-09 23:46:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:46:54.807652449 +0000 UTC m=+1.136531835" watchObservedRunningTime="2025-07-09 23:46:54.808901022 +0000 UTC m=+1.137780384" Jul 9 23:46:54.811048 kubelet[3305]: W0709 23:46:54.809712 3305 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 
23:46:54.811048 kubelet[3305]: E0709 23:46:54.809758 3305 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-n-4a8bce7214\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.1-n-4a8bce7214" Jul 9 23:46:54.840612 kubelet[3305]: I0709 23:46:54.840324 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.1-n-4a8bce7214" podStartSLOduration=2.840304638 podStartE2EDuration="2.840304638s" podCreationTimestamp="2025-07-09 23:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:46:54.830276262 +0000 UTC m=+1.159155632" watchObservedRunningTime="2025-07-09 23:46:54.840304638 +0000 UTC m=+1.169184000" Jul 9 23:46:54.851843 kubelet[3305]: I0709 23:46:54.851763 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.1-n-4a8bce7214" podStartSLOduration=2.851753249 podStartE2EDuration="2.851753249s" podCreationTimestamp="2025-07-09 23:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:46:54.841155152 +0000 UTC m=+1.170034546" watchObservedRunningTime="2025-07-09 23:46:54.851753249 +0000 UTC m=+1.180632619" Jul 9 23:46:55.661081 sudo[2365]: pam_unix(sudo:session): session closed for user root Jul 9 23:46:55.735862 sshd[2364]: Connection closed by 10.200.16.10 port 49822 Jul 9 23:46:55.736453 sshd-session[2362]: pam_unix(sshd:session): session closed for user core Jul 9 23:46:55.739269 systemd-logind[1884]: Session 9 logged out. Waiting for processes to exit. Jul 9 23:46:55.740865 systemd[1]: sshd@6-10.200.20.10:22-10.200.16.10:49822.service: Deactivated successfully. Jul 9 23:46:55.743617 systemd[1]: session-9.scope: Deactivated successfully. 
Jul 9 23:46:55.743793 systemd[1]: session-9.scope: Consumed 3.668s CPU time, 268.8M memory peak. Jul 9 23:46:55.746355 systemd-logind[1884]: Removed session 9. Jul 9 23:46:59.574575 kubelet[3305]: I0709 23:46:59.574544 3305 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 9 23:46:59.575107 containerd[1903]: time="2025-07-09T23:46:59.574877409Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 9 23:46:59.575497 kubelet[3305]: I0709 23:46:59.575474 3305 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 9 23:47:00.488488 systemd[1]: Created slice kubepods-besteffort-pod58aba572_b45b_4ea3_b1d0_f479b70137dc.slice - libcontainer container kubepods-besteffort-pod58aba572_b45b_4ea3_b1d0_f479b70137dc.slice. Jul 9 23:47:00.501227 systemd[1]: Created slice kubepods-burstable-pod694a4f10_519d_487c_bd5f_b28b50e5ae88.slice - libcontainer container kubepods-burstable-pod694a4f10_519d_487c_bd5f_b28b50e5ae88.slice. 
Jul 9 23:47:00.569296 kubelet[3305]: I0709 23:47:00.569253 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-cilium-cgroup\") pod \"cilium-46hpf\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " pod="kube-system/cilium-46hpf" Jul 9 23:47:00.569438 kubelet[3305]: I0709 23:47:00.569327 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-xtables-lock\") pod \"cilium-46hpf\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " pod="kube-system/cilium-46hpf" Jul 9 23:47:00.569438 kubelet[3305]: I0709 23:47:00.569344 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9qc2\" (UniqueName: \"kubernetes.io/projected/58aba572-b45b-4ea3-b1d0-f479b70137dc-kube-api-access-g9qc2\") pod \"kube-proxy-xcfhl\" (UID: \"58aba572-b45b-4ea3-b1d0-f479b70137dc\") " pod="kube-system/kube-proxy-xcfhl" Jul 9 23:47:00.569438 kubelet[3305]: I0709 23:47:00.569357 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/694a4f10-519d-487c-bd5f-b28b50e5ae88-clustermesh-secrets\") pod \"cilium-46hpf\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " pod="kube-system/cilium-46hpf" Jul 9 23:47:00.569438 kubelet[3305]: I0709 23:47:00.569400 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-host-proc-sys-net\") pod \"cilium-46hpf\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " pod="kube-system/cilium-46hpf" Jul 9 23:47:00.569438 kubelet[3305]: I0709 23:47:00.569409 3305 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/58aba572-b45b-4ea3-b1d0-f479b70137dc-kube-proxy\") pod \"kube-proxy-xcfhl\" (UID: \"58aba572-b45b-4ea3-b1d0-f479b70137dc\") " pod="kube-system/kube-proxy-xcfhl" Jul 9 23:47:00.569525 kubelet[3305]: I0709 23:47:00.569418 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-cilium-run\") pod \"cilium-46hpf\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " pod="kube-system/cilium-46hpf" Jul 9 23:47:00.569525 kubelet[3305]: I0709 23:47:00.569427 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58aba572-b45b-4ea3-b1d0-f479b70137dc-lib-modules\") pod \"kube-proxy-xcfhl\" (UID: \"58aba572-b45b-4ea3-b1d0-f479b70137dc\") " pod="kube-system/kube-proxy-xcfhl" Jul 9 23:47:00.569525 kubelet[3305]: I0709 23:47:00.569438 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-lib-modules\") pod \"cilium-46hpf\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " pod="kube-system/cilium-46hpf" Jul 9 23:47:00.569525 kubelet[3305]: I0709 23:47:00.569475 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/694a4f10-519d-487c-bd5f-b28b50e5ae88-hubble-tls\") pod \"cilium-46hpf\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " pod="kube-system/cilium-46hpf" Jul 9 23:47:00.569525 kubelet[3305]: I0709 23:47:00.569488 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-bpf-maps\") pod \"cilium-46hpf\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " pod="kube-system/cilium-46hpf" Jul 9 23:47:00.569525 kubelet[3305]: I0709 23:47:00.569496 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-cni-path\") pod \"cilium-46hpf\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " pod="kube-system/cilium-46hpf" Jul 9 23:47:00.569614 kubelet[3305]: I0709 23:47:00.569504 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxw6g\" (UniqueName: \"kubernetes.io/projected/694a4f10-519d-487c-bd5f-b28b50e5ae88-kube-api-access-dxw6g\") pod \"cilium-46hpf\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " pod="kube-system/cilium-46hpf" Jul 9 23:47:00.569614 kubelet[3305]: I0709 23:47:00.569513 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58aba572-b45b-4ea3-b1d0-f479b70137dc-xtables-lock\") pod \"kube-proxy-xcfhl\" (UID: \"58aba572-b45b-4ea3-b1d0-f479b70137dc\") " pod="kube-system/kube-proxy-xcfhl" Jul 9 23:47:00.569614 kubelet[3305]: I0709 23:47:00.569541 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-host-proc-sys-kernel\") pod \"cilium-46hpf\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " pod="kube-system/cilium-46hpf" Jul 9 23:47:00.569614 kubelet[3305]: I0709 23:47:00.569573 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-hostproc\") pod \"cilium-46hpf\" (UID: 
\"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " pod="kube-system/cilium-46hpf" Jul 9 23:47:00.569672 kubelet[3305]: I0709 23:47:00.569614 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-etc-cni-netd\") pod \"cilium-46hpf\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " pod="kube-system/cilium-46hpf" Jul 9 23:47:00.569672 kubelet[3305]: I0709 23:47:00.569625 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/694a4f10-519d-487c-bd5f-b28b50e5ae88-cilium-config-path\") pod \"cilium-46hpf\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " pod="kube-system/cilium-46hpf" Jul 9 23:47:00.626554 kubelet[3305]: I0709 23:47:00.626416 3305 status_manager.go:890] "Failed to get status for pod" podUID="ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d" pod="kube-system/cilium-operator-6c4d7847fc-wrxbf" err="pods \"cilium-operator-6c4d7847fc-wrxbf\" is forbidden: User \"system:node:ci-4344.1.1-n-4a8bce7214\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4344.1.1-n-4a8bce7214' and this object" Jul 9 23:47:00.631748 systemd[1]: Created slice kubepods-besteffort-podef0a66cc_3d78_4bbc_82a6_df4fc6d69f6d.slice - libcontainer container kubepods-besteffort-podef0a66cc_3d78_4bbc_82a6_df4fc6d69f6d.slice. 
Jul 9 23:47:00.735974 kubelet[3305]: I0709 23:47:00.670151 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4t7m\" (UniqueName: \"kubernetes.io/projected/ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d-kube-api-access-c4t7m\") pod \"cilium-operator-6c4d7847fc-wrxbf\" (UID: \"ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d\") " pod="kube-system/cilium-operator-6c4d7847fc-wrxbf" Jul 9 23:47:00.735974 kubelet[3305]: I0709 23:47:00.670197 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-wrxbf\" (UID: \"ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d\") " pod="kube-system/cilium-operator-6c4d7847fc-wrxbf" Jul 9 23:47:00.796523 containerd[1903]: time="2025-07-09T23:47:00.796316531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xcfhl,Uid:58aba572-b45b-4ea3-b1d0-f479b70137dc,Namespace:kube-system,Attempt:0,}" Jul 9 23:47:00.804998 containerd[1903]: time="2025-07-09T23:47:00.804933858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-46hpf,Uid:694a4f10-519d-487c-bd5f-b28b50e5ae88,Namespace:kube-system,Attempt:0,}" Jul 9 23:47:00.907779 containerd[1903]: time="2025-07-09T23:47:00.907691162Z" level=info msg="connecting to shim cdb8af2e7bc112c12be45ee6d50c130843ae62f35ddcde08ad178b3137795f38" address="unix:///run/containerd/s/675157b3f30dce0dacf1308fc0415386173d79cce717e6f5f775ebe2aa1c8ff1" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:47:00.928177 systemd[1]: Started cri-containerd-cdb8af2e7bc112c12be45ee6d50c130843ae62f35ddcde08ad178b3137795f38.scope - libcontainer container cdb8af2e7bc112c12be45ee6d50c130843ae62f35ddcde08ad178b3137795f38. 
Jul 9 23:47:00.934895 containerd[1903]: time="2025-07-09T23:47:00.934863449Z" level=info msg="connecting to shim 101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43" address="unix:///run/containerd/s/3ee7ec1d457081ce90732c0fd3d3ffcae0e3d4177866113cbf410bf6895a0d95" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:47:00.956150 systemd[1]: Started cri-containerd-101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43.scope - libcontainer container 101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43. Jul 9 23:47:00.966748 containerd[1903]: time="2025-07-09T23:47:00.966663277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xcfhl,Uid:58aba572-b45b-4ea3-b1d0-f479b70137dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdb8af2e7bc112c12be45ee6d50c130843ae62f35ddcde08ad178b3137795f38\"" Jul 9 23:47:00.971755 containerd[1903]: time="2025-07-09T23:47:00.971687143Z" level=info msg="CreateContainer within sandbox \"cdb8af2e7bc112c12be45ee6d50c130843ae62f35ddcde08ad178b3137795f38\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 9 23:47:01.003578 containerd[1903]: time="2025-07-09T23:47:01.003538797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-46hpf,Uid:694a4f10-519d-487c-bd5f-b28b50e5ae88,Namespace:kube-system,Attempt:0,} returns sandbox id \"101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43\"" Jul 9 23:47:01.005075 containerd[1903]: time="2025-07-09T23:47:01.005028139Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 9 23:47:01.020189 containerd[1903]: time="2025-07-09T23:47:01.020156314Z" level=info msg="Container 4fa888e33707f9c20307351dc86d93aed409705c972313045562e5b14365fe26: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:01.036988 containerd[1903]: time="2025-07-09T23:47:01.036952796Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wrxbf,Uid:ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d,Namespace:kube-system,Attempt:0,}" Jul 9 23:47:01.049253 containerd[1903]: time="2025-07-09T23:47:01.049099727Z" level=info msg="CreateContainer within sandbox \"cdb8af2e7bc112c12be45ee6d50c130843ae62f35ddcde08ad178b3137795f38\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4fa888e33707f9c20307351dc86d93aed409705c972313045562e5b14365fe26\"" Jul 9 23:47:01.050032 containerd[1903]: time="2025-07-09T23:47:01.049976226Z" level=info msg="StartContainer for \"4fa888e33707f9c20307351dc86d93aed409705c972313045562e5b14365fe26\"" Jul 9 23:47:01.051391 containerd[1903]: time="2025-07-09T23:47:01.051313571Z" level=info msg="connecting to shim 4fa888e33707f9c20307351dc86d93aed409705c972313045562e5b14365fe26" address="unix:///run/containerd/s/675157b3f30dce0dacf1308fc0415386173d79cce717e6f5f775ebe2aa1c8ff1" protocol=ttrpc version=3 Jul 9 23:47:01.071193 systemd[1]: Started cri-containerd-4fa888e33707f9c20307351dc86d93aed409705c972313045562e5b14365fe26.scope - libcontainer container 4fa888e33707f9c20307351dc86d93aed409705c972313045562e5b14365fe26. Jul 9 23:47:01.106441 containerd[1903]: time="2025-07-09T23:47:01.106330998Z" level=info msg="StartContainer for \"4fa888e33707f9c20307351dc86d93aed409705c972313045562e5b14365fe26\" returns successfully" Jul 9 23:47:01.126910 containerd[1903]: time="2025-07-09T23:47:01.126527328Z" level=info msg="connecting to shim a533c00451210b373f9311e3245b51aa9cca2792cce9c4b6634130e430a2004b" address="unix:///run/containerd/s/62878dab06a3e79e4c73223d5dd69069f7b79fe9c523b6d255afc2cd20c07efe" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:47:01.146204 systemd[1]: Started cri-containerd-a533c00451210b373f9311e3245b51aa9cca2792cce9c4b6634130e430a2004b.scope - libcontainer container a533c00451210b373f9311e3245b51aa9cca2792cce9c4b6634130e430a2004b. 
Jul 9 23:47:01.187442 containerd[1903]: time="2025-07-09T23:47:01.187396757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wrxbf,Uid:ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a533c00451210b373f9311e3245b51aa9cca2792cce9c4b6634130e430a2004b\"" Jul 9 23:47:02.730915 kubelet[3305]: I0709 23:47:02.730783 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xcfhl" podStartSLOduration=2.7307318289999998 podStartE2EDuration="2.730731829s" podCreationTimestamp="2025-07-09 23:47:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:47:01.808389552 +0000 UTC m=+8.137268914" watchObservedRunningTime="2025-07-09 23:47:02.730731829 +0000 UTC m=+9.059611199" Jul 9 23:47:04.679984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount707116890.mount: Deactivated successfully. 
Jul 9 23:47:06.310099 containerd[1903]: time="2025-07-09T23:47:06.309727306Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:47:06.314951 containerd[1903]: time="2025-07-09T23:47:06.314915264Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jul 9 23:47:06.342694 containerd[1903]: time="2025-07-09T23:47:06.341434220Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:47:06.342694 containerd[1903]: time="2025-07-09T23:47:06.342455187Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.337263722s"
Jul 9 23:47:06.342694 containerd[1903]: time="2025-07-09T23:47:06.342484932Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jul 9 23:47:06.344854 containerd[1903]: time="2025-07-09T23:47:06.344813578Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 9 23:47:06.346105 containerd[1903]: time="2025-07-09T23:47:06.346075072Z" level=info msg="CreateContainer within sandbox \"101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 9 23:47:06.526268 containerd[1903]: time="2025-07-09T23:47:06.525933333Z" level=info msg="Container 760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:47:06.545274 containerd[1903]: time="2025-07-09T23:47:06.545235215Z" level=info msg="CreateContainer within sandbox \"101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7\""
Jul 9 23:47:06.545926 containerd[1903]: time="2025-07-09T23:47:06.545887570Z" level=info msg="StartContainer for \"760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7\""
Jul 9 23:47:06.546981 containerd[1903]: time="2025-07-09T23:47:06.546796166Z" level=info msg="connecting to shim 760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7" address="unix:///run/containerd/s/3ee7ec1d457081ce90732c0fd3d3ffcae0e3d4177866113cbf410bf6895a0d95" protocol=ttrpc version=3
Jul 9 23:47:06.568163 systemd[1]: Started cri-containerd-760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7.scope - libcontainer container 760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7.
Jul 9 23:47:06.596530 containerd[1903]: time="2025-07-09T23:47:06.596447759Z" level=info msg="StartContainer for \"760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7\" returns successfully"
Jul 9 23:47:06.597025 systemd[1]: cri-containerd-760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7.scope: Deactivated successfully.
Jul 9 23:47:06.599002 containerd[1903]: time="2025-07-09T23:47:06.598931131Z" level=info msg="received exit event container_id:\"760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7\" id:\"760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7\" pid:3782 exited_at:{seconds:1752104826 nanos:598309072}" Jul 9 23:47:06.599002 containerd[1903]: time="2025-07-09T23:47:06.598992468Z" level=info msg="TaskExit event in podsandbox handler container_id:\"760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7\" id:\"760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7\" pid:3782 exited_at:{seconds:1752104826 nanos:598309072}" Jul 9 23:47:06.612496 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7-rootfs.mount: Deactivated successfully. Jul 9 23:47:08.809661 containerd[1903]: time="2025-07-09T23:47:08.809143674Z" level=info msg="CreateContainer within sandbox \"101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 9 23:47:08.841816 containerd[1903]: time="2025-07-09T23:47:08.841785464Z" level=info msg="Container 285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:08.844204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2932165257.mount: Deactivated successfully. 
Jul 9 23:47:08.865709 containerd[1903]: time="2025-07-09T23:47:08.865665100Z" level=info msg="CreateContainer within sandbox \"101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4\"" Jul 9 23:47:08.867073 containerd[1903]: time="2025-07-09T23:47:08.866397754Z" level=info msg="StartContainer for \"285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4\"" Jul 9 23:47:08.867726 containerd[1903]: time="2025-07-09T23:47:08.867695673Z" level=info msg="connecting to shim 285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4" address="unix:///run/containerd/s/3ee7ec1d457081ce90732c0fd3d3ffcae0e3d4177866113cbf410bf6895a0d95" protocol=ttrpc version=3 Jul 9 23:47:08.883145 systemd[1]: Started cri-containerd-285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4.scope - libcontainer container 285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4. Jul 9 23:47:08.910445 containerd[1903]: time="2025-07-09T23:47:08.910399744Z" level=info msg="StartContainer for \"285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4\" returns successfully" Jul 9 23:47:08.919984 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 23:47:08.920345 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:47:08.920608 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 9 23:47:08.922432 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 23:47:08.924236 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 9 23:47:08.926799 systemd[1]: cri-containerd-285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4.scope: Deactivated successfully. 
Jul 9 23:47:08.929626 containerd[1903]: time="2025-07-09T23:47:08.929601574Z" level=info msg="received exit event container_id:\"285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4\" id:\"285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4\" pid:3827 exited_at:{seconds:1752104828 nanos:928150290}"
Jul 9 23:47:08.929821 containerd[1903]: time="2025-07-09T23:47:08.929789900Z" level=info msg="TaskExit event in podsandbox handler container_id:\"285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4\" id:\"285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4\" pid:3827 exited_at:{seconds:1752104828 nanos:928150290}"
Jul 9 23:47:08.937301 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 23:47:09.496055 containerd[1903]: time="2025-07-09T23:47:09.495950776Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:47:09.501607 containerd[1903]: time="2025-07-09T23:47:09.501580019Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jul 9 23:47:09.511299 containerd[1903]: time="2025-07-09T23:47:09.511256712Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:47:09.512032 containerd[1903]: time="2025-07-09T23:47:09.511950237Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.166888516s"
Jul 9 23:47:09.512032 containerd[1903]: time="2025-07-09T23:47:09.511975230Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jul 9 23:47:09.514651 containerd[1903]: time="2025-07-09T23:47:09.514626718Z" level=info msg="CreateContainer within sandbox \"a533c00451210b373f9311e3245b51aa9cca2792cce9c4b6634130e430a2004b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 9 23:47:09.540430 containerd[1903]: time="2025-07-09T23:47:09.540397940Z" level=info msg="Container c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:47:09.566324 containerd[1903]: time="2025-07-09T23:47:09.566289125Z" level=info msg="CreateContainer within sandbox \"a533c00451210b373f9311e3245b51aa9cca2792cce9c4b6634130e430a2004b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5\""
Jul 9 23:47:09.566915 containerd[1903]: time="2025-07-09T23:47:09.566665048Z" level=info msg="StartContainer for \"c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5\""
Jul 9 23:47:09.568436 containerd[1903]: time="2025-07-09T23:47:09.568363443Z" level=info msg="connecting to shim c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5" address="unix:///run/containerd/s/62878dab06a3e79e4c73223d5dd69069f7b79fe9c523b6d255afc2cd20c07efe" protocol=ttrpc version=3
Jul 9 23:47:09.585164 systemd[1]: Started cri-containerd-c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5.scope - libcontainer container c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5.
Jul 9 23:47:09.609280 containerd[1903]: time="2025-07-09T23:47:09.609250003Z" level=info msg="StartContainer for \"c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5\" returns successfully" Jul 9 23:47:09.815386 containerd[1903]: time="2025-07-09T23:47:09.815234832Z" level=info msg="CreateContainer within sandbox \"101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 9 23:47:09.829047 kubelet[3305]: I0709 23:47:09.828989 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-wrxbf" podStartSLOduration=1.5051945519999999 podStartE2EDuration="9.828872541s" podCreationTimestamp="2025-07-09 23:47:00 +0000 UTC" firstStartedPulling="2025-07-09 23:47:01.188827265 +0000 UTC m=+7.517706635" lastFinishedPulling="2025-07-09 23:47:09.512505262 +0000 UTC m=+15.841384624" observedRunningTime="2025-07-09 23:47:09.828630022 +0000 UTC m=+16.157509392" watchObservedRunningTime="2025-07-09 23:47:09.828872541 +0000 UTC m=+16.157751927" Jul 9 23:47:09.842512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4-rootfs.mount: Deactivated successfully. 
Jul 9 23:47:09.852893 containerd[1903]: time="2025-07-09T23:47:09.852854980Z" level=info msg="Container 2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:09.888057 containerd[1903]: time="2025-07-09T23:47:09.887815984Z" level=info msg="CreateContainer within sandbox \"101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d\"" Jul 9 23:47:09.888781 containerd[1903]: time="2025-07-09T23:47:09.888760645Z" level=info msg="StartContainer for \"2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d\"" Jul 9 23:47:09.889972 containerd[1903]: time="2025-07-09T23:47:09.889854542Z" level=info msg="connecting to shim 2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d" address="unix:///run/containerd/s/3ee7ec1d457081ce90732c0fd3d3ffcae0e3d4177866113cbf410bf6895a0d95" protocol=ttrpc version=3 Jul 9 23:47:09.916176 systemd[1]: Started cri-containerd-2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d.scope - libcontainer container 2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d. Jul 9 23:47:09.954072 containerd[1903]: time="2025-07-09T23:47:09.953700478Z" level=info msg="StartContainer for \"2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d\" returns successfully" Jul 9 23:47:09.968393 systemd[1]: cri-containerd-2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d.scope: Deactivated successfully. 
Jul 9 23:47:09.970205 containerd[1903]: time="2025-07-09T23:47:09.970172041Z" level=info msg="received exit event container_id:\"2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d\" id:\"2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d\" pid:3922 exited_at:{seconds:1752104829 nanos:968890474}" Jul 9 23:47:09.970471 containerd[1903]: time="2025-07-09T23:47:09.970450402Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d\" id:\"2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d\" pid:3922 exited_at:{seconds:1752104829 nanos:968890474}" Jul 9 23:47:10.818294 containerd[1903]: time="2025-07-09T23:47:10.818253009Z" level=info msg="CreateContainer within sandbox \"101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 9 23:47:10.839980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d-rootfs.mount: Deactivated successfully. 
Jul 9 23:47:10.854169 containerd[1903]: time="2025-07-09T23:47:10.854067647Z" level=info msg="Container 233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:10.874207 containerd[1903]: time="2025-07-09T23:47:10.874108454Z" level=info msg="CreateContainer within sandbox \"101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a\"" Jul 9 23:47:10.875096 containerd[1903]: time="2025-07-09T23:47:10.874885006Z" level=info msg="StartContainer for \"233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a\"" Jul 9 23:47:10.875732 containerd[1903]: time="2025-07-09T23:47:10.875712455Z" level=info msg="connecting to shim 233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a" address="unix:///run/containerd/s/3ee7ec1d457081ce90732c0fd3d3ffcae0e3d4177866113cbf410bf6895a0d95" protocol=ttrpc version=3 Jul 9 23:47:10.893151 systemd[1]: Started cri-containerd-233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a.scope - libcontainer container 233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a. Jul 9 23:47:10.911985 systemd[1]: cri-containerd-233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a.scope: Deactivated successfully. 
Jul 9 23:47:10.912470 containerd[1903]: time="2025-07-09T23:47:10.912309004Z" level=info msg="TaskExit event in podsandbox handler container_id:\"233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a\" id:\"233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a\" pid:3961 exited_at:{seconds:1752104830 nanos:912124775}" Jul 9 23:47:10.916837 containerd[1903]: time="2025-07-09T23:47:10.916748523Z" level=info msg="received exit event container_id:\"233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a\" id:\"233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a\" pid:3961 exited_at:{seconds:1752104830 nanos:912124775}" Jul 9 23:47:10.921488 containerd[1903]: time="2025-07-09T23:47:10.921463090Z" level=info msg="StartContainer for \"233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a\" returns successfully" Jul 9 23:47:10.929909 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a-rootfs.mount: Deactivated successfully. Jul 9 23:47:11.823104 containerd[1903]: time="2025-07-09T23:47:11.822976870Z" level=info msg="CreateContainer within sandbox \"101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 9 23:47:11.862445 containerd[1903]: time="2025-07-09T23:47:11.860259496Z" level=info msg="Container 7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:11.860816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount642477733.mount: Deactivated successfully. 
Jul 9 23:47:11.889660 containerd[1903]: time="2025-07-09T23:47:11.889620722Z" level=info msg="CreateContainer within sandbox \"101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef\"" Jul 9 23:47:11.890373 containerd[1903]: time="2025-07-09T23:47:11.890199668Z" level=info msg="StartContainer for \"7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef\"" Jul 9 23:47:11.891067 containerd[1903]: time="2025-07-09T23:47:11.891032637Z" level=info msg="connecting to shim 7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef" address="unix:///run/containerd/s/3ee7ec1d457081ce90732c0fd3d3ffcae0e3d4177866113cbf410bf6895a0d95" protocol=ttrpc version=3 Jul 9 23:47:11.907156 systemd[1]: Started cri-containerd-7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef.scope - libcontainer container 7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef. Jul 9 23:47:11.951486 containerd[1903]: time="2025-07-09T23:47:11.951445964Z" level=info msg="StartContainer for \"7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef\" returns successfully" Jul 9 23:47:11.994195 containerd[1903]: time="2025-07-09T23:47:11.994165124Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef\" id:\"57e92498c146fcd12cce8290c8df604dc636f67e09a36e1aa1460dc4745586e4\" pid:4031 exited_at:{seconds:1752104831 nanos:993923788}" Jul 9 23:47:12.032229 kubelet[3305]: I0709 23:47:12.032199 3305 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 9 23:47:12.070638 systemd[1]: Created slice kubepods-burstable-pod0e9999c5_fc34_424b_bb32_9b46ce4c04b8.slice - libcontainer container kubepods-burstable-pod0e9999c5_fc34_424b_bb32_9b46ce4c04b8.slice. 
Jul 9 23:47:12.077919 systemd[1]: Created slice kubepods-burstable-podeec62852_9ca4_4b0d_9662_beff02aa03c1.slice - libcontainer container kubepods-burstable-podeec62852_9ca4_4b0d_9662_beff02aa03c1.slice.
Jul 9 23:47:12.138050 kubelet[3305]: I0709 23:47:12.137998 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e9999c5-fc34-424b-bb32-9b46ce4c04b8-config-volume\") pod \"coredns-668d6bf9bc-lllgj\" (UID: \"0e9999c5-fc34-424b-bb32-9b46ce4c04b8\") " pod="kube-system/coredns-668d6bf9bc-lllgj"
Jul 9 23:47:12.138307 kubelet[3305]: I0709 23:47:12.138288 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt5pk\" (UniqueName: \"kubernetes.io/projected/0e9999c5-fc34-424b-bb32-9b46ce4c04b8-kube-api-access-xt5pk\") pod \"coredns-668d6bf9bc-lllgj\" (UID: \"0e9999c5-fc34-424b-bb32-9b46ce4c04b8\") " pod="kube-system/coredns-668d6bf9bc-lllgj"
Jul 9 23:47:12.138403 kubelet[3305]: I0709 23:47:12.138310 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9mvc\" (UniqueName: \"kubernetes.io/projected/eec62852-9ca4-4b0d-9662-beff02aa03c1-kube-api-access-f9mvc\") pod \"coredns-668d6bf9bc-44ssb\" (UID: \"eec62852-9ca4-4b0d-9662-beff02aa03c1\") " pod="kube-system/coredns-668d6bf9bc-44ssb"
Jul 9 23:47:12.138403 kubelet[3305]: I0709 23:47:12.138325 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eec62852-9ca4-4b0d-9662-beff02aa03c1-config-volume\") pod \"coredns-668d6bf9bc-44ssb\" (UID: \"eec62852-9ca4-4b0d-9662-beff02aa03c1\") " pod="kube-system/coredns-668d6bf9bc-44ssb"
Jul 9 23:47:12.375328 containerd[1903]: time="2025-07-09T23:47:12.375223884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lllgj,Uid:0e9999c5-fc34-424b-bb32-9b46ce4c04b8,Namespace:kube-system,Attempt:0,}"
Jul 9 23:47:12.381134 containerd[1903]: time="2025-07-09T23:47:12.381020012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-44ssb,Uid:eec62852-9ca4-4b0d-9662-beff02aa03c1,Namespace:kube-system,Attempt:0,}"
Jul 9 23:47:13.885390 systemd-networkd[1627]: cilium_host: Link UP
Jul 9 23:47:13.885481 systemd-networkd[1627]: cilium_net: Link UP
Jul 9 23:47:13.885564 systemd-networkd[1627]: cilium_host: Gained carrier
Jul 9 23:47:13.886329 systemd-networkd[1627]: cilium_net: Gained carrier
Jul 9 23:47:14.057147 systemd-networkd[1627]: cilium_vxlan: Link UP
Jul 9 23:47:14.057153 systemd-networkd[1627]: cilium_vxlan: Gained carrier
Jul 9 23:47:14.255079 kernel: NET: Registered PF_ALG protocol family
Jul 9 23:47:14.543289 systemd-networkd[1627]: cilium_net: Gained IPv6LL
Jul 9 23:47:14.671254 systemd-networkd[1627]: cilium_host: Gained IPv6LL
Jul 9 23:47:14.716455 systemd-networkd[1627]: lxc_health: Link UP
Jul 9 23:47:14.716613 systemd-networkd[1627]: lxc_health: Gained carrier
Jul 9 23:47:14.824678 kubelet[3305]: I0709 23:47:14.823750 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-46hpf" podStartSLOduration=9.484970428 podStartE2EDuration="14.823737581s" podCreationTimestamp="2025-07-09 23:47:00 +0000 UTC" firstStartedPulling="2025-07-09 23:47:01.004678936 +0000 UTC m=+7.333558306" lastFinishedPulling="2025-07-09 23:47:06.343446097 +0000 UTC m=+12.672325459" observedRunningTime="2025-07-09 23:47:12.844696805 +0000 UTC m=+19.173576175" watchObservedRunningTime="2025-07-09 23:47:14.823737581 +0000 UTC m=+21.152616943"
Jul 9 23:47:14.907557 systemd-networkd[1627]: lxceb86329ef643: Link UP
Jul 9 23:47:14.912079 kernel: eth0: renamed from tmp5b332
Jul 9 23:47:14.913142 systemd-networkd[1627]: lxceb86329ef643: Gained carrier
Jul 9 23:47:14.923754 systemd-networkd[1627]: lxcb3fdadd8bea8: Link UP
Jul 9 23:47:14.937063 kernel: eth0: renamed from tmp4d829
Jul 9 23:47:14.939834 systemd-networkd[1627]: lxcb3fdadd8bea8: Gained carrier
Jul 9 23:47:15.631267 systemd-networkd[1627]: cilium_vxlan: Gained IPv6LL
Jul 9 23:47:15.921459 kubelet[3305]: I0709 23:47:15.921226 3305 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 9 23:47:16.592286 systemd-networkd[1627]: lxc_health: Gained IPv6LL
Jul 9 23:47:16.592888 systemd-networkd[1627]: lxceb86329ef643: Gained IPv6LL
Jul 9 23:47:16.655206 systemd-networkd[1627]: lxcb3fdadd8bea8: Gained IPv6LL
Jul 9 23:47:17.470971 containerd[1903]: time="2025-07-09T23:47:17.470927107Z" level=info msg="connecting to shim 5b332eac8d9ba5f14f9ee9bc21baa83dc7cf42be8e1896e7a6e798e53ae0fd77" address="unix:///run/containerd/s/4f7a625e937e535a018a537dabcf3f9c7a1530379a0e69b5fc40cbb8ec744923" namespace=k8s.io protocol=ttrpc version=3
Jul 9 23:47:17.478859 containerd[1903]: time="2025-07-09T23:47:17.478811927Z" level=info msg="connecting to shim 4d829bb9e12353007635c0e9603c09856952a7f586cb2c0b5b05e19f6a5fd9c0" address="unix:///run/containerd/s/b7d9352e268dd7d9c02ca6a9df91dd72efbfa5b759374f92cf983a9fde3d5510" namespace=k8s.io protocol=ttrpc version=3
Jul 9 23:47:17.488164 systemd[1]: Started cri-containerd-5b332eac8d9ba5f14f9ee9bc21baa83dc7cf42be8e1896e7a6e798e53ae0fd77.scope - libcontainer container 5b332eac8d9ba5f14f9ee9bc21baa83dc7cf42be8e1896e7a6e798e53ae0fd77.
Jul 9 23:47:17.501152 systemd[1]: Started cri-containerd-4d829bb9e12353007635c0e9603c09856952a7f586cb2c0b5b05e19f6a5fd9c0.scope - libcontainer container 4d829bb9e12353007635c0e9603c09856952a7f586cb2c0b5b05e19f6a5fd9c0.
Jul 9 23:47:17.528355 containerd[1903]: time="2025-07-09T23:47:17.528289416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lllgj,Uid:0e9999c5-fc34-424b-bb32-9b46ce4c04b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b332eac8d9ba5f14f9ee9bc21baa83dc7cf42be8e1896e7a6e798e53ae0fd77\""
Jul 9 23:47:17.531672 containerd[1903]: time="2025-07-09T23:47:17.531624443Z" level=info msg="CreateContainer within sandbox \"5b332eac8d9ba5f14f9ee9bc21baa83dc7cf42be8e1896e7a6e798e53ae0fd77\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 9 23:47:17.543706 containerd[1903]: time="2025-07-09T23:47:17.543675651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-44ssb,Uid:eec62852-9ca4-4b0d-9662-beff02aa03c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d829bb9e12353007635c0e9603c09856952a7f586cb2c0b5b05e19f6a5fd9c0\""
Jul 9 23:47:17.547057 containerd[1903]: time="2025-07-09T23:47:17.546510749Z" level=info msg="CreateContainer within sandbox \"4d829bb9e12353007635c0e9603c09856952a7f586cb2c0b5b05e19f6a5fd9c0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 9 23:47:17.572462 containerd[1903]: time="2025-07-09T23:47:17.572429152Z" level=info msg="Container 8197d24385e8867d772bcda7d08f6114bae1b70fae1a01a32dbaec2890d8a3aa: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:47:17.579193 containerd[1903]: time="2025-07-09T23:47:17.579167982Z" level=info msg="Container c3422002db54ebf3be8e75a03b99a560ee04fb763bc29def1b48e18528810a57: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:47:17.597863 containerd[1903]: time="2025-07-09T23:47:17.597836209Z" level=info msg="CreateContainer within sandbox \"5b332eac8d9ba5f14f9ee9bc21baa83dc7cf42be8e1896e7a6e798e53ae0fd77\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8197d24385e8867d772bcda7d08f6114bae1b70fae1a01a32dbaec2890d8a3aa\""
Jul 9 23:47:17.598842 containerd[1903]: time="2025-07-09T23:47:17.598268767Z" level=info msg="StartContainer for \"8197d24385e8867d772bcda7d08f6114bae1b70fae1a01a32dbaec2890d8a3aa\""
Jul 9 23:47:17.598842 containerd[1903]: time="2025-07-09T23:47:17.598813705Z" level=info msg="connecting to shim 8197d24385e8867d772bcda7d08f6114bae1b70fae1a01a32dbaec2890d8a3aa" address="unix:///run/containerd/s/4f7a625e937e535a018a537dabcf3f9c7a1530379a0e69b5fc40cbb8ec744923" protocol=ttrpc version=3
Jul 9 23:47:17.614288 systemd[1]: Started cri-containerd-8197d24385e8867d772bcda7d08f6114bae1b70fae1a01a32dbaec2890d8a3aa.scope - libcontainer container 8197d24385e8867d772bcda7d08f6114bae1b70fae1a01a32dbaec2890d8a3aa.
Jul 9 23:47:17.615334 containerd[1903]: time="2025-07-09T23:47:17.615107992Z" level=info msg="CreateContainer within sandbox \"4d829bb9e12353007635c0e9603c09856952a7f586cb2c0b5b05e19f6a5fd9c0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c3422002db54ebf3be8e75a03b99a560ee04fb763bc29def1b48e18528810a57\""
Jul 9 23:47:17.615827 containerd[1903]: time="2025-07-09T23:47:17.615803142Z" level=info msg="StartContainer for \"c3422002db54ebf3be8e75a03b99a560ee04fb763bc29def1b48e18528810a57\""
Jul 9 23:47:17.616381 containerd[1903]: time="2025-07-09T23:47:17.616356752Z" level=info msg="connecting to shim c3422002db54ebf3be8e75a03b99a560ee04fb763bc29def1b48e18528810a57" address="unix:///run/containerd/s/b7d9352e268dd7d9c02ca6a9df91dd72efbfa5b759374f92cf983a9fde3d5510" protocol=ttrpc version=3
Jul 9 23:47:17.638340 systemd[1]: Started cri-containerd-c3422002db54ebf3be8e75a03b99a560ee04fb763bc29def1b48e18528810a57.scope - libcontainer container c3422002db54ebf3be8e75a03b99a560ee04fb763bc29def1b48e18528810a57.
Jul 9 23:47:17.647793 containerd[1903]: time="2025-07-09T23:47:17.647314547Z" level=info msg="StartContainer for \"8197d24385e8867d772bcda7d08f6114bae1b70fae1a01a32dbaec2890d8a3aa\" returns successfully" Jul 9 23:47:17.672193 containerd[1903]: time="2025-07-09T23:47:17.672096777Z" level=info msg="StartContainer for \"c3422002db54ebf3be8e75a03b99a560ee04fb763bc29def1b48e18528810a57\" returns successfully" Jul 9 23:47:17.869364 kubelet[3305]: I0709 23:47:17.869207 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lllgj" podStartSLOduration=17.869191949 podStartE2EDuration="17.869191949s" podCreationTimestamp="2025-07-09 23:47:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:47:17.867795592 +0000 UTC m=+24.196675002" watchObservedRunningTime="2025-07-09 23:47:17.869191949 +0000 UTC m=+24.198071311" Jul 9 23:47:17.870253 kubelet[3305]: I0709 23:47:17.869281 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-44ssb" podStartSLOduration=17.869276936 podStartE2EDuration="17.869276936s" podCreationTimestamp="2025-07-09 23:47:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:47:17.85335546 +0000 UTC m=+24.182234822" watchObservedRunningTime="2025-07-09 23:47:17.869276936 +0000 UTC m=+24.198156298" Jul 9 23:48:29.081116 systemd[1]: Started sshd@7-10.200.20.10:22-10.200.16.10:53800.service - OpenSSH per-connection server daemon (10.200.16.10:53800). 
Jul 9 23:48:29.559545 sshd[4685]: Accepted publickey for core from 10.200.16.10 port 53800 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:29.560636 sshd-session[4685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:29.564452 systemd-logind[1884]: New session 10 of user core. Jul 9 23:48:29.575172 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 9 23:48:29.953924 sshd[4687]: Connection closed by 10.200.16.10 port 53800 Jul 9 23:48:29.954551 sshd-session[4685]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:29.957629 systemd[1]: sshd@7-10.200.20.10:22-10.200.16.10:53800.service: Deactivated successfully. Jul 9 23:48:29.959354 systemd[1]: session-10.scope: Deactivated successfully. Jul 9 23:48:29.961001 systemd-logind[1884]: Session 10 logged out. Waiting for processes to exit. Jul 9 23:48:29.962394 systemd-logind[1884]: Removed session 10. Jul 9 23:48:35.048303 systemd[1]: Started sshd@8-10.200.20.10:22-10.200.16.10:39380.service - OpenSSH per-connection server daemon (10.200.16.10:39380). Jul 9 23:48:35.540361 sshd[4704]: Accepted publickey for core from 10.200.16.10 port 39380 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:35.541431 sshd-session[4704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:35.545120 systemd-logind[1884]: New session 11 of user core. Jul 9 23:48:35.552178 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 9 23:48:35.942064 sshd[4706]: Connection closed by 10.200.16.10 port 39380 Jul 9 23:48:35.942630 sshd-session[4704]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:35.945571 systemd[1]: sshd@8-10.200.20.10:22-10.200.16.10:39380.service: Deactivated successfully. Jul 9 23:48:35.947003 systemd[1]: session-11.scope: Deactivated successfully. Jul 9 23:48:35.947651 systemd-logind[1884]: Session 11 logged out. Waiting for processes to exit. 
Jul 9 23:48:35.948705 systemd-logind[1884]: Removed session 11.
Jul 9 23:48:41.031507 systemd[1]: Started sshd@9-10.200.20.10:22-10.200.16.10:55294.service - OpenSSH per-connection server daemon (10.200.16.10:55294).
Jul 9 23:48:41.523672 sshd[4718]: Accepted publickey for core from 10.200.16.10 port 55294 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:48:41.524772 sshd-session[4718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:41.529090 systemd-logind[1884]: New session 12 of user core.
Jul 9 23:48:41.535147 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 9 23:48:41.919521 sshd[4720]: Connection closed by 10.200.16.10 port 55294
Jul 9 23:48:41.920079 sshd-session[4718]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:41.923144 systemd[1]: sshd@9-10.200.20.10:22-10.200.16.10:55294.service: Deactivated successfully.
Jul 9 23:48:41.924582 systemd[1]: session-12.scope: Deactivated successfully.
Jul 9 23:48:41.925506 systemd-logind[1884]: Session 12 logged out. Waiting for processes to exit.
Jul 9 23:48:41.926798 systemd-logind[1884]: Removed session 12.
Jul 9 23:48:47.014390 systemd[1]: Started sshd@10-10.200.20.10:22-10.200.16.10:55300.service - OpenSSH per-connection server daemon (10.200.16.10:55300).
Jul 9 23:48:47.507084 sshd[4732]: Accepted publickey for core from 10.200.16.10 port 55300 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:48:47.508171 sshd-session[4732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:47.511820 systemd-logind[1884]: New session 13 of user core.
Jul 9 23:48:47.519148 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 9 23:48:47.897176 sshd[4734]: Connection closed by 10.200.16.10 port 55300
Jul 9 23:48:47.897692 sshd-session[4732]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:47.900350 systemd-logind[1884]: Session 13 logged out. Waiting for processes to exit.
Jul 9 23:48:47.900458 systemd[1]: sshd@10-10.200.20.10:22-10.200.16.10:55300.service: Deactivated successfully.
Jul 9 23:48:47.902499 systemd[1]: session-13.scope: Deactivated successfully.
Jul 9 23:48:47.904352 systemd-logind[1884]: Removed session 13.
Jul 9 23:48:47.984053 systemd[1]: Started sshd@11-10.200.20.10:22-10.200.16.10:55302.service - OpenSSH per-connection server daemon (10.200.16.10:55302).
Jul 9 23:48:48.461356 sshd[4747]: Accepted publickey for core from 10.200.16.10 port 55302 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:48:48.462756 sshd-session[4747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:48.466289 systemd-logind[1884]: New session 14 of user core.
Jul 9 23:48:48.473170 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 9 23:48:48.867126 sshd[4749]: Connection closed by 10.200.16.10 port 55302
Jul 9 23:48:48.866937 sshd-session[4747]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:48.870193 systemd[1]: sshd@11-10.200.20.10:22-10.200.16.10:55302.service: Deactivated successfully.
Jul 9 23:48:48.871740 systemd[1]: session-14.scope: Deactivated successfully.
Jul 9 23:48:48.872440 systemd-logind[1884]: Session 14 logged out. Waiting for processes to exit.
Jul 9 23:48:48.873611 systemd-logind[1884]: Removed session 14.
Jul 9 23:48:48.960435 systemd[1]: Started sshd@12-10.200.20.10:22-10.200.16.10:55312.service - OpenSSH per-connection server daemon (10.200.16.10:55312).
Jul 9 23:48:49.454933 sshd[4758]: Accepted publickey for core from 10.200.16.10 port 55312 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:48:49.457318 sshd-session[4758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:49.461394 systemd-logind[1884]: New session 15 of user core.
Jul 9 23:48:49.465148 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 9 23:48:49.847203 sshd[4760]: Connection closed by 10.200.16.10 port 55312 Jul 9 23:48:49.847676 sshd-session[4758]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:49.850899 systemd[1]: sshd@12-10.200.20.10:22-10.200.16.10:55312.service: Deactivated successfully. Jul 9 23:48:49.852707 systemd[1]: session-15.scope: Deactivated successfully. Jul 9 23:48:49.854246 systemd-logind[1884]: Session 15 logged out. Waiting for processes to exit. Jul 9 23:48:49.855670 systemd-logind[1884]: Removed session 15. Jul 9 23:48:54.937167 systemd[1]: Started sshd@13-10.200.20.10:22-10.200.16.10:35208.service - OpenSSH per-connection server daemon (10.200.16.10:35208). Jul 9 23:48:55.416299 sshd[4774]: Accepted publickey for core from 10.200.16.10 port 35208 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:55.417442 sshd-session[4774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:55.420885 systemd-logind[1884]: New session 16 of user core. Jul 9 23:48:55.430141 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 9 23:48:55.795160 sshd[4776]: Connection closed by 10.200.16.10 port 35208 Jul 9 23:48:55.795789 sshd-session[4774]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:55.798821 systemd[1]: sshd@13-10.200.20.10:22-10.200.16.10:35208.service: Deactivated successfully. Jul 9 23:48:55.800254 systemd[1]: session-16.scope: Deactivated successfully. Jul 9 23:48:55.801334 systemd-logind[1884]: Session 16 logged out. Waiting for processes to exit. Jul 9 23:48:55.802607 systemd-logind[1884]: Removed session 16. Jul 9 23:48:55.876506 systemd[1]: Started sshd@14-10.200.20.10:22-10.200.16.10:35224.service - OpenSSH per-connection server daemon (10.200.16.10:35224). 
Jul 9 23:48:56.335651 sshd[4787]: Accepted publickey for core from 10.200.16.10 port 35224 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:56.336677 sshd-session[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:56.341182 systemd-logind[1884]: New session 17 of user core. Jul 9 23:48:56.345148 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 9 23:48:56.744883 sshd[4789]: Connection closed by 10.200.16.10 port 35224 Jul 9 23:48:56.745450 sshd-session[4787]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:56.748543 systemd[1]: sshd@14-10.200.20.10:22-10.200.16.10:35224.service: Deactivated successfully. Jul 9 23:48:56.750536 systemd[1]: session-17.scope: Deactivated successfully. Jul 9 23:48:56.752170 systemd-logind[1884]: Session 17 logged out. Waiting for processes to exit. Jul 9 23:48:56.753353 systemd-logind[1884]: Removed session 17. Jul 9 23:48:56.824528 systemd[1]: Started sshd@15-10.200.20.10:22-10.200.16.10:35226.service - OpenSSH per-connection server daemon (10.200.16.10:35226). Jul 9 23:48:57.282759 sshd[4799]: Accepted publickey for core from 10.200.16.10 port 35226 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:57.283830 sshd-session[4799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:57.287861 systemd-logind[1884]: New session 18 of user core. Jul 9 23:48:57.294166 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 9 23:48:58.158136 sshd[4801]: Connection closed by 10.200.16.10 port 35226 Jul 9 23:48:58.158509 sshd-session[4799]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:58.161440 systemd-logind[1884]: Session 18 logged out. Waiting for processes to exit. Jul 9 23:48:58.162356 systemd[1]: sshd@15-10.200.20.10:22-10.200.16.10:35226.service: Deactivated successfully. 
Jul 9 23:48:58.164260 systemd[1]: session-18.scope: Deactivated successfully. Jul 9 23:48:58.165775 systemd-logind[1884]: Removed session 18. Jul 9 23:48:58.253756 systemd[1]: Started sshd@16-10.200.20.10:22-10.200.16.10:35228.service - OpenSSH per-connection server daemon (10.200.16.10:35228). Jul 9 23:48:58.746633 sshd[4818]: Accepted publickey for core from 10.200.16.10 port 35228 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:58.747796 sshd-session[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:58.751431 systemd-logind[1884]: New session 19 of user core. Jul 9 23:48:58.762162 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 9 23:48:59.227207 sshd[4820]: Connection closed by 10.200.16.10 port 35228 Jul 9 23:48:59.227580 sshd-session[4818]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:59.230998 systemd[1]: sshd@16-10.200.20.10:22-10.200.16.10:35228.service: Deactivated successfully. Jul 9 23:48:59.232428 systemd[1]: session-19.scope: Deactivated successfully. Jul 9 23:48:59.233794 systemd-logind[1884]: Session 19 logged out. Waiting for processes to exit. Jul 9 23:48:59.235254 systemd-logind[1884]: Removed session 19. Jul 9 23:48:59.313151 systemd[1]: Started sshd@17-10.200.20.10:22-10.200.16.10:35242.service - OpenSSH per-connection server daemon (10.200.16.10:35242). Jul 9 23:48:59.792884 sshd[4830]: Accepted publickey for core from 10.200.16.10 port 35242 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:59.794146 sshd-session[4830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:59.797951 systemd-logind[1884]: New session 20 of user core. Jul 9 23:48:59.806160 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 9 23:49:00.173331 sshd[4832]: Connection closed by 10.200.16.10 port 35242 Jul 9 23:49:00.172835 sshd-session[4830]: pam_unix(sshd:session): session closed for user core Jul 9 23:49:00.175630 systemd[1]: sshd@17-10.200.20.10:22-10.200.16.10:35242.service: Deactivated successfully. Jul 9 23:49:00.177596 systemd[1]: session-20.scope: Deactivated successfully. Jul 9 23:49:00.178940 systemd-logind[1884]: Session 20 logged out. Waiting for processes to exit. Jul 9 23:49:00.180424 systemd-logind[1884]: Removed session 20. Jul 9 23:49:05.262651 systemd[1]: Started sshd@18-10.200.20.10:22-10.200.16.10:59080.service - OpenSSH per-connection server daemon (10.200.16.10:59080). Jul 9 23:49:05.757824 sshd[4848]: Accepted publickey for core from 10.200.16.10 port 59080 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:49:05.758865 sshd-session[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:49:05.762808 systemd-logind[1884]: New session 21 of user core. Jul 9 23:49:05.770147 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 9 23:49:06.155949 sshd[4850]: Connection closed by 10.200.16.10 port 59080 Jul 9 23:49:06.157266 sshd-session[4848]: pam_unix(sshd:session): session closed for user core Jul 9 23:49:06.160453 systemd[1]: sshd@18-10.200.20.10:22-10.200.16.10:59080.service: Deactivated successfully. Jul 9 23:49:06.162348 systemd[1]: session-21.scope: Deactivated successfully. Jul 9 23:49:06.163295 systemd-logind[1884]: Session 21 logged out. Waiting for processes to exit. Jul 9 23:49:06.164417 systemd-logind[1884]: Removed session 21. Jul 9 23:49:11.245607 systemd[1]: Started sshd@19-10.200.20.10:22-10.200.16.10:43650.service - OpenSSH per-connection server daemon (10.200.16.10:43650). 
Jul 9 23:49:11.736825 sshd[4862]: Accepted publickey for core from 10.200.16.10 port 43650 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:49:11.737896 sshd-session[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:49:11.741768 systemd-logind[1884]: New session 22 of user core. Jul 9 23:49:11.745210 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 9 23:49:12.134916 sshd[4864]: Connection closed by 10.200.16.10 port 43650 Jul 9 23:49:12.135436 sshd-session[4862]: pam_unix(sshd:session): session closed for user core Jul 9 23:49:12.138569 systemd[1]: sshd@19-10.200.20.10:22-10.200.16.10:43650.service: Deactivated successfully. Jul 9 23:49:12.140886 systemd[1]: session-22.scope: Deactivated successfully. Jul 9 23:49:12.141725 systemd-logind[1884]: Session 22 logged out. Waiting for processes to exit. Jul 9 23:49:12.143053 systemd-logind[1884]: Removed session 22. Jul 9 23:49:17.222628 systemd[1]: Started sshd@20-10.200.20.10:22-10.200.16.10:43664.service - OpenSSH per-connection server daemon (10.200.16.10:43664). Jul 9 23:49:17.702226 sshd[4875]: Accepted publickey for core from 10.200.16.10 port 43664 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:49:17.703318 sshd-session[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:49:17.707354 systemd-logind[1884]: New session 23 of user core. Jul 9 23:49:17.716175 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 9 23:49:18.084377 sshd[4877]: Connection closed by 10.200.16.10 port 43664 Jul 9 23:49:18.084927 sshd-session[4875]: pam_unix(sshd:session): session closed for user core Jul 9 23:49:18.088455 systemd-logind[1884]: Session 23 logged out. Waiting for processes to exit. Jul 9 23:49:18.088570 systemd[1]: sshd@20-10.200.20.10:22-10.200.16.10:43664.service: Deactivated successfully. 
Jul 9 23:49:18.091032 systemd[1]: session-23.scope: Deactivated successfully. Jul 9 23:49:18.093570 systemd-logind[1884]: Removed session 23. Jul 9 23:49:18.167057 systemd[1]: Started sshd@21-10.200.20.10:22-10.200.16.10:43666.service - OpenSSH per-connection server daemon (10.200.16.10:43666). Jul 9 23:49:18.620875 sshd[4888]: Accepted publickey for core from 10.200.16.10 port 43666 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:49:18.622000 sshd-session[4888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:49:18.625588 systemd-logind[1884]: New session 24 of user core. Jul 9 23:49:18.633147 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 9 23:49:20.167768 containerd[1903]: time="2025-07-09T23:49:20.167692373Z" level=info msg="StopContainer for \"c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5\" with timeout 30 (s)" Jul 9 23:49:20.169213 containerd[1903]: time="2025-07-09T23:49:20.169185535Z" level=info msg="Stop container \"c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5\" with signal terminated" Jul 9 23:49:20.174511 containerd[1903]: time="2025-07-09T23:49:20.174484859Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 23:49:20.178781 systemd[1]: cri-containerd-c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5.scope: Deactivated successfully. 
Jul 9 23:49:20.180568 containerd[1903]: time="2025-07-09T23:49:20.180323306Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef\" id:\"cdd2b78a0a5f25161b4a2d6dec6472996121d60bfb0db99af24dfb392dc2cbf6\" pid:4909 exited_at:{seconds:1752104960 nanos:179976102}" Jul 9 23:49:20.180992 containerd[1903]: time="2025-07-09T23:49:20.180971840Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5\" id:\"c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5\" pid:3888 exited_at:{seconds:1752104960 nanos:179328760}" Jul 9 23:49:20.181213 containerd[1903]: time="2025-07-09T23:49:20.181195239Z" level=info msg="received exit event container_id:\"c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5\" id:\"c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5\" pid:3888 exited_at:{seconds:1752104960 nanos:179328760}" Jul 9 23:49:20.183438 containerd[1903]: time="2025-07-09T23:49:20.183375105Z" level=info msg="StopContainer for \"7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef\" with timeout 2 (s)" Jul 9 23:49:20.183977 containerd[1903]: time="2025-07-09T23:49:20.183934316Z" level=info msg="Stop container \"7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef\" with signal terminated" Jul 9 23:49:20.190243 systemd-networkd[1627]: lxc_health: Link DOWN Jul 9 23:49:20.190247 systemd-networkd[1627]: lxc_health: Lost carrier Jul 9 23:49:20.204370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5-rootfs.mount: Deactivated successfully. Jul 9 23:49:20.206688 systemd[1]: cri-containerd-7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef.scope: Deactivated successfully. 
Jul 9 23:49:20.207360 systemd[1]: cri-containerd-7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef.scope: Consumed 4.342s CPU time, 124.3M memory peak, 128K read from disk, 12.9M written to disk. Jul 9 23:49:20.208315 containerd[1903]: time="2025-07-09T23:49:20.208194196Z" level=info msg="received exit event container_id:\"7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef\" id:\"7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef\" pid:4000 exited_at:{seconds:1752104960 nanos:207982805}" Jul 9 23:49:20.208513 containerd[1903]: time="2025-07-09T23:49:20.208487726Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef\" id:\"7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef\" pid:4000 exited_at:{seconds:1752104960 nanos:207982805}" Jul 9 23:49:20.221750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef-rootfs.mount: Deactivated successfully. Jul 9 23:49:20.297636 containerd[1903]: time="2025-07-09T23:49:20.297603937Z" level=info msg="StopContainer for \"c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5\" returns successfully" Jul 9 23:49:20.298959 containerd[1903]: time="2025-07-09T23:49:20.298906893Z" level=info msg="StopPodSandbox for \"a533c00451210b373f9311e3245b51aa9cca2792cce9c4b6634130e430a2004b\"" Jul 9 23:49:20.299138 containerd[1903]: time="2025-07-09T23:49:20.299074995Z" level=info msg="Container to stop \"c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:49:20.305727 systemd[1]: cri-containerd-a533c00451210b373f9311e3245b51aa9cca2792cce9c4b6634130e430a2004b.scope: Deactivated successfully. 
Jul 9 23:49:20.308552 containerd[1903]: time="2025-07-09T23:49:20.308514835Z" level=info msg="StopContainer for \"7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef\" returns successfully" Jul 9 23:49:20.309103 containerd[1903]: time="2025-07-09T23:49:20.309082495Z" level=info msg="StopPodSandbox for \"101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43\"" Jul 9 23:49:20.309164 containerd[1903]: time="2025-07-09T23:49:20.309141185Z" level=info msg="Container to stop \"760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:49:20.309164 containerd[1903]: time="2025-07-09T23:49:20.309149697Z" level=info msg="Container to stop \"2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:49:20.309164 containerd[1903]: time="2025-07-09T23:49:20.309155217Z" level=info msg="Container to stop \"7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:49:20.309164 containerd[1903]: time="2025-07-09T23:49:20.309160777Z" level=info msg="Container to stop \"285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:49:20.309235 containerd[1903]: time="2025-07-09T23:49:20.309167609Z" level=info msg="Container to stop \"233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:49:20.313315 systemd[1]: cri-containerd-101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43.scope: Deactivated successfully. 
Jul 9 23:49:20.314233 containerd[1903]: time="2025-07-09T23:49:20.313530406Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a533c00451210b373f9311e3245b51aa9cca2792cce9c4b6634130e430a2004b\" id:\"a533c00451210b373f9311e3245b51aa9cca2792cce9c4b6634130e430a2004b\" pid:3601 exit_status:137 exited_at:{seconds:1752104960 nanos:313249396}" Jul 9 23:49:20.333131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43-rootfs.mount: Deactivated successfully. Jul 9 23:49:20.337264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a533c00451210b373f9311e3245b51aa9cca2792cce9c4b6634130e430a2004b-rootfs.mount: Deactivated successfully. Jul 9 23:49:20.371503 containerd[1903]: time="2025-07-09T23:49:20.371462037Z" level=info msg="shim disconnected" id=a533c00451210b373f9311e3245b51aa9cca2792cce9c4b6634130e430a2004b namespace=k8s.io Jul 9 23:49:20.371503 containerd[1903]: time="2025-07-09T23:49:20.371495614Z" level=warning msg="cleaning up after shim disconnected" id=a533c00451210b373f9311e3245b51aa9cca2792cce9c4b6634130e430a2004b namespace=k8s.io Jul 9 23:49:20.371701 containerd[1903]: time="2025-07-09T23:49:20.371520591Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 23:49:20.371960 containerd[1903]: time="2025-07-09T23:49:20.371819865Z" level=info msg="shim disconnected" id=101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43 namespace=k8s.io Jul 9 23:49:20.371960 containerd[1903]: time="2025-07-09T23:49:20.371840602Z" level=warning msg="cleaning up after shim disconnected" id=101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43 namespace=k8s.io Jul 9 23:49:20.371960 containerd[1903]: time="2025-07-09T23:49:20.371856322Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 23:49:20.382062 containerd[1903]: time="2025-07-09T23:49:20.382013091Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43\" id:\"101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43\" pid:3523 exit_status:137 exited_at:{seconds:1752104960 nanos:318245846}" Jul 9 23:49:20.382244 containerd[1903]: time="2025-07-09T23:49:20.382149704Z" level=info msg="received exit event sandbox_id:\"101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43\" exit_status:137 exited_at:{seconds:1752104960 nanos:318245846}" Jul 9 23:49:20.383522 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a533c00451210b373f9311e3245b51aa9cca2792cce9c4b6634130e430a2004b-shm.mount: Deactivated successfully. Jul 9 23:49:20.383762 containerd[1903]: time="2025-07-09T23:49:20.383740054Z" level=info msg="TearDown network for sandbox \"a533c00451210b373f9311e3245b51aa9cca2792cce9c4b6634130e430a2004b\" successfully" Jul 9 23:49:20.383839 containerd[1903]: time="2025-07-09T23:49:20.383823849Z" level=info msg="StopPodSandbox for \"a533c00451210b373f9311e3245b51aa9cca2792cce9c4b6634130e430a2004b\" returns successfully" Jul 9 23:49:20.387227 containerd[1903]: time="2025-07-09T23:49:20.386665793Z" level=info msg="received exit event sandbox_id:\"a533c00451210b373f9311e3245b51aa9cca2792cce9c4b6634130e430a2004b\" exit_status:137 exited_at:{seconds:1752104960 nanos:313249396}" Jul 9 23:49:20.388021 containerd[1903]: time="2025-07-09T23:49:20.387995038Z" level=info msg="TearDown network for sandbox \"101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43\" successfully" Jul 9 23:49:20.388021 containerd[1903]: time="2025-07-09T23:49:20.388017295Z" level=info msg="StopPodSandbox for \"101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43\" returns successfully" Jul 9 23:49:20.456912 kubelet[3305]: I0709 23:49:20.456789 3305 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-host-proc-sys-net\") pod \"694a4f10-519d-487c-bd5f-b28b50e5ae88\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " Jul 9 23:49:20.457681 kubelet[3305]: I0709 23:49:20.457325 3305 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-cilium-run\") pod \"694a4f10-519d-487c-bd5f-b28b50e5ae88\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " Jul 9 23:49:20.457681 kubelet[3305]: I0709 23:49:20.457351 3305 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-cni-path\") pod \"694a4f10-519d-487c-bd5f-b28b50e5ae88\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " Jul 9 23:49:20.457681 kubelet[3305]: I0709 23:49:20.456910 3305 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "694a4f10-519d-487c-bd5f-b28b50e5ae88" (UID: "694a4f10-519d-487c-bd5f-b28b50e5ae88"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:49:20.457681 kubelet[3305]: I0709 23:49:20.457411 3305 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-cni-path" (OuterVolumeSpecName: "cni-path") pod "694a4f10-519d-487c-bd5f-b28b50e5ae88" (UID: "694a4f10-519d-487c-bd5f-b28b50e5ae88"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:49:20.457681 kubelet[3305]: I0709 23:49:20.457422 3305 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxw6g\" (UniqueName: \"kubernetes.io/projected/694a4f10-519d-487c-bd5f-b28b50e5ae88-kube-api-access-dxw6g\") pod \"694a4f10-519d-487c-bd5f-b28b50e5ae88\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " Jul 9 23:49:20.457931 kubelet[3305]: I0709 23:49:20.457433 3305 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "694a4f10-519d-487c-bd5f-b28b50e5ae88" (UID: "694a4f10-519d-487c-bd5f-b28b50e5ae88"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:49:20.457931 kubelet[3305]: I0709 23:49:20.457446 3305 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/694a4f10-519d-487c-bd5f-b28b50e5ae88-clustermesh-secrets\") pod \"694a4f10-519d-487c-bd5f-b28b50e5ae88\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " Jul 9 23:49:20.457931 kubelet[3305]: I0709 23:49:20.457461 3305 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/694a4f10-519d-487c-bd5f-b28b50e5ae88-cilium-config-path\") pod \"694a4f10-519d-487c-bd5f-b28b50e5ae88\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " Jul 9 23:49:20.457931 kubelet[3305]: I0709 23:49:20.457477 3305 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/694a4f10-519d-487c-bd5f-b28b50e5ae88-hubble-tls\") pod \"694a4f10-519d-487c-bd5f-b28b50e5ae88\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " Jul 9 23:49:20.457931 kubelet[3305]: I0709 23:49:20.457488 3305 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d-cilium-config-path\") pod \"ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d\" (UID: \"ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d\") " Jul 9 23:49:20.457931 kubelet[3305]: I0709 23:49:20.457499 3305 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-xtables-lock\") pod \"694a4f10-519d-487c-bd5f-b28b50e5ae88\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " Jul 9 23:49:20.458030 kubelet[3305]: I0709 23:49:20.457510 3305 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-hostproc\") pod \"694a4f10-519d-487c-bd5f-b28b50e5ae88\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " Jul 9 23:49:20.458030 kubelet[3305]: I0709 23:49:20.457524 3305 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-cilium-cgroup\") pod \"694a4f10-519d-487c-bd5f-b28b50e5ae88\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " Jul 9 23:49:20.458030 kubelet[3305]: I0709 23:49:20.457532 3305 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-bpf-maps\") pod \"694a4f10-519d-487c-bd5f-b28b50e5ae88\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " Jul 9 23:49:20.458030 kubelet[3305]: I0709 23:49:20.457540 3305 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-lib-modules\") pod \"694a4f10-519d-487c-bd5f-b28b50e5ae88\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " Jul 9 
23:49:20.458030 kubelet[3305]: I0709 23:49:20.457551 3305 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-host-proc-sys-kernel\") pod \"694a4f10-519d-487c-bd5f-b28b50e5ae88\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " Jul 9 23:49:20.458030 kubelet[3305]: I0709 23:49:20.457561 3305 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-etc-cni-netd\") pod \"694a4f10-519d-487c-bd5f-b28b50e5ae88\" (UID: \"694a4f10-519d-487c-bd5f-b28b50e5ae88\") " Jul 9 23:49:20.458372 kubelet[3305]: I0709 23:49:20.457571 3305 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4t7m\" (UniqueName: \"kubernetes.io/projected/ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d-kube-api-access-c4t7m\") pod \"ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d\" (UID: \"ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d\") " Jul 9 23:49:20.458372 kubelet[3305]: I0709 23:49:20.457602 3305 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-host-proc-sys-net\") on node \"ci-4344.1.1-n-4a8bce7214\" DevicePath \"\"" Jul 9 23:49:20.458372 kubelet[3305]: I0709 23:49:20.457609 3305 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-cilium-run\") on node \"ci-4344.1.1-n-4a8bce7214\" DevicePath \"\"" Jul 9 23:49:20.458372 kubelet[3305]: I0709 23:49:20.457615 3305 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-cni-path\") on node \"ci-4344.1.1-n-4a8bce7214\" DevicePath \"\"" Jul 9 23:49:20.458835 kubelet[3305]: I0709 23:49:20.458631 3305 operation_generator.go:780] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-hostproc" (OuterVolumeSpecName: "hostproc") pod "694a4f10-519d-487c-bd5f-b28b50e5ae88" (UID: "694a4f10-519d-487c-bd5f-b28b50e5ae88"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:49:20.460352 kubelet[3305]: I0709 23:49:20.460322 3305 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d-kube-api-access-c4t7m" (OuterVolumeSpecName: "kube-api-access-c4t7m") pod "ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d" (UID: "ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d"). InnerVolumeSpecName "kube-api-access-c4t7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 9 23:49:20.460421 kubelet[3305]: I0709 23:49:20.460369 3305 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "694a4f10-519d-487c-bd5f-b28b50e5ae88" (UID: "694a4f10-519d-487c-bd5f-b28b50e5ae88"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:49:20.460421 kubelet[3305]: I0709 23:49:20.460381 3305 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "694a4f10-519d-487c-bd5f-b28b50e5ae88" (UID: "694a4f10-519d-487c-bd5f-b28b50e5ae88"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:49:20.460421 kubelet[3305]: I0709 23:49:20.460390 3305 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "694a4f10-519d-487c-bd5f-b28b50e5ae88" (UID: "694a4f10-519d-487c-bd5f-b28b50e5ae88"). 
InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:49:20.460421 kubelet[3305]: I0709 23:49:20.460398 3305 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "694a4f10-519d-487c-bd5f-b28b50e5ae88" (UID: "694a4f10-519d-487c-bd5f-b28b50e5ae88"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:49:20.460421 kubelet[3305]: I0709 23:49:20.460407 3305 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "694a4f10-519d-487c-bd5f-b28b50e5ae88" (UID: "694a4f10-519d-487c-bd5f-b28b50e5ae88"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:49:20.461562 kubelet[3305]: I0709 23:49:20.461540 3305 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "694a4f10-519d-487c-bd5f-b28b50e5ae88" (UID: "694a4f10-519d-487c-bd5f-b28b50e5ae88"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:49:20.462655 kubelet[3305]: I0709 23:49:20.462633 3305 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d" (UID: "ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 9 23:49:20.462884 kubelet[3305]: I0709 23:49:20.462779 3305 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/694a4f10-519d-487c-bd5f-b28b50e5ae88-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "694a4f10-519d-487c-bd5f-b28b50e5ae88" (UID: "694a4f10-519d-487c-bd5f-b28b50e5ae88"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 9 23:49:20.462884 kubelet[3305]: I0709 23:49:20.462820 3305 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/694a4f10-519d-487c-bd5f-b28b50e5ae88-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "694a4f10-519d-487c-bd5f-b28b50e5ae88" (UID: "694a4f10-519d-487c-bd5f-b28b50e5ae88"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 9 23:49:20.462884 kubelet[3305]: I0709 23:49:20.462820 3305 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/694a4f10-519d-487c-bd5f-b28b50e5ae88-kube-api-access-dxw6g" (OuterVolumeSpecName: "kube-api-access-dxw6g") pod "694a4f10-519d-487c-bd5f-b28b50e5ae88" (UID: "694a4f10-519d-487c-bd5f-b28b50e5ae88"). InnerVolumeSpecName "kube-api-access-dxw6g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 9 23:49:20.463641 kubelet[3305]: I0709 23:49:20.463620 3305 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/694a4f10-519d-487c-bd5f-b28b50e5ae88-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "694a4f10-519d-487c-bd5f-b28b50e5ae88" (UID: "694a4f10-519d-487c-bd5f-b28b50e5ae88"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 9 23:49:20.558007 kubelet[3305]: I0709 23:49:20.557964 3305 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-cilium-cgroup\") on node \"ci-4344.1.1-n-4a8bce7214\" DevicePath \"\"" Jul 9 23:49:20.558007 kubelet[3305]: I0709 23:49:20.557999 3305 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-bpf-maps\") on node \"ci-4344.1.1-n-4a8bce7214\" DevicePath \"\"" Jul 9 23:49:20.558007 kubelet[3305]: I0709 23:49:20.558006 3305 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-lib-modules\") on node \"ci-4344.1.1-n-4a8bce7214\" DevicePath \"\"" Jul 9 23:49:20.558007 kubelet[3305]: I0709 23:49:20.558017 3305 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-host-proc-sys-kernel\") on node \"ci-4344.1.1-n-4a8bce7214\" DevicePath \"\"" Jul 9 23:49:20.558007 kubelet[3305]: I0709 23:49:20.558025 3305 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-etc-cni-netd\") on node \"ci-4344.1.1-n-4a8bce7214\" DevicePath \"\"" Jul 9 23:49:20.558242 kubelet[3305]: I0709 23:49:20.558033 3305 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c4t7m\" (UniqueName: \"kubernetes.io/projected/ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d-kube-api-access-c4t7m\") on node \"ci-4344.1.1-n-4a8bce7214\" DevicePath \"\"" Jul 9 23:49:20.558242 kubelet[3305]: I0709 23:49:20.558053 3305 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dxw6g\" (UniqueName: 
\"kubernetes.io/projected/694a4f10-519d-487c-bd5f-b28b50e5ae88-kube-api-access-dxw6g\") on node \"ci-4344.1.1-n-4a8bce7214\" DevicePath \"\"" Jul 9 23:49:20.558242 kubelet[3305]: I0709 23:49:20.558058 3305 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/694a4f10-519d-487c-bd5f-b28b50e5ae88-clustermesh-secrets\") on node \"ci-4344.1.1-n-4a8bce7214\" DevicePath \"\"" Jul 9 23:49:20.558242 kubelet[3305]: I0709 23:49:20.558063 3305 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/694a4f10-519d-487c-bd5f-b28b50e5ae88-cilium-config-path\") on node \"ci-4344.1.1-n-4a8bce7214\" DevicePath \"\"" Jul 9 23:49:20.558242 kubelet[3305]: I0709 23:49:20.558068 3305 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/694a4f10-519d-487c-bd5f-b28b50e5ae88-hubble-tls\") on node \"ci-4344.1.1-n-4a8bce7214\" DevicePath \"\"" Jul 9 23:49:20.558242 kubelet[3305]: I0709 23:49:20.558074 3305 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d-cilium-config-path\") on node \"ci-4344.1.1-n-4a8bce7214\" DevicePath \"\"" Jul 9 23:49:20.558242 kubelet[3305]: I0709 23:49:20.558082 3305 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-xtables-lock\") on node \"ci-4344.1.1-n-4a8bce7214\" DevicePath \"\"" Jul 9 23:49:20.558242 kubelet[3305]: I0709 23:49:20.558087 3305 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/694a4f10-519d-487c-bd5f-b28b50e5ae88-hostproc\") on node \"ci-4344.1.1-n-4a8bce7214\" DevicePath \"\"" Jul 9 23:49:21.049674 kubelet[3305]: I0709 23:49:21.049637 3305 scope.go:117] "RemoveContainer" 
containerID="c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5" Jul 9 23:49:21.053187 systemd[1]: Removed slice kubepods-besteffort-podef0a66cc_3d78_4bbc_82a6_df4fc6d69f6d.slice - libcontainer container kubepods-besteffort-podef0a66cc_3d78_4bbc_82a6_df4fc6d69f6d.slice. Jul 9 23:49:21.054511 containerd[1903]: time="2025-07-09T23:49:21.054465724Z" level=info msg="RemoveContainer for \"c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5\"" Jul 9 23:49:21.066360 systemd[1]: Removed slice kubepods-burstable-pod694a4f10_519d_487c_bd5f_b28b50e5ae88.slice - libcontainer container kubepods-burstable-pod694a4f10_519d_487c_bd5f_b28b50e5ae88.slice. Jul 9 23:49:21.066430 systemd[1]: kubepods-burstable-pod694a4f10_519d_487c_bd5f_b28b50e5ae88.slice: Consumed 4.399s CPU time, 124.7M memory peak, 128K read from disk, 12.9M written to disk. Jul 9 23:49:21.070560 containerd[1903]: time="2025-07-09T23:49:21.069448145Z" level=info msg="RemoveContainer for \"c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5\" returns successfully" Jul 9 23:49:21.070648 kubelet[3305]: I0709 23:49:21.069848 3305 scope.go:117] "RemoveContainer" containerID="c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5" Jul 9 23:49:21.071515 containerd[1903]: time="2025-07-09T23:49:21.071468364Z" level=error msg="ContainerStatus for \"c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5\": not found" Jul 9 23:49:21.071723 kubelet[3305]: E0709 23:49:21.071698 3305 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5\": not found" containerID="c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5" Jul 9 23:49:21.071943 
kubelet[3305]: I0709 23:49:21.071725 3305 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5"} err="failed to get container status \"c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5\": rpc error: code = NotFound desc = an error occurred when try to find container \"c4703ed5f5211e7c5cd46fdaad82e3d6e0f03d81383707d666a4b0a9708532e5\": not found" Jul 9 23:49:21.071943 kubelet[3305]: I0709 23:49:21.071775 3305 scope.go:117] "RemoveContainer" containerID="7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef" Jul 9 23:49:21.073692 containerd[1903]: time="2025-07-09T23:49:21.073590475Z" level=info msg="RemoveContainer for \"7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef\"" Jul 9 23:49:21.085834 containerd[1903]: time="2025-07-09T23:49:21.085771482Z" level=info msg="RemoveContainer for \"7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef\" returns successfully" Jul 9 23:49:21.086220 kubelet[3305]: I0709 23:49:21.086060 3305 scope.go:117] "RemoveContainer" containerID="233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a" Jul 9 23:49:21.087751 containerd[1903]: time="2025-07-09T23:49:21.087466123Z" level=info msg="RemoveContainer for \"233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a\"" Jul 9 23:49:21.100964 containerd[1903]: time="2025-07-09T23:49:21.100932324Z" level=info msg="RemoveContainer for \"233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a\" returns successfully" Jul 9 23:49:21.101200 kubelet[3305]: I0709 23:49:21.101179 3305 scope.go:117] "RemoveContainer" containerID="2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d" Jul 9 23:49:21.103861 containerd[1903]: time="2025-07-09T23:49:21.103791140Z" level=info msg="RemoveContainer for \"2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d\"" Jul 9 23:49:21.118597 
containerd[1903]: time="2025-07-09T23:49:21.118560609Z" level=info msg="RemoveContainer for \"2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d\" returns successfully" Jul 9 23:49:21.118924 kubelet[3305]: I0709 23:49:21.118804 3305 scope.go:117] "RemoveContainer" containerID="285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4" Jul 9 23:49:21.120232 containerd[1903]: time="2025-07-09T23:49:21.120205712Z" level=info msg="RemoveContainer for \"285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4\"" Jul 9 23:49:21.131612 containerd[1903]: time="2025-07-09T23:49:21.131579916Z" level=info msg="RemoveContainer for \"285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4\" returns successfully" Jul 9 23:49:21.131822 kubelet[3305]: I0709 23:49:21.131803 3305 scope.go:117] "RemoveContainer" containerID="760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7" Jul 9 23:49:21.133110 containerd[1903]: time="2025-07-09T23:49:21.133073270Z" level=info msg="RemoveContainer for \"760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7\"" Jul 9 23:49:21.162880 containerd[1903]: time="2025-07-09T23:49:21.162843176Z" level=info msg="RemoveContainer for \"760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7\" returns successfully" Jul 9 23:49:21.163234 kubelet[3305]: I0709 23:49:21.163128 3305 scope.go:117] "RemoveContainer" containerID="7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef" Jul 9 23:49:21.163570 containerd[1903]: time="2025-07-09T23:49:21.163460053Z" level=error msg="ContainerStatus for \"7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef\": not found" Jul 9 23:49:21.163715 kubelet[3305]: E0709 23:49:21.163695 3305 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = an error occurred when try to find container \"7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef\": not found" containerID="7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef" Jul 9 23:49:21.163838 kubelet[3305]: I0709 23:49:21.163816 3305 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef"} err="failed to get container status \"7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef\": rpc error: code = NotFound desc = an error occurred when try to find container \"7fdd9c2a09aab2f0ea318b7c2f7412aabea90633f272415e3f30955b9e337aef\": not found" Jul 9 23:49:21.163977 kubelet[3305]: I0709 23:49:21.163920 3305 scope.go:117] "RemoveContainer" containerID="233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a" Jul 9 23:49:21.164298 containerd[1903]: time="2025-07-09T23:49:21.164275408Z" level=error msg="ContainerStatus for \"233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a\": not found" Jul 9 23:49:21.164544 kubelet[3305]: E0709 23:49:21.164521 3305 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a\": not found" containerID="233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a" Jul 9 23:49:21.164544 kubelet[3305]: I0709 23:49:21.164544 3305 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a"} err="failed to get container status \"233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"233a41abe6768ede8fe7bd0c1b450cdd9c6226ec48162f8a2e4257fac223ca6a\": not found" Jul 9 23:49:21.164640 kubelet[3305]: I0709 23:49:21.164557 3305 scope.go:117] "RemoveContainer" containerID="2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d" Jul 9 23:49:21.164779 containerd[1903]: time="2025-07-09T23:49:21.164752920Z" level=error msg="ContainerStatus for \"2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d\": not found" Jul 9 23:49:21.165515 kubelet[3305]: E0709 23:49:21.165493 3305 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d\": not found" containerID="2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d" Jul 9 23:49:21.165604 kubelet[3305]: I0709 23:49:21.165515 3305 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d"} err="failed to get container status \"2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d\": rpc error: code = NotFound desc = an error occurred when try to find container \"2188dcbacd9a4b87c0c00b9ec5befc49b997e1357e8160571b1b99b6e9ed636d\": not found" Jul 9 23:49:21.165604 kubelet[3305]: I0709 23:49:21.165548 3305 scope.go:117] "RemoveContainer" containerID="285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4" Jul 9 23:49:21.165785 containerd[1903]: time="2025-07-09T23:49:21.165753825Z" level=error msg="ContainerStatus for \"285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4\": not found" Jul 9 23:49:21.165921 kubelet[3305]: E0709 23:49:21.165895 3305 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4\": not found" containerID="285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4" Jul 9 23:49:21.165955 kubelet[3305]: I0709 23:49:21.165920 3305 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4"} err="failed to get container status \"285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4\": rpc error: code = NotFound desc = an error occurred when try to find container \"285ab99e5b7c8d44f349b940630bb6616d515b5478c480c3150be0010faa9cb4\": not found" Jul 9 23:49:21.165955 kubelet[3305]: I0709 23:49:21.165933 3305 scope.go:117] "RemoveContainer" containerID="760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7" Jul 9 23:49:21.166254 containerd[1903]: time="2025-07-09T23:49:21.166215321Z" level=error msg="ContainerStatus for \"760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7\": not found" Jul 9 23:49:21.166359 kubelet[3305]: E0709 23:49:21.166321 3305 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7\": not found" containerID="760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7" Jul 9 23:49:21.166446 kubelet[3305]: I0709 23:49:21.166367 3305 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7"} err="failed to get container status \"760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7\": rpc error: code = NotFound desc = an error occurred when try to find container \"760a82b601ff32d47273ba7396ee789d17b3150b13f8462d0d9b80a1abdd78b7\": not found" Jul 9 23:49:21.205184 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-101cc49d208446f9215ae69ff9f1009ab4cb4dce9bdc860d5a0420ccc82b5e43-shm.mount: Deactivated successfully. Jul 9 23:49:21.205276 systemd[1]: var-lib-kubelet-pods-ef0a66cc\x2d3d78\x2d4bbc\x2d82a6\x2ddf4fc6d69f6d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc4t7m.mount: Deactivated successfully. Jul 9 23:49:21.205326 systemd[1]: var-lib-kubelet-pods-694a4f10\x2d519d\x2d487c\x2dbd5f\x2db28b50e5ae88-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 9 23:49:21.205366 systemd[1]: var-lib-kubelet-pods-694a4f10\x2d519d\x2d487c\x2dbd5f\x2db28b50e5ae88-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddxw6g.mount: Deactivated successfully. Jul 9 23:49:21.205400 systemd[1]: var-lib-kubelet-pods-694a4f10\x2d519d\x2d487c\x2dbd5f\x2db28b50e5ae88-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 9 23:49:21.577798 update_engine[1887]: I20250709 23:49:21.577359 1887 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 9 23:49:21.577798 update_engine[1887]: I20250709 23:49:21.577411 1887 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 9 23:49:21.577798 update_engine[1887]: I20250709 23:49:21.577574 1887 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 9 23:49:21.578177 update_engine[1887]: I20250709 23:49:21.577874 1887 omaha_request_params.cc:62] Current group set to beta Jul 9 23:49:21.578177 update_engine[1887]: I20250709 23:49:21.577962 1887 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 9 23:49:21.578177 update_engine[1887]: I20250709 23:49:21.577968 1887 update_attempter.cc:643] Scheduling an action processor start. Jul 9 23:49:21.578177 update_engine[1887]: I20250709 23:49:21.577982 1887 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 9 23:49:21.578177 update_engine[1887]: I20250709 23:49:21.578007 1887 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 9 23:49:21.578177 update_engine[1887]: I20250709 23:49:21.578084 1887 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 9 23:49:21.578177 update_engine[1887]: I20250709 23:49:21.578089 1887 omaha_request_action.cc:272] Request: Jul 9 23:49:21.578177 update_engine[1887]: Jul 9 23:49:21.578177 update_engine[1887]: Jul 9 23:49:21.578177 update_engine[1887]: Jul 9 23:49:21.578177 update_engine[1887]: Jul 9 23:49:21.578177 update_engine[1887]: Jul 9 23:49:21.578177 update_engine[1887]: Jul 9 23:49:21.578177 update_engine[1887]: Jul 9 23:49:21.578177 update_engine[1887]: Jul 9 23:49:21.578177 update_engine[1887]: I20250709 23:49:21.578094 1887 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 9 23:49:21.578699 locksmithd[1988]: LastCheckedTime=0 Progress=0 
CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 9 23:49:21.578928 update_engine[1887]: I20250709 23:49:21.578894 1887 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 9 23:49:21.579327 update_engine[1887]: I20250709 23:49:21.579222 1887 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 9 23:49:21.745995 kubelet[3305]: I0709 23:49:21.745961 3305 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="694a4f10-519d-487c-bd5f-b28b50e5ae88" path="/var/lib/kubelet/pods/694a4f10-519d-487c-bd5f-b28b50e5ae88/volumes" Jul 9 23:49:21.746387 kubelet[3305]: I0709 23:49:21.746365 3305 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d" path="/var/lib/kubelet/pods/ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d/volumes" Jul 9 23:49:21.762368 update_engine[1887]: E20250709 23:49:21.762315 1887 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 9 23:49:21.762449 update_engine[1887]: I20250709 23:49:21.762413 1887 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 9 23:49:22.191810 sshd[4890]: Connection closed by 10.200.16.10 port 43666 Jul 9 23:49:22.192432 sshd-session[4888]: pam_unix(sshd:session): session closed for user core Jul 9 23:49:22.195053 systemd-logind[1884]: Session 24 logged out. Waiting for processes to exit. Jul 9 23:49:22.196717 systemd[1]: sshd@21-10.200.20.10:22-10.200.16.10:43666.service: Deactivated successfully. Jul 9 23:49:22.198816 systemd[1]: session-24.scope: Deactivated successfully. Jul 9 23:49:22.200318 systemd-logind[1884]: Removed session 24. Jul 9 23:49:22.279730 systemd[1]: Started sshd@22-10.200.20.10:22-10.200.16.10:36058.service - OpenSSH per-connection server daemon (10.200.16.10:36058). 
Jul 9 23:49:22.383243 containerd[1903]: time="2025-07-09T23:49:22.383193101Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1752104960 nanos:313249396}" Jul 9 23:49:22.774244 sshd[5041]: Accepted publickey for core from 10.200.16.10 port 36058 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:49:22.775354 sshd-session[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:49:22.778941 systemd-logind[1884]: New session 25 of user core. Jul 9 23:49:22.786325 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 9 23:49:23.472669 kubelet[3305]: I0709 23:49:23.472546 3305 memory_manager.go:355] "RemoveStaleState removing state" podUID="694a4f10-519d-487c-bd5f-b28b50e5ae88" containerName="cilium-agent" Jul 9 23:49:23.472669 kubelet[3305]: I0709 23:49:23.472575 3305 memory_manager.go:355] "RemoveStaleState removing state" podUID="ef0a66cc-3d78-4bbc-82a6-df4fc6d69f6d" containerName="cilium-operator" Jul 9 23:49:23.479000 systemd[1]: Created slice kubepods-burstable-pod2f3c4005_6baa_4e03_b0d7_cda2b553b72a.slice - libcontainer container kubepods-burstable-pod2f3c4005_6baa_4e03_b0d7_cda2b553b72a.slice. Jul 9 23:49:23.548103 sshd[5043]: Connection closed by 10.200.16.10 port 36058 Jul 9 23:49:23.548662 sshd-session[5041]: pam_unix(sshd:session): session closed for user core Jul 9 23:49:23.551372 systemd-logind[1884]: Session 25 logged out. Waiting for processes to exit. Jul 9 23:49:23.551486 systemd[1]: sshd@22-10.200.20.10:22-10.200.16.10:36058.service: Deactivated successfully. Jul 9 23:49:23.552942 systemd[1]: session-25.scope: Deactivated successfully. Jul 9 23:49:23.555436 systemd-logind[1884]: Removed session 25. 
Jul 9 23:49:23.576828 kubelet[3305]: I0709 23:49:23.576762 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2f3c4005-6baa-4e03-b0d7-cda2b553b72a-host-proc-sys-kernel\") pod \"cilium-2l5dk\" (UID: \"2f3c4005-6baa-4e03-b0d7-cda2b553b72a\") " pod="kube-system/cilium-2l5dk" Jul 9 23:49:23.576998 kubelet[3305]: I0709 23:49:23.576866 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2f3c4005-6baa-4e03-b0d7-cda2b553b72a-hostproc\") pod \"cilium-2l5dk\" (UID: \"2f3c4005-6baa-4e03-b0d7-cda2b553b72a\") " pod="kube-system/cilium-2l5dk" Jul 9 23:49:23.576998 kubelet[3305]: I0709 23:49:23.576884 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5p42\" (UniqueName: \"kubernetes.io/projected/2f3c4005-6baa-4e03-b0d7-cda2b553b72a-kube-api-access-d5p42\") pod \"cilium-2l5dk\" (UID: \"2f3c4005-6baa-4e03-b0d7-cda2b553b72a\") " pod="kube-system/cilium-2l5dk" Jul 9 23:49:23.576998 kubelet[3305]: I0709 23:49:23.576897 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2f3c4005-6baa-4e03-b0d7-cda2b553b72a-cilium-run\") pod \"cilium-2l5dk\" (UID: \"2f3c4005-6baa-4e03-b0d7-cda2b553b72a\") " pod="kube-system/cilium-2l5dk" Jul 9 23:49:23.577239 kubelet[3305]: I0709 23:49:23.576910 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2f3c4005-6baa-4e03-b0d7-cda2b553b72a-cilium-cgroup\") pod \"cilium-2l5dk\" (UID: \"2f3c4005-6baa-4e03-b0d7-cda2b553b72a\") " pod="kube-system/cilium-2l5dk" Jul 9 23:49:23.577239 kubelet[3305]: I0709 23:49:23.577187 3305 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2f3c4005-6baa-4e03-b0d7-cda2b553b72a-cilium-ipsec-secrets\") pod \"cilium-2l5dk\" (UID: \"2f3c4005-6baa-4e03-b0d7-cda2b553b72a\") " pod="kube-system/cilium-2l5dk" Jul 9 23:49:23.577239 kubelet[3305]: I0709 23:49:23.577201 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2f3c4005-6baa-4e03-b0d7-cda2b553b72a-clustermesh-secrets\") pod \"cilium-2l5dk\" (UID: \"2f3c4005-6baa-4e03-b0d7-cda2b553b72a\") " pod="kube-system/cilium-2l5dk" Jul 9 23:49:23.577421 kubelet[3305]: I0709 23:49:23.577306 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f3c4005-6baa-4e03-b0d7-cda2b553b72a-lib-modules\") pod \"cilium-2l5dk\" (UID: \"2f3c4005-6baa-4e03-b0d7-cda2b553b72a\") " pod="kube-system/cilium-2l5dk" Jul 9 23:49:23.577421 kubelet[3305]: I0709 23:49:23.577324 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f3c4005-6baa-4e03-b0d7-cda2b553b72a-xtables-lock\") pod \"cilium-2l5dk\" (UID: \"2f3c4005-6baa-4e03-b0d7-cda2b553b72a\") " pod="kube-system/cilium-2l5dk" Jul 9 23:49:23.577421 kubelet[3305]: I0709 23:49:23.577335 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2f3c4005-6baa-4e03-b0d7-cda2b553b72a-host-proc-sys-net\") pod \"cilium-2l5dk\" (UID: \"2f3c4005-6baa-4e03-b0d7-cda2b553b72a\") " pod="kube-system/cilium-2l5dk" Jul 9 23:49:23.577612 kubelet[3305]: I0709 23:49:23.577347 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/2f3c4005-6baa-4e03-b0d7-cda2b553b72a-cilium-config-path\") pod \"cilium-2l5dk\" (UID: \"2f3c4005-6baa-4e03-b0d7-cda2b553b72a\") " pod="kube-system/cilium-2l5dk" Jul 9 23:49:23.577612 kubelet[3305]: I0709 23:49:23.577523 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2f3c4005-6baa-4e03-b0d7-cda2b553b72a-cni-path\") pod \"cilium-2l5dk\" (UID: \"2f3c4005-6baa-4e03-b0d7-cda2b553b72a\") " pod="kube-system/cilium-2l5dk" Jul 9 23:49:23.577612 kubelet[3305]: I0709 23:49:23.577535 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2f3c4005-6baa-4e03-b0d7-cda2b553b72a-hubble-tls\") pod \"cilium-2l5dk\" (UID: \"2f3c4005-6baa-4e03-b0d7-cda2b553b72a\") " pod="kube-system/cilium-2l5dk" Jul 9 23:49:23.577612 kubelet[3305]: I0709 23:49:23.577544 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2f3c4005-6baa-4e03-b0d7-cda2b553b72a-bpf-maps\") pod \"cilium-2l5dk\" (UID: \"2f3c4005-6baa-4e03-b0d7-cda2b553b72a\") " pod="kube-system/cilium-2l5dk" Jul 9 23:49:23.577612 kubelet[3305]: I0709 23:49:23.577565 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f3c4005-6baa-4e03-b0d7-cda2b553b72a-etc-cni-netd\") pod \"cilium-2l5dk\" (UID: \"2f3c4005-6baa-4e03-b0d7-cda2b553b72a\") " pod="kube-system/cilium-2l5dk" Jul 9 23:49:23.629422 systemd[1]: Started sshd@23-10.200.20.10:22-10.200.16.10:36070.service - OpenSSH per-connection server daemon (10.200.16.10:36070). 
Jul 9 23:49:23.782658 containerd[1903]: time="2025-07-09T23:49:23.782541949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2l5dk,Uid:2f3c4005-6baa-4e03-b0d7-cda2b553b72a,Namespace:kube-system,Attempt:0,}" Jul 9 23:49:23.824858 kubelet[3305]: E0709 23:49:23.824819 3305 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 9 23:49:23.857247 containerd[1903]: time="2025-07-09T23:49:23.857180138Z" level=info msg="connecting to shim 041f724ffb8b418e3344e341659e5f1a370b32728ee3983020853b801a2c764f" address="unix:///run/containerd/s/397a1b3d15084344a16602eda35d00b294ee864274fd8c5b931430dc08c0c5d0" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:49:23.876188 systemd[1]: Started cri-containerd-041f724ffb8b418e3344e341659e5f1a370b32728ee3983020853b801a2c764f.scope - libcontainer container 041f724ffb8b418e3344e341659e5f1a370b32728ee3983020853b801a2c764f. 
Jul 9 23:49:23.902829 containerd[1903]: time="2025-07-09T23:49:23.902786685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2l5dk,Uid:2f3c4005-6baa-4e03-b0d7-cda2b553b72a,Namespace:kube-system,Attempt:0,} returns sandbox id \"041f724ffb8b418e3344e341659e5f1a370b32728ee3983020853b801a2c764f\"" Jul 9 23:49:23.905963 containerd[1903]: time="2025-07-09T23:49:23.905773528Z" level=info msg="CreateContainer within sandbox \"041f724ffb8b418e3344e341659e5f1a370b32728ee3983020853b801a2c764f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 9 23:49:23.944292 containerd[1903]: time="2025-07-09T23:49:23.944258566Z" level=info msg="Container 2601a1a074cc977aedf340c494d5423336495511863276c9fea69f894678fddb: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:23.967160 containerd[1903]: time="2025-07-09T23:49:23.967118137Z" level=info msg="CreateContainer within sandbox \"041f724ffb8b418e3344e341659e5f1a370b32728ee3983020853b801a2c764f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2601a1a074cc977aedf340c494d5423336495511863276c9fea69f894678fddb\"" Jul 9 23:49:23.968821 containerd[1903]: time="2025-07-09T23:49:23.967956877Z" level=info msg="StartContainer for \"2601a1a074cc977aedf340c494d5423336495511863276c9fea69f894678fddb\"" Jul 9 23:49:23.968821 containerd[1903]: time="2025-07-09T23:49:23.968576266Z" level=info msg="connecting to shim 2601a1a074cc977aedf340c494d5423336495511863276c9fea69f894678fddb" address="unix:///run/containerd/s/397a1b3d15084344a16602eda35d00b294ee864274fd8c5b931430dc08c0c5d0" protocol=ttrpc version=3 Jul 9 23:49:23.983165 systemd[1]: Started cri-containerd-2601a1a074cc977aedf340c494d5423336495511863276c9fea69f894678fddb.scope - libcontainer container 2601a1a074cc977aedf340c494d5423336495511863276c9fea69f894678fddb. 
Jul 9 23:49:24.018717 containerd[1903]: time="2025-07-09T23:49:24.018679043Z" level=info msg="StartContainer for \"2601a1a074cc977aedf340c494d5423336495511863276c9fea69f894678fddb\" returns successfully"
Jul 9 23:49:24.023193 systemd[1]: cri-containerd-2601a1a074cc977aedf340c494d5423336495511863276c9fea69f894678fddb.scope: Deactivated successfully.
Jul 9 23:49:24.025763 containerd[1903]: time="2025-07-09T23:49:24.025707206Z" level=info msg="received exit event container_id:\"2601a1a074cc977aedf340c494d5423336495511863276c9fea69f894678fddb\" id:\"2601a1a074cc977aedf340c494d5423336495511863276c9fea69f894678fddb\" pid:5116 exited_at:{seconds:1752104964 nanos:25157132}"
Jul 9 23:49:24.025950 containerd[1903]: time="2025-07-09T23:49:24.025860771Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2601a1a074cc977aedf340c494d5423336495511863276c9fea69f894678fddb\" id:\"2601a1a074cc977aedf340c494d5423336495511863276c9fea69f894678fddb\" pid:5116 exited_at:{seconds:1752104964 nanos:25157132}"
Jul 9 23:49:24.104873 sshd[5053]: Accepted publickey for core from 10.200.16.10 port 36070 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:49:24.107665 sshd-session[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:49:24.112635 systemd-logind[1884]: New session 26 of user core.
Jul 9 23:49:24.118154 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 9 23:49:24.448077 sshd[5150]: Connection closed by 10.200.16.10 port 36070
Jul 9 23:49:24.447727 sshd-session[5053]: pam_unix(sshd:session): session closed for user core
Jul 9 23:49:24.450270 systemd-logind[1884]: Session 26 logged out. Waiting for processes to exit.
Jul 9 23:49:24.451783 systemd[1]: sshd@23-10.200.20.10:22-10.200.16.10:36070.service: Deactivated successfully.
Jul 9 23:49:24.454686 systemd[1]: session-26.scope: Deactivated successfully.
Jul 9 23:49:24.456734 systemd-logind[1884]: Removed session 26.
Jul 9 23:49:24.532351 systemd[1]: Started sshd@24-10.200.20.10:22-10.200.16.10:36082.service - OpenSSH per-connection server daemon (10.200.16.10:36082).
Jul 9 23:49:24.986927 sshd[5157]: Accepted publickey for core from 10.200.16.10 port 36082 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:49:24.988467 sshd-session[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:49:24.992628 systemd-logind[1884]: New session 27 of user core.
Jul 9 23:49:25.000151 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 9 23:49:25.076674 containerd[1903]: time="2025-07-09T23:49:25.076622345Z" level=info msg="CreateContainer within sandbox \"041f724ffb8b418e3344e341659e5f1a370b32728ee3983020853b801a2c764f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 9 23:49:25.122890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3474973477.mount: Deactivated successfully.
Jul 9 23:49:25.124949 containerd[1903]: time="2025-07-09T23:49:25.124911693Z" level=info msg="Container 16ca39def09d86fd93fe25302ed13ca301b4578bc43cdf3b7a8f365ade016b02: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:49:25.148554 containerd[1903]: time="2025-07-09T23:49:25.148511658Z" level=info msg="CreateContainer within sandbox \"041f724ffb8b418e3344e341659e5f1a370b32728ee3983020853b801a2c764f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"16ca39def09d86fd93fe25302ed13ca301b4578bc43cdf3b7a8f365ade016b02\""
Jul 9 23:49:25.149181 containerd[1903]: time="2025-07-09T23:49:25.149116750Z" level=info msg="StartContainer for \"16ca39def09d86fd93fe25302ed13ca301b4578bc43cdf3b7a8f365ade016b02\""
Jul 9 23:49:25.150203 containerd[1903]: time="2025-07-09T23:49:25.150099215Z" level=info msg="connecting to shim 16ca39def09d86fd93fe25302ed13ca301b4578bc43cdf3b7a8f365ade016b02" address="unix:///run/containerd/s/397a1b3d15084344a16602eda35d00b294ee864274fd8c5b931430dc08c0c5d0" protocol=ttrpc version=3
Jul 9 23:49:25.170322 systemd[1]: Started cri-containerd-16ca39def09d86fd93fe25302ed13ca301b4578bc43cdf3b7a8f365ade016b02.scope - libcontainer container 16ca39def09d86fd93fe25302ed13ca301b4578bc43cdf3b7a8f365ade016b02.
Jul 9 23:49:25.195290 systemd[1]: cri-containerd-16ca39def09d86fd93fe25302ed13ca301b4578bc43cdf3b7a8f365ade016b02.scope: Deactivated successfully.
Jul 9 23:49:25.196719 containerd[1903]: time="2025-07-09T23:49:25.196691555Z" level=info msg="TaskExit event in podsandbox handler container_id:\"16ca39def09d86fd93fe25302ed13ca301b4578bc43cdf3b7a8f365ade016b02\" id:\"16ca39def09d86fd93fe25302ed13ca301b4578bc43cdf3b7a8f365ade016b02\" pid:5172 exited_at:{seconds:1752104965 nanos:196473867}"
Jul 9 23:49:25.197852 containerd[1903]: time="2025-07-09T23:49:25.197744222Z" level=info msg="received exit event container_id:\"16ca39def09d86fd93fe25302ed13ca301b4578bc43cdf3b7a8f365ade016b02\" id:\"16ca39def09d86fd93fe25302ed13ca301b4578bc43cdf3b7a8f365ade016b02\" pid:5172 exited_at:{seconds:1752104965 nanos:196473867}"
Jul 9 23:49:25.198989 containerd[1903]: time="2025-07-09T23:49:25.198943126Z" level=info msg="StartContainer for \"16ca39def09d86fd93fe25302ed13ca301b4578bc43cdf3b7a8f365ade016b02\" returns successfully"
Jul 9 23:49:25.681980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16ca39def09d86fd93fe25302ed13ca301b4578bc43cdf3b7a8f365ade016b02-rootfs.mount: Deactivated successfully.
Jul 9 23:49:26.080384 containerd[1903]: time="2025-07-09T23:49:26.080282824Z" level=info msg="CreateContainer within sandbox \"041f724ffb8b418e3344e341659e5f1a370b32728ee3983020853b801a2c764f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 9 23:49:26.126583 containerd[1903]: time="2025-07-09T23:49:26.126544569Z" level=info msg="Container 22954e5821bebcd250376f35c6e00cb47c4d5827165f02a2d4478c7f65375055: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:49:26.148618 containerd[1903]: time="2025-07-09T23:49:26.148567649Z" level=info msg="CreateContainer within sandbox \"041f724ffb8b418e3344e341659e5f1a370b32728ee3983020853b801a2c764f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"22954e5821bebcd250376f35c6e00cb47c4d5827165f02a2d4478c7f65375055\""
Jul 9 23:49:26.150271 containerd[1903]: time="2025-07-09T23:49:26.149078298Z" level=info msg="StartContainer for \"22954e5821bebcd250376f35c6e00cb47c4d5827165f02a2d4478c7f65375055\""
Jul 9 23:49:26.150271 containerd[1903]: time="2025-07-09T23:49:26.149992120Z" level=info msg="connecting to shim 22954e5821bebcd250376f35c6e00cb47c4d5827165f02a2d4478c7f65375055" address="unix:///run/containerd/s/397a1b3d15084344a16602eda35d00b294ee864274fd8c5b931430dc08c0c5d0" protocol=ttrpc version=3
Jul 9 23:49:26.169163 systemd[1]: Started cri-containerd-22954e5821bebcd250376f35c6e00cb47c4d5827165f02a2d4478c7f65375055.scope - libcontainer container 22954e5821bebcd250376f35c6e00cb47c4d5827165f02a2d4478c7f65375055.
Jul 9 23:49:26.192410 systemd[1]: cri-containerd-22954e5821bebcd250376f35c6e00cb47c4d5827165f02a2d4478c7f65375055.scope: Deactivated successfully.
Jul 9 23:49:26.194614 containerd[1903]: time="2025-07-09T23:49:26.194585266Z" level=info msg="TaskExit event in podsandbox handler container_id:\"22954e5821bebcd250376f35c6e00cb47c4d5827165f02a2d4478c7f65375055\" id:\"22954e5821bebcd250376f35c6e00cb47c4d5827165f02a2d4478c7f65375055\" pid:5219 exited_at:{seconds:1752104966 nanos:194014551}"
Jul 9 23:49:26.195822 containerd[1903]: time="2025-07-09T23:49:26.195788890Z" level=info msg="received exit event container_id:\"22954e5821bebcd250376f35c6e00cb47c4d5827165f02a2d4478c7f65375055\" id:\"22954e5821bebcd250376f35c6e00cb47c4d5827165f02a2d4478c7f65375055\" pid:5219 exited_at:{seconds:1752104966 nanos:194014551}"
Jul 9 23:49:26.201477 containerd[1903]: time="2025-07-09T23:49:26.201455551Z" level=info msg="StartContainer for \"22954e5821bebcd250376f35c6e00cb47c4d5827165f02a2d4478c7f65375055\" returns successfully"
Jul 9 23:49:26.211236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22954e5821bebcd250376f35c6e00cb47c4d5827165f02a2d4478c7f65375055-rootfs.mount: Deactivated successfully.
Jul 9 23:49:26.979517 kubelet[3305]: I0709 23:49:26.979470 3305 setters.go:602] "Node became not ready" node="ci-4344.1.1-n-4a8bce7214" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-09T23:49:26Z","lastTransitionTime":"2025-07-09T23:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 9 23:49:27.085050 containerd[1903]: time="2025-07-09T23:49:27.084969395Z" level=info msg="CreateContainer within sandbox \"041f724ffb8b418e3344e341659e5f1a370b32728ee3983020853b801a2c764f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 9 23:49:27.135698 containerd[1903]: time="2025-07-09T23:49:27.135654495Z" level=info msg="Container 32e839c05e37f2f9ead0c0116758b30eaffa0430ddb7c1e011e4c54e7de11707: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:49:27.138079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4228802760.mount: Deactivated successfully.
Jul 9 23:49:27.157981 containerd[1903]: time="2025-07-09T23:49:27.157940416Z" level=info msg="CreateContainer within sandbox \"041f724ffb8b418e3344e341659e5f1a370b32728ee3983020853b801a2c764f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"32e839c05e37f2f9ead0c0116758b30eaffa0430ddb7c1e011e4c54e7de11707\""
Jul 9 23:49:27.158762 containerd[1903]: time="2025-07-09T23:49:27.158554748Z" level=info msg="StartContainer for \"32e839c05e37f2f9ead0c0116758b30eaffa0430ddb7c1e011e4c54e7de11707\""
Jul 9 23:49:27.159939 containerd[1903]: time="2025-07-09T23:49:27.159917306Z" level=info msg="connecting to shim 32e839c05e37f2f9ead0c0116758b30eaffa0430ddb7c1e011e4c54e7de11707" address="unix:///run/containerd/s/397a1b3d15084344a16602eda35d00b294ee864274fd8c5b931430dc08c0c5d0" protocol=ttrpc version=3
Jul 9 23:49:27.177155 systemd[1]: Started cri-containerd-32e839c05e37f2f9ead0c0116758b30eaffa0430ddb7c1e011e4c54e7de11707.scope - libcontainer container 32e839c05e37f2f9ead0c0116758b30eaffa0430ddb7c1e011e4c54e7de11707.
Jul 9 23:49:27.193961 systemd[1]: cri-containerd-32e839c05e37f2f9ead0c0116758b30eaffa0430ddb7c1e011e4c54e7de11707.scope: Deactivated successfully.
Jul 9 23:49:27.195395 containerd[1903]: time="2025-07-09T23:49:27.195274967Z" level=info msg="TaskExit event in podsandbox handler container_id:\"32e839c05e37f2f9ead0c0116758b30eaffa0430ddb7c1e011e4c54e7de11707\" id:\"32e839c05e37f2f9ead0c0116758b30eaffa0430ddb7c1e011e4c54e7de11707\" pid:5258 exited_at:{seconds:1752104967 nanos:194279605}"
Jul 9 23:49:27.204008 containerd[1903]: time="2025-07-09T23:49:27.203977753Z" level=info msg="received exit event container_id:\"32e839c05e37f2f9ead0c0116758b30eaffa0430ddb7c1e011e4c54e7de11707\" id:\"32e839c05e37f2f9ead0c0116758b30eaffa0430ddb7c1e011e4c54e7de11707\" pid:5258 exited_at:{seconds:1752104967 nanos:194279605}"
Jul 9 23:49:27.205813 containerd[1903]: time="2025-07-09T23:49:27.205789542Z" level=info msg="StartContainer for \"32e839c05e37f2f9ead0c0116758b30eaffa0430ddb7c1e011e4c54e7de11707\" returns successfully"
Jul 9 23:49:27.220867 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32e839c05e37f2f9ead0c0116758b30eaffa0430ddb7c1e011e4c54e7de11707-rootfs.mount: Deactivated successfully.
Jul 9 23:49:28.090359 containerd[1903]: time="2025-07-09T23:49:28.089871917Z" level=info msg="CreateContainer within sandbox \"041f724ffb8b418e3344e341659e5f1a370b32728ee3983020853b801a2c764f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 9 23:49:28.126835 containerd[1903]: time="2025-07-09T23:49:28.126794390Z" level=info msg="Container 4d210885adc5fb3a297d07d1c3a232c1e38de820bf1550f851a7833635ed4724: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:49:28.156941 containerd[1903]: time="2025-07-09T23:49:28.156898603Z" level=info msg="CreateContainer within sandbox \"041f724ffb8b418e3344e341659e5f1a370b32728ee3983020853b801a2c764f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4d210885adc5fb3a297d07d1c3a232c1e38de820bf1550f851a7833635ed4724\""
Jul 9 23:49:28.158048 containerd[1903]: time="2025-07-09T23:49:28.157963559Z" level=info msg="StartContainer for \"4d210885adc5fb3a297d07d1c3a232c1e38de820bf1550f851a7833635ed4724\""
Jul 9 23:49:28.159005 containerd[1903]: time="2025-07-09T23:49:28.158920695Z" level=info msg="connecting to shim 4d210885adc5fb3a297d07d1c3a232c1e38de820bf1550f851a7833635ed4724" address="unix:///run/containerd/s/397a1b3d15084344a16602eda35d00b294ee864274fd8c5b931430dc08c0c5d0" protocol=ttrpc version=3
Jul 9 23:49:28.177159 systemd[1]: Started cri-containerd-4d210885adc5fb3a297d07d1c3a232c1e38de820bf1550f851a7833635ed4724.scope - libcontainer container 4d210885adc5fb3a297d07d1c3a232c1e38de820bf1550f851a7833635ed4724.
Jul 9 23:49:28.203882 containerd[1903]: time="2025-07-09T23:49:28.203844939Z" level=info msg="StartContainer for \"4d210885adc5fb3a297d07d1c3a232c1e38de820bf1550f851a7833635ed4724\" returns successfully"
Jul 9 23:49:28.255855 containerd[1903]: time="2025-07-09T23:49:28.255805426Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d210885adc5fb3a297d07d1c3a232c1e38de820bf1550f851a7833635ed4724\" id:\"06f83354e47f736b5866fb615a49da7fd1106bbf571c7504d96abcfd9600058c\" pid:5325 exited_at:{seconds:1752104968 nanos:255536817}"
Jul 9 23:49:28.469076 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 9 23:49:29.368683 containerd[1903]: time="2025-07-09T23:49:29.368560059Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d210885adc5fb3a297d07d1c3a232c1e38de820bf1550f851a7833635ed4724\" id:\"8ac2b9781457f4d39f0b104b1e0255156a541f625713cbe97e14a7ae3efc4c1a\" pid:5400 exit_status:1 exited_at:{seconds:1752104969 nanos:368076891}"
Jul 9 23:49:30.848254 systemd-networkd[1627]: lxc_health: Link UP
Jul 9 23:49:30.858387 systemd-networkd[1627]: lxc_health: Gained carrier
Jul 9 23:49:31.462655 containerd[1903]: time="2025-07-09T23:49:31.462447966Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d210885adc5fb3a297d07d1c3a232c1e38de820bf1550f851a7833635ed4724\" id:\"8fca4e464d83cdbd692ba54cd71ac3b5ada7aab90c93d8f904da487ee9d4a082\" pid:5839 exited_at:{seconds:1752104971 nanos:462145548}"
Jul 9 23:49:31.579170 update_engine[1887]: I20250709 23:49:31.579089 1887 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 9 23:49:31.579510 update_engine[1887]: I20250709 23:49:31.579327 1887 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 9 23:49:31.579598 update_engine[1887]: I20250709 23:49:31.579569 1887 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 9 23:49:31.778104 update_engine[1887]: E20250709 23:49:31.777940 1887 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 9 23:49:31.778104 update_engine[1887]: I20250709 23:49:31.778033 1887 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jul 9 23:49:31.801234 kubelet[3305]: I0709 23:49:31.800961 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2l5dk" podStartSLOduration=8.800945658 podStartE2EDuration="8.800945658s" podCreationTimestamp="2025-07-09 23:49:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:49:29.108697414 +0000 UTC m=+155.437576776" watchObservedRunningTime="2025-07-09 23:49:31.800945658 +0000 UTC m=+158.129825028"
Jul 9 23:49:32.655816 systemd-networkd[1627]: lxc_health: Gained IPv6LL
Jul 9 23:49:33.576880 containerd[1903]: time="2025-07-09T23:49:33.576828284Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d210885adc5fb3a297d07d1c3a232c1e38de820bf1550f851a7833635ed4724\" id:\"e5ae653b2a02726c1b8358e1c096b0f965999b67d1173dda3d3651809b0c2a8b\" pid:5878 exited_at:{seconds:1752104973 nanos:576370469}"
Jul 9 23:49:35.656804 containerd[1903]: time="2025-07-09T23:49:35.656753683Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d210885adc5fb3a297d07d1c3a232c1e38de820bf1550f851a7833635ed4724\" id:\"af78afe7f92f3e666c275cca9abb8c2a058ecd570ce6c540659203486c2b4cb3\" pid:5911 exited_at:{seconds:1752104975 nanos:656290364}"
Jul 9 23:49:37.731463 containerd[1903]: time="2025-07-09T23:49:37.731420619Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d210885adc5fb3a297d07d1c3a232c1e38de820bf1550f851a7833635ed4724\" id:\"f1d06337b984d23093c12db1e3b48ee8a5fc596f79d1c0f26bd98048a5ce3b91\" pid:5934 exited_at:{seconds:1752104977 nanos:730624337}"
Jul 9 23:49:37.825217 sshd[5159]: Connection closed by 10.200.16.10 port 36082
Jul 9 23:49:37.825853 sshd-session[5157]: pam_unix(sshd:session): session closed for user core
Jul 9 23:49:37.829419 systemd-logind[1884]: Session 27 logged out. Waiting for processes to exit.
Jul 9 23:49:37.829952 systemd[1]: sshd@24-10.200.20.10:22-10.200.16.10:36082.service: Deactivated successfully.
Jul 9 23:49:37.832397 systemd[1]: session-27.scope: Deactivated successfully.
Jul 9 23:49:37.834012 systemd-logind[1884]: Removed session 27.