Jul 6 23:47:29.080042 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Jul 6 23:47:29.080059 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sun Jul 6 21:52:18 -00 2025
Jul 6 23:47:29.080066 kernel: KASLR enabled
Jul 6 23:47:29.080070 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 6 23:47:29.080075 kernel: printk: legacy bootconsole [pl11] enabled
Jul 6 23:47:29.080079 kernel: efi: EFI v2.7 by EDK II
Jul 6 23:47:29.080084 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3eac8018 RNG=0x3fd5f998 MEMRESERVE=0x3e471598
Jul 6 23:47:29.080088 kernel: random: crng init done
Jul 6 23:47:29.080092 kernel: secureboot: Secure boot disabled
Jul 6 23:47:29.080096 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:47:29.080100 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jul 6 23:47:29.080104 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:47:29.080108 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:47:29.080113 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 6 23:47:29.080118 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:47:29.080122 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:47:29.080126 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:47:29.080131 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:47:29.080135 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:47:29.080140 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:47:29.080144 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 6 23:47:29.080148 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:47:29.080152 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 6 23:47:29.080157 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 6 23:47:29.080161 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jul 6 23:47:29.080165 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Jul 6 23:47:29.080169 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Jul 6 23:47:29.080173 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jul 6 23:47:29.080178 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jul 6 23:47:29.080183 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jul 6 23:47:29.080187 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jul 6 23:47:29.080191 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jul 6 23:47:29.080195 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jul 6 23:47:29.080199 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jul 6 23:47:29.080203 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jul 6 23:47:29.080208 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jul 6 23:47:29.080212 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Jul 6 23:47:29.080216 kernel: NODE_DATA(0) allocated [mem 0x1bf7fddc0-0x1bf804fff]
Jul 6 23:47:29.080220 kernel: Zone ranges:
Jul 6 23:47:29.080225 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 6 23:47:29.080232 kernel: DMA32 empty
Jul 6 23:47:29.080236 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 6 23:47:29.080240 kernel: Device empty
Jul 6 23:47:29.080245 kernel: Movable zone start for each node
Jul 6 23:47:29.080249 kernel: Early memory node ranges
Jul 6 23:47:29.080255 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 6 23:47:29.080259 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Jul 6 23:47:29.080263 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Jul 6 23:47:29.080268 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Jul 6 23:47:29.080272 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jul 6 23:47:29.080276 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jul 6 23:47:29.080281 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jul 6 23:47:29.080285 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jul 6 23:47:29.080289 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 6 23:47:29.080293 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 6 23:47:29.080298 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 6 23:47:29.080302 kernel: psci: probing for conduit method from ACPI.
Jul 6 23:47:29.080307 kernel: psci: PSCIv1.1 detected in firmware.
Jul 6 23:47:29.080312 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 6 23:47:29.080316 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 6 23:47:29.080320 kernel: psci: SMC Calling Convention v1.4
Jul 6 23:47:29.080325 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 6 23:47:29.080329 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jul 6 23:47:29.080333 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 6 23:47:29.080338 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 6 23:47:29.080342 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 6 23:47:29.080347 kernel: Detected PIPT I-cache on CPU0
Jul 6 23:47:29.080351 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Jul 6 23:47:29.080356 kernel: CPU features: detected: GIC system register CPU interface
Jul 6 23:47:29.080371 kernel: CPU features: detected: Spectre-v4
Jul 6 23:47:29.080376 kernel: CPU features: detected: Spectre-BHB
Jul 6 23:47:29.080380 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 6 23:47:29.080385 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 6 23:47:29.080389 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Jul 6 23:47:29.080394 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 6 23:47:29.080398 kernel: alternatives: applying boot alternatives
Jul 6 23:47:29.080403 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=dd2d39de40482a23e9bb75390ff5ca85cd9bd34d902b8049121a8373f8cb2ef2
Jul 6 23:47:29.080408 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:47:29.080412 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:47:29.080418 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:47:29.080422 kernel: Fallback order for Node 0: 0
Jul 6 23:47:29.080427 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Jul 6 23:47:29.080431 kernel: Policy zone: Normal
Jul 6 23:47:29.080435 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:47:29.080440 kernel: software IO TLB: area num 2.
Jul 6 23:47:29.080444 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB)
Jul 6 23:47:29.080449 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 6 23:47:29.080453 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:47:29.080458 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:47:29.080462 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 6 23:47:29.080468 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:47:29.080473 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:47:29.080477 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:47:29.080481 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 6 23:47:29.080486 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:47:29.080490 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:47:29.080495 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 6 23:47:29.080499 kernel: GICv3: 960 SPIs implemented
Jul 6 23:47:29.080504 kernel: GICv3: 0 Extended SPIs implemented
Jul 6 23:47:29.080508 kernel: Root IRQ handler: gic_handle_irq
Jul 6 23:47:29.080512 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jul 6 23:47:29.080517 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Jul 6 23:47:29.080522 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 6 23:47:29.080526 kernel: ITS: No ITS available, not enabling LPIs
Jul 6 23:47:29.080531 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:47:29.080535 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Jul 6 23:47:29.080540 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 6 23:47:29.080544 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Jul 6 23:47:29.080549 kernel: Console: colour dummy device 80x25
Jul 6 23:47:29.080553 kernel: printk: legacy console [tty1] enabled
Jul 6 23:47:29.080558 kernel: ACPI: Core revision 20240827
Jul 6 23:47:29.080563 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Jul 6 23:47:29.080568 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:47:29.080572 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 6 23:47:29.080577 kernel: landlock: Up and running.
Jul 6 23:47:29.080581 kernel: SELinux: Initializing.
Jul 6 23:47:29.080586 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:47:29.080591 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:47:29.080599 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1
Jul 6 23:47:29.080604 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Jul 6 23:47:29.080609 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 6 23:47:29.080614 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:47:29.080619 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:47:29.080623 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 6 23:47:29.080629 kernel: Remapping and enabling EFI services.
Jul 6 23:47:29.080634 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:47:29.080639 kernel: Detected PIPT I-cache on CPU1
Jul 6 23:47:29.080644 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 6 23:47:29.080648 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Jul 6 23:47:29.080654 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:47:29.080659 kernel: SMP: Total of 2 processors activated.
Jul 6 23:47:29.080663 kernel: CPU: All CPU(s) started at EL1
Jul 6 23:47:29.080668 kernel: CPU features: detected: 32-bit EL0 Support
Jul 6 23:47:29.080673 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 6 23:47:29.080678 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 6 23:47:29.080683 kernel: CPU features: detected: Common not Private translations
Jul 6 23:47:29.080688 kernel: CPU features: detected: CRC32 instructions
Jul 6 23:47:29.080693 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Jul 6 23:47:29.080698 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 6 23:47:29.080703 kernel: CPU features: detected: LSE atomic instructions
Jul 6 23:47:29.080708 kernel: CPU features: detected: Privileged Access Never
Jul 6 23:47:29.080713 kernel: CPU features: detected: Speculation barrier (SB)
Jul 6 23:47:29.080717 kernel: CPU features: detected: TLB range maintenance instructions
Jul 6 23:47:29.080722 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 6 23:47:29.080727 kernel: CPU features: detected: Scalable Vector Extension
Jul 6 23:47:29.080732 kernel: alternatives: applying system-wide alternatives
Jul 6 23:47:29.080737 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jul 6 23:47:29.080742 kernel: SVE: maximum available vector length 16 bytes per vector
Jul 6 23:47:29.080747 kernel: SVE: default vector length 16 bytes per vector
Jul 6 23:47:29.080752 kernel: Memory: 3975672K/4194160K available (11072K kernel code, 2428K rwdata, 9032K rodata, 39424K init, 1035K bss, 213688K reserved, 0K cma-reserved)
Jul 6 23:47:29.080757 kernel: devtmpfs: initialized
Jul 6 23:47:29.080762 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:47:29.080767 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 6 23:47:29.080771 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 6 23:47:29.080776 kernel: 0 pages in range for non-PLT usage
Jul 6 23:47:29.080781 kernel: 508480 pages in range for PLT usage
Jul 6 23:47:29.080786 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:47:29.080791 kernel: SMBIOS 3.1.0 present.
Jul 6 23:47:29.080796 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jul 6 23:47:29.080801 kernel: DMI: Memory slots populated: 2/2
Jul 6 23:47:29.080806 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:47:29.080810 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 6 23:47:29.080815 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 6 23:47:29.080820 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 6 23:47:29.080825 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:47:29.080830 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Jul 6 23:47:29.080835 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:47:29.080840 kernel: cpuidle: using governor menu
Jul 6 23:47:29.080845 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 6 23:47:29.080850 kernel: ASID allocator initialised with 32768 entries
Jul 6 23:47:29.080854 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:47:29.080859 kernel: Serial: AMBA PL011 UART driver
Jul 6 23:47:29.080864 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:47:29.080869 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:47:29.080874 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 6 23:47:29.080879 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 6 23:47:29.080884 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:47:29.080889 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:47:29.080894 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 6 23:47:29.080898 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 6 23:47:29.080903 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:47:29.080908 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:47:29.080913 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:47:29.080918 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:47:29.080923 kernel: ACPI: Interpreter enabled
Jul 6 23:47:29.080928 kernel: ACPI: Using GIC for interrupt routing
Jul 6 23:47:29.080932 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 6 23:47:29.080937 kernel: printk: legacy console [ttyAMA0] enabled
Jul 6 23:47:29.080942 kernel: printk: legacy bootconsole [pl11] disabled
Jul 6 23:47:29.080947 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 6 23:47:29.080951 kernel: ACPI: CPU0 has been hot-added
Jul 6 23:47:29.080956 kernel: ACPI: CPU1 has been hot-added
Jul 6 23:47:29.080962 kernel: iommu: Default domain type: Translated
Jul 6 23:47:29.080967 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 6 23:47:29.080971 kernel: efivars: Registered efivars operations
Jul 6 23:47:29.080976 kernel: vgaarb: loaded
Jul 6 23:47:29.080981 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 6 23:47:29.080986 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:47:29.080991 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:47:29.080995 kernel: pnp: PnP ACPI init
Jul 6 23:47:29.081000 kernel: pnp: PnP ACPI: found 0 devices
Jul 6 23:47:29.081006 kernel: NET: Registered PF_INET protocol family
Jul 6 23:47:29.081010 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:47:29.081015 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 6 23:47:29.081020 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:47:29.081025 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:47:29.081030 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 6 23:47:29.081034 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 6 23:47:29.081039 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:47:29.081044 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:47:29.081050 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:47:29.081054 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:47:29.081059 kernel: kvm [1]: HYP mode not available
Jul 6 23:47:29.081064 kernel: Initialise system trusted keyrings
Jul 6 23:47:29.081068 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 6 23:47:29.081073 kernel: Key type asymmetric registered
Jul 6 23:47:29.081078 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:47:29.081083 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 6 23:47:29.081087 kernel: io scheduler mq-deadline registered
Jul 6 23:47:29.081093 kernel: io scheduler kyber registered
Jul 6 23:47:29.081098 kernel: io scheduler bfq registered
Jul 6 23:47:29.081102 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:47:29.081107 kernel: thunder_xcv, ver 1.0
Jul 6 23:47:29.081112 kernel: thunder_bgx, ver 1.0
Jul 6 23:47:29.081116 kernel: nicpf, ver 1.0
Jul 6 23:47:29.081121 kernel: nicvf, ver 1.0
Jul 6 23:47:29.081228 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 6 23:47:29.081280 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-06T23:47:28 UTC (1751845648)
Jul 6 23:47:29.081286 kernel: efifb: probing for efifb
Jul 6 23:47:29.081291 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 6 23:47:29.081296 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 6 23:47:29.081300 kernel: efifb: scrolling: redraw
Jul 6 23:47:29.081305 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 6 23:47:29.081310 kernel: Console: switching to colour frame buffer device 128x48
Jul 6 23:47:29.081315 kernel: fb0: EFI VGA frame buffer device
Jul 6 23:47:29.081319 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 6 23:47:29.081325 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 6 23:47:29.081330 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 6 23:47:29.081335 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:47:29.081340 kernel: watchdog: NMI not fully supported
Jul 6 23:47:29.081344 kernel: watchdog: Hard watchdog permanently disabled
Jul 6 23:47:29.081349 kernel: Segment Routing with IPv6
Jul 6 23:47:29.081354 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:47:29.081366 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:47:29.081371 kernel: Key type dns_resolver registered
Jul 6 23:47:29.081377 kernel: registered taskstats version 1
Jul 6 23:47:29.081382 kernel: Loading compiled-in X.509 certificates
Jul 6 23:47:29.081387 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: 90fb300ebe1fa0773739bb35dad461c5679d8dfb'
Jul 6 23:47:29.081392 kernel: Demotion targets for Node 0: null
Jul 6 23:47:29.081396 kernel: Key type .fscrypt registered
Jul 6 23:47:29.081401 kernel: Key type fscrypt-provisioning registered
Jul 6 23:47:29.081406 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:47:29.081411 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:47:29.081415 kernel: ima: No architecture policies found
Jul 6 23:47:29.081421 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 6 23:47:29.081426 kernel: clk: Disabling unused clocks
Jul 6 23:47:29.081431 kernel: PM: genpd: Disabling unused power domains
Jul 6 23:47:29.081435 kernel: Warning: unable to open an initial console.
Jul 6 23:47:29.081440 kernel: Freeing unused kernel memory: 39424K
Jul 6 23:47:29.081445 kernel: Run /init as init process
Jul 6 23:47:29.081450 kernel: with arguments:
Jul 6 23:47:29.081454 kernel: /init
Jul 6 23:47:29.081459 kernel: with environment:
Jul 6 23:47:29.081464 kernel: HOME=/
Jul 6 23:47:29.081469 kernel: TERM=linux
Jul 6 23:47:29.081474 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:47:29.081480 systemd[1]: Successfully made /usr/ read-only.
Jul 6 23:47:29.081487 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:47:29.081492 systemd[1]: Detected virtualization microsoft.
Jul 6 23:47:29.081497 systemd[1]: Detected architecture arm64.
Jul 6 23:47:29.081503 systemd[1]: Running in initrd.
Jul 6 23:47:29.081508 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:47:29.081514 systemd[1]: Hostname set to .
Jul 6 23:47:29.081519 systemd[1]: Initializing machine ID from random generator.
Jul 6 23:47:29.081524 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:47:29.081529 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:47:29.081534 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:47:29.081540 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:47:29.081546 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:47:29.081551 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:47:29.081557 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:47:29.081563 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:47:29.081568 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:47:29.081573 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:47:29.081578 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:47:29.081584 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:47:29.081590 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:47:29.081595 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:47:29.081600 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:47:29.081605 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:47:29.081610 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:47:29.081616 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:47:29.081621 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 6 23:47:29.081626 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:47:29.081632 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:47:29.081637 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:47:29.081642 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:47:29.081648 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:47:29.081653 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:47:29.081658 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:47:29.081663 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 6 23:47:29.081669 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:47:29.081675 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:47:29.081680 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:47:29.081685 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:47:29.081700 systemd-journald[224]: Collecting audit messages is disabled.
Jul 6 23:47:29.081718 systemd-journald[224]: Journal started
Jul 6 23:47:29.081732 systemd-journald[224]: Runtime Journal (/run/log/journal/9e7d15b5be8647f6808b4174cdec56b9) is 8M, max 78.5M, 70.5M free.
Jul 6 23:47:29.084575 systemd-modules-load[225]: Inserted module 'overlay'
Jul 6 23:47:29.116969 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:47:29.117000 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:47:29.117008 kernel: Bridge firewalling registered
Jul 6 23:47:29.116964 systemd-modules-load[225]: Inserted module 'br_netfilter'
Jul 6 23:47:29.121408 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:47:29.131273 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:47:29.148378 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:47:29.152133 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:47:29.159656 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:47:29.171307 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:47:29.190842 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:47:29.196923 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:47:29.216592 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:47:29.227577 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:47:29.233780 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:47:29.245489 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:47:29.255452 systemd-tmpfiles[254]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 6 23:47:29.259799 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:47:29.272648 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:47:29.293305 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=dd2d39de40482a23e9bb75390ff5ca85cd9bd34d902b8049121a8373f8cb2ef2
Jul 6 23:47:29.300810 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:47:29.331529 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:47:29.347183 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:47:29.366569 systemd-resolved[283]: Positive Trust Anchors:
Jul 6 23:47:29.366582 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:47:29.366602 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:47:29.368251 systemd-resolved[283]: Defaulting to hostname 'linux'.
Jul 6 23:47:29.370031 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:47:29.381398 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:47:29.457383 kernel: SCSI subsystem initialized
Jul 6 23:47:29.462384 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:47:29.470382 kernel: iscsi: registered transport (tcp)
Jul 6 23:47:29.482664 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:47:29.482676 kernel: QLogic iSCSI HBA Driver
Jul 6 23:47:29.495859 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:47:29.515786 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:47:29.521201 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:47:29.567084 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:47:29.573477 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:47:29.626387 kernel: raid6: neonx8 gen() 18547 MB/s
Jul 6 23:47:29.645368 kernel: raid6: neonx4 gen() 18568 MB/s
Jul 6 23:47:29.664366 kernel: raid6: neonx2 gen() 17093 MB/s
Jul 6 23:47:29.684459 kernel: raid6: neonx1 gen() 15101 MB/s
Jul 6 23:47:29.703386 kernel: raid6: int64x8 gen() 10548 MB/s
Jul 6 23:47:29.722367 kernel: raid6: int64x4 gen() 10617 MB/s
Jul 6 23:47:29.742451 kernel: raid6: int64x2 gen() 8988 MB/s
Jul 6 23:47:29.764549 kernel: raid6: int64x1 gen() 7013 MB/s
Jul 6 23:47:29.764601 kernel: raid6: using algorithm neonx4 gen() 18568 MB/s
Jul 6 23:47:29.787136 kernel: raid6: .... xor() 15151 MB/s, rmw enabled
Jul 6 23:47:29.787178 kernel: raid6: using neon recovery algorithm
Jul 6 23:47:29.795862 kernel: xor: measuring software checksum speed
Jul 6 23:47:29.795899 kernel: 8regs : 28659 MB/sec
Jul 6 23:47:29.799806 kernel: 32regs : 28783 MB/sec
Jul 6 23:47:29.802385 kernel: arm64_neon : 37573 MB/sec
Jul 6 23:47:29.805604 kernel: xor: using function: arm64_neon (37573 MB/sec)
Jul 6 23:47:29.843396 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:47:29.848300 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:47:29.858494 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:47:29.884433 systemd-udevd[474]: Using default interface naming scheme 'v255'.
Jul 6 23:47:29.888571 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:47:29.901015 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:47:29.928055 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation
Jul 6 23:47:29.948760 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:47:29.955425 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:47:30.005696 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:47:30.017464 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:47:30.071395 kernel: hv_vmbus: Vmbus version:5.3
Jul 6 23:47:30.085322 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:47:30.103040 kernel: hv_vmbus: registering driver hid_hyperv
Jul 6 23:47:30.103069 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 6 23:47:30.103076 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jul 6 23:47:30.103083 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 6 23:47:30.085610 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:47:30.142357 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 6 23:47:30.150879 kernel: PTP clock support registered
Jul 6 23:47:30.150889 kernel: hv_utils: Registering HyperV Utility Driver
Jul 6 23:47:30.150895 kernel: hv_vmbus: registering driver hv_utils
Jul 6 23:47:30.150902 kernel: hv_utils: Heartbeat IC version 3.0
Jul 6 23:47:30.150908 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 6 23:47:30.150914 kernel: hv_utils: Shutdown IC version 3.2
Jul 6 23:47:30.150920 kernel: hv_vmbus: registering driver hv_storvsc
Jul 6 23:47:30.121590 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:47:29.970036 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jul 6 23:47:29.974579 kernel: hv_utils: TimeSync IC version 4.0
Jul 6 23:47:29.974593 kernel: scsi host1: storvsc_host_t
Jul 6 23:47:29.974717 kernel: scsi host0: storvsc_host_t
Jul 6 23:47:29.974783 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 6 23:47:29.974853 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jul 6 23:47:29.974919 kernel: hv_vmbus: registering driver hv_netvsc
Jul 6 23:47:29.974925 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 6 23:47:29.974986 systemd-journald[224]: Time jumped backwards, rotating.
Jul 6 23:47:29.975012 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 6 23:47:30.129696 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:47:30.000493 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 6 23:47:30.000680 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 6 23:47:30.000751 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 6 23:47:30.000820 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 6 23:47:29.932482 systemd-resolved[283]: Clock change detected. Flushing caches.
Jul 6 23:47:29.996101 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:47:30.015438 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#173 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 6 23:47:30.025205 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:47:30.029831 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 6 23:47:30.037223 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 6 23:47:30.037372 kernel: hv_netvsc 002248bc-65c6-0022-48bc-65c6002248bc eth0: VF slot 1 added
Jul 6 23:47:30.037441 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 6 23:47:30.048206 kernel: hv_vmbus: registering driver hv_pci
Jul 6 23:47:30.054200 kernel: hv_pci b5349c2d-2a6c-446e-a9db-a22c5ee743d7: PCI VMBus probing: Using version 0x10004
Jul 6 23:47:30.054341 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 6 23:47:30.066844 kernel: hv_pci b5349c2d-2a6c-446e-a9db-a22c5ee743d7: PCI host bridge to bus 2a6c:00
Jul 6 23:47:30.066995 kernel: pci_bus 2a6c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 6 23:47:30.072452 kernel: pci_bus 2a6c:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 6 23:47:30.078252 kernel: pci 2a6c:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Jul 6 23:47:30.085196 kernel: pci 2a6c:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 6 23:47:30.085224 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#173 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 6 23:47:30.094766 kernel: pci 2a6c:00:02.0: enabling Extended Tags
Jul 6 23:47:30.111197 kernel: pci 2a6c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2a6c:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Jul 6 23:47:30.111242 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#147 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 6 23:47:30.121431 kernel: pci_bus 2a6c:00: busn_res: [bus 00-ff] end is updated to 00
Jul 6 23:47:30.126230 kernel: pci 2a6c:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Jul 6 23:47:30.184762 kernel: mlx5_core 2a6c:00:02.0: enabling device (0000 -> 0002)
Jul 6 23:47:30.193476 kernel: mlx5_core 2a6c:00:02.0: PTM is not supported by PCIe
Jul 6 23:47:30.193600 kernel: mlx5_core 2a6c:00:02.0: firmware version: 16.30.5006
Jul 6 23:47:30.367662 kernel: hv_netvsc 002248bc-65c6-0022-48bc-65c6002248bc eth0: VF registering: eth1
Jul 6 23:47:30.367851 kernel: mlx5_core 2a6c:00:02.0 eth1: joined to eth0
Jul 6 23:47:30.374207 kernel: mlx5_core 2a6c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 6 23:47:30.383203 kernel: mlx5_core 2a6c:00:02.0 enP10860s1: renamed from eth1
Jul 6 23:47:30.596726 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 6 23:47:30.649423 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 6 23:47:30.739506 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 6 23:47:30.744592 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 6 23:47:30.755313 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:47:30.817842 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 6 23:47:30.926234 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:47:30.930709 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:47:30.939622 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:47:30.949057 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:47:30.962313 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:47:30.984888 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:47:31.804205 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#174 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 6 23:47:31.815178 disk-uuid[639]: The operation has completed successfully.
Jul 6 23:47:31.819542 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:47:31.872142 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:47:31.872246 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:47:31.906973 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:47:31.931094 sh[817]: Success
Jul 6 23:47:31.965845 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:47:31.965884 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:47:31.970822 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 6 23:47:31.979209 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 6 23:47:32.251235 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:47:32.260664 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:47:32.272117 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:47:32.290199 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 6 23:47:32.290234 kernel: BTRFS: device fsid aa7ffdf7-f152-4ceb-bd0e-b3b3f8f8b296 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (835)
Jul 6 23:47:32.302256 kernel: BTRFS info (device dm-0): first mount of filesystem aa7ffdf7-f152-4ceb-bd0e-b3b3f8f8b296
Jul 6 23:47:32.307427 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:47:32.310698 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 6 23:47:32.572314 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:47:32.577179 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 6 23:47:32.585289 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:47:32.585958 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:47:32.608885 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:47:32.635906 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (858)
Jul 6 23:47:32.635936 kernel: BTRFS info (device sda6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93
Jul 6 23:47:32.641191 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:47:32.644608 kernel: BTRFS info (device sda6): using free-space-tree
Jul 6 23:47:32.670228 kernel: BTRFS info (device sda6): last unmount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93
Jul 6 23:47:32.671641 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:47:32.677826 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:47:32.732287 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:47:32.743968 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:47:32.769534 systemd-networkd[1004]: lo: Link UP
Jul 6 23:47:32.769544 systemd-networkd[1004]: lo: Gained carrier
Jul 6 23:47:32.770747 systemd-networkd[1004]: Enumeration completed
Jul 6 23:47:32.772672 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:47:32.773001 systemd-networkd[1004]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:47:32.773004 systemd-networkd[1004]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:47:32.777550 systemd[1]: Reached target network.target - Network.
Jul 6 23:47:32.853207 kernel: mlx5_core 2a6c:00:02.0 enP10860s1: Link up
Jul 6 23:47:32.888305 kernel: hv_netvsc 002248bc-65c6-0022-48bc-65c6002248bc eth0: Data path switched to VF: enP10860s1
Jul 6 23:47:32.888014 systemd-networkd[1004]: enP10860s1: Link UP
Jul 6 23:47:32.888074 systemd-networkd[1004]: eth0: Link UP
Jul 6 23:47:32.888171 systemd-networkd[1004]: eth0: Gained carrier
Jul 6 23:47:32.888180 systemd-networkd[1004]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:47:32.907922 systemd-networkd[1004]: enP10860s1: Gained carrier
Jul 6 23:47:32.920218 systemd-networkd[1004]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 6 23:47:34.172059 ignition[924]: Ignition 2.21.0
Jul 6 23:47:34.172074 ignition[924]: Stage: fetch-offline
Jul 6 23:47:34.175708 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:47:34.172146 ignition[924]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:47:34.185061 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 6 23:47:34.172152 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:47:34.172265 ignition[924]: parsed url from cmdline: ""
Jul 6 23:47:34.172268 ignition[924]: no config URL provided
Jul 6 23:47:34.172271 ignition[924]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:47:34.172281 ignition[924]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:47:34.172284 ignition[924]: failed to fetch config: resource requires networking
Jul 6 23:47:34.172488 ignition[924]: Ignition finished successfully
Jul 6 23:47:34.219982 ignition[1016]: Ignition 2.21.0
Jul 6 23:47:34.219987 ignition[1016]: Stage: fetch
Jul 6 23:47:34.220175 ignition[1016]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:47:34.220182 ignition[1016]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:47:34.220269 ignition[1016]: parsed url from cmdline: ""
Jul 6 23:47:34.220272 ignition[1016]: no config URL provided
Jul 6 23:47:34.220276 ignition[1016]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:47:34.220281 ignition[1016]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:47:34.220321 ignition[1016]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 6 23:47:34.291493 ignition[1016]: GET result: OK
Jul 6 23:47:34.291584 ignition[1016]: config has been read from IMDS userdata
Jul 6 23:47:34.291618 ignition[1016]: parsing config with SHA512: e7dd525408f47d05868ebc408faa1ad7551dc45a2658d590eabf9f0e84080e0d3b455f9ce1e62734afa5518d153850a05046dd682b4c1941a7972926e3f7e733
Jul 6 23:47:34.297072 unknown[1016]: fetched base config from "system"
Jul 6 23:47:34.297370 ignition[1016]: fetch: fetch complete
Jul 6 23:47:34.297077 unknown[1016]: fetched base config from "system"
Jul 6 23:47:34.297374 ignition[1016]: fetch: fetch passed
Jul 6 23:47:34.297081 unknown[1016]: fetched user config from "azure"
Jul 6 23:47:34.297411 ignition[1016]: Ignition finished successfully
Jul 6 23:47:34.300433 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 6 23:47:34.306868 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:47:34.343135 ignition[1023]: Ignition 2.21.0
Jul 6 23:47:34.343146 ignition[1023]: Stage: kargs
Jul 6 23:47:34.343374 ignition[1023]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:47:34.347847 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:47:34.343382 ignition[1023]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:47:34.355984 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:47:34.345221 ignition[1023]: kargs: kargs passed
Jul 6 23:47:34.345279 ignition[1023]: Ignition finished successfully
Jul 6 23:47:34.386324 ignition[1030]: Ignition 2.21.0
Jul 6 23:47:34.386334 ignition[1030]: Stage: disks
Jul 6 23:47:34.390256 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:47:34.386654 ignition[1030]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:47:34.396948 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:47:34.386663 ignition[1030]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:47:34.410224 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:47:34.387837 ignition[1030]: disks: disks passed
Jul 6 23:47:34.419149 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:47:34.387909 ignition[1030]: Ignition finished successfully
Jul 6 23:47:34.428284 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:47:34.437280 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:47:34.448466 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:47:34.521934 systemd-fsck[1038]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Jul 6 23:47:34.526307 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:47:34.534102 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:47:34.633283 systemd-networkd[1004]: enP10860s1: Gained IPv6LL
Jul 6 23:47:34.754200 kernel: EXT4-fs (sda9): mounted filesystem a6b10247-fbe6-4a25-95d9-ddd4b58604ec r/w with ordered data mode. Quota mode: none.
Jul 6 23:47:34.754527 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:47:34.758784 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:47:34.783373 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:47:34.800707 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:47:34.808831 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 6 23:47:34.821149 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:47:34.844661 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1052)
Jul 6 23:47:34.844680 kernel: BTRFS info (device sda6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93
Jul 6 23:47:34.821227 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:47:34.873125 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:47:34.873144 kernel: BTRFS info (device sda6): using free-space-tree
Jul 6 23:47:34.827930 systemd-networkd[1004]: eth0: Gained IPv6LL
Jul 6 23:47:34.835025 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:47:34.868367 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:47:34.879371 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:47:35.643912 coreos-metadata[1054]: Jul 06 23:47:35.643 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 6 23:47:35.652508 coreos-metadata[1054]: Jul 06 23:47:35.652 INFO Fetch successful
Jul 6 23:47:35.657037 coreos-metadata[1054]: Jul 06 23:47:35.656 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 6 23:47:35.666259 coreos-metadata[1054]: Jul 06 23:47:35.666 INFO Fetch successful
Jul 6 23:47:35.680288 coreos-metadata[1054]: Jul 06 23:47:35.680 INFO wrote hostname ci-4344.1.1-a-aa3e6ac533 to /sysroot/etc/hostname
Jul 6 23:47:35.689235 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 6 23:47:35.901124 initrd-setup-root[1082]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:47:35.937905 initrd-setup-root[1089]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:47:35.957202 initrd-setup-root[1096]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:47:35.963118 initrd-setup-root[1103]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:47:36.825937 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:47:36.831825 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:47:36.848887 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:47:36.858822 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:47:36.872226 kernel: BTRFS info (device sda6): last unmount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93
Jul 6 23:47:36.885162 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:47:36.898913 ignition[1172]: INFO : Ignition 2.21.0
Jul 6 23:47:36.898913 ignition[1172]: INFO : Stage: mount
Jul 6 23:47:36.906005 ignition[1172]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:47:36.906005 ignition[1172]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:47:36.906005 ignition[1172]: INFO : mount: mount passed
Jul 6 23:47:36.906005 ignition[1172]: INFO : Ignition finished successfully
Jul 6 23:47:36.904489 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:47:36.914363 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:47:36.939392 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:47:36.973412 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1182)
Jul 6 23:47:36.973442 kernel: BTRFS info (device sda6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93
Jul 6 23:47:36.978010 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:47:36.981300 kernel: BTRFS info (device sda6): using free-space-tree
Jul 6 23:47:36.983695 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:47:37.005818 ignition[1199]: INFO : Ignition 2.21.0
Jul 6 23:47:37.009149 ignition[1199]: INFO : Stage: files
Jul 6 23:47:37.012071 ignition[1199]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:47:37.012071 ignition[1199]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:47:37.012071 ignition[1199]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:47:37.042357 ignition[1199]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:47:37.042357 ignition[1199]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:47:37.091057 ignition[1199]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:47:37.098782 ignition[1199]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:47:37.098782 ignition[1199]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:47:37.091480 unknown[1199]: wrote ssh authorized keys file for user: core
Jul 6 23:47:37.116626 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 6 23:47:37.116626 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 6 23:47:37.156925 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:47:37.330416 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 6 23:47:37.339192 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 6 23:47:37.339192 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 6 23:47:37.675579 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 6 23:47:37.744737 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 6 23:47:37.744737 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:47:37.761143 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:47:37.761143 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:47:37.761143 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:47:37.761143 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:47:37.761143 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:47:37.761143 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:47:37.761143 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:47:37.816220 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:47:37.816220 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:47:37.816220 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 6 23:47:37.816220 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 6 23:47:37.816220 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 6 23:47:37.816220 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 6 23:47:38.412538 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 6 23:47:38.618210 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 6 23:47:38.627585 ignition[1199]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 6 23:47:38.649470 ignition[1199]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:47:38.659445 ignition[1199]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:47:38.659445 ignition[1199]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 6 23:47:38.673719 ignition[1199]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:47:38.673719 ignition[1199]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:47:38.673719 ignition[1199]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:47:38.673719 ignition[1199]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:47:38.673719 ignition[1199]: INFO : files: files passed
Jul 6 23:47:38.673719 ignition[1199]: INFO : Ignition finished successfully
Jul 6 23:47:38.669282 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:47:38.678879 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:47:38.684113 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:47:38.708889 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:47:38.781858 initrd-setup-root-after-ignition[1227]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:47:38.781858 initrd-setup-root-after-ignition[1227]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:47:38.708971 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:47:38.801903 initrd-setup-root-after-ignition[1231]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:47:38.723836 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:47:38.737799 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:47:38.747027 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:47:38.811380 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:47:38.811551 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:47:38.821570 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:47:38.830837 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:47:38.841475 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:47:38.842225 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:47:38.882827 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:47:38.890229 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:47:38.915751 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:47:38.921244 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:47:38.931521 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:47:38.941355 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:47:38.941511 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:47:38.955315 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:47:38.967139 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:47:38.976261 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:47:38.985143 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:47:38.994652 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:47:39.004092 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 6 23:47:39.013924 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:47:39.023147 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:47:39.032309 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:47:39.042012 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:47:39.050762 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:47:39.060719 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:47:39.060875 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:47:39.074715 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:47:39.083500 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:47:39.093872 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:47:39.093957 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:47:39.105275 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:47:39.105421 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:47:39.120293 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:47:39.120420 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:47:39.129502 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:47:39.129613 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:47:39.138813 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 6 23:47:39.138912 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 6 23:47:39.156436 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:47:39.207377 ignition[1252]: INFO : Ignition 2.21.0
Jul 6 23:47:39.207377 ignition[1252]: INFO : Stage: umount
Jul 6 23:47:39.207377 ignition[1252]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:47:39.207377 ignition[1252]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:47:39.207377 ignition[1252]: INFO : umount: umount passed
Jul 6 23:47:39.207377 ignition[1252]: INFO : Ignition finished successfully
Jul 6 23:47:39.165781 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 6 23:47:39.165996 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:47:39.178295 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:47:39.194124 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:47:39.194243 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:47:39.202950 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:47:39.203063 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:47:39.219560 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 6 23:47:39.220332 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:47:39.220419 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:47:39.232822 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:47:39.232920 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:47:39.243287 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:47:39.243333 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:47:39.248591 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:47:39.248659 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:47:39.256914 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 6 23:47:39.256947 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 6 23:47:39.267048 systemd[1]: Stopped target network.target - Network.
Jul 6 23:47:39.276409 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:47:39.276482 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:47:39.287709 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:47:39.296834 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:47:39.300200 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:47:39.307644 systemd[1]: Stopped target slices.target - Slice Units.
Jul 6 23:47:39.315843 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 6 23:47:39.324327 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 6 23:47:39.324370 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:47:39.333179 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 6 23:47:39.333211 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:47:39.341309 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 6 23:47:39.341355 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 6 23:47:39.349462 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 6 23:47:39.349488 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 6 23:47:39.358796 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 6 23:47:39.368151 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 6 23:47:39.373120 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 6 23:47:39.373222 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 6 23:47:39.388528 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 6 23:47:39.388704 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 6 23:47:39.388789 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 6 23:47:39.400823 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 6 23:47:39.613122 kernel: hv_netvsc 002248bc-65c6-0022-48bc-65c6002248bc eth0: Data path switched from VF: enP10860s1
Jul 6 23:47:39.400994 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 6 23:47:39.401124 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 6 23:47:39.408616 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 6 23:47:39.416963 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 6 23:47:39.417006 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:47:39.426386 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 6 23:47:39.426433 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 6 23:47:39.436127 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 6 23:47:39.452509 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 6 23:47:39.452711 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:47:39.461976 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 6 23:47:39.462046 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:47:39.481836 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 6 23:47:39.481888 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:47:39.486782 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 6 23:47:39.486822 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:47:39.500652 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:47:39.509149 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 6 23:47:39.509217 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:47:39.535839 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 6 23:47:39.535994 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:47:39.545507 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 6 23:47:39.545540 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:47:39.554651 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 6 23:47:39.554684 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:47:39.563998 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 6 23:47:39.564033 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:47:39.576475 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 6 23:47:39.576520 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:47:39.597939 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:47:39.597991 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:47:39.613974 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 6 23:47:39.628268 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 6 23:47:39.628349 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:47:39.643110 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 6 23:47:39.643159 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:47:39.653874 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:47:39.653920 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:47:39.664794 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 6 23:47:39.664843 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 6 23:47:39.664869 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:47:39.665140 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 6 23:47:39.665269 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 6 23:47:39.684715 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 6 23:47:39.684840 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 6 23:47:39.694217 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 6 23:47:39.703523 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 6 23:47:39.735873 systemd[1]: Switching root.
Jul 6 23:47:39.980994 systemd-journald[224]: Journal stopped
Jul 6 23:47:44.316720 systemd-journald[224]: Received SIGTERM from PID 1 (systemd).
Jul 6 23:47:44.316740 kernel: SELinux: policy capability network_peer_controls=1
Jul 6 23:47:44.316748 kernel: SELinux: policy capability open_perms=1
Jul 6 23:47:44.316755 kernel: SELinux: policy capability extended_socket_class=1
Jul 6 23:47:44.316760 kernel: SELinux: policy capability always_check_network=0
Jul 6 23:47:44.316765 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 6 23:47:44.316771 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 6 23:47:44.316776 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 6 23:47:44.316799 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 6 23:47:44.316804 kernel: SELinux: policy capability userspace_initial_context=0
Jul 6 23:47:44.316810 kernel: audit: type=1403 audit(1751845661.258:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 6 23:47:44.316816 systemd[1]: Successfully loaded SELinux policy in 167.890ms.
Jul 6 23:47:44.316823 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.828ms.
Jul 6 23:47:44.316830 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:47:44.316836 systemd[1]: Detected virtualization microsoft.
Jul 6 23:47:44.316844 systemd[1]: Detected architecture arm64.
Jul 6 23:47:44.316849 systemd[1]: Detected first boot.
Jul 6 23:47:44.316855 systemd[1]: Hostname set to .
Jul 6 23:47:44.316861 systemd[1]: Initializing machine ID from random generator.
Jul 6 23:47:44.316867 zram_generator::config[1295]: No configuration found.
Jul 6 23:47:44.316875 kernel: NET: Registered PF_VSOCK protocol family
Jul 6 23:47:44.316880 systemd[1]: Populated /etc with preset unit settings.
Jul 6 23:47:44.316888 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 6 23:47:44.316893 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 6 23:47:44.316899 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 6 23:47:44.316905 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:47:44.316911 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 6 23:47:44.316917 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 6 23:47:44.316923 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 6 23:47:44.316930 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 6 23:47:44.316936 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 6 23:47:44.316942 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 6 23:47:44.316948 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 6 23:47:44.316954 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 6 23:47:44.316960 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:47:44.316966 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:47:44.316972 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 6 23:47:44.316979 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 6 23:47:44.316985 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 6 23:47:44.316991 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:47:44.316999 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 6 23:47:44.317006 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:47:44.317012 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:47:44.317018 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 6 23:47:44.317024 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 6 23:47:44.317031 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:47:44.317037 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 6 23:47:44.317043 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:47:44.317049 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:47:44.317055 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:47:44.317061 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:47:44.317067 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 6 23:47:44.317073 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 6 23:47:44.317081 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 6 23:47:44.317087 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:47:44.317093 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:47:44.317099 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:47:44.317105 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 6 23:47:44.317112 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 6 23:47:44.317118 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 6 23:47:44.317124 systemd[1]: Mounting media.mount - External Media Directory...
Jul 6 23:47:44.317130 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 6 23:47:44.317137 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 6 23:47:44.317143 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 6 23:47:44.317149 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 6 23:47:44.317156 systemd[1]: Reached target machines.target - Containers.
Jul 6 23:47:44.317163 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 6 23:47:44.317169 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:47:44.317175 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:47:44.317182 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 6 23:47:44.318816 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:47:44.318825 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:47:44.318833 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:47:44.318842 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 6 23:47:44.318852 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:47:44.318858 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 6 23:47:44.318865 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 6 23:47:44.318871 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 6 23:47:44.318877 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 6 23:47:44.318883 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 6 23:47:44.318890 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:47:44.318896 kernel: fuse: init (API version 7.41)
Jul 6 23:47:44.318903 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:47:44.318910 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:47:44.318916 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:47:44.318922 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 6 23:47:44.318929 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 6 23:47:44.318935 kernel: ACPI: bus type drm_connector registered
Jul 6 23:47:44.318961 systemd-journald[1399]: Collecting audit messages is disabled.
Jul 6 23:47:44.318977 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:47:44.318985 systemd-journald[1399]: Journal started
Jul 6 23:47:44.319000 systemd-journald[1399]: Runtime Journal (/run/log/journal/5adb3aee39be4fc0be0fef3acb669bad) is 8M, max 78.5M, 70.5M free.
Jul 6 23:47:43.411606 systemd[1]: Queued start job for default target multi-user.target.
Jul 6 23:47:43.426769 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 6 23:47:43.427161 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 6 23:47:43.427460 systemd[1]: systemd-journald.service: Consumed 2.747s CPU time.
Jul 6 23:47:44.324178 kernel: loop: module loaded
Jul 6 23:47:44.340623 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 6 23:47:44.340680 systemd[1]: Stopped verity-setup.service.
Jul 6 23:47:44.357673 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:47:44.358327 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 6 23:47:44.364026 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 6 23:47:44.371017 systemd[1]: Mounted media.mount - External Media Directory.
Jul 6 23:47:44.376603 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 6 23:47:44.382816 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 6 23:47:44.388435 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 6 23:47:44.393105 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 6 23:47:44.400001 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:47:44.408164 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 6 23:47:44.408391 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 6 23:47:44.415094 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:47:44.415275 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:47:44.421703 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:47:44.421834 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:47:44.427686 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:47:44.427832 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:47:44.434592 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 6 23:47:44.434725 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 6 23:47:44.440662 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:47:44.440790 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:47:44.447787 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:47:44.453780 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:47:44.460796 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 6 23:47:44.466992 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 6 23:47:44.473341 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:47:44.487950 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:47:44.494555 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 6 23:47:44.507060 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 6 23:47:44.513486 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 6 23:47:44.513517 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:47:44.519368 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 6 23:47:44.529214 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 6 23:47:44.534860 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:47:44.543320 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 6 23:47:44.550839 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 6 23:47:44.557094 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:47:44.559305 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 6 23:47:44.565172 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:47:44.566098 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:47:44.572372 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 6 23:47:44.584514 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 6 23:47:44.591422 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 6 23:47:44.597995 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 6 23:47:44.606768 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 6 23:47:44.614800 systemd-journald[1399]: Time spent on flushing to /var/log/journal/5adb3aee39be4fc0be0fef3acb669bad is 59.844ms for 935 entries.
Jul 6 23:47:44.614800 systemd-journald[1399]: System Journal (/var/log/journal/5adb3aee39be4fc0be0fef3acb669bad) is 11.8M, max 2.6G, 2.6G free.
Jul 6 23:47:44.726518 systemd-journald[1399]: Received client request to flush runtime journal.
Jul 6 23:47:44.726564 systemd-journald[1399]: /var/log/journal/5adb3aee39be4fc0be0fef3acb669bad/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Jul 6 23:47:44.726583 systemd-journald[1399]: Rotating system journal.
Jul 6 23:47:44.726601 kernel: loop0: detected capacity change from 0 to 203944
Jul 6 23:47:44.726616 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 6 23:47:44.615984 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 6 23:47:44.632237 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 6 23:47:44.657919 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:47:44.727688 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 6 23:47:44.729539 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 6 23:47:44.736307 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 6 23:47:44.742116 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 6 23:47:44.751009 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:47:44.771206 kernel: loop1: detected capacity change from 0 to 138376
Jul 6 23:47:44.836351 systemd-tmpfiles[1451]: ACLs are not supported, ignoring.
Jul 6 23:47:44.836714 systemd-tmpfiles[1451]: ACLs are not supported, ignoring.
Jul 6 23:47:44.839983 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:47:45.207207 kernel: loop2: detected capacity change from 0 to 28936
Jul 6 23:47:45.514210 kernel: loop3: detected capacity change from 0 to 107312
Jul 6 23:47:45.652298 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 6 23:47:45.659591 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:47:45.690238 systemd-udevd[1459]: Using default interface naming scheme 'v255'.
Jul 6 23:47:45.851204 kernel: loop4: detected capacity change from 0 to 203944
Jul 6 23:47:45.861206 kernel: loop5: detected capacity change from 0 to 138376
Jul 6 23:47:45.869210 kernel: loop6: detected capacity change from 0 to 28936
Jul 6 23:47:45.878307 kernel: loop7: detected capacity change from 0 to 107312
Jul 6 23:47:45.885104 (sd-merge)[1461]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jul 6 23:47:45.885634 (sd-merge)[1461]: Merged extensions into '/usr'.
Jul 6 23:47:45.888946 systemd[1]: Reload requested from client PID 1434 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 6 23:47:45.888960 systemd[1]: Reloading...
Jul 6 23:47:46.008207 zram_generator::config[1519]: No configuration found.
Jul 6 23:47:46.090827 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#147 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 6 23:47:46.116376 kernel: mousedev: PS/2 mouse device common for all mice
Jul 6 23:47:46.151620 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:47:46.171328 kernel: hv_vmbus: registering driver hv_balloon
Jul 6 23:47:46.171432 kernel: hv_vmbus: registering driver hyperv_fb
Jul 6 23:47:46.171445 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jul 6 23:47:46.171459 kernel: hv_balloon: Memory hot add disabled on ARM64
Jul 6 23:47:46.199305 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jul 6 23:47:46.207812 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jul 6 23:47:46.217021 kernel: Console: switching to colour dummy device 80x25
Jul 6 23:47:46.225649 kernel: Console: switching to colour frame buffer device 128x48
Jul 6 23:47:46.279570 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 6 23:47:46.279650 systemd[1]: Reloading finished in 390 ms.
Jul 6 23:47:46.291394 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:47:46.299973 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 6 23:47:46.346551 systemd[1]: Starting ensure-sysext.service...
Jul 6 23:47:46.357425 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:47:46.369248 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:47:46.384487 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:47:46.396690 systemd-tmpfiles[1659]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 6 23:47:46.396811 systemd-tmpfiles[1659]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 6 23:47:46.396991 systemd-tmpfiles[1659]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 6 23:47:46.397127 systemd-tmpfiles[1659]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 6 23:47:46.397577 systemd-tmpfiles[1659]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 6 23:47:46.397721 systemd-tmpfiles[1659]: ACLs are not supported, ignoring.
Jul 6 23:47:46.397754 systemd-tmpfiles[1659]: ACLs are not supported, ignoring.
Jul 6 23:47:46.403488 systemd-tmpfiles[1659]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:47:46.403608 systemd-tmpfiles[1659]: Skipping /boot
Jul 6 23:47:46.413356 systemd[1]: Reload requested from client PID 1652 ('systemctl') (unit ensure-sysext.service)...
Jul 6 23:47:46.413366 systemd[1]: Reloading...
Jul 6 23:47:46.414202 kernel: MACsec IEEE 802.1AE
Jul 6 23:47:46.415584 systemd-tmpfiles[1659]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:47:46.415598 systemd-tmpfiles[1659]: Skipping /boot
Jul 6 23:47:46.479212 zram_generator::config[1694]: No configuration found.
Jul 6 23:47:46.551412 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:47:46.629823 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 6 23:47:46.635474 systemd[1]: Reloading finished in 221 ms.
Jul 6 23:47:46.645509 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:47:46.679084 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:47:46.691063 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 6 23:47:46.696218 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:47:46.698708 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:47:46.707168 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:47:46.718474 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:47:46.724126 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:47:46.726382 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 6 23:47:46.735122 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:47:46.737051 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 6 23:47:46.748396 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:47:46.757469 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 6 23:47:46.770905 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 6 23:47:46.778055 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:47:46.778265 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:47:46.787288 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:47:46.788408 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:47:46.788732 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:47:46.800617 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:47:46.802548 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:47:46.810847 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:47:46.812231 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:47:46.818881 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 6 23:47:46.826030 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 6 23:47:46.841579 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 6 23:47:46.847200 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:47:46.848650 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:47:46.860442 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:47:46.870162 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:47:46.877332 augenrules[1797]: No rules
Jul 6 23:47:46.879641 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:47:46.879892 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:47:46.883522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:47:46.896816 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:47:46.897578 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:47:46.905154 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 6 23:47:46.916579 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:47:46.917277 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:47:46.925764 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:47:46.926152 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:47:46.933774 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:47:46.933961 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:47:46.951009 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:47:46.962633 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:47:46.970057 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:47:46.976929 systemd-resolved[1769]: Positive Trust Anchors:
Jul 6 23:47:46.976944 systemd-resolved[1769]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:47:46.976963 systemd-resolved[1769]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:47:46.981366 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:47:46.981942 systemd-resolved[1769]: Using system hostname 'ci-4344.1.1-a-aa3e6ac533'.
Jul 6 23:47:46.991474 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:47:47.000824 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:47:47.009647 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:47:47.014733 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:47:47.014777 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:47:47.014822 systemd[1]: Reached target time-set.target - System Time Set.
Jul 6 23:47:47.020479 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:47:47.024122 augenrules[1818]: /sbin/augenrules: No change
Jul 6 23:47:47.038152 augenrules[1840]: No rules
Jul 6 23:47:47.056700 systemd[1]: Finished ensure-sysext.service.
Jul 6 23:47:47.061476 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:47:47.061647 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:47:47.066807 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:47:47.066937 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:47:47.072672 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:47:47.074221 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:47:47.078966 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:47:47.079087 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:47:47.084710 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:47:47.084828 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:47:47.094034 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:47:47.099498 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:47:47.099563 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:47:47.100348 systemd-networkd[1657]: lo: Link UP
Jul 6 23:47:47.100354 systemd-networkd[1657]: lo: Gained carrier
Jul 6 23:47:47.103331 systemd-networkd[1657]: Enumeration completed
Jul 6 23:47:47.103478 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:47:47.103944 systemd-networkd[1657]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:47:47.103954 systemd-networkd[1657]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:47:47.108964 systemd[1]: Reached target network.target - Network.
Jul 6 23:47:47.114480 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 6 23:47:47.122482 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 6 23:47:47.165519 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 6 23:47:47.171664 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 6 23:47:47.176201 kernel: mlx5_core 2a6c:00:02.0 enP10860s1: Link up
Jul 6 23:47:47.203360 kernel: hv_netvsc 002248bc-65c6-0022-48bc-65c6002248bc eth0: Data path switched to VF: enP10860s1
Jul 6 23:47:47.204384 systemd-networkd[1657]: enP10860s1: Link UP
Jul 6 23:47:47.204615 systemd-networkd[1657]: eth0: Link UP
Jul 6 23:47:47.204671 systemd-networkd[1657]: eth0: Gained carrier
Jul 6 23:47:47.204719 systemd-networkd[1657]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:47:47.205864 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 6 23:47:47.215796 systemd-networkd[1657]: enP10860s1: Gained carrier
Jul 6 23:47:47.226223 systemd-networkd[1657]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 6 23:47:48.457662 systemd-networkd[1657]: enP10860s1: Gained IPv6LL
Jul 6 23:47:49.097403 systemd-networkd[1657]: eth0: Gained IPv6LL
Jul 6 23:47:49.098912 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 6 23:47:49.106365 systemd[1]: Reached target network-online.target - Network is Online.
Jul 6 23:47:49.454929 ldconfig[1429]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 6 23:47:49.466103 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 6 23:47:49.472913 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 6 23:47:49.486385 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 6 23:47:49.492526 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:47:49.498017 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 6 23:47:49.504034 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 6 23:47:49.510560 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 6 23:47:49.515935 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 6 23:47:49.522277 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 6 23:47:49.528710 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 6 23:47:49.528738 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:47:49.532910 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:47:49.538348 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 6 23:47:49.544754 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 6 23:47:49.550684 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 6 23:47:49.624282 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 6 23:47:49.630039 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 6 23:47:49.636731 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 6 23:47:49.641920 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 6 23:47:49.647749 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 6 23:47:49.652646 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:47:49.656597 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:47:49.660673 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:47:49.660696 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:47:49.662758 systemd[1]: Starting chronyd.service - NTP client/server...
Jul 6 23:47:49.676293 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 6 23:47:49.690309 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 6 23:47:49.698318 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 6 23:47:49.706352 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 6 23:47:49.714933 (chronyd)[1862]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jul 6 23:47:49.715736 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 6 23:47:49.729054 jq[1870]: false
Jul 6 23:47:49.729357 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 6 23:47:49.734681 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 6 23:47:49.736393 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jul 6 23:47:49.742675 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jul 6 23:47:49.743730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:47:49.751370 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 6 23:47:49.764614 chronyd[1880]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jul 6 23:47:49.764711 KVP[1872]: KVP starting; pid is:1872
Jul 6 23:47:49.765422 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 6 23:47:49.775737 KVP[1872]: KVP LIC Version: 3.1
Jul 6 23:47:49.776199 kernel: hv_utils: KVP IC version 4.0
Jul 6 23:47:49.776366 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 6 23:47:49.783791 extend-filesystems[1871]: Found /dev/sda6
Jul 6 23:47:49.787020 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 6 23:47:49.784448 chronyd[1880]: Timezone right/UTC failed leap second check, ignoring
Jul 6 23:47:49.784611 chronyd[1880]: Loaded seccomp filter (level 2)
Jul 6 23:47:49.805052 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 6 23:47:49.812688 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 6 23:47:49.819406 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 6 23:47:49.819843 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 6 23:47:49.822038 systemd[1]: Starting update-engine.service - Update Engine...
Jul 6 23:47:49.828963 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 6 23:47:49.836008 extend-filesystems[1871]: Found /dev/sda9
Jul 6 23:47:49.850968 extend-filesystems[1871]: Checking size of /dev/sda9
Jul 6 23:47:49.840352 systemd[1]: Started chronyd.service - NTP client/server.
Jul 6 23:47:49.856431 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 6 23:47:49.868730 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 6 23:47:49.869425 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 6 23:47:49.871774 jq[1899]: true
Jul 6 23:47:49.872419 systemd[1]: motdgen.service: Deactivated successfully.
Jul 6 23:47:49.872580 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 6 23:47:49.881047 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 6 23:47:49.892635 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 6 23:47:49.894357 extend-filesystems[1871]: Old size kept for /dev/sda9
Jul 6 23:47:49.894455 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 6 23:47:49.917658 update_engine[1897]: I20250706 23:47:49.913818 1897 main.cc:92] Flatcar Update Engine starting
Jul 6 23:47:49.914864 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 6 23:47:49.918576 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 6 23:47:49.925979 systemd-logind[1894]: New seat seat0.
Jul 6 23:47:49.930252 systemd-logind[1894]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jul 6 23:47:49.931501 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 6 23:47:49.955891 (ntainerd)[1913]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 6 23:47:49.958435 jq[1911]: true
Jul 6 23:47:49.967884 tar[1910]: linux-arm64/helm
Jul 6 23:47:50.074200 dbus-daemon[1868]: [system] SELinux support is enabled
Jul 6 23:47:50.074596 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 6 23:47:50.082823 bash[1951]: Updated "/home/core/.ssh/authorized_keys"
Jul 6 23:47:50.086121 sshd_keygen[1900]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 6 23:47:50.086619 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 6 23:47:50.098474 update_engine[1897]: I20250706 23:47:50.097802 1897 update_check_scheduler.cc:74] Next update check in 11m36s
Jul 6 23:47:50.102275 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 6 23:47:50.102820 dbus-daemon[1868]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 6 23:47:50.102385 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 6 23:47:50.102416 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 6 23:47:50.111943 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 6 23:47:50.112218 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 6 23:47:50.120485 systemd[1]: Started update-engine.service - Update Engine.
Jul 6 23:47:50.139820 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 6 23:47:50.154253 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 6 23:47:50.165059 coreos-metadata[1864]: Jul 06 23:47:50.164 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 6 23:47:50.168623 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 6 23:47:50.177402 coreos-metadata[1864]: Jul 06 23:47:50.177 INFO Fetch successful
Jul 6 23:47:50.177746 coreos-metadata[1864]: Jul 06 23:47:50.177 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jul 6 23:47:50.179370 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jul 6 23:47:50.185244 coreos-metadata[1864]: Jul 06 23:47:50.185 INFO Fetch successful
Jul 6 23:47:50.185244 coreos-metadata[1864]: Jul 06 23:47:50.185 INFO Fetching http://168.63.129.16/machine/46ecd954-6cf9-4c35-9313-dece050b400a/b3289745%2Dee4b%2D4618%2D876d%2Dd19a3d44e2c3.%5Fci%2D4344.1.1%2Da%2Daa3e6ac533?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jul 6 23:47:50.194608 coreos-metadata[1864]: Jul 06 23:47:50.194 INFO Fetch successful
Jul 6 23:47:50.196428 coreos-metadata[1864]: Jul 06 23:47:50.196 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jul 6 23:47:50.202575 systemd[1]: issuegen.service: Deactivated successfully.
Jul 6 23:47:50.203246 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 6 23:47:50.208209 coreos-metadata[1864]: Jul 06 23:47:50.208 INFO Fetch successful
Jul 6 23:47:50.228343 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 6 23:47:50.255582 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 6 23:47:50.261113 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 6 23:47:50.266503 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 6 23:47:50.275401 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 6 23:47:50.283426 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 6 23:47:50.291314 systemd[1]: Reached target getty.target - Login Prompts.
Jul 6 23:47:50.300339 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jul 6 23:47:50.514039 locksmithd[1995]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 6 23:47:50.524162 tar[1910]: linux-arm64/LICENSE
Jul 6 23:47:50.524262 tar[1910]: linux-arm64/README.md
Jul 6 23:47:50.536834 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 6 23:47:50.566029 containerd[1913]: time="2025-07-06T23:47:50Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 6 23:47:50.567524 containerd[1913]: time="2025-07-06T23:47:50.567489952Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 6 23:47:50.574213 containerd[1913]: time="2025-07-06T23:47:50.574172512Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.64µs"
Jul 6 23:47:50.574298 containerd[1913]: time="2025-07-06T23:47:50.574284328Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 6 23:47:50.574387 containerd[1913]: time="2025-07-06T23:47:50.574372784Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 6 23:47:50.574568 containerd[1913]: time="2025-07-06T23:47:50.574552776Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 6 23:47:50.574628 containerd[1913]: time="2025-07-06T23:47:50.574617600Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 6 23:47:50.574678 containerd[1913]: time="2025-07-06T23:47:50.574668624Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 6 23:47:50.574779 containerd[1913]: time="2025-07-06T23:47:50.574765168Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 6 23:47:50.574893 containerd[1913]: time="2025-07-06T23:47:50.574877128Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 6 23:47:50.575195 containerd[1913]: time="2025-07-06T23:47:50.575159584Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 6 23:47:50.575253 containerd[1913]: time="2025-07-06T23:47:50.575242976Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 6 23:47:50.575309 containerd[1913]: time="2025-07-06T23:47:50.575297816Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 6 23:47:50.575344 containerd[1913]: time="2025-07-06T23:47:50.575332696Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 6 23:47:50.575482 containerd[1913]: time="2025-07-06T23:47:50.575465568Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 6 23:47:50.575728 containerd[1913]: time="2025-07-06T23:47:50.575708144Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 6 23:47:50.575808 containerd[1913]: time="2025-07-06T23:47:50.575794960Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 6 23:47:50.575853 containerd[1913]: time="2025-07-06T23:47:50.575840648Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 6 23:47:50.575926 containerd[1913]: time="2025-07-06T23:47:50.575914112Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 6 23:47:50.576123 containerd[1913]: time="2025-07-06T23:47:50.576107552Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 6 23:47:50.576267 containerd[1913]: time="2025-07-06T23:47:50.576241200Z" level=info msg="metadata content store policy set" policy=shared
Jul 6 23:47:50.591388 containerd[1913]: time="2025-07-06T23:47:50.591345224Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 6 23:47:50.591618 containerd[1913]: time="2025-07-06T23:47:50.591584728Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 6 23:47:50.592296 containerd[1913]: time="2025-07-06T23:47:50.592224008Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 6 23:47:50.592296 containerd[1913]: time="2025-07-06T23:47:50.592252904Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 6 23:47:50.592296 containerd[1913]: time="2025-07-06T23:47:50.592266376Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 6 23:47:50.592296 containerd[1913]: time="2025-07-06T23:47:50.592274736Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 6 23:47:50.592296 containerd[1913]: time="2025-07-06T23:47:50.592283520Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 6 23:47:50.592296 containerd[1913]: time="2025-07-06T23:47:50.592292184Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 6 23:47:50.592296 containerd[1913]: time="2025-07-06T23:47:50.592300888Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 6 23:47:50.592739 containerd[1913]: time="2025-07-06T23:47:50.592310232Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 6 23:47:50.592739 containerd[1913]: time="2025-07-06T23:47:50.592330616Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 6 23:47:50.592739 containerd[1913]: time="2025-07-06T23:47:50.592341072Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 6 23:47:50.592739 containerd[1913]: time="2025-07-06T23:47:50.592481864Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 6 23:47:50.592739 containerd[1913]: time="2025-07-06T23:47:50.592505640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 6 23:47:50.592739 containerd[1913]: time="2025-07-06T23:47:50.592517096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 6 23:47:50.592739 containerd[1913]: time="2025-07-06T23:47:50.592524648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 6 23:47:50.592739 containerd[1913]: time="2025-07-06T23:47:50.592531792Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 6 23:47:50.592739 containerd[1913]: time="2025-07-06T23:47:50.592538840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 6 23:47:50.592739 containerd[1913]: time="2025-07-06T23:47:50.592546336Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 6 23:47:50.592739 containerd[1913]: time="2025-07-06T23:47:50.592554488Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 6 23:47:50.592739 containerd[1913]: time="2025-07-06T23:47:50.592563000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 6 23:47:50.592739 containerd[1913]: time="2025-07-06T23:47:50.592569336Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 6 23:47:50.592739 containerd[1913]: time="2025-07-06T23:47:50.592576536Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 6 23:47:50.592739 containerd[1913]: time="2025-07-06T23:47:50.592640800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 6 23:47:50.593241 containerd[1913]: time="2025-07-06T23:47:50.592652608Z" level=info msg="Start snapshots syncer"
Jul 6 23:47:50.593241 containerd[1913]: time="2025-07-06T23:47:50.592681480Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 6 23:47:50.593241 containerd[1913]: time="2025-07-06T23:47:50.592899440Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 6 23:47:50.593524 containerd[1913]: time="2025-07-06T23:47:50.592936656Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 6 23:47:50.593524 containerd[1913]: time="2025-07-06T23:47:50.592996992Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 6 23:47:50.593524 containerd[1913]: time="2025-07-06T23:47:50.593094920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 6 23:47:50.593524 containerd[1913]: time="2025-07-06T23:47:50.593109584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 6 23:47:50.593524 containerd[1913]: time="2025-07-06T23:47:50.593116424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 6 23:47:50.593524 containerd[1913]: time="2025-07-06T23:47:50.593123232Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 6 23:47:50.593524 containerd[1913]: time="2025-07-06T23:47:50.593133000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 6 23:47:50.593524 containerd[1913]: time="2025-07-06T23:47:50.593145376Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 6 23:47:50.593524 containerd[1913]: time="2025-07-06T23:47:50.593152312Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 6 23:47:50.593524 containerd[1913]: time="2025-07-06T23:47:50.593179088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 6 23:47:50.593524 containerd[1913]: time="2025-07-06T23:47:50.593214280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 6 23:47:50.593524 containerd[1913]: time="2025-07-06T23:47:50.593222080Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 6 23:47:50.593524 containerd[1913]: time="2025-07-06T23:47:50.593245952Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 6 23:47:50.593524 containerd[1913]: time="2025-07-06T23:47:50.593257712Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 6 23:47:50.594894 containerd[1913]: time="2025-07-06T23:47:50.593263104Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 6 23:47:50.594894 containerd[1913]: time="2025-07-06T23:47:50.593269152Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 6 23:47:50.594894 containerd[1913]: time="2025-07-06T23:47:50.593273792Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 6 23:47:50.594894 containerd[1913]: time="2025-07-06T23:47:50.593279224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 6 23:47:50.594894 containerd[1913]: time="2025-07-06T23:47:50.593286832Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 6 23:47:50.594894 containerd[1913]: time="2025-07-06T23:47:50.593299280Z" level=info msg="runtime interface created"
Jul 6 23:47:50.594894 containerd[1913]: time="2025-07-06T23:47:50.593302952Z" level=info msg="created NRI interface"
Jul 6 23:47:50.594894 containerd[1913]: time="2025-07-06T23:47:50.593308264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 6 23:47:50.594894 containerd[1913]: time="2025-07-06T23:47:50.593316872Z" level=info msg="Connect containerd service"
Jul 6 23:47:50.594894 containerd[1913]: time="2025-07-06T23:47:50.593336552Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 6 23:47:50.594894 containerd[1913]: time="2025-07-06T23:47:50.593994584Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 6 23:47:50.646540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:47:50.655649 (kubelet)[2057]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:47:50.892269 kubelet[2057]: E0706 23:47:50.892129 2057 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:47:50.894412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:47:50.894644 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:47:50.895223 systemd[1]: kubelet.service: Consumed 551ms CPU time, 256.2M memory peak.
Jul 6 23:47:51.362340 containerd[1913]: time="2025-07-06T23:47:51.361990776Z" level=info msg="Start subscribing containerd event" Jul 6 23:47:51.362340 containerd[1913]: time="2025-07-06T23:47:51.362057880Z" level=info msg="Start recovering state" Jul 6 23:47:51.362340 containerd[1913]: time="2025-07-06T23:47:51.362145144Z" level=info msg="Start event monitor" Jul 6 23:47:51.362340 containerd[1913]: time="2025-07-06T23:47:51.362157312Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:47:51.362340 containerd[1913]: time="2025-07-06T23:47:51.362162408Z" level=info msg="Start streaming server" Jul 6 23:47:51.362340 containerd[1913]: time="2025-07-06T23:47:51.362169040Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 6 23:47:51.362340 containerd[1913]: time="2025-07-06T23:47:51.362173680Z" level=info msg="runtime interface starting up..." Jul 6 23:47:51.362340 containerd[1913]: time="2025-07-06T23:47:51.362177384Z" level=info msg="starting plugins..." Jul 6 23:47:51.362340 containerd[1913]: time="2025-07-06T23:47:51.362200232Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 6 23:47:51.362340 containerd[1913]: time="2025-07-06T23:47:51.362143664Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:47:51.362340 containerd[1913]: time="2025-07-06T23:47:51.362319008Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:47:51.362647 containerd[1913]: time="2025-07-06T23:47:51.362368536Z" level=info msg="containerd successfully booted in 0.796671s" Jul 6 23:47:51.362796 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:47:51.369316 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:47:51.379275 systemd[1]: Startup finished in 1.723s (kernel) + 12.676s (initrd) + 10.312s (userspace) = 24.713s. 
Jul 6 23:47:51.551868 login[2031]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jul 6 23:47:51.553059 login[2032]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:47:51.578366 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:47:51.579512 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:47:51.581853 systemd-logind[1894]: New session 2 of user core. Jul 6 23:47:51.594224 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:47:51.595995 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:47:51.605851 (systemd)[2081]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:47:51.607798 systemd-logind[1894]: New session c1 of user core. Jul 6 23:47:51.787178 systemd[2081]: Queued start job for default target default.target. Jul 6 23:47:51.794937 systemd[2081]: Created slice app.slice - User Application Slice. Jul 6 23:47:51.794961 systemd[2081]: Reached target paths.target - Paths. Jul 6 23:47:51.794994 systemd[2081]: Reached target timers.target - Timers. Jul 6 23:47:51.796209 systemd[2081]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:47:51.804135 systemd[2081]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:47:51.804300 systemd[2081]: Reached target sockets.target - Sockets. Jul 6 23:47:51.804397 systemd[2081]: Reached target basic.target - Basic System. Jul 6 23:47:51.804484 systemd[2081]: Reached target default.target - Main User Target. Jul 6 23:47:51.804554 systemd[2081]: Startup finished in 191ms. Jul 6 23:47:51.805404 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:47:51.810325 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 6 23:47:51.851275 waagent[2033]: 2025-07-06T23:47:51.851181Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jul 6 23:47:51.858762 waagent[2033]: 2025-07-06T23:47:51.858699Z INFO Daemon Daemon OS: flatcar 4344.1.1 Jul 6 23:47:51.862806 waagent[2033]: 2025-07-06T23:47:51.862770Z INFO Daemon Daemon Python: 3.11.12 Jul 6 23:47:51.866381 waagent[2033]: 2025-07-06T23:47:51.866325Z INFO Daemon Daemon Run daemon Jul 6 23:47:51.869943 waagent[2033]: 2025-07-06T23:47:51.869909Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.1.1' Jul 6 23:47:51.877721 waagent[2033]: 2025-07-06T23:47:51.877350Z INFO Daemon Daemon Using waagent for provisioning Jul 6 23:47:51.882095 waagent[2033]: 2025-07-06T23:47:51.882046Z INFO Daemon Daemon Activate resource disk Jul 6 23:47:51.886497 waagent[2033]: 2025-07-06T23:47:51.886453Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 6 23:47:51.895574 waagent[2033]: 2025-07-06T23:47:51.895534Z INFO Daemon Daemon Found device: None Jul 6 23:47:51.899502 waagent[2033]: 2025-07-06T23:47:51.899469Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 6 23:47:51.907474 waagent[2033]: 2025-07-06T23:47:51.907446Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 6 23:47:51.918423 waagent[2033]: 2025-07-06T23:47:51.918382Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 6 23:47:51.923804 waagent[2033]: 2025-07-06T23:47:51.923773Z INFO Daemon Daemon Running default provisioning handler Jul 6 23:47:51.933742 waagent[2033]: 2025-07-06T23:47:51.933310Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jul 6 23:47:51.944931 waagent[2033]: 2025-07-06T23:47:51.944892Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 6 23:47:51.953606 waagent[2033]: 2025-07-06T23:47:51.953567Z INFO Daemon Daemon cloud-init is enabled: False Jul 6 23:47:51.957998 waagent[2033]: 2025-07-06T23:47:51.957972Z INFO Daemon Daemon Copying ovf-env.xml Jul 6 23:47:52.042911 waagent[2033]: 2025-07-06T23:47:52.042301Z INFO Daemon Daemon Successfully mounted dvd Jul 6 23:47:52.070615 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 6 23:47:52.072304 waagent[2033]: 2025-07-06T23:47:52.072248Z INFO Daemon Daemon Detect protocol endpoint Jul 6 23:47:52.076628 waagent[2033]: 2025-07-06T23:47:52.076589Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 6 23:47:52.081642 waagent[2033]: 2025-07-06T23:47:52.081608Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jul 6 23:47:52.089422 waagent[2033]: 2025-07-06T23:47:52.089393Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 6 23:47:52.094237 waagent[2033]: 2025-07-06T23:47:52.094204Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 6 23:47:52.098522 waagent[2033]: 2025-07-06T23:47:52.098496Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 6 23:47:52.143212 waagent[2033]: 2025-07-06T23:47:52.142829Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 6 23:47:52.148558 waagent[2033]: 2025-07-06T23:47:52.148534Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 6 23:47:52.153594 waagent[2033]: 2025-07-06T23:47:52.153561Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 6 23:47:52.295279 waagent[2033]: 2025-07-06T23:47:52.294684Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 6 23:47:52.300598 waagent[2033]: 2025-07-06T23:47:52.300552Z INFO Daemon Daemon Forcing an update of the goal state. 
Jul 6 23:47:52.309848 waagent[2033]: 2025-07-06T23:47:52.309808Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 6 23:47:52.368482 waagent[2033]: 2025-07-06T23:47:52.368440Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 6 23:47:52.373929 waagent[2033]: 2025-07-06T23:47:52.373896Z INFO Daemon Jul 6 23:47:52.376248 waagent[2033]: 2025-07-06T23:47:52.376220Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 866869d9-91bb-4078-a73b-1b681f35dbdc eTag: 5694925646455547157 source: Fabric] Jul 6 23:47:52.386110 waagent[2033]: 2025-07-06T23:47:52.386071Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 6 23:47:52.392936 waagent[2033]: 2025-07-06T23:47:52.392904Z INFO Daemon Jul 6 23:47:52.395227 waagent[2033]: 2025-07-06T23:47:52.395200Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 6 23:47:52.405212 waagent[2033]: 2025-07-06T23:47:52.405172Z INFO Daemon Daemon Downloading artifacts profile blob Jul 6 23:47:52.486830 waagent[2033]: 2025-07-06T23:47:52.486761Z INFO Daemon Downloaded certificate {'thumbprint': 'F46A024716BD64C274596DCFAEA2F9D4816D9D7D', 'hasPrivateKey': False} Jul 6 23:47:52.495024 waagent[2033]: 2025-07-06T23:47:52.494988Z INFO Daemon Downloaded certificate {'thumbprint': 'C4BB845DE29A34AF0E8359981A1369D94D961965', 'hasPrivateKey': True} Jul 6 23:47:52.503342 waagent[2033]: 2025-07-06T23:47:52.503310Z INFO Daemon Fetch goal state completed Jul 6 23:47:52.514160 waagent[2033]: 2025-07-06T23:47:52.514128Z INFO Daemon Daemon Starting provisioning Jul 6 23:47:52.518718 waagent[2033]: 2025-07-06T23:47:52.518681Z INFO Daemon Daemon Handle ovf-env.xml. 
Jul 6 23:47:52.522657 waagent[2033]: 2025-07-06T23:47:52.522632Z INFO Daemon Daemon Set hostname [ci-4344.1.1-a-aa3e6ac533] Jul 6 23:47:52.529144 waagent[2033]: 2025-07-06T23:47:52.529106Z INFO Daemon Daemon Publish hostname [ci-4344.1.1-a-aa3e6ac533] Jul 6 23:47:52.535467 waagent[2033]: 2025-07-06T23:47:52.535429Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 6 23:47:52.540929 waagent[2033]: 2025-07-06T23:47:52.540889Z INFO Daemon Daemon Primary interface is [eth0] Jul 6 23:47:52.551966 systemd-networkd[1657]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:47:52.552218 login[2031]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:47:52.551973 systemd-networkd[1657]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:47:52.552005 systemd-networkd[1657]: eth0: DHCP lease lost Jul 6 23:47:52.558091 waagent[2033]: 2025-07-06T23:47:52.553406Z INFO Daemon Daemon Create user account if not exists Jul 6 23:47:52.557175 systemd-logind[1894]: New session 1 of user core. Jul 6 23:47:52.558652 waagent[2033]: 2025-07-06T23:47:52.558610Z INFO Daemon Daemon User core already exists, skip useradd Jul 6 23:47:52.563987 waagent[2033]: 2025-07-06T23:47:52.563941Z INFO Daemon Daemon Configure sudoer Jul 6 23:47:52.568357 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:47:52.573442 waagent[2033]: 2025-07-06T23:47:52.573395Z INFO Daemon Daemon Configure sshd Jul 6 23:47:52.578558 systemd-networkd[1657]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 6 23:47:52.583368 waagent[2033]: 2025-07-06T23:47:52.583285Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. 
Jul 6 23:47:52.593916 waagent[2033]: 2025-07-06T23:47:52.593861Z INFO Daemon Daemon Deploy ssh public key. Jul 6 23:47:53.700535 waagent[2033]: 2025-07-06T23:47:53.696313Z INFO Daemon Daemon Provisioning complete Jul 6 23:47:53.712077 waagent[2033]: 2025-07-06T23:47:53.712038Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 6 23:47:53.718137 waagent[2033]: 2025-07-06T23:47:53.718098Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jul 6 23:47:53.726652 waagent[2033]: 2025-07-06T23:47:53.726619Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jul 6 23:47:53.826219 waagent[2135]: 2025-07-06T23:47:53.825766Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jul 6 23:47:53.826219 waagent[2135]: 2025-07-06T23:47:53.825903Z INFO ExtHandler ExtHandler OS: flatcar 4344.1.1 Jul 6 23:47:53.826219 waagent[2135]: 2025-07-06T23:47:53.825937Z INFO ExtHandler ExtHandler Python: 3.11.12 Jul 6 23:47:53.826219 waagent[2135]: 2025-07-06T23:47:53.825970Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jul 6 23:47:53.861291 waagent[2135]: 2025-07-06T23:47:53.861226Z INFO ExtHandler ExtHandler Distro: flatcar-4344.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jul 6 23:47:53.861626 waagent[2135]: 2025-07-06T23:47:53.861594Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:47:53.861745 waagent[2135]: 2025-07-06T23:47:53.861723Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:47:53.868350 waagent[2135]: 2025-07-06T23:47:53.868301Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 6 23:47:53.877280 waagent[2135]: 2025-07-06T23:47:53.877244Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 6 23:47:53.877783 waagent[2135]: 
2025-07-06T23:47:53.877750Z INFO ExtHandler Jul 6 23:47:53.877911 waagent[2135]: 2025-07-06T23:47:53.877887Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d40d33bc-5f53-41df-80fe-4aafeff74d12 eTag: 5694925646455547157 source: Fabric] Jul 6 23:47:53.878275 waagent[2135]: 2025-07-06T23:47:53.878244Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jul 6 23:47:53.878787 waagent[2135]: 2025-07-06T23:47:53.878754Z INFO ExtHandler Jul 6 23:47:53.878891 waagent[2135]: 2025-07-06T23:47:53.878873Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 6 23:47:53.883027 waagent[2135]: 2025-07-06T23:47:53.882998Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 6 23:47:53.956739 waagent[2135]: 2025-07-06T23:47:53.956621Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F46A024716BD64C274596DCFAEA2F9D4816D9D7D', 'hasPrivateKey': False} Jul 6 23:47:53.957215 waagent[2135]: 2025-07-06T23:47:53.957160Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C4BB845DE29A34AF0E8359981A1369D94D961965', 'hasPrivateKey': True} Jul 6 23:47:53.957670 waagent[2135]: 2025-07-06T23:47:53.957635Z INFO ExtHandler Fetch goal state completed Jul 6 23:47:53.975401 waagent[2135]: 2025-07-06T23:47:53.975355Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jul 6 23:47:53.979217 waagent[2135]: 2025-07-06T23:47:53.979151Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2135 Jul 6 23:47:53.979470 waagent[2135]: 2025-07-06T23:47:53.979441Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 6 23:47:53.979812 waagent[2135]: 2025-07-06T23:47:53.979782Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jul 6 23:47:53.981053 waagent[2135]: 2025-07-06T23:47:53.981016Z INFO ExtHandler ExtHandler [CGI] 
Cgroup monitoring is not supported on ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 6 23:47:53.981481 waagent[2135]: 2025-07-06T23:47:53.981448Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jul 6 23:47:53.981707 waagent[2135]: 2025-07-06T23:47:53.981677Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jul 6 23:47:53.982259 waagent[2135]: 2025-07-06T23:47:53.982227Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 6 23:47:54.022266 waagent[2135]: 2025-07-06T23:47:54.022229Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 6 23:47:54.022590 waagent[2135]: 2025-07-06T23:47:54.022558Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 6 23:47:54.027078 waagent[2135]: 2025-07-06T23:47:54.027047Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 6 23:47:54.032052 systemd[1]: Reload requested from client PID 2152 ('systemctl') (unit waagent.service)... Jul 6 23:47:54.032322 systemd[1]: Reloading... Jul 6 23:47:54.104226 zram_generator::config[2190]: No configuration found. Jul 6 23:47:54.168737 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:47:54.251325 systemd[1]: Reloading finished in 218 ms. 
Jul 6 23:47:54.274706 waagent[2135]: 2025-07-06T23:47:54.274526Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 6 23:47:54.274706 waagent[2135]: 2025-07-06T23:47:54.274673Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 6 23:47:54.558834 waagent[2135]: 2025-07-06T23:47:54.558695Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 6 23:47:54.559060 waagent[2135]: 2025-07-06T23:47:54.559024Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jul 6 23:47:54.559766 waagent[2135]: 2025-07-06T23:47:54.559723Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 6 23:47:54.560072 waagent[2135]: 2025-07-06T23:47:54.560029Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Jul 6 23:47:54.560366 waagent[2135]: 2025-07-06T23:47:54.560329Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:47:54.561092 waagent[2135]: 2025-07-06T23:47:54.560471Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:47:54.561092 waagent[2135]: 2025-07-06T23:47:54.560537Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:47:54.561092 waagent[2135]: 2025-07-06T23:47:54.560648Z INFO EnvHandler ExtHandler Configure routes Jul 6 23:47:54.561092 waagent[2135]: 2025-07-06T23:47:54.560688Z INFO EnvHandler ExtHandler Gateway:None Jul 6 23:47:54.561092 waagent[2135]: 2025-07-06T23:47:54.560711Z INFO EnvHandler ExtHandler Routes:None Jul 6 23:47:54.561332 waagent[2135]: 2025-07-06T23:47:54.561298Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 6 23:47:54.561528 waagent[2135]: 2025-07-06T23:47:54.561502Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:47:54.561770 waagent[2135]: 2025-07-06T23:47:54.561737Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 6 23:47:54.561979 waagent[2135]: 2025-07-06T23:47:54.561952Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 6 23:47:54.561979 waagent[2135]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 6 23:47:54.561979 waagent[2135]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 6 23:47:54.561979 waagent[2135]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 6 23:47:54.561979 waagent[2135]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:47:54.561979 waagent[2135]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:47:54.561979 waagent[2135]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:47:54.562340 waagent[2135]: 2025-07-06T23:47:54.562293Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jul 6 23:47:54.562606 waagent[2135]: 2025-07-06T23:47:54.562554Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 6 23:47:54.562649 waagent[2135]: 2025-07-06T23:47:54.562602Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jul 6 23:47:54.563178 waagent[2135]: 2025-07-06T23:47:54.563142Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 6 23:47:54.569827 waagent[2135]: 2025-07-06T23:47:54.569787Z INFO ExtHandler ExtHandler Jul 6 23:47:54.569982 waagent[2135]: 2025-07-06T23:47:54.569955Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: da40b6a6-91c1-4067-98ef-d4c505926862 correlation 6a7a65f9-618c-401c-82a4-2803fecb736b created: 2025-07-06T23:46:35.436546Z] Jul 6 23:47:54.570384 waagent[2135]: 2025-07-06T23:47:54.570350Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 6 23:47:54.570890 waagent[2135]: 2025-07-06T23:47:54.570860Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jul 6 23:47:54.598456 waagent[2135]: 2025-07-06T23:47:54.598408Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jul 6 23:47:54.598456 waagent[2135]: Try `iptables -h' or 'iptables --help' for more information.) 
Jul 6 23:47:54.599103 waagent[2135]: 2025-07-06T23:47:54.599030Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 4C8C2FC5-17E3-4CD0-AC8D-E49A299EEDD8;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jul 6 23:47:54.629419 waagent[2135]: 2025-07-06T23:47:54.629353Z INFO MonitorHandler ExtHandler Network interfaces: Jul 6 23:47:54.629419 waagent[2135]: Executing ['ip', '-a', '-o', 'link']: Jul 6 23:47:54.629419 waagent[2135]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 6 23:47:54.629419 waagent[2135]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bc:65:c6 brd ff:ff:ff:ff:ff:ff Jul 6 23:47:54.629419 waagent[2135]: 3: enP10860s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bc:65:c6 brd ff:ff:ff:ff:ff:ff\ altname enP10860p0s2 Jul 6 23:47:54.629419 waagent[2135]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 6 23:47:54.629419 waagent[2135]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 6 23:47:54.629419 waagent[2135]: 2: eth0 inet 10.200.20.10/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 6 23:47:54.629419 waagent[2135]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 6 23:47:54.629419 waagent[2135]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 6 23:47:54.629419 waagent[2135]: 2: eth0 inet6 fe80::222:48ff:febc:65c6/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 6 23:47:54.629419 waagent[2135]: 3: enP10860s1 inet6 fe80::222:48ff:febc:65c6/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 6 23:47:54.679818 waagent[2135]: 2025-07-06T23:47:54.679761Z INFO EnvHandler 
ExtHandler Created firewall rules for the Azure Fabric: Jul 6 23:47:54.679818 waagent[2135]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:47:54.679818 waagent[2135]: pkts bytes target prot opt in out source destination Jul 6 23:47:54.679818 waagent[2135]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:47:54.679818 waagent[2135]: pkts bytes target prot opt in out source destination Jul 6 23:47:54.679818 waagent[2135]: Chain OUTPUT (policy ACCEPT 2 packets, 104 bytes) Jul 6 23:47:54.679818 waagent[2135]: pkts bytes target prot opt in out source destination Jul 6 23:47:54.679818 waagent[2135]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 6 23:47:54.679818 waagent[2135]: 4 595 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 6 23:47:54.679818 waagent[2135]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 6 23:47:54.682336 waagent[2135]: 2025-07-06T23:47:54.682298Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 6 23:47:54.682336 waagent[2135]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:47:54.682336 waagent[2135]: pkts bytes target prot opt in out source destination Jul 6 23:47:54.682336 waagent[2135]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:47:54.682336 waagent[2135]: pkts bytes target prot opt in out source destination Jul 6 23:47:54.682336 waagent[2135]: Chain OUTPUT (policy ACCEPT 2 packets, 104 bytes) Jul 6 23:47:54.682336 waagent[2135]: pkts bytes target prot opt in out source destination Jul 6 23:47:54.682336 waagent[2135]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 6 23:47:54.682336 waagent[2135]: 4 595 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 6 23:47:54.682336 waagent[2135]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 6 23:47:54.682538 waagent[2135]: 2025-07-06T23:47:54.682513Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 6 23:48:01.145384 systemd[1]: 
kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:48:01.147155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:48:01.252617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:48:01.258449 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:48:01.394821 kubelet[2285]: E0706 23:48:01.394746 2285 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:48:01.397677 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:48:01.397792 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:48:01.398324 systemd[1]: kubelet.service: Consumed 112ms CPU time, 104.9M memory peak. Jul 6 23:48:11.648377 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 6 23:48:11.649866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:48:11.746038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:48:11.752464 (kubelet)[2300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:48:11.841104 kubelet[2300]: E0706 23:48:11.841035 2300 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:48:11.843410 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:48:11.843622 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:48:11.844173 systemd[1]: kubelet.service: Consumed 105ms CPU time, 105.1M memory peak. Jul 6 23:48:13.599730 chronyd[1880]: Selected source PHC0 Jul 6 23:48:22.067682 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 6 23:48:22.069568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:48:22.167351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:48:22.169810 (kubelet)[2315]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:48:22.195176 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:48:22.197399 systemd[1]: Started sshd@0-10.200.20.10:22-10.200.16.10:39774.service - OpenSSH per-connection server daemon (10.200.16.10:39774). 
Jul 6 23:48:22.277674 kubelet[2315]: E0706 23:48:22.277616 2315 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:48:22.279933 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:48:22.280049 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:48:22.280354 systemd[1]: kubelet.service: Consumed 105ms CPU time, 105.4M memory peak. Jul 6 23:48:26.599224 sshd[2321]: Accepted publickey for core from 10.200.16.10 port 39774 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:48:26.600406 sshd-session[2321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:26.604008 systemd-logind[1894]: New session 3 of user core. Jul 6 23:48:26.611312 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:48:27.030692 systemd[1]: Started sshd@1-10.200.20.10:22-10.200.16.10:39782.service - OpenSSH per-connection server daemon (10.200.16.10:39782). Jul 6 23:48:27.523817 sshd[2328]: Accepted publickey for core from 10.200.16.10 port 39782 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:48:27.525025 sshd-session[2328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:27.530180 systemd-logind[1894]: New session 4 of user core. Jul 6 23:48:27.535315 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:48:27.873221 sshd[2330]: Connection closed by 10.200.16.10 port 39782 Jul 6 23:48:27.873762 sshd-session[2328]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:27.876640 systemd[1]: sshd@1-10.200.20.10:22-10.200.16.10:39782.service: Deactivated successfully. 
Jul 6 23:48:27.878064 systemd[1]: session-4.scope: Deactivated successfully.
Jul 6 23:48:27.881288 systemd-logind[1894]: Session 4 logged out. Waiting for processes to exit.
Jul 6 23:48:27.882473 systemd-logind[1894]: Removed session 4.
Jul 6 23:48:27.965830 systemd[1]: Started sshd@2-10.200.20.10:22-10.200.16.10:39798.service - OpenSSH per-connection server daemon (10.200.16.10:39798).
Jul 6 23:48:28.463807 sshd[2336]: Accepted publickey for core from 10.200.16.10 port 39798 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:48:28.464931 sshd-session[2336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:48:28.468611 systemd-logind[1894]: New session 5 of user core.
Jul 6 23:48:28.477347 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 6 23:48:28.816996 sshd[2338]: Connection closed by 10.200.16.10 port 39798
Jul 6 23:48:28.817524 sshd-session[2336]: pam_unix(sshd:session): session closed for user core
Jul 6 23:48:28.820272 systemd-logind[1894]: Session 5 logged out. Waiting for processes to exit.
Jul 6 23:48:28.821768 systemd[1]: sshd@2-10.200.20.10:22-10.200.16.10:39798.service: Deactivated successfully.
Jul 6 23:48:28.823627 systemd[1]: session-5.scope: Deactivated successfully.
Jul 6 23:48:28.825718 systemd-logind[1894]: Removed session 5.
Jul 6 23:48:28.904405 systemd[1]: Started sshd@3-10.200.20.10:22-10.200.16.10:39812.service - OpenSSH per-connection server daemon (10.200.16.10:39812).
Jul 6 23:48:29.379441 sshd[2344]: Accepted publickey for core from 10.200.16.10 port 39812 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:48:29.380553 sshd-session[2344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:48:29.384164 systemd-logind[1894]: New session 6 of user core.
Jul 6 23:48:29.395344 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 6 23:48:29.728684 sshd[2346]: Connection closed by 10.200.16.10 port 39812
Jul 6 23:48:29.727944 sshd-session[2344]: pam_unix(sshd:session): session closed for user core
Jul 6 23:48:29.730786 systemd-logind[1894]: Session 6 logged out. Waiting for processes to exit.
Jul 6 23:48:29.730924 systemd[1]: sshd@3-10.200.20.10:22-10.200.16.10:39812.service: Deactivated successfully.
Jul 6 23:48:29.732136 systemd[1]: session-6.scope: Deactivated successfully.
Jul 6 23:48:29.734086 systemd-logind[1894]: Removed session 6.
Jul 6 23:48:29.818866 systemd[1]: Started sshd@4-10.200.20.10:22-10.200.16.10:43572.service - OpenSSH per-connection server daemon (10.200.16.10:43572).
Jul 6 23:48:30.302314 sshd[2352]: Accepted publickey for core from 10.200.16.10 port 43572 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:48:30.303462 sshd-session[2352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:48:30.307243 systemd-logind[1894]: New session 7 of user core.
Jul 6 23:48:30.314329 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 6 23:48:30.685107 sudo[2355]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 6 23:48:30.685363 sudo[2355]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:48:30.718886 sudo[2355]: pam_unix(sudo:session): session closed for user root
Jul 6 23:48:30.808895 sshd[2354]: Connection closed by 10.200.16.10 port 43572
Jul 6 23:48:30.809698 sshd-session[2352]: pam_unix(sshd:session): session closed for user core
Jul 6 23:48:30.813112 systemd-logind[1894]: Session 7 logged out. Waiting for processes to exit.
Jul 6 23:48:30.813402 systemd[1]: sshd@4-10.200.20.10:22-10.200.16.10:43572.service: Deactivated successfully.
Jul 6 23:48:30.814845 systemd[1]: session-7.scope: Deactivated successfully.
Jul 6 23:48:30.816534 systemd-logind[1894]: Removed session 7.
Jul 6 23:48:30.894845 systemd[1]: Started sshd@5-10.200.20.10:22-10.200.16.10:43574.service - OpenSSH per-connection server daemon (10.200.16.10:43574).
Jul 6 23:48:31.377422 sshd[2361]: Accepted publickey for core from 10.200.16.10 port 43574 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:48:31.378650 sshd-session[2361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:48:31.382438 systemd-logind[1894]: New session 8 of user core.
Jul 6 23:48:31.389447 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 6 23:48:31.645052 sudo[2365]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 6 23:48:31.645519 sudo[2365]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:48:31.652728 sudo[2365]: pam_unix(sudo:session): session closed for user root
Jul 6 23:48:31.656506 sudo[2364]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 6 23:48:31.656715 sudo[2364]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:48:31.663427 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:48:31.692362 augenrules[2387]: No rules
Jul 6 23:48:31.693576 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:48:31.693878 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:48:31.694843 sudo[2364]: pam_unix(sudo:session): session closed for user root
Jul 6 23:48:31.783659 sshd[2363]: Connection closed by 10.200.16.10 port 43574
Jul 6 23:48:31.784008 sshd-session[2361]: pam_unix(sshd:session): session closed for user core
Jul 6 23:48:31.787494 systemd[1]: sshd@5-10.200.20.10:22-10.200.16.10:43574.service: Deactivated successfully.
Jul 6 23:48:31.788933 systemd[1]: session-8.scope: Deactivated successfully.
Jul 6 23:48:31.789610 systemd-logind[1894]: Session 8 logged out. Waiting for processes to exit.
Jul 6 23:48:31.790592 systemd-logind[1894]: Removed session 8.
Jul 6 23:48:31.869982 systemd[1]: Started sshd@6-10.200.20.10:22-10.200.16.10:43588.service - OpenSSH per-connection server daemon (10.200.16.10:43588).
Jul 6 23:48:32.317871 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 6 23:48:32.319423 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:48:32.354210 sshd[2396]: Accepted publickey for core from 10.200.16.10 port 43588 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:48:32.354987 sshd-session[2396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:48:32.360244 systemd-logind[1894]: New session 9 of user core.
Jul 6 23:48:32.363317 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 6 23:48:32.424960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:48:32.431452 (kubelet)[2407]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:48:32.457881 kubelet[2407]: E0706 23:48:32.457818 2407 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:48:32.460100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:48:32.460360 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:48:32.460872 systemd[1]: kubelet.service: Consumed 105ms CPU time, 105M memory peak.
Jul 6 23:48:32.624173 sudo[2414]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 6 23:48:32.624453 sudo[2414]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:48:34.266623 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jul 6 23:48:34.426469 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 6 23:48:34.437479 (dockerd)[2431]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 6 23:48:35.208222 dockerd[2431]: time="2025-07-06T23:48:35.207848688Z" level=info msg="Starting up"
Jul 6 23:48:35.209205 dockerd[2431]: time="2025-07-06T23:48:35.208996872Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 6 23:48:35.274009 systemd[1]: var-lib-docker-metacopy\x2dcheck3345038247-merged.mount: Deactivated successfully.
Jul 6 23:48:35.294862 dockerd[2431]: time="2025-07-06T23:48:35.294815433Z" level=info msg="Loading containers: start."
Jul 6 23:48:35.339226 kernel: Initializing XFRM netlink socket
Jul 6 23:48:35.629499 systemd-networkd[1657]: docker0: Link UP
Jul 6 23:48:35.643588 dockerd[2431]: time="2025-07-06T23:48:35.643539017Z" level=info msg="Loading containers: done."
Jul 6 23:48:35.653857 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck909370166-merged.mount: Deactivated successfully.
Jul 6 23:48:35.666423 update_engine[1897]: I20250706 23:48:35.666364 1897 update_attempter.cc:509] Updating boot flags...
Jul 6 23:48:35.667294 dockerd[2431]: time="2025-07-06T23:48:35.666938533Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 6 23:48:35.667433 dockerd[2431]: time="2025-07-06T23:48:35.667276254Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 6 23:48:35.667625 dockerd[2431]: time="2025-07-06T23:48:35.667530695Z" level=info msg="Initializing buildkit"
Jul 6 23:48:35.721754 dockerd[2431]: time="2025-07-06T23:48:35.721711209Z" level=info msg="Completed buildkit initialization"
Jul 6 23:48:35.732305 dockerd[2431]: time="2025-07-06T23:48:35.732236357Z" level=info msg="Daemon has completed initialization"
Jul 6 23:48:35.732419 dockerd[2431]: time="2025-07-06T23:48:35.732320374Z" level=info msg="API listen on /run/docker.sock"
Jul 6 23:48:35.734305 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 6 23:48:36.407663 containerd[1913]: time="2025-07-06T23:48:36.407608225Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 6 23:48:37.581997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2956329271.mount: Deactivated successfully.
Jul 6 23:48:38.750873 containerd[1913]: time="2025-07-06T23:48:38.750261370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:48:38.754889 containerd[1913]: time="2025-07-06T23:48:38.754861367Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651793"
Jul 6 23:48:38.758054 containerd[1913]: time="2025-07-06T23:48:38.758030779Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:48:38.763291 containerd[1913]: time="2025-07-06T23:48:38.763263007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:48:38.763770 containerd[1913]: time="2025-07-06T23:48:38.763736972Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 2.355647335s"
Jul 6 23:48:38.763817 containerd[1913]: time="2025-07-06T23:48:38.763773529Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\""
Jul 6 23:48:38.764941 containerd[1913]: time="2025-07-06T23:48:38.764917179Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 6 23:48:39.996227 containerd[1913]: time="2025-07-06T23:48:39.995773810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:48:40.001206 containerd[1913]: time="2025-07-06T23:48:40.001098478Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459677"
Jul 6 23:48:40.005214 containerd[1913]: time="2025-07-06T23:48:40.005162858Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:48:40.011597 containerd[1913]: time="2025-07-06T23:48:40.011530464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:48:40.012374 containerd[1913]: time="2025-07-06T23:48:40.012238568Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.247290744s"
Jul 6 23:48:40.012374 containerd[1913]: time="2025-07-06T23:48:40.012268006Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\""
Jul 6 23:48:40.012860 containerd[1913]: time="2025-07-06T23:48:40.012828499Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 6 23:48:41.061793 containerd[1913]: time="2025-07-06T23:48:41.061737608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:48:41.064461 containerd[1913]: time="2025-07-06T23:48:41.064430431Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125066"
Jul 6 23:48:41.067259 containerd[1913]: time="2025-07-06T23:48:41.067236604Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:48:41.072936 containerd[1913]: time="2025-07-06T23:48:41.072907176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:48:41.073474 containerd[1913]: time="2025-07-06T23:48:41.073268312Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.060414735s"
Jul 6 23:48:41.073474 containerd[1913]: time="2025-07-06T23:48:41.073292222Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\""
Jul 6 23:48:41.073714 containerd[1913]: time="2025-07-06T23:48:41.073688962Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 6 23:48:42.052819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount889645897.mount: Deactivated successfully.
Jul 6 23:48:42.311262 containerd[1913]: time="2025-07-06T23:48:42.311117726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:48:42.316731 containerd[1913]: time="2025-07-06T23:48:42.316584924Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915957"
Jul 6 23:48:42.320791 containerd[1913]: time="2025-07-06T23:48:42.320764022Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:48:42.324700 containerd[1913]: time="2025-07-06T23:48:42.324641659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:48:42.325041 containerd[1913]: time="2025-07-06T23:48:42.324896396Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.251182492s"
Jul 6 23:48:42.325041 containerd[1913]: time="2025-07-06T23:48:42.324927106Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\""
Jul 6 23:48:42.325539 containerd[1913]: time="2025-07-06T23:48:42.325509406Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 6 23:48:42.567376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jul 6 23:48:42.569416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:48:42.676065 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:48:42.678489 (kubelet)[2768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:48:42.791968 kubelet[2768]: E0706 23:48:42.791901 2768 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:48:42.793928 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:48:42.794037 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:48:42.795305 systemd[1]: kubelet.service: Consumed 105ms CPU time, 105.4M memory peak.
Jul 6 23:48:43.617369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2752694113.mount: Deactivated successfully.
Jul 6 23:48:45.044410 containerd[1913]: time="2025-07-06T23:48:45.043729424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:48:45.046583 containerd[1913]: time="2025-07-06T23:48:45.046557626Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Jul 6 23:48:45.052264 containerd[1913]: time="2025-07-06T23:48:45.052243117Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:48:45.056102 containerd[1913]: time="2025-07-06T23:48:45.056061175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:48:45.056757 containerd[1913]: time="2025-07-06T23:48:45.056731731Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.731192217s"
Jul 6 23:48:45.056840 containerd[1913]: time="2025-07-06T23:48:45.056827515Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 6 23:48:45.057472 containerd[1913]: time="2025-07-06T23:48:45.057255413Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 6 23:48:46.344727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount525583779.mount: Deactivated successfully.
Jul 6 23:48:46.373222 containerd[1913]: time="2025-07-06T23:48:46.372726729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:48:46.375608 containerd[1913]: time="2025-07-06T23:48:46.375583183Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jul 6 23:48:46.379925 containerd[1913]: time="2025-07-06T23:48:46.379897935Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:48:46.384510 containerd[1913]: time="2025-07-06T23:48:46.384467737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:48:46.384921 containerd[1913]: time="2025-07-06T23:48:46.384762609Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.327484815s"
Jul 6 23:48:46.384921 containerd[1913]: time="2025-07-06T23:48:46.384793479Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 6 23:48:46.385283 containerd[1913]: time="2025-07-06T23:48:46.385250001Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 6 23:48:47.032880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount859986376.mount: Deactivated successfully.
Jul 6 23:48:48.664743 containerd[1913]: time="2025-07-06T23:48:48.664681925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:48:48.667508 containerd[1913]: time="2025-07-06T23:48:48.667460786Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465"
Jul 6 23:48:48.671711 containerd[1913]: time="2025-07-06T23:48:48.671646620Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:48:48.676599 containerd[1913]: time="2025-07-06T23:48:48.676533644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:48:48.677279 containerd[1913]: time="2025-07-06T23:48:48.677110005Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.291834653s"
Jul 6 23:48:48.677279 containerd[1913]: time="2025-07-06T23:48:48.677138547Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jul 6 23:48:50.946537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:48:50.946948 systemd[1]: kubelet.service: Consumed 105ms CPU time, 105.4M memory peak.
Jul 6 23:48:50.949406 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:48:50.967860 systemd[1]: Reload requested from client PID 2914 ('systemctl') (unit session-9.scope)...
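Each containerd "Pulled image" entry above records the image size in bytes and the wall-clock pull time, e.g. size "66535646" in 2.291834653s for etcd. A small parser (an illustrative sketch; the helper name and regex are mine, fitted to the message format shown) recovers the effective transfer rate:

```python
import re

# Illustrative sketch: extract byte size and elapsed seconds from a
# containerd "Pulled image" message, then compute bytes per second.
def pull_rate(msg: str) -> float:
    size = int(re.search(r'size "(\d+)"', msg).group(1))
    secs = float(re.search(r'in ([\d.]+)s', msg).group(1))
    return size / secs

msg = 'Pulled image "registry.k8s.io/etcd:3.5.15-0" with image id "sha256:27e3", size "66535646" in 2.291834653s'
print(f"{pull_rate(msg) / 1e6:.1f} MB/s")
```

On the etcd entry this works out to roughly 29 MB/s; the smaller control-plane images pull at comparable rates.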
Jul 6 23:48:50.967873 systemd[1]: Reloading...
Jul 6 23:48:51.065219 zram_generator::config[2966]: No configuration found.
Jul 6 23:48:51.131125 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:48:51.214109 systemd[1]: Reloading finished in 245 ms.
Jul 6 23:48:51.258654 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 6 23:48:51.258717 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 6 23:48:51.258922 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:48:51.258959 systemd[1]: kubelet.service: Consumed 72ms CPU time, 95M memory peak.
Jul 6 23:48:51.260135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:48:51.863212 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:48:51.865747 (kubelet)[3027]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 6 23:48:51.893905 kubelet[3027]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:48:51.893905 kubelet[3027]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 6 23:48:51.893905 kubelet[3027]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
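The deprecation warnings above all point the same way: these kubelet flags are meant to move into the file named by --config. A hypothetical KubeletConfiguration fragment carrying the same settings might look like the following (the runtime endpoint value is an assumption, not read from this node's flags; the plugin directory matches the path the kubelet probes later in this log):

```yaml
# Hypothetical KubeletConfiguration fragment; field values are assumptions
# for illustration, not taken from this node's actual drop-in files.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
```

--pod-infra-container-image has no config-file equivalent; per the warning, the sandbox image is taken from the CRI runtime instead.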
Jul 6 23:48:51.894818 kubelet[3027]: I0706 23:48:51.894315 3027 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 6 23:48:52.069612 kubelet[3027]: I0706 23:48:52.069571 3027 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 6 23:48:52.069612 kubelet[3027]: I0706 23:48:52.069603 3027 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 6 23:48:52.069824 kubelet[3027]: I0706 23:48:52.069801 3027 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 6 23:48:52.083344 kubelet[3027]: E0706 23:48:52.083297 3027 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:48:52.084518 kubelet[3027]: I0706 23:48:52.084399 3027 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 6 23:48:52.091950 kubelet[3027]: I0706 23:48:52.091930 3027 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 6 23:48:52.095213 kubelet[3027]: I0706 23:48:52.095080 3027 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 6 23:48:52.095574 kubelet[3027]: I0706 23:48:52.095551 3027 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 6 23:48:52.095711 kubelet[3027]: I0706 23:48:52.095682 3027 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 6 23:48:52.095846 kubelet[3027]: I0706 23:48:52.095711 3027 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-a-aa3e6ac533","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 6 23:48:52.095917 kubelet[3027]: I0706 23:48:52.095854 3027 topology_manager.go:138] "Creating topology manager with none policy"
Jul 6 23:48:52.095917 kubelet[3027]: I0706 23:48:52.095861 3027 container_manager_linux.go:300] "Creating device plugin manager"
Jul 6 23:48:52.095996 kubelet[3027]: I0706 23:48:52.095983 3027 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:48:52.097423 kubelet[3027]: I0706 23:48:52.097263 3027 kubelet.go:408] "Attempting to sync node with API server"
Jul 6 23:48:52.097423 kubelet[3027]: I0706 23:48:52.097287 3027 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 6 23:48:52.097423 kubelet[3027]: I0706 23:48:52.097307 3027 kubelet.go:314] "Adding apiserver pod source"
Jul 6 23:48:52.097423 kubelet[3027]: I0706 23:48:52.097320 3027 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 6 23:48:52.099962 kubelet[3027]: W0706 23:48:52.099920 3027 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-a-aa3e6ac533&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused
Jul 6 23:48:52.100297 kubelet[3027]: E0706 23:48:52.100276 3027 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-a-aa3e6ac533&limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:48:52.100466 kubelet[3027]: I0706 23:48:52.100451 3027 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 6 23:48:52.100848 kubelet[3027]: I0706 23:48:52.100831 3027 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 6 23:48:52.100980 kubelet[3027]: W0706 23:48:52.100970 3027 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 6 23:48:52.102262 kubelet[3027]: I0706 23:48:52.102241 3027 server.go:1274] "Started kubelet"
Jul 6 23:48:52.103513 kubelet[3027]: I0706 23:48:52.103492 3027 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 6 23:48:52.105200 kubelet[3027]: W0706 23:48:52.105115 3027 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused
Jul 6 23:48:52.105200 kubelet[3027]: E0706 23:48:52.105160 3027 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:48:52.106685 kubelet[3027]: I0706 23:48:52.106298 3027 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 6 23:48:52.106980 kubelet[3027]: I0706 23:48:52.106963 3027 server.go:449] "Adding debug handlers to kubelet server"
Jul 6 23:48:52.107581 kubelet[3027]: I0706 23:48:52.107531 3027 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 6 23:48:52.107724 kubelet[3027]: I0706 23:48:52.107706 3027 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 6 23:48:52.108686 kubelet[3027]: I0706 23:48:52.108672 3027 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 6 23:48:52.108956 kubelet[3027]: E0706 23:48:52.108935 3027 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-aa3e6ac533\" not found"
Jul 6 23:48:52.109369 kubelet[3027]: I0706 23:48:52.109344 3027 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 6 23:48:52.111269 kubelet[3027]: E0706 23:48:52.110635 3027 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.10:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.1-a-aa3e6ac533.184fce69633ca6c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.1-a-aa3e6ac533,UID:ci-4344.1.1-a-aa3e6ac533,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.1-a-aa3e6ac533,},FirstTimestamp:2025-07-06 23:48:52.102219456 +0000 UTC m=+0.234103722,LastTimestamp:2025-07-06 23:48:52.102219456 +0000 UTC m=+0.234103722,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.1-a-aa3e6ac533,}"
Jul 6 23:48:52.111369 kubelet[3027]: E0706 23:48:52.111319 3027 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-aa3e6ac533?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="200ms"
Jul 6 23:48:52.111462 kubelet[3027]: I0706 23:48:52.111448 3027 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 6 23:48:52.111566 kubelet[3027]: I0706 23:48:52.111542 3027 factory.go:221] Registration of the systemd container factory successfully
Jul 6 23:48:52.111627 kubelet[3027]: I0706 23:48:52.111612 3027 factory.go:219] Registration of the crio container factory
failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:48:52.112232 kubelet[3027]: I0706 23:48:52.111545 3027 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:48:52.114320 kubelet[3027]: W0706 23:48:52.113253 3027 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Jul 6 23:48:52.114320 kubelet[3027]: E0706 23:48:52.113293 3027 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:48:52.114320 kubelet[3027]: I0706 23:48:52.113393 3027 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:48:52.127302 kubelet[3027]: I0706 23:48:52.127279 3027 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:48:52.127302 kubelet[3027]: I0706 23:48:52.127294 3027 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:48:52.127302 kubelet[3027]: I0706 23:48:52.127310 3027 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:48:52.210143 kubelet[3027]: E0706 23:48:52.210099 3027 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-aa3e6ac533\" not found" Jul 6 23:48:52.310400 kubelet[3027]: E0706 23:48:52.310362 3027 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-aa3e6ac533\" not found" Jul 6 23:48:52.311873 kubelet[3027]: E0706 23:48:52.311786 3027 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-aa3e6ac533?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="400ms" Jul 6 23:48:52.410687 kubelet[3027]: E0706 23:48:52.410567 3027 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-aa3e6ac533\" not found" Jul 6 23:48:52.446913 kubelet[3027]: I0706 23:48:52.446858 3027 policy_none.go:49] "None policy: Start" Jul 6 23:48:52.447794 kubelet[3027]: I0706 23:48:52.447763 3027 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:48:52.447966 kubelet[3027]: I0706 23:48:52.447918 3027 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:48:52.448766 kubelet[3027]: I0706 23:48:52.448739 3027 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:48:52.450680 kubelet[3027]: I0706 23:48:52.450663 3027 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:48:52.452354 kubelet[3027]: I0706 23:48:52.450744 3027 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:48:52.452354 kubelet[3027]: I0706 23:48:52.450765 3027 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:48:52.452354 kubelet[3027]: E0706 23:48:52.450806 3027 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:48:52.452354 kubelet[3027]: W0706 23:48:52.451976 3027 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Jul 6 23:48:52.452354 kubelet[3027]: E0706 23:48:52.452048 3027 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:48:52.484775 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:48:52.498247 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:48:52.501249 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
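The `Created slice kubepods*.slice` entries above show systemd creating the per-QoS-class cgroup hierarchy, and later lines show per-pod slices such as `kubepods-burstable-pod7f024533c746201035083b5accca4980.slice`. A minimal sketch (not kubelet source) of how those slice names are derived from QoS class and pod UID:

```python
# Sketch only: reconstructs the naming pattern visible in the log above.
# With the systemd cgroup driver, Guaranteed pods sit directly under
# kubepods.slice, while Burstable and BestEffort pods get a
# kubepods-<qos>.slice parent.

def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    """Build a systemd slice name like the ones created in the log."""
    qos = qos_class.lower()
    if qos == "guaranteed":
        return f"kubepods-pod{pod_uid}.slice"
    return f"kubepods-{qos}-pod{pod_uid}.slice"

print(pod_slice_name("Burstable", "7f024533c746201035083b5accca4980"))
# kubepods-burstable-pod7f024533c746201035083b5accca4980.slice
```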
Jul 6 23:48:52.510861 kubelet[3027]: I0706 23:48:52.510836 3027 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:48:52.511063 kubelet[3027]: I0706 23:48:52.511037 3027 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:48:52.511100 kubelet[3027]: I0706 23:48:52.511058 3027 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:48:52.511624 kubelet[3027]: I0706 23:48:52.511607 3027 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:48:52.514370 kubelet[3027]: E0706 23:48:52.514340 3027 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.1-a-aa3e6ac533\" not found" Jul 6 23:48:52.560483 systemd[1]: Created slice kubepods-burstable-pod7f024533c746201035083b5accca4980.slice - libcontainer container kubepods-burstable-pod7f024533c746201035083b5accca4980.slice. Jul 6 23:48:52.584608 systemd[1]: Created slice kubepods-burstable-pod10caeb34e013a47d97115b70e16155a1.slice - libcontainer container kubepods-burstable-pod10caeb34e013a47d97115b70e16155a1.slice. Jul 6 23:48:52.596701 systemd[1]: Created slice kubepods-burstable-pod93ae8b3eb9e4e5bef00ca7029214a5ba.slice - libcontainer container kubepods-burstable-pod93ae8b3eb9e4e5bef00ca7029214a5ba.slice. 
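The `Failed to ensure lease exists, will retry` errors above show the retry interval doubling (`interval="200ms"`, then `interval="400ms"`) while the API server at 10.200.20.10:6443 refuses connections — a capped exponential backoff. A sketch of that pattern (the initial value and cap here are illustrative, not kubelet's exact constants):

```python
# Sketch of capped exponential backoff, matching the doubling retry
# intervals in the lease-controller errors above. Parameters are
# illustrative assumptions, not values taken from kubelet source.

def backoff_intervals(initial_ms: int = 200, factor: float = 2.0,
                      cap_ms: int = 7000, steps: int = 5):
    """Yield successive retry intervals in milliseconds, capped at cap_ms."""
    interval = float(initial_ms)
    for _ in range(steps):
        yield int(min(interval, cap_ms))
        interval *= factor

print(list(backoff_intervals()))  # [200, 400, 800, 1600, 3200]
```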
Jul 6 23:48:52.612939 kubelet[3027]: I0706 23:48:52.612884 3027 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:52.613326 kubelet[3027]: E0706 23:48:52.613294 3027 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:52.615452 kubelet[3027]: I0706 23:48:52.615425 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f024533c746201035083b5accca4980-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-a-aa3e6ac533\" (UID: \"7f024533c746201035083b5accca4980\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:52.615503 kubelet[3027]: I0706 23:48:52.615456 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/10caeb34e013a47d97115b70e16155a1-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-a-aa3e6ac533\" (UID: \"10caeb34e013a47d97115b70e16155a1\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:52.615503 kubelet[3027]: I0706 23:48:52.615471 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/10caeb34e013a47d97115b70e16155a1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-a-aa3e6ac533\" (UID: \"10caeb34e013a47d97115b70e16155a1\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:52.615503 kubelet[3027]: I0706 23:48:52.615482 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/93ae8b3eb9e4e5bef00ca7029214a5ba-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-a-aa3e6ac533\" (UID: \"93ae8b3eb9e4e5bef00ca7029214a5ba\") " pod="kube-system/kube-scheduler-ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:52.615503 kubelet[3027]: I0706 23:48:52.615492 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f024533c746201035083b5accca4980-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-a-aa3e6ac533\" (UID: \"7f024533c746201035083b5accca4980\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:52.615503 kubelet[3027]: I0706 23:48:52.615501 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f024533c746201035083b5accca4980-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-a-aa3e6ac533\" (UID: \"7f024533c746201035083b5accca4980\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:52.615579 kubelet[3027]: I0706 23:48:52.615509 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10caeb34e013a47d97115b70e16155a1-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-aa3e6ac533\" (UID: \"10caeb34e013a47d97115b70e16155a1\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:52.615579 kubelet[3027]: I0706 23:48:52.615518 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/10caeb34e013a47d97115b70e16155a1-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-aa3e6ac533\" (UID: \"10caeb34e013a47d97115b70e16155a1\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:52.615579 kubelet[3027]: I0706 23:48:52.615528 3027 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/10caeb34e013a47d97115b70e16155a1-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-a-aa3e6ac533\" (UID: \"10caeb34e013a47d97115b70e16155a1\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:52.713144 kubelet[3027]: E0706 23:48:52.713005 3027 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-aa3e6ac533?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="800ms" Jul 6 23:48:52.815084 kubelet[3027]: I0706 23:48:52.815039 3027 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:52.815498 kubelet[3027]: E0706 23:48:52.815474 3027 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:52.883553 containerd[1913]: time="2025-07-06T23:48:52.883452848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-a-aa3e6ac533,Uid:7f024533c746201035083b5accca4980,Namespace:kube-system,Attempt:0,}" Jul 6 23:48:52.896201 containerd[1913]: time="2025-07-06T23:48:52.896097207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-a-aa3e6ac533,Uid:10caeb34e013a47d97115b70e16155a1,Namespace:kube-system,Attempt:0,}" Jul 6 23:48:52.898940 containerd[1913]: time="2025-07-06T23:48:52.898842642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-a-aa3e6ac533,Uid:93ae8b3eb9e4e5bef00ca7029214a5ba,Namespace:kube-system,Attempt:0,}" Jul 6 23:48:52.973692 containerd[1913]: time="2025-07-06T23:48:52.973452545Z" level=info msg="connecting to shim 
f157398ca8a49164323ded0907d04ac7577d79d19cbb0da5b0c1d7f6e7738e99" address="unix:///run/containerd/s/cd08859d9cc86aa7bb5e90954e45e5a131dc4eb2eafac607e972088c0426b529" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:48:52.988477 containerd[1913]: time="2025-07-06T23:48:52.988435345Z" level=info msg="connecting to shim b077456cc7b6afceacb6c8343688adcb74f784f928fbdceb63665d88a2218d70" address="unix:///run/containerd/s/07e39d32888c55e9414444c6ffc8bb17e56d88695f3d77bcaf8c5231d53101e9" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:48:52.997996 systemd[1]: Started cri-containerd-f157398ca8a49164323ded0907d04ac7577d79d19cbb0da5b0c1d7f6e7738e99.scope - libcontainer container f157398ca8a49164323ded0907d04ac7577d79d19cbb0da5b0c1d7f6e7738e99. Jul 6 23:48:53.015869 containerd[1913]: time="2025-07-06T23:48:53.015794360Z" level=info msg="connecting to shim be56a35e28fec04ee19ca1f313ad065e3e2d4ed935b4dd688d5dc8e7ad9ced32" address="unix:///run/containerd/s/69f909d0bd23931435f48b93c8e783bf36f00697a32400024eeba44aca5b0816" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:48:53.017429 systemd[1]: Started cri-containerd-b077456cc7b6afceacb6c8343688adcb74f784f928fbdceb63665d88a2218d70.scope - libcontainer container b077456cc7b6afceacb6c8343688adcb74f784f928fbdceb63665d88a2218d70. Jul 6 23:48:53.053469 systemd[1]: Started cri-containerd-be56a35e28fec04ee19ca1f313ad065e3e2d4ed935b4dd688d5dc8e7ad9ced32.scope - libcontainer container be56a35e28fec04ee19ca1f313ad065e3e2d4ed935b4dd688d5dc8e7ad9ced32. 
Jul 6 23:48:53.061515 containerd[1913]: time="2025-07-06T23:48:53.061426080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-a-aa3e6ac533,Uid:7f024533c746201035083b5accca4980,Namespace:kube-system,Attempt:0,} returns sandbox id \"f157398ca8a49164323ded0907d04ac7577d79d19cbb0da5b0c1d7f6e7738e99\"" Jul 6 23:48:53.067125 containerd[1913]: time="2025-07-06T23:48:53.067005622Z" level=info msg="CreateContainer within sandbox \"f157398ca8a49164323ded0907d04ac7577d79d19cbb0da5b0c1d7f6e7738e99\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:48:53.068123 containerd[1913]: time="2025-07-06T23:48:53.067812494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-a-aa3e6ac533,Uid:93ae8b3eb9e4e5bef00ca7029214a5ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"b077456cc7b6afceacb6c8343688adcb74f784f928fbdceb63665d88a2218d70\"" Jul 6 23:48:53.071602 containerd[1913]: time="2025-07-06T23:48:53.071568863Z" level=info msg="CreateContainer within sandbox \"b077456cc7b6afceacb6c8343688adcb74f784f928fbdceb63665d88a2218d70\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:48:53.085499 kubelet[3027]: W0706 23:48:53.085414 3027 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-a-aa3e6ac533&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Jul 6 23:48:53.085785 kubelet[3027]: E0706 23:48:53.085506 3027 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-a-aa3e6ac533&limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:48:53.107535 containerd[1913]: 
time="2025-07-06T23:48:53.107491170Z" level=info msg="Container 6e8c9f848183eb5287be0cc19d30a1b6d9f3611b3b723aa14b5b5a6c0ffcb2b4: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:48:53.112764 containerd[1913]: time="2025-07-06T23:48:53.112653382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-a-aa3e6ac533,Uid:10caeb34e013a47d97115b70e16155a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"be56a35e28fec04ee19ca1f313ad065e3e2d4ed935b4dd688d5dc8e7ad9ced32\"" Jul 6 23:48:53.115941 containerd[1913]: time="2025-07-06T23:48:53.115914307Z" level=info msg="CreateContainer within sandbox \"be56a35e28fec04ee19ca1f313ad065e3e2d4ed935b4dd688d5dc8e7ad9ced32\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:48:53.118951 containerd[1913]: time="2025-07-06T23:48:53.118922174Z" level=info msg="Container f5a5651aa983cd636a8d32decf2276ae15817b7a70658158acf0f68a6fb43b02: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:48:53.135065 containerd[1913]: time="2025-07-06T23:48:53.134775392Z" level=info msg="CreateContainer within sandbox \"f157398ca8a49164323ded0907d04ac7577d79d19cbb0da5b0c1d7f6e7738e99\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6e8c9f848183eb5287be0cc19d30a1b6d9f3611b3b723aa14b5b5a6c0ffcb2b4\"" Jul 6 23:48:53.135567 containerd[1913]: time="2025-07-06T23:48:53.135546379Z" level=info msg="StartContainer for \"6e8c9f848183eb5287be0cc19d30a1b6d9f3611b3b723aa14b5b5a6c0ffcb2b4\"" Jul 6 23:48:53.136672 containerd[1913]: time="2025-07-06T23:48:53.136590638Z" level=info msg="connecting to shim 6e8c9f848183eb5287be0cc19d30a1b6d9f3611b3b723aa14b5b5a6c0ffcb2b4" address="unix:///run/containerd/s/cd08859d9cc86aa7bb5e90954e45e5a131dc4eb2eafac607e972088c0426b529" protocol=ttrpc version=3 Jul 6 23:48:53.140057 kubelet[3027]: W0706 23:48:53.139969 3027 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Jul 6 23:48:53.140057 kubelet[3027]: E0706 23:48:53.140034 3027 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:48:53.151339 systemd[1]: Started cri-containerd-6e8c9f848183eb5287be0cc19d30a1b6d9f3611b3b723aa14b5b5a6c0ffcb2b4.scope - libcontainer container 6e8c9f848183eb5287be0cc19d30a1b6d9f3611b3b723aa14b5b5a6c0ffcb2b4. Jul 6 23:48:53.154213 containerd[1913]: time="2025-07-06T23:48:53.153909028Z" level=info msg="CreateContainer within sandbox \"b077456cc7b6afceacb6c8343688adcb74f784f928fbdceb63665d88a2218d70\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f5a5651aa983cd636a8d32decf2276ae15817b7a70658158acf0f68a6fb43b02\"" Jul 6 23:48:53.155825 containerd[1913]: time="2025-07-06T23:48:53.155798892Z" level=info msg="StartContainer for \"f5a5651aa983cd636a8d32decf2276ae15817b7a70658158acf0f68a6fb43b02\"" Jul 6 23:48:53.158167 containerd[1913]: time="2025-07-06T23:48:53.158139603Z" level=info msg="connecting to shim f5a5651aa983cd636a8d32decf2276ae15817b7a70658158acf0f68a6fb43b02" address="unix:///run/containerd/s/07e39d32888c55e9414444c6ffc8bb17e56d88695f3d77bcaf8c5231d53101e9" protocol=ttrpc version=3 Jul 6 23:48:53.168172 containerd[1913]: time="2025-07-06T23:48:53.168136183Z" level=info msg="Container 268d78ddfa7f50436500a45a8ff617af62783a84e23df389f15b27a6571f5201: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:48:53.181380 systemd[1]: Started cri-containerd-f5a5651aa983cd636a8d32decf2276ae15817b7a70658158acf0f68a6fb43b02.scope - libcontainer container 
f5a5651aa983cd636a8d32decf2276ae15817b7a70658158acf0f68a6fb43b02. Jul 6 23:48:53.188279 containerd[1913]: time="2025-07-06T23:48:53.188244677Z" level=info msg="CreateContainer within sandbox \"be56a35e28fec04ee19ca1f313ad065e3e2d4ed935b4dd688d5dc8e7ad9ced32\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"268d78ddfa7f50436500a45a8ff617af62783a84e23df389f15b27a6571f5201\"" Jul 6 23:48:53.188737 containerd[1913]: time="2025-07-06T23:48:53.188713931Z" level=info msg="StartContainer for \"268d78ddfa7f50436500a45a8ff617af62783a84e23df389f15b27a6571f5201\"" Jul 6 23:48:53.190074 containerd[1913]: time="2025-07-06T23:48:53.189795627Z" level=info msg="connecting to shim 268d78ddfa7f50436500a45a8ff617af62783a84e23df389f15b27a6571f5201" address="unix:///run/containerd/s/69f909d0bd23931435f48b93c8e783bf36f00697a32400024eeba44aca5b0816" protocol=ttrpc version=3 Jul 6 23:48:53.212774 containerd[1913]: time="2025-07-06T23:48:53.212684696Z" level=info msg="StartContainer for \"6e8c9f848183eb5287be0cc19d30a1b6d9f3611b3b723aa14b5b5a6c0ffcb2b4\" returns successfully" Jul 6 23:48:53.215445 systemd[1]: Started cri-containerd-268d78ddfa7f50436500a45a8ff617af62783a84e23df389f15b27a6571f5201.scope - libcontainer container 268d78ddfa7f50436500a45a8ff617af62783a84e23df389f15b27a6571f5201. 
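The kubelet records embedded in this journal use the klog header format, e.g. `E0706 23:48:52.108935 3027 kubelet_node_status.go:453] "..."` (severity letter, MMDD date, time, PID, source file:line, message). A small parser sketch for pulling those fields out of lines like the ones above (field layout assumed from this log, not from a klog specification):

```python
import re

# Sketch: parse a klog-format record as seen in this journal, e.g.
# 'E0706 23:48:52.108935 3027 kubelet_node_status.go:453] "..."'.
KLOG_RE = re.compile(
    r'^(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d+)'
    r' +(?P<pid>\d+) (?P<src>[\w.]+:\d+)\] (?P<msg>.*)$'
)

def parse_klog(line: str) -> dict:
    """Return the header fields of one klog line, or raise on mismatch."""
    m = KLOG_RE.match(line)
    if not m:
        raise ValueError(f"not a klog line: {line!r}")
    return m.groupdict()

rec = parse_klog('E0706 23:48:52.108935 3027 kubelet_node_status.go:453] '
                 '"Error getting the current node from lister"')
print(rec["sev"], rec["src"])  # E kubelet_node_status.go:453
```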
Jul 6 23:48:53.221231 kubelet[3027]: I0706 23:48:53.221121 3027 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:53.222022 kubelet[3027]: E0706 23:48:53.221991 3027 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:53.237366 containerd[1913]: time="2025-07-06T23:48:53.237171440Z" level=info msg="StartContainer for \"f5a5651aa983cd636a8d32decf2276ae15817b7a70658158acf0f68a6fb43b02\" returns successfully" Jul 6 23:48:53.286703 containerd[1913]: time="2025-07-06T23:48:53.286662960Z" level=info msg="StartContainer for \"268d78ddfa7f50436500a45a8ff617af62783a84e23df389f15b27a6571f5201\" returns successfully" Jul 6 23:48:54.025946 kubelet[3027]: I0706 23:48:54.025912 3027 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:54.539405 kubelet[3027]: E0706 23:48:54.539359 3027 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.1-a-aa3e6ac533\" not found" node="ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:54.581542 kubelet[3027]: I0706 23:48:54.581467 3027 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:54.581542 kubelet[3027]: E0706 23:48:54.581510 3027 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4344.1.1-a-aa3e6ac533\": node \"ci-4344.1.1-a-aa3e6ac533\" not found" Jul 6 23:48:54.592770 kubelet[3027]: E0706 23:48:54.592733 3027 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-aa3e6ac533\" not found" Jul 6 23:48:54.692979 kubelet[3027]: E0706 23:48:54.692884 3027 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-aa3e6ac533\" not found" Jul 6 
23:48:54.795209 kubelet[3027]: E0706 23:48:54.793460 3027 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-aa3e6ac533\" not found" Jul 6 23:48:54.894218 kubelet[3027]: E0706 23:48:54.894054 3027 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-aa3e6ac533\" not found" Jul 6 23:48:54.994567 kubelet[3027]: E0706 23:48:54.994519 3027 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-aa3e6ac533\" not found" Jul 6 23:48:55.095078 kubelet[3027]: E0706 23:48:55.095042 3027 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-aa3e6ac533\" not found" Jul 6 23:48:55.791202 kubelet[3027]: W0706 23:48:55.790762 3027 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:48:56.106516 kubelet[3027]: I0706 23:48:56.106470 3027 apiserver.go:52] "Watching apiserver" Jul 6 23:48:56.111720 kubelet[3027]: I0706 23:48:56.111695 3027 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:48:56.808928 systemd[1]: Reload requested from client PID 3296 ('systemctl') (unit session-9.scope)... Jul 6 23:48:56.808942 systemd[1]: Reloading... Jul 6 23:48:56.879238 zram_generator::config[3342]: No configuration found. Jul 6 23:48:56.948929 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:48:57.044598 systemd[1]: Reloading finished in 235 ms. Jul 6 23:48:57.071074 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:48:57.082968 systemd[1]: kubelet.service: Deactivated successfully. 
Jul 6 23:48:57.083223 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:48:57.083287 systemd[1]: kubelet.service: Consumed 494ms CPU time, 128M memory peak. Jul 6 23:48:57.084902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:48:57.192348 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:48:57.197637 (kubelet)[3406]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:48:57.292889 kubelet[3406]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:48:57.292889 kubelet[3406]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 6 23:48:57.292889 kubelet[3406]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
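The deprecation warnings above (`--container-runtime-endpoint`, `--volume-plugin-dir`) say these flags should move into the file passed via the kubelet's `--config` flag. A minimal sketch of that config fragment, assuming KubeletConfiguration v1beta1 field names and this node's containerd socket and flexvolume path; verify field names against the reference for your kubelet version before use:

```yaml
# Sketch only: moving the deprecated flags from the log into the kubelet
# config file. Field names and values are assumptions, not taken from
# this node's actual configuration.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
```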
Jul 6 23:48:57.292889 kubelet[3406]: I0706 23:48:57.292436 3406 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:48:57.298658 kubelet[3406]: I0706 23:48:57.298627 3406 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:48:57.298830 kubelet[3406]: I0706 23:48:57.298819 3406 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:48:57.299089 kubelet[3406]: I0706 23:48:57.299071 3406 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:48:57.300157 kubelet[3406]: I0706 23:48:57.300132 3406 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 6 23:48:57.301694 kubelet[3406]: I0706 23:48:57.301656 3406 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:48:57.304734 kubelet[3406]: I0706 23:48:57.304712 3406 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 6 23:48:57.306980 kubelet[3406]: I0706 23:48:57.306964 3406 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:48:57.307134 kubelet[3406]: I0706 23:48:57.307120 3406 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:48:57.307245 kubelet[3406]: I0706 23:48:57.307222 3406 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:48:57.307358 kubelet[3406]: I0706 23:48:57.307243 3406 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-a-aa3e6ac533","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMa
nagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:48:57.307417 kubelet[3406]: I0706 23:48:57.307363 3406 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:48:57.307417 kubelet[3406]: I0706 23:48:57.307370 3406 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:48:57.307417 kubelet[3406]: I0706 23:48:57.307398 3406 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:48:57.307490 kubelet[3406]: I0706 23:48:57.307478 3406 kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:48:57.307490 kubelet[3406]: I0706 23:48:57.307490 3406 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:48:57.307538 kubelet[3406]: I0706 23:48:57.307510 3406 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:48:57.307538 kubelet[3406]: I0706 23:48:57.307520 3406 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:48:57.315649 kubelet[3406]: I0706 23:48:57.315615 3406 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 6 23:48:57.316810 kubelet[3406]: I0706 23:48:57.316795 3406 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:48:57.318469 kubelet[3406]: I0706 23:48:57.318446 3406 server.go:1274] "Started kubelet" Jul 6 23:48:57.320597 kubelet[3406]: I0706 23:48:57.320448 3406 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:48:57.321727 kubelet[3406]: I0706 23:48:57.321654 3406 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:48:57.322273 kubelet[3406]: I0706 23:48:57.322256 3406 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:48:57.322928 kubelet[3406]: I0706 23:48:57.322887 3406 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:48:57.323854 kubelet[3406]: I0706 23:48:57.323829 3406 server.go:236] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:48:57.326116 kubelet[3406]: I0706 23:48:57.326093 3406 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:48:57.326996 kubelet[3406]: I0706 23:48:57.326971 3406 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:48:57.327058 kubelet[3406]: I0706 23:48:57.327045 3406 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:48:57.327058 kubelet[3406]: I0706 23:48:57.327133 3406 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:48:57.330819 kubelet[3406]: I0706 23:48:57.329529 3406 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:48:57.330819 kubelet[3406]: I0706 23:48:57.329700 3406 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:48:57.330819 kubelet[3406]: E0706 23:48:57.329855 3406 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:48:57.331206 kubelet[3406]: I0706 23:48:57.331150 3406 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:48:57.332137 kubelet[3406]: I0706 23:48:57.331960 3406 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:48:57.332137 kubelet[3406]: I0706 23:48:57.331985 3406 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:48:57.332137 kubelet[3406]: I0706 23:48:57.331999 3406 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:48:57.332137 kubelet[3406]: E0706 23:48:57.332031 3406 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:48:57.335321 kubelet[3406]: I0706 23:48:57.335300 3406 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:48:57.375654 kubelet[3406]: I0706 23:48:57.375627 3406 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:48:57.375654 kubelet[3406]: I0706 23:48:57.375644 3406 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:48:57.375654 kubelet[3406]: I0706 23:48:57.375664 3406 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:48:57.375817 kubelet[3406]: I0706 23:48:57.375788 3406 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:48:57.375817 kubelet[3406]: I0706 23:48:57.375795 3406 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:48:57.375847 kubelet[3406]: I0706 23:48:57.375818 3406 policy_none.go:49] "None policy: Start" Jul 6 23:48:57.376415 kubelet[3406]: I0706 23:48:57.376315 3406 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:48:57.376488 kubelet[3406]: I0706 23:48:57.376423 3406 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:48:57.376554 kubelet[3406]: I0706 23:48:57.376538 3406 state_mem.go:75] "Updated machine memory state" Jul 6 23:48:57.380822 kubelet[3406]: I0706 23:48:57.380801 3406 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:48:57.380968 kubelet[3406]: I0706 23:48:57.380949 3406 eviction_manager.go:189] "Eviction manager: 
starting control loop" Jul 6 23:48:57.381012 kubelet[3406]: I0706 23:48:57.380965 3406 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:48:57.381663 kubelet[3406]: I0706 23:48:57.381555 3406 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:48:57.441161 kubelet[3406]: W0706 23:48:57.441062 3406 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:48:57.445438 kubelet[3406]: W0706 23:48:57.445398 3406 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:48:57.446612 kubelet[3406]: W0706 23:48:57.446591 3406 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:48:57.446739 kubelet[3406]: E0706 23:48:57.446721 3406 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4344.1.1-a-aa3e6ac533\" already exists" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:57.491420 kubelet[3406]: I0706 23:48:57.491248 3406 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:57.503327 kubelet[3406]: I0706 23:48:57.503251 3406 kubelet_node_status.go:111] "Node was previously registered" node="ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:57.503471 kubelet[3406]: I0706 23:48:57.503341 3406 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:57.527829 kubelet[3406]: I0706 23:48:57.527746 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/10caeb34e013a47d97115b70e16155a1-kubeconfig\") pod 
\"kube-controller-manager-ci-4344.1.1-a-aa3e6ac533\" (UID: \"10caeb34e013a47d97115b70e16155a1\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:57.527829 kubelet[3406]: I0706 23:48:57.527783 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10caeb34e013a47d97115b70e16155a1-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-aa3e6ac533\" (UID: \"10caeb34e013a47d97115b70e16155a1\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:57.527829 kubelet[3406]: I0706 23:48:57.527801 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/10caeb34e013a47d97115b70e16155a1-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-a-aa3e6ac533\" (UID: \"10caeb34e013a47d97115b70e16155a1\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:57.528008 kubelet[3406]: I0706 23:48:57.527846 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/10caeb34e013a47d97115b70e16155a1-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-aa3e6ac533\" (UID: \"10caeb34e013a47d97115b70e16155a1\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:57.528008 kubelet[3406]: I0706 23:48:57.527892 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/10caeb34e013a47d97115b70e16155a1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-a-aa3e6ac533\" (UID: \"10caeb34e013a47d97115b70e16155a1\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:57.528008 kubelet[3406]: I0706 23:48:57.527911 3406 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/93ae8b3eb9e4e5bef00ca7029214a5ba-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-a-aa3e6ac533\" (UID: \"93ae8b3eb9e4e5bef00ca7029214a5ba\") " pod="kube-system/kube-scheduler-ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:57.528008 kubelet[3406]: I0706 23:48:57.527930 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f024533c746201035083b5accca4980-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-a-aa3e6ac533\" (UID: \"7f024533c746201035083b5accca4980\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:57.528008 kubelet[3406]: I0706 23:48:57.527949 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f024533c746201035083b5accca4980-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-a-aa3e6ac533\" (UID: \"7f024533c746201035083b5accca4980\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:57.528095 kubelet[3406]: I0706 23:48:57.527959 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f024533c746201035083b5accca4980-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-a-aa3e6ac533\" (UID: \"7f024533c746201035083b5accca4980\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-aa3e6ac533" Jul 6 23:48:57.826354 sudo[3438]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 6 23:48:57.826911 sudo[3438]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 6 23:48:58.182092 sudo[3438]: pam_unix(sudo:session): session closed for user root Jul 6 23:48:58.308461 kubelet[3406]: I0706 23:48:58.308418 3406 
apiserver.go:52] "Watching apiserver" Jul 6 23:48:58.327423 kubelet[3406]: I0706 23:48:58.327375 3406 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:48:58.392074 kubelet[3406]: I0706 23:48:58.392015 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.1-a-aa3e6ac533" podStartSLOduration=1.391978467 podStartE2EDuration="1.391978467s" podCreationTimestamp="2025-07-06 23:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:48:58.391798538 +0000 UTC m=+1.190460689" watchObservedRunningTime="2025-07-06 23:48:58.391978467 +0000 UTC m=+1.190640618" Jul 6 23:48:58.413198 kubelet[3406]: I0706 23:48:58.412887 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.1-a-aa3e6ac533" podStartSLOduration=1.412870576 podStartE2EDuration="1.412870576s" podCreationTimestamp="2025-07-06 23:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:48:58.4026722 +0000 UTC m=+1.201334351" watchObservedRunningTime="2025-07-06 23:48:58.412870576 +0000 UTC m=+1.211532727" Jul 6 23:48:58.424298 kubelet[3406]: I0706 23:48:58.424228 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-aa3e6ac533" podStartSLOduration=3.424212711 podStartE2EDuration="3.424212711s" podCreationTimestamp="2025-07-06 23:48:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:48:58.413371374 +0000 UTC m=+1.212033557" watchObservedRunningTime="2025-07-06 23:48:58.424212711 +0000 UTC m=+1.222874862" Jul 6 23:48:59.320738 sudo[2414]: pam_unix(sudo:session): session closed for 
user root Jul 6 23:48:59.392552 sshd[2401]: Connection closed by 10.200.16.10 port 43588 Jul 6 23:48:59.393196 sshd-session[2396]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:59.396932 systemd[1]: sshd@6-10.200.20.10:22-10.200.16.10:43588.service: Deactivated successfully. Jul 6 23:48:59.399548 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:48:59.399819 systemd[1]: session-9.scope: Consumed 3.265s CPU time, 266.4M memory peak. Jul 6 23:48:59.401121 systemd-logind[1894]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:48:59.403657 systemd-logind[1894]: Removed session 9. Jul 6 23:49:01.836434 kubelet[3406]: I0706 23:49:01.836292 3406 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:49:01.837466 containerd[1913]: time="2025-07-06T23:49:01.837438352Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:49:01.838206 kubelet[3406]: I0706 23:49:01.837718 3406 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:49:02.487202 systemd[1]: Created slice kubepods-burstable-pod58b4afee_9e2e_4939_876d_fd2fe4409b78.slice - libcontainer container kubepods-burstable-pod58b4afee_9e2e_4939_876d_fd2fe4409b78.slice. Jul 6 23:49:02.494829 systemd[1]: Created slice kubepods-besteffort-pod51bfb8ca_a7d1_4795_be38_6e395a11b0c8.slice - libcontainer container kubepods-besteffort-pod51bfb8ca_a7d1_4795_be38_6e395a11b0c8.slice. 
Jul 6 23:49:02.558228 kubelet[3406]: I0706 23:49:02.557917 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-cni-path\") pod \"cilium-pcdh5\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " pod="kube-system/cilium-pcdh5" Jul 6 23:49:02.558228 kubelet[3406]: I0706 23:49:02.557962 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51bfb8ca-a7d1-4795-be38-6e395a11b0c8-xtables-lock\") pod \"kube-proxy-kjf5w\" (UID: \"51bfb8ca-a7d1-4795-be38-6e395a11b0c8\") " pod="kube-system/kube-proxy-kjf5w" Jul 6 23:49:02.558228 kubelet[3406]: I0706 23:49:02.557978 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-cilium-cgroup\") pod \"cilium-pcdh5\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " pod="kube-system/cilium-pcdh5" Jul 6 23:49:02.558228 kubelet[3406]: I0706 23:49:02.557990 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-xtables-lock\") pod \"cilium-pcdh5\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " pod="kube-system/cilium-pcdh5" Jul 6 23:49:02.558228 kubelet[3406]: I0706 23:49:02.558001 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-lib-modules\") pod \"cilium-pcdh5\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " pod="kube-system/cilium-pcdh5" Jul 6 23:49:02.558228 kubelet[3406]: I0706 23:49:02.558012 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-zcqj8\" (UniqueName: \"kubernetes.io/projected/58b4afee-9e2e-4939-876d-fd2fe4409b78-kube-api-access-zcqj8\") pod \"cilium-pcdh5\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " pod="kube-system/cilium-pcdh5" Jul 6 23:49:02.558470 kubelet[3406]: I0706 23:49:02.558022 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58b4afee-9e2e-4939-876d-fd2fe4409b78-clustermesh-secrets\") pod \"cilium-pcdh5\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " pod="kube-system/cilium-pcdh5" Jul 6 23:49:02.558470 kubelet[3406]: I0706 23:49:02.558033 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-host-proc-sys-kernel\") pod \"cilium-pcdh5\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " pod="kube-system/cilium-pcdh5" Jul 6 23:49:02.558470 kubelet[3406]: I0706 23:49:02.558042 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-bpf-maps\") pod \"cilium-pcdh5\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " pod="kube-system/cilium-pcdh5" Jul 6 23:49:02.558470 kubelet[3406]: I0706 23:49:02.558050 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-hostproc\") pod \"cilium-pcdh5\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " pod="kube-system/cilium-pcdh5" Jul 6 23:49:02.558470 kubelet[3406]: I0706 23:49:02.558058 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58b4afee-9e2e-4939-876d-fd2fe4409b78-hubble-tls\") pod 
\"cilium-pcdh5\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " pod="kube-system/cilium-pcdh5" Jul 6 23:49:02.558470 kubelet[3406]: I0706 23:49:02.558067 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51bfb8ca-a7d1-4795-be38-6e395a11b0c8-lib-modules\") pod \"kube-proxy-kjf5w\" (UID: \"51bfb8ca-a7d1-4795-be38-6e395a11b0c8\") " pod="kube-system/kube-proxy-kjf5w" Jul 6 23:49:02.558559 kubelet[3406]: I0706 23:49:02.558078 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58b4afee-9e2e-4939-876d-fd2fe4409b78-cilium-config-path\") pod \"cilium-pcdh5\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " pod="kube-system/cilium-pcdh5" Jul 6 23:49:02.558559 kubelet[3406]: I0706 23:49:02.558088 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-host-proc-sys-net\") pod \"cilium-pcdh5\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " pod="kube-system/cilium-pcdh5" Jul 6 23:49:02.558559 kubelet[3406]: I0706 23:49:02.558099 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr8ng\" (UniqueName: \"kubernetes.io/projected/51bfb8ca-a7d1-4795-be38-6e395a11b0c8-kube-api-access-lr8ng\") pod \"kube-proxy-kjf5w\" (UID: \"51bfb8ca-a7d1-4795-be38-6e395a11b0c8\") " pod="kube-system/kube-proxy-kjf5w" Jul 6 23:49:02.558559 kubelet[3406]: I0706 23:49:02.558109 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-etc-cni-netd\") pod \"cilium-pcdh5\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " 
pod="kube-system/cilium-pcdh5" Jul 6 23:49:02.558559 kubelet[3406]: I0706 23:49:02.558130 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/51bfb8ca-a7d1-4795-be38-6e395a11b0c8-kube-proxy\") pod \"kube-proxy-kjf5w\" (UID: \"51bfb8ca-a7d1-4795-be38-6e395a11b0c8\") " pod="kube-system/kube-proxy-kjf5w" Jul 6 23:49:02.558639 kubelet[3406]: I0706 23:49:02.558143 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-cilium-run\") pod \"cilium-pcdh5\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " pod="kube-system/cilium-pcdh5" Jul 6 23:49:02.671402 kubelet[3406]: E0706 23:49:02.671368 3406 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 6 23:49:02.671402 kubelet[3406]: E0706 23:49:02.671400 3406 projected.go:194] Error preparing data for projected volume kube-api-access-zcqj8 for pod kube-system/cilium-pcdh5: configmap "kube-root-ca.crt" not found Jul 6 23:49:02.671402 kubelet[3406]: E0706 23:49:02.671447 3406 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/58b4afee-9e2e-4939-876d-fd2fe4409b78-kube-api-access-zcqj8 podName:58b4afee-9e2e-4939-876d-fd2fe4409b78 nodeName:}" failed. No retries permitted until 2025-07-06 23:49:03.171427231 +0000 UTC m=+5.970089382 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zcqj8" (UniqueName: "kubernetes.io/projected/58b4afee-9e2e-4939-876d-fd2fe4409b78-kube-api-access-zcqj8") pod "cilium-pcdh5" (UID: "58b4afee-9e2e-4939-876d-fd2fe4409b78") : configmap "kube-root-ca.crt" not found Jul 6 23:49:02.677493 kubelet[3406]: E0706 23:49:02.676789 3406 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 6 23:49:02.677493 kubelet[3406]: E0706 23:49:02.676818 3406 projected.go:194] Error preparing data for projected volume kube-api-access-lr8ng for pod kube-system/kube-proxy-kjf5w: configmap "kube-root-ca.crt" not found Jul 6 23:49:02.677493 kubelet[3406]: E0706 23:49:02.676859 3406 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/51bfb8ca-a7d1-4795-be38-6e395a11b0c8-kube-api-access-lr8ng podName:51bfb8ca-a7d1-4795-be38-6e395a11b0c8 nodeName:}" failed. No retries permitted until 2025-07-06 23:49:03.176841493 +0000 UTC m=+5.975503652 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lr8ng" (UniqueName: "kubernetes.io/projected/51bfb8ca-a7d1-4795-be38-6e395a11b0c8-kube-api-access-lr8ng") pod "kube-proxy-kjf5w" (UID: "51bfb8ca-a7d1-4795-be38-6e395a11b0c8") : configmap "kube-root-ca.crt" not found Jul 6 23:49:02.920627 systemd[1]: Created slice kubepods-besteffort-pod062b27d8_3a8e_497b_93b7_a26ed55682bc.slice - libcontainer container kubepods-besteffort-pod062b27d8_3a8e_497b_93b7_a26ed55682bc.slice. 
Jul 6 23:49:02.961148 kubelet[3406]: I0706 23:49:02.961079 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/062b27d8-3a8e-497b-93b7-a26ed55682bc-cilium-config-path\") pod \"cilium-operator-5d85765b45-4xbfr\" (UID: \"062b27d8-3a8e-497b-93b7-a26ed55682bc\") " pod="kube-system/cilium-operator-5d85765b45-4xbfr" Jul 6 23:49:02.961148 kubelet[3406]: I0706 23:49:02.961152 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsgx2\" (UniqueName: \"kubernetes.io/projected/062b27d8-3a8e-497b-93b7-a26ed55682bc-kube-api-access-lsgx2\") pod \"cilium-operator-5d85765b45-4xbfr\" (UID: \"062b27d8-3a8e-497b-93b7-a26ed55682bc\") " pod="kube-system/cilium-operator-5d85765b45-4xbfr" Jul 6 23:49:03.224160 containerd[1913]: time="2025-07-06T23:49:03.224034574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-4xbfr,Uid:062b27d8-3a8e-497b-93b7-a26ed55682bc,Namespace:kube-system,Attempt:0,}" Jul 6 23:49:03.279150 containerd[1913]: time="2025-07-06T23:49:03.279044166Z" level=info msg="connecting to shim 245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5" address="unix:///run/containerd/s/366c65de40d012e1608edd21a82ab532fdfe1e3873fbcbe1c1553106120aa769" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:49:03.298351 systemd[1]: Started cri-containerd-245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5.scope - libcontainer container 245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5. 
Jul 6 23:49:03.327024 containerd[1913]: time="2025-07-06T23:49:03.326983586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-4xbfr,Uid:062b27d8-3a8e-497b-93b7-a26ed55682bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5\"" Jul 6 23:49:03.329206 containerd[1913]: time="2025-07-06T23:49:03.328956310Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 6 23:49:03.393149 containerd[1913]: time="2025-07-06T23:49:03.393107390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pcdh5,Uid:58b4afee-9e2e-4939-876d-fd2fe4409b78,Namespace:kube-system,Attempt:0,}" Jul 6 23:49:03.406960 containerd[1913]: time="2025-07-06T23:49:03.406910577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kjf5w,Uid:51bfb8ca-a7d1-4795-be38-6e395a11b0c8,Namespace:kube-system,Attempt:0,}" Jul 6 23:49:03.495978 containerd[1913]: time="2025-07-06T23:49:03.495778905Z" level=info msg="connecting to shim 0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c" address="unix:///run/containerd/s/45051041034149ce1d02a24b07b5b538ae0e040f85dec599b1b5e33b0869cedf" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:49:03.517057 containerd[1913]: time="2025-07-06T23:49:03.517009539Z" level=info msg="connecting to shim ad24eaf1ad0a8db88342da57444ec39609c58134e6e57603c0d9390fccb639f1" address="unix:///run/containerd/s/b4f549c0102130549ed28b2d47389941c5843a8b1cbbe485ca07f1f5053d11c8" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:49:03.518417 systemd[1]: Started cri-containerd-0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c.scope - libcontainer container 0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c. 
Jul 6 23:49:03.537370 systemd[1]: Started cri-containerd-ad24eaf1ad0a8db88342da57444ec39609c58134e6e57603c0d9390fccb639f1.scope - libcontainer container ad24eaf1ad0a8db88342da57444ec39609c58134e6e57603c0d9390fccb639f1. Jul 6 23:49:03.549765 containerd[1913]: time="2025-07-06T23:49:03.549730745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pcdh5,Uid:58b4afee-9e2e-4939-876d-fd2fe4409b78,Namespace:kube-system,Attempt:0,} returns sandbox id \"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\"" Jul 6 23:49:03.568552 containerd[1913]: time="2025-07-06T23:49:03.568513470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kjf5w,Uid:51bfb8ca-a7d1-4795-be38-6e395a11b0c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad24eaf1ad0a8db88342da57444ec39609c58134e6e57603c0d9390fccb639f1\"" Jul 6 23:49:03.571392 containerd[1913]: time="2025-07-06T23:49:03.571055915Z" level=info msg="CreateContainer within sandbox \"ad24eaf1ad0a8db88342da57444ec39609c58134e6e57603c0d9390fccb639f1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:49:03.594017 containerd[1913]: time="2025-07-06T23:49:03.593938643Z" level=info msg="Container 5b0a6ab8b6f7a8a9d4dd7dded1f1324273adcd611199ceae60d554992f38af59: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:49:03.613434 containerd[1913]: time="2025-07-06T23:49:03.613383282Z" level=info msg="CreateContainer within sandbox \"ad24eaf1ad0a8db88342da57444ec39609c58134e6e57603c0d9390fccb639f1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5b0a6ab8b6f7a8a9d4dd7dded1f1324273adcd611199ceae60d554992f38af59\"" Jul 6 23:49:03.616360 containerd[1913]: time="2025-07-06T23:49:03.616309950Z" level=info msg="StartContainer for \"5b0a6ab8b6f7a8a9d4dd7dded1f1324273adcd611199ceae60d554992f38af59\"" Jul 6 23:49:03.619228 containerd[1913]: time="2025-07-06T23:49:03.619179975Z" level=info msg="connecting to shim 
5b0a6ab8b6f7a8a9d4dd7dded1f1324273adcd611199ceae60d554992f38af59" address="unix:///run/containerd/s/b4f549c0102130549ed28b2d47389941c5843a8b1cbbe485ca07f1f5053d11c8" protocol=ttrpc version=3 Jul 6 23:49:03.635363 systemd[1]: Started cri-containerd-5b0a6ab8b6f7a8a9d4dd7dded1f1324273adcd611199ceae60d554992f38af59.scope - libcontainer container 5b0a6ab8b6f7a8a9d4dd7dded1f1324273adcd611199ceae60d554992f38af59. Jul 6 23:49:03.672390 containerd[1913]: time="2025-07-06T23:49:03.672349160Z" level=info msg="StartContainer for \"5b0a6ab8b6f7a8a9d4dd7dded1f1324273adcd611199ceae60d554992f38af59\" returns successfully" Jul 6 23:49:04.386015 kubelet[3406]: I0706 23:49:04.385787 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kjf5w" podStartSLOduration=2.3857570790000002 podStartE2EDuration="2.385757079s" podCreationTimestamp="2025-07-06 23:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:49:04.385163273 +0000 UTC m=+7.183825424" watchObservedRunningTime="2025-07-06 23:49:04.385757079 +0000 UTC m=+7.184419238" Jul 6 23:49:05.141266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2890240667.mount: Deactivated successfully. 
Jul 6 23:49:05.549580 containerd[1913]: time="2025-07-06T23:49:05.549283986Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:49:05.550868 containerd[1913]: time="2025-07-06T23:49:05.550823730Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 6 23:49:05.556327 containerd[1913]: time="2025-07-06T23:49:05.556277132Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:49:05.557091 containerd[1913]: time="2025-07-06T23:49:05.556977682Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.227886168s" Jul 6 23:49:05.557091 containerd[1913]: time="2025-07-06T23:49:05.557010759Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 6 23:49:05.558312 containerd[1913]: time="2025-07-06T23:49:05.557963128Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 6 23:49:05.559675 containerd[1913]: time="2025-07-06T23:49:05.559638165Z" level=info msg="CreateContainer within sandbox 
\"245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 6 23:49:05.592166 containerd[1913]: time="2025-07-06T23:49:05.592127542Z" level=info msg="Container bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:49:05.613135 containerd[1913]: time="2025-07-06T23:49:05.613075280Z" level=info msg="CreateContainer within sandbox \"245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30\"" Jul 6 23:49:05.613823 containerd[1913]: time="2025-07-06T23:49:05.613739400Z" level=info msg="StartContainer for \"bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30\"" Jul 6 23:49:05.614920 containerd[1913]: time="2025-07-06T23:49:05.614892064Z" level=info msg="connecting to shim bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30" address="unix:///run/containerd/s/366c65de40d012e1608edd21a82ab532fdfe1e3873fbcbe1c1553106120aa769" protocol=ttrpc version=3 Jul 6 23:49:05.635368 systemd[1]: Started cri-containerd-bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30.scope - libcontainer container bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30. 
Jul 6 23:49:05.670382 containerd[1913]: time="2025-07-06T23:49:05.670286720Z" level=info msg="StartContainer for \"bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30\" returns successfully" Jul 6 23:49:06.400036 kubelet[3406]: I0706 23:49:06.399940 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-4xbfr" podStartSLOduration=2.170480132 podStartE2EDuration="4.399922889s" podCreationTimestamp="2025-07-06 23:49:02 +0000 UTC" firstStartedPulling="2025-07-06 23:49:03.328407716 +0000 UTC m=+6.127069867" lastFinishedPulling="2025-07-06 23:49:05.557850473 +0000 UTC m=+8.356512624" observedRunningTime="2025-07-06 23:49:06.398881583 +0000 UTC m=+9.197543742" watchObservedRunningTime="2025-07-06 23:49:06.399922889 +0000 UTC m=+9.198585040" Jul 6 23:49:08.596262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount156021599.mount: Deactivated successfully. Jul 6 23:49:10.508429 containerd[1913]: time="2025-07-06T23:49:10.508058808Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:49:10.514241 containerd[1913]: time="2025-07-06T23:49:10.514199511Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 6 23:49:10.520170 containerd[1913]: time="2025-07-06T23:49:10.520119731Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:49:10.521145 containerd[1913]: time="2025-07-06T23:49:10.521113134Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.963123425s" Jul 6 23:49:10.521145 containerd[1913]: time="2025-07-06T23:49:10.521146212Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 6 23:49:10.523190 containerd[1913]: time="2025-07-06T23:49:10.523155001Z" level=info msg="CreateContainer within sandbox \"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:49:10.551918 containerd[1913]: time="2025-07-06T23:49:10.551335006Z" level=info msg="Container e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:49:10.552534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3401200433.mount: Deactivated successfully. 
Jul 6 23:49:10.567941 containerd[1913]: time="2025-07-06T23:49:10.567896525Z" level=info msg="CreateContainer within sandbox \"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b\"" Jul 6 23:49:10.568387 containerd[1913]: time="2025-07-06T23:49:10.568354705Z" level=info msg="StartContainer for \"e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b\"" Jul 6 23:49:10.570338 containerd[1913]: time="2025-07-06T23:49:10.570308985Z" level=info msg="connecting to shim e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b" address="unix:///run/containerd/s/45051041034149ce1d02a24b07b5b538ae0e040f85dec599b1b5e33b0869cedf" protocol=ttrpc version=3 Jul 6 23:49:10.591342 systemd[1]: Started cri-containerd-e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b.scope - libcontainer container e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b. Jul 6 23:49:10.621887 systemd[1]: cri-containerd-e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b.scope: Deactivated successfully. 
Jul 6 23:49:10.624438 containerd[1913]: time="2025-07-06T23:49:10.623679323Z" level=info msg="StartContainer for \"e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b\" returns successfully" Jul 6 23:49:10.627775 containerd[1913]: time="2025-07-06T23:49:10.627722219Z" level=info msg="received exit event container_id:\"e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b\" id:\"e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b\" pid:3868 exited_at:{seconds:1751845750 nanos:626833809}" Jul 6 23:49:10.627958 containerd[1913]: time="2025-07-06T23:49:10.627936221Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b\" id:\"e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b\" pid:3868 exited_at:{seconds:1751845750 nanos:626833809}" Jul 6 23:49:10.643793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b-rootfs.mount: Deactivated successfully. 
Jul 6 23:49:12.392570 containerd[1913]: time="2025-07-06T23:49:12.392527536Z" level=info msg="CreateContainer within sandbox \"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:49:12.417849 containerd[1913]: time="2025-07-06T23:49:12.417807226Z" level=info msg="Container 898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:49:12.436088 containerd[1913]: time="2025-07-06T23:49:12.436022004Z" level=info msg="CreateContainer within sandbox \"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd\"" Jul 6 23:49:12.436876 containerd[1913]: time="2025-07-06T23:49:12.436809760Z" level=info msg="StartContainer for \"898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd\"" Jul 6 23:49:12.437936 containerd[1913]: time="2025-07-06T23:49:12.437909426Z" level=info msg="connecting to shim 898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd" address="unix:///run/containerd/s/45051041034149ce1d02a24b07b5b538ae0e040f85dec599b1b5e33b0869cedf" protocol=ttrpc version=3 Jul 6 23:49:12.460359 systemd[1]: Started cri-containerd-898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd.scope - libcontainer container 898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd. Jul 6 23:49:12.488018 containerd[1913]: time="2025-07-06T23:49:12.487949299Z" level=info msg="StartContainer for \"898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd\" returns successfully" Jul 6 23:49:12.496771 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:49:12.496943 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jul 6 23:49:12.497387 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:49:12.500508 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:49:12.501674 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 6 23:49:12.501969 systemd[1]: cri-containerd-898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd.scope: Deactivated successfully. Jul 6 23:49:12.505728 containerd[1913]: time="2025-07-06T23:49:12.505676199Z" level=info msg="received exit event container_id:\"898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd\" id:\"898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd\" pid:3914 exited_at:{seconds:1751845752 nanos:504133140}" Jul 6 23:49:12.506867 containerd[1913]: time="2025-07-06T23:49:12.506491857Z" level=info msg="TaskExit event in podsandbox handler container_id:\"898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd\" id:\"898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd\" pid:3914 exited_at:{seconds:1751845752 nanos:504133140}" Jul 6 23:49:12.526167 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:49:13.395526 containerd[1913]: time="2025-07-06T23:49:13.395419642Z" level=info msg="CreateContainer within sandbox \"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:49:13.418041 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd-rootfs.mount: Deactivated successfully. 
Jul 6 23:49:13.430222 containerd[1913]: time="2025-07-06T23:49:13.430167631Z" level=info msg="Container 518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:49:13.467947 containerd[1913]: time="2025-07-06T23:49:13.467882204Z" level=info msg="CreateContainer within sandbox \"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489\"" Jul 6 23:49:13.469372 containerd[1913]: time="2025-07-06T23:49:13.469159942Z" level=info msg="StartContainer for \"518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489\"" Jul 6 23:49:13.470761 containerd[1913]: time="2025-07-06T23:49:13.470710105Z" level=info msg="connecting to shim 518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489" address="unix:///run/containerd/s/45051041034149ce1d02a24b07b5b538ae0e040f85dec599b1b5e33b0869cedf" protocol=ttrpc version=3 Jul 6 23:49:13.485335 systemd[1]: Started cri-containerd-518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489.scope - libcontainer container 518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489. Jul 6 23:49:13.515783 systemd[1]: cri-containerd-518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489.scope: Deactivated successfully. 
Jul 6 23:49:13.517903 containerd[1913]: time="2025-07-06T23:49:13.517870394Z" level=info msg="TaskExit event in podsandbox handler container_id:\"518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489\" id:\"518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489\" pid:3965 exited_at:{seconds:1751845753 nanos:517532143}" Jul 6 23:49:13.519154 containerd[1913]: time="2025-07-06T23:49:13.518536393Z" level=info msg="received exit event container_id:\"518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489\" id:\"518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489\" pid:3965 exited_at:{seconds:1751845753 nanos:517532143}" Jul 6 23:49:13.521027 containerd[1913]: time="2025-07-06T23:49:13.521003205Z" level=info msg="StartContainer for \"518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489\" returns successfully" Jul 6 23:49:13.535691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489-rootfs.mount: Deactivated successfully. Jul 6 23:49:14.401083 containerd[1913]: time="2025-07-06T23:49:14.400700871Z" level=info msg="CreateContainer within sandbox \"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:49:14.433596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2403665127.mount: Deactivated successfully. 
Jul 6 23:49:14.434931 containerd[1913]: time="2025-07-06T23:49:14.434147292Z" level=info msg="Container 31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:49:14.449777 containerd[1913]: time="2025-07-06T23:49:14.449734863Z" level=info msg="CreateContainer within sandbox \"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985\"" Jul 6 23:49:14.450931 containerd[1913]: time="2025-07-06T23:49:14.450911730Z" level=info msg="StartContainer for \"31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985\"" Jul 6 23:49:14.452826 containerd[1913]: time="2025-07-06T23:49:14.452796312Z" level=info msg="connecting to shim 31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985" address="unix:///run/containerd/s/45051041034149ce1d02a24b07b5b538ae0e040f85dec599b1b5e33b0869cedf" protocol=ttrpc version=3 Jul 6 23:49:14.473338 systemd[1]: Started cri-containerd-31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985.scope - libcontainer container 31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985. Jul 6 23:49:14.492718 systemd[1]: cri-containerd-31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985.scope: Deactivated successfully. 
Jul 6 23:49:14.494734 containerd[1913]: time="2025-07-06T23:49:14.494702421Z" level=info msg="TaskExit event in podsandbox handler container_id:\"31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985\" id:\"31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985\" pid:4003 exited_at:{seconds:1751845754 nanos:493812034}" Jul 6 23:49:14.500318 containerd[1913]: time="2025-07-06T23:49:14.499534886Z" level=info msg="received exit event container_id:\"31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985\" id:\"31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985\" pid:4003 exited_at:{seconds:1751845754 nanos:493812034}" Jul 6 23:49:14.501030 containerd[1913]: time="2025-07-06T23:49:14.500994840Z" level=info msg="StartContainer for \"31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985\" returns successfully" Jul 6 23:49:14.516968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985-rootfs.mount: Deactivated successfully. Jul 6 23:49:15.405194 containerd[1913]: time="2025-07-06T23:49:15.404679988Z" level=info msg="CreateContainer within sandbox \"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:49:15.436023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount13320655.mount: Deactivated successfully. 
Jul 6 23:49:15.437455 containerd[1913]: time="2025-07-06T23:49:15.437293776Z" level=info msg="Container ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:49:15.457291 containerd[1913]: time="2025-07-06T23:49:15.457221647Z" level=info msg="CreateContainer within sandbox \"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc\"" Jul 6 23:49:15.458485 containerd[1913]: time="2025-07-06T23:49:15.458450509Z" level=info msg="StartContainer for \"ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc\"" Jul 6 23:49:15.459669 containerd[1913]: time="2025-07-06T23:49:15.459238690Z" level=info msg="connecting to shim ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc" address="unix:///run/containerd/s/45051041034149ce1d02a24b07b5b538ae0e040f85dec599b1b5e33b0869cedf" protocol=ttrpc version=3 Jul 6 23:49:15.478416 systemd[1]: Started cri-containerd-ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc.scope - libcontainer container ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc. 
Jul 6 23:49:15.509518 containerd[1913]: time="2025-07-06T23:49:15.509472650Z" level=info msg="StartContainer for \"ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc\" returns successfully" Jul 6 23:49:15.572574 containerd[1913]: time="2025-07-06T23:49:15.572536484Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc\" id:\"64c2437df7e2d33c8ad9583f821c9ed9c63a3e083402c3ba76e2e5a983aee969\" pid:4068 exited_at:{seconds:1751845755 nanos:571692237}" Jul 6 23:49:15.596761 kubelet[3406]: I0706 23:49:15.596361 3406 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 6 23:49:15.633748 systemd[1]: Created slice kubepods-burstable-pod9288fcb6_30a7_457b_b84b_385e5bbc654c.slice - libcontainer container kubepods-burstable-pod9288fcb6_30a7_457b_b84b_385e5bbc654c.slice. Jul 6 23:49:15.636239 kubelet[3406]: I0706 23:49:15.635911 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9288fcb6-30a7-457b-b84b-385e5bbc654c-config-volume\") pod \"coredns-7c65d6cfc9-27x5t\" (UID: \"9288fcb6-30a7-457b-b84b-385e5bbc654c\") " pod="kube-system/coredns-7c65d6cfc9-27x5t" Jul 6 23:49:15.636495 kubelet[3406]: I0706 23:49:15.636343 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czvhb\" (UniqueName: \"kubernetes.io/projected/9288fcb6-30a7-457b-b84b-385e5bbc654c-kube-api-access-czvhb\") pod \"coredns-7c65d6cfc9-27x5t\" (UID: \"9288fcb6-30a7-457b-b84b-385e5bbc654c\") " pod="kube-system/coredns-7c65d6cfc9-27x5t" Jul 6 23:49:15.642312 systemd[1]: Created slice kubepods-burstable-poddccdcd84_a949_4de6_b74e_4fd4a17ff263.slice - libcontainer container kubepods-burstable-poddccdcd84_a949_4de6_b74e_4fd4a17ff263.slice. 
Jul 6 23:49:15.737332 kubelet[3406]: I0706 23:49:15.737210 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqcvn\" (UniqueName: \"kubernetes.io/projected/dccdcd84-a949-4de6-b74e-4fd4a17ff263-kube-api-access-sqcvn\") pod \"coredns-7c65d6cfc9-zhghk\" (UID: \"dccdcd84-a949-4de6-b74e-4fd4a17ff263\") " pod="kube-system/coredns-7c65d6cfc9-zhghk" Jul 6 23:49:15.737598 kubelet[3406]: I0706 23:49:15.737258 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dccdcd84-a949-4de6-b74e-4fd4a17ff263-config-volume\") pod \"coredns-7c65d6cfc9-zhghk\" (UID: \"dccdcd84-a949-4de6-b74e-4fd4a17ff263\") " pod="kube-system/coredns-7c65d6cfc9-zhghk" Jul 6 23:49:15.937076 containerd[1913]: time="2025-07-06T23:49:15.937029552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-27x5t,Uid:9288fcb6-30a7-457b-b84b-385e5bbc654c,Namespace:kube-system,Attempt:0,}" Jul 6 23:49:15.948370 containerd[1913]: time="2025-07-06T23:49:15.948338508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zhghk,Uid:dccdcd84-a949-4de6-b74e-4fd4a17ff263,Namespace:kube-system,Attempt:0,}" Jul 6 23:49:16.424862 kubelet[3406]: I0706 23:49:16.424670 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pcdh5" podStartSLOduration=7.454822972 podStartE2EDuration="14.424654626s" podCreationTimestamp="2025-07-06 23:49:02 +0000 UTC" firstStartedPulling="2025-07-06 23:49:03.551980701 +0000 UTC m=+6.350642852" lastFinishedPulling="2025-07-06 23:49:10.521812355 +0000 UTC m=+13.320474506" observedRunningTime="2025-07-06 23:49:16.424107897 +0000 UTC m=+19.222770056" watchObservedRunningTime="2025-07-06 23:49:16.424654626 +0000 UTC m=+19.223316777" Jul 6 23:49:17.499227 systemd-networkd[1657]: cilium_host: Link UP Jul 6 23:49:17.499693 systemd-networkd[1657]: 
cilium_net: Link UP Jul 6 23:49:17.500236 systemd-networkd[1657]: cilium_net: Gained carrier Jul 6 23:49:17.500690 systemd-networkd[1657]: cilium_host: Gained carrier Jul 6 23:49:17.601733 systemd-networkd[1657]: cilium_vxlan: Link UP Jul 6 23:49:17.601738 systemd-networkd[1657]: cilium_vxlan: Gained carrier Jul 6 23:49:17.830217 kernel: NET: Registered PF_ALG protocol family Jul 6 23:49:18.249410 systemd-networkd[1657]: cilium_host: Gained IPv6LL Jul 6 23:49:18.313444 systemd-networkd[1657]: cilium_net: Gained IPv6LL Jul 6 23:49:18.348483 systemd-networkd[1657]: lxc_health: Link UP Jul 6 23:49:18.357591 systemd-networkd[1657]: lxc_health: Gained carrier Jul 6 23:49:18.472226 kernel: eth0: renamed from tmp26d3d Jul 6 23:49:18.472320 systemd-networkd[1657]: lxc39087c956dd3: Link UP Jul 6 23:49:18.474565 systemd-networkd[1657]: lxc39087c956dd3: Gained carrier Jul 6 23:49:18.498158 systemd-networkd[1657]: lxc3cd5a0e2133d: Link UP Jul 6 23:49:18.505433 kernel: eth0: renamed from tmp4ae4a Jul 6 23:49:18.508572 systemd-networkd[1657]: lxc3cd5a0e2133d: Gained carrier Jul 6 23:49:19.402398 systemd-networkd[1657]: cilium_vxlan: Gained IPv6LL Jul 6 23:49:19.593365 systemd-networkd[1657]: lxc_health: Gained IPv6LL Jul 6 23:49:19.657371 systemd-networkd[1657]: lxc39087c956dd3: Gained IPv6LL Jul 6 23:49:19.849385 systemd-networkd[1657]: lxc3cd5a0e2133d: Gained IPv6LL Jul 6 23:49:21.114313 containerd[1913]: time="2025-07-06T23:49:21.114223671Z" level=info msg="connecting to shim 26d3deab56348cff9d10d2abff8f26771159c4c14a304871752100607556f3dc" address="unix:///run/containerd/s/d5fb509e6d2a5e020938fd51a853a6f1709eb60284b5c3242c39f160dc087912" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:49:21.114940 containerd[1913]: time="2025-07-06T23:49:21.114882647Z" level=info msg="connecting to shim 4ae4ad3d42585a827364e2ac45e6aa7b544cebd6d76b15c0a60d270b9c538254" address="unix:///run/containerd/s/fec765f21491330c0986dc3c5cba813b3f8fc54bfba8eff8c4ae1a5483eee66d" namespace=k8s.io 
protocol=ttrpc version=3 Jul 6 23:49:21.152327 systemd[1]: Started cri-containerd-26d3deab56348cff9d10d2abff8f26771159c4c14a304871752100607556f3dc.scope - libcontainer container 26d3deab56348cff9d10d2abff8f26771159c4c14a304871752100607556f3dc. Jul 6 23:49:21.153234 systemd[1]: Started cri-containerd-4ae4ad3d42585a827364e2ac45e6aa7b544cebd6d76b15c0a60d270b9c538254.scope - libcontainer container 4ae4ad3d42585a827364e2ac45e6aa7b544cebd6d76b15c0a60d270b9c538254. Jul 6 23:49:21.192846 containerd[1913]: time="2025-07-06T23:49:21.192806604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zhghk,Uid:dccdcd84-a949-4de6-b74e-4fd4a17ff263,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ae4ad3d42585a827364e2ac45e6aa7b544cebd6d76b15c0a60d270b9c538254\"" Jul 6 23:49:21.198220 containerd[1913]: time="2025-07-06T23:49:21.198136183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-27x5t,Uid:9288fcb6-30a7-457b-b84b-385e5bbc654c,Namespace:kube-system,Attempt:0,} returns sandbox id \"26d3deab56348cff9d10d2abff8f26771159c4c14a304871752100607556f3dc\"" Jul 6 23:49:21.199440 containerd[1913]: time="2025-07-06T23:49:21.198482761Z" level=info msg="CreateContainer within sandbox \"4ae4ad3d42585a827364e2ac45e6aa7b544cebd6d76b15c0a60d270b9c538254\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:49:21.201848 containerd[1913]: time="2025-07-06T23:49:21.201790304Z" level=info msg="CreateContainer within sandbox \"26d3deab56348cff9d10d2abff8f26771159c4c14a304871752100607556f3dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:49:21.231149 containerd[1913]: time="2025-07-06T23:49:21.231114149Z" level=info msg="Container 0148e3092c700e97fabbfead6d944cc8ab483bb1bc0747c1d4f574564868e032: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:49:21.236867 containerd[1913]: time="2025-07-06T23:49:21.236828280Z" level=info msg="Container 
a5b4f2361e3b9ceef4319da95d13a9538cb88757a95ea33c7b9d5c9bc1c470b5: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:49:21.251362 containerd[1913]: time="2025-07-06T23:49:21.251321881Z" level=info msg="CreateContainer within sandbox \"4ae4ad3d42585a827364e2ac45e6aa7b544cebd6d76b15c0a60d270b9c538254\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0148e3092c700e97fabbfead6d944cc8ab483bb1bc0747c1d4f574564868e032\"" Jul 6 23:49:21.252071 containerd[1913]: time="2025-07-06T23:49:21.251924845Z" level=info msg="StartContainer for \"0148e3092c700e97fabbfead6d944cc8ab483bb1bc0747c1d4f574564868e032\"" Jul 6 23:49:21.253880 containerd[1913]: time="2025-07-06T23:49:21.253855833Z" level=info msg="connecting to shim 0148e3092c700e97fabbfead6d944cc8ab483bb1bc0747c1d4f574564868e032" address="unix:///run/containerd/s/fec765f21491330c0986dc3c5cba813b3f8fc54bfba8eff8c4ae1a5483eee66d" protocol=ttrpc version=3 Jul 6 23:49:21.269752 containerd[1913]: time="2025-07-06T23:49:21.269679465Z" level=info msg="CreateContainer within sandbox \"26d3deab56348cff9d10d2abff8f26771159c4c14a304871752100607556f3dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a5b4f2361e3b9ceef4319da95d13a9538cb88757a95ea33c7b9d5c9bc1c470b5\"" Jul 6 23:49:21.270550 containerd[1913]: time="2025-07-06T23:49:21.270525129Z" level=info msg="StartContainer for \"a5b4f2361e3b9ceef4319da95d13a9538cb88757a95ea33c7b9d5c9bc1c470b5\"" Jul 6 23:49:21.271548 containerd[1913]: time="2025-07-06T23:49:21.271515469Z" level=info msg="connecting to shim a5b4f2361e3b9ceef4319da95d13a9538cb88757a95ea33c7b9d5c9bc1c470b5" address="unix:///run/containerd/s/d5fb509e6d2a5e020938fd51a853a6f1709eb60284b5c3242c39f160dc087912" protocol=ttrpc version=3 Jul 6 23:49:21.272340 systemd[1]: Started cri-containerd-0148e3092c700e97fabbfead6d944cc8ab483bb1bc0747c1d4f574564868e032.scope - libcontainer container 0148e3092c700e97fabbfead6d944cc8ab483bb1bc0747c1d4f574564868e032. 
Jul 6 23:49:21.289348 systemd[1]: Started cri-containerd-a5b4f2361e3b9ceef4319da95d13a9538cb88757a95ea33c7b9d5c9bc1c470b5.scope - libcontainer container a5b4f2361e3b9ceef4319da95d13a9538cb88757a95ea33c7b9d5c9bc1c470b5. Jul 6 23:49:21.337767 containerd[1913]: time="2025-07-06T23:49:21.336384055Z" level=info msg="StartContainer for \"0148e3092c700e97fabbfead6d944cc8ab483bb1bc0747c1d4f574564868e032\" returns successfully" Jul 6 23:49:21.340556 containerd[1913]: time="2025-07-06T23:49:21.340312353Z" level=info msg="StartContainer for \"a5b4f2361e3b9ceef4319da95d13a9538cb88757a95ea33c7b9d5c9bc1c470b5\" returns successfully" Jul 6 23:49:21.433673 kubelet[3406]: I0706 23:49:21.433519 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zhghk" podStartSLOduration=19.433500965 podStartE2EDuration="19.433500965s" podCreationTimestamp="2025-07-06 23:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:49:21.432515633 +0000 UTC m=+24.231177792" watchObservedRunningTime="2025-07-06 23:49:21.433500965 +0000 UTC m=+24.232163116" Jul 6 23:49:21.447724 kubelet[3406]: I0706 23:49:21.447649 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-27x5t" podStartSLOduration=19.44762187 podStartE2EDuration="19.44762187s" podCreationTimestamp="2025-07-06 23:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:49:21.446003479 +0000 UTC m=+24.244665638" watchObservedRunningTime="2025-07-06 23:49:21.44762187 +0000 UTC m=+24.246284021" Jul 6 23:49:22.101298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2541592017.mount: Deactivated successfully. 
Jul 6 23:50:42.700481 systemd[1]: Started sshd@7-10.200.20.10:22-10.200.16.10:35894.service - OpenSSH per-connection server daemon (10.200.16.10:35894). Jul 6 23:50:43.181238 sshd[4727]: Accepted publickey for core from 10.200.16.10 port 35894 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:50:43.182854 sshd-session[4727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:50:43.186817 systemd-logind[1894]: New session 10 of user core. Jul 6 23:50:43.192331 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:50:43.640214 sshd[4729]: Connection closed by 10.200.16.10 port 35894 Jul 6 23:50:43.640614 sshd-session[4727]: pam_unix(sshd:session): session closed for user core Jul 6 23:50:43.644470 systemd[1]: sshd@7-10.200.20.10:22-10.200.16.10:35894.service: Deactivated successfully. Jul 6 23:50:43.646512 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:50:43.647571 systemd-logind[1894]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:50:43.648804 systemd-logind[1894]: Removed session 10. Jul 6 23:50:48.731254 systemd[1]: Started sshd@8-10.200.20.10:22-10.200.16.10:35902.service - OpenSSH per-connection server daemon (10.200.16.10:35902). Jul 6 23:50:49.214113 sshd[4742]: Accepted publickey for core from 10.200.16.10 port 35902 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:50:49.215731 sshd-session[4742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:50:49.219805 systemd-logind[1894]: New session 11 of user core. Jul 6 23:50:49.225339 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:50:49.611723 sshd[4744]: Connection closed by 10.200.16.10 port 35902 Jul 6 23:50:49.612314 sshd-session[4742]: pam_unix(sshd:session): session closed for user core Jul 6 23:50:49.615520 systemd[1]: sshd@8-10.200.20.10:22-10.200.16.10:35902.service: Deactivated successfully. 
Jul 6 23:50:49.617255 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:50:49.617982 systemd-logind[1894]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:50:49.620123 systemd-logind[1894]: Removed session 11. Jul 6 23:50:54.712523 systemd[1]: Started sshd@9-10.200.20.10:22-10.200.16.10:46516.service - OpenSSH per-connection server daemon (10.200.16.10:46516). Jul 6 23:50:55.194269 sshd[4758]: Accepted publickey for core from 10.200.16.10 port 46516 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:50:55.195524 sshd-session[4758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:50:55.199930 systemd-logind[1894]: New session 12 of user core. Jul 6 23:50:55.207385 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:50:55.577204 sshd[4760]: Connection closed by 10.200.16.10 port 46516 Jul 6 23:50:55.578417 sshd-session[4758]: pam_unix(sshd:session): session closed for user core Jul 6 23:50:55.581945 systemd[1]: sshd@9-10.200.20.10:22-10.200.16.10:46516.service: Deactivated successfully. Jul 6 23:50:55.583877 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:50:55.585307 systemd-logind[1894]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:50:55.589266 systemd-logind[1894]: Removed session 12. Jul 6 23:51:00.668260 systemd[1]: Started sshd@10-10.200.20.10:22-10.200.16.10:59382.service - OpenSSH per-connection server daemon (10.200.16.10:59382). Jul 6 23:51:01.149887 sshd[4774]: Accepted publickey for core from 10.200.16.10 port 59382 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:51:01.151450 sshd-session[4774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:01.155806 systemd-logind[1894]: New session 13 of user core. Jul 6 23:51:01.160474 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 6 23:51:01.532297 sshd[4776]: Connection closed by 10.200.16.10 port 59382 Jul 6 23:51:01.531857 sshd-session[4774]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:01.536008 systemd-logind[1894]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:51:01.536210 systemd[1]: sshd@10-10.200.20.10:22-10.200.16.10:59382.service: Deactivated successfully. Jul 6 23:51:01.538632 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:51:01.540639 systemd-logind[1894]: Removed session 13. Jul 6 23:51:06.619285 systemd[1]: Started sshd@11-10.200.20.10:22-10.200.16.10:59392.service - OpenSSH per-connection server daemon (10.200.16.10:59392). Jul 6 23:51:07.102636 sshd[4793]: Accepted publickey for core from 10.200.16.10 port 59392 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:51:07.103868 sshd-session[4793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:07.108091 systemd-logind[1894]: New session 14 of user core. Jul 6 23:51:07.115368 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:51:07.489664 sshd[4795]: Connection closed by 10.200.16.10 port 59392 Jul 6 23:51:07.490474 sshd-session[4793]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:07.494107 systemd[1]: sshd@11-10.200.20.10:22-10.200.16.10:59392.service: Deactivated successfully. Jul 6 23:51:07.495979 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:51:07.496931 systemd-logind[1894]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:51:07.498649 systemd-logind[1894]: Removed session 14. Jul 6 23:51:07.602269 systemd[1]: Started sshd@12-10.200.20.10:22-10.200.16.10:59402.service - OpenSSH per-connection server daemon (10.200.16.10:59402). 
Jul 6 23:51:08.085064 sshd[4807]: Accepted publickey for core from 10.200.16.10 port 59402 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:51:08.086173 sshd-session[4807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:08.089928 systemd-logind[1894]: New session 15 of user core. Jul 6 23:51:08.098581 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 6 23:51:08.496441 sshd[4809]: Connection closed by 10.200.16.10 port 59402 Jul 6 23:51:08.495624 sshd-session[4807]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:08.498895 systemd-logind[1894]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:51:08.499490 systemd[1]: sshd@12-10.200.20.10:22-10.200.16.10:59402.service: Deactivated successfully. Jul 6 23:51:08.501495 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:51:08.504057 systemd-logind[1894]: Removed session 15. Jul 6 23:51:08.595113 systemd[1]: Started sshd@13-10.200.20.10:22-10.200.16.10:59416.service - OpenSSH per-connection server daemon (10.200.16.10:59416). Jul 6 23:51:09.084848 sshd[4818]: Accepted publickey for core from 10.200.16.10 port 59416 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:51:09.086124 sshd-session[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:09.089976 systemd-logind[1894]: New session 16 of user core. Jul 6 23:51:09.095366 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 6 23:51:09.470501 sshd[4820]: Connection closed by 10.200.16.10 port 59416 Jul 6 23:51:09.471083 sshd-session[4818]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:09.474867 systemd[1]: sshd@13-10.200.20.10:22-10.200.16.10:59416.service: Deactivated successfully. Jul 6 23:51:09.476558 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:51:09.477367 systemd-logind[1894]: Session 16 logged out. 
Waiting for processes to exit. Jul 6 23:51:09.478713 systemd-logind[1894]: Removed session 16. Jul 6 23:51:14.561166 systemd[1]: Started sshd@14-10.200.20.10:22-10.200.16.10:50988.service - OpenSSH per-connection server daemon (10.200.16.10:50988). Jul 6 23:51:15.046049 sshd[4832]: Accepted publickey for core from 10.200.16.10 port 50988 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:51:15.047258 sshd-session[4832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:15.051141 systemd-logind[1894]: New session 17 of user core. Jul 6 23:51:15.059606 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 6 23:51:15.447626 sshd[4834]: Connection closed by 10.200.16.10 port 50988 Jul 6 23:51:15.448273 sshd-session[4832]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:15.451773 systemd[1]: sshd@14-10.200.20.10:22-10.200.16.10:50988.service: Deactivated successfully. Jul 6 23:51:15.453679 systemd[1]: session-17.scope: Deactivated successfully. Jul 6 23:51:15.454645 systemd-logind[1894]: Session 17 logged out. Waiting for processes to exit. Jul 6 23:51:15.456480 systemd-logind[1894]: Removed session 17. Jul 6 23:51:15.542440 systemd[1]: Started sshd@15-10.200.20.10:22-10.200.16.10:50990.service - OpenSSH per-connection server daemon (10.200.16.10:50990). Jul 6 23:51:16.046879 sshd[4846]: Accepted publickey for core from 10.200.16.10 port 50990 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:51:16.048095 sshd-session[4846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:16.052315 systemd-logind[1894]: New session 18 of user core. Jul 6 23:51:16.059353 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 6 23:51:16.492300 sshd[4848]: Connection closed by 10.200.16.10 port 50990 Jul 6 23:51:16.491677 sshd-session[4846]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:16.494664 systemd-logind[1894]: Session 18 logged out. Waiting for processes to exit. Jul 6 23:51:16.494816 systemd[1]: sshd@15-10.200.20.10:22-10.200.16.10:50990.service: Deactivated successfully. Jul 6 23:51:16.496561 systemd[1]: session-18.scope: Deactivated successfully. Jul 6 23:51:16.499044 systemd-logind[1894]: Removed session 18. Jul 6 23:51:16.580616 systemd[1]: Started sshd@16-10.200.20.10:22-10.200.16.10:51002.service - OpenSSH per-connection server daemon (10.200.16.10:51002). Jul 6 23:51:17.064999 sshd[4857]: Accepted publickey for core from 10.200.16.10 port 51002 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:51:17.066206 sshd-session[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:17.070338 systemd-logind[1894]: New session 19 of user core. Jul 6 23:51:17.078353 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 6 23:51:18.583231 sshd[4859]: Connection closed by 10.200.16.10 port 51002 Jul 6 23:51:18.583835 sshd-session[4857]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:18.587738 systemd-logind[1894]: Session 19 logged out. Waiting for processes to exit. Jul 6 23:51:18.588318 systemd[1]: sshd@16-10.200.20.10:22-10.200.16.10:51002.service: Deactivated successfully. Jul 6 23:51:18.590511 systemd[1]: session-19.scope: Deactivated successfully. Jul 6 23:51:18.590677 systemd[1]: session-19.scope: Consumed 308ms CPU time, 66.8M memory peak. Jul 6 23:51:18.592323 systemd-logind[1894]: Removed session 19. Jul 6 23:51:18.672371 systemd[1]: Started sshd@17-10.200.20.10:22-10.200.16.10:51012.service - OpenSSH per-connection server daemon (10.200.16.10:51012). 
Jul 6 23:51:19.163604 sshd[4876]: Accepted publickey for core from 10.200.16.10 port 51012 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:51:19.164845 sshd-session[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:19.169154 systemd-logind[1894]: New session 20 of user core. Jul 6 23:51:19.173378 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 6 23:51:19.618563 sshd[4878]: Connection closed by 10.200.16.10 port 51012 Jul 6 23:51:19.617928 sshd-session[4876]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:19.621169 systemd[1]: session-20.scope: Deactivated successfully. Jul 6 23:51:19.622572 systemd[1]: sshd@17-10.200.20.10:22-10.200.16.10:51012.service: Deactivated successfully. Jul 6 23:51:19.624911 systemd-logind[1894]: Session 20 logged out. Waiting for processes to exit. Jul 6 23:51:19.626698 systemd-logind[1894]: Removed session 20. Jul 6 23:51:19.712400 systemd[1]: Started sshd@18-10.200.20.10:22-10.200.16.10:55998.service - OpenSSH per-connection server daemon (10.200.16.10:55998). Jul 6 23:51:20.196621 sshd[4887]: Accepted publickey for core from 10.200.16.10 port 55998 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:51:20.197810 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:20.201924 systemd-logind[1894]: New session 21 of user core. Jul 6 23:51:20.206363 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 6 23:51:20.574001 sshd[4889]: Connection closed by 10.200.16.10 port 55998 Jul 6 23:51:20.574587 sshd-session[4887]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:20.577727 systemd[1]: sshd@18-10.200.20.10:22-10.200.16.10:55998.service: Deactivated successfully. Jul 6 23:51:20.579924 systemd[1]: session-21.scope: Deactivated successfully. Jul 6 23:51:20.580901 systemd-logind[1894]: Session 21 logged out. 
Waiting for processes to exit. Jul 6 23:51:20.582493 systemd-logind[1894]: Removed session 21. Jul 6 23:51:25.668618 systemd[1]: Started sshd@19-10.200.20.10:22-10.200.16.10:56006.service - OpenSSH per-connection server daemon (10.200.16.10:56006). Jul 6 23:51:26.163343 sshd[4904]: Accepted publickey for core from 10.200.16.10 port 56006 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:51:26.164591 sshd-session[4904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:26.168410 systemd-logind[1894]: New session 22 of user core. Jul 6 23:51:26.178346 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 6 23:51:26.554547 sshd[4906]: Connection closed by 10.200.16.10 port 56006 Jul 6 23:51:26.555203 sshd-session[4904]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:26.559240 systemd[1]: sshd@19-10.200.20.10:22-10.200.16.10:56006.service: Deactivated successfully. Jul 6 23:51:26.561632 systemd[1]: session-22.scope: Deactivated successfully. Jul 6 23:51:26.562646 systemd-logind[1894]: Session 22 logged out. Waiting for processes to exit. Jul 6 23:51:26.563955 systemd-logind[1894]: Removed session 22. Jul 6 23:51:31.652083 systemd[1]: Started sshd@20-10.200.20.10:22-10.200.16.10:58638.service - OpenSSH per-connection server daemon (10.200.16.10:58638). Jul 6 23:51:32.159939 sshd[4918]: Accepted publickey for core from 10.200.16.10 port 58638 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:51:32.161151 sshd-session[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:32.164789 systemd-logind[1894]: New session 23 of user core. Jul 6 23:51:32.169329 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 6 23:51:32.551990 sshd[4920]: Connection closed by 10.200.16.10 port 58638 Jul 6 23:51:32.552694 sshd-session[4918]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:32.556024 systemd[1]: sshd@20-10.200.20.10:22-10.200.16.10:58638.service: Deactivated successfully. Jul 6 23:51:32.557585 systemd[1]: session-23.scope: Deactivated successfully. Jul 6 23:51:32.558345 systemd-logind[1894]: Session 23 logged out. Waiting for processes to exit. Jul 6 23:51:32.559911 systemd-logind[1894]: Removed session 23. Jul 6 23:51:37.649378 systemd[1]: Started sshd@21-10.200.20.10:22-10.200.16.10:58644.service - OpenSSH per-connection server daemon (10.200.16.10:58644). Jul 6 23:51:38.132895 sshd[4934]: Accepted publickey for core from 10.200.16.10 port 58644 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:51:38.134032 sshd-session[4934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:38.138241 systemd-logind[1894]: New session 24 of user core. Jul 6 23:51:38.144341 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 6 23:51:38.517271 sshd[4936]: Connection closed by 10.200.16.10 port 58644 Jul 6 23:51:38.517775 sshd-session[4934]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:38.521005 systemd[1]: sshd@21-10.200.20.10:22-10.200.16.10:58644.service: Deactivated successfully. Jul 6 23:51:38.522732 systemd[1]: session-24.scope: Deactivated successfully. Jul 6 23:51:38.523576 systemd-logind[1894]: Session 24 logged out. Waiting for processes to exit. Jul 6 23:51:38.525068 systemd-logind[1894]: Removed session 24. Jul 6 23:51:38.613596 systemd[1]: Started sshd@22-10.200.20.10:22-10.200.16.10:58648.service - OpenSSH per-connection server daemon (10.200.16.10:58648). 
Jul 6 23:51:39.096910 sshd[4947]: Accepted publickey for core from 10.200.16.10 port 58648 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:51:39.098063 sshd-session[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:39.101949 systemd-logind[1894]: New session 25 of user core. Jul 6 23:51:39.112327 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 6 23:51:40.664054 containerd[1913]: time="2025-07-06T23:51:40.663990025Z" level=info msg="StopContainer for \"bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30\" with timeout 30 (s)" Jul 6 23:51:40.665169 containerd[1913]: time="2025-07-06T23:51:40.665068874Z" level=info msg="Stop container \"bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30\" with signal terminated" Jul 6 23:51:40.680880 systemd[1]: cri-containerd-bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30.scope: Deactivated successfully. Jul 6 23:51:40.684580 containerd[1913]: time="2025-07-06T23:51:40.684501248Z" level=info msg="received exit event container_id:\"bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30\" id:\"bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30\" pid:3805 exited_at:{seconds:1751845900 nanos:684010891}" Jul 6 23:51:40.685013 containerd[1913]: time="2025-07-06T23:51:40.684924929Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30\" id:\"bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30\" pid:3805 exited_at:{seconds:1751845900 nanos:684010891}" Jul 6 23:51:40.686363 containerd[1913]: time="2025-07-06T23:51:40.686332906Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 
23:51:40.692603 containerd[1913]: time="2025-07-06T23:51:40.692552436Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc\" id:\"2423aade68913901b05be6397440f163d5769e86eb2a01908ae609c29761011d\" pid:4969 exited_at:{seconds:1751845900 nanos:692336563}" Jul 6 23:51:40.695467 containerd[1913]: time="2025-07-06T23:51:40.695438601Z" level=info msg="StopContainer for \"ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc\" with timeout 2 (s)" Jul 6 23:51:40.695709 containerd[1913]: time="2025-07-06T23:51:40.695687423Z" level=info msg="Stop container \"ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc\" with signal terminated" Jul 6 23:51:40.702223 systemd-networkd[1657]: lxc_health: Link DOWN Jul 6 23:51:40.702228 systemd-networkd[1657]: lxc_health: Lost carrier Jul 6 23:51:40.710776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30-rootfs.mount: Deactivated successfully. Jul 6 23:51:40.718031 systemd[1]: cri-containerd-ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc.scope: Deactivated successfully. Jul 6 23:51:40.718412 systemd[1]: cri-containerd-ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc.scope: Consumed 4.509s CPU time, 124M memory peak, 152K read from disk, 12.9M written to disk. 
Jul 6 23:51:40.720051 containerd[1913]: time="2025-07-06T23:51:40.719998576Z" level=info msg="received exit event container_id:\"ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc\" id:\"ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc\" pid:4040 exited_at:{seconds:1751845900 nanos:719709533}" Jul 6 23:51:40.720368 containerd[1913]: time="2025-07-06T23:51:40.720341671Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc\" id:\"ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc\" pid:4040 exited_at:{seconds:1751845900 nanos:719709533}" Jul 6 23:51:40.735161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc-rootfs.mount: Deactivated successfully. Jul 6 23:51:40.798652 containerd[1913]: time="2025-07-06T23:51:40.798612901Z" level=info msg="StopContainer for \"bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30\" returns successfully" Jul 6 23:51:40.799349 containerd[1913]: time="2025-07-06T23:51:40.799324889Z" level=info msg="StopPodSandbox for \"245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5\"" Jul 6 23:51:40.799424 containerd[1913]: time="2025-07-06T23:51:40.799374422Z" level=info msg="Container to stop \"bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:51:40.802052 containerd[1913]: time="2025-07-06T23:51:40.802016853Z" level=info msg="StopContainer for \"ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc\" returns successfully" Jul 6 23:51:40.802574 containerd[1913]: time="2025-07-06T23:51:40.802528087Z" level=info msg="StopPodSandbox for \"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\"" Jul 6 23:51:40.802842 containerd[1913]: time="2025-07-06T23:51:40.802819242Z" level=info msg="Container to stop 
\"ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:51:40.802842 containerd[1913]: time="2025-07-06T23:51:40.802841888Z" level=info msg="Container to stop \"e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:51:40.802920 containerd[1913]: time="2025-07-06T23:51:40.802850248Z" level=info msg="Container to stop \"518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:51:40.802920 containerd[1913]: time="2025-07-06T23:51:40.802855911Z" level=info msg="Container to stop \"31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:51:40.802920 containerd[1913]: time="2025-07-06T23:51:40.802862879Z" level=info msg="Container to stop \"898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:51:40.805885 systemd[1]: cri-containerd-245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5.scope: Deactivated successfully. Jul 6 23:51:40.808244 containerd[1913]: time="2025-07-06T23:51:40.808138166Z" level=info msg="TaskExit event in podsandbox handler container_id:\"245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5\" id:\"245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5\" pid:3512 exit_status:137 exited_at:{seconds:1751845900 nanos:807509284}" Jul 6 23:51:40.810056 systemd[1]: cri-containerd-0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c.scope: Deactivated successfully. 
Jul 6 23:51:40.831629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c-rootfs.mount: Deactivated successfully. Jul 6 23:51:40.836875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5-rootfs.mount: Deactivated successfully. Jul 6 23:51:40.850141 containerd[1913]: time="2025-07-06T23:51:40.850103486Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\" id:\"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\" pid:3578 exit_status:137 exited_at:{seconds:1751845900 nanos:810494674}" Jul 6 23:51:40.850566 containerd[1913]: time="2025-07-06T23:51:40.850234756Z" level=info msg="received exit event sandbox_id:\"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\" exit_status:137 exited_at:{seconds:1751845900 nanos:810494674}" Jul 6 23:51:40.850909 containerd[1913]: time="2025-07-06T23:51:40.850725177Z" level=info msg="shim disconnected" id=245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5 namespace=k8s.io Jul 6 23:51:40.850909 containerd[1913]: time="2025-07-06T23:51:40.850746487Z" level=warning msg="cleaning up after shim disconnected" id=245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5 namespace=k8s.io Jul 6 23:51:40.850909 containerd[1913]: time="2025-07-06T23:51:40.850771365Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:51:40.855264 containerd[1913]: time="2025-07-06T23:51:40.854375430Z" level=info msg="received exit event sandbox_id:\"245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5\" exit_status:137 exited_at:{seconds:1751845900 nanos:807509284}" Jul 6 23:51:40.852628 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5-shm.mount: Deactivated successfully. 
Jul 6 23:51:40.856480 containerd[1913]: time="2025-07-06T23:51:40.856453727Z" level=info msg="TearDown network for sandbox \"245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5\" successfully" Jul 6 23:51:40.856634 containerd[1913]: time="2025-07-06T23:51:40.856563710Z" level=info msg="StopPodSandbox for \"245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5\" returns successfully" Jul 6 23:51:40.856811 containerd[1913]: time="2025-07-06T23:51:40.856796909Z" level=info msg="shim disconnected" id=0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c namespace=k8s.io Jul 6 23:51:40.857021 containerd[1913]: time="2025-07-06T23:51:40.856986024Z" level=warning msg="cleaning up after shim disconnected" id=0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c namespace=k8s.io Jul 6 23:51:40.857407 containerd[1913]: time="2025-07-06T23:51:40.857390130Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:51:40.859703 containerd[1913]: time="2025-07-06T23:51:40.858838536Z" level=error msg="failed sending message on channel" error="write unix /run/containerd/containerd.sock.ttrpc->@: write: broken pipe" Jul 6 23:51:40.859703 containerd[1913]: time="2025-07-06T23:51:40.856954610Z" level=info msg="TearDown network for sandbox \"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\" successfully" Jul 6 23:51:40.859703 containerd[1913]: time="2025-07-06T23:51:40.859098997Z" level=info msg="StopPodSandbox for \"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\" returns successfully" Jul 6 23:51:40.909942 kubelet[3406]: I0706 23:51:40.909877 3406 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58b4afee-9e2e-4939-876d-fd2fe4409b78-cilium-config-path\") pod \"58b4afee-9e2e-4939-876d-fd2fe4409b78\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " Jul 6 23:51:40.911396 kubelet[3406]: I0706 23:51:40.909926 
3406 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsgx2\" (UniqueName: \"kubernetes.io/projected/062b27d8-3a8e-497b-93b7-a26ed55682bc-kube-api-access-lsgx2\") pod \"062b27d8-3a8e-497b-93b7-a26ed55682bc\" (UID: \"062b27d8-3a8e-497b-93b7-a26ed55682bc\") " Jul 6 23:51:40.911396 kubelet[3406]: I0706 23:51:40.910278 3406 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58b4afee-9e2e-4939-876d-fd2fe4409b78-clustermesh-secrets\") pod \"58b4afee-9e2e-4939-876d-fd2fe4409b78\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " Jul 6 23:51:40.911396 kubelet[3406]: I0706 23:51:40.910295 3406 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-cilium-cgroup\") pod \"58b4afee-9e2e-4939-876d-fd2fe4409b78\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " Jul 6 23:51:40.911396 kubelet[3406]: I0706 23:51:40.910308 3406 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-host-proc-sys-kernel\") pod \"58b4afee-9e2e-4939-876d-fd2fe4409b78\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " Jul 6 23:51:40.911396 kubelet[3406]: I0706 23:51:40.910320 3406 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-host-proc-sys-net\") pod \"58b4afee-9e2e-4939-876d-fd2fe4409b78\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " Jul 6 23:51:40.911396 kubelet[3406]: I0706 23:51:40.910331 3406 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-etc-cni-netd\") pod 
\"58b4afee-9e2e-4939-876d-fd2fe4409b78\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " Jul 6 23:51:40.911509 kubelet[3406]: I0706 23:51:40.910345 3406 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-cni-path\") pod \"58b4afee-9e2e-4939-876d-fd2fe4409b78\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " Jul 6 23:51:40.911509 kubelet[3406]: I0706 23:51:40.910355 3406 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-xtables-lock\") pod \"58b4afee-9e2e-4939-876d-fd2fe4409b78\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " Jul 6 23:51:40.911509 kubelet[3406]: I0706 23:51:40.910364 3406 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-lib-modules\") pod \"58b4afee-9e2e-4939-876d-fd2fe4409b78\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " Jul 6 23:51:40.911509 kubelet[3406]: I0706 23:51:40.910373 3406 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-bpf-maps\") pod \"58b4afee-9e2e-4939-876d-fd2fe4409b78\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " Jul 6 23:51:40.911509 kubelet[3406]: I0706 23:51:40.910385 3406 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-hostproc\") pod \"58b4afee-9e2e-4939-876d-fd2fe4409b78\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " Jul 6 23:51:40.911509 kubelet[3406]: I0706 23:51:40.910416 3406 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcqj8\" (UniqueName: 
\"kubernetes.io/projected/58b4afee-9e2e-4939-876d-fd2fe4409b78-kube-api-access-zcqj8\") pod \"58b4afee-9e2e-4939-876d-fd2fe4409b78\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " Jul 6 23:51:40.911596 kubelet[3406]: I0706 23:51:40.910429 3406 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-cilium-run\") pod \"58b4afee-9e2e-4939-876d-fd2fe4409b78\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " Jul 6 23:51:40.911596 kubelet[3406]: I0706 23:51:40.910441 3406 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58b4afee-9e2e-4939-876d-fd2fe4409b78-hubble-tls\") pod \"58b4afee-9e2e-4939-876d-fd2fe4409b78\" (UID: \"58b4afee-9e2e-4939-876d-fd2fe4409b78\") " Jul 6 23:51:40.911596 kubelet[3406]: I0706 23:51:40.910451 3406 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/062b27d8-3a8e-497b-93b7-a26ed55682bc-cilium-config-path\") pod \"062b27d8-3a8e-497b-93b7-a26ed55682bc\" (UID: \"062b27d8-3a8e-497b-93b7-a26ed55682bc\") " Jul 6 23:51:40.911596 kubelet[3406]: I0706 23:51:40.911537 3406 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58b4afee-9e2e-4939-876d-fd2fe4409b78-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "58b4afee-9e2e-4939-876d-fd2fe4409b78" (UID: "58b4afee-9e2e-4939-876d-fd2fe4409b78"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 6 23:51:40.911654 kubelet[3406]: I0706 23:51:40.911598 3406 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-cni-path" (OuterVolumeSpecName: "cni-path") pod "58b4afee-9e2e-4939-876d-fd2fe4409b78" (UID: "58b4afee-9e2e-4939-876d-fd2fe4409b78"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:51:40.912035 kubelet[3406]: I0706 23:51:40.911778 3406 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "58b4afee-9e2e-4939-876d-fd2fe4409b78" (UID: "58b4afee-9e2e-4939-876d-fd2fe4409b78"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:51:40.912035 kubelet[3406]: I0706 23:51:40.911813 3406 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "58b4afee-9e2e-4939-876d-fd2fe4409b78" (UID: "58b4afee-9e2e-4939-876d-fd2fe4409b78"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:51:40.912035 kubelet[3406]: I0706 23:51:40.911824 3406 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "58b4afee-9e2e-4939-876d-fd2fe4409b78" (UID: "58b4afee-9e2e-4939-876d-fd2fe4409b78"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:51:40.912035 kubelet[3406]: I0706 23:51:40.911833 3406 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-hostproc" (OuterVolumeSpecName: "hostproc") pod "58b4afee-9e2e-4939-876d-fd2fe4409b78" (UID: "58b4afee-9e2e-4939-876d-fd2fe4409b78"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:51:40.913307 kubelet[3406]: I0706 23:51:40.913283 3406 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "58b4afee-9e2e-4939-876d-fd2fe4409b78" (UID: "58b4afee-9e2e-4939-876d-fd2fe4409b78"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:51:40.913418 kubelet[3406]: I0706 23:51:40.913387 3406 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "58b4afee-9e2e-4939-876d-fd2fe4409b78" (UID: "58b4afee-9e2e-4939-876d-fd2fe4409b78"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:51:40.913479 kubelet[3406]: I0706 23:51:40.913403 3406 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "58b4afee-9e2e-4939-876d-fd2fe4409b78" (UID: "58b4afee-9e2e-4939-876d-fd2fe4409b78"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:51:40.913536 kubelet[3406]: I0706 23:51:40.913524 3406 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "58b4afee-9e2e-4939-876d-fd2fe4409b78" (UID: "58b4afee-9e2e-4939-876d-fd2fe4409b78"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:51:40.913627 kubelet[3406]: I0706 23:51:40.913613 3406 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "58b4afee-9e2e-4939-876d-fd2fe4409b78" (UID: "58b4afee-9e2e-4939-876d-fd2fe4409b78"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:51:40.914549 kubelet[3406]: I0706 23:51:40.914388 3406 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/062b27d8-3a8e-497b-93b7-a26ed55682bc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "062b27d8-3a8e-497b-93b7-a26ed55682bc" (UID: "062b27d8-3a8e-497b-93b7-a26ed55682bc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 6 23:51:40.916234 kubelet[3406]: I0706 23:51:40.916180 3406 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/062b27d8-3a8e-497b-93b7-a26ed55682bc-kube-api-access-lsgx2" (OuterVolumeSpecName: "kube-api-access-lsgx2") pod "062b27d8-3a8e-497b-93b7-a26ed55682bc" (UID: "062b27d8-3a8e-497b-93b7-a26ed55682bc"). InnerVolumeSpecName "kube-api-access-lsgx2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 6 23:51:40.916952 kubelet[3406]: I0706 23:51:40.916917 3406 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58b4afee-9e2e-4939-876d-fd2fe4409b78-kube-api-access-zcqj8" (OuterVolumeSpecName: "kube-api-access-zcqj8") pod "58b4afee-9e2e-4939-876d-fd2fe4409b78" (UID: "58b4afee-9e2e-4939-876d-fd2fe4409b78"). InnerVolumeSpecName "kube-api-access-zcqj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 6 23:51:40.917315 kubelet[3406]: I0706 23:51:40.917277 3406 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b4afee-9e2e-4939-876d-fd2fe4409b78-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "58b4afee-9e2e-4939-876d-fd2fe4409b78" (UID: "58b4afee-9e2e-4939-876d-fd2fe4409b78"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 6 23:51:40.917770 kubelet[3406]: I0706 23:51:40.917737 3406 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58b4afee-9e2e-4939-876d-fd2fe4409b78-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "58b4afee-9e2e-4939-876d-fd2fe4409b78" (UID: "58b4afee-9e2e-4939-876d-fd2fe4409b78"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 6 23:51:41.011072 kubelet[3406]: I0706 23:51:41.011029 3406 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-cilium-run\") on node \"ci-4344.1.1-a-aa3e6ac533\" DevicePath \"\"" Jul 6 23:51:41.011424 kubelet[3406]: I0706 23:51:41.011280 3406 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58b4afee-9e2e-4939-876d-fd2fe4409b78-hubble-tls\") on node \"ci-4344.1.1-a-aa3e6ac533\" DevicePath \"\"" Jul 6 23:51:41.011424 kubelet[3406]: I0706 23:51:41.011299 3406 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/062b27d8-3a8e-497b-93b7-a26ed55682bc-cilium-config-path\") on node \"ci-4344.1.1-a-aa3e6ac533\" DevicePath \"\"" Jul 6 23:51:41.011424 kubelet[3406]: I0706 23:51:41.011311 3406 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58b4afee-9e2e-4939-876d-fd2fe4409b78-cilium-config-path\") on node \"ci-4344.1.1-a-aa3e6ac533\" DevicePath \"\"" Jul 6 23:51:41.011424 kubelet[3406]: I0706 23:51:41.011320 3406 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58b4afee-9e2e-4939-876d-fd2fe4409b78-clustermesh-secrets\") on node \"ci-4344.1.1-a-aa3e6ac533\" DevicePath \"\"" Jul 6 23:51:41.011424 kubelet[3406]: I0706 23:51:41.011327 3406 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsgx2\" (UniqueName: \"kubernetes.io/projected/062b27d8-3a8e-497b-93b7-a26ed55682bc-kube-api-access-lsgx2\") on node \"ci-4344.1.1-a-aa3e6ac533\" DevicePath \"\"" Jul 6 23:51:41.011424 kubelet[3406]: I0706 23:51:41.011347 3406 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-host-proc-sys-net\") on node \"ci-4344.1.1-a-aa3e6ac533\" DevicePath \"\"" Jul 6 23:51:41.011424 kubelet[3406]: I0706 23:51:41.011355 3406 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-cilium-cgroup\") on node \"ci-4344.1.1-a-aa3e6ac533\" DevicePath \"\"" Jul 6 23:51:41.011424 kubelet[3406]: I0706 23:51:41.011362 3406 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-host-proc-sys-kernel\") on node \"ci-4344.1.1-a-aa3e6ac533\" DevicePath \"\"" Jul 6 23:51:41.011602 kubelet[3406]: I0706 23:51:41.011368 3406 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-etc-cni-netd\") on node \"ci-4344.1.1-a-aa3e6ac533\" DevicePath \"\"" Jul 6 23:51:41.011602 kubelet[3406]: I0706 23:51:41.011373 3406 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-xtables-lock\") on node \"ci-4344.1.1-a-aa3e6ac533\" DevicePath \"\"" Jul 6 23:51:41.011602 kubelet[3406]: I0706 23:51:41.011378 3406 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-lib-modules\") on node \"ci-4344.1.1-a-aa3e6ac533\" DevicePath \"\"" Jul 6 23:51:41.011602 kubelet[3406]: I0706 23:51:41.011383 3406 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-bpf-maps\") on node \"ci-4344.1.1-a-aa3e6ac533\" DevicePath \"\"" Jul 6 23:51:41.011602 kubelet[3406]: I0706 23:51:41.011389 3406 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-cni-path\") on node \"ci-4344.1.1-a-aa3e6ac533\" DevicePath \"\"" Jul 6 23:51:41.011602 kubelet[3406]: I0706 23:51:41.011394 3406 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcqj8\" (UniqueName: \"kubernetes.io/projected/58b4afee-9e2e-4939-876d-fd2fe4409b78-kube-api-access-zcqj8\") on node \"ci-4344.1.1-a-aa3e6ac533\" DevicePath \"\"" Jul 6 23:51:41.011602 kubelet[3406]: I0706 23:51:41.011401 3406 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58b4afee-9e2e-4939-876d-fd2fe4409b78-hostproc\") on node \"ci-4344.1.1-a-aa3e6ac533\" DevicePath \"\"" Jul 6 23:51:41.339847 systemd[1]: Removed slice kubepods-burstable-pod58b4afee_9e2e_4939_876d_fd2fe4409b78.slice - libcontainer container kubepods-burstable-pod58b4afee_9e2e_4939_876d_fd2fe4409b78.slice. Jul 6 23:51:41.340255 systemd[1]: kubepods-burstable-pod58b4afee_9e2e_4939_876d_fd2fe4409b78.slice: Consumed 4.571s CPU time, 124.4M memory peak, 152K read from disk, 12.9M written to disk. Jul 6 23:51:41.341654 systemd[1]: Removed slice kubepods-besteffort-pod062b27d8_3a8e_497b_93b7_a26ed55682bc.slice - libcontainer container kubepods-besteffort-pod062b27d8_3a8e_497b_93b7_a26ed55682bc.slice. 
Jul 6 23:51:41.680764 kubelet[3406]: I0706 23:51:41.680662 3406 scope.go:117] "RemoveContainer" containerID="ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc" Jul 6 23:51:41.687727 containerd[1913]: time="2025-07-06T23:51:41.687164497Z" level=info msg="RemoveContainer for \"ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc\"" Jul 6 23:51:41.703949 containerd[1913]: time="2025-07-06T23:51:41.703888596Z" level=info msg="RemoveContainer for \"ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc\" returns successfully" Jul 6 23:51:41.704852 kubelet[3406]: I0706 23:51:41.704821 3406 scope.go:117] "RemoveContainer" containerID="31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985" Jul 6 23:51:41.706436 containerd[1913]: time="2025-07-06T23:51:41.706335890Z" level=info msg="RemoveContainer for \"31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985\"" Jul 6 23:51:41.710125 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c-shm.mount: Deactivated successfully. Jul 6 23:51:41.710262 systemd[1]: var-lib-kubelet-pods-58b4afee\x2d9e2e\x2d4939\x2d876d\x2dfd2fe4409b78-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzcqj8.mount: Deactivated successfully. Jul 6 23:51:41.710312 systemd[1]: var-lib-kubelet-pods-062b27d8\x2d3a8e\x2d497b\x2d93b7\x2da26ed55682bc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlsgx2.mount: Deactivated successfully. Jul 6 23:51:41.710362 systemd[1]: var-lib-kubelet-pods-58b4afee\x2d9e2e\x2d4939\x2d876d\x2dfd2fe4409b78-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 6 23:51:41.710402 systemd[1]: var-lib-kubelet-pods-58b4afee\x2d9e2e\x2d4939\x2d876d\x2dfd2fe4409b78-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 6 23:51:41.719339 containerd[1913]: time="2025-07-06T23:51:41.719292936Z" level=info msg="RemoveContainer for \"31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985\" returns successfully" Jul 6 23:51:41.719697 kubelet[3406]: I0706 23:51:41.719677 3406 scope.go:117] "RemoveContainer" containerID="518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489" Jul 6 23:51:41.721685 containerd[1913]: time="2025-07-06T23:51:41.721629837Z" level=info msg="RemoveContainer for \"518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489\"" Jul 6 23:51:41.736107 containerd[1913]: time="2025-07-06T23:51:41.736051872Z" level=info msg="RemoveContainer for \"518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489\" returns successfully" Jul 6 23:51:41.736362 kubelet[3406]: I0706 23:51:41.736341 3406 scope.go:117] "RemoveContainer" containerID="898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd" Jul 6 23:51:41.738869 containerd[1913]: time="2025-07-06T23:51:41.738834717Z" level=info msg="RemoveContainer for \"898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd\"" Jul 6 23:51:41.748116 containerd[1913]: time="2025-07-06T23:51:41.748076714Z" level=info msg="RemoveContainer for \"898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd\" returns successfully" Jul 6 23:51:41.748453 kubelet[3406]: I0706 23:51:41.748403 3406 scope.go:117] "RemoveContainer" containerID="e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b" Jul 6 23:51:41.749962 containerd[1913]: time="2025-07-06T23:51:41.749935651Z" level=info msg="RemoveContainer for \"e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b\"" Jul 6 23:51:41.761753 containerd[1913]: time="2025-07-06T23:51:41.761714487Z" level=info msg="RemoveContainer for \"e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b\" returns successfully" Jul 6 23:51:41.762018 kubelet[3406]: I0706 23:51:41.761991 3406 scope.go:117] "RemoveContainer" 
containerID="ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc" Jul 6 23:51:41.762485 containerd[1913]: time="2025-07-06T23:51:41.762450769Z" level=error msg="ContainerStatus for \"ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc\": not found" Jul 6 23:51:41.762645 kubelet[3406]: E0706 23:51:41.762596 3406 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc\": not found" containerID="ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc" Jul 6 23:51:41.762722 kubelet[3406]: I0706 23:51:41.762651 3406 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc"} err="failed to get container status \"ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba9e950115c29c02792a40b80f802142d23c8f64dc25224cd5c079f49e4bfdcc\": not found" Jul 6 23:51:41.762722 kubelet[3406]: I0706 23:51:41.762721 3406 scope.go:117] "RemoveContainer" containerID="31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985" Jul 6 23:51:41.762956 containerd[1913]: time="2025-07-06T23:51:41.762876554Z" level=error msg="ContainerStatus for \"31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985\": not found" Jul 6 23:51:41.762999 kubelet[3406]: E0706 23:51:41.762968 3406 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985\": not found" containerID="31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985" Jul 6 23:51:41.762999 kubelet[3406]: I0706 23:51:41.762985 3406 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985"} err="failed to get container status \"31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985\": rpc error: code = NotFound desc = an error occurred when try to find container \"31f7b239ec89498b512222d0ee04ecadd59847a43bd6aff32abe2a0edd795985\": not found" Jul 6 23:51:41.762999 kubelet[3406]: I0706 23:51:41.762999 3406 scope.go:117] "RemoveContainer" containerID="518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489" Jul 6 23:51:41.763164 containerd[1913]: time="2025-07-06T23:51:41.763119848Z" level=error msg="ContainerStatus for \"518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489\": not found" Jul 6 23:51:41.763284 kubelet[3406]: E0706 23:51:41.763265 3406 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489\": not found" containerID="518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489" Jul 6 23:51:41.763351 kubelet[3406]: I0706 23:51:41.763303 3406 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489"} err="failed to get container status \"518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"518cb7878b08ba16b9404e513c28968d90be24e9510fe0b1551db821692ae489\": not found" Jul 6 23:51:41.763351 kubelet[3406]: I0706 23:51:41.763321 3406 scope.go:117] "RemoveContainer" containerID="898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd" Jul 6 23:51:41.763562 kubelet[3406]: E0706 23:51:41.763526 3406 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd\": not found" containerID="898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd" Jul 6 23:51:41.763562 kubelet[3406]: I0706 23:51:41.763540 3406 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd"} err="failed to get container status \"898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd\": rpc error: code = NotFound desc = an error occurred when try to find container \"898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd\": not found" Jul 6 23:51:41.763562 kubelet[3406]: I0706 23:51:41.763549 3406 scope.go:117] "RemoveContainer" containerID="e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b" Jul 6 23:51:41.763620 containerd[1913]: time="2025-07-06T23:51:41.763441729Z" level=error msg="ContainerStatus for \"898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"898dce1af8d3ff3acbe5c7101bb520d794a29648b9faf603c671d07c537772bd\": not found" Jul 6 23:51:41.763845 containerd[1913]: time="2025-07-06T23:51:41.763747986Z" level=error msg="ContainerStatus for \"e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b\": not found" Jul 6 23:51:41.763912 kubelet[3406]: E0706 23:51:41.763895 3406 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b\": not found" containerID="e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b" Jul 6 23:51:41.763936 kubelet[3406]: I0706 23:51:41.763909 3406 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b"} err="failed to get container status \"e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b\": rpc error: code = NotFound desc = an error occurred when try to find container \"e50666da9b9498c374f7e3731fcc70ed7d972cfc0e379709a2dd5f5c201b257b\": not found" Jul 6 23:51:41.763936 kubelet[3406]: I0706 23:51:41.763919 3406 scope.go:117] "RemoveContainer" containerID="bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30" Jul 6 23:51:41.765297 containerd[1913]: time="2025-07-06T23:51:41.765272571Z" level=info msg="RemoveContainer for \"bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30\"" Jul 6 23:51:41.776009 containerd[1913]: time="2025-07-06T23:51:41.775973334Z" level=info msg="RemoveContainer for \"bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30\" returns successfully" Jul 6 23:51:41.776304 kubelet[3406]: I0706 23:51:41.776280 3406 scope.go:117] "RemoveContainer" containerID="bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30" Jul 6 23:51:41.776617 containerd[1913]: time="2025-07-06T23:51:41.776557243Z" level=error msg="ContainerStatus for \"bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30\": not found" Jul 6 23:51:41.776686 kubelet[3406]: E0706 23:51:41.776660 3406 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30\": not found" containerID="bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30" Jul 6 23:51:41.776713 kubelet[3406]: I0706 23:51:41.776691 3406 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30"} err="failed to get container status \"bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc0efb23b06985c3b965df8a3b056ffc4c130d238d8d11190ceb3c6196d13a30\": not found" Jul 6 23:51:42.420407 kubelet[3406]: E0706 23:51:42.420259 3406 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 6 23:51:42.682317 sshd[4949]: Connection closed by 10.200.16.10 port 58648 Jul 6 23:51:42.682898 sshd-session[4947]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:42.686818 systemd[1]: sshd@22-10.200.20.10:22-10.200.16.10:58648.service: Deactivated successfully. Jul 6 23:51:42.689932 systemd[1]: session-25.scope: Deactivated successfully. Jul 6 23:51:42.690850 systemd-logind[1894]: Session 25 logged out. Waiting for processes to exit. Jul 6 23:51:42.692078 systemd-logind[1894]: Removed session 25. Jul 6 23:51:42.770325 systemd[1]: Started sshd@23-10.200.20.10:22-10.200.16.10:36168.service - OpenSSH per-connection server daemon (10.200.16.10:36168). 
Jul 6 23:51:43.279295 sshd[5102]: Accepted publickey for core from 10.200.16.10 port 36168 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:51:43.280499 sshd-session[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:43.284368 systemd-logind[1894]: New session 26 of user core. Jul 6 23:51:43.291556 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 6 23:51:43.334326 kubelet[3406]: I0706 23:51:43.334293 3406 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="062b27d8-3a8e-497b-93b7-a26ed55682bc" path="/var/lib/kubelet/pods/062b27d8-3a8e-497b-93b7-a26ed55682bc/volumes" Jul 6 23:51:43.334948 kubelet[3406]: I0706 23:51:43.334857 3406 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58b4afee-9e2e-4939-876d-fd2fe4409b78" path="/var/lib/kubelet/pods/58b4afee-9e2e-4939-876d-fd2fe4409b78/volumes" Jul 6 23:51:44.034026 kubelet[3406]: E0706 23:51:44.033782 3406 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="062b27d8-3a8e-497b-93b7-a26ed55682bc" containerName="cilium-operator" Jul 6 23:51:44.034026 kubelet[3406]: E0706 23:51:44.033812 3406 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="58b4afee-9e2e-4939-876d-fd2fe4409b78" containerName="mount-cgroup" Jul 6 23:51:44.034026 kubelet[3406]: E0706 23:51:44.033818 3406 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="58b4afee-9e2e-4939-876d-fd2fe4409b78" containerName="apply-sysctl-overwrites" Jul 6 23:51:44.034026 kubelet[3406]: E0706 23:51:44.033821 3406 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="58b4afee-9e2e-4939-876d-fd2fe4409b78" containerName="mount-bpf-fs" Jul 6 23:51:44.034026 kubelet[3406]: E0706 23:51:44.033824 3406 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="58b4afee-9e2e-4939-876d-fd2fe4409b78" containerName="clean-cilium-state" Jul 6 23:51:44.034026 kubelet[3406]: E0706 23:51:44.033828 3406 
cpu_manager.go:395] "RemoveStaleState: removing container" podUID="58b4afee-9e2e-4939-876d-fd2fe4409b78" containerName="cilium-agent" Jul 6 23:51:44.034026 kubelet[3406]: I0706 23:51:44.033846 3406 memory_manager.go:354] "RemoveStaleState removing state" podUID="58b4afee-9e2e-4939-876d-fd2fe4409b78" containerName="cilium-agent" Jul 6 23:51:44.034026 kubelet[3406]: I0706 23:51:44.033851 3406 memory_manager.go:354] "RemoveStaleState removing state" podUID="062b27d8-3a8e-497b-93b7-a26ed55682bc" containerName="cilium-operator" Jul 6 23:51:44.042015 systemd[1]: Created slice kubepods-burstable-pod0bcff33e_86a9_4bfa_9407_bd377a010557.slice - libcontainer container kubepods-burstable-pod0bcff33e_86a9_4bfa_9407_bd377a010557.slice. Jul 6 23:51:44.044448 kubelet[3406]: W0706 23:51:44.044035 3406 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4344.1.1-a-aa3e6ac533" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4344.1.1-a-aa3e6ac533' and this object Jul 6 23:51:44.045349 kubelet[3406]: E0706 23:51:44.044583 3406 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4344.1.1-a-aa3e6ac533\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4344.1.1-a-aa3e6ac533' and this object" logger="UnhandledError" Jul 6 23:51:44.045349 kubelet[3406]: W0706 23:51:44.044675 3406 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4344.1.1-a-aa3e6ac533" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4344.1.1-a-aa3e6ac533' and 
this object Jul 6 23:51:44.045349 kubelet[3406]: E0706 23:51:44.044688 3406 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4344.1.1-a-aa3e6ac533\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4344.1.1-a-aa3e6ac533' and this object" logger="UnhandledError" Jul 6 23:51:44.045349 kubelet[3406]: W0706 23:51:44.044715 3406 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4344.1.1-a-aa3e6ac533" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4344.1.1-a-aa3e6ac533' and this object Jul 6 23:51:44.045466 kubelet[3406]: E0706 23:51:44.044744 3406 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4344.1.1-a-aa3e6ac533\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4344.1.1-a-aa3e6ac533' and this object" logger="UnhandledError" Jul 6 23:51:44.045466 kubelet[3406]: W0706 23:51:44.044779 3406 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4344.1.1-a-aa3e6ac533" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4344.1.1-a-aa3e6ac533' and this object Jul 6 23:51:44.045466 kubelet[3406]: E0706 23:51:44.044787 3406 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps 
\"cilium-config\" is forbidden: User \"system:node:ci-4344.1.1-a-aa3e6ac533\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4344.1.1-a-aa3e6ac533' and this object" logger="UnhandledError" Jul 6 23:51:44.073082 sshd[5104]: Connection closed by 10.200.16.10 port 36168 Jul 6 23:51:44.073637 sshd-session[5102]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:44.079803 systemd[1]: sshd@23-10.200.20.10:22-10.200.16.10:36168.service: Deactivated successfully. Jul 6 23:51:44.083426 systemd[1]: session-26.scope: Deactivated successfully. Jul 6 23:51:44.084610 systemd-logind[1894]: Session 26 logged out. Waiting for processes to exit. Jul 6 23:51:44.085898 systemd-logind[1894]: Removed session 26. Jul 6 23:51:44.127731 kubelet[3406]: I0706 23:51:44.127657 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0bcff33e-86a9-4bfa-9407-bd377a010557-cilium-run\") pod \"cilium-gm5vw\" (UID: \"0bcff33e-86a9-4bfa-9407-bd377a010557\") " pod="kube-system/cilium-gm5vw" Jul 6 23:51:44.127731 kubelet[3406]: I0706 23:51:44.127705 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0bcff33e-86a9-4bfa-9407-bd377a010557-host-proc-sys-kernel\") pod \"cilium-gm5vw\" (UID: \"0bcff33e-86a9-4bfa-9407-bd377a010557\") " pod="kube-system/cilium-gm5vw" Jul 6 23:51:44.127926 kubelet[3406]: I0706 23:51:44.127748 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0bcff33e-86a9-4bfa-9407-bd377a010557-hubble-tls\") pod \"cilium-gm5vw\" (UID: \"0bcff33e-86a9-4bfa-9407-bd377a010557\") " pod="kube-system/cilium-gm5vw" Jul 6 23:51:44.127926 kubelet[3406]: I0706 23:51:44.127793 3406 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0bcff33e-86a9-4bfa-9407-bd377a010557-bpf-maps\") pod \"cilium-gm5vw\" (UID: \"0bcff33e-86a9-4bfa-9407-bd377a010557\") " pod="kube-system/cilium-gm5vw"
Jul 6 23:51:44.127926 kubelet[3406]: I0706 23:51:44.127808 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0bcff33e-86a9-4bfa-9407-bd377a010557-xtables-lock\") pod \"cilium-gm5vw\" (UID: \"0bcff33e-86a9-4bfa-9407-bd377a010557\") " pod="kube-system/cilium-gm5vw"
Jul 6 23:51:44.127926 kubelet[3406]: I0706 23:51:44.127821 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0bcff33e-86a9-4bfa-9407-bd377a010557-host-proc-sys-net\") pod \"cilium-gm5vw\" (UID: \"0bcff33e-86a9-4bfa-9407-bd377a010557\") " pod="kube-system/cilium-gm5vw"
Jul 6 23:51:44.127926 kubelet[3406]: I0706 23:51:44.127838 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0bcff33e-86a9-4bfa-9407-bd377a010557-etc-cni-netd\") pod \"cilium-gm5vw\" (UID: \"0bcff33e-86a9-4bfa-9407-bd377a010557\") " pod="kube-system/cilium-gm5vw"
Jul 6 23:51:44.127926 kubelet[3406]: I0706 23:51:44.127850 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0bcff33e-86a9-4bfa-9407-bd377a010557-cilium-config-path\") pod \"cilium-gm5vw\" (UID: \"0bcff33e-86a9-4bfa-9407-bd377a010557\") " pod="kube-system/cilium-gm5vw"
Jul 6 23:51:44.128016 kubelet[3406]: I0706 23:51:44.127864 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0bcff33e-86a9-4bfa-9407-bd377a010557-cilium-cgroup\") pod \"cilium-gm5vw\" (UID: \"0bcff33e-86a9-4bfa-9407-bd377a010557\") " pod="kube-system/cilium-gm5vw"
Jul 6 23:51:44.128016 kubelet[3406]: I0706 23:51:44.127874 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0bcff33e-86a9-4bfa-9407-bd377a010557-cni-path\") pod \"cilium-gm5vw\" (UID: \"0bcff33e-86a9-4bfa-9407-bd377a010557\") " pod="kube-system/cilium-gm5vw"
Jul 6 23:51:44.128016 kubelet[3406]: I0706 23:51:44.127886 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0bcff33e-86a9-4bfa-9407-bd377a010557-cilium-ipsec-secrets\") pod \"cilium-gm5vw\" (UID: \"0bcff33e-86a9-4bfa-9407-bd377a010557\") " pod="kube-system/cilium-gm5vw"
Jul 6 23:51:44.128016 kubelet[3406]: I0706 23:51:44.127896 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfr8s\" (UniqueName: \"kubernetes.io/projected/0bcff33e-86a9-4bfa-9407-bd377a010557-kube-api-access-bfr8s\") pod \"cilium-gm5vw\" (UID: \"0bcff33e-86a9-4bfa-9407-bd377a010557\") " pod="kube-system/cilium-gm5vw"
Jul 6 23:51:44.128016 kubelet[3406]: I0706 23:51:44.127914 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0bcff33e-86a9-4bfa-9407-bd377a010557-hostproc\") pod \"cilium-gm5vw\" (UID: \"0bcff33e-86a9-4bfa-9407-bd377a010557\") " pod="kube-system/cilium-gm5vw"
Jul 6 23:51:44.128016 kubelet[3406]: I0706 23:51:44.127924 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bcff33e-86a9-4bfa-9407-bd377a010557-lib-modules\") pod \"cilium-gm5vw\" (UID: \"0bcff33e-86a9-4bfa-9407-bd377a010557\") " pod="kube-system/cilium-gm5vw"
Jul 6 23:51:44.128101 kubelet[3406]: I0706 23:51:44.127948 3406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0bcff33e-86a9-4bfa-9407-bd377a010557-clustermesh-secrets\") pod \"cilium-gm5vw\" (UID: \"0bcff33e-86a9-4bfa-9407-bd377a010557\") " pod="kube-system/cilium-gm5vw"
Jul 6 23:51:44.159362 systemd[1]: Started sshd@24-10.200.20.10:22-10.200.16.10:36174.service - OpenSSH per-connection server daemon (10.200.16.10:36174).
Jul 6 23:51:44.642069 sshd[5114]: Accepted publickey for core from 10.200.16.10 port 36174 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:51:44.643247 sshd-session[5114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:51:44.647456 systemd-logind[1894]: New session 27 of user core.
Jul 6 23:51:44.652333 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 6 23:51:44.978873 sshd[5117]: Connection closed by 10.200.16.10 port 36174
Jul 6 23:51:44.979420 sshd-session[5114]: pam_unix(sshd:session): session closed for user core
Jul 6 23:51:44.982736 systemd[1]: sshd@24-10.200.20.10:22-10.200.16.10:36174.service: Deactivated successfully.
Jul 6 23:51:44.984332 systemd[1]: session-27.scope: Deactivated successfully.
Jul 6 23:51:44.984995 systemd-logind[1894]: Session 27 logged out. Waiting for processes to exit.
Jul 6 23:51:44.986577 systemd-logind[1894]: Removed session 27.
Jul 6 23:51:45.068581 systemd[1]: Started sshd@25-10.200.20.10:22-10.200.16.10:36180.service - OpenSSH per-connection server daemon (10.200.16.10:36180).
Jul 6 23:51:45.228976 kubelet[3406]: E0706 23:51:45.228849 3406 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Jul 6 23:51:45.228976 kubelet[3406]: E0706 23:51:45.228891 3406 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-gm5vw: failed to sync secret cache: timed out waiting for the condition
Jul 6 23:51:45.228976 kubelet[3406]: E0706 23:51:45.228950 3406 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0bcff33e-86a9-4bfa-9407-bd377a010557-hubble-tls podName:0bcff33e-86a9-4bfa-9407-bd377a010557 nodeName:}" failed. No retries permitted until 2025-07-06 23:51:45.728932688 +0000 UTC m=+168.527594847 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/0bcff33e-86a9-4bfa-9407-bd377a010557-hubble-tls") pod "cilium-gm5vw" (UID: "0bcff33e-86a9-4bfa-9407-bd377a010557") : failed to sync secret cache: timed out waiting for the condition
Jul 6 23:51:45.229578 kubelet[3406]: E0706 23:51:45.229465 3406 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Jul 6 23:51:45.229578 kubelet[3406]: E0706 23:51:45.229528 3406 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0bcff33e-86a9-4bfa-9407-bd377a010557-clustermesh-secrets podName:0bcff33e-86a9-4bfa-9407-bd377a010557 nodeName:}" failed. No retries permitted until 2025-07-06 23:51:45.72951097 +0000 UTC m=+168.528173129 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/0bcff33e-86a9-4bfa-9407-bd377a010557-clustermesh-secrets") pod "cilium-gm5vw" (UID: "0bcff33e-86a9-4bfa-9407-bd377a010557") : failed to sync secret cache: timed out waiting for the condition
Jul 6 23:51:45.568568 sshd[5125]: Accepted publickey for core from 10.200.16.10 port 36180 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:51:45.569677 sshd-session[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:51:45.573731 systemd-logind[1894]: New session 28 of user core.
Jul 6 23:51:45.583331 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 6 23:51:45.846447 containerd[1913]: time="2025-07-06T23:51:45.846317255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gm5vw,Uid:0bcff33e-86a9-4bfa-9407-bd377a010557,Namespace:kube-system,Attempt:0,}"
Jul 6 23:51:45.887741 containerd[1913]: time="2025-07-06T23:51:45.887679555Z" level=info msg="connecting to shim 7e59d2fb0176d4533eb6acff8324334fde4637df533d00661c21cd3585a15840" address="unix:///run/containerd/s/3a1699c646e55cb8aa03fdbb66fd92172cbc4d72f2348ea82754296fcdee2a26" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:51:45.916360 systemd[1]: Started cri-containerd-7e59d2fb0176d4533eb6acff8324334fde4637df533d00661c21cd3585a15840.scope - libcontainer container 7e59d2fb0176d4533eb6acff8324334fde4637df533d00661c21cd3585a15840.
Jul 6 23:51:45.938290 containerd[1913]: time="2025-07-06T23:51:45.938236284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gm5vw,Uid:0bcff33e-86a9-4bfa-9407-bd377a010557,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e59d2fb0176d4533eb6acff8324334fde4637df533d00661c21cd3585a15840\""
Jul 6 23:51:45.941555 containerd[1913]: time="2025-07-06T23:51:45.941453117Z" level=info msg="CreateContainer within sandbox \"7e59d2fb0176d4533eb6acff8324334fde4637df533d00661c21cd3585a15840\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 6 23:51:45.964072 containerd[1913]: time="2025-07-06T23:51:45.964026100Z" level=info msg="Container 9c7ee09e36dd719fa4d3148e6868274ee35afd413243d604b35c732b21478cf0: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:51:45.979643 containerd[1913]: time="2025-07-06T23:51:45.979578697Z" level=info msg="CreateContainer within sandbox \"7e59d2fb0176d4533eb6acff8324334fde4637df533d00661c21cd3585a15840\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9c7ee09e36dd719fa4d3148e6868274ee35afd413243d604b35c732b21478cf0\""
Jul 6 23:51:45.980541 containerd[1913]: time="2025-07-06T23:51:45.980507847Z" level=info msg="StartContainer for \"9c7ee09e36dd719fa4d3148e6868274ee35afd413243d604b35c732b21478cf0\""
Jul 6 23:51:45.981334 containerd[1913]: time="2025-07-06T23:51:45.981182086Z" level=info msg="connecting to shim 9c7ee09e36dd719fa4d3148e6868274ee35afd413243d604b35c732b21478cf0" address="unix:///run/containerd/s/3a1699c646e55cb8aa03fdbb66fd92172cbc4d72f2348ea82754296fcdee2a26" protocol=ttrpc version=3
Jul 6 23:51:45.999368 systemd[1]: Started cri-containerd-9c7ee09e36dd719fa4d3148e6868274ee35afd413243d604b35c732b21478cf0.scope - libcontainer container 9c7ee09e36dd719fa4d3148e6868274ee35afd413243d604b35c732b21478cf0.
Jul 6 23:51:46.025873 containerd[1913]: time="2025-07-06T23:51:46.025738517Z" level=info msg="StartContainer for \"9c7ee09e36dd719fa4d3148e6868274ee35afd413243d604b35c732b21478cf0\" returns successfully"
Jul 6 23:51:46.029967 systemd[1]: cri-containerd-9c7ee09e36dd719fa4d3148e6868274ee35afd413243d604b35c732b21478cf0.scope: Deactivated successfully.
Jul 6 23:51:46.033250 containerd[1913]: time="2025-07-06T23:51:46.033211214Z" level=info msg="received exit event container_id:\"9c7ee09e36dd719fa4d3148e6868274ee35afd413243d604b35c732b21478cf0\" id:\"9c7ee09e36dd719fa4d3148e6868274ee35afd413243d604b35c732b21478cf0\" pid:5192 exited_at:{seconds:1751845906 nanos:32958696}"
Jul 6 23:51:46.033914 containerd[1913]: time="2025-07-06T23:51:46.033746759Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c7ee09e36dd719fa4d3148e6868274ee35afd413243d604b35c732b21478cf0\" id:\"9c7ee09e36dd719fa4d3148e6868274ee35afd413243d604b35c732b21478cf0\" pid:5192 exited_at:{seconds:1751845906 nanos:32958696}"
Jul 6 23:51:46.701970 containerd[1913]: time="2025-07-06T23:51:46.701873656Z" level=info msg="CreateContainer within sandbox \"7e59d2fb0176d4533eb6acff8324334fde4637df533d00661c21cd3585a15840\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 6 23:51:46.725673 containerd[1913]: time="2025-07-06T23:51:46.725211503Z" level=info msg="Container 2c782f66e08bda0c37d66f5dc73ee9d53c2cd3353131d77d627c6ccbf13be9a0: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:51:46.748847 containerd[1913]: time="2025-07-06T23:51:46.748807828Z" level=info msg="CreateContainer within sandbox \"7e59d2fb0176d4533eb6acff8324334fde4637df533d00661c21cd3585a15840\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2c782f66e08bda0c37d66f5dc73ee9d53c2cd3353131d77d627c6ccbf13be9a0\""
Jul 6 23:51:46.749560 containerd[1913]: time="2025-07-06T23:51:46.749525825Z" level=info msg="StartContainer for \"2c782f66e08bda0c37d66f5dc73ee9d53c2cd3353131d77d627c6ccbf13be9a0\""
Jul 6 23:51:46.750333 containerd[1913]: time="2025-07-06T23:51:46.750305241Z" level=info msg="connecting to shim 2c782f66e08bda0c37d66f5dc73ee9d53c2cd3353131d77d627c6ccbf13be9a0" address="unix:///run/containerd/s/3a1699c646e55cb8aa03fdbb66fd92172cbc4d72f2348ea82754296fcdee2a26" protocol=ttrpc version=3
Jul 6 23:51:46.764352 systemd[1]: Started cri-containerd-2c782f66e08bda0c37d66f5dc73ee9d53c2cd3353131d77d627c6ccbf13be9a0.scope - libcontainer container 2c782f66e08bda0c37d66f5dc73ee9d53c2cd3353131d77d627c6ccbf13be9a0.
Jul 6 23:51:46.791720 containerd[1913]: time="2025-07-06T23:51:46.791622688Z" level=info msg="StartContainer for \"2c782f66e08bda0c37d66f5dc73ee9d53c2cd3353131d77d627c6ccbf13be9a0\" returns successfully"
Jul 6 23:51:46.791719 systemd[1]: cri-containerd-2c782f66e08bda0c37d66f5dc73ee9d53c2cd3353131d77d627c6ccbf13be9a0.scope: Deactivated successfully.
Jul 6 23:51:46.793393 containerd[1913]: time="2025-07-06T23:51:46.793311991Z" level=info msg="received exit event container_id:\"2c782f66e08bda0c37d66f5dc73ee9d53c2cd3353131d77d627c6ccbf13be9a0\" id:\"2c782f66e08bda0c37d66f5dc73ee9d53c2cd3353131d77d627c6ccbf13be9a0\" pid:5236 exited_at:{seconds:1751845906 nanos:792595427}"
Jul 6 23:51:46.793679 containerd[1913]: time="2025-07-06T23:51:46.793632136Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c782f66e08bda0c37d66f5dc73ee9d53c2cd3353131d77d627c6ccbf13be9a0\" id:\"2c782f66e08bda0c37d66f5dc73ee9d53c2cd3353131d77d627c6ccbf13be9a0\" pid:5236 exited_at:{seconds:1751845906 nanos:792595427}"
Jul 6 23:51:46.809914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c782f66e08bda0c37d66f5dc73ee9d53c2cd3353131d77d627c6ccbf13be9a0-rootfs.mount: Deactivated successfully.
Jul 6 23:51:47.421381 kubelet[3406]: E0706 23:51:47.421308 3406 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 6 23:51:47.707712 containerd[1913]: time="2025-07-06T23:51:47.707463144Z" level=info msg="CreateContainer within sandbox \"7e59d2fb0176d4533eb6acff8324334fde4637df533d00661c21cd3585a15840\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 6 23:51:47.738505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1519398679.mount: Deactivated successfully.
Jul 6 23:51:47.740082 containerd[1913]: time="2025-07-06T23:51:47.739847720Z" level=info msg="Container fff1a30d6efffdc90473e783fa9d78b04d1d300701c8096270569279456e7d90: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:51:47.760475 containerd[1913]: time="2025-07-06T23:51:47.760409798Z" level=info msg="CreateContainer within sandbox \"7e59d2fb0176d4533eb6acff8324334fde4637df533d00661c21cd3585a15840\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fff1a30d6efffdc90473e783fa9d78b04d1d300701c8096270569279456e7d90\""
Jul 6 23:51:47.761570 containerd[1913]: time="2025-07-06T23:51:47.761253130Z" level=info msg="StartContainer for \"fff1a30d6efffdc90473e783fa9d78b04d1d300701c8096270569279456e7d90\""
Jul 6 23:51:47.764505 containerd[1913]: time="2025-07-06T23:51:47.764419055Z" level=info msg="connecting to shim fff1a30d6efffdc90473e783fa9d78b04d1d300701c8096270569279456e7d90" address="unix:///run/containerd/s/3a1699c646e55cb8aa03fdbb66fd92172cbc4d72f2348ea82754296fcdee2a26" protocol=ttrpc version=3
Jul 6 23:51:47.782348 systemd[1]: Started cri-containerd-fff1a30d6efffdc90473e783fa9d78b04d1d300701c8096270569279456e7d90.scope - libcontainer container fff1a30d6efffdc90473e783fa9d78b04d1d300701c8096270569279456e7d90.
Jul 6 23:51:47.809226 systemd[1]: cri-containerd-fff1a30d6efffdc90473e783fa9d78b04d1d300701c8096270569279456e7d90.scope: Deactivated successfully.
Jul 6 23:51:47.812610 containerd[1913]: time="2025-07-06T23:51:47.812516865Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fff1a30d6efffdc90473e783fa9d78b04d1d300701c8096270569279456e7d90\" id:\"fff1a30d6efffdc90473e783fa9d78b04d1d300701c8096270569279456e7d90\" pid:5283 exited_at:{seconds:1751845907 nanos:811600531}"
Jul 6 23:51:47.812721 containerd[1913]: time="2025-07-06T23:51:47.812694748Z" level=info msg="received exit event container_id:\"fff1a30d6efffdc90473e783fa9d78b04d1d300701c8096270569279456e7d90\" id:\"fff1a30d6efffdc90473e783fa9d78b04d1d300701c8096270569279456e7d90\" pid:5283 exited_at:{seconds:1751845907 nanos:811600531}"
Jul 6 23:51:47.818952 containerd[1913]: time="2025-07-06T23:51:47.818916166Z" level=info msg="StartContainer for \"fff1a30d6efffdc90473e783fa9d78b04d1d300701c8096270569279456e7d90\" returns successfully"
Jul 6 23:51:47.830075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fff1a30d6efffdc90473e783fa9d78b04d1d300701c8096270569279456e7d90-rootfs.mount: Deactivated successfully.
Jul 6 23:51:48.711219 containerd[1913]: time="2025-07-06T23:51:48.711078383Z" level=info msg="CreateContainer within sandbox \"7e59d2fb0176d4533eb6acff8324334fde4637df533d00661c21cd3585a15840\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 6 23:51:48.736276 containerd[1913]: time="2025-07-06T23:51:48.736235148Z" level=info msg="Container bc5b3f3b69c440ef190350c9fbc4f45a1d7368bf57c4c27fdc65f0f9e530f180: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:51:48.739058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1489925631.mount: Deactivated successfully.
Jul 6 23:51:48.755209 containerd[1913]: time="2025-07-06T23:51:48.755104316Z" level=info msg="CreateContainer within sandbox \"7e59d2fb0176d4533eb6acff8324334fde4637df533d00661c21cd3585a15840\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bc5b3f3b69c440ef190350c9fbc4f45a1d7368bf57c4c27fdc65f0f9e530f180\""
Jul 6 23:51:48.755927 containerd[1913]: time="2025-07-06T23:51:48.755862070Z" level=info msg="StartContainer for \"bc5b3f3b69c440ef190350c9fbc4f45a1d7368bf57c4c27fdc65f0f9e530f180\""
Jul 6 23:51:48.756574 containerd[1913]: time="2025-07-06T23:51:48.756550029Z" level=info msg="connecting to shim bc5b3f3b69c440ef190350c9fbc4f45a1d7368bf57c4c27fdc65f0f9e530f180" address="unix:///run/containerd/s/3a1699c646e55cb8aa03fdbb66fd92172cbc4d72f2348ea82754296fcdee2a26" protocol=ttrpc version=3
Jul 6 23:51:48.777349 systemd[1]: Started cri-containerd-bc5b3f3b69c440ef190350c9fbc4f45a1d7368bf57c4c27fdc65f0f9e530f180.scope - libcontainer container bc5b3f3b69c440ef190350c9fbc4f45a1d7368bf57c4c27fdc65f0f9e530f180.
Jul 6 23:51:48.796396 systemd[1]: cri-containerd-bc5b3f3b69c440ef190350c9fbc4f45a1d7368bf57c4c27fdc65f0f9e530f180.scope: Deactivated successfully.
Jul 6 23:51:48.797839 containerd[1913]: time="2025-07-06T23:51:48.797802625Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc5b3f3b69c440ef190350c9fbc4f45a1d7368bf57c4c27fdc65f0f9e530f180\" id:\"bc5b3f3b69c440ef190350c9fbc4f45a1d7368bf57c4c27fdc65f0f9e530f180\" pid:5323 exited_at:{seconds:1751845908 nanos:797568722}"
Jul 6 23:51:48.802579 containerd[1913]: time="2025-07-06T23:51:48.801797219Z" level=info msg="received exit event container_id:\"bc5b3f3b69c440ef190350c9fbc4f45a1d7368bf57c4c27fdc65f0f9e530f180\" id:\"bc5b3f3b69c440ef190350c9fbc4f45a1d7368bf57c4c27fdc65f0f9e530f180\" pid:5323 exited_at:{seconds:1751845908 nanos:797568722}"
Jul 6 23:51:48.803488 containerd[1913]: time="2025-07-06T23:51:48.803468579Z" level=info msg="StartContainer for \"bc5b3f3b69c440ef190350c9fbc4f45a1d7368bf57c4c27fdc65f0f9e530f180\" returns successfully"
Jul 6 23:51:48.819521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc5b3f3b69c440ef190350c9fbc4f45a1d7368bf57c4c27fdc65f0f9e530f180-rootfs.mount: Deactivated successfully.
Jul 6 23:51:49.714986 containerd[1913]: time="2025-07-06T23:51:49.714884562Z" level=info msg="CreateContainer within sandbox \"7e59d2fb0176d4533eb6acff8324334fde4637df533d00661c21cd3585a15840\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 6 23:51:49.745269 containerd[1913]: time="2025-07-06T23:51:49.744297286Z" level=info msg="Container 114ec245d2a2613ed831d6c4e622d0fc96ea57ac73c462dfea6ea31234656759: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:51:49.760988 containerd[1913]: time="2025-07-06T23:51:49.760938005Z" level=info msg="CreateContainer within sandbox \"7e59d2fb0176d4533eb6acff8324334fde4637df533d00661c21cd3585a15840\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"114ec245d2a2613ed831d6c4e622d0fc96ea57ac73c462dfea6ea31234656759\""
Jul 6 23:51:49.761606 containerd[1913]: time="2025-07-06T23:51:49.761579439Z" level=info msg="StartContainer for \"114ec245d2a2613ed831d6c4e622d0fc96ea57ac73c462dfea6ea31234656759\""
Jul 6 23:51:49.763345 containerd[1913]: time="2025-07-06T23:51:49.763318739Z" level=info msg="connecting to shim 114ec245d2a2613ed831d6c4e622d0fc96ea57ac73c462dfea6ea31234656759" address="unix:///run/containerd/s/3a1699c646e55cb8aa03fdbb66fd92172cbc4d72f2348ea82754296fcdee2a26" protocol=ttrpc version=3
Jul 6 23:51:49.782316 systemd[1]: Started cri-containerd-114ec245d2a2613ed831d6c4e622d0fc96ea57ac73c462dfea6ea31234656759.scope - libcontainer container 114ec245d2a2613ed831d6c4e622d0fc96ea57ac73c462dfea6ea31234656759.
Jul 6 23:51:49.814649 containerd[1913]: time="2025-07-06T23:51:49.814271271Z" level=info msg="StartContainer for \"114ec245d2a2613ed831d6c4e622d0fc96ea57ac73c462dfea6ea31234656759\" returns successfully"
Jul 6 23:51:49.863728 containerd[1913]: time="2025-07-06T23:51:49.863689354Z" level=info msg="TaskExit event in podsandbox handler container_id:\"114ec245d2a2613ed831d6c4e622d0fc96ea57ac73c462dfea6ea31234656759\" id:\"492214cfa9a374ad7e819c5b29d7d06fcb1cd51425d661eb78c4bf8b6399c0ae\" pid:5390 exited_at:{seconds:1751845909 nanos:863084373}"
Jul 6 23:51:50.225205 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 6 23:51:50.732745 kubelet[3406]: I0706 23:51:50.732593 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gm5vw" podStartSLOduration=6.732578133 podStartE2EDuration="6.732578133s" podCreationTimestamp="2025-07-06 23:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:51:50.731743834 +0000 UTC m=+173.530405985" watchObservedRunningTime="2025-07-06 23:51:50.732578133 +0000 UTC m=+173.531240284"
Jul 6 23:51:51.239436 kubelet[3406]: I0706 23:51:51.239394 3406 setters.go:600] "Node became not ready" node="ci-4344.1.1-a-aa3e6ac533" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-06T23:51:51Z","lastTransitionTime":"2025-07-06T23:51:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 6 23:51:52.082014 containerd[1913]: time="2025-07-06T23:51:52.081972041Z" level=info msg="TaskExit event in podsandbox handler container_id:\"114ec245d2a2613ed831d6c4e622d0fc96ea57ac73c462dfea6ea31234656759\" id:\"e76baaac170cd6fd3624dcc037d6899dfb3126af7199735eaae102c4e27b999e\" pid:5638 exit_status:1 exited_at:{seconds:1751845912 nanos:81639736}"
Jul 6 23:51:52.333555 kubelet[3406]: E0706 23:51:52.333308 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-27x5t" podUID="9288fcb6-30a7-457b-b84b-385e5bbc654c"
Jul 6 23:51:52.716168 systemd-networkd[1657]: lxc_health: Link UP
Jul 6 23:51:52.724293 systemd-networkd[1657]: lxc_health: Gained carrier
Jul 6 23:51:54.171707 containerd[1913]: time="2025-07-06T23:51:54.171669137Z" level=info msg="TaskExit event in podsandbox handler container_id:\"114ec245d2a2613ed831d6c4e622d0fc96ea57ac73c462dfea6ea31234656759\" id:\"af9cfc17aed80479c87e863c62f354680ba8c6adebb15fe51b581e53945d5cb2\" pid:5926 exited_at:{seconds:1751845914 nanos:171080754}"
Jul 6 23:51:54.410327 systemd-networkd[1657]: lxc_health: Gained IPv6LL
Jul 6 23:51:56.254925 containerd[1913]: time="2025-07-06T23:51:56.254877059Z" level=info msg="TaskExit event in podsandbox handler container_id:\"114ec245d2a2613ed831d6c4e622d0fc96ea57ac73c462dfea6ea31234656759\" id:\"7438ff145bdb7d99a64d41e71f661cee4bab802c48c1e53ae04fcdbd42c20290\" pid:5959 exited_at:{seconds:1751845916 nanos:253718613}"
Jul 6 23:51:57.334198 containerd[1913]: time="2025-07-06T23:51:57.334144696Z" level=info msg="StopPodSandbox for \"245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5\""
Jul 6 23:51:57.334198 containerd[1913]: time="2025-07-06T23:51:57.334336682Z" level=info msg="TearDown network for sandbox \"245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5\" successfully"
Jul 6 23:51:57.334198 containerd[1913]: time="2025-07-06T23:51:57.334350009Z" level=info msg="StopPodSandbox for \"245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5\" returns successfully"
Jul 6 23:51:57.336111 containerd[1913]: time="2025-07-06T23:51:57.335159024Z" level=info msg="RemovePodSandbox for \"245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5\""
Jul 6 23:51:57.336111 containerd[1913]: time="2025-07-06T23:51:57.335201125Z" level=info msg="Forcibly stopping sandbox \"245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5\""
Jul 6 23:51:57.336111 containerd[1913]: time="2025-07-06T23:51:57.335270552Z" level=info msg="TearDown network for sandbox \"245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5\" successfully"
Jul 6 23:51:57.336111 containerd[1913]: time="2025-07-06T23:51:57.336058129Z" level=info msg="Ensure that sandbox 245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5 in task-service has been cleanup successfully"
Jul 6 23:51:57.349946 containerd[1913]: time="2025-07-06T23:51:57.349881867Z" level=info msg="RemovePodSandbox \"245681a02132f2229ec559f0c39a26f2e20bb9fbc61639de286e40db7f2e5fe5\" returns successfully"
Jul 6 23:51:57.350441 containerd[1913]: time="2025-07-06T23:51:57.350416350Z" level=info msg="StopPodSandbox for \"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\""
Jul 6 23:51:57.350532 containerd[1913]: time="2025-07-06T23:51:57.350514415Z" level=info msg="TearDown network for sandbox \"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\" successfully"
Jul 6 23:51:57.350532 containerd[1913]: time="2025-07-06T23:51:57.350527846Z" level=info msg="StopPodSandbox for \"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\" returns successfully"
Jul 6 23:51:57.351404 containerd[1913]: time="2025-07-06T23:51:57.350755846Z" level=info msg="RemovePodSandbox for \"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\""
Jul 6 23:51:57.351404 containerd[1913]: time="2025-07-06T23:51:57.350779436Z" level=info msg="Forcibly stopping sandbox \"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\""
Jul 6 23:51:57.351404 containerd[1913]: time="2025-07-06T23:51:57.350837112Z" level=info msg="TearDown network for sandbox \"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\" successfully"
Jul 6 23:51:57.351724 containerd[1913]: time="2025-07-06T23:51:57.351696692Z" level=info msg="Ensure that sandbox 0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c in task-service has been cleanup successfully"
Jul 6 23:51:57.363622 containerd[1913]: time="2025-07-06T23:51:57.363576399Z" level=info msg="RemovePodSandbox \"0492665cf4d00e13a3c012b58aaff47f47530dde984beca1b125f2c49f66ef6c\" returns successfully"
Jul 6 23:51:58.335389 containerd[1913]: time="2025-07-06T23:51:58.335346475Z" level=info msg="TaskExit event in podsandbox handler container_id:\"114ec245d2a2613ed831d6c4e622d0fc96ea57ac73c462dfea6ea31234656759\" id:\"3ee011752d1bb6bf092805fc9c8b5daac2cbeb1ea7cb90c8491da1f41dad5793\" pid:5983 exited_at:{seconds:1751845918 nanos:334636667}"
Jul 6 23:51:58.423111 sshd[5127]: Connection closed by 10.200.16.10 port 36180
Jul 6 23:51:58.423987 sshd-session[5125]: pam_unix(sshd:session): session closed for user core
Jul 6 23:51:58.427670 systemd[1]: sshd@25-10.200.20.10:22-10.200.16.10:36180.service: Deactivated successfully.
Jul 6 23:51:58.429614 systemd[1]: session-28.scope: Deactivated successfully.
Jul 6 23:51:58.431460 systemd-logind[1894]: Session 28 logged out. Waiting for processes to exit.
Jul 6 23:51:58.432939 systemd-logind[1894]: Removed session 28.